File gerrit.config
The audit configuration can be defined in the main gerrit.config file,
in a specific section dedicated to the audit-sl4j plugin.
gerrit.audit-sl4j.format
: Output format of the audit record. Can be set to either JSON
or CSV. By default, CSV.
gerrit.audit-sl4j.logName
: Write audit records to a separate log file under the Gerrit logs directory.
By default, audit records are written to the error_log.
How can I write the gerrit.audit-sl4j.logName section?
I have tried this:
But it doesn't work.
You forgot to paste the example that doesn't work for you. While you update it, I can share a working example in case it is of any help.
This is the audit-sl4j configuration part of a working gerrit.config:
[plugin "audit-sl4j"]
format = JSON
logName = audit_log
In this example, we are writing the audit logs to a file called audit_log in JSON format.
I hope this helps.
Related
This is the first time I am working with XML, so maybe it is a very easy problem, but I would like to ask: what is the best way to create XML filled with data from a DB when I know the schema?
Of course there is the possibility to do it manually, but I would like to do something like this:
create a configuration file which specifies the column name, XPath, and default value (if the DB column is not populated), and based on this configuration file create the XML according to the known schema.
Is there some tool in Java which would allow something like this?
MagicXMLTool tool = new MagicXMLTool("mySchema.xsd");
tool.set("some/xpath", value);
tool.set("another/xpath", anotherValue);
String xml = tool.generateXML();
Thanks a lot!
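As far as I know there is no such MagicXMLTool in the JDK, but a rough sketch of the idea is possible with the built-in DOM API. Everything below is hypothetical: the class name is invented, it only handles simple slash-separated element paths (not full XPath), and validating the result against mySchema.xsd would be a separate step with javax.xml.validation.

import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class SimpleXmlBuilder {
    private final Document doc;

    public SimpleXmlBuilder(String rootName) throws Exception {
        doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        doc.appendChild(doc.createElement(rootName));
    }

    // Set a text value at a simple slash-separated element path,
    // creating intermediate elements as needed.
    public void set(String path, String value) {
        Element current = doc.getDocumentElement();
        for (String step : path.split("/")) {
            Element child = findDirectChild(current, step);
            if (child == null) {
                child = doc.createElement(step);
                current.appendChild(child);
            }
            current = child;
        }
        current.setTextContent(value);
    }

    private static Element findDirectChild(Element parent, String name) {
        for (Node n = parent.getFirstChild(); n != null; n = n.getNextSibling()) {
            if (n instanceof Element && n.getNodeName().equals(name)) {
                return (Element) n;
            }
        }
        return null;
    }

    public String generateXML() throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        StringWriter out = new StringWriter();
        t.transform(new DOMSource(doc), new StreamResult(out));
        return out.toString();
    }
}

Usage would then mirror the desired API: new SimpleXmlBuilder("root"), a few set("some/path", "value") calls, and finally generateXML().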
I am trying this MarkLogic Spark connector tutorial:
https://developer.marklogic.com/blog/marklogic-spark-example
I was able to execute it. What I found is that it picks the Documents database by default.
My question is about the given code, which looks like this:
JavaPairRDD<DocumentURI, MarkLogicNode> mlRDD = context.newAPIHadoopRDD(
        hdConf,                     // Configuration
        DocumentInputFormat.class,  // InputFormat
        DocumentURI.class,          // Key class
        MarkLogicNode.class         // Value class
);
I was wondering how I can pass a specific document URI and database in order to get just a specific document from a database.
For example:
The Documents database contains XML files created by importing a CSV file, as described in this related question: MarkLogic: Multiple XML files created on importing a CSV. How to get the root Document URI path?
Can someone share sample code on how to pass the document URI and database name as parameters?
If you refer to the documentation for the MarkLogic Connector for Hadoop, specifically
Input Configuration Properties, you will find the property mapreduce.marklogic.input.documentselector, which takes an XQuery path expression that allows you to select specific documents from the database.
The sample uses the Hadoop Connector.
Using MarkLogic 8, I believe you can set the database by setting com.marklogic.output.databasename in the job configuration.
http://docs.marklogic.com/guide/mapreduce/quickstart#id_38329
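As a rough sketch of how those two properties might be wired into the job configuration used by the sample (the selector expression and database name are placeholders; the property names are the ones cited above):

import org.apache.hadoop.conf.Configuration;

Configuration hdConf = new Configuration();
// Select specific documents with an XQuery path expression (placeholder URI):
hdConf.set("mapreduce.marklogic.input.documentselector",
        "fn:doc(\"/example/my-document.xml\")");
// Target database, per the MarkLogic 8 property mentioned above (placeholder name):
hdConf.set("com.marklogic.output.databasename", "my-database");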
I'm new to Google Dataflow and can't get this thing to work with JSON. I've been reading through the documentation, but can't solve my problem.
So, following the WordCount example, I figured out how data is loaded from a .csv file with the following line:
PCollection<String> input = p.apply(TextIO.Read.from(options.getInputFile()));
where inputFile is a .csv file from my gcloud bucket. I can transform the lines read from the .csv with:
PCollection<TableRow> table = input.apply(ParDo.of(new ExtractParametersFn()));
(ExtractParametersFn is defined by me.) So far so good!
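(For reference, a DoFn of this shape in the Dataflow 1.x SDK looks roughly like the following; this sketch is hypothetical and its column names are invented, the real ExtractParametersFn differs:)

// Hypothetical sketch only; the actual ExtractParametersFn differs.
static class ExtractParametersFn extends DoFn<String, TableRow> {
    @Override
    public void processElement(ProcessContext c) {
        String[] fields = c.element().split(",");
        // Column names here are invented for illustration.
        c.output(new TableRow()
                .set("param", fields[0])
                .set("value", fields[1]));
    }
}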
But then I realized my .csv file is too big, so I had to convert it to JSON (https://cloud.google.com/bigquery/preparing-data-for-bigquery).
Since BigQueryIO is supposedly better for reading JSON, I tried with the following code:
PCollection<TableRow> table = p.apply(BigQueryIO.Read.from(options.getInputFile()));
(inputFile is then a JSON file, and the output when reading with BigQueryIO is a PCollection of TableRows.) I tried with TextIO too (which returns a PCollection of Strings), and neither of the two IO options works.
What am I missing? The documentation is really not detailed enough to find an answer there, but perhaps some of you have already dealt with this problem before?
Any suggestions would be much appreciated. :)
I believe there are two options to consider:
Use TextIO with TableRowJsonCoder to ingest the JSON files (e.g., like it is done in the TopWikipediaSessions example);
Import the JSON files into a bigquery table (https://cloud.google.com/bigquery/loading-data-into-bigquery), and then use BigQueryIO.Read to read from the table.
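For illustration, minimal sketches of both options in the Dataflow 1.x SDK style used above (the file and table names are placeholders):

// Option 1: read newline-delimited JSON as TableRows via TextIO + TableRowJsonCoder.
PCollection<TableRow> fromFile = p.apply(
        TextIO.Read.from(options.getInputFile())
                .withCoder(TableRowJsonCoder.of()));

// Option 2: load the JSON into a BigQuery table first, then read that table.
PCollection<TableRow> fromTable = p.apply(
        BigQueryIO.Read.from("my-project:my_dataset.my_table"));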
I'm trying to read an N-Quads file with Jena, but all I get is an empty model. The file I'm trying to read is taken from the example in the N-Quads documentation:
<http://example.org/#spiderman> <http://www.perceive.net/schemas/relationship/enemyOf> <http://example.org/#green-goblin> <http://example.org/graphs/spiderman> .
(I saved it as a file named file.nq).
The way I'm loading the model is with RDFDataMgr, but it didn't work with Model.read either:
RDFDataMgr.loadModel("file.nq", Lang.NQUADS)
yields an empty model.
What am I missing? Doesn't Jena support N-Quads out-of-the-box?
Yes, Jena supports N-Quads. Try loadDataset.
N-Quads is for multiple graphs and you have read it into one graph. What you get is just the default graph triples, in this case, none.
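A minimal sketch of the loadDataset approach (Jena 3 package names assumed; the graph URI comes from the example data above):

import org.apache.jena.query.Dataset;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

Dataset ds = RDFDataMgr.loadDataset("file.nq");
// The quad in file.nq lands in this named graph, not in the default graph:
Model spiderman = ds.getNamedModel("http://example.org/graphs/spiderman");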
There is a warning emitted:
WARN riot :: Only triples or default graph data expected : named graph data ignored
If you didn't get that warning, then (1) you are running an old copy, (2) you have turned logging off, or (3) the file is empty.
I am integrating data between two systems using Apache Camel. I want the resulting XML to be written to an XML file, and I want to base the name of that file on some data which is unknown when the integration chain starts.
After the first enrich step, the necessary data is in the Exchange object.
So the question is: how can I get data from the exchange.getIn().getBody() method outside of the process chain, in order to generate a desirable filename for my output file and, as a final step, write the XML to this file? Or is there some other way to accomplish this?
Here is my current process chain from the RouteBuilder's configure() method:
from("test_main", "jetty:server")
.process(new PiProgramCommonProcessor())
.enrich("piProgrammeEnricher", new PiProgrammeEnricher())
// after this step I have the data available in exchange.in.body
.to(freeMarkerXMLGenerator)
.to(xmlFileDestination)
.end();
best regards
RythmiC
The file component takes the file name from a header (if present). So you can just add a header to your message with the desired file name.
The header should use the key "CamelFileName", which is also available as the constant Exchange.FILE_NAME.
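For example, a minimal sketch in the Java DSL (the endpoints and the header carrying the name are placeholders; only Exchange.FILE_NAME is the Camel-defined key):

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class FileNameRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:start")  // placeholder input endpoint
            // Derive the file name from data on the exchange; "programmeId"
            // is a hypothetical header set by an earlier processing step.
            .setHeader(Exchange.FILE_NAME, simple("${header.programmeId}.xml"))
            .to("file:/data/output");  // placeholder output directory
    }
}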
See more details at: http://camel.apache.org/file2