log4j 1.2 - central appender on top of several appenders - java

I'm quite new to Java and log4j, and I have been asked to create a new log on top of the existing ones.
The situation is: batchLog takes care of BATCH, wesLog takes care of WES, both at INFO level. That's easy. My task is to create an errorLog that will gather ERROR output from BOTH logs. And this is where it gets weird. Here are the main lines of my code:
log4j.rootLogger=INFO, stdout, errorLog
log4j.logger.batchLog=INFO, batchLog
log4j.logger.wesLog=INFO, wesLog
log4j.appender.stdout.Threshold=INFO
log4j.appender.wesLog.File=/opt/apache-tomcat-8.0.18/logs/ECL_WES.log
log4j.additivity.wesLog=false
log4j.appender.errorLog.File=/opt/apache-tomcat-8.0.18/logs/ECL_ERROR.log
log4j.additivity.errorLog=false
log4j.appender.batchLog.File=/opt/apache-tomcat-8.0.18/logs/ECL_BATCH.log
log4j.additivity.batchLog=false
And I got some issues: WES writes to wesLog (as intended), but BATCH writes to both batchLog AND wesLog (errorLog works fine).
I was also tempted to give each of the first two logs its own ERROR appender, with both writing to the same file, but I have heard that works badly; a single shared error appender attached to both loggers, sketched below, is what I had in mind instead.
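Something like this, on top of the config above (a sketch, not tested; the key line is the ERROR threshold on the shared appender):
log4j.logger.batchLog=INFO, batchLog, errorLog
log4j.logger.wesLog=INFO, wesLog, errorLog
# only events at ERROR level and above pass this appender's threshold
log4j.appender.errorLog.Threshold=ERROR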
Help will be greatly appreciated!
Alex
PS: in the main program, they keep referring to batchLog and wesLog as batchLogger and wesLogger (note the extra "ger"), and it seems to work fine. I don't get why, since to me that refers to another, non-existent, undeclared object. Any idea?

Related

Why is getConfiguration occasionally throwing File cannot be null?

This might seem to be a simple question (if that's the case, please excuse me), but after 20 minutes of online searching I did not find any sensible answer.
I have several cron jobs to be executed via QuartzRunner, let's call the first FooBean and the second BarBean for now. FooBean is running daily at 00:00 for 6 (!) hours and sometimes it is not executed properly. After carefully studying the logs I found out that FooBean fails to execute when BarBean fails to execute. BarBean is executed daily at 03:00 and sometimes it throws:
java.lang.NullPointerException: File cannot be <null>
at org.jconfig.FileWatcher.<init>(FileWatcher.java:54)
at org.jconfig.handler.AbstractHandler.addFileListener(AbstractHandler.java:39)
at org.jconfig.ConfigurationManager.addFileListener(ConfigurationManager.java:180)
at org.jconfig.ConfigurationManager.getConfiguration(ConfigurationManager.java:122)
Sometimes it does not throw it, and then FooBean is executed properly. If BarBean fails, the log shows a transaction deadlock issue repeatedly for ten minutes, and then JDBC connection failures repeated again and again for almost three hours. I do not understand what file is involved. The line throwing the error looks like:
Configuration config = ConfigurationManager.getConfiguration("inventory");
The org.jconfig namespaces are involved here. Intuitively this seems to be a misconfiguration, but I did not find any sources explaining the issue.
The ConfigurationManager's getConfiguration method tries to load a config file from your classpath. It concatenates the given name with '_config.xml'.
In your case this would be 'inventory_config.xml'. This file should be available on your classpath (e.g. src/main/resources), because ConfigurationManager tries to load it from there.
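A quick way to verify that (a sketch; it simply asks the classloader for the same resource jconfig will look for):
import java.net.URL;

// returns null when inventory_config.xml is not on the classpath
URL cfg = Thread.currentThread().getContextClassLoader()
        .getResource("inventory_config.xml");
System.out.println("inventory_config.xml on classpath: " + (cfg != null));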

Deeplearning4j Disable Logging

I have a deeplearning4j project which is producing huge amounts of logger output on stdout. I want to disable that, but I can't seem to figure out how to do it.
I have a log4j.properties file in my src/main/resources folder which looks like this:
log4j.rootLogger=ERROR, Console
log4j.logger.play=WARN
log4j.appender.Console=org.apache.log4j.ConsoleAppender
log4j.appender.Console.layout=org.apache.log4j.PatternLayout
log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} %-5p ~ %m%n
log4j.appender.org.springframework=WARN
log4j.appender.org.nd4j=WARN
log4j.appender.org.canova=WARN
log4j.appender.org.datavec=WARN
log4j.appender.org.deeplearning4j=WARN
log4j.appender.opennlp.uima=OFF
log4j.appender.org.apache.uima=OFF
log4j.appender.org.cleartk=OFF
log4j.logger.org.springframework=WARN
log4j.logger.org.nd4j=WARN
log4j.logger.org.canova=WARN
log4j.logger.org.datavec=WARN
log4j.logger.org.deeplearning4j=WARN
log4j.logger.opennlp.uima.util=OFF
log4j.logger.org.apache.uima=OFF
log4j.logger.org.cleartk=OFF
log4j.logger.org.deeplearning4j.optimize.solvers.BaseOptimizer=OFF
slf4j.logger.org.deeplearning4j.optimize.solvers.BaseOptimizer=OFF
The specific output that there is far too much of is:
21:26:34.860 [main] DEBUG o.d.optimize.solvers.BaseOptimizer - Hit termination condition on iteration 0: score=1.2894165074915344E19, oldScore=1.2894191699433697E19, condition=org.deeplearning4j.optimize.terminations.EpsTermination#55f111f3
which happens multiple times a second while training.
The log entry you have provided looks very much like SLF4J output in Logback format (not log4j output).
The dependencies of deeplearning4j-core also suggest that SLF4J is used for logging.
Hence your log4j.properties has no effect on deeplearning4j logging. Try adding a logback.xml configuration to the resources as well, and switch the root logger to WARN or ERROR level; see https://logback.qos.ch/manual/configuration.html
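A minimal logback.xml along those lines might look like this (a sketch; it assumes Logback is the SLF4J binding pulled in by DL4J, and it additionally silences the chatty BaseOptimizer logger from the output above):
<configuration>
  <appender name="Console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- silence the optimizer logger seen in the question -->
  <logger name="org.deeplearning4j.optimize.solvers.BaseOptimizer" level="OFF"/>
  <root level="ERROR">
    <appender-ref ref="Console"/>
  </root>
</configuration>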
There are some properties in the DL4J framework (1.0.0-beta7) that activate/deactivate the logs. I found some of them:
import org.nd4j.common.config.ND4JSystemProperties;
System.setProperty(ND4JSystemProperties.LOG_INITIALIZATION, "false");
System.setProperty(ND4JSystemProperties.ND4J_IGNORE_AVX, "true");
System.setProperty(ND4JSystemProperties.VERSION_CHECK_PROPERTY, "false");
Notice that this is an unconventional solution. On the other hand, some messages are impossible to avoid, for example in:
MultiLayerNetwork.init()
In this method you can find a OneTimeLogger call without any guard:
OneTimeLogger.info(log, "Starting MultiLayerNetwork with WorkspaceModes set to [training: {}; inference: {}], cacheMode set to [{}]",
        layerWiseConfigurations.getTrainingWorkspaceMode(),
        layerWiseConfigurations.getInferenceWorkspaceMode(),
        layerWiseConfigurations.getCacheMode());
If you find a better way to disable log messages inside DL4J, please share it. There are also some other ways outside the DL4J library.

apache PIG with datafu: Cannot resolve UDFs

I'm trying the quickstart from here: http://datafu.incubator.apache.org/docs/datafu/getting-started.html
I tried nearly everything, but I'm sure it must be my fault somewhere. I have already tried:
exporting PIG_HOME, CLASSPATH, PIG_CLASSPATH
starting pig with -cp datafu-pig-incubating-1.3.0.jar
registering datafu-pig-incubating-1.3.0.jar locally and in HDFS => both successful (at least no error shown)
Nothing helped.
Trying this in Pig:
register datafu-pig-incubating-1.3.0.jar
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
or directly
data2 = FOREACH (GROUP data ALL) GENERATE datafu.pig.stats.StreamingMedian(data);
I get this name-resolution error:
2016-06-04 17:22:22,734 [main] ERROR org.apache.pig.tools.grunt.Grunt
- ERROR 1070: Could not resolve datafu.pig.stats.StreamingMedian using imports: [, java.lang., org.apache.pig.builtin.,
org.apache.pig.impl.builtin.] Details at logfile:
/home/hadoop/pig_1465053680252.log
When I look into datafu-pig-incubating-1.3.0.jar it looks OK, everything is in place. I also tried some Bag functions, with the same error.
I think it's the kind of newbie error which I just don't see (as I did not find specific answers for datafu on SO or Google), so thanks in advance for shedding some light on this.
The Pig script is correct; the only thing that could break is that, while registering datafu, some class dependencies could not be met.
Try to run it locally (pig -x local) and look at the detailed log.
Also check the version of Pig - it should be newer than 0.14.0.
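For example (a sketch; the jar location and script name are placeholders), registering the jar by an absolute path inside the script removes any dependence on CLASSPATH/PIG_CLASSPATH:
-- register by explicit path instead of relying on the shell environment
register /home/hadoop/datafu-pig-incubating-1.3.0.jar;
DEFINE Median datafu.pig.stats.StreamingMedian();
data = load '/user/hduser/numbers.txt' using PigStorage() as (val:int);
data2 = FOREACH (GROUP data ALL) GENERATE Median(data);
DUMP data2;
Then run it with pig -x local median.pig and inspect the log it points you to.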

Compress map output result exception in Hadoop program

In a Hadoop program, I tried to compress the map output, so I wrote the following code:
conf.setBoolean("mapred.compress.map.output",true);
conf.setClass("mapred.map.output.compression.codec",GzipCodec.class,CompressionCodec.class);
When I ran it, I got the exception below. Does anybody know the reason?
WARN mapred.LocalJobRunner: job_local1149103367_0001
java.io.IOException: not a gzip file
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.processBasicHeader(BuiltInGzipDecompressor.java:495)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.executeHeaderState(BuiltInGzipDecompressor.java:256)
at org.apache.hadoop.io.compress.zlib.BuiltInGzipDecompressor.decompress(BuiltInGzipDecompressor.java:185)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:91)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:85)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:72)
at java.io.DataInputStream.readByte(DataInputStream.java:265)
at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
at org.apache.hadoop.io.WritableUtils.readVInt(WritableUtils.java:329)
at org.apache.hadoop.mapred.IFile$Reader.positionToNextRecord(IFile.java:400)
at org.apache.hadoop.mapred.IFile$Reader.nextRawKey(IFile.java:425)
at org.apache.hadoop.mapred.Merger$Segment.nextRawKey(Merger.java:323)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:613)
at org.apache.hadoop.mapred.Merger$MergeQueue.merge(Merger.java:558)
at org.apache.hadoop.mapred.Merger.merge(Merger.java:70)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:385)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:445)
Today I tested it again, and I found that if I put the two lines before the Job object is created,
Job job = new Job(conf, "MyCounter");
the error happens; if I put them after it, no error occurs. Why does this happen?
Are you using MRv1 or MRv2? If you are using MRv2, then use the following job config.
config.setBoolean("mapreduce.output.fileoutputformat.compress", true);
config.setClass("mapreduce.output.fileoutputformat.compress.codec",GzipCodec.class,CompressionCodec.class);
Additionally, you can set
config.set("mapreduce.output.fileoutputformat.compress.type",CompressionType.NONE.toString());
BLOCK, NONE, and RECORD are the three types of compression.
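Note that the mapreduce.output.fileoutputformat.* keys control the final job output; for the intermediate map output specifically, the MRv2 keys are mapreduce.map.output.compress and mapreduce.map.output.compress.codec. As for the ordering question: the Job constructor copies the Configuration, so properties set after the Job is created are simply not seen by the job, which is likely why the error disappears in that case (the compression settings stop taking effect). A sketch ("MyCounter" taken from the question):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = new Configuration();
// MRv2 names for compressing the intermediate map output
conf.setBoolean("mapreduce.map.output.compress", true);
conf.setClass("mapreduce.map.output.compress.codec", GzipCodec.class, CompressionCodec.class);
// create the Job only after conf is fully populated - Job copies conf
Job job = Job.getInstance(conf, "MyCounter");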

Run ABCL code that uses cl-ppcre

With reference to my previous question,
Executing a lisp function from Java
I was able to call Lisp code from Java using ABCL.
But the problem is, the already existing Lisp code uses the CL-PPCRE package.
I cannot compile the code, as it says 'CL-PPCRE not found'.
I have tried different approaches to add that package,
including:
1) how does one compile a clisp program which uses cl-ppcre?
2) https://groups.google.com/forum/#!topic/cl-ppcre/juSfOhEDa1k
It does not work!
The other thing is that executing (compile-file aima.asd) works perfectly fine, although it also requires cl-ppcre:
(defpackage #:aima-asd
  (:use :cl :asdf))
(in-package :aima-asd)
(defsystem aima
  :name "aima"
  :version "0.1"
  :components ((:file "defpackage")
               (:file "main" :depends-on ("defpackage")))
  :depends-on (:cl-ppcre))
The final Java code is:
interpreter.eval("(load \"aima/asdf.lisp\")");
interpreter.eval("(compile-file \"aima/aima.asd\")");
interpreter.eval("(compile-file \"aima/defpackage.lisp\")");
interpreter.eval("(in-package :aima)");
interpreter.eval("(load \"aima/aima.lisp\")");
interpreter.eval("(aima-load 'all)");
The error message is
Error loading C:/Users/Administrator.NUIG-1Z7HN12/workspace/aima/probability/domains/edit-nets.lisp at line 376 (offset 16389)
#<THREAD "main" {3A188AF2}>: Debugger invoked on condition of type READER-ERROR
The package "CL-PPCRE" can't be found.
[1] AIMA(1):
Can anyone help me?
You need to load cl-ppcre before you can use it. You can do that by using (asdf:load-system :aima), provided that you put both aima and cl-ppcre into locations that your ASDF searches.
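For instance (a sketch; the two directory paths are placeholders for wherever the aima and cl-ppcre sources actually live):
(require :asdf)
;; tell ASDF where to find the systems before loading them
(push #p"/path/to/aima/" asdf:*central-registry*)
(push #p"/path/to/cl-ppcre/" asdf:*central-registry*)
(asdf:load-system :aima)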
I used Quicklisp to add cl-ppcre (because nothing else worked for me).
Here is what I did:
(load "~/QuickLisp.lisp")
(quicklisp-quickstart:install)
(load "~/quicklisp/setup.lisp")
(ql:quickload :cl-ppcre)
The first two lines are a one-time thing. Once Quicklisp is installed, you can start from line 3.
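Driven from Java through ABCL, the same steps look roughly like this (a sketch, reusing the interpreter from the question above):
interpreter.eval("(load \"~/QuickLisp.lisp\")");
interpreter.eval("(quicklisp-quickstart:install)");
interpreter.eval("(load \"~/quicklisp/setup.lisp\")");
interpreter.eval("(ql:quickload :cl-ppcre)");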
