I have some questions about log4j categories.
I have three categories:
program
program.BUILD
program.QUERY
When I define the following log4j.properties:
log4j.logger.program = DEBUG, stdout, file
log4j.logger.program.BUILD = DEBUG, file
and in Java I call:
Logger logger = Logger.getLogger("program.BUILD");
Assume that stdout and file are appenders for the console and a file, respectively.
My problem is that, with the two categories configured as shown, program.BUILD logs are written to both the console and the file, even though that category only specifies the file appender. Is log4j applying inheritance here?
I would like to define the three categories so that program.BUILD uses ONLY what is specified for that category, without also taking the generic category (program).
But when program.QUERY or program.BUILD is not configured explicitly, it should fall back to the program category, which represents the ones that were not specified.
How can I do this?
Yes, log4j has an inheritance system. You can disable it (that is, stop log messages from bubbling up to the parent category) with the additivity=false flag.
Each enabled logging request for a given logger will be forwarded to
all the appenders in that logger as well as the appenders higher in
the hierarchy. In other words, appenders are inherited additively from
the logger hierarchy. For example, if a console appender is added to
the root logger, then all enabled logging requests will at least print
on the console. If in addition a file appender is added to a logger,
say C, then enabled logging requests for C and C's children will print
on a file and on the console. It is possible to override this default
behavior so that appender accumulation is no longer additive by
setting the additivity flag to false.
(See http://logging.apache.org/log4j/1.2/manual.html)
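A minimal sketch of the properties file with additivity disabled for program.BUILD (using the category and appender names from the question):

```properties
log4j.logger.program = DEBUG, stdout, file
log4j.logger.program.BUILD = DEBUG, file
# stop program.BUILD events from also reaching program's appenders
log4j.additivity.program.BUILD = false
```

With this flag, program.BUILD events go only to the file appender, while program.QUERY (having no configuration of its own) still inherits the program category's appenders.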
Related
I have a class that extends LayoutBase. In the doLayout method, I add a key called MSG to my logging, and its value is set from ILoggingEvent.getMessage().
I'm seeing values added to my logging but they're not consistent; some logging messages and some exception stack traces.
Can anyone tell me where ILoggingEvent event.getMessage() gets its value from?
Please refer to the Logback architecture documentation.
It explains that each logging request goes through a series of steps:
Get Filter Chain Decision
If it exists, the TurboFilter chain is invoked. Turbo filters can set a context-wide threshold, or filter out certain events based on information such as Marker, Level, Logger, message, or the Throwable that are associated with each logging request.
Apply the selection rule
At this step, logback compares the effective level of the logger with the level of the request. If the logging request is disabled according to this test, then logback will drop the request without further processing.
Create a LoggingEvent object
If the request survived the previous filters, logback will create a LoggingEvent object containing all the relevant parameters of the request.
Invoking appenders
After the creation of a LoggingEvent object, logback will invoke the doAppend() methods of all the applicable appenders, that is, the appenders inherited from the logger context.
Formatting the output
It is the responsibility of the invoked appender to format the logging event. However, some (but not all) appenders delegate the task of formatting the logging event to a layout.
Sending out the LoggingEvent
After the logging event is fully formatted it is sent to its destination by each appender.
I'm seeing values added to my logging but they're not consistent; some logging messages and some exception stack traces.
Now, coming to your query, it seems that you are receiving events which have exception information (Throwable) during the formatting step.
You can create a custom filter to filter out such events. All you have to do is extend Filter<ILoggingEvent>:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.classic.spi.IThrowableProxy;
import ch.qos.logback.classic.spi.ThrowableProxy;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class DenyExceptionFilter extends Filter<ILoggingEvent> {
    @Override
    public FilterReply decide(ILoggingEvent event) {
        // Deny any event that carries throwable information
        final IThrowableProxy throwableProxy = event.getThrowableProxy();
        if (throwableProxy instanceof ThrowableProxy)
            return FilterReply.DENY;
        return FilterReply.ACCEPT;
    }
}
This can be made much more powerful; for example, it could filter only specific types of exception. You can take that as homework :P
Then you can add this Custom Filter to your appender as
<appender name="APPENDER_NAME" class="ch.qos.logback.classic.AsyncAppender">
<filter class="com.stackoverflow.DenyExceptionFilter" />
</appender>
Of course, add your layout as well after the filter.
I have the following logging.properties file:
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern = /tmp/file.log
java.util.logging.FileHandler.count = 1
java.util.logging.FileHandler.append = true
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$td.%1$tm.%1$tY %1$tH:%1$tM:%1$tS [%4$s] %5$s%6$s%n
fileclass = INFO, FileHandler
Currently everything goes to the console, and the 'fileclass' package additionally goes into the file,
but I want the 'fileclass' package to be excluded from the console.
I could define the main package to go to the console, but the main program doesn't
have a package.
Is it possible to have such case within logging.properties:
- Everything goes to the console, except 'fileclass' which goes to the file
I haven't used Java Util Logging much, but normally this is related to the "additivity" of a logger.
The way to achieve what you want is to set the additivity of the fileclass logger to false (which means it will not use its parent's appenders/handlers), and then add only the appender/handler you want for fileclass.
The way to control additivity in Java Util Logging is the useParentHandlers property in your config. So it looks like:
fileclass.useParentHandlers=false
fileclass.level=INFO
fileclass.handlers=java.util.logging.FileHandler
By doing so, the fileclass logger will not inherit its parent's handlers, and will use only whatever you set on it.
One thing to note: loggers are hierarchical, so this affects the fileclass logger and all of its child loggers. For example, if you have a logger called fileclass.foo, it will also use only the FileHandler, which may or may not be what you want.
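The same setup can also be done in code. A runnable sketch with plain java.util.logging (the logger name fileclass is taken from the question; the file name and log message are made up for illustration):

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Level;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class FileOnlyLoggerDemo {
    public static void main(String[] args) throws IOException {
        Logger logger = Logger.getLogger("fileclass");
        logger.setUseParentHandlers(false); // do not bubble up to the root ConsoleHandler
        FileHandler fileHandler = new FileHandler("fileclass.log", true); // append mode
        fileHandler.setFormatter(new SimpleFormatter());
        logger.addHandler(fileHandler);
        logger.setLevel(Level.INFO);

        logger.info("this line goes to fileclass.log only");
        fileHandler.close(); // flush and release the file
    }
}
```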
This can be done a couple of ways.
Let's assume you want to leave the default console handler.
You can remove java.util.logging.FileHandler from the global handlers list, thus leaving you with the following:
handlers=java.util.logging.ConsoleHandler
This will cause all applications to log to the console. To complete your case, we must ensure 'fileclass' does not log to the console, yet does log to a file.
You can do this programmatically or declaratively:
Programmatically
Within the 'fileclass' app. Create a custom logger.
e.g.
Logger logger = Logger.getLogger(FileClass.class.getName());
// set the logger level
logger.setLevel(Level.INFO);
// FileHandler's constructor throws IOException, so handle or declare it
FileHandler fileHandler = new FileHandler("file.log");
// Set the handler level
// NOTE: at WARNING, the handler will discard the INFO records sent by the logger
fileHandler.setLevel(Level.WARNING);
logger.addHandler(fileHandler);
Now that you've added a FileHandler manually, all that is left to do is disconnect the java.util.logging.ConsoleHandler from your 'fileclass'. You can do this by calling logger.removeHandler(consoleHandler), where consoleHandler is the instance of java.util.logging.ConsoleHandler.
Declaratively
You can also declaratively add a file handler and disable parents and global handlers
# disable parent handlers, e.g. in this case the console handler
com.some.package.useParentHandlers=false
# add the file handler and set the level
com.some.package.level=INFO
com.some.package.handlers=com.some.package.MyFileHandler
com.some.package.MyFileHandler.pattern=%h/CoolLog%g.log
com.some.package.MyFileHandler.limit=20000000
com.some.package.MyFileHandler.count=20
This will enable you to add a file handler specifically for that application; however, it won't remove the console handler, since it's declared globally.
You may want to take a look at my blog posts on Java Logging:
understand java logging
programmatically overriding logging.properties
I'm using logback as my logging framework and have a couple of jobs that run the same main function with different parameters and would like to create a log file for each job and name the log file with the job's name.
For example, if I had jobs a,b,c that all run MyClass.main() but with different parameters, then I'd like to see a-{date}.log, b-{date}.log, c-{date}.log.
I can achieve the {date} part by specifying a <fileNamePattern>myjob-%d{yyyy-MM-dd}.log</fileNamePattern> in my logback.xml, but I'm not sure how to (or if it is even possible) create the prefix of the file names dynamically (to be the job's name).
Is there a way to dynamically name logfiles in logback? Is there another logging framework that makes this possible?
As a follow up question, am I just taking a bad approach for having multiple jobs that call the same main function with different parameters and wanting a log file named after each job? If so is there a standard/best practice solution for this case?
EDIT: The reason why I want to name each log file after the name of the job is that each job naturally defines a "unit of work" and it is easier for me to find the appropriate log file in case one of the job fails. I could simply use a rolling log file for jobs a,b,c but I found it harder for me to look through the logs and pinpoint where each job started and ended.
I would use your own logging:
public static PrintWriter getLoggerFor(String prefix) throws IOException {
    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
    String filename = prefix + "-" + sdf.format(new Date()) + ".log";
    // append to the file, with auto-flush enabled
    return new PrintWriter(new FileWriter(filename, true), true);
}
You can write a simple LRU cache e.g. with LinkedHashMap to reuse the PrintWriters.
Is there a way to dynamically name logfiles in logback? Is there another logging framework that makes this possible?
I don't believe this is possible using the out of the box appenders (File, RollingFile etc) configured by a standard logback.xml file. To do what you want, you would need to dynamically create appenders on the fly and assign loggers to different appenders. Or you would need to invent a new appender that was smart enough to write to multiple files at the same time, based on the logger name.
am I just taking a bad approach for having multiple jobs that call the same main function with different parameters and wanting a log file named after each job?
The authors of logback address this issue and slightly discourage it in the section on Mapped Diagnostic Context
A possible but slightly discouraged approach to differentiate the logging output of one client from another consists of instantiating a new and separate logger for each client. This technique promotes the proliferation of loggers and may increase their management overhead. ... A lighter technique consists of uniquely stamping each log request servicing a given client.
Then they go on to discuss mapped diagnostic contexts as a solution to this problem. They give an example of a NumberCruncherServer which is crunching numbers, for various clients in various threads simultaneously. By setting the mapped diagnostic context and an appropriate logging pattern it becomes easy to determine which log events originated from which client. Then you could simply use a grep tool to separate logging events of interest into a separate file for detailed analysis.
Yes you can.
First you have to familiarize yourself with these two concepts: Logger and Appender. Generally speaking, your code obtains a Logger and invokes logging methods such as debug(), warn(), info(), etc. A Logger has Appenders attached to it, and each Appender presents the logging information to the user according to its configuration.
Once you're familiar, what you need to do is to dynamically create a FileAppender with a different file name for each different job type, and attach it to your Logger.
I suggest you spend some time with the logback manual if none of the above makes sense.
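A sketch of that dynamic-appender approach (assuming logback-classic on the classpath; JobLoggerFactory, the pattern string, and the "job." logger-name prefix are made-up names for illustration):

```java
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.FileAppender;
import org.slf4j.LoggerFactory;

public class JobLoggerFactory {
    // Build a logger that writes to <jobName>-<date>.log
    public static Logger getJobLogger(String jobName) {
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        PatternLayoutEncoder encoder = new PatternLayoutEncoder();
        encoder.setContext(context);
        encoder.setPattern("%d{HH:mm:ss.SSS} %-5level %msg%n");
        encoder.start();

        FileAppender<ILoggingEvent> appender = new FileAppender<>();
        appender.setContext(context);
        appender.setName(jobName);
        appender.setFile(jobName + "-" + java.time.LocalDate.now() + ".log");
        appender.setEncoder(encoder);
        appender.start();

        Logger logger = context.getLogger("job." + jobName);
        logger.addAppender(appender);
        logger.setAdditive(false); // do not also log to parent appenders
        return logger;
    }
}
```

Each job would then call JobLoggerFactory.getJobLogger("a") (or "b", "c") once at startup and log through the returned logger.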
You can make use of the logback discriminators, as discriminators' keys can be used in the <FileNamePattern> tag. I can think of two options:
Option One:
You can use the Mapped Diagnostic Context discriminator to implement your logging separation; you'll need to set a distinct value from each job using MDC.put().
Once you've done that, your appender in the logback configuration would look something like:
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator class="ch.qos.logback.classic.sift.MDCBasedDiscriminator">
<key>jobName</key> <!-- the key you used with MDC.put() -->
<defaultValue>none</defaultValue>
</discriminator>
<sift>
<appender name="jobsLogs-${jobName}" class="ch.qos.logback.core.rolling.RollingFileAppender">
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<FileNamePattern>${jobName}.%d{dd-MM-yyyy}.log.zip</FileNamePattern>
.
.
.
</rollingPolicy>
<layout class="ch.qos.logback.classic.PatternLayout">
<Pattern>...</Pattern>
</layout>
</appender>
</sift>
</appender>
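On the job side, a minimal sketch of stamping the MDC before running the shared entry point (JobRunner is a made-up wrapper class; MyClass.main is the shared main from the question):

```java
import org.slf4j.MDC;

public class JobRunner {
    public static void main(String[] args) {
        // assume the first argument carries the job's name, e.g. "a", "b" or "c"
        String jobName = args.length > 0 ? args[0] : "unknownJob";
        MDC.put("jobName", jobName); // matches the <key> of the sifting discriminator
        try {
            MyClass.main(args); // the shared entry point from the question
        } finally {
            MDC.remove("jobName");
        }
    }
}
```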
Option Two:
Implement your own discriminator (implementing ch.qos.logback.core.sift.Discriminator) to discriminate based on the thread name. It would look something like this:
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.Discriminator;

public class ThreadNameDiscriminator implements Discriminator<ILoggingEvent> {
    private static final String THREAD_NAME_KEY = "threadName";

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        return Thread.currentThread().getName();
    }

    @Override
    public String getKey() {
        return THREAD_NAME_KEY;
    }

    // implementation of the remaining lifecycle methods (start, stop, isStarted)
    // ...
}
The logging appender would look like option one, with the discriminator class being ThreadNameDiscriminator and the key being threadName. In this option there is no need to set a value in the MDC from your jobs, so no modification to them is required.
I am using java.util.logging.Logger Class for logging in my application. I have added FileHandler so that the application log is stored directly in log.txt file.
But for some reason, after the application is terminated the log is far from complete. On cmd, I can see all the statements but they are never appended to the file.
I have set FileHandler to the Logger by:
private void setLogger() {
try {
FileHandler hand = new FileHandler("log/log.txt", true);
hand.setFormatter(new SimpleFormatter());
Logger log = Logger.getLogger(ImageRename.MAIN_LOG);
//log.setUseParentHandlers(false);
log.addHandler(hand);
log.setLevel(Level.ALL);
} catch (IOException e) {
System.out.println("Could Not set logger");
}
}
Any problem with flushing? How to solve it? Thanks.
PS: While debugging, I noticed that at some point
Logger.getLogger(ImageRename.MAIN_LOG).getHandlers().length
returns 0, whereas it should return 1. Initially it printed 1, but somewhere down the line it becomes 0.
The problem is ... garbage collection.
What is happening is likely the following:
You call Logger.getLogger(ImageRename.MAIN_LOG);
You setup the logger.
Java notices it is unreferenced, and discards it.
You call Logger.getLogger(ImageRename.MAIN_LOG); and expect to get the same logger.
A fresh logger is set up with default configuration.
You can avoid this by two measures:
Use a configuration file logging.properties for configuration. When creating the logger, the Java logging API will consult the configuration, and thus recreate it appropriately.
Use static references. This is best practice anyway. Equip each class with a logger:
private final static Logger LOG =
Logger.getLogger(ExampleClass.class.getName());
While the class is loaded, the logger should then not be garbage collected, AFAICT.
See e.g. http://www.massapi.com/class/fi/FileHandler.html for an example (found via Google)
Note the following line, which may be your problem:
fileHandler.setLevel(Level.ALL);
(Note: this is the level of the Handler, not of the Logger or message.)
For debugging, first try to get messages logged at SEVERE (error) level. Messages at INFO level and below are often suppressed by default.
Also try setting the logging level as soon as possible. In my experience, the most reliable way of configuring Java logging is by using a properties file, and invoking Java with:
-Djava.util.logging.config.file=path/to/file/logging.properties
The reason is that settings applied programmatically are sometimes not picked up by loggers created before the settings were loaded, once changes have already been made to the logging configuration.
Using the standard java logging API (import java.util.logging.Logger), after the construction:
Logger l = Logger.getLogger("mylogger");
I am already able to log something. Since it has no FileHandler, it doesn't write anything to disk.
l.severe("test with no handler");
Yet it writes some (not all) of the log messages to the console output.
How can I disable this feature?
thanks in advance
Agostino
The question arises if you don't know the default configuration of java.util.logging.
Architectural fact:
0) Every logger, whatever its name, has the root logger as its parent.
Default facts:
1) a logger's useParentHandlers property is true by default
2) the root logger has a ConsoleHandler by default
So: a new logger by default sends its log records to its parent (point 1), which is the root logger (point 0), which by default logs them to the console (point 2).
Removing console logging is as easy as:
Logger l0 = Logger.getLogger("");
l0.removeHandler(l0.getHandlers()[0]);
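Note that getHandlers()[0] assumes the ConsoleHandler is the first (and only) handler on the root logger. A slightly safer sketch removes whatever console handlers happen to be present:

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;

public class RemoveConsoleDemo {
    public static void main(String[] args) {
        Logger root = Logger.getLogger("");
        for (Handler h : root.getHandlers()) {
            if (h instanceof ConsoleHandler) {
                root.removeHandler(h); // detach only the console handler(s)
            }
        }
    }
}
```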
Standard Loggers in Java are in a hierarchical structure and child Loggers by default inherit the Handlers of their parents. Try this to suppress parent Handlers from being used:
l.setUseParentHandlers(false);
By "disable this feature", do you mean you don't want to log anything at all? If that's the case, you have to set the level of that logger to OFF.
For instance, if you're using a logging properties file, you can set:
mylogger.level=OFF
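Programmatically, the level that disables all logging in java.util.logging is Level.OFF, regardless of which handlers are attached:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SilenceLoggerDemo {
    public static void main(String[] args) {
        Logger l = Logger.getLogger("mylogger");
        l.setLevel(Level.OFF); // no record passes the level check anymore
        l.severe("this is never logged");
        System.out.println("SEVERE loggable: " + l.isLoggable(Level.SEVERE));
        // prints "SEVERE loggable: false"
    }
}
```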