I am trying to understand whether there are any best practices, utilities, or out-of-the-box functionality for logging messages that are quite long (typically JSON responses, XML, CSV, etc.).
While logging as part of my application I do something like
log.info("Incompatible order found {}", order.asJson())
The problem is that the asJson() representation can be pretty long. In the log files the actual JSON is only relevant maybe 1% of the time, so it is important enough to retain but big enough to make me lose focus when I am trawling through logs.
Is there any way I could do something like
log.info("Incompatible order found, file dumped at {}", SomeUtility.dumpString(order.asJson()));
where the utility dumps the content to a file in a location consistent with the other log files, so that in my log file I see the following:
Incompatible order found, file dumped at /abc/your/log/location/tmpfiles/xy23nmg
Key things to note:
It would be preferable to configure this through the existing logging API, so that these temp files end up in the same location as the log itself and go through the same cleanup cycle after N days, just like the actual log files.
I can obviously write something myself, but I am keen on existing utilities or features if they are already available within log4j.
I am aware that when logs such as these are imported into analysis systems like Splunk, only the filenames will be present without the actual file contents; this is okay.
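For illustration, a rough sketch of the kind of utility I could write myself (the SomeUtility name and the dump directory are hypothetical):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public final class SomeUtility {
    // Hypothetical location; would have to be kept in sync with the log directory by hand
    private static final Path DUMP_DIR = Paths.get("/abc/your/log/location/tmpfiles");
    public static String dumpString(String content) {
        try {
            Files.createDirectories(DUMP_DIR);
            Path file = Files.createTempFile(DUMP_DIR, "dump", ".json");
            Files.write(file, content.getBytes(StandardCharsets.UTF_8));
            return file.toString();
        } catch (IOException e) {
            return "<dump failed: " + e.getMessage() + ">";
        }
    }
}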
(This suggestion is based on Logback instead of log4j2; however, I believe similar facilities exist in log4j2.)
In Logback, there is a facility called SiftingAppender, which allows separate appenders (e.g. file appenders) to be created according to some discriminator (e.g. a value in the MDC).
So, by configuring an appender (e.g. orderDetailAppender) which separates files based on a discriminator (e.g. the order ID put in the MDC), and making use of a separate logger connected to that appender, you should get the result you want:
pseudo code:
logback config:
<appender name="ORDER_DETAIL_APPENDER" class="ch.qos.logback.classic.sift.SiftingAppender">
<!-- uses the MDC discriminator by default; you may choose/develop another appropriate discriminator -->
<sift>
<appender name="ORDER-DETAIL-${ORDER_ID}" class="ch.qos.logback.core.FileAppender">
....
</appender>
</sift>
</appender>
<logger name="ORDER_DETAIL_LOGGER">
<appender-ref name="ORDER_DETAIL_APPENDER"/>
</logger>
and your code looks like:
Logger logger = LoggerFactory.getLogger(YourClass.class); // logger you use normally
Logger orderDetailLogger = LoggerFactory.getLogger("ORDER_DETAIL_LOGGER");
.....
MDC.put("ORDER_ID", order.getId());
logger.warn("something wrong in order {}. Find the file in separate file", order.getId());
orderDetailLogger.warn("Order detail:\n{}", order.getJson());
// Consider passing order.getJson() lazily, or wrapping the call in a logger
// level check, so the JSON is not built when it will not be logged (see below)
MDC.remove("ORDER_ID");
The simplest approach is to use a separate logger for the JSON dump, and configure that logger to put its output in a different file.
As an example:
private static final Logger log = LogManager.getLogger();
private static final Logger jsonLog = LogManager.getLogger("jsondump");
...
log.info("Incompatible order found, id = {}", order.getId());
jsonLog.info("Incompatible order found, id = {}, JSON = {}",
order.getId(), order.asJson());
Then, configure the logger named jsondump to go to a separate file, possibly with a separate rotation schedule. Make sure to set additivity to false for the JSON logger, so that messages sent to that logger won't be sent to the root logger as well. See Additivity in the Log4j docs for details.
Example (XML configuration, Log4J 2):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
<Appenders>
<!-- By setting only filePattern, not fileName, all log files
will use the pattern for naming. Files will be rotated once
an hour, since that's the most specific unit in the
pattern. -->
<RollingFile name="LogFile" filePattern="app-%d{yyyy-MM-dd HH}00.log">
<PatternLayout pattern="%d %p %c [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
<!-- See comment for appender named "LogFile" above. -->
<RollingFile name="JsonFile" filePattern="json-%d{yyyy-MM-dd HH}00.log">
<!-- Omitting logger name for this appender, as it's only used by
one logger. -->
<PatternLayout pattern="%d %p [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<!-- Note that additivity="false" to prevent JSON dumps from being
sent to the root logger as well. -->
<Logger name="jsondump" level="debug" additivity="false">
<appender-ref ref="JsonFile"/>
</Logger>
<Root level="debug">
<appender-ref ref="LogFile"/>
</Root>
</Loggers>
</Configuration>
I am trying to set up log4j2 to write logs using the RollingFileAppender. I want to configure the logging system programmatically, instead of using XML files.
Here is what I tried (mostly the same as the docs at https://logging.apache.org/log4j/2.x/manual/customconfig.html#Configurator):
public static void configure(String rootLevel, String packageLevel) {
ConfigurationBuilder<BuiltConfiguration> builder = ConfigurationBuilderFactory
.newConfigurationBuilder();
builder.setConfigurationName("RollingBuilder");
builder.setStatusLevel(Level.TRACE);
// Create a rolling file appender
LayoutComponentBuilder layoutBuilder = builder.newLayout("PatternLayout")
.addAttribute("pattern", "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %c{1}:%L - %m%n");
ComponentBuilder triggeringPolicy =
builder
.newComponent("Policies")
.addComponent(
builder
.newComponent("SizeBasedTriggeringPolicy")
.addAttribute("size", "200M")
);
AppenderComponentBuilder appenderBuilder =
builder
.newAppender("rolling", "RollingFile")
.addAttribute("fileName", "log")
.addAttribute("filePattern", "log.%d.gz")
.add(layoutBuilder)
.addComponent(triggeringPolicy);
builder.add(appenderBuilder);
// Create new logger
LoggerComponentBuilder myPackageLoggerBuilder =
builder.newLogger("com.mypackage", packageLevel)
.add(builder.newAppenderRef("rolling"))
.addAttribute("additivity", false);
builder.add(myPackageLoggerBuilder);
RootLoggerComponentBuilder rootLoggerBuilder =
builder
.newRootLogger(rootLevel)
.add(builder.newAppenderRef("rolling"));
builder.add(rootLoggerBuilder);
// Initialize logging
Configurator.initialize(builder.build());
}
I call the configure() method at the start of the main method. A file named log is created when I run my program, but all log output goes to standard out and the log file remains empty.
Can someone help figure out what is wrong with my config?
I am not using any log4j configuration file, if that makes a difference. I am also using the SLF4J API in my code. Dependencies:
org.apache.logging.log4j:log4j-api:2.11.1
org.apache.logging.log4j:log4j-core:2.11.1
org.apache.logging.log4j:log4j-slf4j-impl:2.11.1
org.slf4j:slf4j-api:1.7.25
First, this answer is in response to the additional information provided in your comment.
My requirement is that I want to control log levels for different
packages via command line flags when I launch my program
Since you want to use program arguments to control your log level, I suggest you take a look at the Main Arguments Lookup and the Routing Appender. Using these two features together, you can set up your logging configuration to send log events to the appropriate appender based on program arguments.
I will provide a simple example to help guide you.
First here is an example Java class that generates some log events:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.lookup.MainMapLookup;
public class SomeClass {
private static Logger log = LogManager.getLogger();
public static void main(String[] args){
MainMapLookup.setMainArguments(args);
if(log.isDebugEnabled())
log.debug("This is some debug!");
log.info("Here's some info!");
log.error("Some error happened!");
}
}
Next the configuration file for log4j2 (see comments in the code for details):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
<Appenders>
<Routing name="myRoutingAppender">
<!-- log events are routed to appenders based on the logLevel program argument -->
<Routes pattern="$${main:logLevel}">
<!-- If the logLevel argument is followed by DEBUG this route is used -->
<Route ref="DebugFile" key="DEBUG" />
<!-- If the logLevel argument is omitted or followed by any other value this route is used -->
<Route ref="InfoFile" />
</Routes>
</Routing>
<!-- This appender is not necessary, was used to test the config -->
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
</Console>
<!-- Below are the 2 appenders used by the Routing Appender from earlier -->
<File name="DebugFile" fileName="logs/Debug.log" immediateFlush="true"
append="false">
<PatternLayout
pattern="%d{yyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
</File>
<File name="InfoFile" fileName="logs/Info.log" immediateFlush="true"
append="false">
<PatternLayout
pattern="%d{yyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
<LevelRangeFilter minLevel="FATAL" maxLevel="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
</File>
</Appenders>
<Loggers>
<!-- Root logger is set to DEBUG intentionally so that debug events are generated.
However, events may be ignored by the LevelRangeFilter depending on where they
are routed by the Routing Appender
-->
<Root level="DEBUG">
<AppenderRef ref="Console" />
<AppenderRef ref="myRoutingAppender" />
</Root>
</Loggers>
</Configuration>
Using this configuration, if you do not provide a "logLevel" argument, log events are routed by default to the "InfoFile" appender, and any events more specific than INFO are ignored via the LevelRangeFilter.
If a "logLevel" argument is provided and is followed by "DEBUG" then log events are routed to the "DebugFile" appender and none of the events are ignored.
Note that I did try to use lookups to set the log level, but it appears that log level parameters can't be configured via lookups, which is why I had to use this alternative approach. My one concern, as noted in the comments within the configuration file, is that the root level must be kept at DEBUG, which means the DEBUG events are always generated even when you don't use them; this might impact performance. A workaround would be to use your program argument to determine whether you need to generate the debug events. For example:
Normally you would use:
if(log.isDebugEnabled())
log.debug("This is some debug!");
but when using the configuration above you would use something like:
if("DEBUG".equals(args[1]))
log.debug("This is some debug!");
and you could make that more efficient using an enum (maybe even use the Level class provided by log4j2) if you needed to.
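For example, a sketch using log4j2's own Level class (this assumes, as in the routing example, that the level name is the second program argument):
import org.apache.logging.log4j.Level;
...
// Parse the requested level once at startup; fall back to INFO on bad input
Level requested = args.length > 1 ? Level.toLevel(args[1], Level.INFO) : Level.INFO;
if (requested.isLessSpecificThan(Level.DEBUG))
    log.debug("This is some debug!");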
I hope this helps get you started.
Here is a sample root logger setup; note that you finally need to call the reconfigure function:
RootLoggerComponentBuilder rootLogger = builder
    .newRootLogger(Level.ALL)
    .add(builder.newAppenderRef("LogToRollingFile"));
builder.add(rootLogger);
LoggerComponentBuilder logger = builder
    .newLogger("MyClass", Level.ALL)
    .addAttribute("additivity", false);
builder.add(logger);
Configurator.reconfigure(builder.build());
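Putting it together with an appender definition, a minimal sketch of the whole flow (the appender name LogToRollingFile and the file names are assumptions; this also assumes a log4j2 version where Configurator.reconfigure(Configuration) is available):
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;
public static void configureLogging() {
    ConfigurationBuilder<BuiltConfiguration> builder =
        ConfigurationBuilderFactory.newConfigurationBuilder();
    // Rolling file appender, referenced below by name
    builder.add(builder.newAppender("LogToRollingFile", "RollingFile")
        .addAttribute("fileName", "app.log")
        .addAttribute("filePattern", "app.%d{yyyy-MM-dd}.log.gz")
        .add(builder.newLayout("PatternLayout")
            .addAttribute("pattern", "%d %p %c [%t] %m%n"))
        .addComponent(builder.newComponent("Policies")
            .addComponent(builder.newComponent("SizeBasedTriggeringPolicy")
                .addAttribute("size", "200M"))));
    builder.add(builder.newRootLogger(Level.ALL)
        .add(builder.newAppenderRef("LogToRollingFile")));
    // reconfigure() applies the config even if log4j was already initialized
    Configurator.reconfigure(builder.build());
}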
I have defined a Marker and would like to log this only to a specific file. Therefore I'm trying to set additivity = false. But it does not work and is still also logged to my global logfile in addition. What might be wrong here?
<Configuration>
<Appenders>
<RollingFile name="TEST" fileName="test.log" filePattern="test1.log">
<MarkerFilter marker="TEST_LOG" onMatch="ACCEPT" onMismatch="DENY"/>
</RollingFile>
<RollingFile name="GLOBAL" fileName="global.log" filePattern="global.log">
<ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
<MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="ACCEPT"/>
</RollingFile>
</Appenders>
<Loggers>
<logger name="foo" additivity="false">
<appender-ref ref="TEST" />
</logger>
</Loggers>
</Configuration>
usage:
LogManager.getRootLogger().info(MarkerManager.getMarker("TEST"), "test log");
In the sample code the marker is named "TEST", but in the configuration the MarkerFilter only accepts log events with marker "TEST_LOG". Config and code should use the same string.
In addition, you probably need to define two Appenders: one for your global log file and one for your specific (marked) log events. Then ACCEPT the marked log events only in your specific appender, and DENY the marked log events in the global appender.
Additivity is not going to work in your case, because additivity controls whether a specific logger inherits its parent's appenders.
In your config you are setting logger foo to additivity="false", which means that unless you log through foo or one of its child loggers, the root logger's config still applies. (I can't see your root logger config in your post; I suspect it refers to both appenders.) From your quoted code you are using the root logger for your logging, so the configuration of the foo logger simply won't take effect.
There are two solutions I can suggest:
For all log messages you log with the TEST_LOG marker, make sure you log them using the foo logger (see the sketch after this list), or
If you need to use any logger for your TEST_LOG messages, then reconfigure your appenders so that the GLOBAL file appender accepts anything BUT the marker you actually log with (currently GLOBAL denies only the TEST_LOG marker, while your code logs with the TEST marker).
The correct solution depends on your actual requirements. Make sure you understand the basic concepts so you can choose the right one for you.
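For solution 1, the logging call would look something like this sketch:
// Log through the "foo" logger so its additivity="false" configuration applies
Logger fooLogger = LogManager.getLogger("foo");
fooLogger.info(MarkerManager.getMarker("TEST_LOG"), "test log");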
Edit:
First, even with the "correct" config you mentioned in the comment, the issue of different loggers still holds: because you are using the root logger to do your logging, your config for the foo logger has nothing to do with your log messages, and additivity is out of the picture in your case.
I am not using Log4J2 myself, but a brief check on the usage of filters leads me to two issues:
First, I believe the proper way to define more than one filter is to make use of a composite filter (which means defining them inside a <Filters> element, though I don't quite get the syntax from Log4J's docs).
Second, even if you define them in a composite filter, your config will still have a problem: when a log event has level INFO or higher, your ThresholdFilter declares it an ACCEPT or DENY, which stops subsequent filter evaluation. If you want to log messages at INFO and above that do not carry the TEST_LOG marker, you should do something like:
<MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="NETURAL"/>
<ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
NEUTRAL means the current filter cannot determine whether to accept or deny the message, so the next filter is evaluated to decide.
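Put together inside the GLOBAL appender, the composite filter could look like this (a sketch based on the two lines above):
<Filters>
    <!-- Marked events are rejected outright; everything else falls through as NEUTRAL -->
    <MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="NEUTRAL"/>
    <!-- Then keep INFO and above, drop everything else -->
    <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
</Filters>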
Couple of things I noticed:
You should use the same marker string in the code, and in the config for both MarkerFilters.
You need to add AppenderRef elements to connect your logger to both appenders
You can specify a level on the AppenderRef, which is easier than using a ThresholdFilter
Which gives you this config:
<Configuration>
<Appenders>
<RollingFile name="MARKED_ONLY" fileName="markedOnly.log" filePattern="markedOnly1.log">
<MarkerFilter marker="MY_MARKER" onMatch="ACCEPT" onMismatch="DENY"/>
</RollingFile>
<RollingFile name="GLOBAL" fileName="global.log" filePattern="global.log">
<MarkerFilter marker="MY_MARKER" onMatch="DENY" onMismatch="ACCEPT"/>
</RollingFile>
</Appenders>
<Loggers>
<root level="trace">
<appender-ref ref="MARKED_ONLY" level="trace" />
<appender-ref ref="GLOBAL" level="info" />
</root>
</Loggers>
</Configuration>
And this code:
Logger logger = LogManager.getLogger(MyClass.class); // or .getRootLogger()
logger.info(MarkerManager.getMarker("MY_MARKER"), "test log");
Note it is important to use the same "MY_MARKER" string in the code, and in the filters for both appenders. That way, the appender for the global file can use that filter to DENY the marked log events.
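One way to avoid this kind of string mismatch altogether is to define the marker (org.apache.logging.log4j.Marker) once as a constant and reuse it in every logging call:
// Single shared definition; the marker string lives in one place
private static final Marker MY_MARKER = MarkerManager.getMarker("MY_MARKER");
...
logger.info(MY_MARKER, "test log");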
I use log4j2 in my project something like this:
logger.log(Level.ERROR, this.logData);
My configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="ERROR" DLog4jContextSelector="org.apache.logging.log4j.core.async.AsyncLoggerContextSelector">
<Appenders>
<!-- Async Loggers will auto-flush in batches, so switch off immediateFlush. -->
<RandomAccessFile name="RandomAccessFile" fileName="C:\\logs\\log1.log" immediateFlush="false" append="false">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m %ex%n</Pattern>
</PatternLayout>
</RandomAccessFile>
</Appenders>
<Loggers>
<Root level="error" includeLocation="false">
<AppenderRef ref="RandomAccessFile"/>
</Root>
</Loggers>
It creates my file and I log something to it, but the file stays empty. When I try to delete the file, the OS tells me it is in use (while the app is running), but even after I stop the application the file is still empty.
So which settings should I change to make it work correctly?
I suspect that asynchronous logging is not switched on correctly.
As of beta-9 it is not possible to switch on Async Loggers in the XML configuration, you must set the system property Log4jContextSelector to "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector".
The reason you are not seeing anything in the log is that your log messages are still in the buffer and have not been flushed to disk yet. If you switch on Async Loggers the log messages will be flushed to disk automatically.
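For example, the selector can be passed on the command line when starting the JVM (yourapp.jar is a placeholder):
java -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar yourapp.jar
or set programmatically, as long as it happens before the first logger is obtained:
System.setProperty("Log4jContextSelector",
    "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");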
Here is a cleaner and easier solution: https://stackoverflow.com/a/33467370/3397345
Add a file named log4j2.component.properties to your classpath. This can be done in most maven or gradle projects by saving it in src/main/resources.
Set the value for the context selector by adding the following line to the file.
Log4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
Log4j will attempt to read the system property first. If the system property is null, then it will fall back to the values stored in this file by default.
Below is the log4j2.xml file that I have created. I have configured async_file.log for asynchronous logging and regular_file.log for regular, synchronous logging. The problem is that the log files get created, but their size is zero, with no logs. All logs go to the server.log file (JBoss) and not to the two files I configured (async_file.log and regular_file.log).
Please let me know why the logs are not going to the log files I configured, or give me some direction or hint.
I am calling the two different loggers in the same class, named DCLASS, as shown below:
private static final transient Logger LOG = Logger.getLogger(DCLASS.class);
private static final transient Logger ASYNC_LOG = Logger.getLogger("ASYNC");
I have included the following jars in the Class Path:
1. log4j-api-2.0-beta8.jar
2. log4j-core-2.0-beta8.jar
3. disruptor-3.0.0.beta1.jar
My log4j2.xml is as below:
<?xml version="1.0" encoding="UTF-8"?>
<configuration status="INFO">
<appenders>
<!-- Async Loggers will auto-flush in batches, so switch off immediateFlush. -->
<FastFile name="AsyncFastFile" fileName="../standalone/log/async_file.log"
immediateFlush="false" append="true">
<PatternLayout>
<pattern>%d %p %class{1.} [%t] %location %m %ex%n</pattern>
</PatternLayout>
</FastFile>
<FastFile name="FastFile" fileName="../standalone/log/regular_file.log"
immediateFlush="true" append="true">
<PatternLayout>
<pattern>%d %p %class{1.} [%t] %location %m %ex%n</pattern>
</PatternLayout>
</FastFile>
</appenders>
<loggers>
<!-- pattern layout actually uses location, so we need to include it -->
<asyncLogger name="ASYNC" level="trace" includeLocation="true">
<appender-ref ref="AsyncFastFile"/>
</asyncLogger>
<root level="info" includeLocation="true">
<appender-ref ref="FastFile"/>
</root>
</loggers>
</configuration>
The reason the logs were not going to the log files is that I was using 'Logger' instead of 'LogManager'.
In the code, I had
private static final transient Logger ASYNC_LOG = Logger.getLogger("ASYNC");
The code should have been
private static final transient Logger ASYNC_LOG = LogManager.getLogger("ASYNC");
With 'Logger' the code resolves against the Log4j 1.x API, and with 'LogManager' it resolves against the Log4j 2 API. Since I had configured everything for Log4j 2, changing Logger to LogManager made the logs go to the log files as expected.
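A side-by-side sketch of the two calls (fully qualified for clarity; this assumes the Log4j 1.x jar was also on the classpath, which is common on JBoss):
// Log4j 1.x API - not configured by log4j2.xml, so output silently goes elsewhere
org.apache.log4j.Logger oldStyle =
    org.apache.log4j.Logger.getLogger("ASYNC");
// Log4j 2 API - picks up the asyncLogger configuration from log4j2.xml
org.apache.logging.log4j.Logger newStyle =
    org.apache.logging.log4j.LogManager.getLogger("ASYNC");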
We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we log into a single log file for multiple requests. It becomes tedious to debug an issue for a given request, as the logs for all requests are in a single file.
So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. In reality, there could be multiple consumers sending request IDs to our application. So the question is how to segregate the log files based on the request.
We cannot start and stop the production server every time, so an overridden file appender with a date-time stamp or request ID is ruled out. That approach is explained in the article below:
http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/
I also tried playing around with these alternatives:
http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html
http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html
This approach gives the desired results, but it does not work properly when multiple requests are sent at the same time; due to concurrency issues, logs end up in the wrong files.
I would appreciate some help from you folks. Thanks in advance.
Here's my question on the same topic:
dynamically creating & destroying logging appenders
I followed this up in a thread on the Logback mailing list, where I discuss doing exactly this:
http://www.qos.ch/pipermail/logback-user/2009-August/001220.html
Ceki Gülcü (the creator of log4j) didn't think it was a good idea and suggested using Logback instead.
We went ahead and did this anyway, using a custom file appender. See my discussions above for more details.
Look at the SiftingAppender shipped with Logback (log4j's successor); it is designed to handle the creation of appenders based on runtime criteria.
If your application needs to create just one log file per session, simply create a discriminator based on the session id, as sketched below. Writing a discriminator involves 3 or 4 lines of code and thus should be fairly easy. Shout on the logback-user mailing list if you need help.
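A sketch of such a discriminator (assuming the session id is stored in the MDC under the key sessionId):
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;
public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {
    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        // Fall back to a fixed bucket when no session id is present in the MDC
        String sessionId = event.getMDCPropertyMap().get("sessionId");
        return sessionId != null ? sessionId : "unknown";
    }
    @Override
    public String getKey() {
        return "sessionId";
    }
}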
This problem is handled very well by Logback. I suggest to opt for it if you have the freedom.
Assuming you can, what you will need is the SiftingAppender. It allows you to separate log files according to some runtime value, which gives you a wide array of options for how to split log files.
To split your files on requestId, you could do something like this:
logback.xml
<configuration>
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<key>requestId</key>
<defaultValue>unknown</defaultValue>
</discriminator>
<sift>
<appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
<file>${requestId}.log</file>
<append>false</append>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
</layout>
</appender>
</sift>
</appender>
<root level="DEBUG">
<appender-ref ref="SIFT" />
</root>
</configuration>
As you can see (inside discriminator element), you are going to discriminate the files used for writing logs on requestId. That means that each request will go to a file that has a matching requestId. Hence, if you had two requests where requestId=1 and one request where requestId=2, you would have 2 log files: 1.log (2 entries) and 2.log (1 entry).
At this point you might wonder how to set the key. This is done by putting key-value pairs in MDC (note that key matches the one defined in logback.xml file):
RequestProcessor.java
public class RequestProcessor {
    private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);
    public void process(Request request) {
        MDC.put("requestId", request.getId());
        try {
            log.debug("Request received: {}", request);
            // ... handle the request ...
        } finally {
            // Clear the id so it does not leak to other requests on this thread
            MDC.remove("requestId");
        }
    }
}
And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
Using filePattern:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Properties>
<property name="filePattern">${date:yyyy-MM-dd-HH_mm_ss}</property>
</Properties>
<Appenders>
<File name="File" fileName="export/logs/app_${filePattern}.log" append="false">
<PatternLayout
pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
</File>
    <!-- Console appender referenced by the root logger below -->
    <Console name="Console" target="SYSTEM_OUT">
        <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n" />
    </Console>
</Appenders>
<Loggers>
<Root level="debug">
<AppenderRef ref="Console" />
<AppenderRef ref="File" />
</Root>
</Loggers>
</Configuration>