I have a function that executes HTTP requests and responses, and I would like to log their content.
This code can be invoked multiple times within the same execution (identified by an "executionId"), and each invocation can have a different "activityId".
What I would like to get is:
A new log file each time the executionId changes (so having execution1.log, execution2.log, execution3.log etc.)
All logs for the same execution id appended to the potentially already existing file (so if execution1.log has already been created and I'm asked to log something again while executionId == execution1, it gets appended to that file).
With this purpose in mind, I have configured the following log4j2.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<properties>
<property name="logDir">logs/</property>
</properties>
<appenders>
<RollingFile name="HttpActivityTracingAppender" filePattern="${logDir}/http-logs/${ctx:executionId}.log" append="true">
<PatternLayout pattern="%d{ISO8601} [%t] %c [execution:%X{executionId},activity:%X{activityId}] : %p - %m%n"/>
<Policies>
<OnStartupTriggeringPolicy />
</Policies>
</RollingFile>
</appenders>
<loggers>
<logger name="HttpActivityTracing" level="trace">
<appender-ref ref="HttpActivityTracingAppender"/>
</logger>
</loggers>
</configuration>
... and then I'm invoking the logger as follows:
ThreadContext.put("id", UUID.randomUUID().toString());
ThreadContext.putAll(Map.of("executionId", ..., "activityId", ...));
ThreadContext.pop();
myLogger.trace("bla bla bla");
ThreadContext.clearAll();
The result is that the first time executionId is defined (let's say execution1), then a log file named execution1.log is created under logs/ as expected.
However, on consecutive executions with new values of executionId, I see the new executionId inside the log file but the name of the file used is still execution1.log.
So for example, inside execution1.log I will find stuff like:
2022-09-20T22:52:02,639 [main] HttpActivityTracing [execution:execution1,activity:activity1] : TRACE - ... (ok)
2022-09-20T22:52:02,639 [main] HttpActivityTracing [execution:execution1,activity:activity2] : TRACE - ... (ok)
2022-09-20T22:52:02,639 [main] HttpActivityTracing [execution:execution1,activity:activity3] : TRACE - ... (ok)
2022-09-20T22:52:02,639 [main] HttpActivityTracing [execution:execution2,activity:activity1] : TRACE - ... (NO, I WANT THIS INTO ANOTHER FILE "execution2.log")
2022-09-20T22:52:02,639 [main] HttpActivityTracing [execution:execution2,activity:activity2] : TRACE - ... (NO, I WANT THIS INTO ANOTHER FILE "execution2.log")
I have found several examples all over the web, but I couldn't make this one work. I feel I'm not far off, yet I can't make it work.
Can anyone please help?
P.s. I've tried to use fileName instead of filePattern in the RollingFile appender configuration, but then it tries to create a file with the literal name ${ctx:executionId} in it, which ends up being rejected by Windows because it contains the illegal character ':'.
I ended up solving the issue with the exact same configuration as illustrated in the question, plus the following two lines of code after the ThreadContext.pop():
LoggerContext context = (LoggerContext) LogManager.getContext(false);
context.reconfigure();
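For completeness, here is a minimal sketch of the fix in context, assuming the logger and context values from the question (the class, method, and parameter names are mine):

import java.util.Map;
import java.util.UUID;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;
import org.apache.logging.log4j.core.LoggerContext;

public class HttpActivityLogging {
    private static final Logger myLogger = LogManager.getLogger("HttpActivityTracing");

    void logForExecution(String executionId, String activityId, String message) {
        ThreadContext.put("id", UUID.randomUUID().toString());
        ThreadContext.putAll(Map.of("executionId", executionId, "activityId", activityId));
        ThreadContext.pop();

        // The two added lines: force Log4j2 to re-evaluate the configuration so
        // the ${ctx:executionId} lookup in filePattern resolves to the new value.
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        context.reconfigure();

        myLogger.trace(message);
        ThreadContext.clearAll();
    }
}

Keep in mind that reconfigure() rebuilds the whole logging configuration, so this is only reasonable when executionId changes infrequently.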
I am trying to understand whether there are any best practices, utilities, or out-of-the-box functionality for logging messages that are quite long (typically JSON responses, XMLs, CSVs, etc.).
While logging as part of my application I do something like
log.info("Incompatible order found {}", order.asJson())
The problem being that the asJson() representation could be pretty long. In the log files the actual json is only relevant say 1% of the time. So it is important enough to retain but big enough to make me lose focus when I am trawling through logs.
Is there any way I could do something like
log.info("Incompatible order found, file dumped at {}", SomeUtility.dumpString(order.asJson()));
where the utility dumps the string into a file in a location consistent with the other log files (a rough sketch of such a utility is at the end of this question), and then in my log file I can see the following:
Incompatible order found, file dumped at /abc/your/log/location/tmpfiles/xy23nmg
Key things to note:
It would be preferable to simply configure this through the existing logging API, so that these temp files end up in the same location as the log itself and go through the same cleanup cycle after N days, just like the actual log files.
I can obviously write something myself, but I am keen on existing utilities or features if already available within log4j.
I am aware that when logs such as these are imported into analysis systems like Splunk then only the filenames will be present without the actual files and this is okay.
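For illustration only, here is a rough sketch of what such a hypothetical SomeUtility.dumpString could look like, assuming a fixed dump directory (tying it into the logger's own rotation/cleanup cycle would still need appender support):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public final class SomeUtility {
    // Placeholder path from the example above; in practice this should point
    // at the same directory the logging framework writes to.
    private static final Path DUMP_DIR = Paths.get("/abc/your/log/location/tmpfiles");

    // Writes the payload to a uniquely named file and returns its path,
    // so only the path ends up in the main log.
    public static String dumpString(String content) {
        try {
            Files.createDirectories(DUMP_DIR);
            Path file = Files.createTempFile(DUMP_DIR, "dump-", ".json");
            Files.writeString(file, content);
            return file.toString();
        } catch (IOException e) {
            return "<dump failed: " + e.getMessage() + ">";
        }
    }
}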
(This suggestion is based on Logback instead of log4j2; however, I believe similar facilities exist in log4j2.)
In Logback, there is a facility called SiftingAppender, which allows a separate appender (e.g. a file appender) to be created according to some discriminator (e.g. a value in the MDC).
So, by configuring an appender (e.g. orderDetailAppender) that separates files based on a discriminator (e.g. by putting the order ID in the MDC), and using a separate logger connected to that appender, you should get the result you want:
pseudo code:
logback config:
<appender name="ORDER_DETAIL_APPENDER" class="ch.qos.logback.classic.sift.SiftingAppender">
<!-- uses the MDC discriminator by default; you may choose/develop another appropriate discriminator -->
<sift>
<appender name="ORDER-DETAIL-${ORDER_ID}" class="ch.qos.logback.core.FileAppender">
....
</appender>
</sift>
</appender>
<logger name="ORDER_DETAIL_LOGGER">
<appender-ref ref="ORDER_DETAIL_APPENDER"/>
</logger>
and your code looks like:
Logger logger = LoggerFactory.getLogger(YourClass.class); // logger you use normally
Logger orderDetailLogger = LoggerFactory.getLogger("ORDER_DETAIL_LOGGER");
.....
MDC.put("ORDER_ID", order.getId());
logger.warn("something wrong in order {}. Find the file in separate file", order.getId());
orderDetailLogger.warn("Order detail:\n{}", order.getJson());
// consider passing order.getJson() as a lambda, or wrapping the line in a
// logger-level check, to avoid building the JSON when it won't be logged
MDC.remove("ORDER_ID");
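As a sketch of the level check mentioned in the comment above (reusing orderDetailLogger and order from the example):

// Only build the JSON string when WARN is actually enabled for this logger.
if (orderDetailLogger.isWarnEnabled()) {
    orderDetailLogger.warn("Order detail:\n{}", order.getJson());
}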
The simplest approach is to use a separate logger for the JSON dump, and configure that logger to put its output in a different file.
As an example:
private static final Logger log = LogManager.getLogger();
private static final Logger jsonLog = LogManager.getLogger("jsondump");
...
log.info("Incompatible order found, id = {}", order.getId());
jsonLog.info("Incompatible order found, id = {}, JSON = {}",
order.getId(), order.asJson());
Then, configure the logger named jsondump to go to a separate file, possibly with a separate rotation schedule. Make sure to set additivity to false for the JSON logger, so that messages sent to that logger won't be sent to the root logger as well. See Additivity in the Log4j docs for details.
Example (XML configuration, Log4J 2):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
<Appenders>
<!-- By setting only filePattern, not fileName, all log files
will use the pattern for naming. Files will be rotated once
an hour, since that's the most specific unit in the
pattern. -->
<RollingFile name="LogFile" filePattern="app-%d{yyyy-MM-dd HH}00.log">
<PatternLayout pattern="%d %p %c [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
<!-- See comment for appender named "LogFile" above. -->
<RollingFile name="JsonFile" filePattern="json-%d{yyyy-MM-dd HH}00.log">
<!-- Omitting logger name for this appender, as it's only used by
one logger. -->
<PatternLayout pattern="%d %p [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<!-- Note that additivity="false" to prevent JSON dumps from being
sent to the root logger as well. -->
<Logger name="jsondump" level="debug" additivity="false">
<appender-ref ref="JsonFile"/>
</Logger>
<Root level="debug">
<appender-ref ref="LogFile"/>
</Root>
</Loggers>
</Configuration>
I know there are lots of questions about this, but I went through all of them and only confused myself more. I am listing the steps I followed; please let me know where I messed up.
1) I just want to use Log4j at the application level, so I need to copy WL_HOME/server/lib/wllog4j.jar and log4j.jar into DOMAIN_HOME/lib?
2) I am using Maven, and I added the Log4j dependency to my pom.xml [war]. I have my WAR wrapped in an EAR.
3) Since I want to write the logs into the WebLogic managed server log file, I created a custom appender that uses WebLogic's NonCatalogLogger, as mentioned in this link (a sketch of such an appender follows after this list) - https://blog.desgrange.net/2010/02/15/logging-in-weblogic-console-with-log4j.html
4) I copied the log4j config file into my war/src/main/resources, and I see Maven added it to the classpath, i.e. war/target/classes. See below for my log4j XML:
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j='http://jakarta.apache.org/log4j/'>
<!-- stdout appender settings -->
<appender name="STDOUT" class="com.xyz.logging.util.WeblogicAppender">
<param name="Threshold" value="DEBUG"/>
<layout class="org.apache.log4j.PatternLayout">
<!-- notes on patterns:
%p - priority (i.e. level) of message
%c - logger (category) name
%m - message in logger's method, e.g. "Exiting application HERE..."
%n - platform line separator
%d - date information
-->
<param name="ConversionPattern" value="%c{1} %m"/>
</layout>
</appender>
<!-- settings for root debugger -->
<root>
<level value="DEBUG"/>
<appender-ref ref="STDOUT"/>
</root>
</log4j:configuration>
5) Now I didn't change anything at the config level, but I don't see anything appended to the server logs. When I initialize NonCatalogLogger manually and call the logger, it works fine:
NonCatalogLogger logger = new NonCatalogLogger("XYZ");
logger.debug("This is the debug message");
6) When I debug the application in Eclipse, it looks like my custom appender is never called.
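For reference, a minimal sketch of what such a NonCatalogLogger-backed Log4j 1.x appender could look like (my reconstruction, not the asker's actual code; the subsystem name "XYZ" and the level mapping are assumptions):

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.spi.LoggingEvent;
import weblogic.logging.NonCatalogLogger;

public class WeblogicAppender extends AppenderSkeleton {
    private final NonCatalogLogger wlLogger = new NonCatalogLogger("XYZ");

    @Override
    protected void append(LoggingEvent event) {
        String msg = layout != null ? layout.format(event) : event.getRenderedMessage();
        // Map Log4j levels onto the closest NonCatalogLogger methods.
        if (event.getLevel().isGreaterOrEqual(Level.ERROR)) {
            wlLogger.error(msg);
        } else if (event.getLevel().isGreaterOrEqual(Level.WARN)) {
            wlLogger.warning(msg);
        } else if (event.getLevel().isGreaterOrEqual(Level.INFO)) {
            wlLogger.info(msg);
        } else {
            wlLogger.debug(msg);
        }
    }

    @Override
    public void close() {
        // Nothing to release; NonCatalogLogger holds no resources to close.
    }

    @Override
    public boolean requiresLayout() {
        return true;
    }
}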
Got it; I just needed to put the line below into the arguments under "Server Start":
-Dweblogic.log.Log4jLoggingEnabled=true
1. I am trying to add log messages at the beginning of the file, so that I can see the latest message first in the log file.
2. I want to add the month and year after the log file name, but I am not getting that in the currently active file, e.g. test_2015-04-22.log.
The property file is given below:
log4j.appender.APP=org.apache.log4j.DailyRollingFileAppender
log4j.appender.APP.File=${catalina.base}/logs/test.log
log4j.appender.APP.Append=true
log4j.appender.APP.Encoding=UTF-8
log4j.appender.APP.DatePattern='.'yyyy-MM
log4j.appender.APP.layout = org.apache.log4j.PatternLayout
log4j.appender.APP.layout.ConversionPattern =%d{yyyy-MM-dd HH:mm} - %m%n
log4j.appender.APP.filePattern =Test_%d{yyyy-MM-dd}.log
Your question shows a log4j-1.2 configuration, but since the question also has a log4j2 tag, I feel free to answer this by showing you how to accomplish this with log4j2.
In log4j2, you can declare a property that formats the date, then use this property to configure your appender file name.
Also, you can use the header attribute of the pattern layout to set a header that is output at the beginning of a file. For RollingFileAppender this header will be output on every rollover.
Use one of the lookups built into log4j2 to dynamically change the output of your header at rollover time.
Example:
<Configuration status="WARN"><!-- use TRACE to troubleshoot your config if needed-->
<Properties>
<property name="yyyyMMdd">${date:yyyyMMdd}</property>
</Properties>
<Appenders>
<RollingFile name="Application"
fileName="${sys:catalina.base}/logs/test${yyyyMMdd}.log"
filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout header="File: ${main:--file}">
<Pattern>%d{yyyy-MM-dd HH:mm} - %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy />
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<root level="trace">
<AppenderRef ref="Application" />
</root>
</Loggers>
</Configuration>
It's not possible. A log file is append-only, so the newest message always ends up at the end of the file, which makes perfect sense when debugging: you see the last message at the bottom.
The log for today will have the name test.log; at rollover, DailyRollingFileAppender renames it by appending the DatePattern, so with DatePattern='.'yyyy-MM you will get test.log.2015-04 (the filePattern property in your config is not a DailyRollingFileAppender option and is ignored).
I use log4j2 in my project something like this:
logger.log(Level.ERROR, this.logData);
My configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="ERROR" DLog4jContextSelector="org.apache.logging.log4j.core.async.AsyncLoggerContextSelector">
<Appenders>
<!-- Async Loggers will auto-flush in batches, so switch off immediateFlush. -->
<RandomAccessFile name="RandomAccessFile" fileName="C:\\logs\\log1.log" immediateFlush="false" append="false">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m %ex%n</Pattern>
</PatternLayout>
</RandomAccessFile>
</Appenders>
<Loggers>
<Root level="error" includeLocation="false">
<AppenderRef ref="RandomAccessFile"/>
</Root>
</Loggers>
</Configuration>
It creates my file and I log something to it, but the file stays empty. When I try to delete the file, the OS tells me it is in use (while the app is running), but even after I stop the application, the file is still empty.
So which settings should I change to make it work correctly?
I suspect that asynchronous logging is not switched on correctly.
As of beta-9 it is not possible to switch on Async Loggers in the XML configuration; you must set the system property Log4jContextSelector to "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector".
The reason you are not seeing anything in the log is that your log messages are still in the buffer and have not been flushed to disk yet. If you switch on Async Loggers the log messages will be flushed to disk automatically.
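For example, the selector can be set programmatically, as long as this runs before Log4j is first initialized (the -D command-line flag is the more common approach; this placement in main() is my assumption):

public static void main(String[] args) {
    // Must run before the first LogManager/Logger call anywhere in the app.
    System.setProperty("Log4jContextSelector",
            "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");
    // ... start the application ...
}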
I'll share a cleaner and easier solution:
https://stackoverflow.com/a/33467370/3397345
Add a file named log4j2.component.properties to your classpath. This can be done in most maven or gradle projects by saving it in src/main/resources.
Set the value for the context selector by adding the following line to the file.
Log4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
Log4j will attempt to read the system property first. If the system property is null, then it will fall back to the values stored in this file by default.
We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we log into a single log file for multiple requests. It becomes tedious to debug an issue for a given request, since the logs for all requests are in a single file.
So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. Now, in reality, there could be multiple consumers sending request IDs to our application. So the question is how to segregate the log files based on the request.
We cannot start and stop the production server every time, so using an overridden file appender with a date-time stamp or request ID is ruled out. This is what is explained in the article below:
http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/
I also tried playing around with these alternatives:
http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html
http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html
The latter approach gives the desired results, but it does not work properly if multiple requests are sent at the same time: due to concurrency issues, logs end up in the wrong files.
I anticipate some help from you folks. Thanks in advance....
Here's my question on the same topic:
dynamically creating & destroying logging appenders
I followed this up in a thread where I discuss doing something exactly like this, on the Logback mailing list:
http://www.qos.ch/pipermail/logback-user/2009-August/001220.html
Ceki Gülcü (the creator of log4j) didn't think it was a good idea... he suggested using Logback instead.
We went ahead and did this anyway, using a custom file appender. See my discussions above for more details.
Look at the SiftingAppender that ships with Logback (log4j's successor); it is designed to handle the creation of appenders based on runtime criteria.
If your application needs to create just one log file per session, simply create a discriminator based on the session id. Writing a discriminator involves 3 or 4 lines of code and thus should be fairly easy. Shout on the logback-user mailing list if you need help.
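As a sketch of such a discriminator (assuming the session id is stored in the MDC under the key "sessionId"):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

// Discriminates log events by the session id found in the MDC.
public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {
    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        String sessionId = event.getMDCPropertyMap().get("sessionId");
        return sessionId != null ? sessionId : "unknown";
    }

    @Override
    public String getKey() {
        return "sessionId";
    }
}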
This problem is handled very well by Logback. I suggest to opt for it if you have the freedom.
Assuming you can, what you will need to use is SiftingAppender. It allows you to separate log files according to some runtime value, which means you have a wide array of options for how to split log files.
To split your files on requestId, you could do something like this:
logback.xml
<configuration>
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<key>requestId</key>
<defaultValue>unknown</defaultValue>
</discriminator>
<sift>
<appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
<file>${requestId}.log</file>
<append>false</append>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
</layout>
</appender>
</sift>
</appender>
<root level="DEBUG">
<appender-ref ref="SIFT" />
</root>
</configuration>
As you can see (inside discriminator element), you are going to discriminate the files used for writing logs on requestId. That means that each request will go to a file that has a matching requestId. Hence, if you had two requests where requestId=1 and one request where requestId=2, you would have 2 log files: 1.log (2 entries) and 2.log (1 entry).
At this point you might wonder how to set the key. This is done by putting key-value pairs in the MDC (note that the key matches the one defined in the logback.xml file):
RequestProcessor.java
public class RequestProcessor {
private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);
public void process(Request request) {
MDC.put("requestId", request.getId());
log.debug("Request received: {}", request);
}
}
And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
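One caveat worth noting (my addition, not part of the original answer): request-processing threads are usually pooled, so the MDC value can leak into later requests unless it is removed, e.g. in a finally block:

public void process(Request request) {
    MDC.put("requestId", request.getId());
    try {
        log.debug("Request received: {}", request);
        // ... handle the request ...
    } finally {
        // The MDC is thread-local; remove the key so a pooled thread does
        // not carry this request's id into unrelated log entries.
        MDC.remove("requestId");
    }
}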
Using a filePattern property:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Properties>
<property name="filePattern">${date:yyyy-MM-dd-HH_mm_ss}</property>
</Properties>
<Appenders>
<File name="File" fileName="export/logs/app_${filePattern}.log" append="false">
<PatternLayout
pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
</File>
<!-- Console output, referenced by the root logger below -->
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss} %-5level %logger{36} - %msg%n" />
</Console>
</Appenders>
<Loggers>
<Root level="debug">
<AppenderRef ref="Console" />
<AppenderRef ref="File" />
</Root>
</Loggers>
</Configuration>