I use Log4j2 in my project, something like this:
logger.log(Level.ERROR, this.logData);
My configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="ERROR" DLog4jContextSelector="org.apache.logging.log4j.core.async.AsyncLoggerContextSelector">
<Appenders>
<!-- Async Loggers will auto-flush in batches, so switch off immediateFlush. -->
<RandomAccessFile name="RandomAccessFile" fileName="C:\\logs\\log1.log" immediateFlush="false" append="false">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m %ex%n</Pattern>
</PatternLayout>
</RandomAccessFile>
</Appenders>
<Loggers>
<Root level="error" includeLocation="false">
<AppenderRef ref="RandomAccessFile"/>
</Root>
</Loggers>
</Configuration>
It creates my file and I log something to it, but the file stays empty. When I try to delete the file, the OS tells me it is in use (while the app is running), but even after I stop the application the file is still empty.
So which settings should I change to make it work correctly?
I suspect that asynchronous logging is not switched on correctly.
As of beta-9 it is not possible to switch on Async Loggers in the XML configuration; you must set the system property Log4jContextSelector to "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector".
The reason you are not seeing anything in the log is that your log messages are still in the buffer and have not been flushed to disk yet. If you switch on Async Loggers the log messages will be flushed to disk automatically.
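For example, assuming the application is started from the command line (the JAR name here is just a placeholder), the selector can be passed as a JVM argument; note that Async Loggers also require the LMAX Disruptor jar on the classpath:

java -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar myapp.jar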
Here is a cleaner and easier solution:
https://stackoverflow.com/a/33467370/3397345
Add a file named log4j2.component.properties to your classpath. In most Maven or Gradle projects this can be done by saving it in src/main/resources.
Set the value for the context selector by adding the following line to the file.
Log4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
Log4j will attempt to read the system property first. If the system property is not set, it will fall back to the value stored in this file.
I have a Spring Boot application running on CentOS Linux, using Log4j2 to generate console logs and persist logs to a file.
I want to maintain only 5 MB of log files in the archive.
The problem is that my archived log files are 5 MB in total, but the main log file to which the console log is saved, wc-notification.out, grows beyond 1 MB.
So my disk gets full and it causes an issue.
The brute-force solution is: whenever I restart (hard stop and start) my Spring Boot application, the log in wc-notification.out is cleared.
My log4j2 configuration XML file:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO" monitorInterval="30">
<Properties>
<Property name="LOG_PATTERN">
[ %d{yyyy-MMM-dd HH:mm:ss a} ] - [%t] %-5level %logger{36} - %msg%n
</Property>
</Properties>
<Appenders>
<Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
<PatternLayout pattern="${LOG_PATTERN}"/>
</Console>
<RollingFile name="FileAppender" fileName="/home/ec2-user/apps/wc-notification-service/wc-notification.out"
filePattern="/home/ec2-user/apps/wc-notification-service/archives_test/wc-notification.out-%d{yyyy-MM-dd}-%i">
<PatternLayout>
<Pattern>${LOG_PATTERN}</Pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="1MB" />
</Policies>
<DefaultRolloverStrategy>
<Delete basePath="logs" maxDepth="1">
<IfFileName glob="wc-notification.out-*.log" />
<IfLastModified age="1m" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="info">
<!--<AppenderRef ref="ConsoleAppender" /> -->
<AppenderRef ref="FileAppender" />
</Root>
</Loggers>
</Configuration>
Somehow the files are in the range of 1 MB, the rollover strategy is working, and it is deleting files.
But my disk space is still being consumed. What can be the reason?
As you are providing your own log4j2.xml configuration file, you are overriding the Spring Boot default logging configuration, and it is safe to assume that it will be the configuration used by Log4j2.
Your configuration is almost correct. To achieve the desired behavior, I would suggest the following changes:
Be aware that you are pointing your Delete action basePath to the wrong location; it should point to the directory in which your logs are stored.
The IfFileName glob pattern is wrong as well; it should match your log filenames.
Finally, you are using the IfLastModified condition. As its name implies, this condition has to do with the last modification date of the log files, not with their size. Please consider using IfAccumulatedFileSize instead (or maybe IfAccumulatedFileCount). See the relevant documentation.
For these reasons, your logs are not being deleted correctly and the disk space occupied is greater than the desired amount. Without deletion, your log files are rotated every 1 MB as configured by your SizeBasedTriggeringPolicy and accumulate up to the value of the max attribute of DefaultRolloverStrategy (7 by default), plus your current log file.
In summary, please, try a configuration like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO" monitorInterval="30">
<Properties>
<Property name="LOG_PATTERN">
[ %d{yyyy-MMM-dd HH:mm:ss a} ] - [%t] %-5level %logger{36} - %msg%n
</Property>
</Properties>
<Appenders>
<Console name="ConsoleAppender" target="SYSTEM_OUT" follow="true">
<PatternLayout pattern="${LOG_PATTERN}"/>
</Console>
<RollingFile name="FileAppender" fileName="/home/ec2-user/apps/wc-notification-service/wc-notification.out"
filePattern="/home/ec2-user/apps/wc-notification-service/wc-notification.out-%d{yyyy-MM-dd}-%i">
<PatternLayout>
<Pattern>${LOG_PATTERN}</Pattern>
</PatternLayout>
<Policies>
<SizeBasedTriggeringPolicy size="1MB" />
</Policies>
<DefaultRolloverStrategy>
<Delete basePath="/home/ec2-user/apps/wc-notification-service">
<IfFileName glob="wc-notification.out-*" />
<IfAccumulatedFileSize exceeds="5 MB" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="info">
<!--<AppenderRef ref="ConsoleAppender" /> -->
<AppenderRef ref="FileAppender" />
</Root>
</Loggers>
</Configuration>
The solution relies on storing all your logs in the same location; note that it affects the filePattern attribute of your RollingFile appender.
Please be careful with the Delete action: it can delete not only your log files but anything that matches your glob pattern.
In addition, although maybe irrelevant for your use case, be aware that if the filePattern of your archive files ends with ".gz", ".zip", ".bz2", among others, the resulting archive will be compressed using the compression scheme that matches the suffix, which can allow you to store more archives in the same space if required.
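For instance, keeping the filePattern from the configuration above, appending a ".gz" suffix would make Log4j2 gzip each archive as it rolls over:

filePattern="/home/ec2-user/apps/wc-notification-service/wc-notification.out-%d{yyyy-MM-dd}-%i.gz"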
From your comments, it seems you encountered a problem when using large file sizes: please see this bug, which I think clearly describes your problem.
My best advice would be to reduce the log file size to one that works properly, or to try a more recent version of the library.
I am aware that you are using Spring Boot to manage your dependencies:
please verify your Maven dependency tree to see the actual library version you are using, and change it if necessary.
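For example, you can inspect the resolved Log4j2 version with Maven (a sketch; assumes a Maven build):

mvn dependency:tree -Dincludes=org.apache.logging.log4j

If your build inherits the Spring Boot parent POM, you can pin a newer version by overriding the log4j2.version property in your pom.xml (the version shown is only an example):

<properties>
    <log4j2.version>2.17.2</log4j2.version>
</properties>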
You need to leverage the DeleteAction of the RollingFileAppender. I would recommend taking a look at the documentation as there are a lot of good examples.
The reason for the behavior you see is as follows. Look at your parameters:
appender.gateway.policies.size.size=1MB
appender.gateway.strategy.type = DefaultRolloverStrategy
appender.gateway.strategy.max = 5
What you are asking for here is that the log file be allowed to grow up to 1 MB. Once it reaches 1 MB it is copied to logfile.log-1 and a new logfile.log is created. As files are produced, you have required that only the last 5 files be kept; the older files get deleted automatically. So the behavior is exactly as you configured it. What you can do is configure it to keep more than 5 files back, possibly in a different folder where you have sufficient space.
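A minimal sketch of that change in the same properties format (keeping the rest of your appender definition unchanged; the archive path is an assumption):

appender.gateway.policies.size.size = 1MB
appender.gateway.strategy.type = DefaultRolloverStrategy
# keep the last 20 rolled files instead of 5, in a folder with enough space
appender.gateway.strategy.max = 20
appender.gateway.filePattern = /opt/logs/archive/gateway-%d{yyyy-MM-dd}-%i.log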
Just for the information of people facing a similar issue: I changed my logging mechanism to Logback. It works like a charm and I have had no issues.
The reference link I used: spring boot with logback
I am trying to understand whether there are any best practices, utilities, or out-of-the-box functionality for logging messages that are quite long (typically JSON responses, XMLs, CSVs, etc.).
While logging as part of my application I do something like
log.info("Incompatible order found {}", order.asJson())
The problem is that the asJson() representation can be pretty long. In the log files the actual JSON is only relevant, say, 1% of the time. So it is important enough to retain, but big enough to make me lose focus when I am trawling through logs.
Is there any way I could do something like
log.info("Incompatible order found, file dumped at {}", SomeUtility.dumpString(order.asJson()));
where the utility dumps the file into a location consistent with other log files and then in my log file I can see the following
Incompatible order found, file dumped at /abc/your/log/location/tmpfiles/xy23nmg
Key things to note:
It would be preferable to simply use the existing logging API to configure this, so that the location of these temp files is the same as the log itself; that way they go through the cleanup cycle after N days etc., just like the actual log files.
I can obviously write something, but I am keen on existing utilities or features if already available within log4j.
I am aware that when logs such as these are imported into analysis systems like Splunk, only the filenames will be present without the actual files, and this is okay.
(This suggestion is based on Logback instead of log4j2; however, I believe similar facilities exist in log4j2.)
In Logback there is a facility called SiftingAppender, which allows a separate appender (e.g. a file appender) to be created according to some discriminator (e.g. a value in the MDC).
So, by configuring an appender (e.g. orderDetailAppender) which separates files based on a discriminator (e.g. by putting the order ID in the MDC), and making use of a separate logger connected to that appender, this should give you the result you want:
pseudo code:
logback config:
<appender name="ORDER_DETAIL_APPENDER" class="ch.qos.logback.classic.sift.SiftingAppender">
<!-- uses the MDC discriminator by default; you may choose/develop another appropriate discriminator -->
<sift>
<appender name="ORDER-DETAIL-${ORDER_ID}" class="ch.qos.logback.core.FileAppender">
....
</appender>
</sift>
</appender>
<logger name="ORDER_DETAIL_LOGGER">
<appender-ref name="ORDER_DETAIL_APPENDER"/>
</logger>
and your code looks like:
Logger logger = LoggerFactory.getLogger(YourClass.class); // logger you use normally
Logger orderDetailLogger = LoggerFactory.getLogger("ORDER_DETAIL_LOGGER");
.....
MDC.put("ORDER_ID", order.getId());
logger.warn("something wrong in order {}. Find the file in separate file", order.getId());
orderDetailLogger.warn("Order detail:\n{}", order.getJson());
// consider making order.getJson() lazy (e.g. a lambda), or wrapping the line with a logger
// level check (see the sketch below), to avoid building the JSON when it is not required
MDC.remove("ORDER_ID");
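For example, a minimal sketch of that level check, so the JSON string is only built when the order-detail logger is actually enabled:

// Only build the JSON when the order-detail logger is enabled:
if (orderDetailLogger.isWarnEnabled()) {
    orderDetailLogger.warn("Order detail:\n{}", order.getJson());
}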
The simplest approach is to use a separate logger for the JSON dump, and configure that logger to put its output in a different file.
As an example:
private static final Logger log = LogManager.getLogger();
private static final Logger jsonLog = LogManager.getLogger("jsondump");
...
log.info("Incompatible order found, id = {}", order.getId());
jsonLog.info("Incompatible order found, id = {}, JSON = {}",
order.getId(), order.asJson());
Then, configure the logger named jsondump to go to a separate file, possibly with a separate rotation schedule. Make sure to set additivity to false for the JSON logger, so that messages sent to that logger won't be sent to the root logger as well. See Additivity
in the Log4J docs for details.
Example (XML configuration, Log4J 2):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
<Appenders>
<!-- By setting only filePattern, not fileName, all log files
will use the pattern for naming. Files will be rotated once
an hour, since that's the most specific unit in the
pattern. -->
<RollingFile name="LogFile" filePattern="app-%d{yyyy-MM-dd HH}00.log">
<PatternLayout pattern="%d %p %c [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
<!-- See comment for appender named "LogFile" above. -->
<RollingFile name="JsonFile" filePattern="json-%d{yyyy-MM-dd HH}00.log">
<!-- Omitting logger name for this appender, as it's only used by
one logger. -->
<PatternLayout pattern="%d %p [%t] %m %ex%n"/>
<Policies>
<TimeBasedTriggeringPolicy/>
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<!-- Note that additivity="false" to prevent JSON dumps from being
sent to the root logger as well. -->
<Logger name="jsondump" level="debug" additivity="false">
<appender-ref ref="JsonFile"/>
</Logger>
<Root level="debug">
<appender-ref ref="LogFile"/>
</Root>
</Loggers>
</Configuration>
First time trying to use log4j, version log4j-1.2.17.jar.
On an existing application the client has log4j in place, and there is a log4j.properties file which specifies a light log output. What I want to do is output a more refined entry depending on the log level (ERROR and WARN).
On the log4j site I came across this, but I think it is meant to go in some .xml file. I need some assistance in understanding how to put in place a formatting option that alters based on log level.
You don't need to declare separate loggers to achieve this. You can set the logging level on the AppenderRef element.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
<Appenders>
<File name="file" fileName="app.log">
<PatternLayout>
<Pattern>%d %p %c{1.} [%t] %m %ex%n</Pattern>
</PatternLayout>
</File>
<Console name="STDOUT" target="SYSTEM_OUT">
<PatternLayout pattern="%m%n"/>
</Console>
</Appenders>
<Loggers>
<Root level="trace">
<AppenderRef ref="file" level="DEBUG"/>
<AppenderRef ref="STDOUT" level="INFO"/>
</Root>
</Loggers>
</Configuration>
Would I put this XML content into the web.xml file or another file?
If another file, what file name would it have and where would it go?
How do I get log4j to realize that I need it to use the XML file?
Will the use of the XML file cause the log4j.properties file to be ignored?
I know it is a lot of questions, but there is only me on the project and the client has a production crisis that needs to be figured out today, so I don't have time to go off to read and do tutorials with the client calling me every hour. I figured it may help to make this logging more useful. As the logs are right now, I have a date and a message output to the log but no idea where the entries are created from without doing extensive searches through the code.
You could do this by defining multiple non-additive Loggers so that the first one logs only errors, the next one warnings, and the final one infos and debug.
I have a JAX-RS 2.0 application running on a Tomcat 7 server, and I'm using log4j2 along with SLF4J to record the server logs to a file.
I can't seem to get any logs to show up properly in my log file when running the server in production, although when I run my integration tests, logs are output correctly.
In production, the logs are merely redirected to the console instead.
My log4j2.xml file is located in the WEB-INF/classes folder, and I've included all the necessary dependencies as well.
My configuration file looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="trace">
<Appenders>
<Console name="Console" target="SYSTEM_OUT">
<PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
</Console>
<RollingFile name="file" fileName="log/trace.log" append="true" filePattern="log/trace.%i.log">
<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %-5level %X %logger{36} - %msg%n"/>
<Policies>
<OnStartupTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="10 MB" />
</Policies>
</RollingFile>
</Appenders>
<Loggers>
<Logger name="my.package" level="TRACE" additivity="false">
<AppenderRef ref="file"/>
</Logger>
<Root level="WARN">
<AppenderRef ref="file"/>
</Root>
</Loggers>
</Configuration>
The web.xml needs no configuration (I'm following the documentation found on the log4j2 website).
EDIT
I've tried setting the Root level to TRACE but everything still gets redirected to console. The file log/trace.log itself is created, it's just never written to. I also tried setting immediateFlush=true but that didn't have any impact either.
I noticed you have status logging (log4j's internal logging) set to TRACE. This will dump log4j's internal initialization messages to the console. Is this what you mean?
Otherwise, the config you provide shows there is no logger that has an appender-ref pointing to the ConsoleAppender.
So, if you are also seeing your application logs being output to the console (in addition to log4j's status log messages), I suspect there is another log4j2.xml (or log4j2-test.xml) config file in the classpath somewhere.
Fortunately log4j's status logs should also show the location of the log4j config file, so you can confirm which config file is actually being loaded.
You can switch off status logging by setting <Configuration status="WARN"> after confirming all works correctly.
I figured it out!
It turns out I was using the gretty plug-in with Gradle, which contains its own logging package (the Logback library).
It used its own internal logback.xml, which was redirecting to the console. I fixed this by overwriting the internal logback.xml with my own (using Logback's configuration format) and now everything works as expected!
We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we log into a single log file for all requests. It becomes tedious to debug an issue for a given request, as the logs for all requests are in a single file.
So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. Now, in reality there could be multiple consumers sending request IDs to our application. So the question is how to segregate the log files based on the request.
We cannot start and stop the production server every time, so using an overridden file appender with a date-time stamp or request ID is ruled out. This is what is explained in the article below:
http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/
I also tried playing around with these alternatives:
http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html
http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html
This approach gives the desired results, but it does not work properly if multiple requests are sent at the same time: due to some concurrency issues, the logs go here and there.
I anticipate some help from you folks. Thanks in advance.
Here's my question on the same topic:
dynamically creating & destroying logging appenders
I followed this up in a thread on the Log4j mailing list where I discuss doing something exactly like this:
http://www.qos.ch/pipermail/logback-user/2009-August/001220.html
Ceki Gülcü (inventor of log4j) didn't think it was a good idea and suggested using Logback instead.
We went ahead and did this anyway, using a custom file appender. See my discussions above for more details.
Look at SiftingAppender, which ships with Logback (log4j's successor); it is designed to handle the creation of appenders on runtime criteria.
If your application needs to create just one log file per session, simply create a discriminator based on the session ID. Writing a discriminator involves 3 or 4 lines of code (as sketched below) and thus should be fairly easy. Shout on the logback-user mailing list if you need help.
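A minimal sketch of such a discriminator, assuming the session ID is stored in the MDC under the key "sessionId" (the names here are illustrative):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

// Routes each logging event to an appender keyed by the MDC's session ID.
public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        String sessionId = event.getMDCPropertyMap().get("sessionId");
        return sessionId != null ? sessionId : "unknown";
    }

    @Override
    public String getKey() {
        return "sessionId";
    }
}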
This problem is handled very well by Logback. I suggest opting for it if you have the freedom.
Assuming you can, what you will need to use is SiftingAppender. It allows you to separate log files according to some runtime value, which means you have a wide array of options for how to split log files.
To split your files on requestId, you could do something like this:
logback.xml
<configuration>
<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
<discriminator>
<key>requestId</key>
<defaultValue>unknown</defaultValue>
</discriminator>
<sift>
<appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
<file>${requestId}.log</file>
<append>false</append>
<layout class="ch.qos.logback.classic.PatternLayout">
<pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
</layout>
</appender>
</sift>
</appender>
<root level="DEBUG">
<appender-ref ref="SIFT" />
</root>
</configuration>
As you can see (inside discriminator element), you are going to discriminate the files used for writing logs on requestId. That means that each request will go to a file that has a matching requestId. Hence, if you had two requests where requestId=1 and one request where requestId=2, you would have 2 log files: 1.log (2 entries) and 2.log (1 entry).
At this point you might wonder how to set the key. This is done by putting key-value pairs in the MDC (note that the key matches the one defined in the logback.xml file):
RequestProcessor.java
public class RequestProcessor {
    private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);

    public void process(Request request) {
        MDC.put("requestId", String.valueOf(request.getId()));
        log.debug("Request received: {}", request);
        // remove the key so pooled threads don't reuse a stale request ID
        MDC.remove("requestId");
    }
}
And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
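One practical note: the nested appenders linger after use, so with unbounded request IDs many files can stay open. If I remember correctly, SiftingAppender supports timeout and maxAppenderCount settings to close idle appenders and cap how many are open at once; a sketch, keeping the configuration above:

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
    <timeout>30 minutes</timeout>
    <maxAppenderCount>100</maxAppenderCount>
    <!-- discriminator and sift elements as above -->
</appender>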
Using a date-based property in the file name, so that a new log file is created each time the application starts:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
<Properties>
<Property name="filePattern">${date:yyyy-MM-dd-HH_mm_ss}</Property>
</Properties>
<Appenders>
<File name="File" fileName="export/logs/app_${filePattern}.log" append="false">
<PatternLayout
pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
</File>
</Appenders>
<Loggers>
<Root level="debug">
<AppenderRef ref="Console" />
<AppenderRef ref="File" />
</Root>
</Loggers>
</Configuration>