I have defined a Marker and would like to log it only to a specific file, so I'm trying to set additivity = false. But it does not work: the messages are still also logged to my global logfile. What might be wrong here?
<Configuration>
  <Appenders>
    <RollingFile name="TEST" fileName="test.log" filePattern="test1.log">
      <MarkerFilter marker="TEST_LOG" onMatch="ACCEPT" onMismatch="DENY"/>
    </RollingFile>
    <RollingFile name="GLOBAL" fileName="global.log" filePattern="global.log">
      <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
      <MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="ACCEPT"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <logger name="foo" additivity="false">
      <appender-ref ref="TEST" />
    </logger>
  </Loggers>
</Configuration>
usage:
LogManager.getRootLogger().info(MarkerManager.getMarker("TEST_LOG"), "test log");
Make sure the marker string used in the code matches the one in the MarkerFilter configuration: if the code creates a marker named "TEST" but the filter only accepts log events with marker "TEST_LOG", the filter will never match. Config and code should use the same string.
In addition, you probably need to define two Appenders: one for your global log file and one for your specific (marked) log events. Then ACCEPT the marked log events only in your specific appender, and DENY the marked log events in the global appender.
Additivity is not going to work in your case, because additivity only controls whether a specific logger inherits its parent's appenders.
In your config, you are setting logger foo to additivity = false, which means that unless you log through foo or one of its child loggers, you will still follow the root logger's config. (I can't see your root logger config in your post; I suspect it references both appenders.) From your quoted code, you are using the root logger for your logging, so the configuration of the foo logger simply never takes effect.
There are two solutions I can suggest:
For all log messages you log with the TEST_LOG marker, make sure you log them through the foo logger, e.g. LogManager.getLogger("foo").info(MarkerManager.getMarker("TEST_LOG"), "test log"); or,
If you need to be able to use any logger for your TEST_LOG messages, then reconfigure your appenders so that your GLOBAL file appender accepts anything BUT TEST_LOG, and only your specific appender accepts the marked events.
Correct solution depends on your actual requirement. Make sure you understand the basic concepts so you can choose the right solution for you.
Edit:
First, even with the "correct" config you mentioned in the comment, the issue of different loggers still holds. That is, because you are using the root logger to do your logging, your config for the foo logger has nothing to do with your log messages, and additivity is out of the picture in your case.
I am not using Log4j2 myself, but a brief check on the usage of filters leads me to two issues:
First, I believe the proper way to define more than one filter is to make use of a composite filter (which means defining them inside a <Filters> element, though I don't quite get the syntax from Log4j's docs).
Second, even if you define them in a composite filter, your config will still have a problem. When a log event has level INFO or higher, your ThresholdFilter declares it an ACCEPT or DENY, which prohibits evaluation of any subsequent filter. If you want to log messages at INFO and above that do not carry the TEST_LOG marker, you should do something like:
<MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="NEUTRAL"/>
<ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
NEUTRAL means current filter cannot determine whether to accept or deny the message, and will evaluate next filter to determine it.
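Put together, the GLOBAL appender from the question could then look something like the following sketch (file names taken from the question; the <Filters> wrapper is the composite-filter element from the Log4j2 docs):

```xml
<RollingFile name="GLOBAL" fileName="global.log" filePattern="global.log">
  <Filters>
    <!-- Marked events are denied outright; everything else falls through as NEUTRAL. -->
    <MarkerFilter marker="TEST_LOG" onMatch="DENY" onMismatch="NEUTRAL"/>
    <!-- Of the remaining events, only INFO and above are accepted. -->
    <ThresholdFilter level="INFO" onMatch="ACCEPT" onMismatch="DENY"/>
  </Filters>
</RollingFile>
```

The order matters here: if the ThresholdFilter came first, it would ACCEPT every INFO-or-higher event before the MarkerFilter ever ran.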
Couple of things I noticed:
You should use the same marker string in the code, and in the config for both MarkerFilters.
You need to add AppenderRef elements to connect your logger to both appenders.
You can specify a level in the AppenderRef; this is easier than using a ThresholdFilter.
Which gives you this config:
<Configuration>
  <Appenders>
    <RollingFile name="MARKED_ONLY" fileName="markedOnly.log" filePattern="markedOnly1.log">
      <MarkerFilter marker="MY_MARKER" onMatch="ACCEPT" onMismatch="DENY"/>
    </RollingFile>
    <RollingFile name="GLOBAL" fileName="global.log" filePattern="global.log">
      <MarkerFilter marker="MY_MARKER" onMatch="DENY" onMismatch="ACCEPT"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <root level="trace">
      <appender-ref ref="MARKED_ONLY" level="trace" />
      <appender-ref ref="GLOBAL" level="info" />
    </root>
  </Loggers>
</Configuration>
And this code:
Logger logger = LogManager.getLogger(MyClass.class); // or .getRootLogger()
logger.info(MarkerManager.getMarker("MY_MARKER"), "test log");
Note it is important to use the same "MY_MARKER" string in the code, and in the filters for both appenders. That way, the appender for the global file can use that filter to DENY the marked log events.
Related
I am planning to use log4j2's burstFilter in my application for burst logging management. The idea is to have it disabled until the administrator really wants to use it (I am planning to give an option in the application GUI to take parameters from the user and activate burstFilter accordingly).
I studied its documentation and realized that it's just a config change inside the log4j2.xml file. This XML config will be bundled along with the application anyway, and I will include the filter like this:
<Console name="console">
  <PatternLayout pattern="%-5p %d{dd-MMM-yyyy HH:mm:ss} %x %t %m%n"/>
  <filters>
    <Burst level="INFO" rate="16" maxBurst="100"/>
  </filters>
</Console>
Now, here the rate and maxBurst fields are set to values which are not what I expect by default. One solution I thought of is to simply omit the <filters> tag by default and write it into the log4j2.xml explicitly once the user sets these parameters in the GUI, like below.
<Burst level="INFO" rate="16" maxBurst="100"/>
This feels like the rookiest solution, so I was wondering if there is any default attribute which I can toggle to switch the filter ON or OFF.
Expectation:
Default log4j2.xml:
<filters>
  <Burst Activated="False" rate="16" maxBurst="100"/>
</filters>
If user wants to activate it:
<filters>
  <Burst Activated="True" rate="16" maxBurst="100"/>
</filters>
Any help would be appreciated. Thank you
Unless the filter provides a parameter to enable/disable it, I don't know of any way to toggle it directly. However, you can implement a toggle by using the RoutingAppender to switch from an appender with the filter to one without it. The disadvantage is, obviously, that you need to have two appenders configured, which could lead to some duplication of the configuration data. (You can extract some things into common properties which are shared between the two.)
Here is a simple example using the ThresholdFilter which demonstrates the above strategy:
First, the log4j2.xml configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <!--
    See how I defined a common property to hold the pattern since it is used
    in both appenders below.
  -->
  <Properties>
    <Property name="pattern">%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n</Property>
  </Properties>
  <Appenders>
    <Console name="ConsoleWithFilter" target="SYSTEM_OUT">
      <PatternLayout pattern="${pattern}" />
      <ThresholdFilter level="ERROR" onMatch="ACCEPT" onMismatch="DENY"/>
    </Console>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="${pattern}" />
    </Console>
    <Routing name="Routing">
      <Routes pattern="$${ctx:filterToggle}">
        <!-- This route is chosen if the ThreadContext has value "ENABLE" for key filterToggle. -->
        <Route key="ENABLE" ref="ConsoleWithFilter" />
        <!-- This is the default route when no others match. -->
        <Route ref="Console"/>
      </Routes>
    </Routing>
  </Appenders>
  <Loggers>
    <Root level="ALL">
      <AppenderRef ref="Routing" />
    </Root>
  </Loggers>
</Configuration>
Next, some Java code to generate logs:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class SomeClass {
    private static final Logger log = LogManager.getLogger();

    public static void main(String[] args) {
        // We start with no value in the ThreadContext for filterToggle;
        // this should cause all logs to appear in the console.
        if (log.isDebugEnabled())
            log.debug("This is some debug! (This should appear in console)");
        log.info("Here's some info! (This should appear in console)");
        log.error("Some error happened! (We will also see this in the console)");

        // Now we enable the filter by switching appenders.
        ThreadContext.put("filterToggle", "ENABLE");
        log.info("This should not appear in the console");
        log.debug("This also should --not-- appear");
        log.fatal("This will appear since it's more specific than ERROR level.");
    }
}
Finally, here is the output of the above:
12:22:35.922 [main] DEBUG example.SomeClass - This is some debug! (This should appear in console)
12:22:35.923 [main] INFO example.SomeClass - Here's some info! (This should appear in console)
12:22:35.923 [main] ERROR example.SomeClass - Some error happened! (We will also see this in the console)
12:22:35.924 [main] FATAL example.SomeClass - This will appear since it's more specific than ERROR level.
I hope this helps you!
I am trying to understand whether there are any best practices/utilities/out-of-the-box functionality for logging messages that are quite long (typically JSON responses/XMLs/CSVs etc.).
While logging as part of my application I do something like
log.info("Incompatible order found {}", order.asJson());
The problem being that the asJson() representation could be pretty long. In the log files the actual json is only relevant say 1% of the time. So it is important enough to retain but big enough to make me lose focus when I am trawling through logs.
Is there any way I could do something like
log.info("Incompatible order found, file dumped at {}", SomeUtility.dumpString(order.asJson()));
where the utility dumps the file into a location consistent with other log files and then in my log file I can see the following
Incompatible order found, file dumped at /abc/your/log/location/tmpfiles/xy23nmg
Key things to note being
It would be preferable to simply use the existing logging API to configure this so that the location of these temp files is the same as the log itself; that way they go through the same cleanup cycle after N days etc., just like the actual log files.
I can obviously write something myself, but I am keen on existing utilities or features if already available within Log4j.
I am aware that when logs such as these are imported into analysis systems like Splunk then only the filenames will be present without the actual files and this is okay.
(This suggestion is based on Logback instead of Log4j2; however, I believe similar facilities exist in Log4j2.)
In Logback there is a facility called SiftingAppender, which allows a separate appender (e.g. a file appender) to be created according to some discriminator (e.g. a value in the MDC).
So, by configuring an appender (e.g. orderDetailAppender) which separates files based on a discriminator (e.g. by putting the order ID in the MDC), and by using a separate logger connected to that appender, this should give you the result you want:
pseudo code:
logback config:
<appender name="ORDER_DETAIL_APPENDER" class="ch.qos.logback.classic.sift.SiftingAppender">
  <!-- uses the MDC discriminator by default; you may choose/develop another appropriate discriminator -->
  <sift>
    <appender name="ORDER-DETAIL-${ORDER_ID}" class="ch.qos.logback.core.FileAppender">
      ....
    </appender>
  </sift>
</appender>

<logger name="ORDER_DETAIL_LOGGER">
  <appender-ref ref="ORDER_DETAIL_APPENDER"/>
</logger>
and your code looks like:
Logger logger = LoggerFactory.getLogger(YourClass.class); // logger you use normally
Logger orderDetailLogger = LoggerFactory.getLogger("ORDER_DETAIL_LOGGER");
.....
MDC.put("ORDER_ID", order.getId());
logger.warn("something wrong in order {}. Find the details in the separate file", order.getId());
orderDetailLogger.warn("Order detail:\n{}", order.getJson());
// consider making order.getJson() a lambda, or wrapping the line in a logger
// level check, to avoid constructing the JSON when it is not required for logging
MDC.remove("ORDER_ID");
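In Log4j2 itself, the closest analogue to SiftingAppender is the RoutingAppender, which can likewise key off a ThreadContext (MDC) value. A hedged sketch of what that could look like (the names OrderDetailRouting, ORDER_ID, and the file paths are invented for illustration):

```xml
<Routing name="OrderDetailRouting">
  <Routes pattern="$${ctx:ORDER_ID}">
    <!-- A Route without a key matches any value; an appender is created
         lazily per distinct ORDER_ID, giving one file per order. -->
    <Route>
      <File name="orderDetail-${ctx:ORDER_ID}" fileName="logs/order-${ctx:ORDER_ID}.log">
        <PatternLayout pattern="%d %p %m%n"/>
      </File>
    </Route>
  </Routes>
</Routing>
```

As with the Logback version, you would put the order ID into the ThreadContext before logging and remove it afterwards.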
The simplest approach is to use a separate logger for the JSON dump, and configure that logger to put its output in a different file.
As an example:
private static final Logger log = LogManager.getLogger();
private static final Logger jsonLog = LogManager.getLogger("jsondump");
...
log.info("Incompatible order found, id = {}", order.getId());
jsonLog.info("Incompatible order found, id = {}, JSON = {}",
order.getId(), order.asJson());
Then, configure the logger named jsondump to go to a separate file, possibly with a separate rotation schedule. Make sure to set additivity to false for the JSON logger, so that messages sent to that logger won't also be sent to the root logger. See Additivity in the Log4j docs for details.
Example (XML configuration, Log4J 2):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
  <Appenders>
    <!-- By setting only filePattern, not fileName, all log files
         will use the pattern for naming. Files will be rotated once
         an hour, since that's the most specific unit in the
         pattern. -->
    <RollingFile name="LogFile" filePattern="app-%d{yyyy-MM-dd HH}00.log">
      <PatternLayout pattern="%d %p %c [%t] %m %ex%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
    <!-- See comment for appender named "LogFile" above. -->
    <RollingFile name="JsonFile" filePattern="json-%d{yyyy-MM-dd HH}00.log">
      <!-- Omitting logger name for this appender, as it's only used by
           one logger. -->
      <PatternLayout pattern="%d %p [%t] %m %ex%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <!-- Note that additivity="false" to prevent JSON dumps from being
         sent to the root logger as well. -->
    <Logger name="jsondump" level="debug" additivity="false">
      <appender-ref ref="JsonFile"/>
    </Logger>
    <Root level="debug">
      <appender-ref ref="LogFile"/>
    </Root>
  </Loggers>
</Configuration>
I have a single java class (a device controller) that is being used to create 5 separate processes. Each of the processes is assigned an identifier. I would like each of the processes to write to its own log file based on its assigned identifier. I have all of the appenders and loggers defined in a shared log4j2.xml config file.
Issue: When I start the first device controller, it successfully writes to the correct log file. However, when I start the second device controller, log4j will roll-over all of the loggers in the log4j2.xml config file and will only write to the log file assigned to the new process. All of the log messages for the first process will go to the rolled-over log file, but new messages are no longer written to its newly rolled-over log file.
(OS: Linux, log4j version: 2.8.2)
Below is an abbreviated version of the log4j2.xml config file that I used.
...
<Appenders>
  ...
  <RollingFile name="RollingFile-1" fileName="/logs/EPDU/Device-1.log" filePattern="/logs/EPDU/Device-1_%d{dd-MMM-yyyy::HH:mm:ss}.log">
    <PatternLayout>
      ...
    </PatternLayout>
    <Policies>
      <OnStartupTriggeringPolicy minSize="1"/>
      <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
    <DefaultRolloverStrategy fileIndex="nomax"/>
  </RollingFile>
  ...
  <RollingFile name="RollingFile-5" fileName="/logs/EPDU/Device-5.log" filePattern="/logs/EPDU/Device-5_%d{dd-MMM-yyyy::HH:mm:ss}.log">
    <PatternLayout>
      ...
    </PatternLayout>
    <Policies>
      <OnStartupTriggeringPolicy minSize="1"/>
      <SizeBasedTriggeringPolicy size="20 MB"/>
    </Policies>
    <DefaultRolloverStrategy fileIndex="nomax"/>
  </RollingFile>
</Appenders>
<Loggers>
  <Logger name="device-1" level="trace" additivity="false">
    <AppenderRef ref="RollingFile-1" level="debug"/>
  </Logger>
  ...
  <Logger name="device-5" level="trace" additivity="false">
    <AppenderRef ref="RollingFile-5" level="debug"/>
  </Logger>
</Loggers>
The Logger variable is initialized and assigned in the main method after the device identifier is determined similar to the code below:
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.LogManager;

public class DeviceController {
    private static Logger deviceLogger;

    public DeviceController(Param param1, Param param2) {
        ...
    }
    ...
    public static void main(String[] args) {
        /**
         * Fancy code to find device identifier...
         * String loggerName = (result of fancy code is "device-[1..5]");
         */
        deviceLogger = LogManager.getLogger(loggerName);
        deviceLogger.info("Start logging stuff in device log.");
        new DeviceController(param1, param2);
    }
    ...
}
How can I prevent all of the loggers from rolling over, but instead leave the currently running processes/logs alone as the next process and log is started?
Note: I tried to provide a "Goldilocks" amount of detail to explain the problem. Sorry if I provided too much or not enough information.
Could you show a little bit more of your code? I think your issue comes from the fact that you have a static logger. From your snippet above, I believe each new DeviceController overwrites deviceLogger with a new Logger for the next identifier you fetch. I would guess that in the end all your logs are being appended to your device-5 log file, aren't they?
Side note: I think it's good practice to declare your logger against the SLF4J interface and then assign the Log4j implementation to it.
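If each process really does derive its logger name from its device identifier, it can help to isolate that mapping in one small, testable helper so the name is computed exactly once per process. A minimal sketch (the class DeviceLoggerName and the 1..5 range are illustrative assumptions based on the config above, not code from the original post):

```java
public class DeviceLoggerName {
    // Map a device identifier (1..5) to the logger name expected by the
    // log4j2.xml above ("device-1" .. "device-5").
    static String loggerNameFor(int deviceId) {
        if (deviceId < 1 || deviceId > 5) {
            throw new IllegalArgumentException("unknown device id: " + deviceId);
        }
        return "device-" + deviceId;
    }

    public static void main(String[] args) {
        // Each process would call LogManager.getLogger(loggerNameFor(id)) once,
        // in main, and never reassign the logger afterwards.
        System.out.println(loggerNameFor(3));
    }
}
```

Keeping the assignment in main (and only there) avoids the suspected overwrite in the DeviceController constructor.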
Before someone closes this question off as a duplicate, just hear me out... I have read through countless blog posts, tutorials, FAQs and SO questions for days now and I'm no closer to understanding why I'm getting this specific behaviour.
Configuration
My log4j2.xml config file contains the following:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="DEBUG">
  <Properties>
    <Property name="APP_NAME">MyCoolApp</Property>
    <Property name="BASE_PACKAGE">my.cool.package</Property>
    <Property name="LOG_DIR">${env:LOG_ROOT:-logs}</Property>
    <Property name="LOG_PATTERN">%d [%t] %-5level %c{1.}:%L:%M | %m%n</Property>
  </Properties>
  <Appenders>
    <RollingFile name="AppLogFile" fileName="${LOG_DIR}/${APP_NAME}.log" filePattern="${LOG_DIR}/archive/${APP_NAME}.%d{yyyy-MM-dd}.log.gz">
      <PatternLayout pattern="${LOG_PATTERN}"/>
      <TimeBasedTriggeringPolicy/>
    </RollingFile>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="${LOG_PATTERN}"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="${BASE_PACKAGE}" level="INFO">
      <AppenderRef ref="AppLogFile"/>
    </Logger>
    <Root level="TRACE">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
Then each class initialises a logger with private static final Logger LOGGER = LogManager.getLogger();.
Runtime
From my understanding of log levels, loggers and appenders, this should give me the following:
All INFO and higher level logging output to file
All TRACE and higher level logging output to console
Meanwhile, if I run the app with the AppLogFile appender enabled, I get less output to the console; counting from only after the Log4j initialisation, I get 255 lines compared to 367 when it is disabled.
Looking through the console and file output, when the AppLogFile appender is enabled, I don't get any TRACE or DEBUG output, only INFO and higher. When I comment out just that appender (without changing anything else), then I get everything to console, including TRACE and DEBUG.
I've tried playing around with reordering the "console" and "file" related elements, I've tried explicitly enabling and disabling the logger's additivity property, I've tried using filters in the appenders and loggers, and even multiple appender references inside one logger with explicit level properties.
All I want is to get everything to go to the console and everything INFO level and higher to go to a file. What am I missing here..?
The level attribute and the <AppenderRef> element are entirely independent:
Specifying level changes the logging level of the given logger and all sub-loggers.
Specifying <AppenderRef> adds¹ another appender to the given logger and all sub-loggers.
The fact that you do both at the same time doesn't affect those independent effects.
If you want to limit log entries for the appender, specify the level attribute on the <AppenderRef> instead.
¹) If you want the appender to replace the inherited ones, you need to specify additivity="false"
(OP edit) Just for the sake of clarity: between what I learned from this answer and the Log4j2 FAQ, I streamlined things a bit and ended up with the following <Loggers> configuration:
<Loggers>
  <Root level="TRACE">
    <AppenderRef ref="AppLogFile" level="INFO"/>
    <AppenderRef ref="Console"/>
  </Root>
</Loggers>
I'm trying to use Log4j2 in an OSGi environment. I've got it to work so far, but while inspecting the logs from the console and from the file, I noticed that some of them were missing, specifically logs that were called from a static method.
The Log class in the example below is just a convenience class that lets my colleagues call the logging functionality more easily through a create method (for just a String, as in the example, it seems like overkill of course). It does nothing more than create an instance of the Log class that internally holds a Logger and calls the respective method on the Log4j2 logger.
My question is: Do I just have a simple error in my project or is Log4j2 not able to log to files from static methods?
Here's a code example to make it a bit more clear:
Log log = Log.testLog();
log.info("non static log" );
That's the code I call from a non-static method.
And here's the testLog()-method:
public static Log testLog() {
    Log.create( Log.class ).info( "static log" );
    return Log.create( Log.class );
}
Results:
Both #info() calls write to the Console Appender, but only the "non static log" message is written to the file.
Here's my log4j2.xml:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="Console">
      <PatternLayout pattern="!ENTRY %logger{1.} %level %d{DEFAULT} [%t]%n!MESSAGE %msg%n%n"/>
    </Console>
    <RollingFile name="RollingFile" fileName="${sys:osgi.logfile}.log4j.log"
                 filePattern="${sys:osgi.logfile}.log4j_bak_%i.log">
      <PatternLayout>
        <pattern>!ENTRY %logger{1.} %level %d{DEFAULT} [%t]\n!MESSAGE %msg%n%n</pattern>
      </PatternLayout>
      <Policies>
        <SizeBasedTriggeringPolicy size="1 MB"/>
      </Policies>
      <DefaultRolloverStrategy max="10" fileIndex="min"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="TRACE" additivity="false">
      <AppenderRef ref="RollingFile"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
I finally found the source of my particular problem, which is OSGi (in this case the Equinox Framework). My application uses the osgi.logfile system property to point to the location where the logs should be saved.
Unfortunately, Equinox not only creates that property but also changes it at startup to a different location. For Log4j2 I used ${sys:osgi.logfile} to read this system property, but because a few particular plugins started so early, Log4j2 still had the wrong (i.e. old) location configured for these plugins (more specifically: for their LoggerContext).
What helped me in this case was a simple LoggerContext.reconfigure() on the LoggerContext that still had the old location.
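For reference, obtaining the current context and forcing it to re-read the configuration can be sketched like this (a fragment, not a complete class; it assumes log4j-core is on the classpath, and LoggerContext here is org.apache.logging.log4j.core.LoggerContext, not the API interface):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;

// After Equinox has rewritten the osgi.logfile system property,
// force the already-initialized context to re-resolve ${sys:osgi.logfile}:
LoggerContext ctx = (LoggerContext) LogManager.getContext(false);
ctx.reconfigure();
```

The cast is only valid when log4j-core is the active Log4j implementation, which is the case in the setup described above.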