I've been trying to configure Spring Boot 2.1 with WebFlux to store access logs as JSON. I also need information like the protocol and status code as separate JSON fields (not part of a message). Searching the internet, I found logstash-logback-encoder, which seems to have everything I need. But at runtime I get the following error:
ERROR in ch.qos.logback.core.FileAppender[accessLog] - Appender [accessLog] failed to append. java.lang.ClassCastException: ch.qos.logback.classic.spi.LoggingEvent cannot be cast to ch.qos.logback.access.spi.IAccessEvent
java.lang.ClassCastException: ch.qos.logback.classic.spi.LoggingEvent cannot be cast to ch.qos.logback.access.spi.IAccessEvent
    at net.logstash.logback.composite.accessevent.AccessEventFormattedTimestampJsonProvider.getTimestampAsMillis(AccessEventFormattedTimestampJsonProvider.java:20)
    at net.logstash.logback.composite.FormattedTimestampJsonProvider.writeTo(FormattedTimestampJsonProvider.java:149)
    at net.logstash.logback.composite.JsonProviders.writeTo(JsonProviders.java:77)
    at net.logstash.logback.composite.CompositeJsonFormatter.writeEventToGenerator(CompositeJsonFormatter.java:189)
    at net.logstash.logback.composite.CompositeJsonFormatter.writeEventToOutputStream(CompositeJsonFormatter.java:166)
    at net.logstash.logback.encoder.CompositeJsonEncoder.encode(CompositeJsonEncoder.java:122)
    at net.logstash.logback.encoder.CompositeJsonEncoder.encode(CompositeJsonEncoder.java:34)
    at ch.qos.logback.core.OutputStreamAppender.subAppend(OutputStreamAppender.java:230)
    at ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:102)
    at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
    at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:51)
    at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
    at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
    at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
    at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:383)
    at ch.qos.logback.classic.Logger.info(Logger.java:591)
    at reactor.util.Loggers$Slf4JLogger.info(Loggers.java:255)
    at reactor.netty.http.server.AccessLog.log(AccessLog.java:104)
    at reactor.netty.http.server.AccessLogHandler.lambda$write$0(AccessLogHandler.java:77)
    at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:511)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:504)
    at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:483)
    at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:424)
    at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:103)
    at io.netty.util.internal.PromiseNotificationUtil.trySuccess(PromiseNotificationUtil.java:48)
    at io.netty.channel.ChannelOutboundBuffer.safeSuccess(ChannelOutboundBuffer.java:696)
    at io.netty.channel.ChannelOutboundBuffer.remove(ChannelOutboundBuffer.java:258)
    at io.netty.channel.nio.AbstractNioByteChannel.doWriteInternal(AbstractNioByteChannel.java:216)
    at io.netty.channel.nio.AbstractNioByteChannel.doWrite0(AbstractNioByteChannel.java:209)
    at io.netty.channel.socket.nio.NioSocketChannel.doWrite(NioSocketChannel.java:397)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0(AbstractChannel.java:934)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.flush0(AbstractNioChannel.java:360)
    at io.netty.channel.AbstractChannel$AbstractUnsafe.flush(AbstractChannel.java:901)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.flush(DefaultChannelPipeline.java:1396)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768)
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.flush(CombinedChannelDuplexHandler.java:533)
    at io.netty.channel.ChannelOutboundHandlerAdapter.flush(ChannelOutboundHandlerAdapter.java:115)
    at io.netty.channel.CombinedChannelDuplexHandler.flush(CombinedChannelDuplexHandler.java:358)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768)
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768)
    at io.netty.channel.AbstractChannelHandlerContext.flush(AbstractChannelHandlerContext.java:749)
    at io.netty.channel.ChannelDuplexHandler.flush(ChannelDuplexHandler.java:117)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush0(AbstractChannelHandlerContext.java:776)
    at io.netty.channel.AbstractChannelHandlerContext.invokeFlush(AbstractChannelHandlerContext.java:768)
My configuration is fairly simple:
<appender name="accessLog" class="ch.qos.logback.core.FileAppender">
<file>${LOGS}/access_log.log</file>
<encoder class="net.logstash.logback.encoder.LogstashAccessEncoder"/>
</appender>
<logger name="reactor.netty.http.server.AccessLog" level="DEBUG" additivity="false">
<appender-ref ref="accessLog"/>
</logger>
At this point I'm stuck. I've been searching the internet and reading the Logback code, but I still have no idea what to do to get an AccessEvent, which contains much more information, instead of a LoggingEvent.
The LogstashAccessEncoder from logstash-logback-encoder requires logback-access, which provides AccessEvents. logback-access is only available for Jetty and Tomcat; it is not available for reactor-netty.
Reactor-netty's reactor.netty.http.server.AccessLog just emits a standard logging event (not an AccessEvent). So if you want to use logstash-logback-encoder, you should use a LogstashEncoder instead of a LogstashAccessEncoder, like this:
<appender name="accessLog" class="ch.qos.logback.core.FileAppender">
<file>${LOGS}/access_log.log</file>
<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<logger name="reactor.netty.http.server.AccessLog" level="DEBUG" additivity="false">
<appender-ref ref="accessLog"/>
</logger>
The downside is that LogstashEncoder knows nothing about an HTTP request; it only knows about log events logged via a Logger. Therefore, you cannot configure which HTTP details are logged the way you can with LogstashAccessEncoder. Instead, your log event will just show the details provided by reactor.netty.http.server.AccessLog as part of the log message; they will not be available as separate fields in the JSON output. However, since logstash-logback-encoder is extremely customizable, you could write a custom JsonProvider that parses the log message from reactor.netty.http.server.AccessLog and splits it into separate JSON fields, as sketched below.
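For example, here is a minimal sketch of such a provider. The class name, JSON field names, and the regex are my own assumptions, not part of logstash-logback-encoder; the pattern must be adjusted to the exact access-log message format your reactor-netty version produces.

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import com.fasterxml.jackson.core.JsonGenerator;

import ch.qos.logback.classic.spi.ILoggingEvent;
import net.logstash.logback.composite.AbstractJsonProvider;
import net.logstash.logback.composite.JsonWritingUtils;

public class AccessLogMessageJsonProvider extends AbstractJsonProvider<ILoggingEvent> {

    // Assumed CLF-style message, e.g.:
    // 127.0.0.1 - - [04/Jan/2019:10:15:30 +0000] "GET /foo HTTP/1.1" 200 123
    private static final Pattern ACCESS_LOG = Pattern.compile(
            "^(\\S+) - (\\S+) \\[([^\\]]+)\\] \"(\\S+) (\\S+) (\\S+)\" (\\d{3}) (\\S+)");

    @Override
    public void writeTo(JsonGenerator generator, ILoggingEvent event) throws IOException {
        Matcher m = ACCESS_LOG.matcher(event.getFormattedMessage());
        if (m.find()) {
            // Write each parsed detail as its own JSON field.
            JsonWritingUtils.writeStringField(generator, "remoteAddress", m.group(1));
            JsonWritingUtils.writeStringField(generator, "method", m.group(4));
            JsonWritingUtils.writeStringField(generator, "uri", m.group(5));
            JsonWritingUtils.writeStringField(generator, "protocol", m.group(6));
            generator.writeNumberField("status", Integer.parseInt(m.group(7)));
        }
    }
}

The provider would then be registered in a net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder via a <provider class="..."/> element alongside the standard providers, in place of the plain LogstashEncoder shown above.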
For debugging/testing purposes, I would like Google's Logback LoggingAppender to write to STDOUT instead of connecting to Google's Logging API. According to https://github.com/googleapis/java-logging-logback, this should be possible by using redirectToStdout.
My logback.xml:
<appender name="CONSOLE" class="com.google.cloud.logging.logback.LoggingAppender">
<redirectToStdout>true</redirectToStdout>
</appender>
<root>
<level value="info" />
<appender-ref ref="CONSOLE" />
</root>
However, I get an error that no project was set. The error message:
14:48:36,515 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [com.google.cloud.logging.logback.LoggingAppender]
14:48:36,534 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [CONSOLE]
14:48:36,970 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter#28:16 - RuntimeException in Action for tag [appender] java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
java.lang.IllegalArgumentException: A project ID is required for this service but could not be determined from the builder or the environment. Please set a project ID using the builder.
    at com.google.common.base.Preconditions.checkArgument(Preconditions.java:142)
...
Although it should not be necessary according to the documentation cited above, I also tried setting logDestinationProjectId. Note that this doesn't make sense in my case, as I want to run this configuration on my local machine for debugging/testing purposes. It also produced a different error (even though, per the documentation, that setting should be ignored when redirecting to stdout).
Any hints on what I am missing? Is my use case even supported? If not, how do you test a configuration change for your LoggingAppender before deploying it to Google Cloud?
The linked GitHub issue was accepted and confirmed by a contributor. You can work around it by setting an arbitrary project ID via logDestinationProjectId:
<appender name="CONSOLE" class="com.google.cloud.logging.logback.LoggingAppender">
<redirectToStdout>true</redirectToStdout>
<logDestinationProjectId>TEST</logDestinationProjectId>
</appender>
<root>
<level value="info" />
<appender-ref ref="CONSOLE" />
</root>
In a Spring Boot application, we take advantage of Spring's pre-configured Logback setup by including its configuration files and then just configuring the loggers, something like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <logger name="com.bla.bla" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
    </logger>

    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
Now I would like to add a ch.qos.logback.core.filter.EvaluatorFilter to do some filtering, but I cannot figure out whether it is possible to add one to the appender defined in the included files, since the filter apparently must be attached to the appender configuration (according to the Logback documentation). Ideally I would like to keep the filter configuration in my application's configuration and not touch the pre-defined configuration from Spring, something like this:
<filter class="ch.qos.logback.core.filter.EvaluatorFilter">
    <evaluator> <!-- defaults to type ch.qos.logback.classic.boolex.JaninoEventEvaluator -->
        <!-- CDATA is needed here: a bare && is not valid XML -->
        <expression><![CDATA[
            return logger.equals("org.hibernate.engine.jdbc.spi.SqlExceptionHelper")
                && message.contains("duplicate key value violates unique constraint \"source_data_version\"");
        ]]></expression>
    </evaluator>
    <OnMismatch>NEUTRAL</OnMismatch>
    <OnMatch>DENY</OnMatch>
</filter>
Is this possible somehow, or do I really need to re-define the appender, so to speak, in my own configuration file?
I started looking into writing my own filter class, but that doesn't seem to solve the configuration issue either, since as far as I understand the custom filter still needs to be added to the appender; one programmatic way of doing that is sketched below.
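For what it's worth, here is one workaround I can sketch: write the filter as a plain Logback Filter and attach it to the included appender programmatically at startup. This assumes the Spring-provided console-appender.xml names its appender CONSOLE (it does in recent Spring Boot versions); the class and method names are illustrative.

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

public class DuplicateKeyLogFilter extends Filter<ILoggingEvent> {

    @Override
    public FilterReply decide(ILoggingEvent event) {
        // Same condition as the Janino expression above.
        if ("org.hibernate.engine.jdbc.spi.SqlExceptionHelper".equals(event.getLoggerName())
                && event.getFormattedMessage().contains(
                        "duplicate key value violates unique constraint \"source_data_version\"")) {
            return FilterReply.DENY;
        }
        return FilterReply.NEUTRAL;
    }

    // Call once after startup, e.g. from a @PostConstruct method.
    public static void attachToConsoleAppender() {
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
        ch.qos.logback.classic.Logger root = ctx.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME);
        Appender<ILoggingEvent> console = root.getAppender("CONSOLE");
        if (console != null) {
            console.addFilter(new DuplicateKeyLogFilter());
        }
    }
}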
I am trying to debug an issue with Jackson, and I would like to see log entries from com.fasterxml.jackson. Unfortunately, the solution in "How to see org.codehaus.jackson log messages" (using logging.properties) doesn't do anything for me, because Spring Boot 2 uses the updated com.fasterxml.jackson package and I'm using Logback for my log config. I am using Spring Boot 2.1.0.RELEASE, which uses Jackson 2.9.
I have the following in logback-spring.xml
<logger name="com.fasterxml.jackson" level="TRACE" additivity="false">
<appender-ref ref="CONSOLE" />
</logger>
(I have tried with additivity set to true as well)
and the following in application.properties
logging.level.com.fasterxml.jackson=TRACE
but nothing is showing up in my logs for Jackson.
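One diagnostic step (a sketch, not a fix): ask Logback at runtime which level it actually resolved for the Jackson namespace. If it prints TRACE, the configuration is applied and the silence just means the library isn't emitting anything at that level. The class name here is illustrative; call it from code that runs after startup so Spring's logging.level.* properties have already been applied.

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.LoggerContext;

public final class JacksonLogLevelCheck {

    private JacksonLogLevelCheck() {
    }

    // Prints the effective level Logback resolved for the Jackson namespace.
    public static void printEffectiveLevel() {
        // The cast is safe when Logback is the SLF4J implementation in use.
        LoggerContext ctx = (LoggerContext) LoggerFactory.getILoggerFactory();
        System.out.println("com.fasterxml.jackson -> "
                + ctx.getLogger("com.fasterxml.jackson").getEffectiveLevel());
    }
}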
I have been facing this problem for a long time: I cannot access the debug logs of the managed threads in my Spring Boot application when it runs on Tomcat. All the logs appear when it runs in Eclipse/STS.
In the Tomcat logs, I can only see the main thread's logs.
I am connecting to a database through JDBC, and this happens in a separate thread. I tried to follow the logging configuration documentation, but none of it helps me get the debug logs of these threads, so I cannot see what is actually causing the connection to fail.
Here is what I tried so far:
First, I tried the following logback.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!--
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <Target>System.out</Target>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n</pattern>
        </encoder>
    </appender>
    <logger name="com.biscoind" additivity="false" level="TRACE">
        <appender-ref ref="stdout" />
    </logger>
    <root level="debug">
        <appender-ref ref="stdout" />
    </root>
    -->
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <logger name="org.springframework.web" level="DEBUG" />
</configuration>
When that did not resolve the issue, I removed the file to see whether all the threads would be logged by default. They were not.
So I added the following configuration to application.properties:
logging.level.org.springframework.web:TRACE
logging.level.org.hibernate:ERROR
Since it seemed that only the above namespaces were being logged, I then added:
debug=true
logging.level.org.springframework.web:DEBUG
logging.level.org.hibernate:DEBUG
I tried it, and it did not work.
I also added my own namespaces and tried the following:
debug=true
logging.level.com.mydomain:DEBUG
logging.level.org.springframework.web:DEBUG
logging.level.org.hibernate:DEBUG
That did not work either. I am now confused about what I should do with the logging configuration to make the logs from these thread executions appear.
Irrespective of the threads, the queries that are made are logged, because of the property spring.jpa.show-sql=true.
It was not a problem with the threads at all. The application was working correctly in the development environment; the problem was in the deployment environment.
It turned out to be a Java version mismatch between the jar files and the JVM: the jars were built with Java 8, but they were running on a Java 7 JVM.
When the JVM was changed to Java 8, it worked fine. Next time I will be more careful about version mismatches.
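As an aside, this kind of mismatch is easy to detect up front: the class-file header carries a major version (52 = Java 8, 51 = Java 7). A small sketch that reads it from a single .class file (the class name is illustrative):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ClassVersionCheck {

    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            // Every class file starts with the magic number 0xCAFEBABE.
            if (in.readInt() != 0xCAFEBABE) {
                throw new IOException("Not a class file: " + args[0]);
            }
            int minor = in.readUnsignedShort(); // minor_version comes first
            int major = in.readUnsignedShort(); // 52 = Java 8, 51 = Java 7
            System.out.println(args[0] + ": major=" + major + " minor=" + minor);
        }
    }
}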
We have a WebLogic batch application which processes multiple requests from consumers at the same time. We use log4j for logging purposes. Right now we write the logs for all requests into a single log file, which makes it tedious to debug an issue for a given request.
So the plan is to have one log file per request. The consumer sends a request ID for which processing has to be performed. In reality, there could be multiple consumers sending request IDs to our application, so the question is how to segregate the log files based on the request.
We cannot start and stop the production server every time, so an overridden file appender with a date-time stamp or request ID is ruled out. This is what is explained in the article below:
http://veerasundar.com/blog/2009/08/how-to-create-a-new-log-file-for-each-time-the-application-runs/
I also tried playing around with these alternatives:
http://cognitivecache.blogspot.com/2008/08/log4j-writing-to-dynamic-log-file-for.html
http://www.mail-archive.com/log4j-user@logging.apache.org/msg05099.html
This approach gives the desired results, but it does not work properly when multiple requests are sent at the same time: due to concurrency issues, log entries end up in the wrong files.
I would appreciate some help from you folks. Thanks in advance.
Here's my question on the same topic:
dynamically creating & destroying logging appenders
I followed this up in a mailing-list thread where I discuss doing exactly this:
http://www.qos.ch/pipermail/logback-user/2009-August/001220.html
Ceki Gülcü (the creator of log4j) didn't think it was a good idea and suggested using Logback instead.
We went ahead and did it anyway, using a custom file appender. See the discussions above for more details.
Look at the SiftingAppender shipped with logback (log4j's successor); it is designed to handle the creation of appenders on runtime criteria.
If your application needs to create just one log file per session, simply create a discriminator based on the session ID. Writing a discriminator involves three or four lines of code and should be fairly easy; a sketch follows. Shout on the logback-user mailing list if you need help.
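For illustration, a discriminator along those lines might look like the following (the class name and MDC key are just examples):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.sift.AbstractDiscriminator;

public class SessionIdDiscriminator extends AbstractDiscriminator<ILoggingEvent> {

    // The value the SiftingAppender uses to pick (or create) the nested appender.
    @Override
    public String getDiscriminatingValue(ILoggingEvent event) {
        String sessionId = event.getMDCPropertyMap().get("sessionId");
        return sessionId != null ? sessionId : "unknown";
    }

    @Override
    public String getKey() {
        return "sessionId";
    }
}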
This problem is handled very well by Logback; I suggest opting for it if you have the freedom.
Assuming you can, what you will need to use is SiftingAppender. It allows you to separate log files according to some runtime value, which means you have a wide array of options for how to split log files.
To split your files by requestId, you could do something like this:
logback.xml
<configuration>
    <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <discriminator>
            <key>requestId</key>
            <defaultValue>unknown</defaultValue>
        </discriminator>
        <sift>
            <appender name="FILE-${requestId}" class="ch.qos.logback.core.FileAppender">
                <file>${requestId}.log</file>
                <append>false</append>
                <layout class="ch.qos.logback.classic.PatternLayout">
                    <pattern>%d [%thread] %level %mdc %logger{35} - %msg%n</pattern>
                </layout>
            </appender>
        </sift>
    </appender>

    <root level="DEBUG">
        <appender-ref ref="SIFT" />
    </root>
</configuration>
As you can see (inside the discriminator element), the files used for writing logs are discriminated on requestId: each request goes to the file with the matching requestId. Hence, if you had two requests with requestId=1 and one request with requestId=2, you would have two log files: 1.log (2 entries) and 2.log (1 entry).
At this point you might wonder how to set the key. This is done by putting key-value pairs in the MDC (note that the key matches the one defined in the logback.xml file):
RequestProcessor.java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestProcessor {
    // Note: getLogger takes the class literal, not "RequestProcessor.java".
    private static final Logger log = LoggerFactory.getLogger(RequestProcessor.class);

    public void process(Request request) {
        // The MDC key must match the <key> element in logback.xml.
        MDC.put("requestId", String.valueOf(request.getId()));
        log.debug("Request received: {}", request);
    }
}
And that's basically it for a simple use case. Now each time a request with a different (not yet encountered) id comes in, a new file will be created for it.
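One caveat, given the concurrency issues mentioned in the question: the MDC is thread-local, so each worker thread must set requestId itself and should clear it when done, otherwise a pooled thread can carry a stale id into the next request. A sketch (the executor wrapper is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.slf4j.MDC;

public class RequestWorker {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final RequestProcessor processor = new RequestProcessor();

    public void submit(Request request) {
        pool.submit(() -> {
            MDC.put("requestId", String.valueOf(request.getId()));
            try {
                processor.process(request);
            } finally {
                // Clear the id so the next task on this pooled thread starts clean.
                MDC.remove("requestId");
            }
        });
    }
}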
Using a filePattern property (Log4j 2), which creates a new timestamped log file each time the application starts:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Properties>
        <Property name="filePattern">${date:yyyy-MM-dd-HH_mm_ss}</Property>
    </Properties>
    <Appenders>
        <!-- A Console appender is needed because the root logger references it below. -->
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
        </Console>
        <File name="File" fileName="export/logs/app_${filePattern}.log" append="false">
            <PatternLayout pattern="%d{yyyy-MMM-dd HH:mm:ss a} [%t] %-5level %logger{36} - %msg%n" />
        </File>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="Console" />
            <AppenderRef ref="File" />
        </Root>
    </Loggers>
</Configuration>