How to design log structure for a Java application

I am new to Java. I know Java has Log4j, Logback, etc. for logging purposes. My question is more about how many log files an application should have. Should it be per thread, per group of threads, per process, per exception type, etc.? In our application there is a possibility of having a large number of threads, and I am thinking about the cons of having a log file per thread. Are there best practices for logging in applications with a huge number of threads?
Thanks in advance!

1 log for messages - Call it SystemOut.log
1 log for stack traces - Call it SystemErr.log
1 log for traces - Call it Trace.log
1 log for native stdout - Call it nativeStdOut.log
1 log for native stderr - Call it nativeStdErr.log
Have a config panel that sets:
maxSize
maxCount
When a log hits maxSize, start rolling the files, keeping up to maxCount of them, and append a timestamp to each rolled filename.
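Most frameworks ship this rolling behaviour, so you rarely need to build it. As a minimal sketch, assuming Logback: a RollingFileAppender with a SizeAndTimeBasedRollingPolicy gives you the size cap, the rolled-file count, and the timestamp in the rolled filename (the file names and limits below are illustrative):
<appender name="SYSTEM_OUT" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>SystemOut.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <!-- rolled files get a timestamp, plus an index within the same day -->
        <fileNamePattern>SystemOut.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
        <!-- maxSize: roll the active file once it reaches this size -->
        <maxFileSize>10MB</maxFileSize>
        <!-- maxCount, expressed here as days of history to keep -->
        <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%d %-5level [%thread] %logger{36} - %msg%n</pattern>
    </encoder>
</appender>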

I think a good solution would be to name your threads and include the thread name in every log line. That way you can both analyse the logs of each thread separately and analyse all logs together.
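A small sketch of that idea, assuming log4j 1.x as in the question (the class, thread, and message names are illustrative): name the thread when you create it, and the layout can print the name via the %t conversion character.
import org.apache.log4j.Logger;

public class NamedThreadExample {
    private static final Logger LOG = Logger.getLogger(NamedThreadExample.class);

    public static void main(String[] args) {
        // Give the worker a meaningful name; %t in the layout will print it.
        Runnable task = () -> LOG.info("processing batch");
        new Thread(task, "order-importer-1").start();
    }
}
With a ConversionPattern such as %d [%t] %-5p %c - %m%n, every line then carries the thread name, so you can grep a single thread out of the shared file.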

Typically there is one log file per application (process) -- rarely one per thread, and never one per exception. Sometimes this log file is split by log level: debug messages in one bucket, information in another, warnings/errors in a third. This makes it easy to watch for errors by only looking at the warning-and-more-critical file.
log4j has a configuration file in which you can route certain messages to certain files using different criteria. Here's a sample of the log4j properties file:
# default is WARN and above going to the appenders "logfile" and "stdout"
log4j.rootLogger=WARN, logfile, stdout
# write to ui.log for a day and then roll it to a .yyyy-MM-dd suffix
log4j.appender.logfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.logfile.File=ui.log
log4j.appender.logfile.Append=true
log4j.appender.logfile.DatePattern='.'yyyy-MM-dd
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d [%p] %c{1} %m%n
# the "stdout" appender referenced by the root logger must be defined as well
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d [%p] %c{1} %m%n
# but we log INFO and above from classes in the package com.mprew.be
log4j.logger.com.mprew.be=INFO
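For reference, a class under that package picks up the INFO threshold through the normal logger lookup. A hypothetical class mirroring the sample output below:
package com.mprew.be;

import org.apache.log4j.Logger;

public class CmsClientTask {
    private static final Logger LOG = Logger.getLogger(CmsClientTask.class);

    public void checkContent() {
        // Logged: com.mprew.be.* is at INFO even though the root logger is at WARN.
        LOG.info("Content is up-to-date");
    }
}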
log4j, and custom loggers, decorate each log line with a class name, priority level, date/time, etc. For example:
# date time priority Class-name Log message
2012-03-26 00:55:39,545 [INFO] CmsClientTask Content is up-to-date
Typically exceptions are written out as multiple lines so you can get the entire stack-trace.
2012-03-26 01:55:35,777 [INFO] ExceptionInterceptor Reporting problem to customer
org.springframework.NoSuchRequestException: No request handling method
at com.ui.base.BaseController.invokeNamedHandler(BaseController.java:240)
at com.ui.base.BaseController.handleRequestInternal(BaseController.java:100)
at com.ui.base.CoreServices.handleRequest(CoreServicesController.java:147)
...
In our distributed system, we route all logs from all of the systems to 2 servers, which write debug, info, and warning log files. Along with the date/time, class name, priority, and message, the log messages also carry the hostname and a specific log token so we can easily identify classes of problems. The following is on one line:
2012-03-26 00:00:00.045 UTC INFO FE8 TrayController TRAY_CLIENT_LOOKUP
message=created unknown client with partner STW9
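One common way to stamp every line with fields like the hostname and a log token is a mapped diagnostic context. A minimal sketch, assuming SLF4J's MDC (the key names and values are illustrative; a layout can print them with %X{host} and %X{token}):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class TrayController {
    private static final Logger LOG = LoggerFactory.getLogger(TrayController.class);

    public void lookupClient(String partner) {
        // MDC values are attached to every log event on this thread.
        MDC.put("host", "FE8");
        MDC.put("token", "TRAY_CLIENT_LOOKUP");
        try {
            LOG.info("created unknown client with partner {}", partner);
        } finally {
            MDC.clear(); // don't leak context into unrelated lines on this thread
        }
    }
}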
Then we can easily grep for specific issues.
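For example (the file name is hypothetical):
grep TRAY_CLIENT_LOOKUP info.log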
Hope this helps.

Related

Apache Beam Dataflow job fails with "GetWork timed out, retrying"

I am able to run an Apache Beam job successfully using the DirectRunner, with the following arguments:
java -jar my-jar.jar --commonConfigFile=comJobConfig.yml
--configFile=relJobConfig.yml
--jobName=my-job
--stagingLocation=gs://my-bucket/staging/
--gcpTempLocation=gs://my-bucket/tmp/
--tempLocation=gs://my-bucket/tmp/
--runner=DirectRunner
--bucket=my-bucket
--project=my-project
--region=us-west1
--subnetwork=my-subnetwork
--serviceAccount=my-svc-account@my-project.iam.gserviceaccount.com
--usePublicIps=false
--workerMachineType=e2-standard-2
--maxNumWorkers=20 --numWorkers=2
--autoscalingAlgorithm=THROUGHPUT_BASED
However, while trying to run on Google Dataflow (simply changing --runner=DataflowRunner) I receive the following message (GetWork timed out, retrying) in the workers.
I have checked the logs generated by the Dataflow process and found
[2023-01-28 20:49:41,600] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:49:39.386Z: Autoscaling: Raised the number of workers to 2 so that the pipeline can catch up with its backlog and keep up with its input rate.
[2023-01-28 20:50:26,911] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:50:26.779Z: Workers have started successfully.
and I see no indication that the workers have failed. Moreover I do not see any relevant logs which indicate that the process is working (in my case, reading from the appropriate Pub/Sub topic for notifications). Let me know if there is any further documentation on this log, as I have not been able to find any.
Turns out I forgot to include the --enableStreamingEngine flag. This solved my problem.
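For anyone hitting the same symptom, the fix is just the one extra flag on the otherwise unchanged command (a sketch; all other arguments as in the DirectRunner invocation above):
java -jar my-jar.jar --runner=DataflowRunner --enableStreamingEngine ...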

Log4j2 Syslog appender is not writing the 1st message to syslog after the syslog service is restarted

We have configured our application to write some specific log messages to the system's syslog file using the Syslog appender of Log4j2. Normally there is no issue writing to syslog. But when the syslog service is restarted, the first log message is not written to the syslog; the subsequent messages are.
We enabled Log4j2's debug logging; no exception is seen while writing the 1st message to syslog after the restart. But for the subsequent request, the following messages were captured in the Log4j2 log:
2022-01-27 18:07:40,120 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Reconnecting localhost/127.0.0.1:514
2022-01-27 18:07:40,121 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Creating socket localhost/127.0.0.1:514
2022-01-27 18:07:40,122 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Closing SocketOutputStream java.net.SocketOutputStream@1a769d7
2022-01-27 18:07:40,122 ajp-nio-0.0.0.0-8009-exec-3 DEBUG Connection to localhost:514 reestablished: Socket[addr=localhost/127.0.0.1,port=514,localport=57852]
I took a thread dump and checked whether the Reconnector thread is running, but no such thread exists in the dump. I am clueless here; any help on finding the reason for the missing message would be appreciated.
Environment details:
CentOS 7.9 + RSyslog Service,
Application deployed in Tomcat and running on Java 11,
Log4j2 version is 2.17.1
This is due to the way plain-text TCP syslog works. Check out this post for further information.
This "bug" has existed since rsyslog version 8.1901.
The only way you can fix this - as far as I know - is to send the messages over the RELP protocol. See the omrelp module.
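A minimal rsyslog sketch of that RELP route (the port is illustrative, and RELP support typically comes from a separate rsyslog-relp package):
# sending side: forward messages over RELP instead of plain TCP
module(load="omrelp")
action(type="omrelp" target="127.0.0.1" port="2514")

# receiving side: accept RELP connections
module(load="imrelp")
input(type="imrelp" port="2514")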

2 stdout sources - interrupted writing problem

We have a couple of Spring Boot applications in k8s that write both the application log and the Tomcat access log to stdout.
When the log throughput is really high (caused either by the number of requests or by the amount of application logging), it sometimes happens that log lines get interrupted.
In our case this looks like this:
[04/Aug/2021:13:39:27 +0200] - "GET /some/api/path?listWithIds=22838de1,e38e2021-08-04 13:39:26.774 ERROR 8 --- [ SomeThread-1] a.b.c.foo.bar.FooBarClass : Oh no, some error occured
e7fb,cd089756,1b6248ee HTTP/1.1" 200 (1 ms)
desired state:
[04/Aug/2021:13:39:27 +0200] - "GET /some/api/path?listWithIds=22838de1,e38ee7fb,cd089756,1b6248ee HTTP/1.1" 200 (1 ms)
2021-08-04 13:39:26.774 ERROR 8 --- [ SomeThread-1] a.b.c.foo.bar.FooBarClass : Oh no, some error occured
Is there some way to prevent this?
Maybe a Tomcat, Java, or Spring Boot setting?
Or a setting at the container level to make sure that each line is buffered correctly?
System.out had better be thread-safe, but that doesn't mean it won't interleave text when multiple threads write to it. Writing both application logs and HTTP server logs to the same stream seems like a mistake to me, for this reason at least, but for others as well.
If you want to aggregate logs together, using a character stream is not the way to do it. Instead, you need to use a logging framework that understands separate log-events which it can write coherently to that aggregate destination.
You may need to write your own AccessLogValve subclass which uses your logging framework instead of writing directly to a stream.
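A minimal sketch of that idea, assuming Tomcat 8.5/9 (where the valve exposes log(CharArrayWriter)) and SLF4J; the package, class, and logger names are illustrative:
package com.example.logging;

import java.io.CharArrayWriter;

import org.apache.catalina.valves.AccessLogValve;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hands each fully formatted access-log entry to the logging framework as a
// single event, so it cannot be split apart by concurrent application logging.
public class Slf4jAccessLogValve extends AccessLogValve {

    private static final Logger ACCESS = LoggerFactory.getLogger("http.access");

    @Override
    public void log(CharArrayWriter message) {
        ACCESS.info(message.toString());
    }
}
Registered in server.xml in place of the stock AccessLogValve, both streams then flow through one framework, which serializes events per appender.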

Logback will only log a message to syslog once

I've tried several examples of using Logback to write to syslog, but the only one I've found that works is this JavaCodeGeeks example. It writes a message to syslog, but only once, no matter how many times I run the code. If I change the message it will write the new message to syslog, but again only once.
I'm on Ubuntu 19.10. I've uncommented the following four lines in my /etc/rsyslog.conf and restarted:
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
The only change I made to the JavaCodeGeeks code is to comment out the remote appender in logback.xml, so it only logs to the localhost syslog.
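For reference, a local-only logback.xml along those lines might look roughly like this (a sketch, not the exact JavaCodeGeeks file; the facility and pattern are illustrative):
<configuration>
    <appender name="SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
        <syslogHost>localhost</syslogHost>
        <facility>USER</facility>
        <suffixPattern>[%thread] %logger %msg</suffixPattern>
    </appender>
    <root level="INFO">
        <appender-ref ref="SYSLOG"/>
    </root>
</configuration>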
What causes this weird behavior?
To log all messages you have to set
$RepeatedMsgReduction off
in /etc/rsyslog.conf and restart rsyslog.
https://www.rsyslog.com/doc/v8-stable/configuration/action/rsconf1_repeatedmsgreduction.html
The default was on in Ubuntu 19.10.

Mixed log in the Java process

I found some mixed log in the Java process, something like:
2014-11-09 11:55:24,087 HTTP xxxxxxxxx.Pool.runJob(QueuedThreadPool.java:607) [jetty-util-9.1.0.v20131115.jar:9.1.0.v20131115]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) [jetty-util-9.1.0.v20131115.jar:9.1.0.v20131115]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
xxxxxxxxx stands for the log message recorded by our Java code, and what follows it is part of a RuntimeException stack trace from the server runtime.
Why are the logs mixed together?
This can happen if multiple threads try to write to the same log file at the same time.
Are both messages created by the same logging framework? Using the same appender?
If not, the output most likely lacks synchronization.
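As a toy illustration (hypothetical code, not from the question): when one logical line is produced as several separate writes to a shared stream, another thread can print in between them, which is exactly how a message and an unrelated stack trace end up interleaved.
public class InterleaveDemo {
    public static void main(String[] args) {
        Runnable chatty = () -> {
            for (int i = 0; i < 1_000; i++) {
                // Three separate writes per logical line: each call is atomic,
                // but another thread may print between the calls.
                System.out.print(Thread.currentThread().getName());
                System.out.print(" says: ");
                System.out.println("hello");
            }
        };
        new Thread(chatty, "thread-A").start();
        new Thread(chatty, "thread-B").start();
    }
}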
