I found some mixed-up log lines in a Java process, something like:
2014-11-09 11:55:24,087 HTTP xxxxxxxxx.Pool.runJob(QueuedThreadPool.java:607) [jetty-util-9.1.0.v20131115.jar:9.1.0.v20131115]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) [jetty-util-9.1.0.v20131115.jar:9.1.0.v20131115]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
xxxxxxxxx is the log message recorded by our Java code, and what follows is the stack trace of a RuntimeException thrown by the server runtime.
Why are the log lines mixed together?
This can happen if multiple threads try to write to the same log file at the same time.
Are both messages created by the same logging framework? Using the same appender?
If not, the output most likely lacks synchronization.
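As a simplified illustration (hypothetical class and messages, not your actual code), two threads printing multi-line output to the same stream without sharing one synchronized appender can end up interleaved like this:
// MixedLogDemo.java -- hypothetical example, not the original application code.
// Two threads print multi-line messages to the same stream. Each println call is
// atomic, but the lines of one message can land between the lines of the other,
// which is how an application log line ends up inside a stack trace.
public class MixedLogDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable stackTracePrinter = () -> {
            for (int i = 0; i < 1000; i++) {
                // simulates a framework printing a stack trace line by line
                System.out.println("at com.example.Worker.doJob(Worker.java:42)");
                System.out.println("at java.lang.Thread.run(Thread.java:744)");
            }
        };
        Runnable appLogger = () -> {
            for (int i = 0; i < 1000; i++) {
                // simulates the application's own log statement
                System.out.println("2014-11-09 11:55:24,087 HTTP xxxxxxxxx application message");
            }
        };
        Thread framework = new Thread(stackTracePrinter, "framework-logger");
        Thread app = new Thread(appLogger, "app-logger");
        framework.start();
        app.start();
        framework.join();
        app.join();
    }
}
If both messages went through the same appender, the appender's lock would keep each event, including its whole stack trace, contiguous in the file.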
I am able to run an Apache Beam job successfully using the DirectRunner, with the following arguments:
java -jar my-jar.jar --commonConfigFile=comJobConfig.yml
--configFile=relJobConfig.yml
--jobName=my-job
--stagingLocation=gs://my-bucket/staging/
--gcpTempLocation=gs://my-bucket/tmp/
--tempLocation=gs://my-bucket/tmp/
--runner=DirectRunner
--bucket=my-bucket
--project=my-project
--region=us-west1
--subnetwork=my-subnetwork
--serviceAccount=my-svc-account@my-project.iam.gserviceaccount.com
--usePublicIps=false
--workerMachineType=e2-standard-2
--maxNumWorkers=20 --numWorkers=2
--autoscalingAlgorithm=THROUGHPUT_BASED
However, while trying to run on Google Dataflow (simply changing --runner=DataflowRunner) I receive the following message (GetWork timed out, retrying) in the workers.
I have checked the logs generated by the Dataflow process and found
[2023-01-28 20:49:41,600] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:49:39.386Z: Autoscaling: Raised the number of workers to 2 so that the pipeline can catch up with its backlog and keep up with its input rate.
[2023-01-28 20:50:26,911] [main] INFO org.apache.beam.runners.dataflow.util.MonitoringUtil$LoggingHandler:91 2023-01-28T20:50:26.779Z: Workers have started successfully.
and I see no indication that the workers have failed. Moreover I do not see any relevant logs which indicate that the process is working (in my case, reading from the appropriate Pub/Sub topic for notifications). Let me know if there is any further documentation on this log, as I have not been able to find any.
Turns out I forgot to include the --enableStreamingEngine flag. This solved my problem.
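For reference, the invocation then keeps all of the original arguments and only differs in the runner and the extra flag (everything elided here is unchanged from the command above):
java -jar my-jar.jar --commonConfigFile=comJobConfig.yml \
    ... (same arguments as above) ...
    --runner=DataflowRunner \
    --enableStreamingEngine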
I've tried several examples of using logback to write to syslog, but the only one I've found that works is this JavaCodeGeeks example. It writes a message to syslog, but it only writes the message once, no matter how many times I run the code. If I change the message it will write it to syslog, but again only once.
I'm on Ubuntu 19.10. I've uncommented the following four lines in my /etc/rsyslog.conf and restarted rsyslog:
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
The only change I made to the javacodegeeks code is to comment out the remote appender in logback.xml. It only logs to the localhost syslog.
What causes this weird behavior?
To log all messages you have to set
$RepeatedMsgReduction off
in /etc/rsyslog.conf and restart rsyslog.
https://www.rsyslog.com/doc/v8-stable/configuration/action/rsconf1_repeatedmsgreduction.html
The default was on in Ubuntu 19.10.
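In other words, the change is one line plus a restart (assuming a systemd-based Ubuntu, as in 19.10):
# in /etc/rsyslog.conf: disable the "last message repeated N times" suppression
$RepeatedMsgReduction off

# then restart the daemon
sudo systemctl restart rsyslog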
I see this exception
org.hibernate.QueryException: could not resolve property
in the Dynatrace exception logs, thrown from a specific Hibernate query fired when an action is performed. I am trying to replicate this error in my local workspace (Eclipse Mars with WebSphere 8.5) in order to debug and fix it, but I don't get this error in my server logs. I have set hibernate.show_sql = true in hibernate.cfg.xml, but that only prints the HQL statements. Are there other properties that I would have to set in order to see this exception in my server logs?
Dynatrace will also capture Exceptions that don't make it to your log files, because Dynatrace captures Exceptions when the Exception objects are created, not when they are logged to disk. This is why you typically see more Exceptions in Dynatrace than in log files.
What you could do is use Dynatrace on your local workstation. There is a free-for-life version for local workstations - https://www.dynatrace.com/en/products/dynatrace-personal-license.html?utm_medium=blog&utm_source=dynatrace&utm_campaign=devops&utm_term=agrabner
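As a hypothetical illustration (none of this is your code): an exception that is caught and swallowed is still constructed, which is the moment Dynatrace records it, yet nothing ever reaches the server logs:
// SwallowedExceptionDemo.java -- illustrative only.
// The exception object is created (and therefore visible to Dynatrace),
// but it is caught without any logger call, so it never appears in a log file.
import java.util.HashMap;
import java.util.Map;

public class SwallowedExceptionDemo {
    static String resolveProperty(Map<String, String> properties, String name) {
        try {
            if (!properties.containsKey(name)) {
                throw new IllegalArgumentException("could not resolve property: " + name);
            }
            return properties.get(name);
        } catch (IllegalArgumentException e) {
            // swallowed: no logger call, no printStackTrace -> invisible in the server logs
            return null;
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("known", "value");
        System.out.println(resolveProperty(props, "unknown")); // prints null, logs nothing
    }
}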
Andi
I started Jetty on a NonStop server on port 18095 and it was running fine. A few days later I suddenly noticed it consuming more CPU, and when I checked the log I saw the following entry being written continuously:
2015-07-08 13:25:48.606:WARN:oejs.ServerConnector:qtp26807578-18-acceptor-0#182e42f-ServerConnector#1f02fde {HTTP/1.1}{0.0.0.0:18095}:
java.io.IOException: Bad file descriptor (errno:4009)
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
at org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:377)
at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:500)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:724)
Is there any way to fix this? Thanks.
The "errno: 4009" is from outside of Java itself.
Something in the OS (or FileSystem) is preventing that particular incoming socket from being accepted.
If you are on a Unix system, consider evaluating your various ulimit values and bumping up the appropriate ones to suit your needs (see the example below).
If you are on a Windows environment, don't run on Windows ME/2000 (as those have a long history of JVM/ServerSocket issues).
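For example, on a Unix-like system you could check the file descriptor limit of the shell that launches Jetty and raise it (65536 is just an illustrative value; the hard limit and /etc/security/limits.conf may also need adjusting):
# show the current soft limit on open file descriptors
ulimit -n
# raise it for processes started from this shell
ulimit -n 65536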
I am new to Java. I know Java has Log4j, Logback, etc. for logging purposes. My question is more about how many log files an application should have. Should it be per thread, per group of threads, per process, per exception, etc.? In our application there is a possibility of a large number of threads, and I am thinking about the cons of having a log file per thread. Are there best practices for logging in applications that have a huge number of threads?
Thanks in advance!
1 log for messages - Call it SystemOut.log
1 log for stack traces - Call it SystemErr.log
1 log for traces - Call it Trace.log
1 log for native stdout - Call it nativeStdOut.log
1 log for native stderr - Call it nativeStdErr.log
Have a config panel that sets:
maxSize
maxCount
When a log hits maxSize, start rolling the files up to maxCount and append a timestamp to each rolled filename (sketched below).
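A rough sketch of that rolling policy with log4j 1.x properties (file names and sizes are placeholders; note that the stock RollingFileAppender numbers the rolled files rather than timestamping them, so a timestamped filename would need a different appender or framework):
# roll SystemOut.log when it reaches maxSize, keeping up to maxCount old files
log4j.appender.sysout=org.apache.log4j.RollingFileAppender
log4j.appender.sysout.File=SystemOut.log
log4j.appender.sysout.MaxFileSize=10MB
log4j.appender.sysout.MaxBackupIndex=5
log4j.appender.sysout.layout=org.apache.log4j.PatternLayout
log4j.appender.sysout.layout.ConversionPattern=%d [%p] %c{1} %m%n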
I think a good solution would be to name your threads and write each log entry together with the name of the thread in which it occurred. That way you will be able to analyze logs separately for each thread or analyze all logs together.
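A small sketch of that idea (hypothetical names): give each worker thread an explicit name and include it in the pattern layout (%t in log4j, %thread in logback), so one shared log file can still be filtered per thread:
// NamedWorker.java -- hypothetical example using log4j 1.x.
// With a pattern such as  %d [%t] %-5p %c{1} %m%n  the thread name appears in
// every line, e.g. "2012-03-26 00:55:39,545 [worker-2] INFO  NamedWorker started".
import org.apache.log4j.Logger;

public class NamedWorker implements Runnable {
    private static final Logger LOG = Logger.getLogger(NamedWorker.class);

    @Override
    public void run() {
        LOG.info("started");
        // ... do the actual work, logging through the same shared appender ...
        LOG.info("finished");
    }

    public static void main(String[] args) {
        for (int i = 0; i < 4; i++) {
            new Thread(new NamedWorker(), "worker-" + i).start(); // explicit thread name
        }
    }
}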
Typically there is one log file per application (process) -- rarely one per thread and never one per exception. Sometimes this log file is split into various different log levels: debug messages in one bucket, information in another, warnings/errors in a third. This makes it easy to watch for errors by only looking at the warning-and-more-critical file.
log4j has a configuration file in which you can route certain messages to certain files using different criteria. Here's a sample of the log4j properties file:
# default is WARN and above going to the appender "logfile" and STDOUT
log4j.rootLogger=WARN, logfile, stdout
# write to ui.log for a day and then move to .yyyy-MM-dd suffix
log4j.appender.logfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.logfile.File=ui.log
log4j.appender.logfile.Append=true
log4j.appender.logfile.DatePattern='.'yyyy-MM-dd
# but we log information message from classes in the package com.mprew.be
log4j.logger.com.mprew.be=INFO
log4j, and custom loggers, decorate each log line with a class name, priority level, date/time, etc. For example:
# date time priority Class-name Log message
2012-03-26 00:55:39,545 [INFO] CmsClientTask Content is up-to-date
Typically exceptions are written out as multiple lines so you can get the entire stack-trace.
2012-03-26 01:55:35,777 [INFO] ExceptionInterceptor Reporting problem to customer
org.springframework.NoSuchRequestException: No request handling method
at com.ui.base.BaseController.invokeNamedHandler(BaseController.java:240)
at com.ui.base.BaseController.handleRequestInternal(BaseController.java:100)
at com.ui.base.CoreServices.handleRequest(CoreServicesController.java:147)
...
In our distributed system, we route all logs from across the system to 2 servers, which write debug, info, and warning logs. Along with the date/time, class name, priority, and message, the log messages also include the hostname and a specific log token so we can easily identify classes of problems. The following is on one line:
2012-03-26 00:00:00.045 UTC INFO FE8 TrayController TRAY_CLIENT_LOOKUP
message=created unknown client with partner STW9
Then we can easily grep for specific issues.
Hope this helps.