I've been looking for an answer to this for a while. At the company I work for we have a highly concurrent system, but we recently found that the logging configuration of the web server (JBoss) includes the console appender, and the application loggers are also going to the console. We started to get deadlocks on the logging actions, most of them in the console appender (I know that Log4j has a really nasty synchronization bug, but I'm almost sure that we don't have any synchronized method in the related code). Another thing we found is that the IT guys regularly access the console with a PuTTY session, pause the output to check the logs, and then just close the PuTTY window.
Is it possible that the console appender, and the use of the console for logging and monitoring in a production environment, are causing deadlocks and race conditions in the system? My understanding is that the console should only be used during development, with an IDE, because on a highly concurrent system it is one more resource to contend for (and a slow one, because of unbuffered I/O).
Thanks.
From the Best practices for Performance tuning JBoss Enterprise Application Platform 5 guide, page 9:
Turn off console logging in production
Turn down logging verbosity
Use asynchronous logging.
Wrap debug log statements with if (debugEnabled())
I strongly recommend the first and last items in production, because of a well-known Log4j pitfall: the log message is built before Log4j decides whether to log it. That is, if MyClass#toString() is a heavy operation, Log4j will first build the message String (yes, it will perform the heavy operation) and only then check whether that String must be logged (pretty bad, indeed =).
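For that last item, here is a minimal sketch of the guard in log4j (class and method names are mine, purely for illustration):

import org.apache.log4j.Logger;

public class GuardedLogging {
    private static final Logger log = Logger.getLogger(GuardedLogging.class);

    // Stands in for a heavy MyClass#toString()-style operation.
    private static String expensiveDump() {
        return "a very expensive string";
    }

    public static void main(String[] args) {
        // Without the guard, expensiveDump() would run even when DEBUG is off,
        // because the argument is fully evaluated before log4j checks the level.
        if (log.isDebugEnabled()) {
            log.debug("State: " + expensiveDump());
        }
    }
}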
Also, tell the IT guys to use the less command when checking the log files, so as not to block the files, and not to check the console directly =). This command works on Linux; if your server is in a Unix environment, the command would be tail (based on @Toni's comment).
IMO, an official JBoss performance guide is the best evidence for dropping console logging in production (even if it still doesn't prove your deadlock problem).
Related
I am using slf4j+logback for logging in our application. Earlier we were using jcl+log4j and moved recently.
Due to the high amount of logging in our application, there is a chance of the disk filling up in the production environment. In that case we need logging to stop while the application keeps working fine. What I found on the web is that we would need to poll logback's StatusManager for such errors, but that would add a logback dependency to the application.
For log4j, I found that we can create an Appender which stops logging in such scenarios. That, again, would create an application dependency on log4j.
Is there a way to configure this with only slf4j or is there any other mechanism to handle this?
You do not have to do or configure anything. Logback is designed to handle this situation quite nicely. Once the target disk is full, logback's FileAppender will stop writing to it for a short amount of time. Once that delay elapses, it will attempt to recover. If the recovery attempt fails, the waiting period is increased gradually, up to a maximum of 1 hour. If the recovery attempt succeeds, FileAppender will start logging again.
The process is entirely automatic and extends seamlessly to RollingFileAppender. See also graceful recovery.
On a more personal note, graceful recovery is one of my favorite logback features.
You may try decorating the slf4j Logger interface, overriding the info, debug, trace and other methods to manually query the available space (via File.getUsableSpace()) before every call.
That way you will not need any additional application dependency.
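For example (a rough sketch only; the class name, threshold and delegation scheme are my own assumptions, not an slf4j facility):

import java.io.File;
import org.slf4j.Logger;

// Decorator that drops messages when free disk space runs low.
// Only info() is shown; debug(), trace(), etc. would follow the same pattern.
public class DiskAwareLogger {
    private static final long MIN_FREE_BYTES = 50L * 1024 * 1024; // arbitrary threshold
    private final Logger delegate;
    private final File logDir;

    public DiskAwareLogger(Logger delegate, File logDir) {
        this.delegate = delegate;
        this.logDir = logDir;
    }

    public void info(String msg) {
        if (logDir.getUsableSpace() > MIN_FREE_BYTES) {
            delegate.info(msg);
        }
    }
}

Bear in mind that File.getUsableSpace() is a system call, so doing it before every single log statement has a cost of its own.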
Two real options:
add a cron task on Linux (or a scheduled task on Windows) to clean up your mess (including gzipping some logs, if need be).
buy a larger hard disk and perform the maintenance manually.
And, in either case, reduce logging.
A full disk is like an OOM: you can't know what will fail first when you catch it. The way to deal with running out of memory (or disk) is to prevent it; there can always be some other task that needs extra disk space and fails before you notice.
Can anyone shed some light on the proper usage of the different LOGGER levels, viz. LOGGER.info(), LOGGER.trace(), LOGGER.error() and LOGGER.debug()?
Please note this is not about configuration; it's about when to use info() and when not to, etc.
I tend to use them like this:
TRACE: Mark where something has executed, like the start of a method. I'm usually not interested in logging any information other than "this line executed". Usually turned off in both development and production (to prevent logging large amounts of output), but turned on if I'm diagnosing a defect that is particularly difficult to locate.
DEBUG: Output detailed information about variable state to the logs. After development is complete I turn up the logging level to INFO so these are not output to the logs. If I'm debugging a production problem, I sometimes put the logging level back down to DEBUG to start seeing this output again and assist in diagnosing the problem.
INFO: Output small amounts of important information, such as when a critical method is invoked. Sometimes I leave this on in production, sometimes not.
WARN: Output information about unexpected application state or error that does not prevent the application from continuing to execute. Usually turned on in production.
ERROR: Output information about unexpected application state or error that prevents an operation from completing execution. Always turned on in production.
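To make that concrete, here is a small self-contained sketch (the messages and class name are invented; it assumes slf4j on the classpath):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    private static final Logger log = LoggerFactory.getLogger(LevelDemo.class);

    public static void main(String[] args) {
        log.trace("main() entered");                        // TRACE: "this line executed"
        int retries = 3;
        log.debug("retries = {}", retries);                 // DEBUG: variable state
        log.info("Starting batch job");                     // INFO: important milestone
        try {
            riskyOperation();
        } catch (IllegalStateException e) {
            log.warn("Recoverable problem, continuing", e); // WARN: app keeps running
        } catch (RuntimeException e) {
            log.error("Operation aborted", e);              // ERROR: operation failed
        }
    }

    private static void riskyOperation() {
        throw new IllegalStateException("simulated recoverable failure");
    }
}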
You said that you aren't looking for help on configuration, but this other slf4j question might be of interest to you anyway.
These are common level names across logging frameworks. Usually it's something like this:
debug is for the developer and usually disabled in production use
trace is even finer than debug, logging e.g. method calls and returns
The rest should be self-explanatory. Of course, it is not always clear-cut which event should be logged at which level.
You should have a look at the information in the documentation.
I am considering logging business events in a J2EE web application by using Java logging and FileHandler.
I am wondering whether that could cause a performance bottleneck, since many log records will be written to one file.
What are your experiences and opinions?
Is logging a busy web application to one file with Java logging and FileHandler likely to become a performance bottleneck?
It all depends on how many log statements you add. If you add logging after every line of code then performance will most certainly degrade.
Use logging for the important cases, set the correct logging level for your current purposes (testing or actual deployment) and use constructions like
if (logger.isDebugEnabled()) {
    logger.debug("Value is " + costlyOperation());
}
to avoid calling code that is costly to run.
You might also want to check this article
In order to avoid generalities like "it depends" or "a little" etc. you should measure the performance of your application with and without the logging overhead. Apache JMeter can help you generate the load for the test.
The information you can collect through logging is usually so essential to the integrity of the application that you cannot operate blindly without it. There is also a slight overhead if you use Google Analytics, but the benefits prevail.
In order to keep your log files within reasonable sizes, you can always use rotating log files.
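Since you are already on java.util.logging, FileHandler has rotation built in. A minimal sketch (the file pattern and size limits are illustrative):

import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class RotatingJul {
    public static void main(String[] args) throws Exception {
        // Rotate across 4 files ("app-0.log" .. "app-3.log") of at most 5 MB each.
        FileHandler handler = new FileHandler("app-%g.log", 5 * 1024 * 1024, 4, true);
        handler.setFormatter(new SimpleFormatter());
        Logger.getLogger("").addHandler(handler);
        Logger.getLogger(RotatingJul.class.getName()).info("Rotating file logging is set up");
    }
}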
I think that JavaRevisited blog has a pretty good post on a problem with performance: Top 10 Tips on Logging in Java
In a recent project, I log audit events to a database table and I was concerned about performance, so I added the ability to log in 'asynchronous' mode. In this mode the logger runs in a low-priority background thread and the act of logging from the main thread just puts the log events onto a queue which are lazily retrieved and written by the background logging thread.
This approach will only work, however, if there are natural 'breaks' in the processing; if your system is constantly busy then the queue will never be emptied. One way to solve this is to make the background thread more active depending on the number of the log messages in the queue (an enhancement I've yet to implement).
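The core of that design looks roughly like this (a sketch with invented names, not the actual project code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The caller enqueues events cheaply; a low-priority daemon thread drains
// the queue and performs the actual (slow) write.
public class AsyncAuditLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public AsyncAuditLogger() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    writeToDatabase(queue.take()); // blocks until an event arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "audit-logger");
        writer.setPriority(Thread.MIN_PRIORITY);
        writer.setDaemon(true);
        writer.start();
    }

    // Called from the main thread; cheap, just an enqueue.
    public void log(String event) {
        queue.offer(event);
    }

    private void writeToDatabase(String event) {
        System.out.println("AUDIT: " + event); // placeholder for the real INSERT
    }
}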
You should:
Define an appropriate metric of performance (e.g., responsiveness or throughput). Then measure this metric with all logging turned off and then turned on; the difference is the cost of logging.
Then you should experiment with different logging libraries and the modes they provide and document the observed differences.
In my personal experience, for all the three projects I worked on, I found that asynchronous logging helped improve the application throughput a lot. But the same may not hold for you, so make sure you make your decision after careful measurements.
The following does not directly relate to your question.
I noticed that you specifically mentioned business logging. In that case, you may also want to keep logging relevant and clean, in case you find your log files growing huge and difficult to understand. There is a generally accepted design pattern in this area: log per function. This means that business logging (e.g., a customer requested a refund) goes to one destination, interface logging goes to another (e.g., "user clicked the upvote button" != "user upvoted an answer"), and cross-system calls go to yet another (e.g., requesting clearance through the payment gateway). Some people also keep a master log file with all events, just to see a timeline of the process, while others design log miners/scrapers to construct timelines when required.
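One cheap way to get this with slf4j is to use named logger "channels" instead of class-based loggers, and let the logging configuration route each name to its own destination (the names below are purely illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FunctionalLogging {
    private static final Logger BUSINESS = LoggerFactory.getLogger("log.business");
    private static final Logger UI       = LoggerFactory.getLogger("log.interface");
    private static final Logger REMOTE   = LoggerFactory.getLogger("log.remote");

    public static void main(String[] args) {
        BUSINESS.info("Customer {} requested a refund", 42);
        UI.info("User clicked the upvote button");
        REMOTE.info("Requesting clearance through payment gateway");
    }
}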
Hope this helps,
I want to find or develop an application that can run as a daemon and notify the administrator by email or SMS when the Java applications running on a host throw any exceptions or errors. I know JVMTI can achieve part of my goal, but it will impact the performance of the monitored applications (I don't know by how much; it would be acceptable if it's slight). Besides, it seems to be a troublesome job to develop a JVMTI agent, and I'm not sure what would happen if several applications run at the same time using the same agent. Is there any better solution? Thanks in advance.
One way would be to use a logging system like log4j that publishes all errors occurring on system A to a logging server on system B, from which you can monitor the errors that occurred. This isn't a completely generic solution, however, since only exceptions propagated to log4j (or any other logging system) would be handled, but it may be a good start.
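With log4j 1.x this can be done with the stock SocketAppender; a rough sketch (the host and port are made up, and a socket-based receiver such as SimpleSocketServer must be listening on system B):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.net.SocketAppender;

public class RemoteErrorShipping {
    public static void main(String[] args) {
        // Ship ERROR-and-above events from this JVM (system A) to the log server (system B).
        SocketAppender remote = new SocketAppender("logserver.example.com", 4712);
        remote.setThreshold(Level.ERROR);
        Logger.getRootLogger().addAppender(remote);

        Logger.getLogger(RemoteErrorShipping.class).error("This event travels to system B");
    }
}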
The best solution is to have the Java application send its errors via email/SMS. The problem is that programs will generate exceptions and handle them correctly in normal operation; you only want particular exceptions.
Failing this you could write a log reader, which reads the logs of the application. This is tricky to get right, but it can be done.
An application can generate 1,000+ exceptions per day and still be behaving normally, because the application knows how to handle those exceptions, e.g. every time a socket connection is closed an exception can be thrown.
IMO, the best approach is to deploy an external monitoring system. This can:
monitor multiple applications
monitor infrastructure services
monitor network availability and machine accessibility,
monitor resources such as processor and file system usage.
Applications can be monitored in a variety of ways, including:
by processing log events,
by watching for application restarts,
by "pinging" the application's web apis to check service liveness, and
by using the application's JMX interfaces.
This information can be filtered and prioritized in an intelligent fashion, and critical events can be reported by whatever means is most appropriate.
You don't want individual applications sending emails, because they don't have sufficient information to do a decent job. Furthermore, putting the reporting logic into individual applications is likely to lead to inconsistent implementation, poor configurability, and so on.
There is a closely related alternative to JVMTI: JPDA. This infrastructure allows you to create a remote "debugger" (yes, that's what you're planning to do) using Java code, and connect it to the VM using either a local or a remote connection.
As with JVMTI, there will be an overhead to program execution. However, as the Trace.java example shows, it's quite simple both to implement and to connect to the target VM.
Finally, note that if you want to instrument code run by an application server (JBoss, Glassfish, Tomcat, you name it), there are various other means available.
I follow the pattern where every exception gets logged to a table.
Then an RSS feed selects from that table.
I subscribe to the RSS feed in MS Outlook at work and also on my Android phone with a program called NewsRob. NewsRob lets me set my phone to alert me when there is something new.
I blog about how to do this HERE. It is in .NET, but you get the idea.
As a related step I found a way to notify myself when something DIDN'T happen. That blog is HERE.
There are loads of applications out there that do what you are looking for in a way that does not impact performance. Have you had a look at Kibana/ElasticSearch, Splunk, or Logscape for enterprise solutions? (They all also have free versions.)
I'm going to echo what has already been said and highlight what java already provides and what you can do with an external monitoring system. Java already provides:
log4j - log ERROR, WARN and FATAL events, plus exceptions, to a file
JMX - Create custom application metrics; you also have access to java.lang/*, which gives you heap memory usage, garbage collection, thread counts, etc.
JVM gc logging - you can log all your garbage collection events to a file and watch for any long Full GC collections.
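The java.lang/* beans mentioned above are a few lines away via the standard JDK API (the same data is reachable remotely through a JMX connector):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JmxPeek {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d of %d bytes%n", heap.getUsed(), heap.getMax());
        System.out.println("Live threads: " + ManagementFactory.getThreadMXBean().getThreadCount());
    }
}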
An external monitoring system will allow you to set alerts triggered by different operational scenarios. You will also get visualisation of your system's performance through charts. I've used Logscape's java app in the past to monitor 30 java processes spread out over 3 hosts.
I wanted to get ideas from the SO community about this issue.
Here is the problem:
We have a user on the other side of the world launching our app through WebStart. The user, however, is complaining that her whole application freezes up and becomes unresponsive. Usually, the client is doing a lot of database queries to a distributed database.
Questions:
If we ask her to do a CTRL-Break on her application, where would the JVM write the stack trace to?
Would it be enough just to use JConsole?
Would implementing JMX beans on the client be overkill? Would it actually help in troubleshooting issues in production?
Right now the users are running on JRE 1.5.0-b08, but we do plan on migrating to JRE 6 in a couple of months.
What do you think?
José, you can get a lot of information from the JVM in a number of ways.
The best might be to enable debugging in the remote JVM. You can set the necessary JVM flags using the j2se element in the JNLP descriptor XML, as shown here. Since you can set -Xdebug you have a good start; I've never tried remote debugging on a Web Start app, so setting up the remote part may be a little bit of an issue.
You could also set some things up yourself by adding a separate thread that talks to you remotely and sends debugging messages, along the lines of the sketch below.
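A speculative sketch of that idea (the port and output format are made up):

import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Map;

// Daemon thread that accepts a connection and dumps all thread stacks to it.
public class RemotePeekThread extends Thread {
    public RemotePeekThread() {
        setDaemon(true);
        setName("remote-peek");
    }

    public void run() {
        try (ServerSocket server = new ServerSocket(9999)) {
            while (true) {
                try (Socket client = server.accept();
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                        out.println(e.getKey());
                        for (StackTraceElement frame : e.getValue()) {
                            out.println("    at " + frame);
                        }
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}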
You could use a native java or log4j remote logger.
If it's hanging the way you describe, though, the odds are very high that what's happening is a network hangup of some sort. Can you put some tracing/debugging onto your end of the conversation?
Instead of these debugging suggestions, why don't you install an uncaught exception handler for your threads? See java.lang.Thread.
static void setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh)
Here's the relevant javadoc:
http://java.sun.com/javase/6/docs/api/java/lang/Thread.html#setDefaultUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)
If you install that in your code (and hook into Swing's EDT as well), then just write some Java code to e-mail the report to yourself, save it on a server, show it to the user, etc.
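A minimal sketch, kept Java 5 compatible since the users are on JRE 1.5 (the reporting action is a placeholder for whatever you choose):

public class CrashReporter {
    public static void main(String[] args) {
        // Invoked for any thread that dies with an uncaught exception.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                // Replace with your reporting of choice: e-mail it to yourself,
                // POST it to a server, show it to the user...
                System.err.println("Uncaught exception in thread " + t.getName());
                e.printStackTrace();
            }
        });

        new Thread(new Runnable() {
            public void run() {
                throw new IllegalStateException("demo failure");
            }
        }, "worker").start();
    }
}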
You need to have the Java Console displayed (run javaws from the command line and enable it in the Preferences dialog), then hit "v" to dump the thread stacks.