I am analyzing the differences between approaches for taking thread dumps. Below are the two I am researching:
Defining a JMX bean which triggers jstack through Runtime.exec() when a declared bean operation is invoked (a sketch of this follows below).
A daemon thread executing "ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)" repeatedly at a predefined interval.
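As a minimal, illustrative sketch of approach 1 (the class and operation names are hypothetical, and parsing the RuntimeMXBean name as "pid@host" is a common pre-Java-9 idiom rather than a guaranteed format):

// ThreadDumperMBean.java
public interface ThreadDumperMBean {
    String dumpThreads() throws Exception;
}

// ThreadDumper.java
import java.lang.management.ManagementFactory;
import java.util.Scanner;
import javax.management.ObjectName;

public class ThreadDumper implements ThreadDumperMBean {
    public String dumpThreads() throws Exception {
        // Common pre-Java-9 idiom: the runtime name is usually "pid@hostname"
        String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
        Process p = new ProcessBuilder("jstack", pid).redirectErrorStream(true).start();
        // Read jstack's entire output
        Scanner s = new Scanner(p.getInputStream()).useDelimiter("\\A");
        String dump = s.hasNext() ? s.next() : "";
        p.waitFor();
        return dump;
    }

    public static void registerSelf() throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
                new ThreadDumper(), new ObjectName("example:type=ThreadDumper"));
    }
}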
Comparing the thread dump outputs of the two, I see the following disadvantages with approach 2:
Thread dumps logged with approach 2 cannot be parsed by open-source thread dump analyzers like TDA.
The output does not include the native thread id, which could be useful in analyzing high-CPU issues (right?)
Any more?
I would appreciate suggestions/input on the following:
Are there any disadvantages of executing jstack through Runtime.exec() in production code? Are there any compatibility issues across operating systems such as Windows and Linux?
Any other approach to take thread dumps?
Thank you.
Edit -
A combined approach of 1 and 2 seems to be the way to go: a dedicated background thread that prints the thread dumps to the log file in a format understood by the thread dump analyzers (a sketch follows below).
If any extra information is needed (such as the native thread id) which is logged only in the jstack output, we capture it manually as required.
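A minimal sketch of that combined approach; note that ThreadInfo.toString() truncates stacks to a handful of frames on many JDKs, so a TDA-friendly format may require custom formatting of the ThreadInfo fields. The one-minute interval and System.out sink are placeholders:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class PeriodicThreadDumper implements Runnable {
    public void run() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.print(info); // replace with your log-file writer
            }
            try {
                Thread.sleep(60_000); // predefined interval: one minute
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    public static void start() {
        Thread t = new Thread(new PeriodicThreadDumper(), "thread-dumper");
        t.setDaemon(true); // daemon: will not keep the JVM alive
        t.start();
    }
}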
You can use
jstack {pid} > stack-trace.log
running as the user on the box where the process is running.
If you run this multiple times you can use a diff to see which threads are active more easily.
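For example (the ten-second gap between samples is arbitrary):

jstack {pid} > dump1.log
sleep 10
jstack {pid} > dump2.log
diff dump1.log dump2.log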
For analysing the stack traces I use the following, sampled periodically in a dedicated thread.
Map<Thread, StackTraceElement[]> allStackTraces = Thread.getAllStackTraces();
Using this information you can obtain each thread's id and run state, and compare the stack traces.
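A minimal sketch of that sampling loop (the one-second interval and console output are assumptions):

import java.util.Map;

public class StackSampler {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
                Thread t = e.getKey();
                System.out.println(t.getName() + " id=" + t.getId() + " state=" + t.getState());
                for (StackTraceElement frame : e.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
            Thread.sleep(1000); // sampling interval
        }
    }
}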
With Java 8 in the picture, jcmd is the preferred approach.
jcmd <PID> Thread.print
The following is a snippet from the Oracle documentation:
The release of JDK 8 introduced Java Mission Control, Java Flight Recorder, and jcmd utility for diagnosing problems with JVM and Java applications. It is suggested to use the latest utility, jcmd instead of the previous jstack utility for enhanced diagnostics and reduced performance overhead.
However, shipping this with the application may have licensing implications; I am not sure.
If it's a *nix system I'd try kill -3 <PID>, but then you need to know the process id, and maybe you don't have access to the console?
I'd suggest you do all the heap analysis in a staging environment, if one exists, and then apply any required application server tuning to production. If you need the dumps to analyze your application's memory utilization, then perhaps you should consider profiling it for a better analysis.
Heap dumps are usually generated as a result of OutOfMemoryErrors caused by memory leaks and bad memory management.
Check your application server's documentation; most modern servers have means for producing dumps at runtime aside from the normal cause I mentioned earlier, though the resulting dump might be vendor-specific.
Related
I work on a very large web project written in Java.
When I click some button or perform other actions, it is hard for me to understand what methods are called in the application code (because I am new to the project and the application is really, really big). So I would like to know: is there a tool which will allow me to get stack traces of some threads at a given interval (say, every 100 milliseconds)?
I know about VisualVM, but it does not allow this; I can get a thread dump only at one point in time (there is no way to get stack traces continuously).
Can someone suggest a tool or technique which will allow me to monitor method calls at runtime?
Thanks
For such cases I use Java Mission Control. The full feature set works on the Oracle JDK; on OpenJDK not everything works properly. More info
From the website:
Starting with the release of Oracle JDK 7 Update 40 (7u40), Java Mission Control is bundled with the HotSpot JVM.
You need to add the following parameters to your JVM to be able to use it. Note: I normally also add the debug options.
JAVA_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"
JAVA_JMC="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3614 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
Then you'll need to attach remotely to port 3614, and you'll be able to see inside the JVM. There you'll be able to profile CPU, check allocations, and use deadlock detection, plus select a thread and see what it is currently executing, along with some other graphs and valuable information.
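Once the process is running with those flags, you can also capture a Flight Recorder dump from the command line instead of attaching the GUI; for example (the duration and file name are arbitrary):

jcmd <PID> JFR.start duration=60s filename=recording.jfr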
There are multiple ways in which you could check a stack trace:
Using jconsole's thread tab, where you can see which threads are alive and what state they are in.
Using jvisualvm (which comes free with the JDK installation), or any profiler such as JProfiler/YourKit etc., to view the stack trace when you run in development mode.
You could get a stack trace, say, every minute by running kill -3 <pid> on Unix or pressing Ctrl+Break on Windows.
You could use the jstack command to get a trace.
You could debug the code in an IDE (Eclipse/NetBeans/IntelliJ etc.) and step through each method call.
Many tools for Java VM monitoring and application monitoring exist. For instance, see the eG Java Application Monitor:
...the eG Java Monitor gives you a comprehensive view of the activities within a JVM:
It lets you see which threads are running in the JVM and what state they are in (such as runnable, blocked, waiting, timed waiting, deadlocked, or high CPU).
You also have access to a stack trace for each thread showing class, method, and line of code (to troubleshoot problems down to the line of code level).
And you can monitor the performance of garbage collection processes, CPU and memory usage, and JVM restarts.
I have a Java application (web-based) that at times shows very high CPU utilization (almost 90%) for several hours. The Linux top command shows this. On application restart, the problem goes away.
So to investigate:
I take a thread dump to find out what the threads are doing. Several threads are found in the 'RUNNABLE' state, some in a few other states. On taking repeated thread dumps, I do see some threads that are always present in the 'RUNNABLE' state. So they appear to be the culprit.
But I am unable to tell for sure which thread is hogging the CPU, or has gone into an infinite loop (thereby causing high CPU utilization).
Logs don't necessarily help, as the offending code may not be logging anything.
How do I investigate which part of the application, or which thread, is causing the high CPU utilization? Any other ideas?
If a profiler is not applicable in your setup, you may try to identify the thread following steps in this post.
Basically, there are three steps:
Run top -H and get the PID of the thread with the highest CPU usage.
Convert the PID to hex.
Look for the thread with the matching hex PID (the nid field) in your thread dump (a worked example follows).
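With hypothetical numbers:

top -H -p <java-pid>     # suppose it shows a hot thread with PID 30562
printf "%x\n" 30562      # prints 7762
jstack <java-pid> | grep 'nid=0x7762'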
You may be victim of a garbage collection problem.
When your application requires memory and is getting low on what it's configured to use, the garbage collector will run often, which consumes a lot of CPU cycles.
If it can't collect anything, your memory will stay low, so it will be run again and again.
When you redeploy your application, the memory is cleared and garbage collection won't happen more than required, so the CPU utilization stays low until memory fills up again.
You should check that there is no possible memory leak in your application and that it's configured with enough memory (check the -Xmx parameter; see What does Java option -Xmx stand for?). You can also confirm GC activity directly, as shown below.
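For example, these JDK 8 era flags enable GC logging (the 2 GB sizes are placeholders):

java -Xms2g -Xmx2g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log ...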
Also, what are you using as a web framework? JSF relies a lot on sessions and consumes a lot of memory; consider being as stateless as possible!
In the thread dump you can find the line number as shown below, for the main thread which is currently running...
"main" #1 prio=5 os_prio=0 tid=0x0000000002120800 nid=0x13f4 runnable [0x0000000001d9f000]
java.lang.Thread.State: **RUNNABLE**
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:313)
at com.rana.samples.**HighCPUUtilization.main(HighCPUUtilization.java:17)**
During these peak CPU times, what is the user load like? You say this is a web-based application, so the culprit that comes to mind is memory utilization. If you store a lot of stuff in the session, for instance, and the session count gets high enough, the app server will start thrashing. This is also a case where the GC might make matters worse depending on the scheme you are using. More information about the app and the server configuration would be helpful in pointing towards more debugging ideas.
Flame graphs can be helpful in identifying the execution paths that are consuming the most CPU time.
In short, the following are the steps to generate flame graphs:
yum -y install perf
wget https://github.com/jvm-profiling-tools/async-profiler/releases/download/v1.8.3/async-profiler-1.8.3-linux-x64.tar.gz
tar -xvf async-profiler-1.8.3-linux-x64.tar.gz
chmod -R 777 async-profiler-1.8.3-linux-x64
cd async-profiler-1.8.3-linux-x64
echo 1 > /proc/sys/kernel/perf_event_paranoid
echo 0 > /proc/sys/kernel/kptr_restrict
JAVA_PID=`pgrep java`
./profiler.sh -d 30 -f flame-graph.svg $JAVA_PID
flame-graph.svg can be opened in a browser; in short, the width of an element in the stack indicates the relative number of sampled thread dumps that contain that execution flow.
There are a few other approaches to generating them:
By introducing -XX:+PreserveFramePointer as a JVM option, as described here
Using async-profiler with -XX:+UnlockDiagnosticVMOptions -XX:+DebugNonSafepoints, as described here
But using async-profiler without providing any options, though not very accurate, can be leveraged with no changes to the running Java process and with low CPU overhead to the process.
Their wiki provides details on how to leverage it, and more about flame graphs can be found here.
Your first approach should be to find all references to Thread.sleep and check that:
Sleeping is the right thing to do; you should use some sort of wait mechanism if possible, and perhaps careful use of a BlockingQueue would help (see the sketch below).
If sleeping is the right thing to do, you are sleeping for the right amount of time; this is often a very difficult question to answer.
The most common mistake in multi-threaded design is to believe that all you need to do when waiting for something to happen is to check for it and sleep for a while in a tight loop. This is rarely an effective solution: you should always try to wait for the occurrence.
The second most common issue is to loop without sleeping. This is even worse and a little harder to track down.
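As an illustration of the first point, a minimal sketch where a consumer blocks on a BlockingQueue instead of polling in a sleep loop (the class name and String task type are hypothetical):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueConsumer implements Runnable {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public void submit(String task) {
        queue.add(task); // called by producers
    }

    public void run() {
        try {
            while (true) {
                // Blocks until work arrives: no sleep(), no busy loop
                String task = queue.take();
                System.out.println("processing " + task);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag and exit
        }
    }
}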
You did not tag the question with "linux", but you mentioned "Linux top", so this might be helpful:
Use the small Linux tool threadcpu to identify the threads using the most CPU. It calls jstack to get the thread names, and with "sort -n" in a pipe you get the list of threads ordered by CPU usage.
More details can be found here:
http://www.tuxad.com/blog/archives/2018/10/01/threadcpu_-_show_cpu_usage_of_threads/index.html
And if you still need more details, create a thread dump or run strace on the thread.
I'm using Resin; sometimes the load is very high, so I particularly want to see, inside the JVM process, the state of all threads and how much CPU, memory, or disk I/O every thread is using.
Thanks in advance
If you are using the HotSpot JVM provided by Oracle/Sun, launch jvisualvm and attach it to Resin.
Java VisualVM allows you to see what the application is doing at thread level.
See this link:
Java VisualVM Monitoring Application Threads
I have an application running on WebSphere Application Server 6.0, and it crashes nearly every day because of an Out-Of-Memory error. From the verbose GC output it is certain there are memory leaks (many of them).
Unfortunately the application is provided by an external vendor, and getting things fixed is a slow and painful process. As part of the process I need to gather the logs and heap dumps each time the OOM occurs.
Now I'm looking for some way to automate it. The fundamental problem is how to detect the OOM condition. One way would be to create a shell script which periodically searches for new heap dumps. This approach seems kind of dirty to me. Another approach might be to leverage JMX somehow, but I have little or no experience in this area and don't have much idea how to do it.
Or is in WAS some kind of trigger/hooks for this? Thank you very much for every advice!
You can pass the following arguments to the JVM on startup and a heap dump will be automatically generated on an OutOfMemoryError. The second argument lets you specify the path for the heap dump file. By using this at least you could check for the existence of a specific file to see if a heap dump has occurred.
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<value>
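For example, a full launch command might look like this (the dump path is only an example):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar app.jar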
I see two options if you want heap dumping automated but @Mark's solution with a heap dump on OOM isn't satisfactory.
You can use the MemoryMXBean to detect high memory pressure, and then programmatically create a heap dump if the usage (or usage delta) seems high (see the sketch after this list).
You can periodically get memory usage info and generate heap dumps with a cron'd shell script using jmap (works both locally and remotely).
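A sketch of option 1, assuming a HotSpot JVM; the 90% threshold, five-second poll, and file naming are assumptions:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapWatcher implements Runnable {
    public void run() {
        try {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            while (true) {
                MemoryUsage heap = mem.getHeapMemoryUsage();
                if (heap.getUsed() > 0.9 * heap.getMax()) {
                    // Write a live-objects-only dump, then stop watching
                    diag.dumpHeap("high-usage-" + System.currentTimeMillis() + ".hprof", true);
                    return;
                }
                Thread.sleep(5000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

For option 2, the corresponding jmap invocation is jmap -dump:live,format=b,file=heap.hprof <pid>.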
It would be nice if you could have a callback on OOM, but, uhm, that callback probably would just crash with an OOM error. :)
Have you looked at JConsole? It uses JMX to give you visibility into a variety of JVM metrics, including memory info. It would probably be worth monitoring your application with this to begin with, to get a feel for how and when the memory is consumed. You may find the memory is consumed uniformly over the day, or only when using certain features.
Take a look at the detecting low memory section of the above link.
If you need you can then write a JMX client to watch the application automatically and trigger whatever actions required. JConsole will indicate which JMX methods you need to poll.
An alternative to waiting until the application has crashed may be to script a controlled restart, say every night, if you're optimistic that it can survive for twelve hours.
Maybe WebSphere can even do that for you!?
You could add a listener (session-scoped or application-scoped attribute listener) class that would be called each time a new object is added to session/application scope. A rough sketch follows below.
In it, you can check the total memory used by the app (and log it) as well as request a GC run (note that invoking it will not guarantee the GC actually runs).
(The above covers the logging part, and GC based on usage growth.)
For scheduled GC:
In addition, you can keep a timer task class that runs every few hours and requests a GC.
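A rough sketch of such a listener, assuming a Servlet 3.x container (on older containers such as WAS 6.0 you would register it in web.xml instead of using @WebListener); the logging sink is illustrative:

import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionAttributeListener;
import javax.servlet.http.HttpSessionBindingEvent;

@WebListener
public class MemoryLoggingListener implements HttpSessionAttributeListener {
    public void attributeAdded(HttpSessionBindingEvent event) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("Attribute '" + event.getName() + "' added; heap in use: " + usedMb + " MB");
        // System.gc() could be requested here, but it is only a hint to the JVM
    }

    public void attributeRemoved(HttpSessionBindingEvent event) { }

    public void attributeReplaced(HttpSessionBindingEvent event) { }
}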
Our experience with ITCAM has been less than stellar from the monitoring perspective. We dumped it in favor of CA Wily Introscope.
Have you had a look at the jvisualvm tool in the latest Java 6 JDKs?
It is great for inspecting running code.
I'd dispute that you need the heap dumps when the OOM occurs. Periodic gathering of the information over time should give a picture of what's going on.
As has been observed, various tools exist for analysing these problems. I have had success with ITCAM for WebSphere; as an IBMer I have ready access to that. We were very quickly able to identify the exact lines of code in our problem situation.
If there's any way you can get a tool of that nature then that's the way to go.
It should be possible to write a simple program to get the process list from the kernel and scan it to see if your WAS process is still running. On a Unix box you could probably whip up something in Perl in a few minutes (if you know Perl); I'm not sure how difficult it would be under Windows. Run it as a scheduled task every five minutes or so, and if the process doesn't show up, you could have it fork off another process to deal with the heap dump and restart WAS.
A Java application I support that runs on JRE 1.4.2_12 is hanging near midnight every night. I'd like to try and record as much profiling information as I can to discover if there is an issue in the JVM or external to the app.
I'd like to use HPROF to collect as much information as possible.
Is there a way to have HPROF dump its cpu sample and memory allocation report every minute instead of at the termination of the JVM?
Is there a different, more appropriate profiler that can collect information like this?
Rather than relying on dump files, I would try hooking up a profiler to the VM and leaving it attached until the hang occurs. Then use the profiler to introspect the state of the threads.
The use of Java 1.4 is a minor issue here, since 1.4's debug interface is not great, but some profilers still support it. I can particularly recommend YourKit, which is commercial but offers an evaluation licence. It's the best profiler I've used, by some margin.
First things first: did you analyze the thread dump when your application hangs? A lot of the time it has enough information to troubleshoot a hanging Java app...
Ctrl-Break in the process window on Windows, or kill -QUIT [pid] on Linux.
I would first try to determine if it's actually your app or something else.
Are there any other apps on the box? If so, do they run any batch jobs around midnight? Your app could be suffering from a lack of resources due to other things running on the box or chewing up bandwidth.
Was this always the case, or did it start recently? If this is new, look at what changed on the box as a whole, not just your own app.