Is there any way to show jmap dump progress while it is executing? - java

Is there any way to show jmap dump progress? I am using this command to dump the heap:
/opt/dabai/tools/jdk1.8.0_211/bin/jmap -F -dump:format=b,file=my.dump 5307
the output is:
[root@log001 ~]# /opt/dabai/tools/jdk1.8.0_211/bin/jmap -F -dump:format=b,file=my.dump 5307
Attaching to process ID 5307, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.211-b12
Dumping heap to my.dump ...
The dump has been running for more than 5 minutes. How can I tell what is happening while it runs? Should I keep waiting?
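Not part of the original question, but a practical way to gauge progress: jmap -F goes through the Serviceability Agent, which suspends the target JVM and reads its heap over ptrace, so large heaps can easily take many minutes. A minimal sketch for checking that the dump is still advancing, using standard Linux tools and the same file/pid as above:
# If the dump file keeps growing, jmap is still writing.
watch -n 5 'ls -lh my.dump'
# The target JVM is ptrace-attached while jmap -F runs, so a stopped/traced state here is expected.
ps -o pid,stat,cmd -p 5307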

Related

Jstack hung and cannot capture stack info

My Java program hangs and I cannot debug it. Here is what I tried:
jstack $pid hangs and never responds.
jstack -F $pid with /proc/sys/kernel/yama/ptrace_scope == 0. I get the errors below:
Error: -F option used
Cannot connect to core dump or remote debug server. Use jhsdb jstack instead
Then I tried jhsdb jstack --pid $pid. It still hangs forever, and I got:
Attaching to process ID 123212, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 11+28
Deadlock Detection:
After I press CTRL+C, I get:
Attaching to process ID 123212, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 11+28
Deadlock Detection:
java.lang.RuntimeException: VM.initialize() was not yet called
at jdk.hotspot.agent/sun.jvm.hotspot.runtime.VM.getVM(VM.java:460)
at jdk.hotspot.agent/sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:299)
at jdk.hotspot.agent/sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
at jdk.hotspot.agent/sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:73)
at jdk.hotspot.agent/sun.jvm.hotspot.oops.MetadataField.getValue(MetadataField.java:43)
at jdk.hotspot.agent/sun.jvm.hotspot.oops.MetadataField.getValue(MetadataField.java:40)
at jdk.hotspot.agent/sun.jvm.hotspot.oops.Klass.getNextLinkKlass(Klass.java:131)
at jdk.hotspot.agent/sun.jvm.hotspot.classfile.ClassLoaderData.find(ClassLoaderData.java:96)
at jdk.hotspot.agent/sun.jvm.hotspot.classfile.ClassLoaderDataGraph.find(ClassLoaderDataGraph.java:61)
at jdk.hotspot.agent/sun.jvm.hotspot.memory.SystemDictionary.getAbstractOwnableSynchronizerKlass(SystemDictionary.java:116)
at jdk.hotspot.agent/sun.jvm.hotspot.runtime.DeadlockDetector.print(DeadlockDetector.java:84)
at jdk.hotspot.agent/sun.jvm.hotspot.runtime.DeadlockDetector.print(DeadlockDetector.java:39)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:62)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.StackTrace.run(StackTrace.java:45)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.JStack.run(JStack.java:67)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at jdk.hotspot.agent/sun.jvm.hotspot.tools.JStack.runWithArgs(JStack.java:90)
at jdk.hotspot.agent/sun.jvm.hotspot.SALauncher.runJSTACK(SALauncher.java:259)
at jdk.hotspot.agent/sun.jvm.hotspot.SALauncher.main(SALauncher.java:450)
Can't print deadlocks:VM.initialize() was not yet called
Exception in thread "main" java.lang.RuntimeException: VM.initialize() was not yet called
at jdk.hotspot.agent/sun.jvm.hotspot.runtime.VM.getVM(VM.java:460)

HDFS Datanode crashes with OutOfMemoryError

I'm having repeated crashes in my Cloudera cluster's HDFS DataNodes due to an OutOfMemoryError:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/hdfs_hdfs-DATANODE-e26e098f77ad7085a5dbf0d369107220_pid18551.hprof ...
Heap dump file created [2487730300 bytes in 16.574 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
# Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...
18551 TS 19 ? 00:25:37 java
Wed Aug 7 11:44:54 UTC 2019
JAVA_HOME=/usr/lib/jvm/java-openjdk
using /usr/lib/jvm/java-openjdk as JAVA_HOME
using 5 as CDH_VERSION
using /run/cloudera-scm-agent/process/3087-hdfs-DATANODE as CONF_DIR
using as SECURE_USER
using as SECURE_GROUP
CONF_DIR=/run/cloudera-scm-agent/process/3087-hdfs-DATANODE
CMF_CONF_DIR=/etc/cloudera-scm-agent
4194304
When analyzing the heap dump, the biggest suspects are millions of ScanInfo instances apparently queued in the ExecutorService of the org.apache.hadoop.hdfs.server.datanode.DirectoryScanner class.
When I inspect the content of each ScanInfo runnable object, I don't see anything weird:
Apart from this and a somewhat high block count in HDFS, I don't have any other leads beyond the different DataNodes crashing randomly in my cluster.
Any idea why these objects keep queueing up in the DirectoryScanner thread pool?
You can try the command below.
$ hadoop dfsadmin -finalizeUpgrade
The -finalizeUpgrade command removes the previous version of the NameNode’s and DataNodes’ storage directories.
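Not part of the original answer, but since the backlog sits in the DirectoryScanner thread pool, it may also be worth checking how the scanner is configured. A hedged sketch, assuming the standard HDFS property names:
# How often the DirectoryScanner runs (seconds) and how many threads it uses.
hdfs getconf -confKey dfs.datanode.directoryscan.interval
hdfs getconf -confKey dfs.datanode.directoryscan.threads
If these are tuned far more aggressively than the defaults, the scanner can generate ScanInfo work faster than the DataNode drains it.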

Extracting information from a Java core dump with jmap (1.5)

Long story short, some coworkers are running a pretty old setup (oc4j, jdk1.5.6, x86_64) with an application that happens to be mission critical. They recently tried to deploy a new version of the application, but as soon as they do, the Java process(es) dump core and die.
The problem is, the core dumps seem to be fine, gdb can open them, but jmap and other tools refuse to process them:
# /usr/java/jdk1.5.0_06/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Error attaching to core file: Can't attach to the core file
And newer versions throw an exception:
# jdk1.6.0_45/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jmap.JMap.runTool(JMap.java:179)
at sun.tools.jmap.JMap.main(JMap.java:110)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 20.45-b01. Target VM is 1.5.0_06-b05
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:224)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:287)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:357)
at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:594)
at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494)
at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:348)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:169)
at sun.jvm.hotspot.tools.PMap.main(PMap.java:67)
... 6 more
gdb offers little information without symbols:
Reading symbols from /usr/java/jdk1.5.0_06/bin/java...(no debugging symbols found)...done.
[New Thread 9841]
[New Thread 31442]
[New Thread 31441]
...
Core was generated by `/usr/java/jdk1.5.0_06/bin/java -server -XX:+UseConcMarkSweepGC -XX:MaxHeapFreeR'.
Program terminated with signal 6, Aborted.
#0 0x0000003bbf030285 in ?? ()
(gdb) bt
#0 0x0000003bbf030285 in ?? ()
#1 0x0000003bbf031d30 in ?? ()
#2 0x0000000000000000 in ?? ()
The only valuable information I've gathered from the core is that most threads are blocked (I'm far from being a gdb guru):
35 Thread 10093 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
34 Thread 10097 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
33 Thread 10099 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
Besides, I don't know if it's really relevant. The app is almost always heavily loaded, and my bet is that there was already some lock contention, but since it's another team's app my knowledge of it is pretty shallow.
I guess this is a long shot, but is there anything we can do to get a Java thread dump or something similar? Did Sun use to offer debuginfo for the JDK, as I guess is available now with OpenJDK?
Thanks in advance.
UPDATE: The other team resolved the issue without getting info from the core dump, just by trial and error after successfully replicating the problem on a test system. I'm still intrigued, though: how do you debug an ancient Java core dump that jmap can't process? It might be valuable info for the future, although it seems there is no solution to that problem. Probably the JVM memory got corrupted, and that's why jmap can't process it.
You can add the following JVM option when starting your application; it will allow you to run any command you specify if a fatal JVM error occurs:
-XX:OnError="<cmd args>"
For instance, you could run a command (or a script) that will perform certain actions like get a heap or thread dump.
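As a concrete illustration (my addition, not from the original answer), a hedged sketch of such an option; HotSpot substitutes %p with the pid of the failing JVM, and myapp.jar plus the output path are hypothetical:
java -XX:OnError="jstack -F %p > /tmp/jvm_error_%p.txt" -jar myapp.jar
The same pattern works with -XX:OnOutOfMemoryError, as seen in the Cloudera log above (killparent.sh).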
jmap and the other JVM utilities are extremely version sensitive. From your error it is clear that the jmap you are running does not match the JVM that produced the core.
Java VisualVM can load core dumps directly, but you must use the same JVM that created the core file.
Resource Link:
https://stackoverflow.com/a/9981498/2293534
Suggestion#1:
kjkoster gives a solution in this tutorial:
You need to use the jmap that comes with the JVM. From your error message I gather that you are using a different version of jmap than of the JVM.
Please check which JVMs are installed on your machine and ensure that when you run jmap, you use the right version.
To solve such issues I never rely on PATH. Instead I set JAVA_HOME to the one that the JVM uses and then invoke both the JVM and jmap like so:
Code:
$ JAVA_HOME=/usr/local/jdk1.6.0
$ export JAVA_HOME
$ ${JAVA_HOME}/bin/java ...
...
$ ${JAVA_HOME}/bin/jmap ...
Hope this helps.
Suggestion#2:
A full step-by-step solution is given by Chamilad. Hopefully it will clarify the root cause and the solution procedure.
Almost every Java developer knows about jmap and jstack tools that come with the JDK. These provide functionality to extract heap and thread information of a running JVM instance. Easy.
What if there’s a running JVM that has produced a deadlock and you want to take a thread dump while the process is running? You go in and run the following.
jstack pid >> thread_dump.txt
Turns out the system doesn’t know what jstack is. You don’t panic, but you get a tiny sensation at the back of your head saying you’re not leaving early this Friday.
What has happened is the running JVM is based on a JRE and not a JDK. The JRE is a minimal runtime that doesn’t pack the monitoring and analysis tools that the JDK packs.
So what are our options here?
Stop the process, download the JDK, start the process again on top of the JDK, and hope the deadlock happens again. Nope.
Start JVisualVM on your laptop and hope the process has JMX enabled. Nope.
tools.jar TO THE RESCUE!
Functionality such as jstack is implemented in the tools.jar file, which is packed inside the <JDK_HOME>/lib folder. We can use this to invoke the JStack class and get a thread dump of the running process.
So we march on to download and extract the JDK, and then run the following.
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
...and come across the following error.
Exception in thread "main" java.lang.UnsatisfiedLinkError: no attach in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at sun.tools.attach.LinuxVirtualMachine.<clinit>(LinuxVirtualMachine.java:342)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
at sun.tools.jstack.JStack.runThreadDump(JStack.java:163)
at sun.tools.jstack.JStack.main(JStack.java:116)
Darn it! Spoiled again!
How do we solve this? The above error occurs when the process can't find the libattach.so file, which is needed for the Dynamic Attach functionality JStack relies on. Setting the following environment variable will help the JVM find libattach.so.
export LD_LIBRARY_PATH=<JDK_HOME>/jre/lib/amd64/
Now let’s run JStack again, this time with results!
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
Now that we have the thread dump, we move on to the heap dump. The tool we normally use is jmap but that too is not available on the JRE. So what? We can use the binary in the JDK’s bin directory right? right?
[root@snowflake1 latest]# <JDK_HOME>/bin/jmap -heap <pid>
Attaching to process ID <pid>, please wait…
Error attaching to process: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:435)
at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:305)
at sun.jvm.hotspot.HotSpotAgent.attach(HotSpotAgent.java:140)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:185)
at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.tools.jmap.JMap.runTool(JMap.java:201)
at sun.tools.jmap.JMap.main(JMap.java:130)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:227)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:294)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:370)
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:431)
… 11 more
Nope! Unless you match the JDK version with the exact version the JRE is at, you get the above issue (which is pretty self-explanatory). So we download the JDK matching the JRE our process is running on and run jmap again.
<JDK_HOME>/bin/jmap -dump:file=heap_dump.hprof <pid>
Resource Link:
Extracting memory and thread dumps from a running JRE based JVM
jmap is not going to help you debug a core dump. The JVM dumps core when either it has a bug or you have JNI code with a problem. Organizations with mission-critical applications should, sadly, see upgrading from unsupported versions of the JVM as mission-critical, or be prepared to pay Oracle a fortune for help.

How does jstack -F affect a running Java process?

I am trying to diagnose a problem where a Java web application I'm using (Jenkins) becomes unresponsive. If I run jstack without the -F flag it doesn't give me anything, but if I put the flag in to force a thread dump not only do I get a result, but the application starts responding and goes on as if nothing had happened until it eventually stops responding again.
What does jstack -F flag do that would affect a running JVM and cause an unresponsive application to start responding again?
You can see the source to jstack here. The -F argument changes how jstack connects to the JVM. With -F (or -m) JStack connects to the JVM using the Java debugger interface. If a pid is specified, JStack connects with the SA PID Attaching Connector, which says:
The process to be debugged need not have been started in debug mode (i.e., with -agentlib:jdwp or -Xrunjdwp). It is permissible for the process to be hung.
I don't know why it would cause an unresponsive application to start responding again, but the link above also says,
The process is suspended when this connector attaches and resumed when
this connector detaches.
This may have an effect.
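One way to observe this yourself (my addition, not from the original answer): while jstack -F is attached, the target threads are stopped via ptrace, which shows up in the process state, and a SIGQUIT-based dump avoids the suspension entirely. A minimal sketch:
# While jstack -F is attached, the traced JVM typically shows a stopped/traced ("t"/"T") state.
ps -o pid,stat,wchan,cmd -p <pid>
# A non-intrusive alternative: SIGQUIT makes the JVM print a thread dump to its own stdout.
kill -3 <pid>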
jstack -F -l pid is similar to the following (assume the working dir is JAVA_HOME):
bin/java -Dsun.jvm.hotspot.debugger.useWindbgDebugger -Dsun.jvm.hotspot.debugger.useProcDebugger -cp lib/sa-jdi.jar;lib/tools.jar sun.tools.jstack.JStack -F -l pid
and in the sun.tools.jstack.JStack code
if (arg.equals("-F")) {
    useSA = true;
}
.....
// now execute using the SA JStack tool or the built-in thread dumper
if (useSA) {
    // parameters (<pid> or <exe> <core>)
    ...
    runJStackTool(mixed, locks, params);
} else {
    // pass -l to thread dump operation to get extra lock info
    String pid = args[optionCount];
    ...
    runThreadDump(pid, params);
}
and since -F is passed in, runJStackTool is called to load sun.jvm.hotspot.tools.JStack, which has the same effect as invoking directly:
bin\java -Dsun.jvm.hotspot.debugger.useWindbgDebugger -Dsun.jvm.hotspot.debugger.useProcDebugger -cp lib/sa-jdi.jar;lib/tools.jar sun.jvm.hotspot.tools.JStack pid
and sun.jvm.hotspot.tools.JStack will call the sun.jvm.hotspot.bugspot.BugSpotAgent attach -> go -> setupVM methods.
Maybe the code below is the magic:
jvmdi = new ServiceabilityAgentJVMDIModule(debugger, saLibNames);
if (jvmdi.canAttach()) {
    jvmdi.attach();
    jvmdi.setCommandTimeout(6000);
    debugPrintln("Attached to Serviceability Agent's JVMDI module.");
    // Jog VM to suspended point with JVMDI module
    resume();
    suspendJava();
    suspend();
    debugPrintln("Suspended all Java threads.");
}
It will suspend all Java threads in the target process. If your application hangs due to thread starvation, the suspend/resume calls may unstick the threads.

Analyzing a java tomcat thread dump using Eclipse Memory Analyzer - erroring out

So I got a Java Tomcat heap/thread dump using kill -3.
The dump seems to go into catalina.out in the form of
Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.2-b06 mixed mode):
"RMI TCP Connection(2018)-50.28.31.254" daemon prio=10 tid=0x00007f743cb90800 nid=0x2624 runnable [0x00007f7438ef7000]
java.lang.Thread.State: RUNNABLE
ending with
Heap
PSYoungGen total 570432K, used 315753K [0x00000007d5560000, 0x0000000800000000, 0x0000000800000000)
eden space 442752K, 71% used [0x00000007d5560000,0x00000007e89ba460,0x00000007f05c0000)
70000000,0x000000077848b090,0x0000000780000000)
...
I copied and pasted the above (and everything in between).
Eclipse errored out with the following error when trying to "Open a dump":
Error opening heap dump 'dump2.txt'. Check the error log for further details.
Error opening heap dump 'dump2.txt'. Check the error log for further details.
Invalid HPROF file header. (java.io.IOException)
Invalid HPROF file header.
What am I doing wrong?
What you're doing wrong is that this is a thread dump and not a heap dump. The easiest way to do a heap dump instead would be to use VisualVM.
Alternatively, you could use jmap to create your heap dump e.g.:
jmap -dump:file=app.bin 123456
which will create a heap dump called app.bin for process id 123456
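On reasonably recent JDKs, jcmd is an equivalent alternative (my addition, not from the original answer); a minimal sketch:
# Heap dump via jcmd; the resulting HPROF file is what Eclipse MAT expects to open.
jcmd 123456 GC.heap_dump /tmp/app.hprof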
