Jstack and Jstat stopped working with upgrade to JDK6u23 - java

We recently upgraded from JDK6u20 (Linux, 32bit and 64bit) to JDK6u23. Since then, we can no longer use the tools jstack and jstat to get monitoring information from the running process. If we switch back to JDK6u20, everything works fine.
We are running Tomcat 6. According to this forum post, others have the same problem:
http://forums.oracle.com/forums/thread.jspa?threadID=2151967&tstart=0
Running simple plain Java processes and using the tools works.
Jstack says: "Unable to open socket file: target process not responding or HotSpot VM not loaded. The -F option can be used when the target process is not responding."
Jstat says: 19799 not found
Using Jps does not show the running processes at all, so I guess the problem is of a more general nature with JDK6u23 and also JDK6u24, which ship a new HotSpot engine. Maybe something does not work in conjunction with Tomcat and that HotSpot v19.
Any idea? Help is appreciated.
P.S. Of course, we run that as the same user and we have not changed anything else. Only the JDK.
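For reference, the failing calls look roughly like this (19799 is the Tomcat PID from the jstat output above; a forced dump via -F uses a different attach mechanism and may still work as a stopgap):
jstack 19799
jstat -gcutil 19799 1000
jstack -F 19799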

Found a possible answer in the Oracle forum:
While it's true that 6u23/24 introduce this issue, it's not a bug in jps, but rather a change in behavior of the VM itself. On GNU/Linux, jps and the like only seem to look at /tmp, but not necessarily at your CATALINA_TMPDIR. Whether it is set or not, try exporting CATALINA_TMPDIR=/tmp (which translates to "-Djava.io.tmpdir=/tmp"). After restarting the Tomcat process you should see Tomcat's data under "/tmp/hsperfdata_<user>/", and jps will most likely work again as well.

See "jps returns no output even when java processes are running" for instructions on how to tell jps or jstat to connect to Tomcat's temp dir.
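A minimal sketch of that workaround, assuming Tomcat's java.io.tmpdir points at /opt/tomcat/temp (adjust the path to your CATALINA_TMPDIR):
jps -J-Djava.io.tmpdir=/opt/tomcat/temp -l
jstat -J-Djava.io.tmpdir=/opt/tomcat/temp -gcutil <pid> 1000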

Related

visualvm/jvisualvm: not supported for this JVM

I wanted to monitor the JVM of WildFly running as a service with jvisualvm/VisualVM, but failed to do so. I tried the following things:
setting %TMP% and %TEMP% to C:\Windows\Temp (the WildFly console tells me this is java.io.tmpdir)
running a console with Sysinternals PsTools as the system account (psexec -i -s cmd.exe) and starting VisualVM from within this new console (checked that the temp folders are correctly set)
In both cases the WildFly process was listed under local applications, but VisualVM only told me "not supported for this JVM".
As soon as I run WildFly from the CLI, VisualVM has no problems and shows me everything. There is only the JDK from Oracle installed (with the corresponding JRE).
How can I monitor the WildFly process running as a service (local system account)? Why is it not working with the approaches above?
Thanks a lot (for reading)
Thank you Salah
With your hint (local JMX connection) I've managed to make it work by using the following command for visualvm (no change of TMP/TEMP variables in cmd):
visualvm.exe -cp:a "<path-to-wildfly>\bin\client\jboss-client.jar"
and adding the following JMX connection URL in VisualVM (don't forget to set the username/password for the admin GUI):
service:jmx:http-remoting-jmx://localhost:9990

Heap dump on JRE 6 (Windows) without JDK

Is there a way to create a heap dump on a remote machine without JDK installed?
I can't change the installation / settings and it's running on Windows.
So I only have access to command-line tools.
The problem is that a Java app on a remote machine freezes (there is no OutOfMemoryError, so -XX:+HeapDumpOnOutOfMemoryError is useless) and we need to create a dump.
-XX:+HeapDumpOnCtrlBreak
is not an option either, because it's not supported anymore on JDK 6+.
JMX is not allowed due to security reasons.
Any Ideas? Thank you for your help!
Edit:
Windows
No JDK
No JMX
I think I solved the problem.
You have to "patch" your JRE with some files of the JDK (the same version of course - if you are running jre6uXX you need the corresponding files from jdk6uXX )
Copy the following files:
\JDK6uXX\bin\attach.dll --> %JAVAJRE_HOME%\bin\
\JDK6uXX\bin\jmap.exe --> %JAVAJRE_HOME%\bin\
\JDK6uXX\lib\tools.jar --> %JAVAJRE_HOME%\lib\
No files are overwritten, JRE shouldn't be affected by this.
Now you can use jmap just fine to take dumps ;-)
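For illustration, with those files in place a dump can be taken using the copied jmap (the output path and PID are placeholders):
cd /d %JAVAJRE_HOME%\bin
jmap.exe -dump:format=b,file=C:\dumps\app.hprof 1234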
I appreciate your help! Bye
The simplest solution is to use jmap -dump:live,format=b,file=app.dump <pid> on the command line. You can use jps -lvm to find the process id.
An alternative is to connect to it with jvisualvm. This will take the dump and analyse it for you. You can also use this tool to read a dump written by jmap, so you may end up using it anyway.
Where jvisualvm struggles is with large heap dumps, i.e. more than about half your main memory size. I have found YourKit handles larger dumps and also gives more useful information. An evaluation license might be all you need to diagnose this.
jmx is not allowed due to security reasons
In that case, you can't do this remotely, unless you use YourKit or some other commercial profiler.
You have to start your application with the JMX console enabled on a port to debug your application. Execute jconsole and connect to the port which you have enabled for debugging. You can also use jmap to collect a heap dump.
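A minimal sketch, assuming JMX was enabled on port 8484 without SSL or authentication (as in the answer below):
jconsole localhost:8484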
JProfiler has a command line utility bin/jpdump that can take an HPROF heap dump. There is no need to install JDK. There is also no need to run the GUI installer of JProfiler, just extract the ZIP distribution and execute jpdump on the command line.
Disclaimer: My company develops JProfiler.
Update 2016-06-23
As of JProfiler 9.2, jpdump and jpenable run with Java 6 as well.
You could use jvisualvm, just enable jmx port and connect to your application, then you will be able to generate a heap file.
You can do that by adding the following parameters:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=8484
-Dcom.sun.management.jmxremote.ssl=false
Then you need to add your Tomcat process manually: right-click on your localhost node -> Add JMX Connection -> type your port -> OK.
Your Tomcat process will be listed under the localhost node.
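With Tomcat, one common place for those flags is CATALINA_OPTS, for example in bin/setenv.sh or setenv.bat (port 8484 as above; only do this on a trusted network, since authentication and SSL are disabled):
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8484 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"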
jmap -dump:format=b,file=snapshot.jmap process-pid
Regardless of how the Java VM was started, the jmap tool will produce a heap dump snapshot, in the above example in a file called snapshot.jmap. The jmap output files should contain all the primitive data, but will not include any stack traces showing where the objects have been created.
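On a machine that does have a JDK, the resulting file can then be browsed with the bundled jhat tool (it serves the dump over HTTP on port 7000 by default), or simply opened in VisualVM:
jhat snapshot.jmap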

Why won't the VisualVM Profiler profile my application?

I've created a simple 1 file java application that iterates through a loop, calls some functions, allocates some memory, adds some numbers, etc. I run that application via eclipse's Run As->Java Application.
The running application shows up in Java VisualVM under Local.
I double click on that application and go to the Profiler tab.
The default settings are:
Start profiling from classes: my.main.package.**
Do not profile classes: java.*, javax.*, sun.*, sunw.*, com.sun.*
I click on CPU. The CPU and Memory buttons gray out. Nothing happens.
The Status says profiling inactive.
When my application terminates the Status says application terminated.
What am I doing wrong here? Are there some settings I need to tweak? Do I need to set a VM flag when I launch my application?
I had the same issue after java 1.7.0_45 update. I had to delete the following folder:
C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
After doing so, everything works like a charm.
I'd guess the issue relates to the application being started from within Eclipse: JVisualVM expects to find data in the java.io.tmpdir directory (usually C:\Users\[your username]\AppData\Local\Temp\hsperfdata_[your username] on a Windows system).
I assume that, rather than in the normal location where jps, JVisualVM etc. expect it, Eclipse puts the data in its own temp folder?
If so, try invoking JVisualVM using jvisualvm -J-Djava.io.tmpdir=[Eclipse's temp directory] to explicitly tell it where that data is.
If you can't find the hsperfdata_$USER folder, try just running your application outside Eclipse in the usual command line Java way.
Also note that there was a bug affecting the temp folder (case sensitivity) introduced around 1.6.0_23, so maybe you'd benefit from updating to a more recent Java 6 (or 7) build?
Mikaveli, Kuba and Somaiah Kumbera have provided great solutions. Just adding what I have done to make things work.
I first checked the location C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'. There was no file named with the process ID of my program running inside Eclipse.
I simply stopped the program and added the following parameter to the Run Configurations of the program (Run Configurations -> Arguments -> VM Arguments)
-Djava.io.tmpdir=C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
I started the program again. Still could not profile it. But now I have a file created for the process at the given temp directory.
Then, a simple restart of VisualVM did the trick.
I had the same issue, but with the following symptoms:
I started Jetty, with the work directory in C:\Users\t852124\AppData\Local\Temp
Jetty was creating the hsperfdata_ directory but not setting a processID in it
So when I started visualVM, it could not get any java process info.
I solved this by starting jetty with the -Djava.io.tmpdir=C:/temp/java option.
Now when I started jetty, the process ID was created as a file in the hsperfdata_ directory.
So when I started visualVM, it was able to see my local java process
I had the same problem and running VisualVM with elevated privileges (admin rights) solved the issue.
On Linux with VisualVM 1.3.3 I have to remove local settings of application in ~/.visualvm/1.3.3/ to enable CPU Profiler and CPU Sampler.
Also note that /usr/bin/jvisualvm contains a hardcoded path to OpenJDK (set with the jdkhome variable), which seems to cause a lot of issues compared to running on Oracle JDK 1.7.
Also note that if your application is using a recent non-Oracle JVM, you may need to download the "bleeding edge" VisualVM from github.
For example, the VisualVM bundled with JDK 1.8.0.111 doesn't seem to work with the IBM 1.8 JVM. Possibly the IBM JVM was simply released after the Oracle 1.8 JVM, so including the necessary changes wasn't possible at that time.

visualvm cannot see a java process launched from cygwin

If I start a Java process in a Cygwin console and then launch VisualVM, the latter cannot see the former.
If I start the same process in a DOS console, VisualVM sees it fine. I am on jdk1.6.0_25. This happens both on Win7 32-bit and on Win7 64-bit with a 64-bit JVM.
Can anyone think of an explanation/workaround?
I fixed the problem by running VisualVM from within Cygwin. If you prefer not to profile using a remote JMX connection, you can run both VisualVM and your Java program using Cygwin:
Open the Cygwin Console window, navigate to visual_vm.exe and run that file from within the Cygwin environment.
I had the same problem. The vm was not shown automatically but I was able to connect via "Add JMX Connection", using hostname and jmx.remote.port...
On VisualVM go to File -> Add JMX Connection
localhost:3333
Add these VM parameters at startup, e.g.:
-Dcom.sun.management.jmxremote.port=3333
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
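Put together, launching the application from Cygwin might look roughly like this (port and jar name are illustrative):
java -Dcom.sun.management.jmxremote.port=3333 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -jar myapp.jar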
VisualVM can automatically detect local applications running under the same user, so one explanation could be that the Cygwin process is running under a different user. Make sure that both VisualVM and the monitored application are running on JDK 6 update 25, which has a fix for JDK bug #6938627 that can affect your case.
The opposite approach to seanhodges' answer is to launch the application you want to debug with a modified environment, pointing it back to your Windows user Temp directory.
For example if you normally do:
./gradlew run
And say your TEMP directory on Windows (according to your User environment variables) is:
T:\Temp
You can do one of these instead:
TMP=T:\\Temp ./gradlew run
TMP=/cygdrive/t/Temp ./gradlew run
(they both seem to work)

Can I get Tomcat running as a service to dump heap?

I am attempting to have Tomcat, which is currently running as a service on a Windows 2003 box, dump heap on an OutOfMemoryError.
(Tomcat is running Hudson, which is reporting a heap space problem at the tail end of my build. Running the build manually produces no such error. The Hudson guys need a heap dump to get started.)
As instructed elsewhere, I've told the Apache Service Monitor to configure the JVM it uses to run Tomcat to dump heap when an OutOfMemoryError is encountered by adding the following to the JVM options:
-XX:+HeapDumpOnOutOfMemoryError
Then I run the build again. Sure enough, it reports there was a heap error. I scan the entire disk looking for the default java_pid123.hprof file (where obviously 123 is replaced by the PID of the JVM). No .hprof files exist anywhere.
I am caught in a catch 22: I need the heap dump for the Hudson guys to fix their memory leak, but I can't get the heap dump if I run Hudson under Tomcat.
Is there some special way, when Tomcat is running as a Windows service, to get a heap dump from it on an OutOfMemoryError?
The other thing I've tried is to tell it, on the Startup and Shutdown tabs, to use the "Java" option instead of the "jvm" option. I believe this should tell the Service Manager to attempt to start Tomcat with a Java executable command instead of launching the jvm.dll directly. When I do this, the service won't start.
Surely someone else has had a similar problem?
After finally putting this one to bed, I wanted to answer this for others who might have the same problem.
First, if you install Tomcat on Windows, do not use the .exe installer, even though it is promoted by Apache. It will not let you run Tomcat as anything other than the system account, no matter what you do. It appears that the system account does not have privileges to write .hprof files in the current directory, and no amount of Windows security tweaking appears to make this problem go away.
OK, so you've installed Tomcat from the .zip distribution. Install it as a service using the service.bat script. Make sure it is set to run as a specific user that you created specifically for this purpose. Make sure as well that the folder you want Tomcat to write to in the event of a heap dump is writable by that user.
Edit the service.bat file to include the -XX:+HeapDumpOnOutOfMemoryError and the -XX:HeapDumpPath=C:\whatever options in the correct place (where you can put JVM options). That should do the trick.
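For example, the JVM options line in service.bat (which hands them to the procrun service installer) might end up looking roughly like this; the ++JvmOptions switch appends options separated by ';' or '#', and the dump path is just an example:
++JvmOptions "-XX:+HeapDumpOnOutOfMemoryError;-XX:HeapDumpPath=C:\whatever"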
Have you tried the -XX:HeapDumpPath option?
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
I found the following link, which describes how to configure the tomcat service (includes setting the java parameters). Not sure if it applies to the version you are running.
http://tomcat.apache.org/tomcat-5.5-doc/windows-service-howto.html
When the Java process is running as a Windows service, you can generate the heap dump using the steps below (a worked example follows after the list):
Run the command console as Administrator.
The version of the JDK (for the jmap command) and of the JRE (the Java app's runtime) should be the same.
Get the PID of the running Windows process for that Java application from Task Manager.
Execute the command below:
jmap -dump:file=d:\heapdump\myHeapDump.hprof -F #PID_No#
If you get any exception with JDK/JRE 7, try the same with JDK/JRE 8.
I actually faced some issues with jmap on JDK 7, but when I moved to JDK 8 I was able to successfully generate the heap dump using the same command.
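Putting the PID lookup and the jmap call together, a session in an elevated command prompt might look like this (the PID 4321 is a placeholder; tasklist is just one way to find it besides Task Manager):
tasklist /FI "IMAGENAME eq java.exe"
jmap -dump:file=d:\heapdump\myHeapDump.hprof -F 4321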
The .hprof files are dumped in the current directory. Exactly what that means for a windows service is anyone's guess, assuming it means anything.
I suggest posting a new question (on http://superuser.com) asking what "current directory" means for a windows service.
From 20 Tips for Using Tomcat in Production
Add the following to your JAVA_OPTS in catalina.sh (or catalina.bat for Windows): -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/j2ee/heapdumps
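In practice that can go into bin/setenv.sh (or setenv.bat), which catalina.sh picks up automatically, so the main scripts stay untouched; the dump directory is the one from the tip and must exist and be writable by the Tomcat user:
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/j2ee/heapdumps"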
If you have installed Tomcat with the .exe installer, you can configure the Tomcat service to use an account other than the Local System account, and you can grant that user rights on the "c:\whatever" directory where you are creating your dump file. One thing to remember: the Tomcat service should not run with an account that has administrative privileges. So create a simple user in Windows (a member of the Users group), set the Tomcat service to use this account, and give that user rights on the "c:\whatever" directory. This resolves the directory rights issue, but you still have to configure Tomcat to dump heap on memory errors.
