How to find the exact java command Jenkins runs in the background? - java

So I am trying to reproduce, locally, an issue I see in Jenkins with my Maven project (a resource allocation issue: OOM - can't create native thread). Hence, I want to run the exact java command that Jenkins runs in the background, along with its arguments, but I am not sure where to find it or how to figure that out. The only thing I see in the configuration is the Maven commands I have given it.
Any pointers?

Running jps will give you a list of instrumented JVMs running on a machine, including their runtime arguments.
http://docs.oracle.com/javase/1.5.0/docs/tooldocs/share/jps.html
Other useful commands:
Running jmap will give you a heap dump or a histogram of the counts of all objects allocated on the heap.
Running jvisualvm will start a monitoring tool that allows you to study the JVM interactively.
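To answer the original question (finding the exact command Jenkins runs), a minimal sketch of how you might combine these on the Jenkins machine, assuming the JDK's bin directory is on the PATH:
jps -lvm               # lists JVM PIDs with their main class/jar and the VM arguments passed to them
ps -ef | grep java     # on Linux, shows the full command line including the path to the java executable
You can then copy that command line and run it locally to try to reproduce the OOM.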

Related

Heap dump on JRE 6 (Windows) without JDK

Is there a way to create a heap dump on a remote machine without JDK installed?
I can't change the installation / settings and it's running on Windows.
So I only have access to command-line tools.
The problem is that a Java app on a remote machine freezes (no out-of-memory exception is thrown, so -XX:+HeapDumpOnOutOfMemoryError is useless) and we need to create a dump.
-XX:+HeapDumpOnCtrlBreak
is not an option either, because it's no longer supported on JDK 6+.
JMX is not allowed due to security reasons.
Any Ideas? Thank you for your help!
Edit:
Windows
No JDK
No JMX
I think I solved the problem.
You have to "patch" your JRE with some files from the JDK (the same version, of course - if you are running jre6uXX you need the corresponding files from jdk6uXX).
Copy the following files:
\JDK6uXX\bin\attach.dll --> %JAVAJRE_HOME%\bin\
\JDK6uXX\bin\jmap.exe --> %JAVAJRE_HOME%\bin\
\JDK6uXX\lib\tools.jar --> %JAVAJRE_HOME%\lib\
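A minimal cmd sketch of the copy step (assuming the JDK was unpacked to C:\jdk6uXX; %JAVAJRE_HOME% is the JRE install directory, as above):
copy C:\jdk6uXX\bin\attach.dll "%JAVAJRE_HOME%\bin\"
copy C:\jdk6uXX\bin\jmap.exe "%JAVAJRE_HOME%\bin\"
copy C:\jdk6uXX\lib\tools.jar "%JAVAJRE_HOME%\lib\"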
No files are overwritten, so the JRE shouldn't be affected by this.
Now you can use jmap just fine to take dumps ;-)
I appreciate your help! Bye
The simplest solution is to use jmap -dump:live,format=b,file=app.dump on the command line. You can use jps -lvm to find the process id.
An alternative is to connect to it with jvisualvm. This will take the dump and analyse it for you. You can also use this tool to read a dump written by jmap, so you may end up using it anyway.
Where jvisualvm struggles is with large heap dumps, i.e. more than about half your main memory size. I have found YourKit to handle larger dumps and also give more useful information. An evaluation license might be all you need to diagnose this.
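Putting the two commands together, a rough end-to-end sketch (app.dump is just an example file name):
jps -lvm                                        # note the PID of your application
jmap -dump:live,format=b,file=app.dump <pid>    # write a binary dump of live objects only
jvisualvm                                       # then use File > Load to open app.dump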
jmx is not allowed due to security reasons
In that case, you can't do this remotely, unless you use YourKit or some other commercial profiler.
You have to start your application with the JMX console enabled on a port in order to debug it. Execute jconsole and connect to the port you enabled for debugging. You can also use jmap to collect a heap dump.
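For example, assuming you enabled JMX on port 8484 of the local machine (see the flags in the answer below), you could connect with:
jconsole localhost:8484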
JProfiler has a command line utility bin/jpdump that can take an HPROF heap dump. There is no need to install a JDK. There is also no need to run the GUI installer of JProfiler; just extract the ZIP distribution and execute jpdump on the command line.
Disclaimer: My company develops JProfiler.
Update 2016-06-23
As of JProfiler 9.2, jpdump and jpenable run with Java 6 as well.
You could use jvisualvm: just enable a JMX port and connect to your application, then you will be able to generate a heap file.
You can do that by adding the following parameters:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=8484
-Dcom.sun.management.jmxremote.ssl=false
Then you need to add your Tomcat process manually: right click on the localhost node -> Add JMX Connection -> type your port -> OK.
Your Tomcat process will be listed under the localhost node.
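For Tomcat on Windows, one possible way (a sketch, not the only one) to pass those flags is to append them to CATALINA_OPTS, e.g. in bin\setenv.bat, before starting Tomcat. Note that authenticate=false and ssl=false are only sensible for local debugging:
set CATALINA_OPTS=%CATALINA_OPTS% -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=8484 -Dcom.sun.management.jmxremote.ssl=false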
jmap -dump:format=b,file=snapshot.jmap <process-pid>
Regardless of how the Java VM was started, the jmap tool will produce a heap dump snapshot, in the above example in a file called snapshot.jmap. The jmap output file should contain all the primitive data, but will not include any stack traces showing where the objects were created.

Internal error in eclipse when running java program

Eclipse froze on me earlier today, so I typed "top" into the command prompt and killed it. Now when I try to run a java application, I get this error:
eclipse\plugins\org.eclipse.jdt.debug_3.7.0.v20110509
That's all that shows up under details.
None of my previously working programs run, and I have no clue what this is. I have Eclipse 1.5.0 running Java 1.6 and 1.7, depending on the program. Thanks for any help.
It is possible that you killed part of the process but not all of it, and that a Java process is still running with a reference to this plugin. I would try restarting your computer to see if it stops whatever process is referencing that jar.
Aside from a restart, another option (on Linux) is to use pstree, filtered for your user, to see if any other jobs are referencing that jar and/or java.
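For example (assuming a Linux box and that youruser is a placeholder for your username), something like:
pstree -p youruser | grep -i java
ps -u youruser -f | grep -i java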
EDIT:
Another path is to look at log files. On Linux they are in /var/log. Here's a link in that direction: http://docs.oracle.com/javase/7/docs/webnotes/tsg/TSG-Desktop/html/felog.html

Why won't the VisualVM Profiler profile my application?

I've created a simple one-file Java application that iterates through a loop, calls some functions, allocates some memory, adds some numbers, etc. I run that application via Eclipse's Run As -> Java Application.
The running application shows up in Java VisualVM under Local.
I double click on that application and go to the Profiler tab.
The default settings are:
Start profiling from classes: my.main.package.**
Do not profile classes: java.*, javax.*, sun.*, sunw.*, com.sun.*
I click on CPU. The CPU and Memory buttons gray out. Nothing happens.
The Status says profiling inactive.
When my application terminates the Status says application terminated.
What am I doing wrong here? Are there some settings I need to tweak? Do I need to set a VM flag when I launch my application?
I had the same issue after the Java 1.7.0_45 update. I had to delete the following folder:
C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
After doing so, everything works like a charm.
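To do the same from a command prompt, a sketch assuming the default Windows temp location (close VisualVM and any running JVMs first):
rmdir /s /q "%TEMP%\hsperfdata_%USERNAME%"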
I'd guess the issue relates to the application being started from within Eclipse: JVisualVM expects to find data in the java.io.tmpdir directory (usually C:\Users\[your username]\AppData\Local\Temp\hsperfdata_[your username] on a Windows system).
I assume that, rather than the normal location where jps, JVisualVM etc. expect it, Eclipse puts the data in its own temp folder?
If so, try invoking JVisualVM using jvisualvm -J-Djava.io.tmpdir=[Eclipse's temp directory] to explicitly tell it where that data is.
If you can't find the hsperfdata_$USER folder, try just running your application outside Eclipse in the usual command line Java way.
Also note that there was a bug affecting the temp folder (case sensitivity) introduced around 1.6.0_23, so maybe you'd benefit by updating to a more recent Java 6 (or 7) build?
Mikaveli, Kuba and Somaiah Kumbera have provided great solutions. Just adding what I have done to make things work.
I first checked the location C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'. There was no file named with the process ID of my program running inside Eclipse.
I simply stopped the program and added the following parameter to the Run Configurations of the program (Run Configurations -> Arguments -> VM Arguments)
-Djava.io.tmpdir=C:\users\'username'\AppData\Local\Temp\hsperfdata_'username'
I started the program again. Still could not profile it. But now I have a file created for the process at the given temp directory.
Then, a simple restart of VisualVM did the trick.
I had the same issue, but with the following symptoms:
I started Jetty with the work directory in
C:\Users\t852124\AppData\Local\Temp
Jetty was creating the hsperfdata_ directory but not writing a process-ID file into it,
so when I started VisualVM, it could not get any Java process info.
I solved this by starting Jetty with the -Djava.io.tmpdir=C:/temp/java option.
Now when I start Jetty, the process ID is created as a file in the hsperfdata_ directory,
so when I start VisualVM, it is able to see my local Java process.
I had the same problem and running VisualVM with elevated privileges (admin rights) solved the issue.
On Linux with VisualVM 1.3.3 I had to remove the application's local settings in ~/.visualvm/1.3.3/ to enable the CPU Profiler and CPU Sampler.
Also note that /usr/bin/jvisualvm contains a hardcoded path to OpenJDK (set with the jdkhome variable), which seems to cause a lot of issues compared to running it with Oracle JDK 1.7.
Also note that if your application is using a recent non-Oracle JVM, you may need to download the "bleeding edge" VisualVM from GitHub.
For example, the VisualVM bundled with JDK 1.8.0_111 doesn't seem to work with the IBM 1.8 JVM. Possibly the IBM JVM was simply released after that Oracle 1.8 build, so including the necessary changes wasn't possible at that time.

Jstack and Jstat stopped working with upgrade to JDK6u23

We recently upgraded from JDK6u20 (Linux, 32-bit and 64-bit) to JDK6u23. Since then, we can no longer use the tools jstack and jstat to get monitoring information from the running process. If we switch back to JDK6u20, everything works fine.
We are running Tomcat 6. According to this forum post, others have the same problem:
http://forums.oracle.com/forums/thread.jspa?threadID=2151967&tstart=0
Running simple plain Java processes and using the tools works.
Jstack says: Unable to open socket file: target process not responding or HotSpot VM not loaded The -F option can be used when the target process is not responding.
Jstat says: 19799 not found
Using jps does not show the running processes at all, so I guess the problem is of a more general nature with JDK6u23 (and also JDK6u24), which ships a new HotSpot engine. Maybe something does not work in conjunction with Tomcat and that HotSpot v19.
Any idea? Help is appreciated.
P.S. Of course, we run that as the same user and we have not changed anything else. Only the JDK.
Found a possible answer in the Oracle forum:
While it's true that 6u23/24 introduce this issue, it's not a bug in jps, but rather a change in the behavior of the VM itself. On GNU/Linux, jps and the like seem to only look at /tmp, not necessarily your CATALINA_TMPDIR. Whether it is set or not, try to export CATALINA_TMPDIR=/tmp (which translates to "-Djava.io.tmpdir=/tmp"); after restarting the Tomcat process you should see Tomcat's data under "/tmp/hsperfdata_/" and jps will most likely work again as well.
See "jps returns no output even when java processes are running" for instructions on how to tell jps or jstat to connect to Tomcat's temp dir.
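A minimal sketch of that workaround on the Tomcat box (assuming catalina.sh is used to start Tomcat, which passes CATALINA_TMPDIR to the JVM as -Djava.io.tmpdir):
export CATALINA_TMPDIR=/tmp
# restart Tomcat, then verify the tools can see it again:
jps -lv
jstat -gcutil <tomcat-pid> 1000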

Can I get Tomcat running as a service to dump heap?

I am attempting to have Tomcat, which is currently running as a service on a Windows 2003 box, dump heap on an OutOfMemoryError.
(Tomcat is running Hudson, which is reporting a heap space problem at the tail end of my build. Running the build manually produces no such error. The Hudson guys need a heap dump to get started.)
As instructed elsewhere, I've told the Apache Service Monitor to configure the JVM it uses to run Tomcat to dump heap when an OutOfMemoryError is encountered by adding the following to the JVM options:
-XX:+HeapDumpOnOutOfMemoryError
Then I run the build again. Sure enough, it reports there was a heap error. I scan the entire disk looking for the default java_pid123.hprof file (where obviously 123 is replaced by the PID of the JVM). No .hprof files exist anywhere.
I am caught in a catch 22: I need the heap dump for the Hudson guys to fix their memory leak, but I can't get the heap dump if I run Hudson under Tomcat.
Is there some special way, when Tomcat is running as a Windows service, to get a heap dump from it on an OutOfMemoryError?
The other thing I've tried is to tell it, on the Startup and Shutdown tabs, to use the "Java" option instead of the "jvm" option. I believe this should tell the Service Manager to attempt to start Tomcat with a Java executable command instead of launching the jvm.dll directly. When I do this, the service won't start.
Surely someone else has had a similar problem?
After finally putting this one to bed, I wanted to answer this for others who might have the same problem.
First, if you install Tomcat on Windows, do not use the .exe installer, even though it is promoted by Apache. It will not let you run Tomcat as anything other than the system account, no matter what you do. It appears that the system account does not have privileges to write .hprof files in the current directory, and no amount of Windows security tweaking appears to make this problem go away.
OK, so you've installed Tomcat from the .zip distribution. Install it as a service using the service.bat script. Make sure it is set to run as a specific user that you created specifically for this purpose. Make sure as well that the folder you want Tomcat to write to in the event of a heap dump is writable by that user.
Edit the service.bat file to include the -XX:+HeapDumpOnOutOfMemoryError and the -XX:HeapDumpPath=C:\whatever options in the correct place (where you can put JVM options). That should do the trick.
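If you'd rather not edit service.bat by hand, the same options can usually be appended to an already-installed service via the procrun executable. A sketch, assuming the service is named Tomcat6 and tomcat6.exe is the bundled procrun binary (multiple values are separated with ';'):
tomcat6.exe //US//Tomcat6 ++JvmOptions="-XX:+HeapDumpOnOutOfMemoryError;-XX:HeapDumpPath=C:\whatever"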
Have you tried the -XX:HeapDumpPath option?
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
I found the following link, which describes how to configure the Tomcat service (including setting the Java parameters). Not sure if it applies to the version you are running.
http://tomcat.apache.org/tomcat-5.5-doc/windows-service-howto.html
When a Java process is running as a Windows service, you can generate a heap dump using the steps below:
Run the command console as Administrator.
The JDK version (for the jmap command) and the JRE version (the runtime the app runs on) should be the same.
Get the PID of the running Windows process for that Java application from Task Manager.
Execute the command below:
jmap -dump:file=d:\heapdump\myHeapDump.hprof -F #PID_No#
If you get an exception with JDK/JRE 7, try the same with JDK/JRE 8.
I actually faced some issues with jmap on JDK 7, but when I moved to JDK 8 I was able to successfully generate the heap dump using the same command.
The .hprof files are dumped in the current directory. Exactly what that means for a Windows service is anyone's guess, assuming it means anything.
I suggest posting a new question (on http://superuser.com) asking what "current directory" means for a Windows service.
From 20 Tips for Using Tomcat in Production
Add the following to your JAVA_OPTS in catalina.sh (or catalina.bat for Windows): -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/j2ee/heapdumps
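For example, on Linux this could go in CATALINA_HOME/bin/setenv.sh (a sketch; the dump directory must already exist and be writable by the user Tomcat runs as):
export JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/j2ee/heapdumps"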
If you have installed Tomcat with the .exe installer, you can still configure the Tomcat service to use an account other than the Local System account, and assign that user rights on the "c:\whatever" directory where you are creating your dump file. One thing to remember: the Tomcat service should not run under an account with administrative privileges, so create a simple Windows user (a member of the Users group), set the Tomcat service to use this account, and give that user rights on the "c:\whatever" directory. This resolves the directory rights issue, but you still have to configure Tomcat's JVM options to dump on memory errors.
