Trigger Heap Dump of a 1.5 JVM running on Windows

I'm trying to diagnose a PermGen memory leak problem in a Sun One 9.1 Application Server. In order to do that I need to get a heap dump of the JVM process. Unfortunately, the JVM process is version 1.5 running on Windows. Apparently, none of the ways for triggering a heap dump support that setup. I can have the JVM do a heap dump after it runs out of memory, or when it shuts down, but I need to be able to get heap dumps at arbitrary times.
The two most often mentioned ways of getting heap dumps are either using jmap or using the HotSpotDiagnostic MBean. Neither of those supports JVM 1.5 on Windows.
Is there a method that I've missed? If there's a way to programmatically trigger a heap dump (without using the HotSpotDiagnostic MBean), that would do too...
If it's really not possible to do it in Windows, I guess I'd have to resort to building a Linux VM and doing my debugging in there.
Thanks.

There was a new HotSpot option introduced in Java 6, -XX:+HeapDumpOnOutOfMemoryError, which was actually backported to the Java 5 JVM.
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
Dump heap to file when java.lang.OutOfMemoryError is thrown. Manageable. (Introduced in 1.4.2 update 12, 5.0 update 7.)
It's very handy. The JVM lives just long enough to dump its heap to a file, then falls over.
Of course, it does mean that you have to wait for the leak to get bad enough to trigger an OutOfMemoryError.
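To enable it, the option goes into the app server's JVM options together with a dump location; a sketch (the dump directory is whatever you choose):
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\dumps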
An alternative is to use a profiler, like YourKit, which provides the means to take a heap snapshot of a running JVM. I believe it still supports Java 5.
P.S. You really need to upgrade to Java 6...

If it's 1.5.0_14 or later, you can use -XX:+HeapDumpOnCtrlBreak and hit Ctrl-Break in the console.
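For example, when launching from a console (the jar name here is a placeholder):
java -XX:+HeapDumpOnCtrlBreak -jar yourserver.jar
Each Ctrl-Break should then write an .hprof heap dump to the working directory, alongside the usual thread dump.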


Monitoring Java internal objects & memory usage

I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that the memory usage is increasing quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so, it can reach up to 2GB.
Shortly after, it throws an OutOfMemory exception (memory usage at that point is 2GB - 2.5GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, even to the level of viewing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
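If the service runs on a remote machine, VisualVM can attach over JMX; a minimal sketch of the JVM options to enable that (no authentication or SSL, so suitable only for a trusted network, and the port number is an arbitrary choice):
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false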
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
Quoting:
the process starts with ~200MB and after a week or so, it can reach up to 2GB. Shortly after, it throws an OutOfMemory exception (memory usage at that point is 2GB - 2.5GB).
The problem might not be as simple as seeing which Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on Java 8).
Alternatively, you could just wire up Nashorn to a command line or run your program via jjs: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
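As a minimal sketch of the "force code, then monitor memory" idea (runSuspectCode() is a placeholder for whatever code path you suspect of leaking):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapWatch {
        public static void main(String[] args) {
            MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
            for (int i = 0; i < 10; i++) {
                runSuspectCode();  // exercise the code you suspect of leaking
                System.gc();       // request a GC so what remains is really retained
                MemoryUsage heap = memoryBean.getHeapMemoryUsage();
                System.out.printf("iteration %d: heap used = %d MB%n",
                        i, heap.getUsed() / (1024 * 1024));
            }
        }

        static void runSuspectCode() {
            // placeholder: call the suspected leaking code here
        }
    }

If heap usage climbs steadily across iterations despite the GC requests, the exercised code path is a good leak candidate.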

How do you check memory usage and force garbage collection for a Jetty application

I think I may have a memory leak in a servlet application running in production on Jetty 8.1.7.
Is there a way of seeing how much heap memory is actually being used at a given instant, not the max memory allocated with -Xmx, but the actual amount of memory in use?
Can I force a garbage collection to occur for an application running within Jetty?
Yes, both are easily achievable using VisualVM (see: http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/monitor_tab.html). It is shipped with the Oracle JDK by default, so no extra installation is required.
However, for memory leak detection I'd suggest taking a memory dump and analyzing it later with Eclipse MAT ( http://www.eclipse.org/mat/ ), as it has quite a nice UI for visualizing Java memory dumps.
EDIT:
For SSH-only access, yes, you can use the two tools mentioned. However, you need to run them on a machine with a running window manager and connect over SSH to the other machine (you need to have Java on both of these machines):
For VisualVM: you need to have VisualVM running on one machine and connect via SSH to the remote one, see: VisualVM over ssh
And for the memory dump: use jmap (for sample usage see: http://kadirsert.blogspot.de/2012/01/…); afterwards, download the dump file and load it locally into Eclipse MAT.
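A typical jmap invocation for that dump (the pid and file name are placeholders):
jmap -dump:format=b,file=heap.hprof <pid>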
Enable JMX and connect to it using JConsole:
http://wiki.eclipse.org/Jetty/Tutorial/JMX
You can call System.gc(). That will typically perform a full GC ... but this facility can be disabled. (There is a JVM option to do this with HotSpot JVMs.)
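For example, a minimal sketch (remember that System.gc() is only a request; on HotSpot it is ignored if the JVM was started with -XX:+DisableExplicitGC):

    public class GcProbe {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long before = rt.totalMemory() - rt.freeMemory();
            System.gc();  // a request, not a guarantee
            long after = rt.totalMemory() - rt.freeMemory();
            System.out.printf("heap used: %d KB -> %d KB%n", before / 1024, after / 1024);
        }
    }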
However, if your problem is a memory leak, running the GC won't help. In fact, it is likely to make your server even slower than it currently is.
You can also monitor the memory usage (in a variety of ways; see the other answers), but that only gives you evidence that a memory leak might exist.
What you really need to do is find and fix the cause of the memory leak.
Reference:
How to find a Java Memory Leak
You can use jvisualvm.exe, which is under the %JAVA_HOME%\bin folder. With this application you can monitor memory usage and force a GC.

Memory leaks of Garbage Collector objects

I have a significant memory leak in my application. I have run jmap and it says that there are currently the following objects that should not be there (and are the major source of the leak):
java.lang.management.MemoryUsage - 3938500 instances, 189048000 bytes
[Ljava.lang.management.MemoryUsage; - 787700 instances, 31508000 bytes
com.sun.management.GCInfo - 293850 instances, 22055600 bytes
sun.management.GCInfoCompositeData - 393850 instances, 12603200 bytes
I do not directly use these objects; they are, however, used by the garbage collector.
I use:
Java version: 1.7.0-b147
VM version: Java Hotspot(TM) 64-bit Server VM (build 21.0-b17, mixed mode)
The application is run in Jetty version 7.3.1
I currently use the concurrent low-pause garbage collector. However, I had the same problem even when running the throughput collector.
Do you have any idea why these objects stay in memory? What would you suggest doing?
UPDATE: The memory leak still occurs with Java 1.7 update 1 (1.7.0_01-b08, Java Hotspot(TM) 64-bit Server VM (build 21.1-b02, mixed mode) )
UPDATE 2: The memory leak is caused by JConsole. There are no instances of the classes mentioned above before JConsole is started. Once I connect to the application with JConsole, the objects start to appear in memory and they remain there forever. After I shut down JConsole, the objects are still in memory, and their number keeps growing until the application is shut down.
I have not really used jmap but I have handled memory leaks in our application.
Does your application go out of memory? I would suggest dumping before the application closes; add the following to your VM args:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
When your application goes OOM, it will create an .hprof file under /tmp that you can use to debug the issue.
If it doesn't go OOM, try allocating a smaller heap so that you can force an OOM.
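For example (a sketch; the heap size and jar name are placeholders):
java -Xmx256m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -jar yourapp.jar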
I used Eclipse MAT to analyze these files. It is pretty good because it will immediately point you at the suspects of the leak.
I think you need to provide more details as to what the app is for and what it's doing. Are you just using JConsole for tracking this problem down?
I use Visual VM for tracking down these types of issues. See this link on how to use it for a memory leak and this one for the Visual VM main page.
I have the same problem. I did an investigation two months ago, and the problem is in the Java 7.0 virtual machine. In my scenario, a java.lang.management.MemoryUsage object is hanging around and growing by hundreds of MB daily. All the other objects you see hanging around are referenced by that java.lang.management.MemoryUsage object. The problem is that this MemoryUsage object hangs around only in Java 7.0 and higher versions, because this MemoryUsage class was added in Java 7 and was never in previous versions of Java.
Most importantly, this MemoryUsage class starts hanging around in memory only after I use JConsole to connect to the server. When JConsole connects for the first time, it creates some MemoryUsage tracking mechanism, which starts to create MemoryUsage objects. Those objects are then used to draw the nice graphs in JConsole. This is all fine. The problem is that Java 7 is buggy and never frees the memory: the MemoryUsage objects hang on the heap forever. It doesn't matter that you close JConsole; the heap will continue to grow afterwards.
The first time you use JConsole to connect to a Java 7.0 process, you create the problem, and there is no solution. Just don't use JConsole or any other memory monitoring tool, or don't use Java 7. In my scenario I am doomed, because I have to use JConsole all the time, and Java 6 is not an option for me, because there is another bug there which leaks memory through locking objects. I reported this bug to Oracle, but I have no idea whether they received it, know about it, or are solving it. I am just waiting for a newer version of Java so I can test it and stop restarting my server every few days.
I reported an issue to Oracle a couple of years ago where, in JDK 7, a memory leak would start the moment you connected JConsole. The leak would persist forever, even if you disconnected JConsole.
What was leaking? Objects relating to why the garbage collector ran. Mostly strings, in fact (like "Allocation Failure"). I only found the issue because I used YourKit, and in YourKit you can analyse the objects that are tagged as garbage collectable. Basically, the objects weren't referenced by anything in my application, but they weren't being collected by the garbage collector either.
Most heap dump tools remove garbage collectable objects from the analysis immediately. So YourKit was critical in pinpointing that the bug was really in the JVM.
Couldn't find my ticket but I found other ones:
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7143760
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7128632
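If you want to check for this pattern yourself without YourKit, one rough approach is to compare jmap class histograms with and without a forced full GC (the pid is a placeholder; -histo:live triggers a full GC before counting, so it only shows reachable instances):
jmap -histo <pid> | grep MemoryUsage
jmap -histo:live <pid> | grep MemoryUsage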

Collecting a Java heap dump under load

I am running load against Tomcat 6 running on Java 6. I want to collect a heap dump while the Tomcat server is under load. I normally use jmap -dump for this.
However, when I try to do this while Tomcat is handling a high load, I find that the heap dump collection fails.
Is jmap the best tool for collecting a heap dump from a process under load? What are the possible causes that would make jmap fail to collect a heap dump?
If jmap is not the best tool, what is better?
It is entirely acceptable to me for jmap (or some other tool) to stop the world within the Java process while the heap dump is taken.
Is jmap the best tool for collecting a heap dump from a process under load?
I think: no, it isn't. From this link:
NOTE - This utility is unsupported and may or may not be available in future versions of the JDK.
I've also found jmap can be pretty temperamental. If you're having problems:
- Try it again; it often manages to get a heap dump after a couple of attempts if it fails at first
- Use the -F option (see the example after this list)
- Add -XX:+HeapDumpOnOutOfMemoryError as a standard configuration to proactively take heap dumps when an OOM error is thrown
- Run Tomcat interactively and add the heap-dump-on-Ctrl-Break option. This gives you a thread dump too, something you'll probably need anyway
- If your heap size is especially large and you have a repeatable condition, temporarily lower your heap size. It makes the resulting file much easier to handle, takes less time, and is more likely to succeed
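For the -F variant mentioned in the list, the invocation looks like this (the pid and file name are placeholders):
jmap -F -dump:format=b,file=tomcat-heap.hprof <pid>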
I have found that running Tomcat with a JMX port allows me to take a remote heap dump using VisualVM. This succeeded for me when jmap failed.

General strategy to resolve Java memory leak?

I have a standalone program that I run locally; it is meant to be a server-type program running 24/7. Recently I found that it has a memory leak, and right now our only solution is to restart it every 4 hours. What is the best way to go about finding this memory leak? Which tool and method should we use?
If you are using Java from Sun and you have at least Java 6 update 10 (i.e. the newest at the time of writing), try running jvisualvm from the JDK on the same machine as your program, attach to it, and enable profiling.
This is most likely the simplest way to get started.
When it comes to hunting memory problems, I use Eclipse Memory Analyzer (MAT), formerly SAP Memory Analyzer, a heap dump analysis tool.
The Memory Analyzer provides a general purpose toolkit to analyze Java heap dumps. Besides heap walking and fast calculation of retained sizes, the Eclipse tool reports leak suspects and memory consumption anti-patterns. The main area of application are Out Of Memory Errors and high memory consumption.
Initiated by SAP, the project has since been open sourced and is now known as Eclipse Memory Analyzer. Check out the Getting Started page and especially the Finding Memory Leaks section (I'm pasting it below because I fixed some links):
Start by running the leak report to automatically check for memory leaks.
This blog details How to Find a Leaking Workbench Window.
The Memory Analyzer grew up at SAP. Back then, Krum blogged about Finding Memory Leaks with SAP Memory Analyzer. The content is still relevant!
This is probably the best tool you can get (even for money) for heap dump analysis (and memory leaks).
PS: I do not work for SAP/IBM/Eclipse, I'm just a very happy MAT user with positive feedback.
You need a memory profiler. I recommend trying the NetBeans profiler.
One approach would be to take heap dumps on a regular basis, then trend the instance counts of your classes to try to work out which objects are being consistently created but not collected.
Another would be to switch off parts of your app to try to narrow down where the problem is.
Look at tools like jmap and jhat.
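For instance, a hedged sketch of the "trend the instance counts" idea using jmap (the pid is a placeholder; -histo:live forces a full GC first, so only reachable objects are counted):
jmap -histo:live <pid> > histo-$(date +%s).txt
Repeat this every few minutes (e.g. from cron) and diff consecutive files; classes whose counts grow monotonically are your leak suspects.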
You might look up JMX and the jconsole app that ships with Java. You can get some interesting statistics out-of-the-box, and adding some simple instrumentation to your classes can provide a whole lot more.
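As a minimal sketch of what such "simple instrumentation" can look like (all names here are illustrative, not from any existing library), a standard MBean exposes an application counter that JConsole will display under its MBeans tab:

    // CacheStatsMBean.java - the management interface; the XxxMBean naming is required
    public interface CacheStatsMBean {
        long getEntries();
    }

    // CacheStats.java
    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.ObjectName;

    public class CacheStats implements CacheStatsMBean {
        private final AtomicLong entries = new AtomicLong();

        public long getEntries() { return entries.get(); }
        public void increment() { entries.incrementAndGet(); }

        public static void main(String[] args) throws Exception {
            CacheStats stats = new CacheStats();
            // Register under a name of our choosing; JConsole shows it under MBeans.
            ManagementFactory.getPlatformMBeanServer()
                    .registerMBean(stats, new ObjectName("com.example:type=CacheStats"));
            // A real application would call stats.increment() as it works; here we
            // just tick the counter to keep the JVM alive so a client can connect.
            while (true) {
                stats.increment();
                Thread.sleep(1000);
            }
        }
    }

Once registered, the Entries attribute can be watched and graphed in JConsole like any built-in statistic.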
As already stated, jvisualvm is a great way to get started, but once you know what is leaking you may need to find out what is holding references to the objects in question, for which I'd recommend jmap and jhat, e.g.
jmap -dump:live,file=heap.dump.out,format=b <pid>
and
jhat heap.dump.out
where <pid> is easily found from jvisualvm. Then in a browser navigate to localhost:7000 and begin exploring.
You need to try to capture a Java heap dump, which is a memory print of the Java process.
It's a critical step in optimising memory consumption and finding memory leaks.
A Java heap dump is an essential artifact for diagnosing memory-related issues, including java.lang.OutOfMemoryError, garbage collection issues, and memory leaks, all of which are part of the Java web development process.
For clarity, a heap dump contains information such as the Java classes and objects in the heap at the instant the snapshot is taken.
To take one, run jmap -dump:file=myheap.bin <program pid>.
To learn more about how to capture Java heap dumps, check out: https://javatutorial.net/capture-java-heap-dump
