Loading a large hprof into jhat - java

I have a 6.5 GB hprof file that was dumped by a 64-bit JVM using the -XX:+HeapDumpOnOutOfMemoryError option. I have it sitting on a 16 GB 64-bit machine, and am trying to get it into jhat, but it keeps running out of memory. I have tried passing in JVM args for minimum settings, but it rejects any minimum, and seems to run out of memory before hitting the maximum.
It seems kind of silly that a JVM running out of memory dumps a heap so large that it can't be loaded on a box with twice as much RAM. Are there any ways of getting this running, or possibly amortizing the analysis?

Use the equivalent of jhat -J-d64 -J-mx16g myheap.hprof to launch jhat, i.e. start jhat in 64-bit mode with a maximum heap size of 16 gigabytes.
If the JVM on your platform defaults to 64-bit operation, the -J-d64 option should be unnecessary.

I would take a look at the Eclipse Memory Analyzer. This tool is great, and I have looked at several multi-gigabyte heaps with it. The nice thing about the tool is that it creates indexes on the dump, so it is not all in memory at once.

I had to load an 11 GB hprof file and couldn't with Eclipse Memory Analyzer. What I ended up doing was writing a program to reduce the size of the hprof file by randomly removing instance information. Once I got the size of the hprof file down to 1 GB, I could open it with Eclipse Memory Analyzer and get a clue about what was causing the memory leak.
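For reference, here is a minimal sketch (hypothetical, not the original answerer's program) of walking the top-level HPROF record stream and totalling the bytes per record tag. A real shrinking tool would start from this loop and additionally parse and drop sub-records inside the heap dump segments (tags 0x0C / 0x1C), which is omitted here:

import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

// Walks the top-level records of an HPROF file and reports how many bytes
// each record tag accounts for. Removing instances would additionally require
// decoding the sub-records inside HEAP DUMP / HEAP DUMP SEGMENT records.
public class HprofRecordStats {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(args[0])))) {
            while (in.readByte() != 0) { }  // skip null-terminated version string
            in.readInt();                   // identifier size
            in.readLong();                  // dump timestamp (ms)

            Map<Integer, Long> bytesPerTag = new TreeMap<>();
            while (true) {
                int tag;
                try {
                    tag = in.readUnsignedByte();
                } catch (EOFException eof) {
                    break;                                      // clean end of file
                }
                in.readInt();                                   // relative timestamp (us)
                long len = in.readInt() & 0xFFFFFFFFL;          // unsigned body length
                bytesPerTag.merge(tag, len, Long::sum);
                long skipped = 0;
                while (skipped < len) {                         // skip the record body
                    long n = in.skip(len - skipped);
                    if (n <= 0) throw new EOFException("truncated record");
                    skipped += n;
                }
            }
            bytesPerTag.forEach((t, b) -> System.out.printf("tag 0x%02X: %,d bytes%n", t, b));
        }
    }
}

Running it as java HprofRecordStats myheap.hprof shows which record types dominate the file, which is useful before deciding what to strip.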

What flags are you passing to jhat? Make sure that you're in 64-bit mode and you're setting the heap size large enough.

Related

How to resolve an OutOfMemoryError the right way

I am writing a Java application that minimizes any boolean expression using the Quine-McCluskey method. When I run my code, I get an OutOfMemoryError with the message "Java heap space"!
If I understand correctly, the exception may have several origins:
The memory space allocated to the JVM heap is insufficient to create the objects required by the application.
A memory leak prevents the garbage collector from releasing objects that are no longer used but are still referenced. These objects are therefore never released and occupy more and more space in the heap until they fill all the available space (see the sketch after this list).
...
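For illustration only (a made-up class, not from the original application), the second case boils down to something like this: a collection that stays reachable forever and only ever grows, so the garbage collector can never reclaim its entries.

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Reachable for the lifetime of the class, so nothing added here is ever collected.
    private static final List<int[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new int[1024]); // grows until "java.lang.OutOfMemoryError: Java heap space"
        }
    }
}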
I know that using a profiling tool may be necessary to analyze the contents of the JVM's memory and determine the origin of the memory consumption. But how do I use those tools? Do I have to modify the -Xmx and -Xms values? What could be the consequences of changing them?
(I also know that I need to optimize my code.)
What are the different debugging steps?
Furthermore, this application has to be used by a lot of users (so on different computers...).
As you can see, I have a lot of questions about this problem (I am a novice)... That's why I created this post. I would like to resolve this problem the right way and also learn good habits.
Thank you!
Increasing the memory is NOT the solution, unless you have unlimited resources!
If this happens in production or on any machine other than your own, you can force the JVM that runs your service to generate a dump file, which you can then download and analyse, by adding the VM flag -XX:+HeapDumpOnOutOfMemoryError to your jvm.conf file.
That is pretty useful when the machine running the app becomes unresponsive, so that no JMX connections can be made to run Oracle's JMC (or similar) monitoring tools. If you are for some reason able to get to the VM, you can start a flight recording and try to analyse which method is causing the OOM.
Check here how to use flight recording.
And check this for a heap dump analysis preview.
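For example, the relevant startup flags might look like the following (the paths are placeholders; on JDK 8 flight recording additionally required -XX:+UnlockCommercialFeatures -XX:+FlightRecorder):
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps
-XX:StartFlightRecording=dumponexit=true,filename=/var/dumps/app.jfr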
It is possible to increase the heap size allocated by the JVM using command line options. Here we have 3 options:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
java -Xms16m -Xmx64m ClassName
It is also possible to increase the heap size allocated by the JVM directly in Eclipse. In the Eclipse IDE go to
Run---->Run Configurations---->Arguments
and set the VM arguments there.

java.lang.OutOfMemoryError: Java heap space in every 2-3 Hours

In our application we have both Apache Server (for the front end only) and JBoss 4.2 (for the business / backend). We are using Ubuntu 12 as the server OS. Our application is repeatedly throwing java.lang.OutOfMemoryError: "Java heap space". (It throws OOMEs for an hour or so and then goes back to working normally for the next 2-3 hours, then repeats the pattern.) Our Java memory settings are
-Xms512m -Xmx1024m
Our server has 6 GB of physical RAM. Please advise: do we need to increase the Java heap size? If yes, what would be an appropriate size given 6 GB of physical RAM?
Are you sure you don't have memory leaks? Also, if you are using memory-intensive APIs like POI for documents or iText for PDFs, make sure your code keeps the memory footprint low. You can use a profiler to see what exactly is happening. If you still need to increase the heap, increase it step by step until it reaches an appropriate value,
like
-Xms512m -Xmx1024m
then
-Xms512m -Xmx2048m
so on ...
I would check whether you have a memory leak, e.g. whether there are objects building up and not being freed.
You can do that with a profiler, e.g. visualvm, or jmap -histo:live might be enough.
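For example, a quick way to see which classes dominate the live set (replace <pid> with the JBoss process id):
jmap -histo:live <pid> | head -30
The output lists instance counts and bytes per class, sorted by size, which is often enough to spot a leaking collection.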
If you don't have a memory leak and the memory usage is valid, I would try increasing the maximum to the most memory you would want the JVM to use, e.g. perhaps 4 GB.

Unable to set Java heap size larger than 1568

I am running a server with the following attributes:
Windows Server 2008 R2 Standard - 64bit
4 GB RAM
I am trying to set the heap size to 3 GB for an application. I am using the flags -Xmx3G -Xms3G. Running with these flags results in the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I have been playing with the setting to see what my ceiling is and found that 1568 MB is my ceiling. What am I missing?
How much physical memory is available on your system (out of the original 4 GB)? It sounds like your system doesn't have 3 GB of physical memory available when the VM starts up.
Remember that the JVM needs more memory than is allocated to the heap -- there are other data structures as well (thread stacks, etc.) that also need memory. So the settings you are providing attempt to use more than 3 GB of memory.
Also, are you using a 64-bit JVM? The practical limit for heap size on a 32-bit VM is 1.4 to 1.6 gigabytes according to this document.
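A quick way to check which JVM you are running:
java -version
A 64-bit HotSpot JVM reports something like "64-Bit Server VM" in its output; a 32-bit JVM does not.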
Java requires contiguous virtual memory on startup. On Windows, 32-bit applications run in a 32-bit emulated environment, so you don't get much more contiguous memory than you would on a 32-bit OS. By comparison, on Solaris you get over 3 GB of virtual memory for 32-bit Java.
I suggest you use the 64-bit version of Java, as this will make use of all the memory you have. You still need to have free memory, but the larger address space doesn't suffer from fragmentation.
BTW: The heap space is only part of the memory used; you also need memory for shared libraries, direct memory, GUI components, etc.
It seems you don't have 3 GB of physical memory available. Here is an interesting article on Java heap size setting errors: Java heap size setting errors

Tool for analyzing large Java heap dumps

I have a HotSpot JVM heap dump that I would like to analyze. The VM ran with -Xmx31g, and the heap dump file is 48 GB large.
I won't even try jhat, as it requires about five times the heap memory (that would be 240 GB in my case) and is awfully slow.
Eclipse MAT crashes with an ArrayIndexOutOfBoundsException after analyzing the heap dump for several hours.
What other tools are available for that task? A suite of command line tools would be best, consisting of one program that transforms the heap dump into efficient data structures for analysis, combined with several other tools that work on the pre-structured data.
Normally, what I use is ParseHeapDump.sh, included within Eclipse Memory Analyzer and described here, and I run it on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip there). The shell script needs fewer resources than parsing the heap from the GUI, plus you can run it on your beefy server with more resources (you can allocate more resources by adding something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the last line of the script).
For instance, the last line of that file might look like this after modification
./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit
Run it like ./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof
After that succeeds, it creates a number of "index" files next to the .hprof file.
After creating the indices, I try to generate reports from them, scp those reports to my local machine, and try to see if I can find the culprit just by that (just the reports, not the indices). Here's a tutorial on creating the reports.
Example report:
./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects
Other report options:
org.eclipse.mat.api:overview and org.eclipse.mat.api:top_components
If those reports are not enough, and if I need some more digging (e.g. via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) with my Eclipse MAT GUI. From there, it does not need too much memory to run.
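For reference, a typical MAT OQL query at that stage might look like this (class name and threshold are purely illustrative):
SELECT * FROM java.lang.String s WHERE s.@retainedHeapSize > 10000
which lists all String instances whose retained size exceeds roughly 10 KB.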
EDIT:
I would just like to add two notes:
As far as I know, only the generation of the indices is the memory-intensive part of Eclipse MAT. Once you have the indices, most of your processing in Eclipse MAT will not need that much memory.
Doing this with a shell script means I can do it on a headless server (and I normally do it on a headless server, because they're normally the most powerful ones). And if you have a server that can generate a heap dump of that size, chances are you have another server out there that can process that much of a heap dump as well.
First step: increase the amount of RAM you are allocating to MAT. By default it's not very much and it can't open large files.
If you are using MAT on macOS, you'll find the MemoryAnalyzer.ini file in MemoryAnalyzer.app/Contents/MacOS. Adjustments to that file did not "take" for me. Instead, you can create a modified startup command/shell script based on the contents of that file and run it from that directory. In my case I wanted a 20 GB heap:
./MemoryAnalyzer -vmargs -Xmx20g -XX:-UseGCOverheadLimit ... other params desired
Just run this command/script from the Contents/MacOS directory via terminal, to start the GUI with more RAM available.
I suggest trying YourKit. It usually needs a little less memory than the heap dump size (it indexes the dump and uses that information to retrieve what you want).
The accepted answer to this related question should provide a good start for you (if you have access to the running process, it generates live jmap histograms instead of heap dumps, and it's very fast):
Method for finding memory leak in large Java heap dumps
Most other heap analysers (I use IBM's http://www.alphaworks.ibm.com/tech/heapanalyzer) require at least a percentage more RAM than the heap itself if you're expecting a nice GUI tool.
Other than that, many developers use alternative approaches, like live stack analysis to get an idea of what's going on.
Although I must question why your heaps are so large: the effect on allocation and garbage collection must be massive. I'd bet a large percentage of what's in your heap should actually be stored in a database / a persistent cache, etc.
This person, http://blog.ragozin.info/2015/02/programatic-heapdump-analysis.html, wrote a custom "heap analyzer" that just exposes a "query style" interface over the heap dump file, instead of actually loading the file into memory:
https://github.com/aragozin/heaplib
Though I don't know if that "query language" is better than the Eclipse OQL mentioned in the accepted answer here.
The latest snapshot build of Eclipse Memory Analyzer has a facility to randomly discard a certain percentage of objects to reduce memory consumption and allow the remaining objects to be analyzed. See Bug 563960 and the nightly snapshot build to test this facility before it is included in the next release of MAT. Update: it is now included in released version 1.11.0.
A not so well known tool, http://dr-brenschede.de/bheapsampler/, works well for large heaps. It works by sampling, so it doesn't have to read the entire dump, though it is a bit finicky.
This is not a command line solution, however I like the tool:
Copy the heap dump to a server large enough to host it. It may well be possible to use the original server.
Enter the server via ssh -X to run the graphical tool remotely, and use jvisualvm from the Java binary directory to load the .hprof file of the heap dump.
The tool does not load the complete heap dump into memory at once, but loads parts when they are required. Of course, if you look around enough in the file, the memory required will eventually reach the size of the heap dump.
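In practice that looks something like this (hostname and path are placeholders):
ssh -X analyst@big-server
jvisualvm
then open the .hprof file from the server's filesystem via File > Load.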
I came across an interesting tool called JXray. It provides limited evaluation trial license. Found it very useful to find memory leaks. You may give it a shot.
Try using JProfiler. It works well for analyzing large .hprof files; I have tried it with a file sized around 22 GB.
https://www.ej-technologies.com/products/jprofiler/overview.html
$499/dev license but has a free 10 day evaluation
When the problem can be "easily" reproduced, one unmentioned alternative is to take heap dumps before memory grows that big (e.g., jmap -dump:format=b,file=heap.bin <pid>).
In many cases you will already get an idea of what's going on without waiting for an OOM.
In addition, MAT provides a feature to compare different snapshots, which can come in handy (see https://stackoverflow.com/a/55926302/898154 for instructions and a description).

How to analyze a large heapdump?

Is there a tool to analyze a large Java heap dump (2 GB) if one can only assign 1.5 GB to the JVM? I can't believe the dump must be fully loaded into memory to be analyzed...
Eclipse Memory Analyzer fails, and so does the IBM tool.
Do I need to use command line tools here now?
If it's a dev server, restrict the max heap size to something a 32-bit OS can handle. If it's in production, demand a 64-bit OS! If you can't get that, you can run jhat on the server (it has a web interface you can access from your own PC).
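For example (adjust the heap to whatever the server can spare; jhat serves its web interface on port 7000 by default):
jhat -J-mx6g -port 7000 /path/to/heap.hprof
then point a browser at http://<server>:7000/.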
One solution is to install the MAT tool on the remote server and generate an HTML output of the analysis to download and view locally. This saves the headache of attempting to get X Windows installed on the remote machine and get all of the ssh tunneling sorted out (which is of course an option as well).
First, download and install the stand-alone Eclipse RCP Application. Then transfer to your server and unpack. Then determine how large the heap dump is and, if necessary, modify the MemoryAnalyzer.ini file to instantiate a JVM with enough RAM for your heap dump.
In this example, I have an 11GB heap dump and have modified the last two lines (adding -Xms)
-startup
plugins/org.eclipse.equinox.launcher_1.3.100.v20150511-1540.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.gtk.linux.x86_64_1.1.300.v20150602-1417
-vmargs
-Xmx16g
-Xms16g
Do an initial run to parse the heap dump. This will generate intermediary data that can be used by subsequent runs to make future analysis faster.
./ParseHeapDump.sh /path/to/heap-dump
After that completes, you can run any of a number of different analysis on the data. The following is an illustration of how to search for memory leak suspects.
./ParseHeapDump.sh /path/to/heap-dump org.eclipse.mat.api:suspects
Unfortunately, Eclipse MAT and all heap dump analysis tools load the entire heap dump into memory in order to do the analysis. If Eclipse MAT fails for you, you may try the HeapHero tool. jhat takes a lot more memory and time than Eclipse MAT to analyze heap dumps.
