In a Java heap dump, how do I know exactly where in the code / which thread caused the dump?
For reading the memory dumps:
I would recommend trying the Eclipse Memory Analyzer (MAT).
Another free option would be to open it with JVisualVM (available in $JAVA_HOME/bin).
jhat is cool too but was already recommended :)
Now, you're asking about the thread that caused the heap dump, and not about how to work with the dump itself...
It depends on how you obtained the memory dump.
There are different ways to obtain the dump.
Within your process, you can instruct the JVM to produce a heap dump once an OutOfMemoryError is encountered; in this case I believe it is the JVM itself that writes the dump.
You can trigger heap dump creation from an MBean, provided you have a JMX server running alongside your JVM.
Example
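A minimal sketch of doing this programmatically via the HotSpot-specific HotSpotDiagnosticMXBean (the output path below is just a placeholder):

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void dump(String path) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        diagnostic.dumpHeap(path, true); // true = dump only live (reachable) objects
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/heap.hprof"); // placeholder path; the file must not already exist
    }
}

The same MBean can also be invoked remotely from JConsole or JVisualVM if a JMX connection is available.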
You can even use system calls (on Linux) externally to your application: kill -3 _YOUR_JAVA_PROCESS_ID_ will generate a dump. Note that on HotSpot JVMs this signal produces a thread dump on stdout rather than a heap dump; jmap is the usual external tool for a heap dump.
But I can hardly imagine why you would need such information. Later in the comments you mention the 'exact line of code', but these mechanisms are usually external to your JVM... Are you sure you need the line of code that generated the heap dump itself, or are you trying to identify the real issue?
Hope this helps
In Java you create an object somewhere, use it in many places, and then let the GC collect it. There is no single line causing a leak.
What you should look for in tools like MAT is the object count per class and the heap used by those objects. Pick each of the suspect classes and see why its instances are not garbage collected (someone is holding a reference longer than needed, say a static field).
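As a hypothetical illustration of that pattern (class and field names are invented), a static collection that only ever grows keeps every object it holds reachable forever:

import java.util.ArrayList;
import java.util.List;

public class RequestCache {
    // Every request adds an entry and nothing ever removes one,
    // so the GC can never reclaim the payloads: a classic leak.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void remember(byte[] payload) {
        CACHE.add(payload);
    }
}

In MAT such a class would show up with an ever-growing instance count and retained size, and the path-to-GC-roots query would point at the static field.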
You may find instructions from this page more useful - http://scn.sap.com/people/krum.tsvetkov/blog/2007/07/02/finding-memory-leaks-with-sap-memory-analyzer (also linked from MAT homepage)
Try the Java Heap Analysis Tool (jhat) or jconsole (both are in JAVA_HOME/bin).
Related
I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that memory usage increases quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so, it can reach up to 2GB.
Shortly after, it will generate an OutOfMemory exception (the memory usage will be 2GB - 2.5GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, even to the level of viewing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
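For example (the dump path and jar name here are placeholders), the process would be started with something like:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar myserver.jar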
quoting:
the process starts with ~200MB and after a week or so, it can reach up to 2GB. Shortly after it will generate OutOfMemory exception (the memory usage will be 2GB - 2.5GB).
The problem might not be as simple as seeing what Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on Java 8).
Alternatively, you could just wire up Nashorn to a command line, or run your program via jjs https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
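As a rough sketch of the Nashorn route (the 'suspect' object and whatever you call on it are hypothetical, not from your app), you can expose a live object to a tiny script console and query it while the server runs:

import java.util.Scanner;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class DebugConsole {
    // Call this with whatever object you suspect of leaking.
    public static void run(Object suspect) throws Exception {
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");
        js.put("suspect", suspect);                     // visible to scripts as 'suspect'
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            System.out.println(js.eval(in.nextLine())); // e.g. type: suspect.size()
        }
    }
}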
I have a large (5GB) hprof dump, created by the application when an OutOfMemoryError occurred (using -XX:+HeapDumpOnOutOfMemoryError).
Unfortunately no logs were collected when this error happened, and re-creating it will take a couple of hours. I was hoping some tool could show the exception stack trace or all thread stacks etc. from the hprof.
I am currently using MAT, but could not see a way to get thread information. Which tool could I use?
(I am not sure if the hprof file has information about the thread/location of the call when the OOM occurred.)
(I do know how to take a thread dump in normal cases. The trouble here is that the event has already happened; all I have is the hprof dump.)
Answering my own question. Credit goes to @RC.
Open the dump using VisualVM. It takes a while.
Click on "threads at heap dump".
MAT can show the threads directly now (perhaps this was added since the question was asked).
Threads Overview
To get an overview of all the threads in the heap dump, use the "Thread Overview" button in the toolbar. Alternatively, use the Query Browser > Thread Overview and Stacks query.
I don't think a heap dump contains thread information other than GC roots. If you need thread-related information, you need to take a thread dump as well.
Eclipse MAT allows you to see the suspect threads in the Leak Suspects report. Look for the classes in your application namespace with their line numbers to find how much memory they occupy in the heap. This will give you a hint about which classes are leaking.
You can kill -3 the process id to get a thread dump to standard out. This will not kill the java process so you can do it as many times as you want.
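For example (replace <pid> with the Java process id; jstack ships with the JDK and prints the same stacks without needing a signal):

kill -3 <pid>
jstack <pid> > threads.txt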
As RC stated, VisualVM is a good tool which will give you object counts by class type and all kinds of graphs and profiling tools.
Use VisualVM.
Try to analyse the graph when the perm gen / heap space is exceeded...
You should also check the memory samples and save a snapshot of them.
Analysing the thread stacks will help you narrow down the problem.
To turn on an option you need + and to turn off an option you need -.
What is confusing about the documentation is that it shows the default setting to make it "clear" what setting you have already. The ones with + are on by default and the ones with - are off by default. This means if you copy any of the + or - options from the documentation they should do nothing (except where the default has changed over time)
-XX:-HeapDumpOnOutOfMemoryError turns off the heap dump, which is the default.
-XX:+HeapDumpOnOutOfMemoryError turns on the heap dump.
I have a standalone Java program running on a Linux server. I started the JVM with -Xmx256m. I attached a JMX monitor and can see that the heap never really passes 256MB. However, on my Linux system when I run the top command I can see that:
1) First of all, the RES memory usage of this process is around 350Mb. Why? I suppose this is because of memory outside of the heap?
2) Secondly, the VIRT memory usage of this process just keeps growing and growing. It never stops! It now shows at 2500Mb! So do I have a leak? But heap doesn't increase, it just cycles!
Ultimately this poses a problem because the swap of the system keeps growing and eventually the system dies.
Any ideas what is going on?
The important question I want to ask is: what are some scenarios in which this could be a result of my code and not the JVM, kernel, etc.? For example, if the number of threads keeps growing, would that fit the description of my observations? Anything similar that you can suggest I look out for?
A couple of potential problems:
Direct allocated buffers and memory mapped files are allocated outside of the Java heap, and can't conveniently be disposed.
An area of stack is reserved for each new thread.
Permanent generation (code and interned strings) is outside of the usual heap. It can be a problem if class loaders leak (usually when reloading webapps).
It's possible that the C heap is leaking.
pmap -x should show how your memory has disappeared.
Swap Sun vs IBM JVM to test
RES will include code + non-heap data. Also, some things that you think would be stored in the heap aren't, such as the thread stacks and "class data". (It's a matter of definition, but code and class data are controlled by -XX:MaxPermSize=.)
This one sounds like a memory leak in either the JVM implementation, the linux kernel, or in library JNI code.
If using the Sun JVM, try IBM, or vice versa.
I'm not sure exactly how dlopen works, but code accessing system libraries might be remapping the same thing repeatedly, if that's possible.
Finally, you should use ulimit to make the system fail earlier, so you can repeat tests easily.
WRT #1, it's normal for your RSS to be larger than your heap. This is because system libraries and non-Java code are included in the RSS but not the heap size.
WRT #2, yes, it sounds like you have a leak of some sort. If the system itself is crashing, you are likely consuming too many system resources, like sockets, threads, or files.
Try using lsof to see what files the JVM has open. Run this a few times as your memory increases. If the JVM is crashing, be sure to set the -XX:+HeapDumpOnOutOfMemoryError option.
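For example (with <pid> being the JVM process id), you can watch the number of open file descriptors over time:

lsof -p <pid> | wc -l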
In my experience, the most common cause of non-heap memory leak in Java is thread leak.
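A made-up illustration of such a leak: each call below starts a thread that never exits, so every call permanently pins a native stack (roughly -Xss worth of address space) even though the Java heap stays flat:

public class Poller {
    public void startPolling() {
        new Thread(() -> {
            while (true) {                      // loop never ends, thread never dies
                try {
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }).start();
    }
}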
A tool you may find useful is jvmtop, which lets you monitor heap size, thread number and other metrics in real time.
Sounds like you have a leak. Can't you do profiling to see which function is driving the memory up? I am not sure though.
If I had to take a stab in the dark, I would say that the JVM you are using has a memory leak.
I have a standalone program that I run locally; it is meant to be a server-type program running 24/7. Recently I found that it has a memory leak, and right now our only solution is to restart it every 4 hours. What is the best way to go about finding this memory leak? Which tool and method should we use?
If you are using Java from Sun and you use at least Java 6 update 10 (i.e. the newest), then try running jvisualvm from the JDK on the same machine your program is running on, attach to it, and enable profiling.
This is most likely the simplest way to get started.
When it comes to hunting memory problems, I use the Eclipse Memory Analyser (MAT, formerly SAP Memory Analyzer), a heap dump analysis tool.
The Memory Analyzer provides a general purpose toolkit to analyze Java heap dumps. Besides heap walking and fast calculation of retained sizes, the Eclipse tool reports leak suspects and memory consumption anti-patterns. The main area of application are Out Of Memory Errors and high memory consumption.
Initiated by SAP, the project has since been open sourced and is now known as the Eclipse Memory Analyser. Check out the Getting Started page and especially the Finding Memory Leaks section (I'm pasting it below because I fixed some links):
Start by running the leak report to automatically check for memory leaks.
This blog details How to Find a Leaking Workbench Window.
The Memory Analyzer grew up at SAP. Back then, Krum blogged about Finding Memory Leaks with SAP Memory Analyzer. The content is still relevant!
This is probably the best tool you can get (even for money) for heap dump analysis (and memory leaks).
PS: I do not work for SAP/IBM/Eclipse, I'm just a very happy MAT user with positive feedback.
You need a memory profiler. I recommend trying the Netbeans profiler.
One approach would be to take heap dumps on a regular basis, then trend the instance counts of your classes to try to work out which objects are being consistently created but not collected.
Another would be to switch off parts of your app to try to narrow down where the problem is.
Look at tools like jmap and jhat.
You might look up JMX and the jconsole app that ships with Java. You can get some interesting statistics out-of-the-box, and adding some simple instrumentation to your classes can provide a whole lot more.
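A minimal sketch of that kind of instrumentation (the MBean name and counter are invented for illustration): register a standard MBean and its attributes show up in JConsole's MBeans tab.

import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.ObjectName;

// Standard MBean naming convention: the interface must be named <Class>MBean.
interface CacheStatsMBean {
    long getEntries();
}

public class CacheStats implements CacheStatsMBean {
    private final AtomicLong entries = new AtomicLong();

    public long getEntries() { return entries.get(); }  // readable from JConsole
    public void increment()  { entries.incrementAndGet(); }

    public static CacheStats register() throws Exception {
        CacheStats stats = new CacheStats();
        ManagementFactory.getPlatformMBeanServer()
                .registerMBean(stats, new ObjectName("com.example:type=CacheStats"));
        return stats;
    }
}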
As already stated, jvisualvm is a great way to get started, but once you know what is leaking you may need to find what is holding references to the objects in question, for which I'd recommend jmap and jhat, e.g.
jmap -dump:live,file=heap.dump.out,format=b <pid>
and
jhat heap.dump.out
where <pid> is easily found from jvisualvm. Then in a browser navigate to localhost:7000 and begin exploring.
You need to try to capture a Java heap dump, which is a memory snapshot of the Java process.
It's a critical process for memory consumption optimisation and finding memory leaks.
A Java heap dump is essential for diagnosing memory-related issues including java.lang.OutOfMemoryError, garbage collection problems, and memory leaks, which are all part of the Java web development process.
For clarity, a heap dump contains information such as the Java classes and objects in the heap at the instant the snapshot is taken.
To do it, you need to run jmap -dump:file=myheap.bin <program pid>.
To learn more about how to capture Java heap dumps, check out: https://javatutorial.net/capture-java-heap-dump
I've been tasked with debugging a Java (J2SE) application which after some period of activity begins to throw OutOfMemory exceptions. I am new to Java, but have programming experience. I'm interested in getting your opinions on what a good approach to diagnosing a problem like this might be?
So far I've employed JConsole to get a picture of what's going on. I have a hunch that there are objects which are not being released properly and therefore not being cleaned up during garbage collection.
Are there any tools I might use to get a picture of the object ecosystem? Where would you start?
I'd start with a proper Java profiler. JConsole is free, but it's nowhere near as full featured as the ones that cost money. I used JProfiler, and it was well worth the money. See https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler for more options and opinions.
Try the Eclipse Memory Analyzer, or any other tool that can process a Java heap dump, and then run your app with the flag that generates a heap dump when you run out of memory.
Then analyze the heap dump and look for suspiciously high object counts.
See this article for more information on the heap dump.
EDIT: Also, please note that your app may just legitimately require more memory than you initially thought. You might try increasing the java minimum and maximum memory allocation to something significantly larger first and see if your application runs indefinitely or simply gets slightly further.
The latest version of the Sun JDK includes VisualVM which is essentially the Netbeans profiler by itself. It works really well.
http://www.yourkit.com/download/index.jsp is the only tool you'll need.
You can take snapshots at (1) app start time, and (2) after running app for N amount of time, then comparing the snapshots to see where memory gets allocated. It will also take a snapshot on OutOfMemoryError so you can compare this snapshot with (1).
For instance, the latest project I had to troubleshoot threw OutOfMemoryError exceptions, and after firing up YourKit I realised that most memory was in fact being allocated to some ehcache "LFU" class; the point being that we specified loads of a certain POJO to be cached in memory, but did not specify enough -Xms and -Xmx (starting and max JVM memory allocation).
I've also used Linux's vmstat e.g. some Linux platforms just don't have enough swap enabled, or don't allocate contiguous blocks of memory, and then there's jstat (bundled with JDK).
UPDATE see https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler
You can also add an UncaughtExceptionHandler to your application's threads. This will catch 'uncaught' exceptions, like an OutOfMemoryError, and you will at least have an idea of where the exception was thrown. Usually this is not where the problem is, but rather the 'new' that couldn't be satisfied. As a rule, I always add an UncaughtExceptionHandler to a thread, if nothing else to add logging.
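A minimal sketch of installing one on all threads (the logging is illustrative):

public class Main {
    public static void main(String[] args) {
        // Log any throwable that escapes a thread, including OutOfMemoryError,
        // together with the thread it was thrown on and its stack trace.
        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            System.err.println("Uncaught throwable in thread " + thread.getName());
            throwable.printStackTrace();
        });

        // ... start the rest of the application here ...
    }
}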