I have a significant memory leak in my application. I ran jmap, and it reports that the following objects are currently present that should not be there (and are the major source of the leak):
java.lang.management.MemoryUsage - 3938500 instances, 189048000 bytes
[Ljava.lang.management.MemoryUsage - 787700 instances, 31508000 bytes
com.sun.management.GCInfo - 293850 instances, 22055600 bytes
sun.management.GCInfoCompositeData - 393850 instances, 12603200 bytes
I do not use these objects directly. They are, however, used by the garbage collector.
I use:
Java version: 1.7.0-b147
VM version: Java HotSpot(TM) 64-bit Server VM (build 21.0-b17, mixed mode)
The application runs in Jetty 7.3.1.
I am currently using the concurrent low-pause (CMS) garbage collector; however, I had the same problem even when running the throughput collector.
Do you have any idea why these objects stay in memory? What would you suggest I do?
UPDATE: The memory leak still occurs with Java 1.7 update 1 (1.7.0_01-b08, Java HotSpot(TM) 64-bit Server VM (build 21.1-b02, mixed mode)).
UPDATE 2: The memory leak is caused by JConsole. There are no instances of the classes mentioned above before JConsole is started. Once I connect to the application with JConsole, the objects start to appear in memory and they remain there forever. After I shut down JConsole, the objects are still in memory and their number keeps growing until the application is shut down.
I have not really used jmap, but I have handled memory leaks in our application.
Does your application go out of memory? I would suggest dumping the heap before the application closes; add the following to your VM args:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp
When your application goes OOM, it will create an .hprof file under /tmp that you can use to debug the issue.
If it doesn't go OOM, try allocating a smaller heap so that you can force an OOM, as in the sketch below.
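If you just want to verify that the dump settings actually work, a throwaway sketch like the following (my own illustration, not part of the original setup; the class name is arbitrary) exhausts a small heap quickly:

import java.util.ArrayList;
import java.util.List;

public class ForceOom {
    public static void main(String[] args) {
        List<byte[]> hold = new ArrayList<byte[]>();
        while (true) {
            // keep each 1 MB block reachable so the GC cannot reclaim it
            hold.add(new byte[1024 * 1024]);
        }
    }
}

Run it with a small heap, for example java -Xmx64m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp ForceOom, and an .hprof file should appear under /tmp as soon as the OutOfMemoryError is thrown.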
I used Eclipse MAT to analyze these files. It is pretty good because it will immediately point out the leak suspects.
I think you need to provide more details as to what the app is for and what it's doing. Are you just using JConsole to track this problem down?
I use VisualVM for tracking down these types of issues. See this link on how to use it for a memory leak and this one for the VisualVM main page.
I have the same problem. I investigated it two months ago, and the problem is in the Java 7.0 virtual machine. In my scenario, java.lang.management.MemoryUsage objects are hanging around and growing by hundreds of MB daily. All the other objects you see hanging around are referenced by those java.lang.management.MemoryUsage objects. These MemoryUsage objects only pile up on Java 7.0 and later versions; the behaviour appeared with Java 7 and was not there in earlier versions.
Most importantly, the MemoryUsage objects only start hanging around in memory after I use JConsole to connect to the server. When JConsole connects for the first time, it creates some MemoryUsage tracking mechanism, which starts creating MemoryUsage objects. Those objects are then used to draw the nice graphs in JConsole. This is all fine. BUT the problem is that Java 7 is buggy and never frees the memory: the MemoryUsage objects stay on the heap forever. It doesn't matter that you close JConsole; it will continue to grow afterwards. The first time you use JConsole to connect to a Java 7.0 process, you create the problem, and there is no solution. Just don't use JConsole (or any other memory monitoring tool), or don't use Java 7.
In my scenario I am doomed, because I have to use JConsole all the time, and Java 6 is not an option for me because there is another bug there which leaks memory through locking objects. I reported this bug to Oracle, but I have no idea whether they received it, know about it, or are working on it. I am just waiting for a newer version of Java so I can test it and stop restarting my server every few days.
I reported an issue to Oracle a couple of years ago where, in JDK 7, a memory leak would start the moment you connected JConsole. The leak would persist forever, even if you disconnected JConsole.
What was leaking? Objects relating to why the garbage collector ran, mostly strings in fact (like "Allocation Failure"). I only found the issue because I used YourKit, and in YourKit you can analyse the objects that are tagged as garbage collectable. Basically, the objects weren't referenced by anything in my application, but they weren't being collected by the garbage collector either.
Most heap dump tools remove garbage collectable objects from the analysis immediately. So YourKit was critical in pinpointing that the bug was really in the JVM.
I couldn't find my ticket, but I found other ones:
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7143760
http://bugs.java.com/bugdatabase/view_bug.do?bug_id=7128632
Related
I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that the memory usage increases quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so, it can reach up to 2GB.
Shortly after, it will generate an OutOfMemory error (the memory usage will be 2 GB - 2.5 GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, even down to the level of seeing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
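From what I've read, NMT is switched on with a JVM startup flag and then queried with jcmd at runtime, roughly like this (I haven't confirmed how much detail it gives; as I understand it, it tracks JVM-internal memory categories rather than individual application objects):

-XX:NativeMemoryTracking=summary
jcmd <pid> VM.native_memory summary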
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
quoting:
the process starts with ~200MB and after a week or so, it can reach up to 2GB. Shortly after it will generate OutOfMemory exception (the memory usage will be 2GB - 2.5GB).
The problem might not be as simple as seeing which Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on Java 8).
Alternatively, you could just wire up Nashorn to a command line or run your program via jjs: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
I think I may have a memory leak in a servlet application running in production on Jetty 8.1.7.
Is there a way of seeing how much heap memory is actually being used at a given instant in time, not the maximum allocated with -Xmx, but the actual amount of memory in use?
Can I force a garbage collection to occur for an application running within Jetty?
Yes, both are easily achievable using VisualVM (see: http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/monitor_tab.html). It ships with the Oracle JDK by default, so no extra installation is required.
However, for memory leak detection I'd suggest taking a memory dump and analyzing it later with Eclipse MAT ( http://www.eclipse.org/mat/ ), as it has a quite nice UI for visualizing Java memory dumps.
EDIT:
For SSH-only access, yes, you can use the two tools mentioned. However, you need to run them on a machine with a running window manager and connect over SSH to the other machine (you need to have Java on both of these machines):
For VisualVM: you need to have VisualVM running on one machine and connect via SSH to the remote one; see: VisualVM over ssh
And for the memory dump: use jmap (for sample usage see: http://kadirsert.blogspot.de/2012/01/…); afterwards, download the dump file and load it locally into Eclipse MAT.
Enable JMX and connect to it using JConsole:
http://wiki.eclipse.org/Jetty/Tutorial/JMX
You can call System.gc(). That will typically perform a full GC ... but this facility can be disabled. (There is a JVM option to do this with HotSpot JVMs.)
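To illustrate both points, here is a minimal sketch (my own illustration, using only the standard java.lang.management API) that reads the actual heap usage at a given instant and then requests a full GC:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapNow {
    public static void main(String[] args) {
        // Actual heap usage right now: used vs. committed vs. the -Xmx ceiling
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // Request a full GC; this is ignored if the JVM was started with -XX:+DisableExplicitGC
        System.gc();

        heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("after GC: used=%d MB%n", heap.getUsed() >> 20);
    }
}

The same MemoryMXBean is what JConsole and VisualVM read over JMX, so the numbers should match what their Monitor views show.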
However, if your problem is a memory leak, running the GC won't help. In fact, it is likely to make your server even slower than it currently is.
You can also monitor the memory usage (in a variety of ways - see other Answers), but that only gives you evidence that a memory leak might exist.
What you really need to do is find and fix the cause of the memory leak.
Reference:
How to find a Java Memory Leak
You can use jvisualvm.exe, which is under the %JAVA_HOME%\bin folder. With this application you can monitor memory usage and force a GC.
I have a Java web application running on Tomcat 7 that appears to have a memory leak. The average memory usage of the application increases linearly over time when under load (determined using JConsole). After the memory usage reaches the plateau, performance degrades significantly. Response times go from ~100ms to [300ms, 2500ms], so this is actually causing real problems.
JConsole memory profile of my application:
Using VisualVM, I see that at least half the memory is being used by character arrays (i.e. char[]) and that most (roughly the same number of each, 300,000 instances) of the strings are one of the following: "Allocation Failure", "Copy", "end of minor GC", all of which seem to be related to garbage collection notification. As far as I know, the application doesn't monitor the garbage collector at all. VisualVM can't find a GC root for any of these strings, so I'm having a hard time tracking this down.
Memory Analyzer heap dump:
I can't explain why the memory usage plateaus like that, but I have a theory as to why performance degrades once it does. If memory is fragmented, the application could take a long time to allocate a contiguous block of memory to handle new requests.
Comparing this to the built-in Tomcat server status application, its memory also increases and then levels off, but it doesn't hit a high "floor" like my application does. It also doesn't have the high number of unreachable char[].
JConsole memory profile of Tomcat server status application:
Memory Analyzer heap dump of the Tomcat server status application:
Where could these strings be allocated, and why are they not being garbage collected? Are there Tomcat or Java settings that could affect this? Are there specific packages that could be affecting this?
I removed the following JMX configuration from tomcat\bin\setenv.bat:
set "JAVA_OPTS=%JAVA_OPTS%
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=9090
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false"
I can't get detailed memory heap dumps anymore, but the memory profile looks much better:
24 hours later, the memory profile looks the same:
I would suggest using Memory Analyzer (MAT) for analyzing your heap; it gives far more information.
http://www.eclipse.org/mat/
There is a standalone application and an Eclipse-embedded one.
You just need to run jmap against your application and analyze the result with it.
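For example, a typical invocation looks like this (the PID and the output path are placeholders; note that adding ,live to the dump options forces a GC first and drops unreachable objects, which in this thread are exactly the interesting ones):

jmap -dump:format=b,file=/tmp/heap.hprof <pid>

You can then open /tmp/heap.hprof directly in Memory Analyzer.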
The plateau is caused by the available memory dropping below the default percentage threshold, which triggers a Full GC. This explains why the performance drops: the JVM is constantly pausing while it tries to find and free memory.
I would usually advise to look at object caches but in your case I think your Heap size is simply too low for a Tomcat instance + webapp. I would recommend increasing your heap to 1G (-Xms1024m -Xmx1024m) and then review your memory usage again.
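For reference, using the same setenv.bat mechanism shown earlier in this thread, that increase would look roughly like:

set "JAVA_OPTS=%JAVA_OPTS% -Xms1024m -Xmx1024m"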
If you still see the same kind of behaviour, then you should take another heap dump and look at the largest consumers after String and Char. In my experience this is usually caching mechanisms. Either increase your memory further or reduce the caching stores if possible. Some caches only define a maximum number of objects, so you need to understand how big each cached object is.
Once you understand your memory usage, you may be able to lower it again, but IMHO 512 MB would be a minimum.
Update:
You need not worry about unreachable objects as they should be cleaned up by the GC. Also, it's normal that the largest consumers by type are String and Char - most objects will contain some kind of String so it makes sense that Strings and Chars are the most common by frequency. Understanding what holds the objects that contains the Strings is the key to finding memory consumers.
I can recommend jvisualvm, which comes along with every Java installation. Start the program and connect to your web application. Go to Monitor -> Heap Dump. It may take some time (depending on the size).
Navigating the heap dump is quite easy, but you have to figure out the meaning yourself (it's not too complicated though), e.g.:
Go to Classes (within the heap dump), select java.lang.String, right-click and choose Show in Instances View. After that you'll see, in the table on the left, the String instances currently live in your system.
Click on one String instance and you'll see some details of that String in the upper-right part of the right-hand table, like the value of the String.
In the bottom-right part of the right-hand table you'll see where this String instance is referenced from. Here you have to check where most of your Strings are being referenced from. In your case (176/210, so a good probability of quickly finding some String examples that cause your problems), it should be clear after some inspection where the problem lies.
I just encountered the same problem in a totally different application, so Tomcat 7 is probably not to blame. Memory Analyzer shows 10M unreachable String instances in the process (which has been running for about 2 months), and most/all of them have values that relate to garbage collection (e.g., "Allocation Failure", "end of minor GC").
Memory Analyzer
A Full GC is now running every 2 seconds, but those Strings don't get collected. My guess is that we've hit a bug in the GC code. We use the following Java version:
$ java -version
java version "1.7.0_06"
Java(TM) SE Runtime Environment (build 1.7.0_06-b24)
Java HotSpot(TM) 64-Bit Server VM (build 23.2-b09, mixed mode)
and the following VM parameters:
-Xms256m -Xmx768m -server -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:NewSize=32m -XX:MaxNewSize=64m
-XX:SurvivorRatio=8 -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-Xloggc:/path/to/file
By accident, I stumbled across the following lines in our Tomcat's conf/catalina.properties file that activate String caching. This might be related to your case if you have any of them turned on. It seems others warn against using the feature.
tomcat.util.buf.StringCache.byte.enabled=true
#tomcat.util.buf.StringCache.char.enabled=true
#tomcat.util.buf.StringCache.trainThreshold=500000
#tomcat.util.buf.StringCache.cacheSize=5000
Try to use MAT, and make sure that when you parse the heap dump you do not drop the unreachable objects.
To do so, follow the tutorial here.
Then you can run a simple Mem Leak Analysis (This is a good tutorial)
That should quickly lead you to the root cause.
As this sounds unspecific, one candidate would have been JSF. But then I would have expected hash maps leaking too.
Should you use JSF:
In web.xml you could try:
javax.faces.STATE_SAVING_METHOD client
com.sun.faces.numberOfViewsInSession 0
com.sun.faces.numberOfLogicalViews 1
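Spelled out as web.xml context parameters (just a sketch of the entries the parameter names above refer to):

<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>
<context-param>
    <param-name>com.sun.faces.numberOfViewsInSession</param-name>
    <param-value>0</param-value>
</context-param>
<context-param>
    <param-name>com.sun.faces.numberOfLogicalViews</param-name>
    <param-value>1</param-value>
</context-param>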
As for tools: JavaMelody might be interesting for continuous statistics, but it needs some effort to set up.
Possible Duplicate:
Limit jvm process memory on ubuntu
In my application, I'm uploading documents to a server, which does some analysis on them.
Today I analyzed my application using jconsole.exe and heap dumps, trying to find out whether I have memory issues or a memory leak. I thought I might be suffering from one, since my application's RAM usage grows a lot while it is running.
As I watched the heap / code cache / perm gen etc. memory with JConsole after some runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in the JConsole view on the right, the heap increases when I'm doing analysis-related work, but it also decreases back to its normal size when the work is over. On the left you can see htop on the server the application is deployed on. And there it is: although the heap behaves normally and the garbage collector also seems to be running correctly, the RAM usage is incredibly high at almost 3.2 GB.
This is really confusing me. Could my Java VM stack have something to do with this? I did some research, and what I found described the VM stack as a small amount of memory of only a few megabytes (or even only KB).
My technical background:
The application is running on GlassFish v3.1.2
The database is running on MySQL
Hibernate is used as the ORM framework
Java version is 1.7.0_04
It's implemented using Vaadin
The MySQL database and GlassFish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and saving them in the database
Uploaded documents are either .txt or .pdf files
The OS is Linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised, since I thought the memory problems came from a memory leak that causes the heap to explode. But now the heap isn't the problem; it's the RAM that goes higher and higher while the heap stays at the same level. And I don't know what to do to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to really let other people use my application. When the RAM is full and the server doesn't respond anymore, I'm stuck.
Edit 2: Maybe I should also add that the RAM usage keeps increasing after every further successful analysis.
There are many more things that use memory in a JVM implementation than just the heap settings.
The heap setting via -Xmx only controls the Java heap; it doesn't control the JVM's consumption of native memory, which is used quite differently depending on the implementation.
From the following article, Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector uses native memory you can't control.
More native memory is required to maintain the state of the memory-management system maintaining the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static compiler such as gcc requires memory to run), but both the input (the bytecode) and the output (the executable code) from the JIT must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea: the Java heap isn't the only thing that consumes memory in a JVM implementation, not everything goes in the JVM heap, and the heap takes up more native memory than what you specify, for management and bookkeeping.
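To see part of that split from inside the JVM, here is a small sketch (my own illustration, using the standard java.lang.management API) that lists the heap and non-heap memory pools; it still won't show raw native allocations, but it makes clear that -Xmx only bounds the pools reported as heap:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPools {
    public static void main(String[] args) {
        // Heap pools (Eden, Survivor, Old Gen) are bounded by -Xmx;
        // non-heap pools (Perm Gen, Code Cache) and other native memory are not.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            long usedKb = (usage == null) ? 0 : usage.getUsed() >> 10;
            System.out.printf("%-25s %-15s used=%d KB%n",
                    pool.getName(), pool.getType(), usedKb);
        }
    }
}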
Native Code
App servers often have native code that runs outside the JVM but still shows up to the OS as memory associated with the process that controls the app server.
I'm trying to diagnose a PermGen memory leak problem in a Sun One 9.1 Application Server. In order to do that I need to get a heap dump of the JVM process. Unfortunately, the JVM process is version 1.5 running on Windows. Apparently, none of the ways for triggering a heap dump support that setup. I can have the JVM do a heap dump after it runs out of memory, or when it shuts down, but I need to be able to get heap dumps at arbitrary times.
The two often-mentioned ways of getting heap dumps are using jmap or using the HotSpotDiagnostic MBean. Neither of those supports JVM 1.5 on Windows.
Is there a method that I've missed? If there's a way to programmatically trigger a heap dump (without using the HotSpotDiagnostic MBean), that would do too...
If it's really not possible to do it in Windows, I guess I'd have to resort to building a Linux VM and doing my debugging in there.
Thanks.
There was a new HotSpot option introduced in Java 6, -XX:+HeapDumpOnOutOfMemoryError, which was actually backported to the Java 5 JVM.
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
Dump heap to file when java.lang.OutOfMemoryError is thrown. Manageable. (Introduced in 1.4.2 update 12, 5.0 update 7.)
It's very handy. The JVM lives just long enough to dump its heap to a file, then falls over.
Of course, it does mean that you have to wait for the leak to get bad enough to trigger an OutOfMemoryError.
An alternative is to use a profiler, like YourKit. This provides the means to take a heap snapshot of a running JVM. I believe it still supports Java5.
P.S. You really need to upgrade to Java 6...
If it's 1.5.0_14 or later, you can use -XX:+HeapDumpOnCtrlBreak and hit Ctrl-Break in the console.
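For readers on a newer JVM than the 1.5-on-Windows setup in the question: the HotSpotDiagnostic MBean mentioned above can also be driven programmatically on Java 6+. A minimal sketch (the output path is a placeholder):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        // Proxy to the HotSpot-specific diagnostic bean that the JVM registers itself
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // true = dump only live (reachable) objects, which triggers a full GC first
        diag.dumpHeap("/tmp/heap.hprof", true);
    }
}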