Given a heapdump or a running VM, how do I discover what the contents of the permanent generation are? I know about 'jmap -permstat' but that's not available on Windows.
This article, Memory Monitoring with Java SE 5, describes how to programmatically discover information on heap usage, memory pools (including permgen space) and so on. It's very simple:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

MemoryUsage usage = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
long nonHeapFree = usage.getMax() - usage.getUsed();   // getMax() can be -1 if no limit is set
long nonHeapTotal = usage.getMax();
In my testing on OSX (Sun VM) "non heap memory usage" matches the values returned for the permgen pool closely and presumably will do something useful on VMs that don't have permgen.
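If you want more detail than the aggregate non-heap number, the MemoryPoolMXBean API in java.lang.management lets you list the individual pools and pick out the permanent generation by name. A minimal sketch (note the pool name varies by JVM and collector, e.g. "Perm Gen", "PS Perm Gen", "CMS Perm Gen", or "Metaspace" on JVMs that no longer have a permgen):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PermGenProbe {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            // Pool names are JVM-specific; look for the one containing "Perm Gen".
            System.out.printf("%-20s type=%s used=%,d max=%,d%n",
                    pool.getName(), pool.getType(), u.getUsed(), u.getMax());
        }
    }
}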
The permanent generation contains the class objects.
So you should check the heap dump or other form of object list for classes.
If you have a problem with the size of the permanent generation, it is usually caused by one of two reasons:
your program or a library you use creates classes dynamically and the default size of the permanent generation is too small - simply increase the size with -XX:MaxPermSize=256m
your program or a library you use creates new classes dynamically every time it is called, so the size of the permanent generation increases without bound - this is a programming error you should fix (or search for a fix / create a bug report)
To see which case applies to you, check the size of the permanent generation over a longer period.
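To make that check concrete, here is a hedged sketch that polls the permgen pool once a minute: a pool that fills up and then stays flat just needs a bigger -XX:MaxPermSize, while one that climbs without bound points to a class-generation leak. The "Perm Gen" name match is an assumption; the exact name differs between collectors.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PermGenWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryPoolMXBean permGen = null;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Perm Gen")) {   // e.g. "PS Perm Gen", "CMS Perm Gen"
                permGen = pool;
            }
        }
        if (permGen == null) {
            System.out.println("No permanent generation pool found on this JVM");
            return;
        }
        while (true) {
            System.out.printf("permgen used: %,d of max %,d%n",
                    permGen.getUsage().getUsed(), permGen.getUsage().getMax());
            Thread.sleep(60000);   // log once a minute and watch the trend over hours
        }
    }
}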
And a good overview about permanent generation:
http://blogs.oracle.com/jonthecollector/entry/presenting_the_permanent_generation
See my blog post about the permsize of Eclipse.
In short, the Memory Analyzer can do it, but you need the SAP JVM.
Do you have a specific problem to solve? The use of String.intern() is one of the typical causes for permgen problems. Additionally projects with a lot of classes also have permgen problems.
I do not know how to get into the permgen and see what is there...
The permanent generation really only contains two kinds of things: class definitions and interned strings. The latter very rarely gives you problems, but it is often blamed for them. More often the former is the one causing trouble, due to code generation and partial hot reloading (dangling references).
Unlike the name suggests, permgen does eventually get GC'ed too, just not as part of the regular GC cycle. Hence unreferenced interned Strings and unused classes do get cleaned up.
But permgen also does not grow dynamically, which means it is sometimes necessary to manually resize it via the JVM's startup settings.
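A hedged illustration of the interned-string side: on Java 6 and earlier HotSpot, interned strings live in the permanent generation, so a loop like the one below eventually dies with OutOfMemoryError: PermGen space (on Java 7 and later, interned strings moved to the ordinary heap, so the failure mode changes).

import java.util.ArrayList;
import java.util.List;

public class InternFiller {
    public static void main(String[] args) {
        List<String> pinned = new ArrayList<String>();
        for (long i = 0; ; i++) {
            // Interning an endless stream of distinct values, and keeping them
            // reachable so the GC cannot drop them, steadily fills the pool.
            pinned.add(Long.toString(i).intern());
        }
    }
}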
I'm looking into the same thing but due to memory constraints of an embedded platform.
Look at the code for jmap: the permstat option is only available if the sun.jvm.hotspot.tools.HeapSummary class is available. This class is part of the serviceability agent. According to the OpenJDK documentation (http://openjdk.java.net/groups/serviceability/svcjdk.html#bsa):
Serviceability Agent components are built as part of the standard build of the HotSpot repository. These components are:
-libsaproc.so: this is the native code component of SA.
-sa-jdi.jar: This contains the Java classes of SA. It includes an implementation of JDI which allows JDI clients to do read-only debugging on core files and hung processes.
SA is used by jinfo, jmap, jstack
NOTE: The Serviceability Agent and the technologies that use it are not currently included in JDK releases on the Windows platforms.
Looks to be the case for Oracle JDK as well. I'm looking to modify the jmap tool to get more info.
One technique that helped me was to use the -verbose:class command-line option to java; you'll get log output telling you when classes are loaded/unloaded. Since classes are loaded into the permgen, this can help in certain circumstances.
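A programmatic complement to -verbose:class is the ClassLoadingMXBean, which exposes the same load/unload activity as counters; if the total loaded count keeps climbing while the unloaded count stays near zero, class definitions are accumulating in permgen. A minimal sketch:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("currently loaded classes: " + cl.getLoadedClassCount());
        System.out.println("total loaded classes:     " + cl.getTotalLoadedClassCount());
        System.out.println("total unloaded classes:   " + cl.getUnloadedClassCount());
    }
}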
You can use JConsole or jvisualvm (shipped with JDK 1.6/7) to find what is where. If you want to know how all of your objects are related to each other and see the tree of objects, then you might want to try the Eclipse Memory Analyzer -- http://www.eclipse.org/mat/.
In summary, you will get what you want from http://www.eclipse.org/mat/.
Good luck,
Related
I have a Java web application running on Tomcat 7 that appears to have a memory leak. The average memory usage of the application increases linearly over time when under load (determined using JConsole). After the memory usage reaches the plateau, performance degrades significantly. Response times go from ~100ms to [300ms, 2500ms], so this is actually causing real problems.
JConsole memory profile of my application:
Using VisualVM, I see that at least half the memory is being used by character arrays (i.e. char[]) and that most (roughly the same number of each, 300,000 instances) of the strings are one of the following: "Allocation Failure", "Copy", "end of minor GC", all of which seem to be related to garbage collection notification. As far as I know, the application doesn't monitor the garbage collector at all. VisualVM can't find a GC root for any of these strings, so I'm having a hard time tracking this down.
Memory Analyzer heap dump:
I can't explain why the memory usage plateaus like that, but I have a theory as to why performance degrades once it does. If memory is fragmented, the application could take a long time to allocate a contiguous block of memory to handle new requests.
Comparing this to the built-in Tomcat server status application, its memory also increases and levels off, but it doesn't hit a high "floor" like my application does. It also doesn't have the high number of unreachable char[].
JConsole memory profile of Tomcat server status application:
Memory Analyzer heap dump of Tomcat server status application:
Where could these strings be allocated and why are they not being garbage collected? Are there Tomcat or Java settings that could affect this? Are there specific packages that could be affecting this?
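One hedged guess about where such strings can come from: "Allocation Failure", "end of minor GC" and "Copy" look exactly like the gcCause/gcAction/gcName values carried by the JVM's GC notifications, which is what JMX monitoring tools such as JConsole and VisualVM subscribe to. The sketch below (assumes JDK 7u4+ and the com.sun.management API) prints those values from inside the monitored process, so you can compare them with what shows up in the heap dump:

import com.sun.management.GarbageCollectionNotificationInfo;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;

public class GcNotificationProbe {
    public static void main(String[] args) throws Exception {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            ((NotificationEmitter) gc).addNotificationListener(new NotificationListener() {
                public void handleNotification(Notification n, Object handback) {
                    if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                            .equals(n.getType())) {
                        GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                                .from((CompositeData) n.getUserData());
                        // Typical values: name "Copy", action "end of minor GC",
                        // cause "Allocation Failure" -- the same strings seen in the dump.
                        System.out.println(info.getGcName() + " / "
                                + info.getGcAction() + " / " + info.getGcCause());
                    }
                }
            }, null, null);
        }
        // Churn some memory so minor GCs actually happen.
        byte[][] garbage = new byte[1024][];
        for (int i = 0; ; i = (i + 1) % garbage.length) {
            garbage[i] = new byte[64 * 1024];
            Thread.sleep(1);
        }
    }
}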
I removed the following JMX configuration from tomcat\bin\setenv.bat:
set "JAVA_OPTS=%JAVA_OPTS%
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=9090
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false"
I can't get detailed memory heap dumps anymore, but the memory profile looks much better:
24 hours later, the memory profile looks the same:
I would suggest using Memory Analyzer for analyzing your heap; it gives far more information.
http://www.eclipse.org/mat/
There is a standalone application and an Eclipse-embedded one.
You just need to run jmap on your application and analyze the result with it.
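If you prefer to trigger the dump from inside the application rather than with jmap -dump:format=b,file=heap.hprof <pid>, a hedged alternative is the HotSpot-specific diagnostic bean shown below; the file path is just an example, and passing false keeps unreachable objects in the dump so Memory Analyzer can show them too.

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Sun/Oracle HotSpot only; other JVMs don't expose this bean.
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // second argument: true = live (reachable) objects only, false = include unreachable ones
        diag.dumpHeap("/tmp/webapp.hprof", false);
    }
}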
The plateau is caused by the available memory dropping below the default percentage threshold which causes a Full GC. This explains why the performance drops as the JVM is constantly pausing while it tries to find and free memory.
I would usually advise to look at object caches but in your case I think your Heap size is simply too low for a Tomcat instance + webapp. I would recommend increasing your heap to 1G (-Xms1024m -Xmx1024m) and then review your memory usage again.
If you still see the same kind of behaviour then you should take another heap dump and look at the largest consumers after String and Char. In my experience this is usually caching mechanisms. Either increase your memory further or reduce the caching stores if possible. Some caches only define a number of objects, so you need to understand how big each cached object is.
Once you understand your memory usage, you may be able to lower it again but IMHO 512MB would be a minimum.
Update:
You need not worry about unreachable objects as they should be cleaned up by the GC. Also, it's normal that the largest consumers by type are String and Char - most objects will contain some kind of String, so it makes sense that Strings and Chars are the most common by frequency. Understanding what holds the objects that contain the Strings is the key to finding the memory consumers.
I can recommend jvisualvm, which comes along with every Java installation. Start the program and connect to your web application. Go to Monitor -> Heap Dump. It may now take some time (depending on the size).
The navigation through the Heap Dump is quite easy, but the meaning you have to figure out yourself (not too complicated though), e.g.
Go to Classes (within the heap dump), select java.lang.String, right click and choose Show in Instances View. After that you'll see, in the table on the left, the String instances currently live in your system.
Click on one String instance and you'll see some of its properties in the upper-right part of the right table, like the value of the String.
In the bottom-right part of the right table you'll see where this String instance is referenced from. Here you have to check where most of your Strings are being referenced from. But in your case (176/210, a good probability of quickly finding some String examples which cause your problems) it should be clear after some inspection where the problem lies.
I just encountered the same problem in a totally different application, so tomcat7 is probably not to blame. Memory Analyzer shows 10M unreachable String instances in the process (which has been running for about 2 months), and most/all of them have values that relate to Garbage Collection (e.g., "Allocation Failure", "end of minor GC")
Memory Analyzer
Full GC is now running every 2s but those Strings don't get collected. My guess is that we've hit a bug in the GC code. We use the following java version:
$ java -version
java version "1.7.0_06"
Java(TM) SE Runtime Environment (build 1.7.0_06-b24)
Java HotSpot(TM) 64-Bit Server VM (build 23.2-b09, mixed mode)
and the following VM parameters:
-Xms256m -Xmx768m -server -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:NewSize=32m -XX:MaxNewSize=64m
-XX:SurvivorRatio=8 -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-Xloggc:/path/to/file
By accident, I stumbled across the following lines in our Tomcat's conf/catalina.properties file that activate String caching. This might be related to your case if you have any of them turned on. It seems others warn against using the feature.
tomcat.util.buf.StringCache.byte.enabled=true
#tomcat.util.buf.StringCache.char.enabled=true
#tomcat.util.buf.StringCache.trainThreshold=500000
#tomcat.util.buf.StringCache.cacheSize=5000
Try MAT and make sure that when you parse the heap dump you do not drop the unreachable objects.
To do so, follow the tutorial here.
Then you can run a simple Mem Leak Analysis (This is a good tutorial)
That should quickly lead you to the root cause.
As this sounds unspecific, one candidate would have been JSF. But then I would have expected hash maps leaking too.
Should you use JSF:
In web.xml you could try:
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>
<context-param>
    <param-name>com.sun.faces.numberOfViewsInSession</param-name>
    <param-value>0</param-value>
</context-param>
<context-param>
    <param-name>com.sun.faces.numberOfLogicalViews</param-name>
    <param-value>1</param-value>
</context-param>
As for tools: JavaMelody might be interesting for continual statistics, but needs effort.
Possible Duplicate:
Limit jvm process memory on ubuntu
In my application I'm uploading documents to a server, which does some analyzing on it.
Today I analyzed my application using jconsole.exe and heap dumps as I tried to find out whether I'm having memory issues / a memory leak. I thought I might suffer from one, since my application's RAM usage grows considerably while it is running.
As I watched the heap / code cache / perm gen etc. memory with jconsole after some runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in the jconsole view on the right, the heap increases when I'm doing analysis-related work, but it also decreases back to its normal size when the work is over. On the left you can see the "htop" output of the server the application is deployed on. And there it is: although the heap behaves normally and the garbage collector also seems to be running correctly, the RAM usage is incredibly high at almost 3.2 GB.
This is really confusing me. Could my Java VM stack have something to do with this? I did some research, and what I found described the VM stack as a small area of memory of only a few megabytes (or even only KB).
My technical background:
The application is running on glassfish v.3.1.2
The database is running on MySQL
Hibernate is used as ORM framework
Java version is 1.7.0_04
It's implemented using VAADIN
MySQL database and glassfish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and save them in the database
Uploaded documents are either .txt or .pdf files
OS is linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised at the moment, since I thought the memory problems came from a memory leak that causes the heap to explode. But now, the heap isn't the problem. It's the RAM that goes higher and higher while the heap stays at the same level. And I don't know what to do to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to really let other people use my application. When the RAM is full and the server no longer responds, I'm stuck.
Edit 2: Maybe I should also add that the RAM usage keeps increasing after every further successful analysis.
There are lots more things that use memory in a JVM implementation than the Heap Settings.
The Heap settings via -Xmx only controls the Java Heap, it doesn't control consumption of native memory by the JVM, which is consumed completely differently based on implementation.
From the following article, Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector use native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on Threads. I think you get the idea: the Java Heap isn't the only thing that consumes memory in a JVM implementation, not everything goes in the JVM heap, and the heap takes up more native memory than what you specify, for management and bookkeeping.
Native Code
App Servers many times have native code that runs outside the JVM but still shows up to the OS as memory associated with the process that controls the app server.
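A hedged way to see this gap for yourself is to compare what the JVM reports (heap plus non-heap) with what the operating system charges the process; on Linux the resident set size is in /proc/self/status, and the difference is roughly the native memory the article describes (GC bookkeeping, JIT output, classloader data, thread stacks, and any native libraries).

import java.io.BufferedReader;
import java.io.FileReader;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class FootprintProbe {
    public static void main(String[] args) throws Exception {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap used:     " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());

        // Linux-specific: what the OS actually counts against this process.
        BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                if (line.startsWith("VmRSS")) {
                    System.out.println("process RSS:   " + line.trim());
                }
            }
        } finally {
            r.close();
        }
    }
}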
I'm running a GWT+Hibernate app on Glassfish 3.1. After a few hours, I run out of PermGen space. This is without any webapp reloads. I'm running with -XX:MaxPermSize=256m -Xmx1024m.
I took the advice from this page, and found that I'm leaking tons of classes - all of my Hibernate models and all of my GWT RequestFactory proxies.
The guide referenced above says to "inspect the chains, locate the accidental reference, and fix the code". Easier said than done.
The classloader always points back to an instance of org.glassfish.web.loader.WebappClassLoader. Digging further, I find lots of references from $Proxy135 and similar-named objects. But I don't know how else to follow through.
On each redeployment, new class objects get placed into the PermGen and thus occupy an ever-increasing amount of space. Regardless of how large you make the PermGen space, it will inevitably top out after enough deployments. What you need to do is take measures to flush the PermGen so that you can stabilize its size. There are two JVM flags which handle this cleaning:
-XX:+CMSPermGenSweepingEnabled
This setting includes the PermGen in a garbage collection run. By default, the PermGen space is never included in garbage collection (and thus grows without bounds).
-XX:+CMSClassUnloadingEnabled
This setting tells the PermGen garbage collection sweep to take action on class objects. By default, class objects get an exemption, even when the PermGen space is being visited during a garbage collection.
There are some OK tools to help with this, though you'd never know it. The JDK (1.6 u1 and above) ships with jhat and jmap. These tools will help significantly, especially if you use the jhat JavaScript query support.
See:
http://blog.ringerc.id.au/2011/06/java-ee-application-servers-learning.html
http://blogs.oracle.com/fkieviet/entry/classloader_leaks_the_dreaded_java
http://www.mhaller.de/archives/140-Memory-leaks-et-alii.html
http://blogs.oracle.com/sundararajan/entry/jhat_s_javascript_interface
I "solved" this by moving to Tomcat.
(I can't view the link you provided as it's blocked by websense so if I'm restating anything I apologize)
It sounds like you have a classloader leak. These are difficult to track down. Add these options to the JVM Options in your instance configuration:
-XX:+PrintGCDetails
-XX:+TraceClassUnloading
-XX:+TraceClassLoading
Now when you run your app, you can look at the jvm.log located in your domain/logs folder and see what's loading and unloading. Most likely, you'll see the same class(es) loading over and over again.
A good culprit is JAXB, especially if you're creating a new JAXBContext over and over again.
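For the JAXB case, the pattern that has repeatedly been reported as a permgen eater is building a fresh JAXBContext per request; the context is thread-safe and meant to be built once. A hedged before/after sketch (SomeDto is a placeholder type):

import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.annotation.XmlRootElement;

public class JaxbUsage {

    @XmlRootElement
    public static class SomeDto {      // placeholder payload type
        public String name;
    }

    // Leak-prone: a new context (and its generated helper classes) on every call.
    static String marshalLeaky(SomeDto dto) throws Exception {
        JAXBContext ctx = JAXBContext.newInstance(SomeDto.class);
        StringWriter out = new StringWriter();
        ctx.createMarshaller().marshal(dto, out);
        return out.toString();
    }

    // Safer: build the context once and reuse it; marshallers are cheap to create.
    private static final JAXBContext SHARED_CTX;
    static {
        try {
            SHARED_CTX = JAXBContext.newInstance(SomeDto.class);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    static String marshalShared(SomeDto dto) throws Exception {
        StringWriter out = new StringWriter();
        SHARED_CTX.createMarshaller().marshal(dto, out);
        return out.toString();
    }
}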
I've been tasked with debugging a Java (J2SE) application which after some period of activity begins to throw OutOfMemory exceptions. I am new to Java, but have programming experience. I'm interested in getting your opinions on what a good approach to diagnosing a problem like this might be?
So far I've employed JConsole to get a picture of what's going on. I have a hunch that there are objects which are not being released properly and therefore not being cleaned up during garbage collection.
Are there any tools I might use to get a picture of the object ecosystem? Where would you start?
I'd start with a proper Java profiler. JConsole is free, but it's nowhere near as full featured as the ones that cost money. I used JProfiler, and it was well worth the money. See https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler for more options and opinions.
Try the Eclipse Memory Analyzer, or any other tool that can process a Java heap dump, and then run your app with the flag that generates a heap dump when you run out of memory (-XX:+HeapDumpOnOutOfMemoryError).
Then analyze the heap dump and look for suspiciously high object counts.
See this article for more information on the heap dump.
EDIT: Also, please note that your app may just legitimately require more memory than you initially thought. You might try increasing the java minimum and maximum memory allocation to something significantly larger first and see if your application runs indefinitely or simply gets slightly further.
The latest version of the Sun JDK includes VisualVM which is essentially the Netbeans profiler by itself. It works really well.
http://www.yourkit.com/download/index.jsp is the only tool you'll need.
You can take snapshots at (1) app start time, and (2) after running app for N amount of time, then comparing the snapshots to see where memory gets allocated. It will also take a snapshot on OutOfMemoryError so you can compare this snapshot with (1).
For instance, the latest project I had to troubleshoot threw OutOfMemoryError exceptions, and after firing up YourKit I realised that most memory was in fact being allocated to some ehcache "LFU" class; the point being that we specified loads of a certain POJO to be cached in memory, but did not specify enough -Xms and -Xmx (starting and maximum JVM memory allocation).
I've also used Linux's vmstat, e.g. some Linux platforms just don't have enough swap enabled or don't allocate contiguous blocks of memory; and then there's jstat (bundled with the JDK).
UPDATE see https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler
You can also set an UncaughtExceptionHandler on your application's threads. This will catch 'uncaught' exceptions, like an OutOfMemoryError, and you will at least have an idea where the exception was thrown. Usually this is not where the problem is, but rather the 'new' that couldn't be satisfied. As a rule I always add an UncaughtExceptionHandler to a Thread, if nothing else to add logging.
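A minimal sketch of that logging setup:

public class Main {
    public static void main(String[] args) {
        // Log anything that escapes a thread, including OutOfMemoryError, so there
        // is at least a stack trace pointing at the allocation that failed.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            public void uncaughtException(Thread t, Throwable e) {
                System.err.println("Uncaught throwable in thread " + t.getName());
                e.printStackTrace();
            }
        });
        // ... start the rest of the application here ...
    }
}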
I'm running Tomcat 6 on Sun's JRE 6 and every couple of deploys I get OutOfMemoryError: PermGen. I've done the Googling of PermGen solutions and tried many fixes. None work. I read a lot of good things about Oracle's JRockit and how its PermGen allocation can be gigs in size (compared to Sun's 128M) and while it doesn't solve the problem, it would allow me to redeploy 100 times between PermGen errors compared to 2 times now.
The problem with JRockit is to use it in production you need to buy WebLogic which costs thousands of dollars. What other (free) options exist that are more forgiving of PermGen expansion? How do the below JVMs do in this area?
IBM JVM
Open JDK
Blackdown
Kaffe
...others?
Update: Some people have asked why I thought PermGen max was 128M. The reason is because any time I try to raise it above 128M my JVM fails to initialize:
[2009-06-18 01:39:44] [info] Error occurred during initialization of VM
[2009-06-18 01:39:44] [info] Could not reserve enough space for object heap
[2009-06-18 01:39:44] [395 javajni.c] [error] CreateJavaVM Failed
It's strange that it fails trying to reserve space for the object heap, though I'm not sure it's "the" heap instead of "a" heap.
I boot the JVM with 1024MB initial and 1536MB max heap.
I will close this question since it has been answered, ie. "switching is useless" and ask instead Why does my Sun JVM fail with larger PermGen settings?
I agree with Michael Borgwardt in that you can increase the PermGen size; I disagree that it's primarily due to memory leaks. PermGen space gets eaten up aggressively by applications which make heavy use of Reflection. So basically if you have a Spring/Hibernate application running in Tomcat, be prepared to bump that PermGen space up a lot.
What gave you the idea that Sun's JVM is restricted to 128M PermGen? You can set it freely with the -XX:MaxPermSize command line option; the default is 64M.
However, the real cause of your problem is probably a memory leak in your application that prevents the classes from getting garbage collected; these can be very subtle, especially when ClassLoaders are involved, since all it takes is a single reference to any of the classes, anywhere. This article describes the problem in detail, and this one suggests ways to fix it.
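To make the "single reference, anywhere" point concrete, here is a deliberately hypothetical sketch of the classic shape of such a leak (the class and method names are made up for illustration):

import java.util.HashMap;
import java.util.Map;

// Imagine this class lives in a shared library loaded by the *container's* classloader.
public class SharedRegistry {
    private static final Map<String, Object> ENTRIES = new HashMap<String, Object>();

    public static void register(String key, Object value) {
        ENTRIES.put(key, value);   // never removed
    }
}
// If a webapp registers one of its own objects here, that object's Class keeps a
// reference to the WebappClassLoader, which in turn keeps every class the webapp
// loaded. After an undeploy the whole class footprint stays pinned in permgen, and
// a few redeploys later you hit OutOfMemoryError: PermGen space.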
Technically, the "PermGen" memory pool is a Sun JVM thing. Other JVMs don't call it that, but they all have the idea of one or more non-heap memory pools.
But if you have a problem with permgen in your Sun JVM, moving to another JVM is very unlikely to solve anything; it'll just manifest itself under a different name.
If multiple redeployments are causing your problems, just boost the VM's PermGen up to large values. We tried JRockit a while back because of this very problem, and it suffers from the same redeployment exhaustion. We moved back to the Sun JVM.
Changing the JVM is not a panacea. You can get new unexpected issues (e.g. see an article about launching an application under 4 different JVMs).
You can have a class leak (e.g. via classloaders) that most often happens on redeploy. Frankly, I've never seen hot redeploy work on Tomcat (I hope to see it one day).
You can have incorrect JVM parameters (e.g. for 64-bit Sun JDK 6 the -XX:+UseParNewGC switch leads to a leak in the PermGen segment of memory. If you add the additional switches -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled the situation will be resolved. Funny, but I never ran into the above-mentioned leak with 32-bit Sun JDK 6). See the article "Tuning JVM Garbage Collection for Production Deployments".
Your PermGen area can be too small to load classes and related information (actually this most often happens after a redeploy under Tomcat: old classes stay in memory while new ones are loading).
From my past experience, debugging that kind of leak is one of the most tricky kind of debugging that I've ever had.
[UPDATED]
A useful article on how to eliminate classloader leaks on application redeploy.
I use JRockit and I still get PermGen errors if I don't bump up (via -XX:MaxPermSize) the memory. I also can't get anything to work to avoid getting this (other than increasing it).
Perm gen is probably the simplest memory area to handle; I doubt there'd be much difference between the various VM implementations.
Make sure all those Tomcat configs that are marked "turn off in production" are turned off in production.
Yes, some frameworks do generate a lot of classes on the fly, but they should be cleaning up after themselves, and, in any case, you can fit more than a few classes in 128 MB.
Seriously, if perm gen keeps going up then that's a leak and it should be fixed, though it may not be your problem to fix.
The IBM JVM does not (and did not in 2009) have a permgen. You can read more about its Generational Concurrent Garbage Collector which is its default GC for Java 7.
I have sometimes run the Eclipse IDE on IBM JVM specifically because with my favorite plugins it would frequently fill up the HotSpot JVM's permgen. Sure, there was probably a memory leak that someone should have fixed, but meanwhile my IDE was not crashing and I was not busy experimenting with different settings.