What free JVM implementation has the best PermGen handling? - java

I'm running Tomcat 6 on Sun's JRE 6, and every couple of deploys I get OutOfMemoryError: PermGen space. I've done the Googling of PermGen solutions and tried many fixes. None work. I read a lot of good things about Oracle's JRockit and how its PermGen allocation can be gigabytes in size (compared to Sun's 128M), and while that doesn't solve the problem, it would let me redeploy 100 times between PermGen errors instead of 2 times now.
The problem with JRockit is that to use it in production you need to buy WebLogic, which costs thousands of dollars. What other (free) options exist that are more forgiving of PermGen expansion? How do the JVMs below do in this area?
IBM JVM
Open JDK
Blackdown
Kaffe
...others?
Update: Some people have asked why I thought the PermGen max was 128M. The reason is that any time I try to raise it above 128M, my JVM fails to initialize:
[2009-06-18 01:39:44] [info] Error occurred during initialization of VM
[2009-06-18 01:39:44] [info] Could not reserve enough space for object heap
[2009-06-18 01:39:44] [395 javajni.c] [error] CreateJavaVM Failed
It's strange that it fails trying to reserve space for the object heap, though I'm not sure it's "the" heap instead of "a" heap.
I boot the JVM with 1024MB initial and 1536MB max heap.
I will close this question since it has been answered (i.e. "switching is useless") and instead ask Why does my Sun JVM fail with larger PermGen settings?

I agree with Michael Borgwardt that you can increase the PermGen size, but I disagree that the problem is primarily due to memory leaks. PermGen space gets eaten up aggressively by applications that make heavy use of reflection. So basically, if you have a Spring/Hibernate application running in Tomcat, be prepared to bump that PermGen space up a lot.

What gave you the idea that Sun's JVM is restricted to 128M PermGen? You can set it freely with the -XX:MaxPermSize command line option; the default is 64M.
However, the real cause of your problem is probably a memory leak in your application that prevents the classes from getting garbage collected; these can be very subtle, especially when ClassLoaders are involved, since all it takes is a single reference to any of the classes, anywhere. This article describes the problem in detail, and this one suggests ways to fix it.
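For example, PermGen sizing is set on the JVM command line; a minimal sketch for a Tomcat setenv.bat (the file name and the 256m value are assumptions, not a recommendation for this particular app):
set "JAVA_OPTS=%JAVA_OPTS% -XX:PermSize=128m -XX:MaxPermSize=256m"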

Technically, the "PermGen" memory pool is a Sun JVM thing. Other JVMs don't call it that, but they all have the idea of one or more non-heap memory pools.
But if you have a problem with PermGen in your Sun JVM, moving to another JVM is very unlikely to solve anything; the problem will just manifest itself under a different name.
If multiple redeployments are causing your problems, just boost the VM's PermGen up to large values. We tried JRockit a while back because of this very problem, and it suffers from the same redeployment exhaustion. We moved back to the Sun JVM.

Changing JVM is not a panacea. You can get new, unexpected issues (e.g. see an article about launching an application under 4 different JVMs).
You can have a class leak (e.g. via classloaders), which most often happens on redeploy. Frankly, I've never seen hot redeploy work on Tomcat (I hope to see it one day).
You can have incorrect JVM parameters (e.g. with 64-bit Sun JDK 6, the -XX:+UseParNewGC switch leads to a leak in the PermGen segment of memory. If you add the additional switches -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled the situation is resolved; a combined example is sketched after this answer. Funnily enough, I never hit the above leak with 32-bit Sun JDK 6). Link to the article "Tuning JVM Garbage Collection for Production Deployments".
Your PermGen space may simply not be large enough to hold the loaded classes and related information (that most often happens after a redeploy under Tomcat: the old classes stay in memory while the new ones are loaded).
From my past experience, debugging that kind of leak is one of the trickiest kinds of debugging I've ever done.
[UPDATED]
Useful article on how to eliminate classloader leaks on application redeploy.
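For reference, a combined flag set along the lines mentioned above might look like this (values illustrative only; whether it helps depends on your JVM version and workload):
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:MaxPermSize=256m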

I use JRockit and I still get PermGen errors if I don't bump up the memory (via -XX:MaxPermSize). I also haven't found anything else that avoids the errors, other than increasing it.

PermGen is probably the simplest memory space to handle; I doubt there'd be much difference between the various VM implementations.
Make sure all those Tomcat settings that are documented as "turn off in production" really are turned off in production.
Yes, some frameworks do generate a lot of classes on the fly, but they should be cleaning up after themselves, and in any case you can fit more than a few classes in 128 MB.
Seriously, if PermGen keeps going up then that's a leak and it should be fixed, though it may not be your problem to fix.

The IBM JVM does not (and did not in 2009) have a permgen. You can read more about its Generational Concurrent Garbage Collector which is its default GC for Java 7.
I have sometimes run the Eclipse IDE on IBM JVM specifically because with my favorite plugins it would frequently fill up the HotSpot JVM's permgen. Sure, there was probably a memory leak that someone should have fixed, but meanwhile my IDE was not crashing and I was not busy experimenting with different settings.

Related

Memory leak in a Java web application

I have a Java web application running on Tomcat 7 that appears to have a memory leak. The average memory usage of the application increases linearly over time when under load (determined using JConsole). After the memory usage reaches the plateau, performance degrades significantly. Response times go from ~100ms to [300ms, 2500ms], so this is actually causing real problems.
JConsole memory profile of my application:
Using VisualVM, I see that at least half the memory is being used by character arrays (i.e. char[]), and that most of the strings (roughly the same number of each, 300,000 instances) are one of the following: "Allocation Failure", "Copy", "end of minor GC", all of which seem to be related to garbage collection notifications. As far as I know, the application doesn't monitor the garbage collector at all. VisualVM can't find a GC root for any of these strings, so I'm having a hard time tracking this down.
Memory Analyzer heap dump:
I can't explain why the memory usage plateaus like that, but I have a theory as to why performance degrades once it does. If memory is fragmented, the application could take a long time to allocate a contiguous block of memory to handle new requests.
Comparing this to the built-in Tomcat server status application, its memory increases and levels off, but doesn't settle at a high "floor" like my application does. It also doesn't have the high number of unreachable char[].
JConsole memory profile of Tomcat server status application:
Memory Analyzer heap dump of Tomcat server status application:
Where could these strings be allocated, and why are they not being garbage collected? Are there Tomcat or Java settings that could affect this? Are there specific packages that could be affecting this?
I removed the following JMX configuration from tomcat\bin\setenv.bat:
set "JAVA_OPTS=%JAVA_OPTS%
-Dcom.sun.management.jmxremote=true
-Dcom.sun.management.jmxremote.port=9090
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false"
I can't get detailed memory heap dumps anymore, but the memory profile looks much better:
24 hours later, the memory profile looks the same:
I would suggest using Memory Analyzer (MAT) to analyze your heap; it gives far more information.
http://www.eclipse.org/mat/
There is a standalone application and an Eclipse-embedded one.
You just need to run jmap against your application and analyze the result with it.
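For example (the PID and file name are placeholders; jps and jmap ship with the JDK):
jps -l
jmap -dump:format=b,file=heap.hprof <pid>
Then open heap.hprof in Memory Analyzer.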
The plateau is caused by the available memory dropping below the default percentage threshold which causes a Full GC. This explains why the performance drops as the JVM is constantly pausing while it tries to find and free memory.
I would usually advise to look at object caches but in your case I think your Heap size is simply too low for a Tomcat instance + webapp. I would recommend increasing your heap to 1G (-Xms1024m -Xmx1024m) and then review your memory usage again.
If you still see the same kind of behaviour, then you should take another heap dump and look at the largest consumers after String and char. In my experience this is usually caching mechanisms. Either increase your memory further or reduce the caching stores if possible. Some caches only define a number of objects, so you need to understand how big each cached object is.
Once you understand your memory usage, you may be able to lower it again but IMHO 512MB would be a minimum.
Update:
You need not worry about unreachable objects as they should be cleaned up by the GC. Also, it's normal that the largest consumers by type are String and Char - most objects will contain some kind of String so it makes sense that Strings and Chars are the most common by frequency. Understanding what holds the objects that contains the Strings is the key to finding memory consumers.
I can recommend jvisualvm, which comes along with every Java installation. Start the program and connect to your web application. Go to Monitor -> Heap Dump. It may take some time (depending on the size).
Navigating through the heap dump is quite easy, but you have to figure out the meaning yourself (not too complicated though), e.g.:
Go to Classes (within the heap dump), select java.lang.String, right-click and choose Show in Instances View. After that, the table on the left shows the String instances currently live in your system.
Click on one String instance and you'll see some of its properties in the upper part of the right-hand table, like the value of the String.
In the bottom part of the right-hand table you'll see where this String instance is referenced from. Here you have to check where most of your Strings are being referenced from. In your case (176/210, so a good probability of soon finding some String examples that cause your problems) it should become clear after some inspection where the problem lies.
I just encountered the same problem in a totally different application, so tomcat7 is probably not to blame. Memory Analyzer shows 10M unreachable String instances in the process (which has been running for about 2 months), and most/all of them have values that relate to Garbage Collection (e.g., "Allocation Failure", "end of minor GC")
Memory Analyzer
Full GC is now running every 2s but those Strings don't get collected. My guess is that we've hit a bug in the GC code. We use the following java version:
$ java -version
java version "1.7.0_06"
Java(TM) SE Runtime Environment (build 1.7.0_06-b24)
Java HotSpot(TM) 64-Bit Server VM (build 23.2-b09, mixed mode)
and the following VM parameters:
-Xms256m -Xmx768m -server -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
-XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:NewSize=32m -XX:MaxNewSize=64m
-XX:SurvivorRatio=8 -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails
-Xloggc:/path/to/file
By accident, I stumbled across the following lines in our Tomcat's conf/catalina.properties file that activate String caching. This might be related to your case if you have any of them turned on. It seems others have warned about using this feature.
tomcat.util.buf.StringCache.byte.enabled=true
#tomcat.util.buf.StringCache.char.enabled=true
#tomcat.util.buf.StringCache.trainThreshold=500000
#tomcat.util.buf.StringCache.cacheSize=5000
Try using MAT, and when you parse the heap dump make sure you do not drop the unreachable objects.
To do so, follow the tutorial here.
Then you can run a simple memory leak analysis (this is a good tutorial).
That should quickly lead you to the root cause.
As this sounds unspecific, one candidate would have been JSF. But then I would have expected hash maps leaking too.
Should you use JSF:
In web.xml you could try:
javax.faces.STATE_SAVING_METHOD client
com.sun.faces.numberOfViewsInSession 0
com.sun.faces.numberOfLogicalViews 1
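If it helps, those parameters are declared in web.xml roughly like this (a sketch of the first one; the others follow the same pattern):
<context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>client</param-value>
</context-param>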
As for tools: JavaMelody might be interesting for continual statistics, but needs effort.

Under what circumstances does Java performance degrade with more memory?

We're load testing a Java 1.6 application in our DEV environment. The JVM heap allocation is 2Gb, -Xms2048m -Xmx2048m. Under load testing, the app runs smooth, never uses more than 1.25Gb of heap, and garbage collection is totally normal.
In our UAT environment, we run the load test with the same parameters; the only difference is the JVM, which is allocated 4Gb, -Xms4096m -Xmx4096m. Otherwise the hardware is exactly the same as DEV. But during load testing, the performance is horrendous: the app eats up nearly the entire heap, and garbage collection runs rampant.
We've run these tests over and over again, eliminated all possible symptoms that may influence performance, but the results are the same. Under what circumstances can this be the case?
There is something different about your application in the Production and UAT environments.
Judging from the symptoms, it is (IMO) unlikely to be a hardware issue, an operating-system performance-tuning issue, or a difference in JVM versions. It goes without saying that it is unlikely to be due to the application simply having more memory.
(It is not inconceivable that your application might do something strange... like sizing some data structures based on the maximum heap size and getting the calculations wrong. But I think you'd be aware of that possibility, so let's ignore it for now.)
It is probably related to a difference in the OS environment; e.g. a different version of the OS or some application, differences in the networking, differences in locales, etcetera. But the bottom line is that it is 99% certain that there is a memory leak in your application when run on the UAT, and that memory leak is what is chewing up heap memory and overloading the GC.
My advice would be to treat this as a storage leak problem, and use the standard tools / techniques to track down the cause of the problem. In the process, you will most likely be able to figure out why this only occurs on your UAT.
The culprit could be garbage collection. Normal "stop-the-world" collection caused us some performance problems: the server software was running very slowly, yet the load on the server was also low. Eventually we found out that a single "stop-the-world" garbage collector thread was holding up the entire application under certain scenarios (operations producing loads of garbage).
Moving to the parallel collector, with the startup parameters -XX:+UseParallelOldGC -XX:ParallelGCThreads=8, alleviated the problem. We were using "only" 2 GB heaps in tests and production, but it is also worth noting that the time the GC takes goes up with a larger heap (even if your software never actually uses all of it).
You might want to read more about the different garbage collector options and tuning here: Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.
Also, answers in this question could provide some help: Java very large heap sizes.
It will be worthwhile to analyze heap dumps from both machines and understand what is consuming the heap differently in the two environments. Histograms will help; see the sketch below.
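One quick way to get such a histogram per environment (the PID is a placeholder) is:
jmap -histo:live <pid> > histo.txt
This lists the classes with the most live instances and their shallow sizes, which you can then diff between DEV and UAT.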

PermGen space issue with Glassfish/Hibernate

I'm running a GWT + Hibernate app on Glassfish 3.1. After a few hours, I run out of PermGen space. This is without any webapp reloads. I'm running with -XX:MaxPermSize=256m -Xmx1024m.
I took the advice from this page, and found that I'm leaking tons of classes- all of my Hibernate models and all of my GWT RequestFactory proxies.
The guide referenced above says to "inspect the chains, locate the accidental reference, and fix the code". Easier said than done.
The classloader always points back to an instance of org.glassfish.web.loader.WebappClassLoader. Digging further, I find lots of references from $Proxy135 and similar-named objects. But I don't know how else to follow through.
On each deployment, new class objects get placed into the PermGen and thus occupy an ever-increasing amount of space. Regardless of how large you make the PermGen space, it will inevitably top out after enough deployments. What you need to do is take measures to flush the PermGen so that you can stabilize its size. There are two JVM flags which handle this cleaning:
-XX:+CMSPermGenSweepingEnabled
This setting includes the PermGen in a garbage collection run. By default, the PermGen space is never included in garbage collection (and thus grows without bounds).
-XX:+CMSClassUnloadingEnabled
This setting tells the PermGen garbage collection sweep to take action on class objects. By default, class objects get an exemption, even when the PermGen space is being visited during a garbage collection.
There are some OK tools to help with this, though you'd never know it. The JDK (1.6 u1 and above) ships with jhat and jmap. These tools will help significantly, especially if you use the jhat JavaScript query support.
See:
http://blog.ringerc.id.au/2011/06/java-ee-application-servers-learning.html
http://blogs.oracle.com/fkieviet/entry/classloader_leaks_the_dreaded_java
http://www.mhaller.de/archives/140-Memory-leaks-et-alii.html
http://blogs.oracle.com/sundararajan/entry/jhat_s_javascript_interface
I "solved" this by moving to Tomcat.
(I can't view the link you provided as it's blocked by websense so if I'm restating anything I apologize)
It sounds like you have a classloader leak. These are difficult to track down; add these options to the JVM options in your instance configuration:
-XX:+PrintGCDetails
-XX:+TraceClassUnloading
-XX:+TraceClassLoading
Now when you run your app, you can look at the jvm.log located in your domain/logs folder and see what's loading and unloading. Most likely, you'll see the same class(es) loading over and over again.
A good culprit is JAXB, especially if you're creating a new JAXBContext over and over again; see the sketch below.
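As a hedged illustration of that pattern and the usual fix (the DTO class here is made up): JAXBContext is thread-safe, so it can be created once and reused instead of per request.
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.annotation.XmlRootElement;

public final class JaxbHolder {

    // Placeholder payload type, for illustration only.
    @XmlRootElement
    public static class SomeDto {
        public String name;
    }

    // Anti-pattern: calling JAXBContext.newInstance(...) on every request keeps
    // creating contexts and generated classes, which can pin the webapp classloader.
    // Fix: create the context once and reuse it.
    private static final JAXBContext CONTEXT;

    static {
        try {
            CONTEXT = JAXBContext.newInstance(SomeDto.class);
        } catch (JAXBException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private JaxbHolder() {
    }

    public static JAXBContext context() {
        return CONTEXT;
    }
}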

Memory Leak in a Java based application

A memory leak happens in an application when a short-lived object holds a long-lived object.
My question is how we can identify:
1) which objects live longer and which live shorter? Is there any tool which measures the lifetime of an object?
Second question:
I am constantly getting OutOfMemoryError, and I tried increasing the heap to 2 GB, but I still get it. Please suggest an open-source tool with which I can identify the memory leak issue and fix it.
At present I am restarting the server every time as a temporary solution, but please suggest something I can fix permanently.
You can use the VisualVM tool included in the JDK:
http://download.oracle.com/javase/6/docs/technotes/tools/share/jvisualvm.html
Documentation available here:
https://visualvm.dev.java.net/docindex.html
There are 2 options:
It may just be that your application doesn't have enough heap allocated. Measure the size of your input and give the application a correspondingly sized heap;
There's a memory leak: take a profiler, examine your heap, find objects which shouldn't be there or of which there are too many ("short-lived objects", in your terms), identify which "long-lived" object holds them, and fix that. You need to know your code to understand which objects must be short-lived and which must be long-lived.
I've found the Heap Walker in NetBeans very useful.
As said, jvisualvm has good tools for analyzing the heap live.
But you can also use jvisualvm or -XX:+HeapDumpOnOutOfMemoryError to write a heap dump to a file, then take the file to your desktop and open it in Eclipse Memory Analyzer (MAT), which is even better for analyzing memory.
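For example (the dump path is a placeholder), these flags make the JVM write a dump automatically on the next OutOfMemoryError, ready to be opened in MAT:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/dumps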
Out of memory occurs on a server because it literally uses up all the memory it's allowed to have. I'm not sure which application you're using to host the server, but for Apache you need to add something like -Xmx512m, where 512 is the maximum number of megabytes it's allowed to use.
If you leave the application running long enough, it's going to happen. This isn't because of memory leaks in Java but because of the server itself, which has a tendency to do so. You can't change that behavior, but you can at least increase the default memory of 256 MB. With the heavily loaded site that I work on every day, 256 MB lasts about 30 minutes for me, unfortunately. I've found that 1024 MB is reasonable and it rarely crashes due to out-of-memory errors.
It would strike me as very unusual for Java to be incapable of garbage collecting correctly unless the programmer had a hand in overriding typical functionality.
I think you can track memory leaks with jconsole (which comes shipped with JDK 6, if I'm not mistaken).
A short-lived object holding a reference to a long-lived object will not cause problems (a good overview, including generational garbage collection).
2 GB is an awful lot of objects/references. If you're running out of heap space at 2 GB, you're likely holding onto massive amounts of data and/or keeping resources open after you're done with them. You should post, at the very least, a description of what your application does and how long it takes to die.
You can get some sense of what's happening quickly by watching the garbage collector (e.g. run with "-verbose:gc" which will tell you when the garbage collector is running and how much it collects).

Strategies for the diagnosis of Java memory issues

I've been tasked with debugging a Java (J2SE) application which after some period of activity begins to throw OutOfMemory exceptions. I am new to Java, but have programming experience. I'm interested in getting your opinions on what a good approach to diagnosing a problem like this might be?
Thus far I've employed JConsole to get a picture of what's going on. I have a hunch that there are objects which are not being released properly and therefore not being cleaned up during garbage collection.
Are there any tools I might use to get a picture of the object ecosystem? Where would you start?
I'd start with a proper Java profiler. JConsole is free, but it's nowhere near as full featured as the ones that cost money. I used JProfiler, and it was well worth the money. See https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler for more options and opinions.
Try the Eclipse Memory Analyzer, or any other tool that can process a Java heap dump, and then run your app with the flag that generates a heap dump when you run out of memory (-XX:+HeapDumpOnOutOfMemoryError).
Then analyze the heap dump and look for suspiciously high object counts.
See this article for more information on the heap dump.
EDIT: Also, please note that your app may just legitimately require more memory than you initially thought. You might try increasing the Java minimum and maximum memory allocation to something significantly larger first, and see whether your application runs indefinitely or simply gets slightly further.
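For instance (the sizes here are only an example, and yourapp.jar is a placeholder for however the application is actually launched):
java -Xms1024m -Xmx2048m -jar yourapp.jar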
The latest version of the Sun JDK includes VisualVM which is essentially the Netbeans profiler by itself. It works really well.
http://www.yourkit.com/download/index.jsp is the only tool you'll need.
You can take snapshots at (1) app start time and (2) after running the app for N amount of time, then compare the snapshots to see where memory gets allocated. It will also take a snapshot on OutOfMemoryError, so you can compare this snapshot with (1).
For instance, the latest project I had to troubleshoot threw OutOfMemoryError exceptions, and after firing up YourKit I realised that most memory was in fact being allocated to some ehcache "LFU" class; the point being that we had specified loads of a certain POJO to be cached in memory, but had not specified enough -Xms and -Xmx (starting and maximum JVM memory allocation).
I've also used Linux's vmstat (e.g. some Linux platforms just don't have enough swap enabled, or don't allocate contiguous blocks of memory), and there's also jstat, bundled with the JDK; see the example below.
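A typical jstat invocation (the PID is a placeholder) samples GC utilisation every second:
jstat -gcutil <pid> 1000
The columns show eden/survivor/old/perm occupancy percentages plus cumulative GC counts and times, which makes a steadily growing old gen or PermGen easy to spot.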
UPDATE see https://stackoverflow.com/questions/14762/please-recommend-a-java-profiler
You can also add an UncaughtExceptionHandler to your application's threads. This will catch 'uncaught' exceptions, like an OutOfMemoryError, and you will at least have an idea where the exception was thrown. Usually that is not where the problem is, but rather the 'new' that couldn't be satisfied. As a rule I always add an UncaughtExceptionHandler to a Thread, if for nothing else than to add logging.
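A minimal sketch of that idea (the logging here is simplified to stderr):
public class Main {
    public static void main(String[] args) {
        // Install a default handler so any thread that dies with an uncaught
        // Throwable (including OutOfMemoryError) at least logs where it happened.
        Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                System.err.println("Uncaught exception in thread " + t.getName());
                e.printStackTrace();
            }
        });

        // ... start the rest of the application here ...
    }
}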
