PermGen out of memory - Java

I'm running Tomcat for an enterprise-level app and have been getting "PermGen out of memory" messages.
I am running this on:
Windows 2008 R2 server,
Java 1.6.0_43,
Running Tomcat as a service.
No multiple deployments. The service is started and the app runs; eventually I get PermGen errors.
I can delay the errors by increasing the perm size, but I'd like to actually fix the problem. The vendor is disowning the issue. I don't know if it is a memory leak, as the vendor simply says it "runs fine with JRockit". Of course, that would have been nice to have in the documentation, like three months ago. Plus, some posts suggest that JRockit just expands perm space to fit, up to 4 GB if you have the memory (not sure that is accurate...).
Anyway, I see some posts for a potential fix in Java 1.5 with the options
"-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
However, these seem to have been deprecated in Java 1.6, and now the only GC that seems to be available is "-XX:+UseG1GC".
The best link I could find, anywhere, is:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#G1Options
Does anyone know if the new G1 garbage collector includes the perm space? Or am I missing an option or two in the new Java 6 GC settings that I am not understanding?
Any help appreciated!

I wouldn't just increase the PermGen space, as this error is usually a sign of something wrong in the software/setup. Is there a specific webapp that causes this? Without more info, I can only give basic advice.
1) Use the memory leak detector (Tomcat 6+) called Find Leaks
2) Turn off auto-deployment (see the sketch after this list)
3) Move JDBC drivers and logging software to the Java classpath instead of Tomcat's, per this blog entry
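For point 2, a hedged sketch of what that change looks like in Tomcat's conf/server.xml (the attribute values are illustrative; adjust your existing Host element rather than copying this verbatim):

    <Host name="localhost" appBase="webapps"
          unpackWARs="true"
          autoDeploy="false"
          deployOnStartup="true">
        <!-- autoDeploy="false" stops Tomcat from re-deploying whenever webapps/ changes -->
    </Host>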

In earlier versions of Sun Java 1.6, the CMSPermGenSweepingEnabled option is functional only if UseConcMarkSweepGC is also set. See these answers:
CMSPermGenSweepingEnabled vs CMSClassUnloadingEnabled
What does JVM flag CMSClassUnloadingEnabled actually do?
I don't know if it's functional in later versions of 1.6 though.
A common cause for these errors/bugs in the past was dynamic class generation, particularly by libraries and frameworks that created dynamic proxies or used aspects. Subtle misuse of Spring and Hibernate (or more specifically cglib and/or AspectJ) was a common culprit. The underlying issue was that new dynamic classes were getting created on every request, eventually exhausting PermGen space. The CMSPermGenSweepingEnabled option was a common workaround/fix. Recent versions of those frameworks no longer have the problem.
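If you want to experiment with that combination on the setup from the question, a hedged sketch of the JVM options (added to the Tomcat service's Java options or CATALINA_OPTS; the sizing value is a placeholder and behaviour varies across 1.6 updates):

    -XX:MaxPermSize=256m -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled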

How to enable the Java HotSpot VM compiler

I am using Java 1.8.0_05, Java HotSpot(TM) 64-Bit Server VM.
I am running a Java web app on Tomcat 8.0.43.
I recently deployed my .war file by dropping it in the webapps folder.
This resulted in the following message being logged:
Java HotSpot(TM) 64-Bit Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) 64-Bit Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
CodeCache: size=245760Kb used=244058Kb max_used=244079Kb free=1701Kb
bounds [...]
total_blobs=48344 nmethods=47669 adapters=584
compilation: disabled (not enough contiguous free space left)
How can I check what the current status of the compiler is now, to see if it's still disabled?
How can I enable the compiler? Can I simply restart tomcat?
There doesn't seem to be any noticeable difference in how my application is running (e.g. in terms of speed).
Interestingly, I didn't get this message when deploying the same application to an identical server. This is why I would like to first just turn the compiler back on rather than changing settings (e.g. ReservedCodeCacheSize) as the message recommends.
Then, if the problem persists I can see which settings I need to change.
Addressing your individual questions + 1 recommendation:
How to check if the JIT compiler is still disabled?
The easiest thing to do is to start up jvisualvm (already shipped with the JDK), then check the used code cache space. If your code cache is full, the JIT compiler will remain disabled. To check the Code Cache memory space:
Install the MBeans JVisualVM plugin.
Go to MBeans.
Open java.lang/MemoryPool/Code Cache.
Check the "Usage" attribute (double-click).
This will give you an overview of where you are.
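If you would rather check this from code than from JVisualVM, here is a minimal sketch using the standard MemoryPoolMXBean and CompilationMXBean APIs (it reports on whichever JVM it runs in, so inside Tomcat you would expose it through a servlet/JSP or use the MBeans plugin as above; the pool name "Code Cache" is what HotSpot 8 uses and may differ on other JVMs):

    import java.lang.management.CompilationMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class CodeCacheCheck {
        public static void main(String[] args) {
            // Name of the JIT compiler, if one is available at all.
            CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
            System.out.println("Compiler: " + (jit != null ? jit.getName() : "none (interpreted only)"));

            // Find the code cache pool and compare used vs. max:
            // a (nearly) full pool means the JIT has stopped compiling.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Code Cache")) {
                    System.out.printf("%s: used=%dKb max=%dKb%n", pool.getName(),
                            pool.getUsage().getUsed() / 1024, pool.getUsage().getMax() / 1024);
                }
            }
        }
    }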
How can I enable the compiler? Can I simply restart tomcat?
Yes, a restart will certainly reset the state of the cache. The only other way to re-enable the compiler would have been to start the JVM with the right parameters in the first place (enabling -XX:+UseCodeCacheFlushing).
No difference in how my application is running?
JIT optimizes your code, but depending on your application and the way you use it, you might not see any noticeable difference. Assuming you run a webapp (because of Tomcat), the network transmission speed or your browser rendering pages are likely orders of magnitude slower than what JIT gains you in terms of core Java speed.
"I didn't get this message when deploying the same application"
JIT compiling depends on the code that is being executed at that moment. The same application might run quite differently under the hood at the level where JIT works. When it comes to low-level functions, the more third-party libraries you use, the less you can be sure about what is happening on all those threads you have no control over.
The suggestion:
Please upgrade that Java version. Being on such an early JDK 8 version (u05) is very rare, and quite risky. Java 8 was not the most stable release when it came out, and had easily reproducible bugs even in later releases. There have been over 1000 bugs fixed in JDK 8, many of which directly addressed JIT issues. If you have any control over the environment you are talking about, upgrade it. If you do not, notify the responsible person.
I had this issue a while ago, and this is what I can tell you:
Once the Code Cache becomes full the compiler is automatically disabled.
Will it be automatically restarted?
No. And it will stay down until the JVM is restarted.
Can I simply restart tomcat?
Yes. But it will probably happen again.
There doesn't seem to be any noticeable difference in how my application is running (e.g. in terms of speed).
In the long run there will be some issues since code that could be cached and optimized can no longer be compiled and stored there.
What can you do?
You could increase -XX:ReservedCodeCacheSize a bit.
You could enable -XX:+UseCodeCacheFlushing. The drawback is that if your code cache size is way too low and you constantly hit the flushing threshold, performance will suffer, since you are spending CPU resources on the flushing process.
I would increase the code cache size a bit, enable the flush, and monitor the app with VisualVM or something that lets you look at the current state of the code cache. Monitoring will help you understand whether you are reaching the thresholds once in a while or all the time.
Remember that the code cache is separate from the heap, so looking at the heap size won't help you.
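A hedged example of what that could look like among the Tomcat JVM options (512m is just an arbitrary starting point above the ~240m default shown in the warning, not a recommendation for every app):

    -XX:ReservedCodeCacheSize=512m -XX:+UseCodeCacheFlushing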
Edit:
Regarding VisualVM, here are the official steps to connect to a remote JVM:
https://docs.oracle.com/javase/8/docs/technotes/guides/visualvm/applications_remote.html
Just make sure JMX is enabled and it should work right away.
Regarding the issue with many apps running at the same time... well, yes: technically a standard Tomcat starts one JVM for all the apps, so the code cache space is shared.
You could also monitor this case by attaching VisualVM to the JVM, undeploying an app, and checking whether the space has been freed.
You could also consider using an enterprise container that lets you run one JVM per app.

What are the risks of updating the JRE for a desktop application on Windows

I'm planning to update the JRE to the latest version for a Java application on Windows. The application runs on Windows 7 and JRE 7u17.
I updated it without any issues, but I have these two questions:
Are there risks I should consider while upgrading? Is there a better way to test whether the application still runs the same as it did on JRE 7?
Thanks in advance.
There is no inherent risk, but there are things to take care of when upgrading from 7 to 8.
In my personal experience I found the following:
I had to update all the frameworks that perform class-level operations (Spring, tapestry-plastic, etc.), and some of them had API changes as well, which meant a sizeable change to the code base.
Apart from the language side, there are some changes in the VM too. For example, Metaspace was introduced and there is no more PermGen space; some things that used to live in PermGen moved to the heap, so you might have to re-tune your JVM. There are other things in the new JVM you could take advantage of as well.
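As a hedged illustration of that re-tuning (the sizes are placeholders, not recommendations): the PermGen sizing flags are ignored on Java 8, and the Metaspace equivalents take their place.

    Java 7 (PermGen)           Java 8 (Metaspace)
    -XX:PermSize=256m          -XX:MetaspaceSize=256m
    -XX:MaxPermSize=512m       -XX:MaxMetaspaceSize=512m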

NetBeans/Glassfish and PermGen space bug when redeploying (yes, STILL happening)

I know this has probably been asked many times before, but I still haven't seen an actual fix for it.
My day-to-day development environment is as follows:
1. NetBeans (latest), 2. Glassfish (latest as bundled with NB), 3. JPA, JSF, JAXB, Jersey for JAX-RS
I have about 600 classes in my project, spread across two EJB projects and one WAR project, all inside an EAR.
I am on the latest JDK 7 (on OS X), and on an hourly basis I get the infamous "PermGen space" error. Let's say I am doing 3 incremental re-deploys a minute; I can only work for a short while before either:
Glassfish runs out of PermGen space, so I just have to kill the process, or
Deployment becomes extremely slow, because I have increased the max PermGen space (as one is advised to do in dozens of answers on S.O.).
Often the only solution is to kill Glassfish every 30 minutes or so. It's definitely due to a bug somewhere that simply loads new classes for every incremental re-deploy instead of getting rid of the old ones. I thought this was supposed to be fixed in JDK 7?
This has been a long-standing bug in this kind of development environment, and I am rather shocked that it's still going on after my 5+ years of Java development. It's just so frustrating and incredibly unproductive.
(Just before anyone suggests increasing the PermGen space: believe me, I've tried that, and the only thing it "solves" is to prolong the inevitable. I've seen redeployments take up to 400 seconds at their worst. Redeployment is supposed to take 5-6 seconds for a project this size, no more.)
EDIT: I ran jmap and jhat on the Glassfish process after the following steps:
Start glassfish
Deploy my EAR
Undeploy my EAR
Then did a heap dump with jmap
It turns out that all my classes (which should have been unloaded) are still loaded! Hopefully this is useful information to someone reading this...
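For reference, a minimal sketch of the commands behind a dump like this (the PID and file name are placeholders; jhat then serves its report on port 7000 by default, where you can browse the loaded classes):

    jmap -dump:format=b,file=glassfish-heap.hprof <glassfish-pid>
    jhat glassfish-heap.hprof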
Surely, that is a bug, and I don't think there is an easy solution for it. (If there were, you would probably have found it already.)
What you can try: use a hot code replacement tool, for example JRebel. This way you don't have to redeploy all the time; instead, the tool watches for changes to the .class files (and even other web resources, if you configure it so) and replaces the class definitions within the running JVM. Sounds cool, right?
It works as a Java agent, it starts when your JVM starts.
There are 3 drawbacks to this solution: deployment is a bit slower, it's harder to debug, and it's proprietary software (but it does not cost much).
When developing with Netbeans + Glassfish and using "Deploy on Save" we've found that libraries packaged within an application are not unloaded when the project is re-deployed; this causes GF to slow down and quickly run out of memory.
Try de-selecting "Package" for all compile-time libraries and place those not already in the Glassfish classpath in the domainX/lib directory.
Not sure but this may be related to GLASSFISH-17449 or GLASSFISH-16283.

Java Webapp Performance Issues

I have a web application made entirely with Java. The webapp doesn't use any graphical/model framework; instead, it follows the Model-View-Controller pattern and is built only on the Servlet specification (Servlet 2.4).
The webapp has been in development since 2001 and is very complex. It was initially built to work with Tomcat 4.x/5.x; it currently runs on Tomcat 6.x, but we are still having memory leaks.
In depth, the specifications of the webapp can be summarized as:
Uses the Servlet 2.4 specification
It doesn't use any framework
It doesn't use Java EE (no EJB)
It's based on Java SE (with servlets)
Works only on IE 6+ (because of its age)
Infrastructure Specification
Currently, the webapp runs in three environments:
First
IBM Server (I don't remember exactly the model)
Intel Xeon 2.4 GHz
32GB RAM
1TB HDD
Tomcat (Version 6) is configured to use 8GB of RAM
Second
Dell Server
Intel Xeon 2.0 GHz
4GB RAM
500GB HDD
Tomcat (Version 5.5) is configured to use 1.5GB of RAM
Third
Dell Server
AMD Opteron 1214 2.20 GHz
4GB RAM
320GB HDD
Tomcat (Version 6) is Configured to use 1.5GB of RAM
Database specification
The webapp uses SQL Server 2008 R2 Express Edition as its DBMS, except for the first server specification, which uses SQL Server 2008 R2 Standard Edition. For connection pooling, the app uses Apache DBCP.
Problem
Well, it has very serious performance issues. The webapp slows down continually and many times denies service altogether. The only way to recover the app is to restart the Apache Tomcat service.
During a performance audit, I found several programming issues (like database connections that are never closed, and excessive use of the Vector collection instead of ArrayList).
I want to know how I can improve the app's performance, and which applications can help me monitor Tomcat's performance and the webapp's memory usage.
All suggestions are gladly accepted.
You could also try stagemonitor. It is an open source performance monitoring library. It records request response times, JVM metrics, request details including a call stack (profile) of the called methods during the request and more. Because of the low overhead, you can also use it in production.
The tuning procedure would be the following.
Identify slow requests with the Request Dashboard
Analyze the stack trace of the request with the Request Detail Dashboard to find out about slow methods
Dive into your code and try to optimize those slow methods
You can also correlate some metrics, like the throughput or number of sessions, with the response time or CPU usage
Analyze the heap with the JVM Memory Dashboard
Note: I am the developer of stagemonitor.
I would start with some tools that can help you profile the application. Since you are developing a webapp, start with Lambda Probe and JavaMelody.
The first step is to determine the conditions under which the app starts to behave oddly. Ask yourself a few questions:
Do performance issues arise right after the application starts, or over time?
Are performance issues correlated with the quantity of client requests?
What is the real performance problem: high load on the server or lack of memory? (Note that they are related, so check which one starts first.)
Are there any background processes which are performing some massive operations? Are they scheduled to run at some particular time period?
Try to find some clues before going deep into code. It will help you to narrow down possible causes.
As Joshua Bloch states in his book Effective Java, performance issues are rarely the effect of minor mistakes in source code (although, of course, misuse of Java constructs can lead to disaster). Usually the cause is bad system (API) architecture.
The last suggestion, based on my experience: try not to assume that high memory consumption is something bad. Tomcat will use as much memory as the operating system and the JVM let it (not more than the max settings), and only when it needs more will it perform garbage collection. So a typical (healthy!) graph of memory consumption looks like a saw-tooth. If you are dealing with a memory leak, the baseline of the graph keeps increasing indefinitely. This is the most often misunderstood aspect of memory leaks, so keep it in mind.
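If you want to record that curve yourself rather than eyeball it in a monitoring tool, here is a minimal sketch using the standard MemoryMXBean (the one-minute interval is arbitrary; it measures whichever JVM it runs in, so for Tomcat you would start it from a background thread inside the webapp, or simply rely on JavaMelody/VisualVM, which chart the same numbers):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // A healthy app shows a saw-tooth in "used"; a leak shows its floor climbing.
                System.out.printf("heap used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() / (1024 * 1024),
                        heap.getCommitted() / (1024 * 1024),
                        heap.getMax() / (1024 * 1024));
                Thread.sleep(60000L); // sample once a minute
            }
        }
    }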
To be honest, we cannot help you much further. Those are just pointers; now you will have to do extensive research to figure out the cause :)
The general solution is to use a profiler, e.g. YourKit, with a realistic workload that reproduces the problem.
What I do first is a CPU-only profile, then a memory-only profile, and finally a CPU and memory profile at once (I then look at the CPU profile results).
YourKit can also monitor high-level operations such as Java EE resources and JDBC connections. I haven't tried these as I don't use them. ;)
It can be a good idea to improve the efficiency even if it's not the cause of the problem, as it will reduce the amount of "noise" in these profiles and make your issues more obvious.
You could try increasing the amount of memory available, but I suspect it will just delay the problem.
OK. So I have seen huge Java applications run on lesser configurations. You should try to do the following:
First connect a profiler to your application and see which part of your application takes the most time. You can use JProfiler or Eclipse MAT (I personally prefer JProfiler). Also try to take a look at the objects taking the most memory. This will help you narrow down the parts you need to rewrite to improve performance.
Once you have taken a look at the memory leaks, update your application to use a 64-bit JDK (assuming it does not already do so).
Take a look at your JVM arguments and optimize them.
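As one hedged example of a low-risk starting point (the log path is a placeholder; these flags apply to HotSpot 6/7/8), turning on GC logging gives you data before you change any sizing:

    -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log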
You can try the open-source tool Webapp Watcher in order to identify where in the code the performance issue is.
You first have to add a filter to the webapp (as explained here) in order to record metrics, and then import the logs into the WAW Analyzer tool and follow the steps described in the doc to find where the potential performance issue is in the code.

Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application that is deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly eats up more and more memory over time until it grinds to a halt and eventually exceeds the maximum heap. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be in check. It looks like if I run the garbage collector I get a decent chunk of memory back, but I don't know how to do this sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.
You can use the VisualVM application in the Oracle JDK to attach to the Tomcat instance while it is running (if you are using an Oracle JVM already) to inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You will most likely be looking either for objects that grow or for types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
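If you only have shell access to the server, a hedged low-tech alternative is to diff periodic class histograms taken with the JDK's jmap tool (the PID is a placeholder; note that -histo:live forces a full GC, so use it with care in production):

    jmap -histo:live <tomcat-pid> > histo-1.txt
    (wait while memory grows)
    jmap -histo:live <tomcat-pid> > histo-2.txt
    diff histo-1.txt histo-2.txt

Classes whose instance counts only ever go up between snapshots are good candidates for the leak.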
Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-locals. You may want to double-check and see if this applies to your situation.
