I know this has probably been asked many times before, but I still haven't seen an actual fix for it.
My day-to-day development environment is as follows:
1. NetBeans (latest)
2. Glassfish (latest, as bundled with NetBeans)
3. JPA, JSF, JAXB, and Jersey for JAX-RS
I have about 600 classes in my project, spread across two EJB projects and one WAR project, all inside an EAR.
I am on the latest JDK 7 (on OS X), and I am getting the infamous "PermGen space" error on an hourly basis. If I am doing, say, 3 incremental re-deploys a minute, I can only work for a short while before either:
Glassfish runs out of PermGen space, so I just have to kill the process, or
deployment becomes extremely slow, because I have increased the max PermGen space (as one is advised to do in dozens of answers on S.O.).
Often the only solution is to kill Glassfish every 30 minutes or so. It's definitely due to a bug somewhere that simply loads new classes for every incremental re-deploy instead of getting rid of the old ones. I thought this was supposed to be fixed in JDK 7?
This has been a long-standing bug in this kind of development environment, and I am rather shocked that it's still going on after my 5+ years of Java development. It's just so frustrating and incredibly unproductive.
(Before anyone suggests increasing the PermGen space: believe me, I've tried that, and the only thing it "solves" is to prolong the inevitable. I've seen redeployments take up to 400 seconds at their worst. Redeployment is supposed to take 5-6 seconds for a project this size, no more.)
EDIT: I ran jmap and jhat on the Glassfish process after the following steps:
Start glassfish
Deploy my EAR
Undeploy my EAR
Then did a heap dump with jmap
It turns out that all my classes (which should have been unloaded) are still loaded! Hopefully this is useful information to someone reading this...
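For anyone who wants to check the same thing without a full heap dump, a quick sanity check is to watch the JVM's class-loading counters before and after an undeploy. This is only a diagnostic sketch using the standard java.lang.management API (the class name is just an example), not a fix:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;

    // Diagnostic only: prints the JVM's class-loading counters. Call dump() from
    // inside the deployed app (or watch the same java.lang:type=ClassLoading MBean
    // in JConsole) before and after an undeploy; if "currently loaded" never drops,
    // the old application classes are being retained.
    public class ClassCountProbe {
        public static void dump() {
            ClassLoadingMXBean bean = ManagementFactory.getClassLoadingMXBean();
            System.out.println("Currently loaded:  " + bean.getLoadedClassCount());
            System.out.println("Total ever loaded: " + bean.getTotalLoadedClassCount());
            System.out.println("Unloaded so far:   " + bean.getUnloadedClassCount());
        }

        public static void main(String[] args) {
            dump();
        }
    }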
Surely that is a bug, and I don't think there is an easy solution for it. (If there were, you would probably have it already.)
What you can try: use a hot code replacement tool such as JRebel. That way you don't have to redeploy all the time; instead, the tool watches for changes to your .class files (and even other web resources, if you configure it so) and replaces the class definitions inside the running JVM. Sounds cool, right?
It works as a Java agent that starts when your JVM starts.
There are three drawbacks to this solution: deployment is a bit slower, it's harder to debug, and it's proprietary software (but it doesn't cost much).
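For context on the "Java agent" part: an agent is just a jar with a premain entry point that the JVM calls before your main class, handing it an Instrumentation object it can later use to redefine classes. The sketch below is the bare java.lang.instrument mechanism, not JRebel's actual code, and the class name is made up:

    import java.lang.instrument.Instrumentation;

    // Bare-bones agent skeleton. Packaged in a jar whose manifest declares
    // "Premain-Class: DemoAgent", it is activated by adding
    // -javaagent:demo-agent.jar to the server's JVM options.
    public class DemoAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            // A real hot-swap tool keeps 'inst' around and calls
            // inst.redefineClasses(...) whenever a .class file changes on disk.
            System.out.println("Agent loaded; class redefinition supported: "
                    + inst.isRedefineClassesSupported());
        }
    }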
When developing with Netbeans + Glassfish and using "Deploy on Save" we've found that libraries packaged within an application are not unloaded when the project is re-deployed; this causes GF to slow down and quickly run out of memory.
Try de-selecting "Package" for all compile-time libraries and place those not already in the Glassfish classpath in the domainX/lib directory.
Not sure but this may be related to GLASSFISH-17449 or GLASSFISH-16283.
Related
We have a bunch of jar files that are Java applications and run just fine. A few of them, however, do nothing, although they are expected to run :) with a GUI.
Is this a common issue with jar files, that some have difficulty running?
The OS is Windows 7, and the non-working example jar is Whitebox, a free GIS application, BTW.
We reiterate that we have many jar applications that run like a charm on the above system. This means it should not be a problem with the Java installation (the latest update, 7u40, is on the system).
We checked almost all jar-failure-related topics, but none of them discuss the issue above, which happens only for some applications.
We should also mention that we uninstalled and reinstalled Java many times, with no success. The Whitebox application does nothing. On one try it did run, and then we closed it. Since then we have been trying to run it again, but nothing happens! Nothing even appears in the running processes!
We tried the command line and double-clicking. No success. The file type association is correct. Furthermore, as we said, the others work just fine.
The problem reported was due to inadequate RAM. Whitebox requires 2 GB of RAM to run smoothly. While that is a lot, we could still run it on an old laptop with only 1 GB of RAM. The solution was to increase the size of the paging file (virtual memory) to the range of 1024 MB to 2048 MB. We also moved it from the C drive to another drive. With the settings mentioned, it runs without any problem. We have tried it many times and are happy to report that, for this case, the problem is now completely solved.
Conclusion:
For some Java applications, if something happens as described in the question, it may be due to the application's memory requirements. In that case, increasing virtual memory could solve the problem without the need to buy additional RAM.
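If someone hits something similar and wants to rule memory in or out before touching the paging file, a quick check is to print what the JVM thinks it has. This is only a rough diagnostic (the Java heap limit and the OS paging file are separate things), and the class name is just an example:

    // Rough check of the heap the JVM was granted; if maxMemory() is far below
    // what the application needs, launch it with a larger -Xmx or sort out the
    // machine's memory situation first.
    public class MemoryCheck {
        public static void main(String[] args) {
            long mb = 1024L * 1024L;
            Runtime rt = Runtime.getRuntime();
            System.out.println("Max heap:   " + rt.maxMemory() / mb + " MB");
            System.out.println("Total heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("Free heap:  " + rt.freeMemory() / mb + " MB");
        }
    }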
I'm running Tomcat for an enterprise-level app and have been getting "PermGen out of memory" messages.
I am running this on:
Windows 2008 R2 server,
Java 1.6_43,
Running Tomcat as a service.
There are no multiple deployments. The service is started and the app runs. Eventually I get PermGen errors.
I can delay the errors by increasing the perm size; however, I'd like to actually fix the problem. The vendor is disowning the issue. I don't know if it is a memory leak, as the vendor simply says it "runs fine with JRockit". Of course, that would have been nice to have in the documentation, like three months ago. Plus, some posts suggest that JRockit just expands the perm space to fit, up to 4 GB if you have the memory (not sure that is accurate...).
Anyway, I see some posts for a potential fix in Java 1.5 with the options
"-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
However, these seem to have been deprecated in Java 1.6, and now the only GC that seems to be available is "-XX:+UseG1GC".
The best link I could find, anywhere, is:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#G1Options
Does anyone know if the new G1 garbage collector includes the perm space? Or am I missing an option or two in the new Java 6 GC settings that maybe I am not understanding?
Any help appreciated!
I wouldn't just increase the PermGen space, as this error is usually a sign of something wrong in the software or setup. Is there a specific webapp that causes this? Without more info, I can only give basic advice.
1) Use the memory leak detector (Tomcat 6+) called Find Leaks.
2) Turn off auto-deployment.
3) Move JDBC drivers and logging software to the Java classpath instead of Tomcat's, per this blog entry.
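To illustrate point 3: a JDBC driver registered by a webapp gets pinned in java.sql.DriverManager, which lives in a parent classloader, so the webapp's classloader (and every class it loaded) can never be collected after an undeploy. Moving the driver jar out of WEB-INF/lib avoids this; if you can't, one workaround is to deregister drivers yourself when the context shuts down, roughly as in the sketch below (the listener name is just an example, and it has to be registered in web.xml):

    import java.sql.Driver;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Enumeration;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // On context shutdown, deregisters every JDBC driver that was loaded by this
    // webapp's classloader so DriverManager no longer pins the classloader in PermGen.
    public class JdbcCleanupListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            // nothing to do at startup
        }

        public void contextDestroyed(ServletContextEvent sce) {
            ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
            Enumeration<Driver> drivers = DriverManager.getDrivers();
            while (drivers.hasMoreElements()) {
                Driver driver = drivers.nextElement();
                if (driver.getClass().getClassLoader() == webappLoader) {
                    try {
                        DriverManager.deregisterDriver(driver);
                    } catch (SQLException e) {
                        sce.getServletContext().log("Failed to deregister " + driver, e);
                    }
                }
            }
        }
    }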
In earlier versions of Sun Java 1.6, the CMSPermGenSweepingEnabled option is functional only if UseConcMarkSweepGC is also set. See these answers:
CMSPermGenSweepingEnabled vs CMSClassUnloadingEnabled
What does JVM flag CMSClassUnloadingEnabled actually do?
I don't know if it's functional in later versions of 1.6 though.
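If you want to try that route on a 1.6 VM, the combination would presumably have to include all three flags together (set via CATALINA_OPTS or the service's Java options), along the lines of:
"-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
but verify against your exact JVM version; later 1.6 updates reportedly fold the sweeping behaviour into CMSClassUnloadingEnabled, so check any warnings the JVM prints at startup.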
A common cause for these errors/bugs in the past was dynamic class generation, particularly for libraries and frameworks that created dynamic proxies or used aspects. Subtle misuse of Spring and Hibernate (or more specifically cglib and/or aspectj) were common culprits. The underlying issue was that new dynamic classes were getting created on every request, eventually exhausting permgen space. The CMSPermGenSweepingEnabled option was a common workaround/fix. Recent versions of those frameworks no longer have the problem.
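For what it's worth, the failure mode looks roughly like the sketch below. It is contrived, assumes cglib is on the classpath, and the class and method names are made up; real frameworks hide this behind their own configuration. The point is that every call defines a brand-new proxy class, and since classes live in PermGen, doing this per request eventually exhausts it:

    import java.lang.reflect.Method;
    import net.sf.cglib.proxy.Enhancer;
    import net.sf.cglib.proxy.MethodInterceptor;
    import net.sf.cglib.proxy.MethodProxy;

    // Anti-pattern illustration: generating a fresh proxy class on every call.
    // With the class cache disabled, each Enhancer.create() defines a new class
    // in PermGen that is never unloaded while the application is running.
    public class LeakyProxyFactory {

        public static Object newProxyPerRequest(final Object target) {
            Enhancer enhancer = new Enhancer();
            enhancer.setSuperclass(target.getClass());
            enhancer.setUseCache(false); // forces a new proxy class every time
            enhancer.setCallback(new MethodInterceptor() {
                public Object intercept(Object obj, Method method, Object[] args,
                                        MethodProxy proxy) throws Throwable {
                    return method.invoke(target, args); // trivial pass-through advice
                }
            });
            return enhancer.create();
        }
        // The fix is to create the proxy (or proxy class) once and reuse it,
        // which is what current versions of the affected frameworks do.
    }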
I have a standalone application running in IBM WebSphere 7.0.0.19. It runs on Java 6, and we pack an Axis2 JAR in our EAR. We have 'parent last' class loading, and we have disabled the Axis service that ships with WAS 7 by default.
Recently, after 6+ weeks of continuous operation, the application experienced an OOM. The perplexing point is that the application is deployed separately on two different machines, but only one machine went down; the second machine is still up.
We checked the OS and the server configuration (such as the classloader policy) using the WAS console, and they are the same on both machines.
When the application crashed, it generated a .phd file, which we analysed using the Eclipse Memory Analyzer Tool (MAT). The analysis is shown in the screenshot.
If I'm correct, the bootstrap class loader is repeatedly loading and holding on to references to AxisConfiguration, so the GC is unable to collect them when it runs. But if that were the case, both servers should have come down, yet only one server experienced an OOM. The memory allocated to the JVM is the same on both machines.
We are not sure whether the issue is with WAS 7 or with axis2-kernel-1.4.1.jar or with something else.
http://www.slideshare.net/leefs/axis2-client-memory-leak
https://issues.apache.org/jira/browse/AXIS2-3870
http://java.dzone.com/articles/12-year-old-bug-jdk-still-out
(The links may not refer to this exact issue; they are just pointers.)
Has anyone experienced something similar ?
We saw memory growth and sockets left open on WebSphere 6.1 with Axis2 1.4 in the past. It's been a long time, but my notes suggest it might be worth considering an upgrade to at least Axis2 1.5.1 to fix this bug with the open sockets, and also to ensure you're not creating new objects repeatedly where a singleton exists (e.g. the Service object).
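On the singleton point, the usual pattern (hedged, since the exact setup depends on your Axis2 configuration) is to build one ConfigurationContext for the whole application and reuse it for every ServiceClient, cleaning up transports after each call, rather than constructing a fresh client and configuration per request. Roughly, with placeholder paths and an illustrative class name:

    import org.apache.axis2.AxisFault;
    import org.apache.axis2.client.ServiceClient;
    import org.apache.axis2.context.ConfigurationContext;
    import org.apache.axis2.context.ConfigurationContextFactory;

    // Shares one ConfigurationContext for the lifetime of the app instead of
    // letting each call build its own, which is where AxisConfiguration
    // instances tend to pile up.
    public final class AxisClientHolder {
        private static ConfigurationContext ctx;

        public static synchronized ConfigurationContext context() throws AxisFault {
            if (ctx == null) {
                ctx = ConfigurationContextFactory.createConfigurationContextFromFileSystem(
                        "/path/to/axis2/repository", "/path/to/axis2.xml"); // placeholders
            }
            return ctx;
        }

        public static void call(/* payload, endpoint, ... */) throws AxisFault {
            ServiceClient client = new ServiceClient(context(), null);
            try {
                // client.sendReceive(payload); etc.
            } finally {
                client.cleanupTransport(); // release the HTTP connection
                client.cleanup();          // detach the anonymous service from the context
            }
        }
    }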
I have a Grails application that is deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly eats up more and more memory over time until it grinds to a halt and finally exceeds the maximum heap. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be in check. It looks like I get a decent chunk of memory back if I run the garbage collector, but I don't know how to do that sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.
You can use the VisualVM application in the Oracle JDK to attach to the running Tomcat instance (if you are already using the Oracle JVM) and inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You are most likely looking either for objects that grow or for types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
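If you can't keep a profiler attached to the running instance, one low-tech complement is to log heap usage at a fixed interval so you can see whether the post-GC baseline keeps climbing. This is just a sketch using the standard java.lang.management API; the class name is made up, and you could start it, for instance, from BootStrap.groovy with new Thread(new HeapLogger(), "heap-logger").start():

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Periodically logs heap usage. A steadily rising floor between full GCs is
    // the signature of a leak; a sawtooth that returns to the same level is not.
    public class HeapLogger implements Runnable {
        public void run() {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (!Thread.currentThread().isInterrupted()) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                System.out.println("heap used=" + heap.getUsed() / (1024 * 1024)
                        + " MB, committed=" + heap.getCommitted() / (1024 * 1024) + " MB");
                try {
                    Thread.sleep(60000L); // one-minute interval
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }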
Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-locals. You may want to double-check whether this applies to your situation.
I have downloaded the latest Eclipse IDE, Galileo, and tested it to see if it is good for developing web applications in Java. I have also tried the Ganymede version of Eclipse and found that it is also good.
My problem is that it sometimes hangs and stops responding while I am developing. Sometimes when I open a file, Eclipse hangs and does not respond for a while. It seems that Eclipse is getting slower, and my work is getting slower because of the time I spend waiting for Eclipse to respond.
When I switched to NetBeans 6.7, it was good and the performance was good. Loading is faster, and the IDE responds well during my development testing.
My computer has 1 GB of RAM and a 1.6 GHz CPU.
What can you say about this?
I'm using Eclipse PDT 2.1 (also based on Galileo) for PHP development, and I've been using Eclipse-based IDEs for 3 years now; my observation is that 1 GB of RAM is generally not enough to run Eclipse + some kind of web server + DB server + browser + other stuff :-(
I'm currently working on a machine with 1 GB of RAM, and it's slow as hell... A few months ago I had a machine with 2 GB of RAM, and things were going really fine -- and I have less software running on the "new" machine than I had on the other one!
Other things that seem to affect Eclipse's responsiveness are:
opening a project that's on a network drive (accessing sources that are on a development server via Samba, for instance)
sometimes, using an SVN plugin like Subversive seems to freeze Eclipse for a couple of seconds/minutes
A nice thing to do with languages like PHP (it might not be OK for Java projects, though) is to disable "Build Automatically" in the "Project" menu.
As a side note: I've already seen questions about Eclipse's speed on SO; you might want to try some searches to get answers faster ;-)
This is a common concern and others have posted similar questions. There are optimizations that you can perform on your Eclipse environment. Take a look at the solutions posted here.
NetBeans is really damn hot; I just didn't get it to automatically release my Android projects...
Thinking of features, I'd prefer Eclipse...
To speed it up a little more, just disable 'automatic build'. It doesn't really change anything (the build just takes a little longer),
but it feels noticeably faster...
But after 1 or 2 hours I also have to close it, wait, and re-open it.
Kind of sucks... (I've got a MacBook Pro, 2.26 GHz (I think), 3 GB RAM;
I gave it a minimum of 768 MB of RAM, and it keeps getting slower...)
It really sucks.
Edit:
I also realized that after opening an XML file, Eclipse instantly gets a little bit laggier (I already disabled XML live compiling, or something similar; it makes no difference :( )
Our machines are bigger: 2 GB of RAM and a faster CPU.
I'm sure that, like all software, Eclipse gets bigger and slower with each new version, due to all the new functionality included. The good news is that, from time to time, a release also brings some notable performance improvements. But in my experience, every time I have tried ten-year-old software on my current machine it was lightning fast, so I'm sure the tendency is to get slower. I agree that this is sad for us when we don't get a better machine.
There might be some things you can do to improve the responsiveness of your Eclipse.
I don't know if you have already tried everything...?
My experience has been that NetBeans, Aptana, and Komodo are fast on computers where Eclipse is painfully slow. Maxing out RAM has seemed to help. Any chance you can bump up to 2 GB?
NetBeans has sped up quite a bit in the last few years; perhaps your comparison is relative to the speed of NetBeans?
Lately I had to increase my Eclipse -Xmx from 64 MB and decided I might as well go to 512, and it got a bit chunkier. At 64 I never saw the slightest pause; at 512, when it actually NEEDS a collection because of a long-running process that's not letting the background GC thread run, it can get a little pausey.
I'm running a pretty old version of Eclipse (customized by the cable industry so it can run and display cable apps on a TV emulator), so your mileage may vary.
Check if you can disable unwanted plugins at startup.