Recently, while working on a JSF web app in NetBeans 6.8, I have been constantly getting PermGen: Out Of Memory errors. I have also noticed that this is not related to hot swapping the code, as some people suggested on the forums; I generally restart my local web server, Tomcat 6.0, whenever I redeploy the code. This used to happen to me once in a while, but lately it has been occurring constantly. I usually can't go more than two minutes before it crashes.
The important observation I've made about this problem is that it only seems to happen when running the debugger. If I launch the server normally, it will run indefinitely. As soon as I run in debug mode, the problem occurs.
I've tried all the tips I've found so far for increasing the JAVA_OPTS memory settings for Tomcat; I've tried increasing the available memory for NetBeans in netbeans.conf. Still no luck. If you want to see the specific configuration changes I've made, I can post those as well.
I've also read that this can be a result of memory leaks in Java. I've tried running NetBeans' profiler, but it would generally crash as well before I could do anything really useful. Additionally, when it did run, all the object allocations with ridiculously high generation counts were things in the Java libraries or primitives; char[]s, for example, were the biggest memory hog in the app, with the largest generations.
I would really like to know if anyone has had a similar problem before, and if so, how they solved it. This is starting to seriously impede my ability to do my work.
Thanks for any help.
Add this entry in catalina.sh (or catalina.bat); it worked for me:
JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8
-server -Xms1536m -Xmx1536m
-XX:NewSize=256m -XX:MaxNewSize=512m -XX:PermSize=512m
-XX:MaxPermSize=512m -XX:+DisableExplicitGC"
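Note that on recent Tomcat versions the usual advice is not to edit catalina.sh/catalina.bat directly but to create a setenv.sh (or setenv.bat) next to it, which catalina.sh picks up on startup. A minimal sketch, assuming a Unix install and reusing the sizes from above:
# $CATALINA_BASE/bin/setenv.sh (create it next to catalina.sh)
export JAVA_OPTS="-server -Xms1536m -Xmx1536m -XX:PermSize=512m -XX:MaxPermSize=512m"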
Something I have found useful for tracking down memory leaks, without running a profiler or a debugger, is the "jmap -histo <pid>" command (it comes with the JDK). Save the output of this program to a file, and run it every few minutes while your application is running. Collect the outputs and look for objects that keep increasing in number and size. I even wrote a quick app to graph selected objects over time, to really highlight runaway objects and make it easier to see where leaks might be occurring.
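A minimal shell sketch of that workflow, assuming the JDK tools are on the PATH (the pid and the five-minute interval are placeholders):
# snapshot the class histogram every 5 minutes, one timestamped file per run
while true; do
    jmap -histo <pid> > histo-$(date +%Y%m%d-%H%M%S).txt
    sleep 300
done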
I've been stuck on my problem for quite some time. To give you a little context, I have written a bot in Java and was planning to run it 24/7 on a Raspberry Pi 3 Model A+. To my surprise, when I tested the almost finished program, its memory consumption kept rising indefinitely.
I soon realised I had to limit the memory usage, something I have been reading up on across several sites over the past couple of months. Unfortunately, most of them are outdated (2013 and older), and the very few newer ones don't cover the important changes that must have taken place since, because I'm not able to figure out why my issue is still occurring.
I've tried so many things over such a long period that I'm not sure I can sum them all up here, but I will update this post if I remember any important details.
Please see the pictures of my last test with the following settings:
java -Xmx4m -Xms4m -Xss64k -XX:MaxMetaspaceSize=8m -jar bot.jar
As you can see, the memory was not limited and rose to the point where the process was killed shortly after. In some of my previous tests I used an empty while(true) loop, because I don't think I have a memory leak in my program. Weirdly enough, even the empty loop made the memory footprint grow very slowly, and it did not decrease over time either.
At this point I'm not sure whether the JVM is even capable of respecting a specified memory limit. My code uses the Robot class to take screen captures and press certain buttons inside nested while loops, which also prompt the garbage collector to run via System.gc(). Do I also have to use the following argument for the JVM?
-XX:MaxDirectMemorySize
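(If that flag is needed, I assume the full command would end up looking something like this; the 16m cap is just a value I made up for illustration:)
# hypothetical full invocation with a direct-memory cap added
java -Xmx4m -Xms4m -Xss64k -XX:MaxMetaspaceSize=8m -XX:MaxDirectMemorySize=16m -jar bot.jar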
I'm also really confused by all the changes to Java. I've tried a few different JDKs because I thought that might solve the problem. The last test was compiled with JDK 14 and runs on Java 11. Do I need to configure something on the OS in order to limit the memory consumption?
Maybe you could also recommend a profiler I can use to check what is actually allocating the memory, so I can figure out what needs to be limited via the arguments. I would definitely need some guidance there, because I have never worked with one before.
I appreciate any help on this topic! Please let me know if you need any additional information and I will do my best to follow up during the week.
Maybe you can use the following args: -XX:+PrintGCDetails -Xloggc:/tmp/jvm-gc.log. This will log GC details to /tmp/jvm-gc.log.
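Note that the question mentions Java 11, and on Java 9 and later those flags were superseded by unified logging; as far as I know the equivalent would be something like this (the log path is just an example):
# unified GC logging on Java 9+ (path assumed)
java -Xlog:gc*:file=/tmp/jvm-gc.log -jar bot.jar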
Or you can check the size of the runtime heap with the following command:
# get the pid
ps aux | grep bot.jar
# show the heap info
jmap -heap <pid>
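Be aware that jmap -heap was removed from the JDK in Java 9; if the bot runs on Java 11, the rough equivalent (to the best of my knowledge) is:
jhsdb jmap --heap --pid <pid>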
I have a problem, and my job depends on solving it.
There are some Java apps running on Tomcat under Linux that crash randomly (the apps are not mine and cannot be modified).
Every morning we find some app broken.
I want to see the Java stack trace from the moment the app crashed, so I can read the JVM's message (OutOfMemoryError, NullPointerException, etc.) and get a hint for fixing the problem.
I don't know how to go about this.
Searching the internet I found VisualVM and JConsole. Are they enough for what I want to do?
I want to see the JVM's stack-trace messages right at the moment of the crash.
I need help. Thank you very much.
It looks like you have a memory leak issue. Does the app work for a particular period of time after a restart?
You might want to see what is happening inside the Java heap; for that you can take a heap dump. Use the jcmd utility for this; you can find it within the JDK installed on your server.
jcmd <process id/main class> GC.heap_dump filename=filename
NOTE: This will do a GC every time this runs.
To schedule this, set up a cron job.
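A sketch of such a cron entry, assuming jcmd is on cron's PATH and using a placeholder pid and path; note the % signs must be escaped inside crontab:
# crontab -e : take a heap dump at the top of every hour
0 * * * * jcmd <pid> GC.heap_dump filename=/tmp/heap-$(date +\%Y\%m\%d-\%H).hprof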
Alternatively, if you specify the -XX:+HeapDumpOnOutOfMemoryError command-line option when running your application, then when an OutOfMemoryError is thrown, the JVM will write a heap dump file (and note it in the logs).
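For example (the dump directory is an assumption; the JVM writes an .hprof file there at the moment the OutOfMemoryError is thrown):
# dump path assumed; adjust to a directory with enough free space
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/dumps -jar yourapp.jar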
Hope this helps. :)
Trying to diagnose some bizarre Tomcat (7.0.21) and/or JVM errors on a 64-bit linux (CentOS) machine.
I'm load testing our server application and tried hitting it with 100K messages. Launched jvisualvm and kept my eye on the heap the whole time. Everything was looking great* (see below) until I got to about 93K processed messages and then Tomcat just died. Ran a ps on Tomcat's PID number to confirm it was dead.
Up until this crash:
The load test had been running for about 90 minutes and should have finished shortly thereafter, since we were at 93K/100K
CPU was holding strong around 45%
Used heap was around 2GB (plus or minus a bunch after GCs) but heap size grew from 4GB to MAX_HEAP after about 30 minutes
Class loading/unloading was cycling normally
Thread dumps were normal
Nowhere in the server code are any calls to System.exit() - so we can rule that right out (and yes I've double-checked!!!).
I'm not sure if this is Tomcat crashing or the JVM (how do I tell?). And even if I did know, I can't seem to find any indication of what went wrong:
All of the server app's logs just stop without any ERROR messages (even though we have logging universally set to DEBUG and higher)
Tomcat's catalina.out and the respective localhost_access_* files just stop without any info
I've heard it is possible to have Tomcat log a core dump when it crashes, but I'm not sure how to do that, and online examples aren't helping much.
How would SO go about diagnosing this? What steps should I take to start ruling out all of the possible factors?
Thanks in advance!
If the JVM crashes, you should have a hs_err_pidNNN.log file; you don't have to do anything to enable this. Its location depends on your OS and how you are running Tomcat. On Windows, they can show up on your desktop, unless you are running as a service. Otherwise, they should be in the current working directory of the crashed process.
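If you want that file in a predictable place, you can set the location explicitly; a sketch with an assumed directory (for Tomcat this would typically go into CATALINA_OPTS or JAVA_OPTS):
# assumed log directory; %p is replaced with the process id
export CATALINA_OPTS="-XX:ErrorFile=/var/log/tomcat/hs_err_pid%p.log"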
Your operating system probably provides additional tools for process monitoring; you could describe your environment more, or perhaps ask at serverfault.com.
It's also possible that jvisualvm is actually causing the crash.
I'd try reproducing the problem, and progressively simplify the scenario to help isolate the cause.
Another possibility is that the OS is running out of memory and the OOM Killer is killing your process. In this case, the JVM wouldn't get an opportunity to write a heap dump, or an hs_err_pid file.
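You can check for that in the kernel log after the process disappears; something along these lines:
# the OOM killer logs which process it killed
dmesg | grep -i "killed process"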
You can use the java option -XX:+HeapDumpOnOutOfMemoryError to create a heap dump when the JVM dies due to an out-of-memory error.
More details here: Using HeapDumpOnOutOfMemoryError parameter for heap dump for JBoss.
Sorry I had to remove the green check from #erickson. I finally figured out what was killing Tomcat.
It looks like a profiler plugin is not configured correctly with VisualVM and attempting to run a profile on the Tomcat process killed it.
Investigating why right now, and will update this answer once I know more.
I have a java application that runs on Tomcat (which runs as a service on Windows), the java process for which continues to eat up CPU before eventually requiring me to restart the Tomcat service.
First my setup:
Windows 2003 server
Tomcat 6, running as service using Wrapper
JDK: 1.6.0_20
I had been seeing issues here and there leading up to yesterday. I had to restart midday yesterday, then at 2:30 this morning, and today I could barely restart the application and open jconsole to monitor it before it was hitting 99% CPU usage again. Through a combination of things I'm not quite sure of, it seems like I got the JVM to cycle itself, and the app hovered in the 10-30% CPU usage range for a couple of hours. However, it then started to creep up again, finally going back into its 99% CPU usage breakdown. I was also having trouble with high memory usage, but that has stayed fairly normal and steady since I so-called got the JVM to "cycle" (bad terminology perhaps, but that is really what it seemed to do; afterwards the wrapper log contained a dump of all the classes it was reloading).
Then I was digging around some more and found a JRE 6 Update 24 installed on the server (I didn't install it, as I test thoroughly with each Java update; maybe my server admin did). I attempted to uninstall it, but can't. Thus, I get different versions when I run java -version versus javac -version:
java -version
java version "1.6.0_24"
Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
Java HotSpot(TM) Client VM (build 19.1-b02, mixed mode, sharing)
javac -version
javac 1.6.0_20
Could this difference be causing a JVM conflict of sorts? JAVA_HOME and my PATH variables both point to the correct JDK installation.
Hoping for more stability, I decided to change my app to run on the previous JDK that was still installed - JDK 1.6.0_04. I changed the wrapper.conf, set env variables, cleaned and rebuilt, and started. This does seem more stable and has been up for about 4 hours. The CPU usage has climbed to the 90s, then it seems to clear itself out again.
I've taken heap dumps and run them through the Memory Analyzer in Eclipse (nothing new found there), and I've used jconsole with jtop to look at threads; nothing jumps out, which is why I continue to wonder if it's a Java/JVM issue. So, I know this is a long post, but I don't really know where to go from here. Any ideas?
(I've done exhaustive web searching on this and some articles have pointed to possibly a Quartz issue or Hibernate queries not flushing. Nothing has changed in the app since I started seeing the CPU issues, so I'm not sure where to start troubleshooting if it could indeed be linked to either.)
This isn't an easy problem. You are doing all of the basics to see if something jumps out. It sounds like there is either a slow leak that builds up over time until the app can't operate (GC starts thrashing and the app becomes unresponsive), or a runaway background job that eats the CPU and never completes, which might explain the long delay. You could try turning off any Quartz jobs to see if it stays up longer, which might point you in a direction, or crank them up so the problem shows up sooner.
I know you've done some jconsole watching, but I think you need to revisit it and watch your memory usage, the threads' run time, how much time you're spending in GC, and which portions of memory are being eaten up (is it Eden or the tenured generation that's running out?).
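If you'd rather capture those numbers outside jconsole while the box degrades, jstat can sample the memory pools and GC times from the command line; a sketch with a placeholder pid and a 10-second interval:
# survivor/Eden/old/perm utilisation plus GC counts and times, every 10s
jstat -gcutil <pid> 10000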
I'd make sure you are writing out start and end messages for your background jobs running in Quartz. Then you can correlate when they start and finish with when this problem starts. Also will tell you if your jobs are finishing or not.
It's probably time to drop it into a profiler (instead of jconsole) so you can see where in the code it's spending time or what's blowing up memory. A real profiler will let you see all that data mapped onto your code and classes. My favorite is JProfiler, but YourKit is also good. You can get a 7-30 day trial, so you'll have plenty of time to profile and figure your issue out without having to buy it.
Start this early in the morning so you'll hopefully see something by early night.
A Java application I support that runs on JRE 1.4.2_12 is hanging near midnight every night. I'd like to try and record as much profiling information as I can to discover if there is an issue in the JVM or external to the app.
I'd like to use HPROF to collect as much information as possible.
Is there a way to have HPROF dump its cpu sample and memory allocation report every minute instead of at the termination of the JVM?
Is there a different, more appropriate profiler that can collect information like this?
Rather than relying on dump files, I would try hooking up a profiler to the VM and leave it attached until the hang up occurs. Then use the profiler to introspect the state of the threads.
The use of Java 1.4 is a minor issue here, since 1.4's debug interface is not great, but some profilers still support it. I can particularly recommend YourKit, which is commercial but offers an evaluation licence. It's the best profiler I've used, by some margin.
First things first: did you analyze the thread dump when your application hangs? A lot of the time that has enough information to troubleshoot a hanging java app...
Ctrl-Break in the process window on Windows, or kill -QUIT [pid] on Linux.
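The dump goes to the JVM's stdout, not to the shell you send the signal from, so you have to look in whatever file the app's stdout is redirected to (catalina.out in Tomcat's case). Roughly:
kill -QUIT <pid>            # ask the JVM for a thread dump
tail -n 300 app-stdout.log  # the dump appears in the process's own stdout log (file name assumed)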
I would first try to determine whether it's actually your app or something else.
Are there any other apps on the box? If so, do they run any batch jobs around midnight? Your app could be suffering from a lack of resources due to other things running on the box or chewing up bandwidth.
Was this always the case, or did it start recently? If it's new, look at what changed on the box as a whole, not just your own app.