I have a Tomcat 7 with a few applications installed. For now it's usual to upload new versions of the applications pretty often: I pack the WAR, undeploy the existing one and deploy the new WAR, or redeploy an existing application. The problem comes a few weeks later, when I see that the memory is almost full and I have to act in order to prevent unexpected OutOfMemoryError exceptions that leave our customers without service.
Do I have to restart Tomcat from time to time? Is that normal?
Is there a pragmatic solution for zero Tomcat restarts?
Please only expert answers
UPDATE: This library can really help to avoid PermGen issues and classloader leaks:
https://github.com/mjiderhamn/classloader-leak-prevention. But remember that it's your responsibility not to have leaks; there's no silver bullet. VisualVM can really help with detecting classloader leaks.
The pragmatic solution for zero Tomcat restarts is coding applications that do not leak memory. It's that easy - Tomcat is not likely to leak that memory on its own ;)
In other words: as long as your application is leaking memory, restarting is unavoidable. You can tune memory settings, but you're just postponing the inevitable.
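For instance, one common source of leaks on redeploy is resources the webapp registers globally but never releases. Below is a minimal sketch (the class name and structure are made up for illustration, not taken from your setup) of a ServletContextListener that deregisters the webapp's own JDBC drivers on undeploy, one of the usual suspects:

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class CleanupListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    public void contextDestroyed(ServletContextEvent sce) {
        // DriverManager holds a static reference to every registered driver;
        // if a driver class was loaded by this webapp's classloader, that
        // reference keeps the whole classloader alive after undeploy.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == getClass().getClassLoader()) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    // log and continue; we're shutting down anyway
                }
            }
        }
        // Likewise, stop any threads, timers or executors the application started.
    }
}

The same idea applies to anything the application registers with code that outlives it: shutdown hooks, JMX beans, scheduled tasks and so on.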
Tomcat 7 can detect some memory leaks (and even remedy some simple ones) - this feature is exposed through the Tomcat Manager if you install that. It does not, however, help you point out where the leak is.
Speaking from recent experience: I had an app that did not seem to leak during normal operation, but had a PermGen leak - meaning I did not really see the leak until I tried to do what you're doing - hot deploy. I could only do a few deploys before filling PermGen, as Tomcat was unable to unload the old instances of the classes...
There may be better explanations of PermGen leak on the web, but a quick google gave me this: http://cdivilly.wordpress.com/2012/04/23/permgen-memory-leak/
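To make the mechanics concrete, here is a hedged sketch of one classic pattern (all names are invented for illustration): a ThreadLocal whose value is a class from the webapp. Container worker threads are pooled and outlive the deployment, so if remove() is never called, each thread keeps a reference to the value, which pins the webapp's classloader - and every class it loaded - in PermGen across redeploys:

public class LeakyHolder {

    // This nested class is loaded by the webapp classloader.
    static class LeakedValue {
        final byte[] payload = new byte[1024 * 1024];
    }

    private static final ThreadLocal<LeakedValue> CACHE =
            new ThreadLocal<LeakedValue>() {
                @Override
                protected LeakedValue initialValue() {
                    return new LeakedValue();
                }
            };

    public static LeakedValue get() {
        // CACHE.remove() is never called, so the value survives undeploy
        // on every pooled container thread that ever touched it.
        return CACHE.get();
    }
}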
If, however, your application crashes after running for some time, it is likely that it has a proper memory leak. You can tune heap sizes and garbage collection settings in an attempt to remedy/work around this, but it may very well be that the only real fix is to track down the leak, fix the code and build a version that does not leak.
...but unless you can ensure every release will be leak-free you need a strategy for restarting your service. If you don't do it during release, look into doing it off peak hours.
If you have a cluster you can look into setting up Tomcat with memcached for session replication so as to not disrupt users when restarting a node in the cluster. You can also look into setting up monit, god, runit, upstart, systemd etc. for automatic restart of failed services...
You need to add the VM arguments in Eclipse with minimum and maximum heap sizes. In Eclipse go to Window --> Preferences --> click on Java --> click on Installed JREs, then double-click on the JRE record and add the VM arguments -Xms768m -Xmx1024m.
Related
I am learning the JVM and I am trying to analyse the memory used by Tomcat. When Tomcat started, Eden usage was shown as follows:
[Screenshot: Tomcat Eden space usage monitored by JConsole]
No WAR was deployed in Tomcat, and all the default contexts such as host-manager and manager were removed. Tomcat was started with the default configuration and no requests were accepted. It was the same in debug mode and in non-debug mode. When GC ran, the memory usage decreased. What causes the memory usage to keep increasing? Could anyone help me? Thank you.
Runtime environment:
jdk1.8.0_112
apache-tomcat-8.5.9
I assume your question is about the memory utilization fluctuation you see through JConsole. I will try to provide my two cents here...
Keep in mind that even though there may be no WAR file deployed, there is still a lot of code actively running on that box to perform various tasks. One example is the file monitor to identify when a new WAR file is deployed to your work directory. Every piece of code routinely allocates memory that eventually gets garbage collected, thus the memory fluctuation you see.
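As a toy illustration (this is not Tomcat's code, just a standalone program), even an "idle" loop that allocates short-lived objects produces exactly that rising-then-dropping Eden pattern in JConsole, because each allocation lands in Eden until a minor GC collects it:

public class EdenSawtooth {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            byte[] scratch = new byte[64 * 1024]; // becomes garbage immediately
            scratch[0] = 1;                       // touch it so it isn't optimized away
            Thread.sleep(10);                     // roughly 6 MB of garbage per second
        }
    }
}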
Another piece of code that is likely consuming much of that memory is the profiler itself. For you to be able to get the metrics you get through JConsole, there needs to be an agent running on that box that is collecting (and storing in memory) the information you see.
If you want to have a better idea of what is actually being allocated by tomcat, I suggest getting your hands on a good Java Profiler that can give you more in-depth information on object allocations. I personally suggest YourKit as it is one of the most powerful profilers I have used in my many years of building software.
Now, there may be other reasons, but don't assume that just because you are not directly interacting with Tomcat, it is not doing anything.
Hope this helps, good luck!
I am not a Java dev, but an app landed on my desk. It's a web-service server-side app that runs in a Tomcat container. The users hit it up from a client application.
The users constantly complain about how slow it is, and the app has to be restarted about twice a week because things get really bad.
The previous developer told me that the app simply runs out of memory (as it loads more data over time) and eventually spends all its time doing garbage collection. Meanwhile, the Heap Size for Tomcat is set at 6GB. The box itself has 32GB of RAM.
Is there any harm in increasing the Heap Size to 16GB?
Seems like an easy way to fix the issue, but I am no Java expert.
You should identify the leak and fix it, not add more heap space. That's just a stopgap.
You should configure Tomcat to dump the heap on error, then analyze the heap in one of any number of tools after a crash. You can compute the retained sizes of all the classes, which should give you a very clear picture of what is wrong.
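For example (a sketch - the dump path is illustrative), with a standard Tomcat setup you would put these flags in CATALINA_OPTS, e.g. in bin/setenv.sh:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/tomcat/dumps

Then open the resulting .hprof file in a heap analyzer and sort by retained size.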
In my profile I have a link to a blog post about this, since I had to do it recently.
No, there is no harm in increasing the Heap Size to 16GB.
The previous developer told me that the app simply runs out of memory (as it loads more data over time)
This looks like a memory leak, a serious bug in the application. If you increase the amount of memory available from 6 to 16 GiB, you're still going to have to restart the application, only less frequently. Some experienced developer should take a look at the application heap while it is running (see hvgotcodes's tips) and fix the application.
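Before touching any settings, it's easy to confirm the garbage-collection-thrashing diagnosis with a JDK tool. For example, jstat (bundled with the JDK) prints GC activity for a running JVM once a second; the pid is that of the Tomcat process:

jstat -gcutil <pid> 1000

If the old-generation column (O) creeps toward 100% and the full-GC count (FGC) keeps climbing while the app slows down, the previous developer's description fits.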
To resolve these issues you need to do performance testing. This includes both CPU and memory analysis. The JDK (6) bundles a tool called VisualVM; on my Mac OS X machine this is on the path by default as "jvisualvm". That's free and bundled, so it's a good place to start.
Next up is the NetBeans Profiler (netbeans.org). That does more memory and CPU analysis. It's free as well, but a bit more complicated.
If you can spend the money, I highly recommend YourKit (http://www.yourkit.com/). It's not terribly expensive but it has a lot of built-in diagnostics that make it easier to figure out what's going on.
The one thing you can't do is assume that just adding more memory will fix the problem. If it's a leak, adding more memory may just make it run really badly a bit longer between restarts.
I suggest you use a profiling tool like JProfiler, VisualVM, jConsole, YourKit etc. You can take a heap dump of your application and analyze which objects are eating up memory.
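For example (a sketch; the file name is illustrative), jmap from the JDK can take an on-demand dump of a running JVM:

jmap -dump:live,format=b,file=heap.hprof <pid>

You can then open heap.hprof in any of the tools above and look at the retained sizes.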
I am trying to figure out why Jetty 6.1.22 is running out of memory on my laptop. I have 2 web applications running JBoss Seam, Hibernate (with EHCache), and separate Quartz scheduler instances.
With little load, the server dies throwing an OutOfMemoryError.
What can I look for? Would you think that I am not properly closing handles for input streams or files?
I tried profiling my application with NetBeans, but it works off and on. Usually, it ends up locking up even though it doesn't use that much CPU or memory.
Walter
What are your JVM's execution parameters?
Try increasing the available heap memory through the -Xms (initial heap), -Xmx (maximum heap) and -Xmn (young generation size) JVM flags.
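For example (the sizes here are purely illustrative; Jetty 6 is typically launched via start.jar):

java -Xms256m -Xmx1024m -Xmn128m -jar start.jar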
You can also monitor your application server's execution with JConsole. It's usually helpful for finding out where your application is leaking.
Add -XX:+HeapDumpOnOutOfMemoryError when invoking the JVM, and when you hit the OOM situation you will get a .hprof file dumped. You can open it later with several tools and you'll be able to see where the memory is going...
The tool I use is Eclipse Memory Analyzer, it's pretty good.
I can strongly recommend attaching to the troublesome program with jvisualvm in the JDK.
This allows you to investigate memory and cpu usage over time and inspect what happens in general.
I develop web applications and I use JBoss 4.0.2. When I have redeployed my WAR several times with Eclipse, JBoss crashes because it runs out of memory. And when I have to install a new version in the production environment, it consumes the production server's memory, which means I have to stop JBoss to prevent redeploying from eating memory on the customer's server. Is there any workaround for this problem?
Basically, no. Because of the way the JBoss classloaders work, each deployment will use up a chunk of PermGen that will not be released even if the application is undeployed.
You can mitigate the symptoms by ramping up the PermGen memory pool size to several hundred megs (or even gigs), which makes the problem easier to live with. I've also found that reducing the usage of static fields in your code (especially static fields that refer to large objects) reduces the impact on PermGen.
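For example (sizes are illustrative; on JBoss 4 these flags typically go into JAVA_OPTS, e.g. in bin/run.conf):

-XX:PermSize=256m
-XX:MaxPermSize=1024m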
Ideally, I would not use hot deployment in production, but rather shut the server down, replace the WAR/EAR, then restart it.
I'm not sure it's linked, but I suspect it is - out of the box, JBoss is not J2EE compliant as far as implementing application separation goes.
As it comes, there is one classloader into which all classes are put, so it is not possible to unload classes, and therefore you are going to have this problem. You can configure JBoss to be more J2EE compliant in this respect.
Are you getting the "out of memory Permgen" or are you getting regular out of memory?
I also made progress by connecting JProfiler up to it and checking memory usage with this.
I ended up simply restarting Jboss all the time - didn't take up too much time.
Try this (which applies to Sun's Java):
-XX:+UseConcMarkSweepGC
-XX:+CMSPermGenSweepingEnabled
-XX:+CMSClassUnloadingEnabled
-XX:MaxPermSize=128m
CMS can actually GC the permanent generation heap (the heap where your classes are). Setting MaxPermSize is unnecessary, but the default is low for an application server.
I use the recent Ganymede release of Eclipse, specifically the distro for Java EE and web developers. I have installed a few additional plugins (e.g. Subclipse, Spring, FindBugs) and removed all the Mylyn plugins.
I don't do anything particularly heavy-duty within Eclipse such as starting an app server or connecting to databases, yet for some reason, after several hours use I see that Eclipse is using close to 500MB of memory.
Does anybody know why Eclipse uses so much memory (leaky?), and more importantly, if there's anything I can do to improve this?
I don't know about Eclipse specifically, I use IntelliJ which also suffers from memory growth (whether you're actively using it or not!). Anyway, in IntelliJ, I couldn't eliminate the problem, but I did slow down the memory growth by playing with the runtime VM options. You could try resetting these in Eclipse and see if they make a difference.
You can edit the VM options in the eclipse.ini file in your eclipse folder.
I found that (in IntelliJ) the garbage collector settings had the most effect on how fast the memory grows.
My settings are:
-Xms128m
-Xmx512m
-XX:MaxPermSize=120m
-XX:MaxGCPauseMillis=10
-XX:MaxHeapFreeRatio=70
-XX:+UseConcMarkSweepGC
-XX:+CMSIncrementalMode
-XX:+CMSIncrementalPacing
(See http://piotrga.wordpress.com/2006/12/12/intellij-and-garbage-collection/ for an explanation of the individual settings). As you can see, I'm more concerned with avoiding long pauses during editing than actual memory usage, but you could use this as a start.
I don't think the JVM does a lot of garbage collection unless it has to (i.e. it's getting to its limits). Therefore it grabs all the memory it can get, probably up to the limit set in the eclipse.ini (the -Xmx argument, set to 512MiB here).
You can get a visual representation of the current heap status by checking 'Preferences' -> 'General' -> 'Show heap status'. It will create a small gauge in the status bar which also has a 'trash can' button you can use to trigger a manual garbage collection.
Just for information,
you can add
-Dcom.sun.management.jmxremote
to your eclipse.ini file, launch Eclipse and then monitor its memory usage through 'jconsole.exe' found in your JDK installation.
C:\[jdk1.6.0_0x path]\bin\jconsole.exe
Choose 'Connection / New connection / eclipse' to monitor the memory used by Eclipse.
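For reference, anything meant for the JVM has to come after the -vmargs marker in eclipse.ini, so the relevant tail of the file would look something like this (the heap sizes are illustrative):

-vmargs
-Dcom.sun.management.jmxremote
-Xms128m
-Xmx512m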
Always use the latest JVM to launch your Eclipse (that does not prevent you from using any other JDK to compile your projects within Eclipse).
The Ganymede Java EE plugins are absolutely huge when running in memory. Also, I've had bad experiences with FindBugs and its reliability over a long coding session.
If you can't live without these plugins though, then your only recourse is to start closing projects. If you limit the number of open projects in your workspace, the compiler (and FindBugs) will have less to worry about and your memory usage will drop tremendously.
I usually split up my workspaces by customer and then only keep the bare-minimum projects open within each workspace. Note that if you have a particularly large projects (especially ones with a lot of files checked by WST), that will not only chew through your memory, but also cause a noticeable pause in responsiveness when compiling.
Eclipse by itself is pretty bloated, and the more plugins you add, the more this situation is exacerbated. It's still my favorite IDE, as it certainly isn't short on functionality, but if you're looking for a lightweight IDE then I'd suggest ditching Eclipse; it's pretty normal for it to run up half a gig of memory if you leave it running for a while.
Eclipse is a pretty bloated IDE. You can minimize the impact by turning off automatic project building under Project -> Build Automatically. It also helps to close any open projects you are not currently working on.
I'd call it bloated, but not leaky. (If it was leaky it would climb and climb until something crashed.) As others have said, memory is cheap! It seems like a simple decision to me: spend a tiny bit on more memory vs. lose productivity because you don't have the memory budget to run Eclipse at 500MB.
Summarized rhetorical question: What is more valuable:
The productivity gained from using an IDE you know with the plug-ins you want, or
Spending $50-200 on some memory?
RAM is relatively cheap (not that this is an excuse for poor memory management). Unused memory is essentially WASTED memory. If you're hitting limits and the IDE is the problem, consider less multitasking, adjusting your memory requirements, or buying more. I wouldn't cripple Eclipse if that's your bread-and-butter IDE.
Instead of whining about how much memory Eclipse takes, just go ahead and analyze where the problem is. It might be just one plugin.
Check the blog post here:
"analyzing memory consumption of eclipse"
Regards,
Markus
I had a problem with Java-based programs' memory consumption. I found that it could be related to the chosen JVM (in my case it was). Try to run Eclipse with the -client switch.
In some operating systems (most Linux distros, I believe), the default option is the server VM, which will consume noticeably more memory when running applications with a GUI.
In my case initial memory footprint went down from 300MB to 80MB.
Sorry for my crappy English. I hope I helped.
All Regards
Arkadiusz Jamrocha
Well, you don't specify on which platform this occurs. The memory management may vary if you're using Windows XP, Vista, Linux, OS X, ...
Usually, on my computer (WinXP with 1 GB of RAM), Eclipse rarely takes more than 200 MB, depending on the size of the opened projects, the loaded plugins and the ongoing actions.
I usually give Eclipse 512 MB of RAM (using the -Xmx option of the JVM) and I don't have any memory problems with Ganymede. I upgraded to two GB of RAM a few months ago, and I can really recommend it. It makes a huge difference.
Eclipse generally keeps a lot of meta-data in memory to allow for all kinds of IDE gymnastics.
I have found that the default configuration of Eclipse works well for most purposes, and that includes a limit (either given explicitly or implicitly by the JVM) to how much memory can be consumed, and Eclipse will stay within that.
Is there any particular reason you are concerned about memory usage?