I am learning about the JVM and trying to analyse the memory used by Tomcat. When Tomcat starts, Eden space usage looks as follows:
Tomcat Eden space usage monitored by JConsole
No WAR was deployed in Tomcat, and all default contexts such as host-manager and manager were removed. Tomcat was started with the default configuration and accepted no requests. The behaviour was the same with and without debug mode. When GC runs, the memory usage drops. What causes the memory usage to keep increasing? Could anyone help me? Thank you.
Runtime environment:
jdk1.8.0_112
apache-tomcat-8.5.9
I assume your question is about the memory utilization fluctuation you see in JConsole. I will try to offer my two cents here...
Keep in mind that even though there may be no WAR file deployed, there is still a lot of code actively running on that box to perform various tasks. One example is the file monitor that detects when a new WAR file is deployed to your webapps directory. Every piece of code routinely allocates memory that eventually gets garbage collected, hence the memory fluctuation you see.
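As a rough illustration (a toy loop, not Tomcat's actual code), a process that does nothing but allocate short-lived objects produces exactly the Eden ramp-and-drop pattern you are seeing in JConsole:

public class SawtoothDemo {
    public static void main(String[] args) throws InterruptedException {
        // Continuously allocate short-lived garbage. Watched in JConsole, Eden usage
        // climbs steadily and drops at each minor GC, even though the program does
        // nothing useful - the same shape an idle Tomcat's housekeeping threads
        // (deployment scanner, session expiry, JMX polling) produce.
        while (true) {
            byte[] scratch = new byte[64 * 1024];
            scratch[0] = 1; // touch it so the allocation is not trivially optimized away
            Thread.sleep(5);
        }
    }
}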
Another piece of code that is likely consuming much of that memory is the profiler itself. For you to be able to see the metrics in JConsole, there needs to be an agent running on that box collecting (and storing in memory) the information you see.
If you want to have a better idea of what is actually being allocated by tomcat, I suggest getting your hands on a good Java Profiler that can give you more in-depth information on object allocations. I personally suggest YourKit as it is one of the most powerful profilers I have used in my many years of building software.
Now, there may be other reasons, but don't assume that just because you are not directly interacting with Tomcat, it is not doing anything.
Hope this helps, good luck!
Related
We are running WebLogic and appear to have a memory leak - we eventually run out of heap space.
We have 5 apps (5 war deployments) on the server.
Can you think of a way to gather memory usage on a per application basis?
(Then we can concentrate our search by looking through the code in the appropriate app.)
I have run jmap to get a heap dump and loaded the results into jvisualvm, but it's unclear where the bulk of the objects have come from - Strings, for example.
I was thinking that WebLogic perhaps uses separate classloaders per application, so we may be able to figure something out via that route...
Try using Eclipse MAT; it gives hints about memory leaks, among other features. It can also group the dominator tree by class loader, which fits the per-application angle you mention.
I'm running a fairly classical Postgres/Hibernate/Spring MVC webapp, with the usual layers/frameworks.
Everything looks fine, except when I look at the memory graph in JavaMelody:
Periodically it seems like memory grows, GC is called, then it grows again:
memory graph
When I dump the memory, it's always a 60-80 MB file, showing that the total memory used is around 60-80 MB, and no leak is detected.
If I remove JavaMelody and use JConsole, it shows much the same problem: the memory keeps growing (a bit more slowly, though).
How can I see what these 100+ MB of objects are that constantly grow and then get GC'ed? How can I fix this problem?
Any help or explanation regarding this kind of problem would be greatly appreciated!
Thanks in advance.
EDIT: I forgot to mention that the graph comes from an isolated environment, with absolutely NO user activity on it (no HTTP requests / no scheduled jobs).
That is the expected behavior of the Java garbage collector. Short-lived objects accumulate in memory until the garbage collection algorithm determines that it is worth spending time reclaiming that memory.
You can analyze the memory dump (for instance, with Eclipse Memory Analyzer) to discover where those objects are, but remember that this situation is not a problem (unless they eat all of your memory and an OutOfMemoryError is thrown).
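If you want to convince yourself that this is only the normal sawtooth and not a slow leak, one option is to watch the old-generation pool rather than the total heap. A minimal sketch (the pool-name matching and the 10-second interval are just assumptions; pool names differ between collectors):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class OldGenWatcher {
    public static void main(String[] args) throws InterruptedException {
        // If the old-gen value keeps climbing even after full GCs, suspect a leak;
        // if only the young generation rises and falls, it is the normal sawtooth.
        while (true) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                String name = pool.getName();
                if (name.contains("Old") || name.contains("Tenured")) {
                    MemoryUsage usage = pool.getUsage();
                    // getMax() may be -1 if the pool has no defined limit
                    System.out.printf("%s: %d MB used of %d MB%n",
                            name, usage.getUsed() >> 20, usage.getMax() >> 20);
                }
            }
            Thread.sleep(10_000);
        }
    }
}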
It seems that the application server or web container to which the application is deployed is running some background process (JBoss, for instance, has a batch process that tries to recover distributed transactions). Enable trace logging and see whether it reports anything. But it's nothing you need to worry about.
I'm profiling my webapp using YourKit Java Profiler. The webapp is running on Tomcat 7 v30, and I can see that the heap of the JVM is ~30 megabytes, but tomcat.exe is using 200 megabytes and keeps rising.
Screenshot: http://i.imgur.com/Zh9NGJ1.png
(On the left is how much memory the profiler says Java is using; on the right is the Windows-reported usage of tomcat.exe.)
I've tried adding different flags to Tomcat, but the memory usage still keeps rising. I've also tried precompiling my .jsp files in case that would help, but it hasn't.
The flags I've added to Tomcat's Java options:
-XX:+UseG1GC
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=10
-XX:GCTimeRatio=1
Tomcat is also running as a Windows service, if that matters at all.
I need help figuring out how to get Tomcat to use less memory, or at least to understand why it's using so much. As it is now, it keeps growing until it uses the whole system's memory.
So the solution I found was to add some flags to the Tomcat run.
I'm not sure which flag did it. I think it might have been the JACOB library we were using, or some combination of these flags with that library. Hopefully this can help people in the future.
-XX:+UseG1GC
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=10
-XX:GCTimeRatio=1
-Dcom.jacob.autogc=true
-Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true
You should look for memory leaks in your application, or for large sessions that live too long and are never invalidated. Try to think about which functionality holds on to too many objects for long periods.
You could dump your memory and see what is using it. Probably it will be a long list of your application objects, or strings you unknowingly intern.
You might use a tool like jvisualvm, or a nice Eclipse tool, http://www.eclipse.org/mat/, to do that.
If you do that and still don't know why, then post what objects are in your memory...
I am not a Java dev, but an app landed on my desk. It's a web-service server-side app that runs in a Tomcat container. The users hit it up from a client application.
The users constantly complain about how slow it is, and the app has to be restarted about twice a week because things get really bad.
The previous developer told me that the app simply runs out of memory (as it loads more data over time) and eventually spends all its time doing garbage collection. Meanwhile, the Heap Size for Tomcat is set at 6GB. The box itself has 32GB of RAM.
Is there any harm in increasing the Heap Size to 16GB?
Seems like an easy way to fix the issue, but I am no Java expert.
You should identify the leak and fix it, not add more heap space. That's just a stopgap.
You should configure Tomcat to dump the heap on error, then analyze the heap in any number of tools after a crash. You can compute the retained sizes of all the classes, which should give you a very clear picture of what is wrong.
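In practice that usually means adding flags like the following to the JVM options (the dump path is only an illustration):
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/dumps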
In my profile I have a link to a blog post about this, since I had to do it recently.
No, there is no harm in increasing the Heap Size to 16GB.
The previous developer told me that the app simply runs out of memory (as it loads more data over time)
This looks like a memory leak, a serious bug in the application. If you increase the amount of memory available from 6 to 16 GiB, you will still have to restart the application, only less frequently. An experienced developer should take a look at the application heap while it is running (see hvgotcodes' tips) and fix the application.
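To make "loads more data over time" concrete, here is a purely illustrative example (not taken from your application) of the leak shape such a developer would look for in a heap dump - a static, unbounded cache:

import java.util.HashMap;
import java.util.Map;

public class ReportCache {
    // Grows for the lifetime of the JVM: nothing ever evicts entries, so the old
    // generation slowly fills and the collector spends more and more time trying
    // to reclaim memory - the symptom described in the question.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void store(String reportId, byte[] contents) {
        CACHE.put(reportId, contents);
    }
}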
To resolve these issues you need to do performance testing, covering both CPU and memory analysis. The JDK (6) bundles a tool called VisualVM; on my Mac OS X machine this is on the path by default as "jvisualvm". It's free and bundled, so it's a good place to start.
Next up is the NetBeans Profiler (netbeans.org). That does more memory and CPU analysis. It's free as well, but a bit more complicated.
If you can spend the money, I highly recommend YourKit (http://www.yourkit.com/). It's not terribly expensive but it has a lot of built-in diagnostics that make it easier to figure out what's going on.
The one thing you can't do is assume that just adding more memory will fix the problem. If it's a leak, adding more memory may just make it run really badly a bit longer between restarts.
I suggest you use a profiling tool like JProfiler, VisualVM, jConsole, YourKit etc. You can take a heap dump of your application and analyze which objects are eating up memory.
I am trying to figure out why Jetty 6.1.22 is running out of memory on my laptop. I have 2 web applications running JBoss Seam, Hibernate (with EHCache), and separate Quartz scheduler instances.
With little load, the server dies throwing an OutOfMemoryError.
What can I look for? Would you think that I am not properly closing handles for input streams or files?
I tried profiling my application with NetBeans, but it only works intermittently. Usually it ends up locking up, even though it doesn't use that much CPU or memory.
Walter
What are your JVM's execution parameters?
Try increasing the available heap memory through the -Xms (initial heap size), -Xmx (maximum heap size), and -Xmn (young generation size) JVM flags.
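For example (values purely illustrative; they need to be tuned to the machine and verified with a profiler):
-Xms512m -Xmx1024m -Xmn256m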
You can also monitor your application server's execution with JConsole. It's usually helpful for finding out where your application is leaking.
Add -XX:+HeapDumpOnOutOfMemoryError when invoking the JVM, and when you hit the OOM situation you will get a .hprof file dumped. You can open it later with several tools and you'll be able to see where the memory is going...
The tool I use is Eclipse Memory Analyzer; it's pretty good.
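If waiting for the OOM is inconvenient, you can also trigger the same kind of .hprof dump on demand from inside the JVM through the HotSpot diagnostic MBean. A minimal sketch (the file name is arbitrary; requires a HotSpot JVM):

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean diagnostics =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Second argument: true = dump only objects that are still reachable.
        diagnostics.dumpHeap("manual-dump.hprof", true);
    }
}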
I can strongly recommend attaching to the troublesome program with jvisualvm in the JDK.
This allows you to investigate memory and CPU usage over time and inspect what happens in general.