Is my Glassfish setup leaking memory?

I've got a Glassfish v3 server running a few web applications (servlets, JSP, JDBC). I've noticed that if I let Glassfish run for a long time, it consumes all of the available memory (this is running on a server with 750 MB of memory).
I figured that there must be a memory leak, so I ran the server while monitoring it with JProfiler and noticed that when I get a peak in traffic, my memory usage shoots up (as expected), but then quickly drops back down.
I'm wondering if the issue is less of a memory leak, and more that Glassfish expands its heap size when the spikes occur (this does seem to be happening) but never decreases the heap size when the actual memory usage declines.
However, based on this graph, it does seem like the memory usage (blue) is trending upwards as the server runs longer.
My question is two-fold:
Is there any way to have the heap size decreased when the actual memory usage drops after a spike?
Is it probable that I do have a memory leak, or is this normal? What can I do to investigate this memory usage further?

It does not look like a memory leak: with a real leak, memory would keep growing forever and would eventually start blowing up with OOM errors. This is most likely the HotSpot compiler turning interpreted bytecode into native code, which is surely going to claim memory and never give it back, as compiled code lives outside the Java heap (in the code cache).
You should probably use a tool like JConsole or VisualVM to confirm whether this really is a leak or something else.
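If you want a continuous record without keeping a profiler attached, you can also log the heap numbers from inside the JVM itself. A minimal sketch using only the standard java.lang.management API (the class name and the 10-second interval are my own choices):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class HeapLogger {
        public static void main(String[] args) throws InterruptedException {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            while (true) {
                MemoryUsage heap = memory.getHeapMemoryUsage();
                // "used" is live data plus not-yet-collected garbage;
                // "committed" is what the JVM has actually reserved from the OS.
                // A leak shows up as "used" trending upward across full GCs.
                System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20,
                        heap.getMax() >> 20);
                Thread.sleep(10000); // sample every 10 seconds
            }
        }
    }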

For 1: the heap will never shrink below -Xms, and in practice HotSpot is reluctant to return memory to the OS, but you can encourage it to shrink by tuning -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio.
For 2: you can use VisualVM to see where you are actually using memory.

Related

Memory running outside of Xmx and Xms

I have run into an issue with a Java application I wrote causing hardware performance issues. The problem (I'm fairly certain) is that a few of the machines I'm running the application on only have 1GB of memory. When I start my Java application, I'm setting the heap size with -Xms512m -Xmx1024m.
My first question: is my assumption correct that this will obviously cause performance problems because I'm allocating all of the machine's memory to the Java heap?
This leads to another question. I'm running JConsole on the app and monitoring its memory usage. What I'm seeing is that the app consumes about 30MB at startup, climbs to about 150MB, and then the garbage collector runs and it drops back down to 30MB. What I'm also seeing, using top on the PID, is that the application starts by using about 6% of memory and then slowly climbs to about 20%. I do not understand this. Why would it only get up to 20% memory usage when I'm allocating 1GB to it? Shouldn't it go to 100%? Also, why is it using that much memory (20%) when it doesn't appear that the app ever uses more than 150MB?
I think it's pretty obvious I need to adjust my Xms and Xmx, and that should resolve the issue, but I'm trying to understand better what exactly is happening.
Two possibilities for the memory use:
Your app just does not use that much memory, or
Your app does not use that much memory fast enough.
What happens:
The garbage collector has several points where it will execute:
Just scheduled: it cleans up easy-to-remove objects.
Full collection: this runs when you hit the configured memory limits.
If option 1, the generally much lower-impact quick collection, can keep your memory use under control, the JVM will not run a full collection unless the GC options are set to force one on a schedule.
With your application I would start by setting lower -Xmx/-Xms values so that more resources are guaranteed for the OS, which may also prevent some paging.
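As for why top only shows 20%, the gap comes down to the difference between used, committed, and max memory: the JVM only commits heap toward -Xmx on demand. A small sketch using the standard Runtime API (nothing application-specific) that prints the three numbers:

    public class HeapNumbers {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // totalMemory(): heap currently committed (reserved from the OS);
            // roughly the heap's share of what top reports for the process.
            // maxMemory(): the -Xmx ceiling, which the JVM grows toward on demand.
            // freeMemory(): unused space inside the committed heap.
            long usedMb = (rt.totalMemory() - rt.freeMemory()) >> 20;
            long committedMb = rt.totalMemory() >> 20;
            long maxMb = rt.maxMemory() >> 20;
            System.out.println("used=" + usedMb + "MB committed=" + committedMb
                    + "MB max=" + maxMb + "MB");
        }
    }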

Java memory usage via Linux top keeps increasing in small percentages?

We have a socket listener program running on a CentOS machine. What is worrying is that the memory usage for the application shown by top keeps showing some minor increments. On the other hand, if we use jstat -gcutil, it shows some minor increase in the Permanent Generation, but so far there have been no FGCs (full collections), only many YGCs (young collections). Could this be indicating a memory issue? Both the max and initial heap sizes have been set to 256M.
Could this be indicating any memory issue?
Maybe. What you are describing could be a memory leak caused by a bug in your application. If that is the problem, then eventually the application will fill up the Java heap .... and die with an OutOfMemoryError.
If you want to confirm this, try running the application with a much smaller heap; i.e. a smaller max heap size. If you have a leak, the application will crash after a shorter time.
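If you want to see what that test looks like with a guaranteed leak, here is a deliberately leaky toy program (entirely made up for illustration): run it with a small heap such as -Xmx16m and it dies in seconds; with a bigger heap it just dies later.

    import java.util.ArrayList;
    import java.util.List;

    public class LeakDemo {
        // A static collection that is only ever appended to is the classic
        // leak pattern: every element stays reachable forever, so no GC
        // can ever reclaim it.
        private static final List<byte[]> RETAINED = new ArrayList<byte[]>();

        public static void main(String[] args) {
            while (true) {
                RETAINED.add(new byte[100 * 1024]); // "leak" 100 KB per iteration
            }
        }
    }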
There are lots of resources on finding Java memory leaks. Here are some:
General strategy to resolve Java memory leak?
How to find a Java Memory Leak
http://netbeans.org/kb/articles/nb-profiler-uncoveringleaks_pt1.html
http://rejeev.blogspot.com.au/2009/04/analyzing-memory-leak-in-java.html
There are other possible explanations for this ... including "there is no problem". But if you get OOMEs then you do have a real problem.
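Since the question specifically mentions the Permanent Generation creeping up in jstat -gcutil, you can also watch that pool from inside the process with the standard management API. A sketch, with the caveat that the pool name varies by collector (typically something like "Perm Gen" or "PS Perm Gen"):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PermGenWatch {
        public static void main(String[] args) throws InterruptedException {
            MemoryPoolMXBean permGen = null;
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getName().contains("Perm")) { // name depends on the GC in use
                    permGen = pool;
                }
            }
            while (permGen != null) {
                // The same number jstat -gcutil reports in its P column,
                // here in megabytes rather than a percentage.
                System.out.println(permGen.getName() + ": "
                        + (permGen.getUsage().getUsed() >> 20) + "MB used");
                Thread.sleep(60000); // sample once a minute
            }
        }
    }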

Memory UsageThreshold in JConsole

I am looking into how to use JConsole to detect memory leaks.
I see that in the Memory Pool MBeans I can define a UsageThreshold for my Tenured Generation.
So if my application exceeds this threshold, the heap memory becomes red in the Memory tab.
Question: How does this help? I mean how am I supposed to use this setting to analyze my memory? How am I supposed to figure out this value?
I don't think the UsageThreshold parameter is the most helpful one for detecting memory leaks (but if someone knows some tricks with it, please do share). In my experience that parameter is more helpful for visually understanding whether my application is getting way too near my max heap size and is in danger of an OutOfMemoryError.
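For context, the field you fill in under the Memory Pool MBean maps to the MemoryPoolMXBean API, so the same threshold can be set programmatically. A sketch (the 100 MB value is just an example):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class ThresholdDemo {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // Not every pool supports usage thresholds (Eden, for instance,
                // does not), so check before setting one.
                if (pool.isUsageThresholdSupported()) {
                    // Flag the pool once it holds more than 100 MB -- the same
                    // value you would type into JConsole's UsageThreshold field.
                    pool.setUsageThreshold(100L * 1024 * 1024);
                    System.out.println(pool.getName() + ": threshold exceeded "
                            + pool.getUsageThresholdCount() + " times so far");
                }
            }
        }
    }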
Still regarding using JConsole to search for memory leaks, I don't think there's a silver bullet for the process. But what I usually do is the following:
If a memory leak exists, the leaking objects won't get collected; hence, your Tenured Generation won't fully recover after any number of GCs.
With the application running, I connect JConsole and try to spot a leak by observing the Memory tab. If, after several computations in my application and after various GCs (including pressing the Perform GC button, which triggers a full GC), the memory never goes back down to, or at least near, the value where tracking started, there's a good chance something is leaking. When the leak is big, you can even see a "staircase" pattern in your memory graph.
Keep in mind that if your application runs long computations that consume memory, this analysis must be done carefully: you must know when those processes have finished. For example, run just one of those computations and track the total evolution of memory before, during, and afterwards.
Also, I suggest you try VisualVM instead, because it also allows you to create heap dumps, which you can use to understand which objects are still in memory and to explore the reference graph to understand why they are not being collected.
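If you would rather capture such a dump from code (handy on a headless server), the HotSpot-specific diagnostic bean can write one. A sketch, assuming a HotSpot JVM, since the com.sun.management API is not part of the Java standard (the output path is just an example):

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class DumpHeap {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live objects; the resulting .hprof file can
            // then be opened in VisualVM or Eclipse MAT.
            bean.dumpHeap("/tmp/app-heap.hprof", true);
        }
    }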
You can use jmap to see the histogram and/or to create heap dumps, and study your memory consumption with tools like Eclipse MAT or YourKit.
JConsole is used more for monitoring and running MBeans and less for analysis; in my experience VisualVM is better for that, since you can use it for sampling your code and seeing which methods are consuming CPU.

Garbage Collector going crazy after a few hours

Our JBoss 3.2.6 application server is having some performance issues. After turning on verbose GC logging and analyzing the logs with GCViewer, we've noticed that after a while (7 to 35 hours after a server restart) the GC goes crazy. Initially the GC works fine, running roughly once an hour, but at a certain point it starts performing full GCs every minute. As this only happens in our production environment, we have not yet been able to try turning off explicit GCs (-XX:+DisableExplicitGC) or modifying the RMI GC interval, but since it takes hours to appear, it does not seem to be caused by the known RMI GC issues.
Any ideas?
Update:
I'm not able to post the GCViewer output just yet, but the heap does not seem to be hitting its limits at all. Before the GC goes crazy it is GC-ing just fine, and even when the GC goes crazy the heap doesn't get above 2GB (of a 24GB max).
Besides RMI are there any other ways explicit GC can be triggered? (I checked our code and no calls to System.gc() are being made)
Is your heap filling up? Sometimes the VM will get stuck in a 'GC loop' when it can free up just enough memory to prevent a real OutOfMemoryError but not enough to actually keep the application running steadily.
Normally this would trigger an "OutOfMemoryError: GC overhead limit exceeded", but there is a certain threshold that must be crossed before this happens (off the top of my head, something like 98% of time spent in GC while recovering less than 2% of the heap).
Have you tried enlarging heap size? Have you inspected your code / used a profiler to detect memory leaks?
You almost certainly have a memory leak, and if you let the application server continue to run it will eventually crash with an OutOfMemoryError. You need to use a memory analysis tool - one example would be VisualVM - and determine the source of the problem. Usually memory leaks are caused by static or global objects that never release the object references they store.
Good luck!
Update:
Rereading your question it sounds like things are fine and then suddenly you get in this situation where GC is working much harder to reclaim space. That sounds like there is some specific operation that occurs that consumes (and doesn't release) a large amount of heap.
Perhaps, as @Tim suggests, your heap requirements are just at the threshold of the max heap size, but in my experience you'd need to be pretty lucky to hit that exactly. At any rate, some analysis should determine whether it is a leak or you just need to increase the size of the heap.
Apart from the more likely event of a memory leak in your application, there could be 1-2 other reasons for this.
On a Solaris environment, I once had such an issue when I allocated almost all of the available 4GB of physical memory to the JVM, leaving only around 200-300MB to the operating system. This led to the VM process suddenly swapping to disk whenever the OS came under increased load. The solution was not to exceed 3.2GB. A real corner case, but maybe it's the same issue as yours?
The reason this led to increased GC activity is that heavy swapping slows down the JVM's memory management, which causes many short-lived objects to escape the survivor space and end up in the tenured space, which in turn fills up much more quickly.
I recommend that you take a thread dump when this happens.
More often than not, I have seen this happen with a thread population explosion.
Anyway, look at the dump file and see what's running. You could easily set up some cron jobs or monitoring scripts to run jstack periodically.
You can also compare the sizes of successive dumps. If they grow really big, you have something that's creating lots of threads.
If they don't get bigger, you can at least see which objects (call stacks) are running.
You can use VisualVM or some fancy JMX crap later if that doesn't work, but first start with jstack, as it's easy to use.
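If you'd rather watch for a thread population explosion from inside the JVM instead of cron plus jstack, the standard ThreadMXBean exposes the same head count. A rough sketch (the interval and the 500-thread threshold are arbitrary choices of mine):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadWatch {
        public static void main(String[] args) throws InterruptedException {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            while (true) {
                int count = threads.getThreadCount();
                System.out.println("live threads: " + count
                        + " (peak: " + threads.getPeakThreadCount() + ")");
                if (count > 500) { // arbitrary threshold for illustration
                    // A jstack dump taken now would show which call stacks
                    // are spawning all those threads.
                    System.out.println("possible thread explosion - take a jstack dump");
                }
                Thread.sleep(60000); // sample once a minute
            }
        }
    }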

Java/Tomcat heap size question

I am not a Java dev, but an app landed on my desk. It's a web-service server-side app that runs in a Tomcat container. The users hit it up from a client application.
The users constantly complain about how slow it is and the app has to be restarted about twice a week, cause things get really bad.
The previous developer told me that the app simply runs out of memory (as it loads more data over time) and eventually spends all its time doing garbage collection. Meanwhile, the Heap Size for Tomcat is set at 6GB. The box itself has 32GB of RAM.
Is there any harm in increasing the Heap Size to 16GB?
Seems like an easy way to fix the issue, but I am no Java expert.
You should identify the leak and fix it, not add more heap space. That's just a stopgap.
You should configure Tomcat to dump the heap on error (-XX:+HeapDumpOnOutOfMemoryError, optionally with -XX:HeapDumpPath to choose the location), then analyze the heap in any of a number of tools after a crash. You can compute the retained sizes of all the classes, which should give you a very clear picture of what is wrong.
In my profile I have a link to a blog post about this, since I had to do it recently.
No, there is no harm in increasing the Heap Size to 16GB.
The previous developer told me that the app simply runs out of memory (as it loads more data over time)
This looks like a memory leak, a serious bug in the application. If you increase the amount of memory available from 6 to 16 GiB, you're still going to have to restart the application, only less frequently. Some experienced developer should take a look at the application heap while it's running (look at hvgotcodes' tips) and fix the application.
To resolve these issues you need to do performance testing, including both CPU and memory analysis. The JDK (6) bundles a tool called VisualVM; on my Mac OS X machine it is on the path by default as "jvisualvm". It's free and bundled, so it's a good place to start.
Next up is the NetBeans Profiler (netbeans.org). That does more memory and CPU analysis. It's free as well, but a bit more complicated.
If you can spend the money, I highly recommend YourKit (http://www.yourkit.com/). It's not terribly expensive but it has a lot of built-in diagnostics that make it easier to figure out what's going on.
The one thing you can't do is assume that just adding more memory will fix the problem. If it's a leak, adding more memory may just make it run really badly a bit longer between restarts.
I suggest you use a profiling tool like JProfiler, VisualVM, JConsole, or YourKit. You can take a heap dump of your application and analyze which objects are eating up memory.
