I have run into an issue where a Java application I wrote is causing performance problems on the machines it runs on. The problem (I'm fairly certain) is that a few of the machines I'm running the application on only have 1GB of memory. When I start my Java application, I'm setting the heap size with -Xms512m -Xmx1024m.
My first question: is my assumption correct that this will obviously cause performance problems because I'm allocating all of the machine's memory to the Java heap?
This leads to another question. I'm running jconsole against the app and monitoring its memory usage. What I'm seeing is that the app consumes about 30MB at startup, climbs to about 150MB, then the garbage collector runs and it drops back down to 30MB. What I'm also seeing, using top on the PID, is that the process starts at about 6% memory and slowly climbs to about 20%. I don't understand this. Why would it only get up to 20% memory usage when I'm allocating 1GB to it? Shouldn't it go to 100%? And why is it using that much memory (20%) when the app never appears to use more than 150MB?
I think it's pretty obvious I need to adjust my -Xms and -Xmx and that should resolve the issue, but I'm trying to understand better what exactly is happening.
Two possibilities for the memory use:
Your app just does not use that much memory
Or
Your app does not use that much memory fast enough.
What happens:
The garbage collector has several points where it will execute:
Just scheduled: it will clean up easy-to-remove objects
Full collection: This runs when you hit the set memory limits.
If option 1, the generally much lower-impact quick collection, can keep your memory use under control, the JVM will not run a full collection unless its GC options are set to run one on a schedule.
With your application I would start by setting lower -Xms/-Xmx values so that more guaranteed resources are left for the OS, and maybe some paging is prevented.
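For example (the numbers here are only an illustration; the right values depend on what else the machine has to run), on a 1GB box you might start with something like:

-Xms128m -Xmx512m

and then watch the app in jconsole/top to see whether that ceiling is ever actually approached.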
Related
I have a binary that contains a list of short strings which is loaded on startup and stored in memory as a map from string to protobuf (that contains the string..). (Not ideal, but hard to change that design due to legacy issues)
Recently that list has grown from ~2M to ~20M entries causing it to fail when constructing the map.
First I got OutOfMemoryError: Java heap space.
When I increased the heap size using -Xms and -Xmx, we ran into GC overhead limit exceeded.
Runs on a Linux 64-bit machine with 15GB available memory and the following JVM args (I increased the RAM 10G->15G and the heap flags 6000M -> 9000M):
-Xms9000M -Xmx9000M -XX:PermSize=512m -XX:MaxPermSize=2018m
This binary does a whole lot of things and is serving live traffic so I can't afford it being occasionally stuck.
Edit: I eventually went and did the obvious thing, which is fixing the code (change from HashMap to ImmutableSet) and adding more RAM (-Xmx11000M).
I'm looking for a temporary solution if that's possible until we have a more scalable one.
First, you need to figure out if the "OOME: GC overhead limit exceeded" is due to the heap being:
too small ... causing the JVM to do repeated Full GCs, or
too large ... causing the JVM to thrash the virtual memory when a Full GC is run.
You should be able to distinguish these two cases by turning on and examining the GC logs, and using OS-level monitoring tools to check for excessive paging loads. (When checking the paging levels, also check that the problem isn't due to competition for RAM between your JVM and another memory-hungry application.)
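If GC logging isn't on already, then for the JVM generation your PermGen flags imply (pre-Java 9), something along these lines should produce logs you can feed to a GC log analyser (the log path is just a placeholder):

-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/myapp/gc.log

For the paging check, watching the si/so columns of vmstat (e.g. vmstat 5) while a Full GC runs is usually enough to tell whether the machine is swapping.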
If the heap is too small, try making it bigger. If it is too big, make it smaller. If your system is showing both symptoms ... then you have a big problem.
You should also check that "compressed oops" is enabled for your JVM, as that will reduce your JVM's memory footprint. The -XshowSettings option lists the settings in effect when the JVM starts. Use -XX:+UseCompressedOops to enable compressed oops if they are disabled.
(You will probably find that compressed oops are enabled by default, but it is worth checking. This would be an easy fix ... )
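One quick way to check (an alternative to -XshowSettings) is to print the final flag values and look for UseCompressedOops:

java -XX:+PrintFlagsFinal -version | grep UseCompressedOops

On 64-bit JVMs from roughly Java 6u23 onwards it should show as true by default for heaps below about 32GB.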
If none of the above work, then your only quick fix is to get more RAM.
But obviously, the real solution is to reengineer the code so that you don't need a huge (and increasing over time) in-memory data structure.
We developed a highly CPU-intensive Java server application that seems to have a serious memory leak. As time passes, the application appears to eat up ever more memory (as seen in Windows Task Manager), but if I analyse it with a specialized Java profiler the memory seems to stay the same. For example, in Task Manager I see the application taking over 8GB of memory and growing, but in the Java profiler I see that heap memory is at most 2GB. I tried all possible combinations of JAVA_OPTS (-Xmx, -Xms, all types of GC) and nothing worked. Is the Java process not releasing memory back to the OS? Is there any way to force it to do so?
1)
I suggest you set -Xmx2100m and observe heap usage under load.
The JVM may take as much OS memory as it decides it needs to perform well, up to the -Xmx limit. In modern JVMs the default -Xmx is calculated from the total memory available in the OS, so it may be a large value.
I think your app does not have a memory leak; your JVM simply allocates a lot of memory, because it can.
Observe your JVM through jvisualvm.
2)
Second suggestion: do you use any JNI code? Does your app call any native library (e.g. a DLL under Windows)?
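If native/JNI code is in play, one way to narrow things down (assuming you are on Java 8 or later) is the JVM's Native Memory Tracking: start the process with

-XX:NativeMemoryTracking=summary

and then query it with

jcmd <pid> VM.native_memory summary

Note that NMT only accounts for memory the JVM itself allocates; a native library calling malloc directly won't show up there, but it at least lets you rule the JVM's own native allocations in or out.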
Our JBoss 3.2.6 application server is having some performance issues, and after turning on verbose GC logging and analyzing these logs with GCViewer we've noticed that after a while (7 to 35 hours after a server restart) the GC goes crazy. It seems that initially the GC is working fine and doing a GC every hour or so, but at a certain point it starts performing full GCs every minute. As this only happens in our production environment, we have not been able to try turning off explicit GCs (-XX:+DisableExplicitGC) or modify the RMI GC interval yet, but as this happens only after a few hours it does not seem to be caused by the known RMI GC issues.
Any ideas?
Update:
I'm not able to post the GCViewer output just yet but it does not seem to be hitting the max heap limitations at all. Before the GC goes crazy it is GC-ing just fine but when the GC goes crazy the heap doesn't get above 2GB (24GB max).
Besides RMI are there any other ways explicit GC can be triggered? (I checked our code and no calls to System.gc() are being made)
Is your heap filling up? Sometimes the VM will get stuck in a 'GC loop' when it can free up just enough memory to prevent a real OutOfMemoryError but not enough to actually keep the application running steadily.
Normally this would trigger an "OutOfMemoryError: GC overhead limit exceeded", but there is a certain threshold that must be crossed before this happens (98% of CPU time spent on GC, off the top of my head).
Have you tried enlarging heap size? Have you inspected your code / used a profiler to detect memory leaks?
You almost certainly have a memory leak, and if you let the application server continue to run it will eventually crash with an OutOfMemoryError. You need to use a memory analysis tool - one example would be VisualVM - and determine the source of the problem. Usually memory leaks are caused by some static or global objects that never release the object references they store.
Good luck!
Update:
Rereading your question, it sounds like things are fine and then suddenly you get into this situation where GC is working much harder to reclaim space. That sounds like there is some specific operation that consumes (and doesn't release) a large amount of heap.
Perhaps, as #Tim suggests, your heap requirements are just at the threshold of max heap size, but in my experience you'd need to be pretty lucky to hit that exactly. At any rate, some analysis should determine whether it is a leak or you just need to increase the size of the heap.
Apart from the more likely event of a memory leak in your application, there could be 1-2 other reasons for this.
On a Solaris environment, I once had such an issue when I allocated almost all of the available 4GB of physical memory to the JVM, leaving only around 200-300MB to the operating system. This led to the VM process suddenly swapping to disk whenever the OS was under increased load. The solution was not to exceed 3.2GB. A real corner case, but maybe it's the same issue as yours?
The reason this led to increased GC activity is that heavy swapping slows down the JVM's memory management, causing many short-lived objects to escape the survivor space and end up in the tenured space, which in turn filled up much more quickly.
I recommend doing a stack dump when this happens.
More often than not, I have seen this happen with a thread population explosion.
Anyway, look at the stack dump file and see what's running. You could easily set up some cron jobs or monitoring scripts to run jstack periodically.
You can also compare the size of the stack dumps. If they grow really big, you have something that's making lots of threads.
If it doesn't get bigger you can at least see which objects (call stacks) are running.
You can use VisualVM or some fancy JMX crap later if that doesn't work, but first start with jstack as it's easy to use.
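A minimal sketch of the kind of monitoring script mentioned above (assuming the JDK's jstack is on the PATH and <pid> is your server's process id):

while true; do jstack <pid> > /tmp/jstack-$(date +%s).txt; sleep 300; done

Comparing the sizes and contents of consecutive dumps is usually enough to spot a thread population explosion.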
My Linux server needs to be able to handle 30+ Eclipse instances for developers. I did a quick test of running 10 Eclipse instances. The Java process associated with each Eclipse initially used around 200MB of RSS memory, increasing to around 550MB as more projects were loaded.
But the Java process doesn't seem to release memory after closing/deleting all projects within the Eclipse instances. I still see it using over 550MB RSS.
How can I change Eclipse or Java settings so that the memory footprint is reduced when developers close projects or are idle for a while?
Thanks
You may want to experiment with these (and other) JVM tuning options to make the JVM less reluctant to return memory to the OS:
-XX:MaxHeapFreeRatio Maximum percentage of heap free after GC to avoid shrinking. Default is 70.
-XX:MinHeapFreeRatio Minimum percentage of heap free after GC to avoid expansion. Default is 40.
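For instance (illustrative values only), making the JVM keep less free heap around after a collection could look like:

-XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30

Note that with the default collector on older JVMs the effect on actual RSS can be modest; the G1 suggestion further down combines these ratios with a collector that is more willing to shrink the heap.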
However, I suspect that you won't see the eclipse process shrink to anywhere near its initial size, since eclipse is a huge, complex application that probably lazy-loads (but does not unload, once used) a lot of classes and associated data structures.
I've never seen Java release memory.
I don't think you will get any value out of trying to get it to release memory with Eclipse; I've watched that little memory counter for YEARS and never once seen the allocated memory drop.
You might try one of these.
After each session, exit the JVM and restart.
Set your -Xmx lower.
Separate your instances into categories with high -Xmx and low -Xmx and let the user determine which one he wants.
As a side-thought, if it really mattered to you, you MIGHT be able to run multiple eclipse instances under one VM. It would probably be WAY too much work (man-weeks to man-years), but if you could get it right you could reduce overhead by like 150-200mb/instance. The disadvantage would be that a VM crash (Pretty rare these days) would kill everyone.
Testing this theory would be a matter of calling eclipse's main from within an existing JVM and trying to get it to display somewhere useful. The rest of the man-year is spent trying to figure out where they used evil static variables or singletons and changing them to something else.
Switch Java to use the G1 garbage collector with the HeapFreeRatio parameters. Use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1GB of RAM for a complicated operation and drops back to 300MB after garbage collection, the memory will be released back to the operating system.
I would suggest checking on garbage collection; setting the right options or even forcing GC periodically might increase the time until Eclipse memory usage grows high.
The following link might be useful: http://www.eclipsezone.com/eclipse/forums/t93757.html
I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 gig. This got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that triggered the garbage collection idea. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smooth, doesn't freeze anymore and never uses more than 600m of heap.
For your application you have to use a similar approach. Try to determine the typical memory usage and tune your heap for the application to run well under those conditions. But also let advanced users tune the setting to address out-of-the-ordinary data loads.
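As a rough way to observe typical usage from inside the application itself (a minimal sketch; the class name and sample interval are just illustrative), you can periodically log how much of the heap is actually in use:

public class HeapUsageLogger {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            Runtime rt = Runtime.getRuntime();
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            long committedMb = rt.totalMemory() / (1024 * 1024);
            // Log the current heap footprint; run this under realistic load
            // to get a feel for the plateau before picking -Xms/-Xmx.
            System.out.println("heap used: " + usedMb + " MB of " + committedMb + " MB committed");
            Thread.sleep(10_000); // sample every 10 seconds
        }
    }
}

In a real application you would hang this off an existing scheduler or your monitoring framework rather than a standalone main.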
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but it's also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually, I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems to be the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects unless there is a necessity to do so. That means if you choose the heap too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1-2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this is referring to normal PCs. If you create Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap size to the limitations of that device.
Typically I try not to use heaps larger than 1GB.
It will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than using a large heap size.
A major collection with a large heap can take more than 10 minutes (with an untuned GC).
This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
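For example, on JDK 8 and earlier something like

jmap -heap <pid>

prints the configured and currently used generation sizes, and

jmap -histo <pid>

gives a per-class histogram that helps show what the heap is actually full of.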
You need to spend quite some time in JConsole or VisualVM to get a clear picture of what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be your 70-80% heap, depending on what garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage. This percentage is from 60% to 80% of max heap, depending on what strategy is involved.
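As an example, with the CMS collector that trigger point can be pinned explicitly (values are only illustrative):

-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly

which starts the concurrent cycle once the old generation is about 70% full instead of letting the JVM pick the threshold adaptively.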
1.3Gb for a heavy GUI application.
Unfortunately, on Linux the JVM seems to pre-request 1.3GB of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users).
On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC