My Linux server needs to be able to handle 30+ Eclipse instances for developers. I did a quick test running 10 Eclipse instances. The Java process associated with each instance started at around 200 MB RSS and increased to around 550 MB as more projects were loaded.
But the Java process doesn't seem to release memory after closing/deleting all projects within the Eclipse instances; I still see it using over 550 MB RSS.
How can I change Eclipse or Java settings so that the memory footprint is reduced when developers close down projects or are idle for a while?
Thanks
You may want to experiment with these (and other) JVM tuning options to make the JVM less reluctant to return memory to the OS:
-XX:MaxHeapFreeRatio Maximum percentage of heap free after GC to avoid shrinking. Default is 70.
-XX:MinHeapFreeRatio Minimum percentage of heap free after GC to avoid expansion. Default is 40.
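For example, in eclipse.ini these could be passed after -vmargs (the values here are only starting points to experiment with, not recommendations):
-vmargs
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=30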
However, I suspect that you won't see the eclipse process shrink to anywhere near its initial size, since eclipse is a huge, complex application that probably lazy-loads (but does not unload, once used) a lot of classes and associated data structures.
I've never seen Java release memory.
I don't think you will get any value out of trying to get Eclipse to release memory. I've watched that little memory counter for YEARS and never once seen the allocated memory drop.
You might try one of these:
After each session, exit the JVM and restart.
Set your -Xmx lower.
Separate your instances into categories with high -Xmx and low -Xmx and let each user determine which one they want, for example via separate launch commands as sketched below.
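Something like two launchers might work for that last option; -vmargs passes the remaining arguments to the JVM, and the values here are just illustrative:
eclipse -vmargs -Xmx512m
eclipse -vmargs -Xmx1024m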
As a side thought, if it really mattered to you, you MIGHT be able to run multiple Eclipse instances under one VM. It would probably be WAY too much work (man-weeks to man-years), but if you could get it right you could reduce overhead by something like 150-200 MB/instance. The disadvantage would be that a VM crash (pretty rare these days) would kill everyone.
Testing this theory would be a matter of calling Eclipse's main from within an existing JVM and trying to get it to display somewhere useful. The rest of the man-year would be spent figuring out where they used evil static variables or singletons and changing them to something else.
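A minimal sketch of that first experiment, assuming the Equinox launcher is the entry point (the jar path and workspace location are placeholders, and a real attempt would need far more setup):

import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class MultiEclipseExperiment {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; the real launcher jar carries a version suffix.
        File launcherJar = new File("/opt/eclipse/plugins/org.eclipse.equinox.launcher.jar");
        // A null parent isolates each instance's classes; shared statics and
        // singletons are exactly where this experiment is expected to break.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { launcherJar.toURI().toURL() }, null);
        Class<?> main = loader.loadClass("org.eclipse.equinox.launcher.Main");
        // Invoke the static main(String[]) with a per-user workspace.
        main.getMethod("main", String[].class)
            .invoke(null, (Object) new String[] { "-data", "/workspaces/user1" });
    }
}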
Switch Java to the G1 garbage collector with the HeapFreeRatio parameters. Use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory will be released back to the operating system.
I would suggest looking into garbage collection; setting the right options, or even forcing GC periodically, might increase the time before Eclipse's memory usage grows too high.
The following link might be useful: http://www.eclipsezone.com/eclipse/forums/t93757.html
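If you do want to force a GC periodically, it can be triggered from outside the JVM with the JDK's jcmd tool (pid is a placeholder), e.g. from cron:
jcmd <pid> GC.run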
We have a Java 8 web application running on a Tomcat 8.5.47 server. We have only 20-60 user sessions at a time, but most of the time up to 600 MB of files being uploaded to the server. We also use Hibernate, and c3p0 to manage database connections.
We monitored the server for several days and saw that sometimes the RAM reserved by Java increased suddenly and the garbage collector did not release it. How can we manage this? Is there any way to release the reserved RAM and prevent Tomcat from increasing it, and also to decrease the RAM shown as used in Task Manager?
These are our settings:
-XX:MaxPermSize=1g -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MaxHeapFreeRatio=15 -XX:MinHeapFreeRatio=5 -XX:-UseGCOverheadLimit -Xmn1g -XX:+UseCompressedOops -Xms10g -Xmx56g
This is an image of the profiler when this happened:
And this is an image of the profiler and also Task Manager after 2 hours:
P.S. We use JProfiler to profile; the green colour shows reserved RAM and the blue colour used RAM. In the second box you can track GC activity, the third is for classes, the fourth shows thread activity, and the last is for CPU activity.
Thank you all for your answers.
These types of questions are never easy, mainly because to get them "right", the person asking needs some basic understanding of how an OS treats and deals with memory, and of the fact that there are different types of memory (at least resident, committed and reserved). I am by far not versed enough to get this entirely right either, but I keep learning and getting better at it. These terms mean very different things, and some of them are usually irrelevant (I find reserved to be such). You are using Windows; as such, this, imho, is a must-watch to begin with.
After you watch that, you need to move to the JVM world and how a JVM process manages its memory. The heap is managed by a garbage collector, so to shrink some unused heap, the GC needs to be able to do that. And while, before JDK 12, G1 could do that, it was never very eager to. Since JDK 12 there is a JEP (JEP 346) that will return memory back, i.e. it will un-commit memory. Be sure to read when that happens, though. Also notice that other collectors like Shenandoah and/or ZGC do it much more often.
Of course, since you disable the GC overhead limit (-XX:-UseGCOverheadLimit), you get a huge spike in CPU (GC threads are running like crazy to free space) and of course everything slows down. If I were you, I would enable that one back, let the GC fail, and analyze the GC logs to understand what is going on. 56 GB of heap is a huge number for 20-60 users (this surely looks like a leak?). Notice that without GC logs it might be impossible to give a solution to this.
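On Java 8, for example, GC logging can be switched on with flags along these lines (the log path is a placeholder):
-Xloggc:/var/log/tomcat/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps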
P.S. Look at the first screen you shared and notice how there are two colors there: green and blue. I don't know what tool that is, but it looks like green is "reserved memory" and blue is "used". It would be great if you said exactly what those are.
Java 8 doesn't return allocated RAM back to the OS even if the JVM doesn't need it. For that feature you need to move to another version of the JDK. This is the JEP for it: https://openjdk.java.net/jeps/346. It says the feature was delivered in version 12, so I assume JDKs from version 12 onwards have it.
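If you do move, JEP 346 also added a knob to trigger the periodic collections that hand memory back (interval in milliseconds; the value here is only illustrative):
-XX:G1PeriodicGCInterval=30000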
The only way to prevent the reserved memory from growing is to decrease the -Xmx value. Since you are setting it to 56g, I assume you are OK with Tomcat consuming up to 56 GB of memory. If you think that is too much, just decrease that number.
I'm running java with java -Xmx240g mypackage.myClass
OS is Ubuntu 12.10.
top says MiB Mem 245743 total, and shows that the java process has had 254g virt since the very beginning, with res steadily increasing up to 169g. At that point it looks like heavy garbage collection starts: the program is single-threaded at that point and CPU% is mostly 100% up to then, but there it jumps to around 1300-2000 (so I conclude the garbage collector is multithreaded), and res slowly moves to 172g. At that point java crashes with
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at the line with new double[2000][5]
java -version says
java version "1.7.0_15"
OpenJDK Runtime Environment (IcedTea7 2.3.7) (7u15-2.3.7-0ubuntu1~12.10)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
Hardware is Amazon cr1.8xlarge instance
It seems to me that java crashes even though there's a lot of memory available. That is clearly not possible, so I must be interpreting some numbers wrong. Where should I look to understand what's going on?
Edit:
I don't specify any GC options. The only command-line option is -Xmx240g
My program works successfully on many inputs, and top sometimes reported it using up to 98.3% of memory. However, I reproduced the situation described above with a certain program input.
Edit2:
This is a scientific application. It has a gigantic tree (1-10 million nodes); each node holds a couple of double arrays of size approx. 300x3 to 900x5. After the initial tree creation the program does not allocate much memory. Most of the time, arithmetic operations are performed on these arrays.
Edit3:
The HotSpot JVM died the same way: heavy CPU use at the 170-172g mark, then a crash with the same error. It looks like 70-75% of memory is some magical line that the JVM does not want to cross.
Final solution:
With -XX:+UseConcMarkSweepGC -XX:NewRatio=12 the program made it through the 170g mark and is happily working further.
Analysis
The first thing you need to do is get a heap dump so you can figure out exactly what the heap looks like when the JVM crashes. Add this set of flags to the command line:
-XX:+HeapDumpOnOutOfMemoryError -verbose:gc -XX:+PrintGCDetails
When a crash happens, the JVM is going to write the heap out to disk, and frankly, it's going to take a long time on a heap that size. Download Eclipse MAT, or install the plugin if you're already running Eclipse. From there you can load up the heap dump and run a couple of canned reports. Check the Leak Suspects and the Dominator Tree to see where your memory is going and to determine that you don't have an actual leak.
After that, I would recommend you read this document by Oracle about garbage collection; however, here are some things you can consider:
Concurrent GC
-XX:+UseConcMarkSweepGC
I've never heard of anyone getting away with using the parallel-only collector on a heap that size. You can activate the concurrent collector; you'll want to read up on incremental mode and determine whether it's right for your workload/hardware combo.
Heap Free Ratio
-XX:MinHeapFreeRatio=25
Dial this down to lower the bar for the garbage collector when it does a full collection. This may prevent you from running out of memory during a full collection. 40% is the default; experiment with smaller values.
New Ratio
-XX:NewRatio
We'd need to hear more about your actual workload: is this a webapp? A Swing app? How long objects are expected to remain alive on the heap will have an impact on the new-ratio value. Server-mode VMs like the one you're running have a fairly high new ratio by default (8:1); this may not be ideal if you have a lot of long-lived objects.
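Putting the suggestions above together, a starting point might look something like this (the values are things to experiment with, not a recommendation):
java -Xmx240g -XX:+UseConcMarkSweepGC -XX:MinHeapFreeRatio=25 -XX:NewRatio=12 -XX:+HeapDumpOnOutOfMemoryError -verbose:gc -XX:+PrintGCDetails mypackage.myClass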
If I understood your question correctly, it looks like the memory leak actually happens before the program hits the line new double[2000][5]. It seems memory is already low when that line is hit, so it blows up when the line asks for more.
I would use jvisualvm or a similar tool to find out where the memory leak is. The memory leaks I've encountered mostly had to do with Strings being created in a loop, caches not being cleared, etc.
As a general piece of advice: NEVER use OpenJDK, even less so for production environments; it is much slower than the one from Sun/Oracle.
Apart from that, I have never seen a VM using sooo much memory, but I guess that is what you need (or maybe you have code using more memory than needed?).
EDIT: OpenJDK for servers is fine; the only differences from the Sun/Oracle JDK concern desktop stuff (sound, GUI...), so ignore that part.
I'm experiencing a very odd problem with a Java application running under Tomcat.
We updated the production code with a fresh build produced in a one-week sprint. The application had been running for months without hiccups, and then this new code makes our Linux servers start swapping after some time.
The very strange thing is that, looking at memory usage in VisualVM, it never exceeds the maximum heap size, the JVM does not throw an OutOfMemoryError, the machine just starts swapping, and the JVM keeps running even after that.
So it seems that memory is leaking from somewhere. It seems to be from the new code, but it's odd that it's not inside the JVM. Any ideas on how to debug this?
Thanks!
Swapping is not a conclusive indicator of leakage; it results from low physical memory. Use vmstat on Linux to get swap usage. Try using a different machine and experiment with configurations: swap size, physical memory size, address space.
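For example (5 is the sampling interval in seconds; watch the si/so columns for swap-in/swap-out activity):
vmstat 5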
If you are confident that the problem is in your program, try this:
1. Estimate the median and peak memory that your program should use. You must be able to account for all deviations from these metrics. If you cannot, proceed to step 3.
2. Assuming you did step 1 correctly and were able to account for all deviations, you can rule out a leak (sorry about such vague suggestions, but debugging is only as good as the detective). You should now focus on GC tuning. First, enable GC logging. See if your heap is actually full and where the GC is spending most of its time collecting. This may be a good starting point for optimizations. Try to see if adjusting GC options helps. Experiment with collection algorithms, max/min heap sizes, gen ratios, etc. Only experiment once you have ruled out a leak (step 1).
3. Assuming you did step 1 correctly and were not able to account for all deviations, you can assume you have a leak somewhere. Use a memory profiler to see which objects contribute most to heap-size growth. Leave the profiler running for an extended period of time: have your program handle the requests it routinely expects to get, and then leave it relatively isolated after that. If the memory level keeps growing, you may have a leak; if not, then it is probably not a memory leak. Can you pinpoint the part of your program that may be creating the objects? If yes, try sending several requests that target only that part. Does that replicate the problem deterministically? If no, repeat step 3. If yes, use divide and conquer and reapply step 3 until you find the class/method that is the culprit. It can be a certain combination of multiple portions as well (meaning that individually they may look innocent but together they may form a brilliant crime syndicate).
Hope this helps; if not, please leave a comment on my post.
All the very best on your exercise!
I would suggest you look into creating dumps without using jvisualvm. For Unix-based Oracle JVMs a thread dump (with a heap summary) is normally triggered by sending signal 3 to the JVM using kill; a full heap dump can be written with jmap.
For full details see http://www.startux.de/index.php/java/45-java-heap-dumpyvComment45
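As a sketch (pid is a placeholder; signal 3 produces the thread dump, while jmap writes a full binary heap dump):
kill -3 <pid>
jmap -dump:live,format=b,file=heap.hprof <pid>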
You can then see if the pattern changes.
If you do not get an idea from this, it might be because you are storing a substring of a very large original string (which drags the underlying character array around), or because you are holding on to operating-system resources like open database connections.
Have you checked that your connection pool looks good?
If you aren't using it already, I'd recommend VisualVM version 1.3.2 with all the plug-ins. It's a big jump up from earlier versions.
What happens to the perm gen space?
What are the memory settings you're using? Min and max, of course, but what about perm space size?
I know how to set the Java heap size in Tomcat and Eclipse. My question is: why? Was there an arbitrary limit set on the initial heap back when Java was first introduced, so the VM wouldn't grow over a certain size? It seems that with the large memory available on most machines today, this isn't something we should have to deal with.
Thanks,
Tom
Even now, the heap doesn't grow without limit.
When the oldest generation is full, should you expand it or just GC? Or should you only expand it if a GC doesn't free any memory?
.NET takes the approach you'd like: you can't tell it to only use a certain amount of heap. Sometimes it feels like that's a better idea, but other times it's nice to be able to have two processes on the same machine and know that neither of them will be able to hog the whole of the memory...
I glanced at this the other day, but I'm not sure if this is what you want: -XX:+AggressiveHeap. According to Sun:
This option instructs the JVM to push memory use to the limit: the overall heap is more than 3850 MB, the allocation area of each thread is 256 K, the memory management policy defers collection as long as possible, and (beginning with J2SE 1.3.1_02) some GC activity is done in parallel. Because this option sets heap size, do not use the -Xms or -Xmx options in conjunction with -XX:+AggressiveHeap. Doing so will cause the options to override each other's settings for heap size.
I wasn't sure if this really meant what I thought it meant, though: that you could just let the JVM gobble up heap space until it is satisfied. However, it doesn't sound like a good option for most situations.
I would think that it's good to be able to provide a limit so that if you have a memory issue it doesn't gobble up all the system memory leaving you with only a reboot option.
Java is a cross-platform system. Some systems (like Unix and derivatives) have a ulimit command which allows you to limit how much memory a process can use; others don't. Plus, Java is sometimes run embedded, for example in a web browser. You don't want a broken applet to bring down your desktop (well, that was at least the idea, but applets never really caught on; that's another story). Essentially, this option is one of the key cornerstones of sandboxing.
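For reference, this is roughly what that per-process cap looks like on such systems (the size, in kilobytes of address space, and the class name are placeholders):
ulimit -v 4194304
java MyApp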
So the VM developers needed a portable solution: They added an option to the VM which would allow anyone (user, admin, web browser) to control how much RAM a VM could allocate tops. The needs of the various uses of Java are just too diverse to have one size fits all.
This becomes even more important today when you look at mobile devices. Your desktop has 2-8 GB of RAM, but your mobile probably has much less. And for those devices you really don't want one bad app to bring down the whole thing, because there might not even be a user around to check.
I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp the heap up to a gig. That got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that pointed to garbage collection. I now use it with -Xms512m and -Xmx768m, and I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smoothly, doesn't freeze anymore, and never uses more than 600 MB of heap.
For your application you have to use a similar approach: try to determine the typical memory usage and tune your heap so the application runs well in those conditions. But also let advanced users tune the setting, to address out-of-the-ordinary data loads.
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but it's also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually, I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems to be the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects unless there is a necessity to do so. That means if you choose the heap too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1-2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
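In flag form that suggestion would look something like this (the numbers are purely illustrative; MyApp is a placeholder):
java -Xms64m -Xmx1536m MyApp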
Of course, this refers to normal PCs. If you create Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap sizes to the limitations of that device.
Typically I try not to use heaps larger than 1 GB.
Large heaps will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than use large heap sizes.
A major collection with a large heap size can take >10 minutes (on unoptimized GC applications).
This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
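For example (pid is a placeholder; on recent JDKs some of these modes have moved to jhsdb jmap):
jmap -heap <pid>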
You need to spend quite some time in JConsole or VisualVM to get a clear picture of what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be at 70-80% of your heap, depending on what garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage. This percentage is between 60% and 80% of the max heap, depending on the strategy involved.
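With CMS, for instance, that trigger can be pinned explicitly (the percentage refers to old-generation occupancy, and 70 is just an illustrative value):
-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly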
1.3 GB for a heavy GUI application.
Unfortunately, on Linux the JVM seems to reserve 1.3 GB of virtual memory up front in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users).
On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC