JVM performance with these garbage collection settings

I have an enterprise-level Java application that serves a few thousand users per day. It is a JAXB web service on WebLogic 10.3.6 (Java 1.6 JVM), using Hibernate to hit an Oracle database. It also calls other web services.
We have tuned the following GC settings on our production system:
-server -Xms2048m -Xmx2048m -XX:PermSize=512m -XX:MaxPermSize=512m
What is the effect of this GC sizing? The hardware has more than enough capacity to handle it.
I know that this sets the heap size and perm gen at a stable level. But what's the impact of that when you eventually have to do garbage collection?
To me it seems that it would make GC happen less frequently, but take longer when it does happen. Does that sound correct?

I would say: monitor the GC before deciding on the sizing, as you never know how the application will behave under load. Have a look at this link; it has some good references about GC and tools for calculating the sizing.
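As a starting point for that monitoring, here is a minimal sketch (class name and sampling interval are my own illustrative choices) that reports GC frequency and duration from inside the application using the standard management beans:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcMonitor {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // counts and times are cumulative since JVM start
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            Thread.sleep(60_000); // sample once a minute
        }
    }
}

Comparing successive samples tells you both how often collections happen and how long they take, which is exactly the trade-off the question asks about.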

it would make GC happen less frequently, but take longer when it does happen
It might; it depends on your use case. You might even find that the GC is shorter in rare cases.
A 2 GB heap isn't that much, and I would use up to 26 GB without worrying about heap size. Above that size the JVM can no longer use compressed object pointers, so memory accesses are a little slower or objects use more memory.

Setting -Xmx/-Xms and PermSize/MaxPermSize to equal values stops the JVM from resizing the heaps at runtime. These resizes are expensive because they trigger a Full GC.
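As a quick sanity check (a sketch, with an illustrative class name): with -Xms equal to -Xmx, the committed heap should match the maximum from startup, so the two numbers below should be roughly equal.

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the committed heap, maxMemory() the -Xmx ceiling
        System.out.printf("total=%d MB, max=%d MB%n",
                rt.totalMemory() >> 20, rt.maxMemory() >> 20);
    }
}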
-server allows the JVM to use the server compiler, which performs more aggressive optimizations when compiling your code to native instructions. Nowadays, though, any machine with 2 or more cores and 2 GB+ of memory gets the server compiler by default.
Increasing the memory doesn't always fix a problem; sometimes more memory just adds overhead.
If you need details regarding GC, you can try this link.
The very reason to tune something is to improve your application's performance and thereby achieve your throughput and latency goals.

How much headroom to leave between Java Xmx and Docker container RAM size?

I'm 100% aware of the pitfalls of the JVM and the strides that have been made in the JVM world toward understanding the ergonomics/resources of a container. Also, yes, I know that by default Java tries to allocate 1/4 of the RAM available in the environment it's running in...
So I had some thoughts and questions. I have made a 50/50 rule: if my application needs 1 GB of Xmx, then I create a 2 GB container, which gives another 1 GB of RAM for JVM overhead and any container/swap stuff (though I'm not sure how swap really works within a container).
So I was thinking: if my application needs 6 GB of Xmx, do I really need to create a 12 GB container, or can I get away with a 7 GB or 8 GB container? How much headroom do we need to give inside the container when it comes to RAM?
If your container is dedicated to the JVM, then you don't need to use a percentage.
You need to know how much Java heap you need -- in this case 6 GB -- and how much everything else needs, which it looks like you've already proven is less than 2 GB.
Then just add them up; an 8 GB container should be fine in this case. Note that stack and heap memory are separate, though, so if you need a large number of threads running simultaneously, don't forget to add 1 MB of stack space for each one (or whatever you set with -Xss); a sketch of the arithmetic follows below.
ALSO, I really recommend setting the minimum and maximum Java heap size to the SAME value -- 6 GB in this case. Then you're guaranteed there will be no surprises when Java tries to grow the heap.
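Here is that arithmetic as a tiny sketch; the thread count and native overhead figures are illustrative assumptions, not measurements:

public class ContainerSizing {
    public static void main(String[] args) {
        long heapMb      = 6 * 1024; // -Xmx
        long threadCount = 200;      // assumed peak number of threads
        long stackMb     = 1;        // default -Xss per thread
        long nativeMb    = 1024;     // metaspace, code cache, direct buffers, ...
        long totalMb = heapMb + threadCount * stackMb + nativeMb;
        System.out.printf("suggested container size: ~%d MB%n", totalMb);
    }
}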
This is a very complicated question. The heap is only a portion of the Java process, which also holds a lot of native resources not tracked by -Xms and -Xmx. Here is a very nice summary of what these might be.
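You can get a rough view of some of those untracked resources from the platform management beans; a minimal sketch (class name illustrative):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class NativeFootprint {
    public static void main(String[] args) {
        // metaspace, code cache, etc. live outside -Xms/-Xmx
        System.out.println("non-heap: "
                + ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
        // "direct" and "mapped" buffer pools are native memory too
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s buffers: %d bytes%n",
                    pool.getName(), pool.getMemoryUsed());
        }
    }
}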
Beyond that, there is the garbage collector algorithm you use: the more freedom (extra space) it has, the better. This has to do with so-called GC barriers. In very simple terms, while a GC is executing, all heap manipulations are "intercepted" and tracked separately. If the allocation rate is high, the space needed to track these changes (while an active GC happens) tends to grow. The more space you have, the better the GC will perform.
Then it matters which Java version you are using (because -Xmx and -Xms might mean different things under different Java versions with respect to containers); for example, take a look here.
So there is no easy answer here. If you can afford 12 GB of memory, do so. Memory is usually a lot cheaper than debugging any of the problems above.

Does GC release back memory to OS?

When the garbage collector runs and releases memory, does this memory go back to the OS, or is it kept as part of the process? I was under the strong impression that the memory is never actually released back to the OS but kept as part of the memory area/pool to be reused by the same process.
As a result, the actual memory of a process would never decrease. An article that reminded me of this was this one, and Java's Runtime is written in C/C++, so I guess the same thing applies?
Update
My question is about Java. I mention C/C++ since I assume Java's allocation/deallocation is done by the JRE using some form of malloc/free.
The HotSpot JVM does release memory back to the OS, but does so reluctantly since resizing the heap is expensive and it is assumed that if you needed that heap once you'll need it again.
In general, shrinking ability and behavior depend on the chosen garbage collector and the JVM version, since shrinking capability was often introduced in later versions, long after the GC itself was added. Some collectors also require additional options to be passed to opt into shrinking, and some will most likely never support it, e.g. EpsilonGC.
So if heap shrinking is desired it should be tested for a particular JVM version and GC configuration.
JDK 8 and earlier
There are no explicit options for prompt memory reclamation in these versions but you can make the GC more aggressive in general by setting -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 which will allow it to spend more CPU time on collecting and constrain the amount of allocated-but-unused heap memory after a GC cycle.
If you're using a concurrent collector you can also set -XX:InitiatingHeapOccupancyPercent=N with N to some low value to let the GC run concurrent collections almost continuously, which will consume even more CPU cycles but shrink the heap sooner. This generally is not a good idea, but on some types of machines with lots of spare CPU cores but short on memory it can make sense.
If you're using G1GC note that it only gained the ability to yield back unused chunks in the middle of the heap with jdk8u20, earlier versions were only able to return chunks at the end of the heap which put significant limits on how much could be reclaimed.
If you're using a collector with a default pause time goal (e.g. CMS or G1) you can also relax that goal to place fewer constraints on the collector, or you can switch to the parallel collector to prioritize footprint over pause times.
To verify that shrinking occurs, or to diagnose why a GC decides not to shrink, you can use GC logging; -XX:+PrintAdaptiveSizePolicy may also provide insight, e.g. when the JVM tries to use more memory for the young generation to meet some goal.
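To experiment with those flags, a small sketch like this (class name and sizes are illustrative) allocates a burst, drops it, and then reports the committed heap so you can watch whether it shrinks, e.g. run with java -XX:GCTimeRatio=19 -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=30 -verbose:gc ShrinkDemo:

import java.util.ArrayList;
import java.util.List;

public class ShrinkDemo {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> burst = new ArrayList<>();
        for (int i = 0; i < 500; i++) {
            burst.add(new byte[1024 * 1024]); // ~500 MB burst; needs -Xmx1g or so
        }
        burst.clear(); // make the burst collectible again
        for (int i = 0; i < 10; i++) {
            System.gc(); // a hint, not a guarantee
            Thread.sleep(1_000);
            // committed heap should drop if the JVM decides to shrink
            System.out.printf("committed=%d MB%n",
                    Runtime.getRuntime().totalMemory() >> 20);
        }
    }
}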
JDK 9
Added the -XX:-ShrinkHeapInSteps option, which can be used to apply the shrinking caused by the options mentioned in the previous section more aggressively. See the relevant OpenJDK bug.
For logging, -XX:+PrintAdaptiveSizePolicy has been replaced with -Xlog:gc+ergo.
JDK 12
Introduced an option to enable prompt memory release for G1GC via -XX:G1PeriodicGCInterval (JEP 346), again at the expense of some additional CPU. The JEP also mentions similar features in Shenandoah and the OpenJ9 VM.
JDK 13
Adds similar behavior for ZGC; in this case it is enabled by default. Additionally, -XX:SoftMaxHeapSize can be helpful for some workloads to keep the average heap size below some threshold while still allowing transient spikes.
The JVM does release memory back to the OS under some circumstances, but (for performance reasons) this does not happen every time some memory is garbage collected. It also depends on the JVM, OS, garbage collector, etc. You can watch the memory consumption of your app with JConsole, VisualVM, or another profiler.
Also see this related bug report.
If you use the G1 collector and call System.gc() occasionally (I do it once a minute), Java will reliably shrink the heap and give memory back to the OS.
Since Java 12, G1 does this automatically if the application is idle.
I recommend using these options combined with the above suggestion for a very compact resident process size:
-XX:+UseG1GC -XX:MaxHeapFreeRatio=30 -XX:MinHeapFreeRatio=10
I've been using these options daily for months with a big application (a whole Java-based guest OS) that dynamically loads and unloads classes, and the Java process almost always stays between 400 and 800 MB.
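A minimal sketch of that once-a-minute System.gc() idea (the scheduler setup is my own; combine it with -XX:+UseG1GC as suggested above):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "periodic-gc");
                    t.setDaemon(true); // don't keep the JVM alive just for this
                    return t;
                });
        // ask for a full GC once a minute; with G1 this compacts and
        // lets the JVM hand free regions back to the OS
        scheduler.scheduleAtFixedRate(System::gc, 1, 1, TimeUnit.MINUTES);
        // ... the application runs as usual ...
    }
}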
This article explains how the GC works in Java 7. In a nutshell, there are many different garbage collectors available. Usually the memory is kept for the Java process, and only some GCs release it to the system (upon request, I think). But the memory used by the Java process will not grow indefinitely, as there is an upper limit defined by the -Xmx option (usually 256m by default, but I think it is OS/machine dependent).
ZGC, released in Java 13, can return unused heap memory to the operating system.
Please see the link

Appropriate JVM/GC tuning for 4GB JVM with 3GB cache

I am looking for the appropriate settings to configure the JVM for a web application. I have read about old/young/perm generations, but I have trouble applying those parameters to this configuration.
Out of the 4 GB, around 3 GB are used for a cache (an applicative cache using EhCache), so I'm looking for the best setup considering that. FYI, the cache is static during the lifetime of the application (loaded from disk, never expires), but heavily used.
I have profiled my application already, and I have performed optimization on the DB queries, the application's architecture, the cache size, etc. I am just looking for JVM configuration advice here. I have measured 99% throughput for the garbage collector, and 6-8 s pauses when the full GC runs (approximately once every half hour).
Here are the current JVM parameters:
-XX:+UseParallelGC -XX:+AggressiveHeap -Xms2048m -Xmx4096m
-XX:NewSize=64m -XX:PermSize=64m -XX:MaxPermSize=512m
-verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log
Those parameters may be completely off because they were written a long time ago... before the application became this big.
I am using Java 1.5 64 bits.
Do you see any possible improvements?
Edit: the machine has 4 cores.
-XX:+UseParallelOldGC should speed up the full GCs on a multi-core machine.
You could also profile with different NewRatio values. Your cached objects will live in the tenured generation, so profile it with -XX:NewRatio=7 and then again with some higher and lower values.
You may not be able to accurately replicate realistic use during profiling, so make sure you monitor GC when it is in real life use and then you can make minor changes (e.g. to survivor space etc) and see what effect they have.
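One way to see how a given NewRatio actually split the heap is to print each memory pool's maximum and compare young vs. tenured sizes across runs (a sketch; pool names vary by collector, e.g. "PS Old Gen" vs. "Tenured Gen"):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolSizes {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // getMax() returns -1 if the pool size is undefined
            System.out.printf("%-20s max=%d MB%n",
                    pool.getName(), pool.getUsage().getMax() >> 20);
        }
    }
}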
The old advice was not to combine AggressiveHeap with Xms and Xmx; I am not sure whether that is still true.
Edit: Please let us know which OS/hardware platform you are deployed on.
Full collections every 30 minutes indicate the old generation is quite full. A high value for NewRatio will give it more space at the expense of the young gen. Can you give the JVM more than 4 GB, or are you limited to that?
It would also be useful to know what your goals / non-functional requirements are. Do you want to avoid these 6-7 second pauses at the risk of lower throughput, or are those pauses an acceptable compromise for the highest possible throughput?
If you want to minimise the pauses, try the CMS collector by removing both
-XX:+UseParallelGC -XX:+UseParallelOldGC
and adding
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
Profile that with various NewRatio values and see how you get on.
One downside of the CMS collector is that, unlike the parallel-old and serial collectors, it doesn't compact the old generation. If the old generation gets too fragmented and a minor collection needs to promote a lot of objects to the old gen at once, a full serial collection may be invoked, which could mean a long pause. (I've seen this once in prod, but with the IBM JVM, which went out of memory instead of invoking a compacting collection!)
This might not be a problem for you - it depends on the nature of the application - but you can insure against it by restarting nightly or weekly.
I would use Java 6 update 30 or Java 7 update 2, 64-bit, as they are much more efficient; e.g. they use 32-bit references by default.
I would also configure Ehcache to use direct memory or a memory mapped file if possible. This should minimise the impact on GC.
Using these options it's possible to almost eliminate your heap footprint. E.g. I have an app which uses up to 180 GB of memory-mapped files on a machine with 16 GB of memory, and the heap size is 6 MB. A full GC takes up to 11 ms when triggered manually, not that it ever GCs. ;)
If you want a simple example where I map an 8 TB file into memory and update it: http://vanillajava.blogspot.com/2011/12/using-memory-mapped-file-for-huge.html
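In the same spirit, here is a minimal sketch of file-backed, off-heap storage via a memory-mapped file (file name and size are illustrative); the mapped region is invisible to the GC:

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedCache {
    public static void main(String[] args) throws Exception {
        long size = 1L << 30; // 1 GB, file-backed, outside the heap
        try (RandomAccessFile file = new RandomAccessFile("cache.dat", "rw");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
            buffer.putLong(0, 42L);                // write off-heap
            System.out.println(buffer.getLong(0)); // read it back
        }
    }
}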
I hope you just removed -server to keep the post short; otherwise you should enable it right away. Apart from the slightly longer startup time (which really isn't an issue for a web application that should run for days), I don't see any reason to use anything but C2. That could give some nice performance improvements in general. Back to the topic:
Sadly, the best thing I can think of won't work with your ancient JVM. The G1 garbage collector was basically designed to reduce latency: not only does it try to reduce pauses in general, it also offers some tuning parameters to set pause goals and intervals. See this page.
There is an experimental backport to Java 6, though I doubt it's kept up to date. And nobody is wasting any time optimizing GC or anything else for Java 1.5 anymore, I fear.
PS: There are also IBM's JVM and Azul Systems (OK, that wasn't a serious proposition ;)), but those are obviously out of the question; I just wanted to mention them.

Which heap size do you prefer?

I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out-of-memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 GB. That got rid of the out-of-memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that pointed me at garbage collection. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smoothly, doesn't freeze anymore, and never uses more than 600m of heap.
For your application you have to use a similar approach: try to determine the typical memory usage and tune your heap so the application runs well in those conditions. But also let advanced users tune the setting, to address out-of-the-ordinary data loads.
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but it's also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually, I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems to be the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects unless there is a necessity to do so. That means if you choose a heap that is too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app, I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1 and 2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this refers to normal PCs. If you write Java code for mobile phones or other limited devices, you should probably adapt the initial and maximum heap sizes to the limitations of that device.
Typically I try not to use heaps larger than 1 GB.
Large heaps will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than using large heap sizes.
A major collection with a large heap size can take more than 10 minutes (on applications with unoptimized GC).
This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
You need to spend quite some time in JConsole or VisualVM to get a clear picture of what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be at 70-80% of your heap, depending on which garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage, anywhere from 60% to 80% of the max heap depending on the strategy involved.
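To get an alert when the post-GC plateau creeps too close to that trigger point, you can set a collection-usage threshold on the heap pools; a sketch (the 80% figure is an illustrative assumption):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class PlateauWatch {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP
                    && pool.isCollectionUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                if (max > 0) {
                    // trip once usage measured right after a GC exceeds 80% of max
                    pool.setCollectionUsageThreshold((long) (max * 0.8));
                }
            }
        }
        // later, poll isCollectionUsageThresholdExceeded() on each pool, or
        // register a NotificationListener on the MemoryMXBean to react to it
    }
}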
1.3 GB for a heavy GUI application.
Unfortunately, on Linux the JVM seems to pre-request 1.3 GB of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users).
On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
