Full GC does not seem to execute (out of memory) - Java

I have 2 web servers (4 cores / 16 GB RAM / CentOS 7) behind a load balancer using a round-robin algorithm.
The applications are built with Java using Apache/Tomcat.
The servers, Apache, Tomcat and the webapps all have the same configuration, with the heap size: -Xms12840m -Xmx12840m
The problem is that the 1st server runs out of memory: the kernel kills the Java process because of an out-of-memory condition, while the 2nd server is more stable.
I tried to monitor and analyse a heap dump using VisualVM, and also the GC with jstat.
In the heap dump I didn't find any memory leak, which does not mean there isn't one.
But with VisualVM / Monitor I can observe that a Full GC is done on the 2nd server when the old generation is full, which is not the case on the 1st server. In fact, the first server seems to be constantly busier than the second despite the round-robin algorithm used by the load balancer.
So, on the 1st server it seems that the JVM does not have time to perform a Full GC before the out of memory occurs.
By default, the ratio between the young/old generation is 1:2
Minor GCs on the young generation are OK: when Eden is full, a minor GC is done. But when the old generation grows to nearly 100%, there is no Full GC.
So, how can I optimise the GC in order to avoid the out of memory?
Why is the Full GC not performed on server 1?
Is it because of a peak of requests on the server, so that the JVM is not able to perform a Full GC in time?
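For reference, here is a hedged example of GC-logging flags one could add next to the existing heap settings to capture evidence of what the old generation and the collector are doing before the process is killed (assuming HotSpot on Java 7/8; the log path is a placeholder):

-Xms12840m -Xmx12840m
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/tomcat/gc.log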
Thanks for your help.

Related

VM running out of memory despite Java having a ton of free heap

My Tomcat Java 8 application is running with a 24 GB -Xmx on a 32 GB virtual machine with the G1 GC. It processes large files, 5-7 GB in size.
After deployment, during the processing of one such file, the VM starts to use > 90% of the allocated memory. This value never drops and only increases, even though the processing is over and Java has a ton of unused heap (~7 GB).
It did drop, however, once we triggered a Full GC from VisualVM. We've also noticed the application had only one major GC during its lifetime, the one triggered with "Full GC" from VisualVM.
It's clear that Java has committed the heap and doesn't want to give it back to the OS.
The question is: how do I get rid of this behaviour and return memory to the OS (it looks like G1 doesn't do it until Java 12), or how do I investigate why the remaining 8 GB of non-heap space is full? It seems I can't do that with VisualVM.
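A hedged sketch of options sometimes used for these two points (the flags and the version-specific behaviour are assumptions to verify against your exact JDK):

# Let G1 shrink the committed heap more aggressively; on many JDK 8/11 builds G1 only honours these after a full GC
-XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30
# JDK 12+ only: trigger periodic concurrent cycles that can return unused committed memory to the OS
-XX:G1PeriodicGCInterval=60000
# Investigate the non-heap side with Native Memory Tracking (adds some overhead), then query it:
-XX:NativeMemoryTracking=summary
jcmd <pid> VM.native_memory summary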

What does the JProfiler "Run GC" button use to clean garbage, compared to the regular GC done by the JVM

I have a Java enterprise application which has been consuming more memory for the past few days; even though GC is running and we have adequate parameters set (ConcMarkSweepGC), it is not freeing all of the memory.
When I attached JProfiler, I observed that whenever GC runs it only clears a small part: say the application is consuming 9 GB, only around 1 to 1.2 GB gets cleared. At the same time, if I click the "Run GC" button in JProfiler, it clears at least 6-7 GB of the 9 GB occupied.
I am trying to understand what the JProfiler GC does differently compared to the regular GC executed by the application.
Here are a few relevant details:
- App server: Wildfly 9
- Java version: Java 8
- OS: Windows 2012 - 64Bit
Any help with this would be appreciated. Thanks in advance.
The behaviour varies between GC algorithms, but in principle a GC of the old space is not supposed to clear all unused memory at all times. In the new space we see a copying parallel GC to combat memory fragmentation, but the old space is supposed to be significantly larger, and running such a GC there would result in a long stop-the-world pause. You selected ConcMarkSweepGC, which is a concurrent GC that won't attempt a full stop-the-world GC cycle as long as there is enough free memory. With JProfiler you probably initiated a full stop-the-world GC of the old space.
If you want to understand this in detail, read about the different GC algorithms in the JVM. There are quite a few of them, and they are designed with different goals in mind.
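As an illustration (this is an assumption about the mechanism, not JProfiler's documented internals), the "Run GC" button effectively issues an explicit GC request, similar to calling the Memory MXBean's gc() operation, which with CMS runs a stop-the-world full collection unless -XX:+ExplicitGCInvokesConcurrent or -XX:+DisableExplicitGC is set:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class ExplicitGcDemo {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Used heap before: " + memory.getHeapMemoryUsage().getUsed());
        // Explicit GC request (equivalent to System.gc()); with CMS this normally runs a
        // full stop-the-world collection, which is why it reclaims far more than the
        // routine concurrent cycles observed in JProfiler.
        memory.gc();
        System.out.println("Used heap after:  " + memory.getHeapMemoryUsage().getUsed());
    }
}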

Does it help to increase the JVM heap size when the heap free percentage is above 50%?

I have a J2EE web application running on MS Windows Server 2008. The application server is Oracle WebLogic 11g. The server has 32 GB of RAM, but users still keep complaining that the application is very slow.
So I checked the JVM configuration and found that the allocated heap size is just 1 GB, while the server actually has 32 GB of RAM.
Then I checked the JVM heap free percentage and found that even when the server is at its busiest, 50% of the heap is still free.
So I want to know whether it would help to increase the heap size to, say, 2 GB or 4 GB.
I have read in some articles that when too much heap is allocated to a JVM, garbage collection can take a long time.
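For reference, raising the heap would be a change to the JVM arguments of the managed server, along the lines of (values purely illustrative; where exactly to set them, setDomainEnv or the admin console, depends on your WebLogic installation):

-Xms4g -Xmx4g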
The correct way to make an application faster is to use various tools and other sources of information that are available to you to figure out why it is slow. For example:
Use web browser tools to see if there is a problem with web performance, etc.
Use a profiler to identify the execution hotspots in the code.
Use system-level tools to figure out where the bottlenecks are; e.g. front-end versus back-end, network traffic, disk I/O ... swapping / thrashing effects.
Check system logfiles and application logfiles for clues.
Turn on JVM GC logging and see if there is a correlation between GC and "sluggish" requests.
Then you address the causes.
Randomly fiddling with the heap / GC parameters based only on gut feeling is unlikely to work.
FWIW: I expect that increasing the heap size is unlikely to improve things.
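For the GC-logging suggestion above, a hedged example of HotSpot flags that make the correlation visible (the log path is a placeholder; verify the flags against the JVM your WebLogic 11g is actually running on):

-Xloggc:D:\logs\gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
-XX:+PrintGCApplicationStoppedTime

The last flag reports how long application threads were actually paused, which can be matched against the timestamps of the "sluggish" requests.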
You can increase your heap memory to about 75% of physical memory, but the chances of that solving the "slow problem" are very low. The problem is probably not there; you should analyse the duration of the most-used functions and see where they take longer than users would expect.

Java app gets slower and slower until a full GC is performed

I have a program which receives UDP packets, parses some data from them, and saves it to a DB, in multiple threads. It uses Hibernate and Spring via Grails (GORM stand-alone).
It works OK on one server: it starts fast (20-30 ms per packet, except for the very first ones while the JIT kicks in) and after a while stabilizes at 50-60 ms.
However, on a newer, more powerful server it starts fast but gradually gets slower and slower (it reaches 200 ms or even 300 ms per packet, always with the same load). Then, when the JVM performs a Full GC (or I trigger one manually from VisualVM), it gets fast again and the cycle starts over.
Any ideas about what could cause this behaviour? It seems to get slower as the old gen fills up. Eden fills up quite fast, but the GC pauses seem to be short. And it works OK on the old server, so it's puzzling me.
Servers and settings:
The servers' specs are:
Old server: Intel Xeon E3-1245 V2 @ 3.40GHz, 32 GB RAM without ECC
New server: Intel Xeon E5-1620 @ 3.60GHz, 64 GB RAM with ECC
OS: Debian 7.6
JVM:
java version "1.7.0_65"
Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
JVM settings:
Old server: was running with no special RAM or GC params, PrintFlagsFinal gives: -XX:InitialHeapSize=525445120 -XX:+ManagementServer -XX:MaxHeapSize=8407121920 -XX:+UseCompressedOops -XX:+UseParallelGC
New server: tried forcing those same flags, same results.
Old server seemed to support UseFastStosb, it was enabled by default. Forcing it in the new server results in a message that says it's not supported.
Can you try G1, which is supported by your JVM version?
Applications running with either the CMS or the Parallel Old GC collector would benefit from switching to G1 if the application has one or more of the following traits:
(1) Full GC durations are too long or too frequent.
(2) The rate of object allocation or promotion varies significantly.
(3) Undesired long garbage collection or compaction pauses (longer than 0.5 to 1 second).
I cannot possibly say if there's anything wrong with your application / the server VM defaults.
Try adding -XX:+PrintGCDetails to learn more about the young and old generation sizes at the time of garbage collection. According to those values, your initial heap size starts around 525 MB and the maximum heap is around 8.4 GB. The JVM will resize the heap based on the requirement, and every time it resizes the heap, all the young and old generations are resized accordingly, which will cause a Full GC.
Also, your flags indicate UseParallelGC, which does the young generation collection using multiple threads, but the old gen is still collected serially using a single thread.
The default value of NewRatio is 2, which means the young gen takes 1/3 of the heap and the old gen takes 2/3. If you have too many short-lived objects, try resizing the young gen, and probably give the G1 GC a try now that you're using 7u65.
But before tuning I strongly recommend you to:
(1) Do a proper analysis using GC logs - see if there are any Full GCs during your slow response times.
(2) Try Java Mission Control. Use it to monitor your remote server process. It's feature-rich and you'll learn more about GC.
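For example, a hedged sketch of the settings this answer suggests trying (G1 is available on 7u65; the Flight Recorder flags for Java Mission Control require the Oracle JDK and should be verified for your build; the log path is a placeholder):

-XX:+UseG1GC -XX:MaxGCPauseMillis=200
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/app/gc.log
-XX:+UnlockCommercialFeatures -XX:+FlightRecorder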
You can use the -XX:+PrintGCDetails option to see how frequently each GC occurs.
However, I don't think it is a GC issue (or a GC-parameter issue). As your post says, the program runs OK, but the problem occurs when it is moved to a new, faster machine. My guess is that there is some bottleneck in your program which slows down the release of references to allocated objects. Consequently, memory accumulates and the VM spends a lot of time on GC and memory allocation.
In other words, there is a producer that allocates heap memory for packet processing and a consumer that releases this memory after the packets are saved to the DB, but the consumer cannot keep up with the producer's speed.
So my suggestion is to check your program and do some measurements.
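To illustrate that idea, a minimal sketch of bounding the producer/consumer hand-off so unprocessed packets cannot pile up on the heap (the class and method names are hypothetical, not from the original program):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PacketPipeline {
    // Bounded queue: if the DB writer falls behind, the receiver blocks instead of
    // letting millions of parsed packets accumulate in the old generation.
    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(10_000);

    // Producer side (UDP receiver thread)
    public void onPacket(byte[] packet) throws InterruptedException {
        queue.put(packet); // blocks when the queue is full -> natural backpressure
    }

    // Consumer side (DB writer thread)
    public void drainToDb() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] packet = queue.take();
            saveToDb(packet);
        }
    }

    private void saveToDb(byte[] packet) {
        // placeholder for the Hibernate/GORM save
    }
}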

Exceeding the maximum memory of a server running Java processes

I have a machine with 10 GB of RAM. I am running 6 Java processes with the -Xmx option set to 2 GB. The probability of all 6 processes running simultaneously and consuming the entire 2 GB of memory is very, very low. But I still want to understand this worst-case scenario.
What happens when all 6 processes consume a little less than 2 GB of memory at the same instant, so that the JVMs have not started garbage collection yet, but the processes are holding that much memory and the sum of the memory consumed by these 6 processes exceeds the available RAM?
Will this crash the server, or will it slow down the processing?
You should expect that each JVM could use more than 2 GB. This is because the heap is just one memory region; you also have:
shared libraries
thread stacks
direct memory
native memory used by shared libraries
perm gen.
This means that setting a maximum heap of 2 GB doesn't mean your process will use a maximum of 2 GB.
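As a purely illustrative calculation (the per-process overhead figure is an assumption, not a measurement): if each JVM uses its full 2 GB heap plus roughly 0.5 GB of stacks, perm gen, direct buffers and native memory, that is 6 x 2.5 GB = 15 GB of demand against 10 GB of physical RAM, so the operating system has to push roughly 5 GB out to swap.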
Your processes should perform well until they get to the point where some of the heap has been swapped out and a GC is performed. A GC assumes random access to the whole heap, and at that point your system could start swapping like mad. If you have an SSD for swap, your system is likely to stop, or almost stop, for very long periods of time. If you have Windows (which I have found is worse than Linux in this regard) and an HDD, you might not get control of the machine back and may have to power-cycle it.
I would suggest either reducing the heap to say 1.5 GB at most, or buying more memory. You can get 8 GB for about $100.
Your machine will start swapping. As long as each java process uses only a small part of the memory it has allocated, you won't notice the effect, but if they all garbage collect at the same time, accessing all of their memory, your hard disk will have 100% utilization and the machine will "feel" very, very slow.
