Equal -Xms and -Xmx in JVM - java

I have just inherited a Java microservice that is increasing in memory daily in production. It is currently using 80% of a 4GB heap and rising a few percent each day.
The -Xms and -Xmx values in the JVM options are both set to 4GB.
Does this mean the garbage collector won't activate until the JVM reaches its heap limit?

Does this mean the garbage collector won't activate until the JVM reaches its heap limit?
No, the garbage collector will run its collection cycles whenever the GC algorithm tells it to - which will usually be frequent incremental collections, and occasional full collections. Running out of heap can force it to perform full collections more often and be more aggressive, but it's certainly not true (for all GC implementations I'm aware of, anyway) to say it won't do anything until it's out of, or nearly out of, heap space.
What could be happening (depending on the GC algorithm) is that the incremental collections are running regularly and as intended, but are unable to clear everything properly - and the slower full collection won't be performed until you're lower on space. That's perfectly valid behaviour - you've given the VM a 4GB heap, and it'll use it in the way it sees fit. If that's the case, you'd see a sudden drop when usage gets closer to the 4GB limit and a full collection is performed.
Alternatively, it's possible you have a memory leak in the application somewhere and it's going to run out of memory eventually, be unable to free any space, and then throw an OutOfMemoryError.
If it were me I'd profile & stress test the application under a testing environment with the same heap & GC options set to check the behaviour - that's really the only way to find out for sure.
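While stress testing, it can help to sample heap usage programmatically alongside a profiler, so drops after incremental versus full collections become visible in the logs. A minimal sketch using the standard java.lang.management API (the class name and sampling interval here are just illustrative choices):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Periodically sample heap usage while the test load runs; a sawtooth in
// "used" with a stable "committed" indicates normal GC, while a steadily
// rising post-collection floor suggests a leak.
public class HeapSampler {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        for (int i = 0; i < 5; i++) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("used=%dMB committed=%dMB max=%dMB%n",
                    heap.getUsed() >> 20,
                    heap.getCommitted() >> 20,
                    heap.getMax() >> 20);
            Thread.sleep(1000);
        }
    }
}
```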

This means the JVM will always allocate 4GB of heap. It may or may not run the GC before heap is full. AFAIK this is up to the GC implementation. My experience is that it won't garbage collect before the heap is (almost) full, but that may not be true for all implementations. If you suspect a memory leak I'd suggest reducing -Xms, then monitor if heap grows beyond -Xms value.
BTW: Is there a reason for this high -Xms value? Every case I have seen where -Xms was specified, it was by mistake or lack of knowledge.

Related

How to determine if Java heap memory is set too high?

How can I tell, via JVM monitoring, whether the heap of a Java application is set too high? Our application's heap is currently set to 16G, but judging from experience this value is too high. I want to convince colleagues to lower it to save resources, but I don't know how to visually demonstrate through monitoring that the memory setting is excessive.
The difficult part, I think, is that memory is not like CPU, where actual usage versus the Pod limit gives an obvious contrast. With the heap set to 16G, the JVM occupies 16G right at startup, and whether the heap is set to 8G or 16G mostly just changes how often garbage collection is triggered. So I can't prove that reducing the heap to 8G or 4G would let the program run smoothly without triggering an OOM exception.
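One hedged way to build that argument: force a full collection and record the post-GC used heap (the approximate live set). If the live set after GC consistently stays far below the 16G limit under production-like load, that is monitoring evidence the limit can be lowered. A minimal sketch (note System.gc() is a request, not a guarantee):

```java
// Probe the approximate live set: request a full collection, then
// compare post-GC used heap against the configured maximum.
public class LiveSetProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // request (not guarantee) a full collection
        long usedAfterGc = rt.totalMemory() - rt.freeMemory();
        System.out.printf("live set ~%d MB of max %d MB%n",
                usedAfterGc >> 20, rt.maxMemory() >> 20);
    }
}
```

Sampling this repeatedly over days gives a defensible upper bound on how much heap the application actually needs.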

GC and memory behaviour for Java and PHP app?

This question is about garbage collection behavior when a request needs more memory than is allocated to the Pod. If the GC is not able to free memory, will it continue to run continuously or throw an OutOfMemoryError?
One Pod contains a Java-based app and another a PHP-based one. In the Java case, the -Xmx value is the same as the Pod limit.
I can only talk about Java GC. (PHP's GC behavior will be different.)
If the GC is not able to free memory, will it continue to run continuously at regular intervals or throw an OutOfMemoryError?
It depends on the JVM options.
A JVM starts with an initial size for the heap and will expand it as required. However, it will only expand the heap up to a fixed maximum size. That maximum size is determined when the JVM starts from either an option (-Xmx) or default heap size rules. It can't be changed after startup.
As the heap space used gets close to the limit, GC is likely to occur more and more frequently. The default behavior on a modern JVM is to monitor the percentage of time spent doing garbage collection. If it exceeds a (configurable) threshold, you will get an OOME with a message that the GC overhead limit has been exceeded. This can happen even if there is enough space to "limp along" for a bit longer.
You can turn off the GC overhead limit check (-XX:-UseGCOverheadLimit), but it is inadvisable.
The JVM will also throw an OOME if it simply doesn't have enough heap space after doing a full garbage collection.
Finally, a JVM will throw an OOME if it tries to grow the heap and the OS refuses to give it the memory it requested. This could happen because:
the OS has run out of RAM
the OS has run out of swap space
the process has exceeded a ulimit, or
the process group (container) has exceeded a container limit.
The JVM is only marginally aware of the memory available in its environment. On a bare metal OS or a VM under a hypervisor, the default heap size depends on the amount of RAM. On a bare metal OS, that is physical RAM. On a VM, it will be whatever the guest OS sees as its physical memory.
With Kubernetes, the memory available to an application is likely to be further limited by cgroups or similar. I understand that recent Java releases have tweaks that make them more suitable for running in containers. I think this means that they can use the cgroup memory limits rather than the physical memory size when calculating a default heap size.
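A quick way to check what a container-aware JVM actually decided is to print its own view of the limits. This sketch just reads the standard Runtime values; inside a container, a container-aware JVM derives the default max heap from the cgroup memory limit rather than host RAM:

```java
// Print the JVM's own view of its resource limits. Run the same class
// on the host and inside the container to see the difference.
public class DefaultHeapCheck {
    public static void main(String[] args) {
        long maxHeap = Runtime.getRuntime().maxMemory();
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.printf("max heap = %d MB, visible CPUs = %d%n",
                maxHeap >> 20, cpus);
    }
}
```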

Force the JVM to collect garbage early and reduce system memory used early

We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data being manipulated: up to 4GB. At other times, very little memory is needed, around 300MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected and even longer until memory is returned back to the operating system. That's a problem for us.
What happens is as follows: The JVM needs a lot of memory for a task and grabs 4GB of RAM to create a 4GB heap. Then, after processing finishes, the heap is only 30%-50% full. It takes a long time for memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks below 500MB. Another triggered GC and the heap shrinks to 200MB. Sometimes memory is returned to the system, often not.
Here is a typical screenshot of VisualVM. The heap is collected (used heap goes down) but heap size stays up. Only when I trigger the GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC-Pause-Times are not much of an issue for us. We would rather have more and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which helps a bit, but watching the heap with VisualVM shows us, that it's not always obeyed. We set MaxHeapFreeRatio to 40% and see in VisualVM, that the heap is only filled to about 10%.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle Hotspot JVM 1.8
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100) to make the JVM start garbage collection cycles earlier (note this option applies to the G1 collector). When you set n to 0, constant GC should take place, but I'm not sure whether that is a good idea for your application's response times.
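A quick way to see whether the committed heap actually shrinks after a collection, e.g. while experimenting with -XX:MinHeapFreeRatio / -XX:MaxHeapFreeRatio, is a small probe like the following sketch (the 64MB allocation size is arbitrary, and System.gc() is only a request):

```java
// Allocate, drop the reference, request a GC, and watch whether the
// committed heap (totalMemory) shrinks, not just the used heap.
public class ShrinkProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        byte[] big = new byte[64 << 20]; // 64 MB
        big[0] = 1; // touch the memory so it is really backed
        System.out.printf("committed before: %d MB%n", rt.totalMemory() >> 20);
        big = null;
        System.gc();
        System.out.printf("committed after:  %d MB%n", rt.totalMemory() >> 20);
    }
}
```

Whether "committed after" drops depends on the collector and the free-ratio settings in effect, which is exactly what the probe lets you compare.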

Weird behavior of Java -Xmx on large amounts of ram

You can control the maximum heap size in java using the -Xmx option.
We are experiencing some weird behavior on Windows with this switch. We run some very beefy servers (think 196GB RAM). The Windows version is Windows Server 2008 R2.
Java version is 1.6.0_18, 64-Bit (obviously).
Anyway, we were having some weird bugs where processes were quitting with out of memory exceptions even though the process was using much less memory than specified by the -Xmx setting.
So we wrote a simple program that would allocate a 1GB byte array each time one pressed the enter key, and initialize the byte array to random values (to prevent any memory compression etc).
Basically, what's happening is that if we run the program with -Xmx35000m (roughly 35GB) we get an out of memory exception when we hit 25GB of process space (using Windows Task Manager to measure). We hit this after allocating 24GB worth of 1GB blocks, BTW, so that checks out.
Simply specifying a larger value for -Xmx option makes the program work fine to larger amounts of ram.
So, what is going on? Is -Xmx just "off"? BTW, we need to specify -Xmx55000m to get a 35GB process space...
Any ideas on what is going on?
Is there a bug in the Windows JVM?
Is it safe to simply set the -Xmx option bigger, even though there is a disconnect between the -Xmx option and what is going on process wise?
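For reference, the allocation test described in the question is roughly the following (a reconstruction, not the original code; this loop allocates until it dies rather than waiting for the enter key):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Keep allocating 1 GB byte arrays, filled with random bytes so the pages
// are really touched, until an OutOfMemoryError is thrown.
public class AllocUntilOome {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        Random rnd = new Random();
        try {
            while (true) {
                byte[] block = new byte[1 << 30]; // 1 GB
                rnd.nextBytes(block);
                blocks.add(block);
                System.out.println("allocated " + blocks.size() + " GB");
            }
        } catch (OutOfMemoryError e) {
            System.out.println("OOME after " + blocks.size() + " GB");
        }
    }
}
```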
Theory #1
When you request a 35Gb heap using -Xmx35000m, what you are actually asking for is a total heap space of 35Gb. But that total consists of the Tenured Object space (for objects that survive multiple GC cycles), the Eden space for newly created objects, and other spaces into which objects will be copied during garbage collection.
The issue is that some of the spaces are not and cannot be used for allocating new objects. So in effect, you "lose" a significant percent of your 35Gb to overheads.
There are various -XX options that can be used to tweak the sizes of the respective spaces, etc. You might try fiddling with them to see if they make a difference. Refer to this document for more information. (The commonly used GC tuning options are listed in section 8. The -XX:NewSize option looks promising ...)
Theory #2
This might be happening because you are allocating huge objects. IIRC, objects above a certain size can be allocated directly into the Tenured Object space. In your (highly artificial) benchmark, this might result in the JVM not putting stuff into the Eden space, and therefore being able to use less of the total heap space than is normal.
As an experiment, try changing your benchmark to allocate lots of small objects, and see if it manages to use more of the available space before OOME-ing.
Here are some other theories that I would discount:
"You are running into OS-imposed limits." I would discount this, since you said that you can get significantly greater memory utilization by increasing the -Xmx... setting.
"The Windows task manager is reporting bogus numbers." I would discount this because the numbers reported roughly match the 25Gb that you think your application had managed to allocate.
"You are losing space to other things; e.g. the permgen heap." AFAIK, the permgen heap size is controlled and accounted independently of the "normal" heaps. Other non-heap memory usage is either a constant (for the app) or dependent on the app doing specific things.
"You are suffering from heap fragmentation." All of the JVM garbage collectors are "copying collectors", and this family of collectors has the property that heap nodes are automatically compacted.
"JVM bug on Windows." Highly unlikely. There must be tens of thousands of 64bit Java on Windows installations that maximize the heap size. Someone else would have noticed ...
Finally, if you are NOT doing this because your application requires you to allocate memory in huge chunks, and hang onto it "for ever" ... there's a good chance that you are chasing shadows. A "normal" large-memory application doesn't do this kind of thing, and the JVM is tuned for normal applications ... not anomalous ones.
And if your application really does behave this way, the pragmatic solution is to just set the -Xmx... option larger, and only worry if you start running into OS-level issues.
To get a feeling for what exactly you are measuring you should use some different tools:
the Windows Task Manager (I only know Windows XP, but I heard rumours that the Task Manager has improved since then.)
procexp and vmmap from Sysinternals
jconsole from the JVM (you are using the Sun/Oracle HotSpot JVM, aren't you?)
Now you should answer the following questions:
What does jconsole say about the used heap size? How does that differ from procexp?
Does the value from procexp change if you fill the byte arrays with non-zero numbers instead of keeping them at 0?
Did you try turning on verbose GC output to find out why the last allocation fails? Is it because the OS fails to allocate heap beyond 25GB for the native JVM process, or because the GC is hitting some sort of limit on the maximum memory it can manage? I would also recommend connecting to the process with jconsole to see the status of the heap just before the allocation failure. Tools like the Sysinternals Process Explorer might give better detail on where the failure is occurring, if it is in the JVM process.
Since the process is dying at 25GB and you have a generational collector, maybe the rest of the generations are consuming 10GB. I would recommend installing JDK 1.6.0_24 and using jvisualvm with the VisualGC plugin to see what the GC is doing, and especially to factor in the sizes of all the generations to see how the 35GB heap is being chopped up into different regions by the GC / VM memory manager.
See this link if you are not familiar with generational GC: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing.total_heap
I assume this has to do with fragmenting the heap. The free memory is probably not available as a single contiguous free area and when you try to allocate a large block this fails because the requested memory cannot be allocated in a single piece.
The memory displayed by windows task manager is the total memory allocated to the process which includes memory for code, stack, perm gen and heap.
The memory you measure using your program is the amount of heap the JVM makes available to your running code.
Naturally, the total memory allocated to the JVM by Windows should be greater than what the JVM makes available to your program as heap memory.

Which heap size do you prefer?

I know there is no "right" heap size, but which heap size do you use in your applications (application type, jdk, os)?
The JVM Options -Xms (initial/minimum) and -Xmx (maximum) allow for controlling the heap size. What settings make sense under which circumstances? When are the defaults appropriate?
You have to try your application and see how it performs. For example, I used to always run IDEA out of the box, until I got this new job where I work on a huge monolithic project. IDEA was running very slowly and regularly throwing out of memory errors when compiling the full project.
The first thing I did was ramp up the heap to 1 gig. This got rid of the out of memory issues, but it was still slow. I also noticed IDEA was regularly freezing for 10 seconds or so, after which the used memory was cut in half only to ramp up again, and that triggered the garbage collection idea. I now use it with -Xms512m and -Xmx768m, but I also added -Xincgc to activate incremental garbage collection.
As a result, I've got my old IDEA back: it runs smooth, doesn't freeze anymore and never uses more than 600m of heap.
For your application you have to use a similar approach. try to determine the typical memory usage and tune your heap for the application to run well in those conditions. But also let advanced users tune the setting, to address out of the ordinary data loads.
It depends on the application type. A desktop application is much different than a web application. An application server is much different than a standalone application.
It also depends on the JVM that you are using. JDK5 and later 6 include enhancements that help understand how to tune your application.
Heap size is important, but its also important to know how it plays with the garbage collector.
JDK1.4 Garbage Collector Tuning
JDK5 Garbage Collector Tuning
JDK6 Garbage Collector Tuning
Actually I always considered it very strange that Java limits the heap size. A native application can usually use as much heap as it wants, until it runs out of virtual address space. The only reason to limit the heap in Java seems the garbage collector, which has a certain kind of "laziness" and may not garbage collect objects, unless there is a necessity to do so. That means if you choose the heap too big, your app constantly uses more memory than is really necessary.
However, Sun has improved the GC a lot over the years, and to emulate the behavior of a native C app, I would set the initial heap size to 32 MB (for small programs) or 64 MB (for bigger ones) and the maximum to something between 1-2 GB. If your app really needs over 1 GB of memory, it is most likely broken (unless you deal with data objects that large), but I see no reason why your app should be killed just because it goes over a certain heap size.
Of course, this is referring to normal PCs. If you create Java code for mobile phones or other limited devices, you should probably adopt the initial and maximum heap size to the limitations of that device.
Typically I try not to use heaps larger than 1GB.
It will cost you on major garbage collections.
Sometimes it is better to split your application across a few JVMs on the same machine rather than use large heap sizes.
A major collection with a large heap size can take >10 minutes (on unoptimized GC applications).
This is entirely dependent on your application and any hardware limitations you may have. There is no one size fits all.
jmap can be used to have a look at what heap you are actually using and is a good starting point for right-sizing the heap.
You need to spend quite some time in JConsole or visualvm to get a clear picture on what the plateau memory usage is. Wait until everything is stable and you see the characteristic sawtooth curve of heap memory usage. The peaks should be your 70-80% heap, depending on what garbage collector you use.
Most garbage collectors trigger full GCs when heap usage reaches a certain percentage. This percentage is from 60% to 80% of max heap, depending on what strategy is involved.
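To correlate the sawtooth with actual collections, you can sample the per-collector counters that every HotSpot JVM exposes. A minimal sketch using the standard GarbageCollectorMXBean API; sampling these values over time shows how often young versus full collections are really running:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// List each garbage collector with its cumulative collection count and
// total pause time; diffing two samples gives the collection rate.
public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc
                : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(),
                    gc.getCollectionCount(),
                    gc.getCollectionTime());
        }
    }
}
```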
1.3Gb for a heavy GUI application.
Unfortunately on Linux the JVM seems to pre-request 1.3G of virtual memory in that situation, which looks bad even if it's not needed (and causes a lot of confused grumbling from users)
On my most memory intensive app:
-Xms250M -Xmx1500M -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC
