VisualVM heap size doesn't follow used size

I'm using VisualVM to profile a JavaFX 8 application that does some graphing and uses a lot more memory than I would like. It doesn't seem to be leaking, but for some reason my total heap never decreases even though my used heap spikes up and down as I select different files to graph. The used heap spikes up when I graph a new file and drops back down when I exit that screen, but the total heap just shoots up and remains constant. Is this normal?

Yes, that's normal. The JVM allocates more heap memory from the OS and doesn't give it back, but the actual usage may vary, i.e. the currently unused portion of the heap may change.
One reason is that allocating memory from the OS is somewhat costly, and the data may end up fragmented once written: garbage collection frees chunks of memory, so the used size goes down, but the chunks still in use may be spread out all over the total heap memory.
I'm not sure how the JVM actually handles that in detail (in fact different JVMs might handle it differently), but you might look up free lists, buddy systems etc. that are used in such situations.
One could argue that the JVM could "defragment" the memory after garbage collection and release excess heap memory afterwards, but that would be quite a performance hit, especially if lots of data had to be moved around in RAM or even virtual/swap memory. As with many computing problems it's a tradeoff between space and CPU usage.
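If you want to double-check what VisualVM is drawing, you can poll the same numbers from inside the application via the standard MemoryMXBean. A minimal sketch (the class name HeapWatcher is made up for illustration): "committed" corresponds to the total-heap line, "used" to the jagged curve underneath it.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatcher {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Snapshot of the heap as seen by the JVM's memory manager.
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("used: %d MB, committed (total): %d MB, max: %d MB%n",
                        heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
                Thread.sleep(1000); // sample once per second
            }
        }
    }

You should see "used" drop after collections while "committed" stays roughly where it is, which is exactly the behaviour described above.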

Related

Java heap space : pros/cons of size in performance terms

What are the limitations on Java heap space? What are the advantages/disadvantages of not increasing the size to a large value?
This question stems from a program I am running that sometimes runs out of heap space. I am aware that we can change the size of the heap to a larger value in order to reduce the chance of this. However, I am looking for the pros and cons of keeping the heap size small or large. Does decreasing it increase speed?
Increasing the heap size may, under some circumstances, delay the activities of the garbage collector. If you are lucky, it may be delayed until your program has finished its work and terminated.
Depending on the data structures that you are moving around in your program, a larger heap size may also improve performance because the GC can work more effectively. But tuning a program in that direction is … tricky, at best.
Using in-memory caches in your program will definitely improve performance (if you have a decent cache-hit ratio), and of course, this will need heap space, perhaps a lot.
But if your program reports OutOfMemoryError: heap space because of the data it has to process, you do not have many alternatives other than increasing the heap space; performance is your least problem in this case. Or you change your algorithm so that it does not load all the data into memory and instead processes it on disk, but then again, performance goes out the door.
If you run a server of some kind, about 80% to 85% heap utilisation is a good value, provided you do not have heavy spikes. If you know for sure that incoming requests do not cause significant additional memory load, you may even go up to 95%. You want value for money, and you paid for the memory one way or the other – so you want to use it!
You can also set -Xms and -Xmx to different values; then the JVM can increase the heap space when needed, and today it can even release that additional memory when it is no longer needed. But this growing costs performance – on the other hand, a slow system is always better than one that crashes.
A heap size that is too small may also hurt performance if your system does not have enough cores, because the garbage collector threads then compete with the business threads for the CPU. At some point, the CPU spends a significant amount of time on garbage collection. Again, if you run a server, this behaviour is nearly unavoidable, but for a tool-like program, increasing the heap can prevent it (because the program may finish before the GC needs to get active). This was already said in the beginning …
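To see how much of that tradeoff your own program actually hits, you can size the heap explicitly and turn on GC logging. A sketch with HotSpot 8 style flags (the jar name is just a placeholder):

    java -Xms512m -Xmx4g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar

If the resulting log shows the program spending a noticeable share of its wall-clock time in GC, a larger heap (or a smaller live data set) is usually the first thing to try.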

Force the JVM to collect garbage early and reduce system memory used early

We operate a Java application that we did not develop.
This application uses quite a lot of memory for certain tasks, depending on the data that is being manipulated – up to 4 GB. At other times, very little memory is needed, around 300 MB.
Once the JVM grabs hold of a lot of memory, it takes a very long time until the garbage is collected and even longer until memory is returned to the operating system. That's a problem for us.
What happens is as follows: the JVM needs a lot of memory for a task and grabs 4 GB of RAM to create a 4 GB heap. Then, after processing has finished, the heap is only 30%-50% filled. It takes a long time for memory consumption to change. When I trigger a GC (via JConsole) the heap shrinks to below 500 MB. After another triggered GC the heap shrinks to 200 MB. Sometimes memory is returned to the system, often it is not.
Here is a typical screenshot from VisualVM. The heap is collected (used heap goes down) but the heap size stays up. Only when I trigger a GC through the "Perform GC" button is the heap size reduced.
How can we tune the GC to collect memory much earlier? Performance and GC pause times are not much of an issue for us. We would rather have more frequent and earlier GCs to reduce memory in time.
And how can we tweak the JVM to release memory back to the operating system, making the memory used by the JVM smaller?
I do know about -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio, which help a bit, but watching the heap with VisualVM shows us that they are not always obeyed. We set -XX:MaxHeapFreeRatio to 40% and see in VisualVM that the heap is only about 10% filled.
We can't reduce the maximum memory (-Xmx) since sometimes, for a short period, a lot of memory is actually needed.
We are not limited to a particular GC. So any GC that solves the problem best can be applied.
We use the Oracle HotSpot JVM 1.8.
I'm assuming you use the HotSpot JVM.
Then you can use the JVM option -XX:InitiatingHeapOccupancyPercent=n (0 <= n <= 100), which the G1 collector uses, to make the JVM start garbage collection cycles earlier and therefore more often. When you set n to 0, constant GC should take place. But I'm not sure whether this is a good idea with regard to your application's response times.
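A sketch of what that could look like, assuming the G1 collector on HotSpot 8 (the percentages are only starting points to experiment with, and yourapp.jar is a placeholder):

    java -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=20 -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=25 -Xms300m -Xmx4g -jar yourapp.jar

As far as I know, since 8u20 G1 also takes -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio into account when deciding whether to shrink the heap after a collection, which addresses the "give memory back to the OS" part of the question.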

Java: Commandline parameter to release unused memory

In Bash, I use the command java -Xmx8192m -Xms512m -jar jarfile to start a Java process with an initial heap space of 512 MB and a maximum heap space of 8 GB.
I like how the heap space increases based on demand, but once the heap space has been increased, it doesn't get released even though the process no longer needs the memory. How can I release the memory that isn't being used by the process?
Example: the process starts and uses 600 MB of memory. The heap space increases from 512 MB to a little over 600 MB. The process then drops down to 400 MB of RAM usage, but the heap allocation stays at 600 MB. How would I make the allocation stay near the actual RAM usage?
You cannot; it's simply not designed to work that way. Note that unused memory pages will simply be paged out by the operating system, and so won't consume any real memory.
Generally you would not want the JVM to return memory to the OS and later claim it back, as both operations are not that cheap.
There are a couple of -XX parameters that may or may not work with your preferred garbage collector, namely
-XX:MaxHeapFreeRatio=70 Maximum percentage of heap free after GC to avoid shrinking.
-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to avoid expansion.
I believe you'd need a stop-the-world collector for them to be enforced.
Other JVMs may have their own parameters.
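A sketch of how that might look on the question's own command line (untested; the serial collector is picked only because it is a simple stop-the-world collector, not as a recommendation):

    java -Xms512m -Xmx8192m -XX:+UseSerialGC -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar jarfile

Lower values make the JVM shrink the heap more aggressively after a GC and expand it less eagerly; whether the freed pages actually go back to the OS still depends on the collector and JVM version.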
I normally would not have replied, but the amount of negative/false info here isn't cool.
No, it is a needed function. I think the JVM on Android can probably do this, but I'm not sure.
But most JVMs – including all the ones used for Java EE – are simply not interested in doing this.
It is not as simple as it seems: from the OS point of view the VM is a process, and it has a memory region mapped for it somewhere, which is its stack or data segment.
In most cases that region needs to be a contiguous interval. Allocating and releasing memory at the OS level happens with a system call, with which the process asks the OS for a new segment limit.
What do you do if, for example, your JVM has 2 gigabytes of RAM but uses only 500 megabytes, and those 500 megabytes are scattered across the 2 gigabytes in fragments of a few tens of bytes each? Such a memory-release function would also need a defragmentation step, which would multiply the resource costs of the GC runs.
As Java runs, and Java objects are constructed and later destroyed by the garbage collector, the free and allocated memory areas become dispersed throughout that stack/data segment.
If we look not at Java but at native OS processes, the situation is the same: if you malloc() ten 1 MB blocks and then free the first nine, there is no way to give that memory back to the OS, although newer allocator libraries and OS APIs have made progress on this. Of course, if you later allocate memory again, the allocation will be served from the just-freed regions.
My opinion is that even if this is a little costly and complex (and quite a lot of programming work), it would be worth the price, and it does not reflect well on our collective programming culture that it hasn't been done for decades in everything, including the Java VMs.

Performance of setting Java Initial and Maximum memory to the same value

In my work environment, we have a number of Enterprise Java applications that run on Windows Server 2008 hardware. An external consultant has been reviewing the performance of our Java applications and has suggested that we change our initial and maximum memory values to be identical.
So, at present, we're running an application with 1GB initial and 2GB maximum memory. Their recommendation is to change the initial and maximum memory both to 2GB.
Their reasoning is twofold...
By allocating the initial and maximum at 2GB, when the Java application is started it will grab a 2GB block of memory straight up. This prevents any other applications from accessing this memory block, so this memory will always be available to the Java application. This is opposed to the current situation, where it would get an initial block of 1GB only, which means potentially other applications could consume the remaining available memory on the server, and the enterprise Java application wouldn't be able to grow to the 2GB memory maximum.
There is a small performance hit every time Java needs to allocate memory between the initial and maximum size. This is because it needs to go to Windows, ask for a new block of the required size, and then use it. This performance hit occurs every time memory is required between the initial size and the maximum size. Therefore, setting them both to the same value means that this performance impact is removed.
I think their 1st point is valid from a safety perspective, as it means the java application will always have access to the maximum memory regardless of what other applications are doing on the server. However, I also think that this could be a negative point, as it means that there is a big block of memory that can't be used by other applications if they need it, which could cause general server performance problems.
I think the performance impact they discuss in point 2 is probably so negligible that it isn't worth worrying about. If they're concerned about performance, they would be better off tuning things like the garbage collection aspect of Java rather than worrying about the tiny performance impact of allocating a block of memory.
Could anyone please tell me whether there is a real benefit in their request to set the initial and maximum memory to the same value? Are there any general recommendations either way?
Setting them the same increases predictability. If you don't set them the same, then when the GC decides it needs more memory it will take time to allocate it and shuffle objects around to make room, in addition to its normal GC activities. While this is happening, requests are slower and your throughput suffers. On a 2 GB heap you will probably see around 400 MB allocated each time more memory is needed and 100 MB removed each time the memory isn't needed; increase the heap size and these numbers increase. The heap size would keep changing between your two values – it isn't as if the JVM just allocates that memory once and keeps it.
As for argument #1, that it ensures the OS always has the memory available for you: I believe this is a moot point. If your server is hurting for memory and you are already maxing out the machine to run the software, then you are running it on the wrong server. Get the hardware you need and give it room to grow. If you say your application could use 2 GB, I personally would run it on a machine with 4 GB or more free. If my client/user base grows, the freedom is there to increase the heap to accommodate new users.
IMO, even the first suggestion is not so important. Modern OSes have virtual memory, so even if you allocate 2 GB of memory to your Java process, it is not guaranteed that these 2 GB will always reside in physical memory. If there are other applications on the same box that consume a lot of memory, you'll get poor performance no matter how you allocate your heap.
The second point is VERY valid. Grabbing memory is a relatively slow operation, especially in large chunks. However, 2 GB is a large block. What kind of hardware are you running this on? If it has quite a bit of memory, it would be a good idea.
HOWEVER, if your computer doesn't really have a lot of memory, allocating a 2 GB block can be dangerous and greedy. You should instead create an object pooling scheme. Remember, the garbage collector takes memory and maintains its own memory pool, but in the case of 2 GB it is likely that some memory will be released to the OS.
Do some profiling; you would be surprised how much time can be saved by not wasting memory and not allocating memory all the time. The garbage collector will shield you from a lot of it, but not as much with a 2 GB block.
Ultimately, if possible... you should look into using LESS memory. For REAL performance, look into keeping buffers and arrays in the cache. This is a bit more complex, but it can produce big results.
Best of luck
When you start a JVM it reserves the maximum heap size as one continuous block of address space, whether you set the initial size or not. How much of it is actually used depends on usage; e.g. a hello world with an initial size of 2 GB will not use 2 GB. What setting the initial size does is tell the JVM not to try to limit usage until that initial size is reached. (But it won't allocate main memory for it if it is not actually needed.)
Setting the initial size can improve startup time. I tend to set the initial size to the size the JVM grows to anyway (or not bother if the startup time is short).
The real impact is that a small heap size can result in temporary objects being moved into the tenured space, which is expensive to clean up. A larger heap size can mean temporary objects are discarded before being copied many times and removed with a full GC.
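To make the two options concrete, the configurations being compared look roughly like this (app.jar is a placeholder; -XX:+AlwaysPreTouch is an optional HotSpot flag that only matters if you want the pages actually committed at startup rather than just reserved):

    java -Xms1g -Xmx2g -jar app.jar                       # current setup: heap may grow from 1 GB to 2 GB
    java -Xms2g -Xmx2g -jar app.jar                       # consultant's suggestion: fixed 2 GB heap
    java -Xms2g -Xmx2g -XX:+AlwaysPreTouch -jar app.jar   # additionally touches every heap page at startup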

Which GC to use when profiling memory?

I use the NetBeans profiler (which is actually an embedded VisualVM) to monitor the memory consumption of my Java applications. I use the heap view, the surviving generation view, and memory dumps to track memory leaks.
The heap view shows the total of used memory, but it's a bit chaotic, due to the way the garbage collector manages the memory. The graph is essentially sawtooth-shaped, and thus not particularly readable. Sometimes, I force the GC to happen, so that I can have a more precise value of the real memory consumption.
I was wondering: is there a garbage collector which is more appropriate for memory profiling, and which would yield a heap graph closer to the real memory usage? Or more generally, what JVM settings (-XX options or other) can I use in order to efficiently track memory leaks?
What you are seeing in your graph is the real behavior of your application's memory utilization. The repeated sawtooth pattern is likely due to allocations of short-lived objects which are being scavenged. If you believe you have a memory leak, take a heap dump snapshot and see what objects are being retained in the heap. You can take a snapshot using JConsole and open the resulting dump file using HPjmeter.
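If you prefer the command line over JConsole, a comparable snapshot can be taken with the jmap tool that ships with the JDK (replace <pid> with the process id of the profiled application):

    jmap -dump:live,format=b,file=heap.hprof <pid>

The live option triggers a full GC first, so only objects that are still reachable end up in the dump, which is usually what you want when hunting a leak.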
I suggest you profile with the same GC you intend to use when running without the profiler. Using this approach you will get a graph that is closer to how the application will actually behave, though not always a more readable one.
If you want a graph that is more readable but less realistic, you can increase the minimum memory size to, say, 1 GB. This will result in fewer GCs and a less spiky graph, but it may not help you beyond making the graph prettier.
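For example, something along these lines (app.jar is a placeholder; fixing -Xmx to the same value additionally keeps the heap-size line flat, so only the used-heap curve moves, which can make a slow upward trend easier to spot):

    java -Xms1g -Xmx1g -jar app.jar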
