Java BufferedImage memory consumption

Our application generates images. The memory consumed by a BufferedImage triggers an out-of-memory error:
java.lang.OutOfMemoryError: Java heap space
This happens with the following line:
BufferedImage result = new BufferedImage(2540, 2028, BufferedImage.TYPE_INT_ARGB);
Checking free memory just before this instruction shows that I have 108 MB free. The approach I use to check memory is:
Runtime rt = Runtime.getRuntime();
rt.gc();                                              // request a collection so the numbers are as fresh as possible
long maxMemory = rt.maxMemory();                      // the most the heap is allowed to grow to (-Xmx)
long usedMemory = rt.totalMemory() - rt.freeMemory(); // memory currently occupied by objects
long freeMem = maxMemory - usedMemory;                // what the VM could still make available, in theory
We don't understand how the BufferedImage can consume more than 100MB of memory. It should use 2540 * 2028 * 4 bytes, which is ~20 MB.
Why is so much memory consumed when creating the BufferedImage? What can we do to reduce this?

Asking Runtime for the amount of free memory is not really reliable in a multithreaded environment, as the memory could be used up by another thread right after you measure. Also, you are using maxMemory - usedMemory, which is not the amount of free memory but rather what the VM thinks it can make available at most - it may be that the host system cannot satisfy a request for more memory while the VM still believes it can enlarge the heap.
It's also entirely possible that your VM has 108 MB free, but no 20 MB of it available in one chunk. The type of BufferedImage you are trying to create is ultimately backed by an int[] array, which must be allocated as a contiguous memory block. That means that if no contiguous 20 MB block is available on the heap, you will get an OutOfMemoryError no matter how much total free memory there is otherwise. The situation is further complicated by the garbage collector used - each GC has different strategies for memory allocation, and a sizable portion of the heap may be set aside for thread-local allocation.
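As a quick sanity check (a minimal sketch, not from the original post), you can confirm that a TYPE_INT_ARGB image is backed by a single int[] of width * height elements, i.e. one contiguous ~20 MB allocation for this image size:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class ImageMemoryCheck {
    public static void main(String[] args) {
        // TYPE_INT_ARGB stores one int (4 bytes) per pixel in a single backing array
        BufferedImage img = new BufferedImage(2540, 2028, BufferedImage.TYPE_INT_ARGB);
        DataBufferInt buffer = (DataBufferInt) img.getRaster().getDataBuffer();
        int[] pixels = buffer.getData();
        System.out.println("Backing array length: " + pixels.length);       // 2540 * 2028 = 5,151,120
        System.out.println("Approx. bytes: " + (long) pixels.length * 4L);  // ~20.6 MB, contiguous
    }
}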
Without any information about how large the heap is in total, which GC you are using (and which VM, for that matter), there are too many variables to point a finger at a culprit.
Edit: Find out which GC is used (see the Java 7 (JDK 7) garbage collection documentation and the documentation on G1) and have a look at its specific pros and cons - especially what capabilities it offers in terms of heap compaction and how large its generations are by default. Those would be the parameters to play with. Running the application with GC logging enabled may also provide insight into what's going on.
Considering your heap is only 900 MB in size, 100 MB free means you're pretty close to the limit already - my first go-to cure would be to simply assign the VM a much larger heap, let's say 2 GB. If you need to conserve memory, your only bet is tuning the GC parameters (possibly selecting another GC) - and to be honest I have no experience with that. There are plenty of articles on GC tuning available, though.
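For example, the standard flags for that would look something like this (MyImageApp is just a placeholder for your main class; -verbose:gc also gives you the GC logging mentioned above):
java -Xms512m -Xmx2g -verbose:gc MyImageApp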

Related

Java 8 JVM Heap size keeps shrinking [duplicate]

This is a screen shot of a JVM (win64, 6u17) running ActiveMQ. After every garbage collection the heap size is reduced; as the heap size reduces, garbage collection gets more frequent and the heap shrinks more quickly. Eventually the VM locks up as it's spending all its time in GC.
-Xms is the default and -Xmx is 2048 MB.
What is happening!? How can I avoid this?
Screenshot of the shrinking heap: http://imagebin.org/92614
N.B. Originally posted on serverfault.com, moved to stackoverflow.com as requested.
Google found me the following, from the IBM JVM FAQ (how's that for an NLA):
When does the Java heap shrink?
Heap shrinkage occurs when GC determines that there is a lot of free heap storage, and releasing some heap memory is beneficial for system performance. Heap shrinkage occurs after GC, but when all the threads are still suspended.
The Sun JVM does something similar. Below is an excerpt from an Oracle Technology Network article entitled Ergonomics in the 5.0 Java Virtual Machine.
The heap will grow or shrink to a size that will support the chosen throughput goal. Some oscillations in the size of the heap during initialization and during a change in the application's behavior can be expected.
...
It is typical that the size of the heap will oscillate as the garbage collector tries to satisfy competing goals. This is true even if the application has reached a steady state. The pressure to achieve a throughput goal (which may require a larger heap) competes with the goals for a maximum pause time and a minimum footprint (which both may require a small heap).
I suggest you have a look at the rest of that document; it may have more information relevant to your problem.
There is a JVM argument that controls when the heap is resized.
-XX:MaxHeapFreeRatio
The default value for this is 70. The free ratio is the amount of space not allocated on the heap divided by the total heap size. If the percentage of free space rises above the default of 70%, the JVM will reduce the size of the heap to allow the OS to use the memory.
If the heap is shrinking too often you can increase the value of -XX:MaxHeapFreeRatio. If it is set to 100, presumably it will never shrink.
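For example, something along these lines (YourMainClass is a placeholder; -XX:MinHeapFreeRatio is the companion flag that controls when the heap grows, and the exact defaults vary between collectors and JVM versions):
java -Xmx2048m -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=100 YourMainClass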
Just a guess:
It looks like the system is pretty much idle. There might be some caching going on, where stuff drops out of the cache and gets GC'd. Or, since it is a queuing system, maybe it has some messages in the queue which slowly get delivered and GC'd afterwards.
The increasing frequency of GC runs might be due to ever-decreasing load on the system.
As to how to avoid it: why do you want to avoid it? It seems like your CPU load is zero, so you are free to let the GC do whatever it wants.

Under which circumstances will the JVM decide to grow the size of the heap?

A Java application runs on the Oracle HotSpot JVM. It starts up with default JVM settings, but with 100 MB of initial heap size and 1 GB of maximum heap size.
Under which circumstances will the JVM decide to grow the current heap size instead of trying a GC first?
The HotSpot JVM continuously monitors allocation rates and object lifetimes. It tries to achieve two key goals:
let short-lived objects die in eden
promote long-lived objects to the old generation in time, to prevent unnecessary copying between survivor spaces
In a nutshell, HotSpot has a configured threshold that indicates what percentage of the total allocated heap has to be free after running the garbage collector. For example, if this threshold is configured at 70% and heap usage after a full GC is 80%, additional memory will be allocated to hit the threshold. Of course, a bigger heap means longer pauses while a smaller heap means more frequent collections.
But you have to remember that the JVM is very complex, and you can change this behaviour, for example by using flags such as these (a hedged example command line follows the list):
AdaptiveSizePausePolicy, which will pick a heap size that achieves the shortest pauses
AdaptiveSizeThroughPutPolicy, which will pick a heap size that achieves the highest throughput
GCTimeLimit and GCTimeRatio, which set the time spent in application execution versus GC
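As a rough example, a command line exercising the adaptive-sizing behaviour might look like this (YourMainClass is a placeholder, and GCTimeRatio=19 asks for at most ~5% of total time in GC; the flag names listed above are paraphrased, so check your JVM version's documentation for the exact spellings):
java -XX:+UseParallelGC -XX:+UseAdaptiveSizePolicy -XX:GCTimeRatio=19 -XX:MaxGCPauseMillis=200 YourMainClass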
The number of objects occupying the heap increases while garbage collection cannot reclaim them.
When objects cannot be collected as garbage because they are still in use by the current process, the JVM needs to increase its heap size towards the maximum so that new objects can still be created.

Efficient GC collection with large heap of 30 - 100GB

Can Java 7 now handle a large heap of 30-100 GB efficiently, without significant GC pauses?
There are tuning options available, and concurrent GC, but there will still be some pauses during the GC of the tenured generation.
Angelika Langer explains this in detail in this presentation:
http://vimeo.com/28761227
Another option is to use Terracotta BigMemory. This is useful if you are storing objects in a big in-heap cache. It is not open source but is, in my opinion, reasonably priced. BigMemory basically allocates object memory outside the heap, so the heap size can be kept to a minimal or medium size.
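For reference, a Java 7 launch line for a large G1 heap might look roughly like this (the values are illustrative placeholders, not tuned recommendations):
java -Xms30g -Xmx30g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:gc.log YourMainClass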

Heap memory behaviour

I've always had a question about heap memory behaviour.
Profiling my app, I get the above graph. It all seems fine, but what I don't understand is why, at GC time, the heap grows a little bit even though there is enough memory (red circle).
Does that mean that a long-running app will run out of heap space at some point?
Not necessarily. The garbage collector is free to use up to the maximum allocated heap in any way it sees fit. Extrapolating future GC behaviour based on current behaviour (but with different memory conditions) is in no way guaranteed to be accurate.
This does have the unfortunate side effect that it's very difficult to determine whether an OutOfMemoryError is going to happen unless it does. A legal (yet probably quite inefficient) garbage collector could simply do nothing until the memory ceiling was hit, then do a stop-the-world mark and sweep of the entire heap. With this implementation, you'd see your memory constantly increasing, and might be tempted to say that an OOME was imminent, but you just can't tell.
With such small heap sizes, the increase here is likely just due to bookkeeping/cache size alignment/etc. You're talking about less than 50 KB or so, looking at the resolution of the scale, so I wouldn't be worried.
If you do think there's a legitimate risk of OutOfMemoryErrors, the only way to show this is to put a stress test together and show that the application really does run out of heap space.
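A trivial stress test could look something like the sketch below (purely illustrative; it just retains 1 MB chunks until the heap is exhausted, which is only meaningful if your real allocation pattern behaves similarly):
import java.util.ArrayList;
import java.util.List;

public class HeapStressTest {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<byte[]>();
        try {
            while (true) {
                retained.add(new byte[1024 * 1024]); // hold on to 1 MB per iteration
            }
        } catch (OutOfMemoryError e) {
            int held = retained.size();
            retained.clear(); // release the memory so we can safely report the result
            System.err.println("Heap exhausted after retaining ~" + held + " MB");
        }
    }
}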
The HotSpot garbage collectors decide to increase the total heap size immediately after a full GC has completed if the ratio of free space to total heap size falls below a certain threshold. This ratio can be tuned using one of the many -XX options for the garbage collector(s).
Looking at the memory graph, you will see that the heap size increases occur at the "saw points"; i.e. the local maxima. Each of these correspond to running a full GC. If you look really carefully at the "points" where the heap gets expanded, you will see that in each case the amount of free space immediately following the full GC is a bit higher than the previous such "point".
I imagine that what is happening is that your application's memory usage is cyclical. If the GC runs at or near a high point of the cycle, it won't be able to free as much memory as if it runs at or near a low point. This variability may be enough to cause the GC to expand the heap.
(Another possibility is that your application has a slow memory leak.)
Does that mean that a long-running app will run out of heap space at some point?
No. Assuming that your application's memory usage (i.e. the integral of the space occupied by reachable objects) is cyclic, the heap size will approach a fixed upper limit and never exceed it. OOMEs are certainly not inevitable.

