How to pre-allocate max heap space on Android - java

I am working on an image processing app that needs a lot of heap space, and I am sending these images over the wire. The problem is that each image results in a log message like
Grow heap (frag case) to 16.38M for 1536000 byte allocation
This heap growing seems to be taking a long time: when I send the images from a computer it takes 1 second, but with my Android app it takes about 1 minute 30 seconds.
My question is, is there a way to pre-allocate the max heap size so that the heap doesn't have to keep growing?

... is there a way to pre-allocate the max heap size so that the heap doesn't have to keep growing?
I don't think so. Reading through the Android memory management article reveals nothing.
In classic Java, you can set an initial heap size to avoid the performance overhead of continually growing the heap. But the assumption on Android seems to be that you should be trying to minimize memory usage at all times.
You might get some "joy" if you simply allocate a very large array and immediately make it unreachable. The GC should grow the heap so that it can hold the array ... and the space occupied by the array will typically be recycled and made available on the next GC cycle. On the other hand, when the GC reclaims a huge amount of memory, it might shrink the heap to give most of the space back to the operating system.
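A minimal sketch of that trick, if you want to experiment with it (the 32 MB figure is illustrative, and whether Dalvik keeps the enlarged heap afterwards is entirely up to the GC):

public final class HeapWarmer {
    public static void warmHeap() {
        // Force the heap to grow enough to hold one large block ...
        byte[] ballast = new byte[32 * 1024 * 1024];
        // ... then immediately make it unreachable again.
        ballast = null;
        // Hint a collection so the space is free for real allocations
        // (the VM is free to ignore this).
        System.gc();
    }
}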
Having said that, you would probably be advised to change your app so that it doesn't need to hold lots of images in memory. It sounds to me like you might have a memory leak of some type. Read these links:
https://developer.android.com/training/articles/memory.html
https://developer.android.com/tools/debugging/debugging-memory.html
https://developer.android.com/training/displaying-bitmaps/manage-memory.html

Related

Java 8 JVM Heap size keeps shrinking [duplicate]

This is a screenshot of a JVM (win64, 6u17) running ActiveMQ. After every garbage collection the heap size shrinks; as the heap shrinks, garbage collection gets more frequent, and the heap shrinks more quickly. Eventually the VM locks up as it is spending all its time in GC.
-Xms is left at its default and -Xmx is 2048m.
What is happening!!? How can I avoid this?
Shrinking heap (screenshot): http://imagebin.org/92614
N.B. Originally posted on serverfault.com; moved to stackoverflow.com as requested.
Google found me the following, from the IBM JVM FAQ (how's that for an NLA):
When does the Java heap shrink?
Heap shrinkage occurs when GC determines that there is a lot of free heap storage, and releasing some heap memory is beneficial for system performance. Heap shrinkage occurs after GC, but when all the threads are still suspended.
The Sun JVM does something similar. Below is an excerpt from an Oracle Technology Network article entitled Ergonomics in the 5.0 Java Virtual Machine.
The heap will grow or shrink to a size that will support the chosen throughput goal. Some oscillations in the size of the heap during initialization and during a change in the application's behavior can be expected.
...
It is typical that the size of the heap will oscillate as the garbage collector tries to satisfy competing goals. This is true even if the application has reached a steady state. The pressure to achieve a throughput goal (which may require a larger heap) competes with the goals for a maximum pause time and a minimum footprint (which both may require a small heap).
I suggest you have a look at the rest of that document; it may have more information relevant to your problem.
There is a JVM argument that controls when the heap is resized.
-XX:MaxHeapFreeRatio
The default value for this is 70. The free ratio is the amount of space not allocated on the heap divided by the total heap size. If the percentage of free space rises above the default of 70%, the JVM will reduce the size of the heap to allow the OS to use the memory.
If the heap is shrinking too often you can increase the value of -XX:MaxHeapFreeRatio. If it is set to 100, presumably it will never shrink.
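For example, a launch line along these lines (the flag values and jar name are illustrative only):
java -Xms2048m -Xmx2048m -XX:MaxHeapFreeRatio=100 -jar yourapp.jar
Setting -Xms equal to -Xmx also stops the heap from ever shrinking below that size in the first place.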
Just a guess:
It looks like the system is pretty much idle. There might be some caching going on, and stuff drops out of the cache and gets gc'd. Or, since it is a queuing system, maybe it has some messages in the queue which slowly get delivered and gc'd afterwards.
The increased frequency of GC runs might be due to ever decreasing load on the system.
As to how to avoid it: why do you want to avoid it? Your CPU load appears to be zero, so you are free to let the GC do whatever it wants.

Java Heap Size Reduction

BACKGROUND
I recently wrote a Java application that consumes a specified number of MB. I am doing this purposefully to see how another Java application reacts to specific RAM loads (I am sure there are tools for this purpose, but this was the fastest). The memory consumer app is very simple: I enter the number of MB I want to consume and create a vector of that many bytes. I also have a reset button that removes the elements of the vector and prompts for a new number of bytes.
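Roughly, such a consumer can be sketched like this (class and method names are made up, and the button wiring is omitted):

import java.util.Vector;

public class MemoryConsumer {
    private final Vector<byte[]> blocks = new Vector<byte[]>();

    // Allocate roughly `megabytes` MB in 1 MB chunks.
    public void consume(int megabytes) {
        for (int i = 0; i < megabytes; i++) {
            blocks.add(new byte[1024 * 1024]);
        }
    }

    // The "reset" button: drop all chunks so they become garbage.
    public void reset() {
        blocks.clear();
    }
}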
QUESTION
I noticed that the heap size of the Java process never reduces once the vector is cleared. I tried clear(), but the heap remains the same size. It seems like the heap grows with the elements, but even though the elements are removed the size remains. Is there a way in Java code to reduce heap size? Is there a detail about the Java heap that I am missing? I feel like this is an important question, because if I wanted to keep a low memory footprint in any Java application, I would need a way to keep the heap size from growing, or at least from staying large for long periods of time.
Try triggering garbage collection by making a call to System.gc().
This might help you - When does System.gc() do anything
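Something along these lines (a standalone sketch; System.gc() is only a request that the VM may ignore, and totalMemory() is not guaranteed to drop even after a successful collection):

public class GcHintExample {
    public static void main(String[] args) throws InterruptedException {
        java.util.Vector<byte[]> blocks = new java.util.Vector<byte[]>();
        for (int i = 0; i < 100; i++) {
            blocks.add(new byte[1024 * 1024]);   // ~100 MB; assumes -Xmx is large enough
        }
        blocks.clear();                          // make the chunks garbage
        System.gc();                             // request a collection
        Thread.sleep(1000);                      // give a concurrent collector a moment

        Runtime rt = Runtime.getRuntime();
        System.out.println("used  = " + (rt.totalMemory() - rt.freeMemory()) + " bytes");
        System.out.println("total = " + rt.totalMemory() + " bytes (heap reserved by the VM)");
    }
}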
Calling GC extensively is not recommended.
You should set the max heap size with the -Xmx option and watch the memory allocation of your app. Also, use weak references for objects that have a short lifecycle, so the GC can remove them automatically.
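A minimal sketch of the weak-reference idea (the size is illustrative; note that a weakly reachable object can be collected at any GC, so you must always handle the null case):

import java.lang.ref.WeakReference;

public class WeakCacheExample {
    public static void main(String[] args) {
        // Hold short-lived data only through a WeakReference so the GC can
        // reclaim it as soon as nothing strongly references it.
        WeakReference<byte[]> cacheRef = new WeakReference<byte[]>(new byte[10 * 1024 * 1024]);

        byte[] data = cacheRef.get();   // may already be null after a GC
        if (data != null) {
            System.out.println("still cached: " + data.length + " bytes");
        } else {
            System.out.println("collected; recreate or reload the data");
        }
    }
}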

Can an OOM be caused by not finding enough contiguous memory?

I start some Java code with -Xmx1024m, and at some point I get an hprof due to an OOM. The hprof shows just 320 MB, and gives me a stack trace:
at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
at java.lang.String.<init>([CII)V (String.java:215)
at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
...
This comes from a large string I am copying.
I remember reading somewhere (cannot find where) that what happens in these cases is:
the process still has not consumed 1 GB of memory; it is way below that
even if the heap is still below 1 GB, the copy needs some amount of memory, and for copyOfRange() it has to be contiguous memory; so even though the process is not over the limit yet, if it cannot find a large enough contiguous piece of memory, it fails with an OOM
I have tried to look for documentation on this (that copyOfRange() needs a block of contiguous memory), but could not find any.
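For reference, the kind of code that produces that stack trace looks roughly like this (the sizes are made up; the point is that the String's backing char[] must be one contiguous block):

public class BigStringExample {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 50 * 1000 * 1000; i++) {
            sb.append('x');            // ~100 MB of char data in the builder
        }
        // toString() copies the builder's backing char[] into a fresh array
        // (via Arrays.copyOfRange, as in the stack trace above), which needs
        // another ~100 MB contiguous allocation.
        String s = sb.toString();
        System.out.println(s.length());
    }
}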
The other possible culprit would be not enough permgen memory.
Can someone confirm or refute the contiguous memory hypothesis? Any pointer to some documentation would help too.
If you are using the concurrent mark sweep collector you can get fragmentation. However, for new objects, provided there is enough young generation space, you don't need to worry about fragmentation, as the free Eden space is always contiguous.
In many applications, only a small portion of the heap is given to the young generation so if you have a fragmented tenured space and you create a relatively small object (as small as 5% of the maximum memory size) you can get an OutOfMemoryError.
Given you will have very poor performance if you run close to the maximum memory, I would suggest you either make your application use less memory or increase the maximum heap size. This increases the young generation size as well. Alternatively, you could set -XX:NewSize=512m.

Heap memory behaviour

I always had a question about heap memory behaviour.
Profiling my app, I get the above graph. Everything seems fine. But what I don't understand is why, at GC time, the heap grows a little bit even though there is enough memory (red circle).
Does that mean that a long-running app will run out of heap space at some point?
Not necessarily. The garbage collector is free to use up to the maximum allocated heap in any way it sees fit. Extrapolating future GC behaviour based on current behaviour (but with different memory conditions) is in no way guaranteed to be accurate.
This does have the unfortunate side effect that it's very difficult to determine whether an OutOfMemoryError is going to happen unless it does. A legal (yet probably quite inefficient) garbage collector could simply do nothing until the memory ceiling was hit, then do a stop-the-world mark and sweep of the entire heap. With this implementation, you'd see your memory constantly increasing, and might be tempted to say that an OOME was imminent, but you just can't tell.
With such small heap sizes, the increase here is likely just due to bookkeeping/cache size alignment/etc. You're talking about less than 50 KB or so, judging by the resolution of the scale, so I wouldn't be worried.
If you do think there's a legitimate risk of OutOfMemoryErrors, the only way to show this is to put a stress test together and show that the application really does run out of heap space.
The HotSpot garbage collectors decide to increase the total heap size immediately after a full GC has completed if the ratio of free space to total heap size falls below a certain threshold. This ratio can be tuned using one of the many -XX options for the garbage collector(s).
Looking at the memory graph, you will see that the heap size increases occur at the "saw points"; i.e. the local maxima. Each of these correspond to running a full GC. If you look really carefully at the "points" where the heap gets expanded, you will see that in each case the amount of free space immediately following the full GC is a bit higher than the previous such "point".
I imagine that what is happening is that your application's memory usage is cyclical. If the GC runs at or near a high point of the cycle, it won't be able to free as much memory as if the GC runs at or near a low point. This variability may be enough to cause the GC to expand the heap.
(Another possibility is that your application has a slow memory leak.)
Does that mean that a long-running app will run out of heap space at some point?
No. Assuming that your application's memory usage (i.e. the integral of space occupied by reachable objects) is cyclic, the heap size will approach a fixed high limit and never exceed it. Certainly OOMEs are not inevitable.

