API 26 adds a new option, Bitmap.Config.HARDWARE:
Special configuration, when bitmap is stored only in graphic memory.
Bitmaps in this configuration are always immutable. It is optimal for
cases, when the only operation with the bitmap is to draw it on a
screen.
Questions that aren't answered in the docs:
Should we now ALWAYS prefer Bitmap.Config.HARDWARE over
Bitmap.Config.RGB_565 when speed is the top priority and quality
and mutability are not (e.g. for thumbnails, etc.)?
Does pixel data decoded with this option actually NOT
consume ANY heap memory and reside in GPU memory only? If so, this
finally seems to be a relief for the OutOfMemoryError concerns when
working with images.
What quality should we expect from this option compared to RGB_565, RGBA_F16 or ARGB_8888?
Is the decoding speed itself the same/better/worse compared to
decoding with RGB_565?
(Thanks @CommonsWare for pointing to it in the comments.) What would
happen if we exceed GPU memory when decoding an image using this
option? Would some exception be thrown (maybe the same OutOfMemoryError :)?
Documentation and public source code have not yet been pushed to Google's git, so my research is based only on partial information, some experiments, and my own experience porting JVMs to various devices.
My test created a large mutable Bitmap and copied it into a new HARDWARE Bitmap on a button click, adding it to a list of bitmaps. I managed to create several instances of the large bitmaps before it crashed.
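In outline, the test did something like the following (a sketch, not the exact code; the bitmap size, handler name and list field are illustrative):

// On each click: create a large mutable bitmap, copy it into an immutable
// HARDWARE bitmap, and keep a reference so graphics memory keeps filling up.
private final List<Bitmap> bitmapList = new ArrayList<>();

void onAddBitmapClick(View v) {
    Bitmap source = Bitmap.createBitmap(4000, 4000, Bitmap.Config.ARGB_8888); // large, mutable
    Bitmap hardware = source.copy(Bitmap.Config.HARDWARE, /* isMutable = */ false); // may return null if the allocation fails
    source.recycle();          // the heap-backed copy is no longer needed
    bitmapList.add(hardware);  // hold on to the HARDWARE bitmap
}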
I was able to find this in the android-o-preview-4 git push:
+struct AHardwareBuffer;
+#ifdef EGL_EGLEXT_PROTOTYPES
+EGLAPI EGLClientBuffer eglGetNativeClientBufferANDROID (const struct AHardwareBuffer *buffer);
+#else
+typedef EGLClientBuffer (EGLAPIENTRYP PFNEGLGETNATIVECLIENTBUFFERANDROID) (const struct AHardwareBuffer *buffer);
Looking at the documentation of AHardwareBuffer: under the hood it creates an EGLClientBuffer backed by an ANativeWindowBuffer (native graphics buffer) in Android shared memory ("ashmem"). The actual implementation may vary across hardware, though.
So as to the questions:
Should we now ALWAYS prefer Bitmap.Config.HARDWARE over Bitmap.Config.RGB_565...?
For SDK >= 26, the HARDWARE configuration can improve low-level bitmap drawing by removing the need to copy the pixel data to the GPU every time the same bitmap returns to the screen. I guess it can prevent dropping some frames when a bitmap is added to the screen.
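For reference, requesting it at decode time looks something like this (a sketch; the file path and the ImageView are placeholders):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.HARDWARE; // API 26+, decoded pixels stay in graphics memory
Bitmap thumb = BitmapFactory.decodeFile("/sdcard/thumb.jpg", options); // result is immutable
imageView.setImageBitmap(thumb); // drawing it does not re-upload the pixels each frame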
The memory is not counted against your app, and my test confirmed this.
The native library docs say it will return null if memory allocation was unsuccessful.
Without the source code, it is not clear what the Java implementation (the API implementors) will do in this case - it might decide to throw an OutOfMemoryError or fall back to a different type of allocation.
Update: experiments reveal that no OutOfMemoryError is thrown. While the allocation is successful, everything works fine. Upon a failed allocation the emulator crashed (it simply disappeared). On other occasions I got a weird NullPointerException when allocating a Bitmap in app memory.
Due to the unpredictable stability, I would not recommend using this new API in production currently. At least not without extensive testing.
Does pixel data decoded with this option actually NOT consume ANY heap memory and reside in GPU memory only? If so, this finally seems to be a relief for the OutOfMemoryError concerns when working with images.
Pixel data will be in shared memory (probably texture memory), but there will still be a small Bitmap object on the Java heap referencing it (so "ANY" is inaccurate).
Every vendor can implement the actual allocation differently; it's not a public API they are bound to.
So running out of memory may still be an issue; I'm not sure how it can be handled correctly.
What quality compared to RGB_565/ARGB_8888?
The HARDWARE flag is not about quality, but about pixel storage location. Since the configuration flags cannot be OR-ed, I suppose that the default (ARGB_8888) is used for the decoding.
(Actually, the HARDWARE enum seems like a hack to me.)
Is speed of decoding itself the same/better/worse...?
The HARDWARE flag seems unrelated to the decoding itself, so the speed should be the same as with ARGB_8888.
What would happen if we exceed GPU memory?
My tests resulted in very bad things when memory was running out.
The emulator sometimes crashed horribly, and on other occasions I got unexpected, unrelated NPEs. No OutOfMemoryError occurred, and there was also no way to tell when the GPU memory was running out, so no way to foresee this.
Related
I have an app which displays a grid of thumbnails. The app decodes the input stream to a bitmap with BitmapFactory.decodeStream() for each thumbnail in order to display it.
I noticed that the GC is super active when I scroll up/down fast enough, which makes the scrolling jerky.
I tried to isolate the problem and wrote a simple app where I do 10000 decodeStream() calls in a loop, and noticed that even though there is enough memory, the GC still gets triggered constantly (even if I call bitmap.recycle() after each iteration).
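In essence the isolated test was something like this (a sketch; the resource name is arbitrary):

// Decode the same small image 10000 times; the GC churns even though each bitmap is recycled.
for (int i = 0; i < 10000; i++) {
    try (InputStream in = getResources().openRawResource(R.raw.thumb)) { // any small image
        Bitmap bitmap = BitmapFactory.decodeStream(in);
        bitmap.recycle(); // recycled right away, yet the GC still runs constantly
    } catch (IOException e) {
        break;
    }
}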
Question: how to prevent GC from being too active while executing BitmapFactory.decodeStream()?
The general approach to dealing with memory in Android is the same as the mantra for environmental concerns: reduce, reuse, recycle. "Reduce" means "request less" (e.g., use inSampleSize on BitmapFactory.Options to only load in a downsampled image). "Recycle" means "make sure it can get garbage-collected ASAP".
But, before "recycle" comes "reuse". The Dalvik garbage collector is not a compacting or moving collector, so heap can become fragmented. If you already have an allocation that's the right size, reuse it, rather than let it be collected and then have to re-allocated it again. With bitmaps, that means use inBitmap on BitmapFactory.Options, or use an image-loading library that does this for you.
Will it give the same boost on Android >=5.0
Generally yes, though the exact impacts may vary somewhat.
or do the optimizations made in L make the use of inBitmap unnecessary (not worth the added complexity)?
ART's garbage collector has a variety of improvements. The big one is that it is a compacting or moving collector, though only while your app is in the background, which will not help you much in your case.
However, ART also has a separate area of the heap for large byte arrays (or other large objects that do not have any pointers to other objects inside of them). ART is much more efficient about collecting these, and they will cause less heap fragmentation.
That being said, I'd still use inBitmap. If your minSdkVersion is 21+, you might try skipping inBitmap and seeing how it goes. But if your minSdkVersion is below 21, you need inBitmap anyway, and I'd just use that code across the board.
I'm currently making an Android App that modifies some bytes of an image. For this, I've written this code:
Bitmap bmp = BitmapFactory.decodeStream(new FileInputStream(path));
ByteBuffer buffer = ByteBuffer.allocate(bmp.getByteCount()); // room for all pixel bytes (4 per pixel for ARGB_8888)
bmp.copyPixelsToBuffer(buffer);
return buffer.array();
The problem is that this approach uses too much heap memory and throws an OutOfMemoryError.
I know that I can make the heap memory for the app bigger, but it doesn't seem like a good design choice.
Is there a more memory-friendly way of changing bytes of an image?
It looks like there are two copies of the pixel data on the managed heap:
The uncompressed data in the Bitmap
The copy of the data in the ByteBuffer
The memory requirement could be halved by leaving the data in the Bitmap and using getPixel() / setPixel() (or perhaps editing a row at a time with the "bulk" variants), but that adds some overhead.
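A sketch of the row-at-a-time idea (the bitmap must have been decoded as mutable, and the edit shown is just a placeholder):

int width = bmp.getWidth();
int[] row = new int[width];                        // one row of ARGB pixels, reused for every row
for (int y = 0; y < bmp.getHeight(); y++) {
    bmp.getPixels(row, 0, width, 0, y, width, 1);  // read row y
    for (int x = 0; x < width; x++) {
        row[x] ^= 0x00FFFFFF;                      // placeholder edit: invert the colour channels
    }
    bmp.setPixels(row, 0, width, 0, y, width, 1);  // write row y back
}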
Depending on the nature of the image, you may be able to use a less precise format (e.g. RGB 565 instead of 8888), halving the memory requirement.
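For example, if the image has no alpha channel worth keeping:

BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.RGB_565; // 2 bytes per pixel instead of 4
options.inMutable = true;                          // needed if you plan to edit it in place
Bitmap bmp = BitmapFactory.decodeStream(new FileInputStream(path), null, options);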
As noted in one of the comments, you could uncompress the data to a file, memory-map it with java.nio.channels.FileChannel#map(), and access it through a MappedByteBuffer. This adds a fair bit of overhead to loading and saving, and may be annoying since you have to work through a ByteBuffer rather than a byte[].
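A rough sketch of that approach (the scratch file name is made up, and the bitmap must be mutable if you copy the pixels back):

File scratch = new File(getCacheDir(), "pixels.raw"); // hypothetical scratch file
try (RandomAccessFile raf = new RandomAccessFile(scratch, "rw")) {
    FileChannel channel = raf.getChannel();
    MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, bmp.getByteCount());
    bmp.copyPixelsToBuffer(mapped);   // pixel bytes now live in the mapped file, not the heap
    mapped.put(0, (byte) 0xFF);       // edit bytes in place through the mapping
    mapped.rewind();
    bmp.copyPixelsFromBuffer(mapped); // copy the edited pixels back when needed
} catch (IOException e) {
    // handle failure to create or map the scratch file
}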
Another option is expanding the heap with android:largeHeap (documented here), though in some respects you're just postponing the inevitable: you may be asked to edit an image that is too large for the "large" heap. Also, the capacity of a "large" heap varies from device to device, just as the "normal-sized" heap does. Whether or not this makes sense depends in part on how large the images you're loading are.
Before you do any of this I'd recommend using the heap analysis tools (see e.g. this blog post) to see where your memory is going. Also, look at the logcat above the out-of-memory exception; it should identify the size of the allocation that failed. Make sure it looks "reasonable", i.e. you're not inadvertently allocating significantly more than you think you are.
I am creating an Android game in Java. I have quite a lot of images but don't need to use them all at once, so I have created a Resource Manager class which takes care of the Bitmaps that are in use. However, I have found it quite slow to clear the Bitmaps out of memory. I am currently doing something like this:
bitmap.recycle();
bitmap = null;
System.gc(); // also tried Runtime.getRuntime().gc()
Firstly, is there any way to unload the bitmaps from memory more quickly? And is it possible to somehow check whether they actually ARE cleared, so I can make the loading screen depend on that as well?
There is no guarantee that the garbage collector will actually run when you call System.gc(); gc() is only a request, and the VM decides based on conditions such as memory pressure. So calling gc() often just wastes critical CPU cycles. As a developer, you can make unneeded objects eligible for collection by nulling out the references to them.
There are a couple of optimization techniques that can be helpful when building a game:
Use Texture. Here is an example.
Use sprites and sprite sheets (they give less overhead than loading individual bitmaps). Many open source game engines use this technique; if you don't want to use one of them, you can study their sources to get an idea of how to build it from scratch.
Use the standard Android docs on Loading Large Bitmaps Efficiently and Caching Bitmaps for better bitmap usage (see the sketch after this list). The idea is that when the user's device is not powerful enough to handle the amount of processing, or does not have enough memory for your game, you can always scale the bitmaps down (compromising quality for better responsiveness).
Always test your app for memory leaks. Here is a nice post that will help.
Keep items that are used several times within the same scene in memory (don't release them once used), because loading images into memory takes a lot of time.
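As a rough illustration of the caching tip above (the budget of 1/8 of the heap and the disk loader are just examples):

// Memory cache keyed by image name; evicts least recently used bitmaps first.
private final LruCache<String, Bitmap> cache =
        new LruCache<String, Bitmap>((int) (Runtime.getRuntime().maxMemory() / 1024 / 8)) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount() / 1024; // measure entries in kilobytes
            }
        };

Bitmap getBitmap(String key) {
    Bitmap cached = cache.get(key);
    if (cached == null) {
        cached = loadBitmapFromDisk(key); // hypothetical loader you provide
        cache.put(key, cached);
    }
    return cached;
}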
Hope this will help you.
As SylvainL said, System.gc() and friends collect the full garbage and can be quite slow. The Java VM runs the GC periodically, and the period is fine-tuned depending on how much free memory is available at a given moment.
The best choice for me is to use some kind of bitmap pooling: have a set of pre-created Bitmap instances that you can acquire from and release back to the pool, and manage Buffer instances in a cache with an LRU policy.
With proper fine-tuning you can get zero cost for creating and destroying Bitmap instances, as they are pooled, and Buffer instances containing bitmap data will be dynamically loaded into and unloaded from memory depending on usage.
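A minimal sketch of the pooling idea (fixed-size ARGB_8888 bitmaps only; eviction and thread safety are left out):

// Very small bitmap pool: reuse same-sized bitmaps instead of allocating
// and garbage-collecting new ones all the time.
class BitmapPool {
    private final Deque<Bitmap> free = new ArrayDeque<>();
    private final int width, height;

    BitmapPool(int width, int height) {
        this.width = width;
        this.height = height;
    }

    Bitmap acquire() {
        Bitmap b = free.poll(); // reuse a pooled bitmap if one is available
        return (b != null) ? b : Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    }

    void release(Bitmap b) {
        b.eraseColor(Color.TRANSPARENT); // wipe old contents before it goes back into the pool
        free.push(b);
    }
}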
I've written a simple application that works with a database. My program has a table to show data from the database. When I try to expand the frame, the program fails with an OutOfMemory error, but if I don't do this, it works well.
I start my program with the -Xmx4m parameter. Does it really need more than 4 megabytes to be in the expanded state?
Another question: if I run Java VisualVM, I see a saw-toothed chart of my program's heap usage, while other programs running on the Java VM (such as NetBeans) have much flatter charts. Why is my program's heap usage so unstable even when it does nothing (only waiting for the user to push a button)?
You may want to try setting this value to generate a detailed heap dump to show you exactly what is going on.
-XX:+HeapDumpOnOutOfMemoryError
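For example, combined with your current setting (the paths and jar name are illustrative):

java -Xmx4m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/app.hprof -jar yourapp.jar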
A typical "small" Java desktop application in 2011 is going to run with ~64-128MB. Unless you have a really pressing need, I would start by leaving it set to the default (i.e. no setting).
If you are trying to do something different (e.g. run this on an Android device), you are going to need to get very comfortable with profiling (and you should probably post with that tag).
Keep in mind that your 100 record cache (~12 bytes) may well be - probably is - double that if you are storing character data (Java uses UTF-16 internally).
RE: the "unstability", the JVM is going handling memory usage for you, and will perform garbage collection according to whatever algos it chooses (these have changed dramatically over the years). The graphing may just be an artifact of the tool and the sample period. The performance in a desktop app is affected by a huge number of factors.
As an example, we once had a huge memory "leak" that only showed up in one automated test but never showed up in normal real world usage. Turned out the test left the mouse hovering over a tool tip which included the name of the open file, which in turn had a set of references back to the entire (huge) project. Wiggling the mouse a few pixels got rid of the tooltip, which meant that the references all cleared up and the garbage collector took out the trash.
Moral of the story? You need to capture the exact heap dump at time of the out-of-memory and review it very carefully.
Why would you set your maximum heap size to 4 megabytes? Java is often memory intensive, so setting it at such a ridiculously low level is a recipe for disaster.
It also depends on how many objects are being created and destroyed by your code, how the underlying Swing (I am assuming) components draw their elements, and how those elements are created and destroyed each time a component is redrawn.
Look at the CellRenderer code; it will show you why objects are created and destroyed so often, and why the garbage collector does such a wonderful job.
Try playing with the Xmx setting and see how the charts flatten out. I would expect -Xmx64m or -Xmx128m to be suitable (although the amount of data coming out of your database will obviously be an important contributing factor).
You may need more than 4 MB for a GUI with an expanded window if you are using double buffering, which generates multiple images of the UI in order to show them quickly on the screen. This is usually done on the assumption that you have lots and lots of memory.
The sawtooth memory pattern is due to something being allocated and then garbage collected. This may be a repaint operation or another timer. Is there a timer in your code that checks some process or value for changes? Or have you added code to an object's repaint or another process?
I think 4 MB is too small for anything except a trivial program - for example, many GUI libraries (Swing included) need to allocate temporary working space for graphics that alone may exceed that amount.
If you want to avoid out of memory errors but also want to avoid over-allocating memory to the JVM, I'd recommend setting a large maximum heap size and a small initial heap size.
Xmx (the maximum heap size) should generally be quite large, e.g. 256mb
Xms (the initial heap size) can be much smaller, 4mb should work - though remember that if the application needs more than this there will be a temporary performance hit while it is resized
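For example (the jar name is a placeholder):

java -Xms4m -Xmx256m -jar yourapp.jar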
I have a large scene graph in Java 3D consisting of a Group which contains around 3500 Switches, each containing a Shape3D and a Group; the latter contains two more Shape3Ds.
The reason for this setup is that each of the 3500 Switches must be able to be either completely hidden or have either of its two children visible.
The problem occurs when I try to modify the geometry of the two Shape3Ds in the Group in a Switch. I have attempted the following:
Change Group to BranchGroup. When the geometry needs to be changed I detach the BranchGroup and create a new one, with updated geometry, to replace it. Leaks huge amounts of memory. For example, the initial memory usage is around 100 MB; one geometry change later it is around 400 MB.
Make the Geometry editable. When the geometry needs to be changed I edit it directly. Leaks huge amounts of memory. Similar to above.
Make the Geometry editable, but by reference. When the geometry needs to be changed I call updateData(...) with an appropriate GeometryUpdater, which then does its thing. Leaks memory.
Recreate the entire scene graph. When the geometry needs to be changed, I detach the entire scene graph, recreate it from scratch using the updated geometry, and attach the new scene graph. Leaks memory.
I can't help but feel there is something basic about Java 3D memory management that I'm missing and that is common to all my attempts.
The speed of changing the geometry is not an issue, as it is a rare occurrence. The memory problem, however, is serious.
It's usually misleading to use tools that monitor memory at the operating system level to deduce memory leaks in a Java Virtual Machine. The JVM has its own ideas on when it is efficient to claim and reclaim memory.
If you could explain how you are observing the memory leak and why it is a serious problem then it might be easier to answer your question.
How are you measuring memory usage?
If you force a garbage collection and output the memory usage, do you still see the leak? (See the sketch after these questions.)
Does the memory problem cause a java.lang.OutOfMemoryError?
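For the second question, something along these lines gives a rough reading (System.gc() is only a request, so treat the number as approximate):

// Sample heap usage from inside the program after requesting a collection.
Runtime rt = Runtime.getRuntime();
rt.gc(); // ask for a full collection before measuring
long usedBytes = rt.totalMemory() - rt.freeMemory();
System.out.println("Used heap after GC: " + (usedBytes / (1024 * 1024)) + " MB");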
You might also be interested in this question: https://stackoverflow.com/questions/1716597/java-memory-leak-detection-tools
Attach to your program with visualvm (available as jvisualvm binary in the JDK), and use the profiler to get an idea where your memory goes.