Quickly unload bitmaps from memory - java

I am creating this Android game in Java. I have quite a lot of images but don't need to use them all at once, so I have created a Resource Manager class which takes care of the Bitmaps that are in use. However, I have found it quite slow to clear a Bitmap out of memory. I am currently doing something like this:
bitmap.recycle();
bitmap = null;
System.gc(); // also tried Runtime.getRuntime().gc()
Firstly, is there any way to unload the bitmaps from memory more quickly? Or is it possible to somehow check whether they actually ARE cleared, so I can make the loading screen depend on that as well?

There is no guarantee that the garbage collector will actually run when we call System.gc(), as gc() expects certain preconditions, such as memory pressure, to be met. So calling gc() is often just a waste of critical CPU cycles. As developers, the most we can do is make unnecessary objects collectable by nullifying the references to them.
There are a couple of optimization techniques that can be helpful when building a game:
Use Texture. Here is an example.
Use Sprites and SpriteSheets (they put less overhead on the system than loading individual bitmaps). Many open-source game engines use this approach; if you don't want to use those engines, you can still learn from their sources how to build it from scratch.
Use the standard Android docs on Loading Large Bitmaps Efficiently and Caching Bitmaps for better bitmap handling. The idea is that when the user's device is not powerful enough to handle the amount of processing, and/or there is too little memory for your game, you can always scale the bitmap down (compromise on quality for better responsiveness); a minimal sketch follows this list.
Always test your app against memory leak problems. Here is a nice post that will help.
Keep in memory (don't release once used) items that are used several times within the same scene of the game, because loading images into memory takes a lot of time.
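As an illustration of the scale-down advice, here is a minimal sketch of two-pass decoding with inSampleSize, following the pattern from the "Loading Large Bitmaps Efficiently" doc. The resource id and target dimensions are placeholders you would supply yourself.

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapLoader {

    public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
        // First pass: read only the image bounds, no pixel allocation.
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeResource(res, resId, options);

        // Pick the largest power-of-two sample size that still covers the target size.
        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

        // Second pass: decode the downsampled pixels for real.
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeResource(res, resId, options);
    }

    private static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while ((options.outHeight / (inSampleSize * 2)) >= reqHeight
                && (options.outWidth / (inSampleSize * 2)) >= reqWidth) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }
}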
Hope this will help you.

As SylvainL said, System.gc and friends collect the full garbage and can be quite slow. The Java VM runs the GC periodically, and the period is fine-tuned depending on how much free memory is available at a given moment.
The best choice for me is to use some kind of bitmap pooling: have a set of prefab Bitmap instances that you can acquire from and release back to a pool, and manage Buffer instances in a cache that applies an LRU policy.
With proper fine-tuning, you can get zero cost for creating and destroying Bitmap instances since they are pooled, while Buffer instances containing bitmap data are dynamically loaded into and unloaded from memory depending on usage.
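A minimal sketch of the pooling idea, assuming all pooled bitmaps share one fixed size and RGB_565 config; the capacity and the clearing strategy are illustrative choices, not a prescription.

import android.graphics.Bitmap;
import java.util.ArrayDeque;
import java.util.Deque;

public final class BitmapPool {
    private final Deque<Bitmap> free = new ArrayDeque<>();
    private final int width;
    private final int height;
    private final int capacity;

    public BitmapPool(int width, int height, int capacity) {
        this.width = width;
        this.height = height;
        this.capacity = capacity;
    }

    // Hand out a pooled bitmap, creating a new one only when the pool is empty.
    public synchronized Bitmap acquire() {
        Bitmap b = free.poll();
        return (b != null) ? b : Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    }

    // Return a bitmap to the pool; overflow is recycled instead of hoarded.
    public synchronized void release(Bitmap b) {
        if (free.size() < capacity) {
            b.eraseColor(0); // clear old pixel data before reuse
            free.push(b);
        } else {
            b.recycle();
        }
    }
}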

Related

Garbage collection loading screen

I have two activities in my Android application. When I switch from the first activity to the second, GC starts and makes the second activity lag until it completes. I decided to show a splash screen (loading screen) that will not close until GC finishes, but I do not know how to get the GC status programmatically. Is there any class for this? Please let me know how I can handle this scenario!
To begin with, in Android, garbage collection is handled by ART (the Android Runtime) or, on older devices, the DVM (Dalvik Virtual Machine). As ART/Dalvik are essentially specialized versions of the JVM, they take a similar approach to GC, so it is managed solely by the system and not by the user.
Hence, you don't get to control garbage collection in Android.
Indeed, you can call System.gc(), but doing so is neither guaranteed to trigger a collection nor recommended. You are expected to forget about the garbage collection process entirely and leave it to the system.
While you cannot control it, you are still responsible for managing memory and preventing excessive memory usage as much as possible. A few tips you should consider (a small sketch follows the list):
Release bulky objects (remove hard references pointing to them) as soon as you're done working with them;
Use multithreading where it fits your needs; threads work in parallel and finish faster (especially on multi-core processors);
Optimize your algorithms; even basic list iterations can slow things down and leak memory if done incorrectly.
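As a small illustration of the first tip, a hypothetical Activity might drop its bulky references as soon as it leaves the screen; the class and field names here are made up for the example.

import android.app.Activity;
import android.graphics.Bitmap;

public final class GalleryActivity extends Activity {
    private Bitmap fullSizePhoto; // bulky object held only while the screen is visible

    @Override
    protected void onStop() {
        super.onStop();
        // Drop the hard reference so the collector can reclaim it; recycling is
        // optional on modern runtimes but frees the pixel memory immediately.
        if (fullSizePhoto != null) {
            fullSizePhoto.recycle();
            fullSizePhoto = null;
        }
    }
}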
Thank you guys for the answers. After some work I found what the problem in my code was.
I was executing this async class in a while loop, creating a new instance each time. So memory kept increasing, and after two hours the app started to hang, or the GC ran when the activity was switched.
I think the answer by #Serj sums it up quite well. Maybe you can find a workaround to get the GC triggered: keep the instance of your old activity referenced until your splash screen is set up, then remove the last references and hope for the GC to be called - though it could still happen that it gets called later. How to see the status of the GC is a good question; maybe you can read out the memory usage and check whether it is filled or not?
The best advice is refactoring and using objects only in scopes in which they are needed.

Bitmap.Config.HARDWARE vs Bitmap.Config.RGB_565

API 26 adds new option Bitmap.Config.HARDWARE:
Special configuration, when bitmap is stored only in graphic memory.
Bitmaps in this configuration are always immutable. It is optimal for
cases, when the only operation with the bitmap is to draw it on a
screen.
Questions that aren't explained in docs:
Should we now ALWAYS prefer Bitmap.Config.HARDWARE over Bitmap.Config.RGB_565 when speed is the top priority and quality and mutability are not (e.g. for thumbnails, etc.)?
Does pixel data after decoding with this option actually NOT consume ANY heap memory and reside in GPU memory only? If so, this finally seems to be a relief for the OutOfMemoryException concern when working with images.
What quality compared to RGB_565, RGBA_F16 or ARGB_8888 should we expect
from this option?
Is the speed of decoding itself the same/better/worse compared to decoding with RGB_565?
(Thanks #CommonsWare for pointing to it in comments) What would
happen if we exceed GPU memory when decoding an image using this
option? Would some exception be thrown (maybe the same OutOfMemoryException :)?
Documentation and public source code are not yet pushed to Google's git, so my research is based only on partial information, some experiments, and my own experience porting JVMs to various devices.
My test created a large mutable Bitmap and copied it into a new HARDWARE Bitmap on the click of a button, adding it to a list of bitmaps. I managed to create several instances of the large bitmaps before it crashed.
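Something along these lines reproduces the kind of experiment described (a hedged reconstruction, not the original test code; it requires API 26+ and assumes the source bitmap fits in graphics memory).

import android.graphics.Bitmap;
import java.util.ArrayList;
import java.util.List;

public final class HardwareBitmapTest {
    private final List<Bitmap> hardwareCopies = new ArrayList<>();

    // Called on each button click with a large, heap-backed source bitmap.
    public void onAddClicked(Bitmap largeMutableBitmap) {
        // The mutable flag must be false: HARDWARE bitmaps are always immutable.
        // (When decoding instead of copying, setting
        // BitmapFactory.Options.inPreferredConfig = Bitmap.Config.HARDWARE has a similar effect.)
        Bitmap hw = largeMutableBitmap.copy(Bitmap.Config.HARDWARE, false);
        if (hw != null) {
            hardwareCopies.add(hw); // keep it alive; pixels live in graphics memory, not the Java heap
        }
    }
}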
I was able to find this in the android-o-preview-4 git push:
+struct AHardwareBuffer;
+#ifdef EGL_EGLEXT_PROTOTYPES
+EGLAPI EGLClientBuffer eglGetNativeClientBufferANDROID (const struct AHardwareBuffer *buffer);
+#else
+typedef EGLClientBuffer (EGLAPIENTRYP PFNEGLGETNATIVECLIENTBUFFERANDROID) (const struct AHardwareBuffer *buffer);
And looking at the documentation of AHardwareBuffer, under the hood it creates an EGLClientBuffer backed by an ANativeWindowBuffer (native graphic buffer) in Android shared memory ("ashmem"). But the actual implementation may vary across hardware.
So as to the questions:
Should we ALWAYS prefer now Bitmap.Config.HARDWARE over Bitmap.Config.RGB_565...?
For SDK >= 26, HARDWARE configuration can improve the low level bitmap drawing by preventing the need to copy the pixel data to the GPU every time the same bitmap returns to the screen. I guess it can prevent losing some frames when a bitmap is added to the screen.
The memory is not counted against your app, and my test confirmed this.
The native library docs say it will return null if memory allocation was unsuccessful.
Without the source code, it is not clear what the Java implementation (the API implementors) will do in this case - it might decide to throw OutOfMemoryException or fallback to a different type of allocation.
Update: Experiment reveals that no OutOfMemoryException is thrown. While the allocation is successful - everything works fine. Upon failed allocation - the emulator crashed (just gone). On other occasions I've got a weird NullPointerException when allocating Bitmap in app memory.
Due to the unpredictable stability, I would not recommend using this new API in production currently. At least not without extensive testing.
Does pixel data after decoding using this option actually NOT consume ANY heap memory and resides in GPU memory only? If so, this
seems to finally be a relief for OutOfMemoryException concern when
working with images.
Pixel data will be in shared memory (probably texture memory), but there will still be a small Bitmap object on the Java heap referencing it (so "ANY" is inaccurate).
Every vendor can decide to implement the actual allocation differently, it's not a public API they are bound to.
So OutOfMemoryException may still be an issue. I'm not sure how it can be handled correctly.
What quality compared to RGB_565/ARGB_8888?
The HARDWARE flag is not about quality, but about pixel storage location. Since the configuration flags cannot be OR-ed, I suppose that the default (ARGB_8888) is used for the decoding.
(Actually, the HARDWARE enum seems like a hack to me.)
Is speed of decoding itself the same/better/worse...?
The HARDWARE flag seems unrelated to decoding, so it should be the same as ARGB_8888.
What would happen if we exceed GPU memory?
My tests resulted in very bad things when memory ran out.
The emulator sometimes crashed horribly, and on other occasions I got unexpected, unrelated NPEs. No OutOfMemoryException occurred, and there was also no way to tell when the GPU memory was running out, so no way to foresee this.

GC is too active on BitmapFactory.decodeStream()

I have an app which displays grid of thumbnails. The app decodes the input stream to bitmap with BitmapFactory.decodeStream() for each thumbnail in order to display it.
I noticed that GC is super active when I scroll up/down fast enough, which makes the scrolling jerky.
I tried to isolate the problem and wrote a simple app where I do 10,000 decodeStream() calls in a loop, and noticed that even though there is enough memory, the GC still gets triggered constantly (even if I call bitmap.recycle() after each iteration).
Question: how to prevent GC from being too active while executing BitmapFactory.decodeStream()?
The general approach to dealing with memory in Android is the same as the mantra for environmental concerns: reduce, reuse, recycle. "Reduce" means "request less" (e.g., use inSampleSize on BitmapFactory.Options to only load in a downsampled image). "Recycle" means "make sure it can get garbage-collected ASAP".
But, before "recycle" comes "reuse". The Dalvik garbage collector is not a compacting or moving collector, so heap can become fragmented. If you already have an allocation that's the right size, reuse it, rather than let it be collected and then have to re-allocated it again. With bitmaps, that means use inBitmap on BitmapFactory.Options, or use an image-loading library that does this for you.
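A rough sketch of that reuse step, assuming every thumbnail decodes to the same dimensions and config, which keeps the inBitmap constraints simple; the ThumbnailDecoder class is purely illustrative.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.InputStream;

public final class ThumbnailDecoder {
    private Bitmap reusable; // previously decoded bitmap kept around for reuse

    public Bitmap decode(InputStream in) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inMutable = true;          // inBitmap needs a mutable target
        if (reusable != null) {
            options.inBitmap = reusable;   // reuse the old allocation instead of making a new one
        }
        Bitmap result = BitmapFactory.decodeStream(in, null, options);
        reusable = result;                 // keep it for the next call
        return result;
    }
}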
Will it give the same boost on Android >=5.0
Generally yes, though the exact impacts may vary somewhat.
or the optimizations made on L make the use of inBitmap not necessary (not worth added complexity)?
ART's garbage collector has a variety of improvements. The big one is that it is a compacting or moving collector, though only while your app is in the background, which will not help you much in your case.
However, ART also has a separate area of the heap for large byte arrays (or other large objects that do not have any pointers to other objects inside of them). ART is much more efficient about collecting these, and they will cause less heap fragmentation.
That being said, I'd still use inBitmap. If your minSdkVersion were 21+, you might try skipping inBitmap and see how it goes. But if your minSdkVersion is below 21, you need inBitmap anyway, and I'd just use that code across the board.

Will System.gc() work in Android (AndEngine)?

I am developing a game using AndEngine. In my J2ME game, when exiting, I set all objects to null,
i.e.:
Image img;
Sprite s1;
When exiting the application:
img = null;
s1 = null;
In Android, should I just call System.gc(), or do I need to set every Texture, TextureRegion, and Sprite to null when exiting the application?
I think you should not call System.gc() explicitly; the Android OS takes care of that.
"Calling System.gc() from your app is like providing an electricity connection from your home to light up your entire society's lights."
I mean, it slows down your app to clean up all of the system's garbage.
N_JOY.
Java garbage collection should take care of that. You don't need to do that.
However, I would close open connections, file handles, etc.
System.gc() is just a hint to the JVM that garbage collection is suggested; however, Java runs it at its own discretion.
In Android in the presence of a garbage collector, it is never good practice to manually call the GC. A GC is organized around heuristic algorithms which work best when left to their own devices. Calling the GC manually often decreases performance.
Occasionally, in some relatively rare situations, one may find that a particular GC gets it wrong, and a manual call to the GC may then improve things, performance-wise. This is because it is not really possible to implement a "perfect" GC which will manage memory optimally in all cases. Such situations are hard to predict and depend on many subtle implementation details. The "good practice" is to let the GC run by itself; a manual call to the GC is the exception, which should be envisioned only after an actual performance issue has been duly witnessed.
It's better to spend more effort on avoiding the unnecessary creation of objects (like creating objects inside loops); a quick sketch follows.
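For example, here is that advice applied to a hot path like a custom View's onDraw(); the ScoreView class and what it draws are purely illustrative.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public final class ScoreView extends View {
    // Allocated once and reused on every draw pass instead of inside onDraw().
    private final Paint textPaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public ScoreView(Context context) {
        super(context);
        textPaint.setColor(Color.WHITE);
        textPaint.setTextSize(32f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // No "new Paint()" here: onDraw can run 60 times per second,
        // and per-frame allocations are exactly what keeps the GC busy.
        canvas.drawText("Score", 16f, 48f, textPaint);
    }
}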
Look at the Question Garbage collector in Android

Java 3D Memory Leak

I have a large scene graph in Java 3D consisting of a Group which contains around 3500 Switches, each containing a Shape3D and a Group, the latter containing two more Shape3Ds.
The reason for this setup is that each of the 3500 Switches must be able to be either completely hidden or have either of its two children visible.
The problem occurs when I try to modify the geometry of the two Shape3Ds in the Group in a Switch. I have attempted the following:
Change the Group to a BranchGroup. When the geometry needs to be changed I detach the BranchGroup and create a new one, with updated geometry, to replace it. Leaks huge amounts of memory. For example, initial memory usage is around 100 MB; one geometry change later it is around 400 MB.
Make the Geometry editable. When the geometry needs to be changed I edit it directly. Leaks huge amounts of memory. Similar to above.
Make the Geometry editable, but by reference. When the geometry needs to be changed I call updateData(...) with an appropriate GeometryUpdater, which then does its thing. Leaks memory.
Recreate the entire scene graph. When the geometry needs to be changed, I detach the entire scene graph, recreate it from scratch using the updated geometry, and attach the new scene graph. Leaks memory.
I can't help but feel there is something basic about Java 3D memory management that I'm missing and that is common to all my attempts.
The speed of changing the geometry is not an issue, as it is a rare occurrence. The memory problem, however, is serious.
It's usually misleading to use tools that monitor memory at the operating system level to deduce memory leaks in a Java Virtual Machine. The JVM has its own ideas on when it is efficient to claim and reclaim memory.
If you could explain how you are observing the memory leak and why it is a serious problem then it might be easier to answer your question.
How are you measuring memory usage?
If you force a garbage collection and output the memory usage do you still see the leak?
Does the memory problem cause a java.lang.OutOfMemoryError ?
You might also be interested in this question: https://stackoverflow.com/questions/1716597/java-memory-leak-detection-tools
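A rough sketch of the diagnostic suggested above - force a collection and log heap usage before and after - keeping in mind that System.gc() is only a hint, so the numbers are indicative at best. The HeapProbe class name is just for the example.

public final class HeapProbe {
    public static void logHeapUsage(String label) {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();
        System.gc(); // request a collection; the JVM may or may not honor it fully
        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.printf("%s: used %,d bytes before GC, %,d after%n",
                label, usedBefore, usedAfter);
    }
}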
Attach to your program with visualvm (available as jvisualvm binary in the JDK), and use the profiler to get an idea where your memory goes.
