Java 3D Memory Leak

I have a large scene graph in Java 3D consisting of a Group which contains around 3500 Switches, each containing a Shape3D and a Group; the latter contains two more Shape3Ds.
The reason for this setup is that each of the 3500 Switches must be able to be completely hidden or to show either of its two children.
The problem occurs when I try to modify the geometry of the two Shape3Ds in the Group in a Switch. I have attempted the following:
Change the Group to a BranchGroup. When the geometry needs to be changed, I detach the BranchGroup and create a new one, with updated geometry, to replace it. Leaks huge amounts of memory. For example, the initial memory usage will be around 100 MB; one geometry change later it is around 400 MB.
Make the Geometry editable. When the geometry needs to be changed I edit it directly. Leaks huge amounts of memory. Similar to above.
Make the Geometry editable, but by reference. When the geometry needs to be changed I call updateData(...) with an appropriate GeometryUpdater, which then does its thing. Leaks memory.
Recreate the entire scene graph. When the geometry needs to be changed, I detach the entire scene graph, recreate it from scratch using the updated geometry, and attach the new scene graph. Leaks memory.
I can't help but feel there is something basic about Java 3D memory management that I'm missing and that is common to all my attempts.
The speed of changing the geometry is not an issue, as it is a rare occurrence. The memory problem, however, is serious.

It's usually misleading to use tools that monitor memory at the operating system level to deduce memory leaks in a Java Virtual Machine. The JVM has its own ideas on when it is efficient to claim and reclaim memory.
If you could explain how you are observing the memory leak and why it is a serious problem then it might be easier to answer your question.
How are you measuring memory usage?
If you force a garbage collection and output the memory usage, do you still see the leak? (See the sketch after these questions.)
Does the memory problem cause a java.lang.OutOfMemoryError?
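For reference, a minimal sketch of that kind of check (System.gc() is only a hint to the JVM, so treat the numbers as approximate):

// Rough heap-usage probe: request a GC, then log used heap.
Runtime rt = Runtime.getRuntime();
rt.gc();
long usedBytes = rt.totalMemory() - rt.freeMemory();
System.out.printf("Used heap after GC: %.1f MB%n", usedBytes / (1024.0 * 1024.0));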
You might also be interested in this question: https://stackoverflow.com/questions/1716597/java-memory-leak-detection-tools

Attach to your program with VisualVM (available as the jvisualvm binary in the JDK), and use its profiler to get an idea of where your memory goes.

Related

Bitmap.Config.HARDWARE vs Bitmap.Config.RGB_565

API 26 adds new option Bitmap.Config.HARDWARE:
Special configuration, when bitmap is stored only in graphic memory. Bitmaps in this configuration are always immutable. It is optimal for cases, when the only operation with the bitmap is to draw it on a screen.
Questions that aren't explained in the docs:
Should we ALWAYS prefer Bitmap.Config.HARDWARE over Bitmap.Config.RGB_565 now when speed is of top priority and quality and mutability are not (e.g. for thumbnails, etc.)?
Does pixel data after decoding using this option actually NOT consume ANY heap memory and reside in GPU memory only? If so, this finally seems to be a relief for OutOfMemoryError concerns when working with images.
What quality should we expect from this option compared to RGB_565, RGBA_F16 or ARGB_8888?
Is the speed of decoding itself the same/better/worse compared to decoding with RGB_565?
(Thanks @CommonsWare for pointing to it in the comments.) What would happen if we exceed GPU memory when decoding an image using this option? Would some exception be thrown (maybe the same OutOfMemoryError :)?
Documentation and public source code are not pushed yet to Google's git, so my research is based only on partial information, some experiments, and my own experience porting JVMs to various devices.
My test created a large mutable Bitmap and copied it into a new HARDWARE Bitmap at the click of a button, adding it to a bitmap list. I managed to create several instances of the large bitmaps before it crashed.
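A minimal sketch of that experiment (sizes and names are illustrative; requires API 26+):

import android.graphics.Bitmap;
import java.util.ArrayList;
import java.util.List;

List<Bitmap> kept = new ArrayList<>();

void onButtonClick() {
    // A large mutable bitmap in ordinary app memory...
    Bitmap big = Bitmap.createBitmap(4096, 4096, Bitmap.Config.ARGB_8888);
    // ...copied into an immutable HARDWARE-backed bitmap (mutable must be false).
    kept.add(big.copy(Bitmap.Config.HARDWARE, false));
}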
I was able to find this in the android-o-preview-4 git push:
+struct AHardwareBuffer;
+#ifdef EGL_EGLEXT_PROTOTYPES
+EGLAPI EGLClientBuffer eglGetNativeClientBufferANDROID (const struct AHardwareBuffer *buffer);
+#else
+typedef EGLClientBuffer (EGLAPIENTRYP PFNEGLGETNATIVECLIENTBUFFERANDROID) (const struct AHardwareBuffer *buffer);
Looking at the documentation of AHardwareBuffer: under the hood it creates an EGLClientBuffer backed by an ANativeWindowBuffer (native graphics buffer) in Android shared memory ("ashmem"). But the actual implementation may vary across hardware.
So as to the questions:
Should we ALWAYS prefer now Bitmap.Config.HARDWARE over Bitmap.Config.RGB_565...?
For SDK >= 26, the HARDWARE configuration can improve low-level bitmap drawing by removing the need to copy the pixel data to the GPU every time the same bitmap returns to the screen. I guess it can prevent dropping some frames when a bitmap is added to the screen.
The memory is not counted against your app, and my test confirmed this.
The native library docs say it will return null if memory allocation was unsuccessful.
Without the source code, it is not clear what the Java implementation (the API implementors) will do in this case - it might decide to throw an OutOfMemoryError or fall back to a different type of allocation.
Update: experiments reveal that no OutOfMemoryError is thrown. While the allocation is successful, everything works fine. Upon a failed allocation, the emulator crashed (just gone). On other occasions I got a weird NullPointerException when allocating a Bitmap in app memory.
Due to the unpredictable stability, I would not recommend using this new API in production currently. At least not without extensive testing.
Does pixel data after decoding using this option actually NOT consume ANY heap memory and reside in GPU memory only? If so, this finally seems to be a relief for OutOfMemoryError concerns when working with images.
Pixel data will be in shared memory (probably texture memory), but there will still be a small Bitmap object on the Java heap referencing it (so "ANY" is inaccurate).
Every vendor can decide to implement the actual allocation differently; it's not a public API they are bound to.
So an OutOfMemoryError may still be an issue. I'm not sure how it can be handled correctly.
What quality compared to RGB_565/ARGB_8888?
The HARDWARE flag is not about quality, but about pixel storage location. Since the configuration flags cannot be OR-ed together, I suppose the default (ARGB_8888) is used for decoding.
(Actually, the HARDWARE enum value seems like a hack to me.)
Is speed of decoding itself the same/better/worse...?
The HARDWARE flag seems unrelated to decoding itself, so decoding speed should be the same as with ARGB_8888.
What would happen if we exceed GPU memory?
My tests resulted in very bad things when memory runs out.
The emulator sometimes crashed horribly, and on other occasions I got unexpected, unrelated NPEs. No OutOfMemoryError occurred, and there was also no way to tell when GPU memory was running out, so no way to foresee this.

Fast JVM start / JVM persistence - starting the JVM with data from a heap dump

I am developing an in-memory data structure and would like to add persistence.
I am looking for a way to do this fast. I thought about dumping a heap dump once in a while.
Is there a way to load such a Java heap dump as-is into my memory? Or is it impossible?
Otherwise, are there other suggestions for fast writes and fast reads of the entire information?
(Serialization might take a lot of time.)
-----------------edited explanation:--------
Since my memory might be full of small pieces of information referencing each other, serialization may require me to inefficiently scan all of my memory. Reloading is also potentially problematic.
On the other hand, I could define a gigantic array, and every object I create I would put into that array. Links would be long numbers representing positions in the array. Then I could just dump this array as-is - and also reload it as-is.
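A minimal sketch of that idea (all names hypothetical): one flat primitive array with index-based links, written out as a single block with no object traversal.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class FlatStore {
    // One flat array; "links" are indices into it, not object references.
    final long[] cells;

    FlatStore(int size) { cells = new long[size]; }

    // Dump the whole structure as one contiguous block.
    void save(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.allocate(cells.length * Long.BYTES);
            buf.asLongBuffer().put(cells);
            ch.write(buf);
        }
    }
}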
There are even some JVMs, like JRockit, that utilize disk space, so maybe it is possible to dump as-is very quickly and to reload very quickly.
To prove my point: a Java heap dump contains all the information of the JVM, and it is produced quickly.
Sorry, but serializing 4 GB isn't even close to the few seconds a dump takes.
Also, memory is memory, and there are operating systems that allow you to dump RAM quickly.
https://superuser.com/questions/164960/how-do-i-dump-physical-memory-in-linux
When you think about it, this is quite a good strategy for persistent data structures. There has been quite a hype about in-memory databases in the last decade. But why settle for that? What if I want a Fibonacci heap to be "almost persistent"? That is, every 5 minutes I dump the information (quickly), and in case of an electrical outage I have a backup from 5 minutes ago.
-----------------end of edited explanation:--------
Thank you.
In general, there is no way to do this on HotSpot.
Objects in the heap have 2 words of header, the second of which points into permgen for the class metadata (known as a klassOop). You would have to dump all of permgen as well, which includes all the pointers to compiled code - so basically the entire process.
There would be no sane way to recover the heap state correctly.
It may be better to explain precisely what you want to build & why already-existing products don't do what you need.
Use serialization. Implement java.io.Serializable, add a serialVersionUID to all of your classes, and you can persist them to any OutputStream (file, network, whatever). Just create a starting object from which all your objects are reachable (even indirectly).
I don't think that serialization would take a long time; it's optimized code in the JVM.
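A minimal sketch of that approach (class and field names are illustrative):

import java.io.*;

class Node implements Serializable {
    private static final long serialVersionUID = 1L;
    long value;
    Node next; // referenced objects are serialized automatically
}

// Persist everything reachable from a single root object.
static void save(Node root, File file) throws IOException {
    try (ObjectOutputStream out =
             new ObjectOutputStream(new FileOutputStream(file))) {
        out.writeObject(root);
    }
}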
You can use jhat or jvisualvm to load your dump to analyze it. I don't know whether the dump file can be loaded and restarted again.

Quickly unload bitmaps from memory

I am creating an Android game in Java. I have quite a lot of images but don't need to use them all at once, so I have created a Resource Manager class which takes care of the Bitmaps that are in use. However, I have found it quite slow to clear the Bitmaps out of memory. I am currently doing something like this:
bitmap.recycle();
bitmap = null;
System.gc(); // also tried Runtime.getRuntime().gc()
Firstly, is there any quicker way to unload the bitmaps from memory? And is it possible to somehow check whether they actually ARE cleared, so I can make the loading screen depend on that as well?
There is no guarantee that the garbage collector will actually run when you call System.gc(); gc() expects certain preconditions, like resource hunger, to be met. So calling gc() is often just a waste of critical CPU cycles. As developers, we can make unnecessary objects collectable by nulling their references.
There are a couple of optimization techniques that can be helpful when creating a game:
Use textures. Here is an example.
Use Sprites and SpriteSheets (they give less overhead to the system than loading individual bitmaps). Many open source game engines use them; if you don't want to use an engine, study their sources to see how to build this from scratch.
Use the standard Android docs on Loading Large Bitmaps Efficiently and Caching Bitmaps for better bitmap usage. The idea is that when the user's device is not efficient enough to handle the amount of processing, and/or memory is too low for your game, you can always scale down the bitmaps (compromise quality for better response).
Always test your app for memory leaks. Here is a nice post that will help.
Keep in memory (don't release once used) items that are used several times in the same scene, because it takes a lot of time to load images into memory.
Hope this will help you.
As SylvainL said, System.gc and friends collect the full garbage and can be quite slow. The Java machine runs the GC periodically, and the period is fine-tuned depending on how much free memory is available at a given moment.
The best choice for me is to use some kind of bitmap pooling: keep a set of prefab Bitmap instances that you can acquire from and release to the pool, and manage Buffer instances in a cache applying LRU policies.
With proper fine-tuning you can get zero cost for creating and destroying Bitmap instances, as they are pooled, and the Buffer instances containing bitmap data will be dynamically loaded to and unloaded from memory depending on usage.
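A minimal sketch of that idea, assuming fixed dimensions and a fixed config (all names are illustrative):

import android.graphics.Bitmap;
import java.util.ArrayDeque;

class BitmapPool {
    private final ArrayDeque<Bitmap> free = new ArrayDeque<>();
    private final int width, height;

    BitmapPool(int width, int height) {
        this.width = width;
        this.height = height;
    }

    // Reuse a pooled bitmap if one is available; otherwise allocate a new one.
    Bitmap acquire() {
        Bitmap b = free.poll();
        return (b != null) ? b : Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    }

    // Return a bitmap to the pool instead of recycling it.
    void release(Bitmap b) {
        free.push(b);
    }
}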

Java Heap Hard Drive

I have been working on a Java program that generates fractal orbits for quite some time now. Much like photographs, the larger the image, the better it will be when scaled down. The program uses a 2D object (Point) array, which is written to when a point's value is calculated. That is to say, the Point is stored at its corresponding coordinates, i.e.:
Point p = new Point(25,30);
histogram[25][30] = p;
Of course, this is edited for simplicity. I could just write the point values to a CSV, and apply them to the raster later, but using similar methods has yielded undesirable results. I tried for quite some time because I enjoyed being able to make larger images with the space freed by not having this array. It just won't work. For clarity I'd like to add that the Point object also stores color data.
The next problem is the WriteableRaster, which will have the same dimensions as the array. Combined the two take up a great deal of memory. I have come to accept this, after trying to change the way it is done several times, each with lower quality results.
After trying to optimize for memory and time, I've come to the conclusion that I'm really limited by RAM. This is what I would like to change. I am aware of the -Xmx switch (set to 10GB). Is there any way to use Windows' virtual memory to store the raster and/or the array? I am well aware of the significant performance hit this will cause, but in lieu of lowering quality, there really doesn't seem to be much choice.
The OS is already turning hard drive space into RAM for you and every other process - no magic needed. And this will be more of a performance disaster than you think; it will be so slow as to effectively not work.
Are you looking for memory-mapped files?
http://docs.oracle.com/javase/6/docs/api/java/nio/MappedByteBuffer.html
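A minimal sketch of that approach, assuming a packed 4-bytes-per-pixel layout (file name and dimensions are illustrative):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// Back the raster with a memory-mapped file so the OS pages it in and out
// on demand instead of keeping it all on the Java heap. Note that a single
// mapping is limited to 2 GB; larger rasters need several mappings.
int width = 20000, height = 20000;
long size = (long) width * height * 4;
try (RandomAccessFile file = new RandomAccessFile("raster.bin", "rw");
     FileChannel channel = file.getChannel()) {
    MappedByteBuffer raster =
        channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
    // write one packed ARGB pixel at (x, y)
    int x = 25, y = 30;
    raster.putInt((y * width + x) * 4, 0xFF2040C0);
}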
If this is really to be done in memory, I would bet that you could dramatically lower your memory usage with some optimization. For example, your Point object is mostly overhead and not data. Count up the bytes needed for the reference, then for the Object overhead, compared to two ints.
You could reduce the overhead to nothing with big parallel int arrays for your x and y coordinates, as sketched below. Of course you'd have to encapsulate this for access in your code, but it could halve your memory usage for this data structure. Millions fewer objects also speed up GC runs.
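A minimal sketch of the parallel-array idea (capacity and names are illustrative):

// Primitive arrays replace millions of Point objects: just the data,
// with no per-object headers and no references to follow.
int capacity = 10_000_000;
int[] xs = new int[capacity];      // x coordinates
int[] ys = new int[capacity];      // y coordinates
int[] colors = new int[capacity];  // packed ARGB color data
int next = 0;

// instead of histogram[25][30] = new Point(25, 30):
xs[next] = 25;
ys[next] = 30;
colors[next] = 0xFF2040C0;
next++;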
Instead of putting a WritableRaster in memory, consider writing out the image file in some simple image format directly, yourself; BMP can be very simple (see the sketch below). Then perhaps use an external tool to convert it efficiently.
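A rough sketch of such a direct 24-bit BMP writer (PixelSource is a hypothetical stand-in for however the program computes a pixel's color):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

interface PixelSource { int argb(int x, int y); } // hypothetical accessor

static void writeBmp(String path, int width, int height, PixelSource px)
        throws IOException {
    int rowBytes = (width * 3 + 3) & ~3;  // BMP rows are padded to 4 bytes
    int dataSize = rowBytes * height;
    ByteBuffer h = ByteBuffer.allocate(54).order(ByteOrder.LITTLE_ENDIAN);
    h.put((byte) 'B').put((byte) 'M');
    h.putInt(54 + dataSize);              // total file size
    h.putInt(0);                          // reserved
    h.putInt(54);                         // offset of pixel data
    h.putInt(40);                         // BITMAPINFOHEADER size
    h.putInt(width).putInt(height);
    h.putShort((short) 1);                // color planes
    h.putShort((short) 24);               // bits per pixel
    h.putInt(0);                          // BI_RGB (uncompressed)
    h.putInt(dataSize);
    h.putInt(2835).putInt(2835);          // ~72 DPI, in pixels per metre
    h.putInt(0).putInt(0);                // palette fields, unused
    try (BufferedOutputStream out =
             new BufferedOutputStream(new FileOutputStream(path))) {
        out.write(h.array());
        byte[] row = new byte[rowBytes];
        for (int y = height - 1; y >= 0; y--) {    // rows are bottom-up
            for (int x = 0; x < width; x++) {
                int c = px.argb(x, y);
                row[x * 3]     = (byte) c;         // blue
                row[x * 3 + 1] = (byte) (c >> 8);  // green
                row[x * 3 + 2] = (byte) (c >> 16); // red
            }
            out.write(row);
        }
    }
}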
Try -XX:+UseCompressedOops to reduce object overhead too. Also try -XX:NewRatio=20 or higher to make the JVM reserve almost all its heap for long-lived objects. This can actually let you use more heap.
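For example (a hypothetical launch command):
java -Xmx10g -XX:+UseCompressedOops -XX:NewRatio=20 FractalRenderer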
It is not recommended to configure your JVM memory parameters (-Xmx) so as to make the operating system allocate from its swap space. Apparently the garbage collection mechanism needs random access to heap memory, and if it doesn't get it, the program will thrash for a long time and possibly lock up. Please check the answer already given to my question (last paragraph):
does large value for -Xmx postpone Garbage Collection

Java heap size usage

I've written a simple application that works with a database. My program has a table to show data from the database. When I try to expand the frame, the program fails with an OutOfMemoryError, but if I don't, it works well.
I start my program with the -Xmx4m parameter. Does it really need more than 4 megabytes to be in the expanded state?
Another question: when I run Java VisualVM, I see a saw-toothed chart of my program's heap usage, while other programs using the Java VM (such as NetBeans) have more rectilinear charts. Why is the heap usage of my program so unstable even when it does nothing (only waiting for the user to push a button)?
You may want to try setting this flag to generate a detailed heap dump that shows you exactly what is going on:
-XX:+HeapDumpOnOutOfMemoryError
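For example (a hypothetical launch command):
java -Xmx4m -XX:+HeapDumpOnOutOfMemoryError -jar yourapp.jar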
A typical "small" Java desktop application in 2011 is going to run with ~64-128MB. Unless you have a really pressing need, I would start by leaving it set to the default (i.e. no setting).
If you are trying to do something different (e.g. run this on an Android device), you are going to need to get very comfortable with profiling (and you should probably post with that tag).
Keep in mind that your 100-record cache (~12 bytes per record?) may be (probably is) double that if you are storing character data (Java uses UTF-16 internally).
Re: the "instability": the JVM handles memory usage for you and will perform garbage collection according to whatever algorithms it chooses (these have changed dramatically over the years). The graphing may just be an artifact of the tool and the sample period. Performance in a desktop app is affected by a huge number of factors.
As an example, we once had a huge memory "leak" that only showed up in one automated test but never in normal real-world usage. It turned out the test left the mouse hovering over a tooltip which included the name of the open file, which in turn held a set of references back to the entire (huge) project. Wiggling the mouse a few pixels got rid of the tooltip, which meant that the references all cleared up and the garbage collector took out the trash.
Moral of the story? You need to capture the exact heap dump at the time of the out-of-memory error and review it very carefully.
Why would you set your maximum heap size to 4 megabytes? Java is often memory intensive, so setting it at such a ridiculously low level is a recipe for disaster.
It also depends on how many objects are being created and destroyed by your code and by the underlying Swing (I am assuming) components, which create and destroy helper objects each time a component is redrawn.
Look at the CellRenderer code; it will show you why objects are created and destroyed so often, and why the garbage collector does such a wonderful job.
Try playing with the -Xmx setting and see how the charts flatten out. I would expect -Xmx64m or -Xmx128m to be suitable (although the amount of data coming out of your database will obviously be an important contributing factor).
You may need more than 4 MB for a GUI with an expanded window if you are using double buffering, which generates multiple images of the UI in order to show them quickly on the screen. Usually this is done assuming you have lots and lots of memory.
The sawtooth memory chart is due to objects being allocated and then garbage collected. This may happen on a repaint operation or another timer. Is there a timer in your code that checks some process or value? Or have you added code to an object's repaint or another recurring process?
I think 4 MB is too small for anything except a trivial program - for example, lots of GUI libraries (Swing included) need to allocate temporary working space for graphics that alone may exceed that amount.
If you want to avoid out of memory errors but also want to avoid over-allocating memory to the JVM, I'd recommend setting a large maximum heap size and a small initial heap size.
-Xmx (the maximum heap size) should generally be quite large, e.g. 256m.
-Xms (the initial heap size) can be much smaller; 4m should work - though remember that if the application needs more than this, there will be a temporary performance hit while the heap is resized.
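For example (a hypothetical launch command):
java -Xms4m -Xmx256m -jar yourapp.jar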
