Java heap size usage

I've written a simple application that works with a database. My program has a table that shows data from the database. When I try to expand the frame, the program fails with an OutOfMemoryError, but if I don't, it works fine.
I start my program with the -Xmx4m parameter. Does it really need more than 4 megabytes just to be in the expanded state?
Another question: when I run Java VisualVM, I see a sawtooth-shaped chart of my program's heap usage, while other programs running on the JVM (such as NetBeans) have much flatter charts. Why is my program's heap usage so unstable even though it does nothing (only waits for the user to push a button)?

You may want to try setting this flag, which generates a detailed heap dump showing you exactly what is going on:

    -XX:+HeapDumpOnOutOfMemoryError
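For example (the main class name here is just a placeholder for your own application):

    java -Xmx4m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=heapdump.hprof MyDatabaseApp

The resulting .hprof file can then be opened in a tool such as VisualVM or Eclipse MAT.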
A typical "small" Java desktop application in 2011 is going to run with ~64-128MB. Unless you have a really pressing need, I would start by leaving it set to the default (i.e. no setting).
If you are trying to do something different (e.g. run this on an Android device), you are going to need to get very comfortable with profiling (and you should probably post with that tag).
Keep in mind that your 100-record cache (~12 bytes) may well be (and probably is) double that if you are storing character data, since Java uses UTF-16 internally and each character takes two bytes.
RE: the "instability": the JVM is handling memory usage for you, and will perform garbage collection according to whatever algorithms it chooses (these have changed dramatically over the years). The graphing may just be an artifact of the tool and the sample period. Performance in a desktop app is affected by a huge number of factors.
As an example, we once had a huge memory "leak" that only showed up in one automated test but never in normal real-world usage. It turned out the test left the mouse hovering over a tooltip that included the name of the open file, which in turn held a set of references back to the entire (huge) project. Wiggling the mouse a few pixels got rid of the tooltip, which meant that the references all cleared up and the garbage collector took out the trash.
Moral of the story? You need to capture the exact heap dump at the time of the out-of-memory error and review it very carefully.

Why would you set your maximum heap size to 4 megabytes? Java is often memory intensive, so setting it at such a ridiculously low level is a recipe for disaster.
It also depends on how many objects are created and destroyed by your code and by the underlying Swing (I am assuming) components, which create and throw away the elements they use for drawing each time a component is redrawn.
Look at the CellRenderer code and you will see why objects are created and destroyed so often, and why the garbage collector does such a wonderful job.
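A minimal sketch of what that churn looks like (the counter is purely illustrative; JTable asks its renderer for a component once per visible cell on every repaint):

    import java.awt.Component;
    import javax.swing.JTable;
    import javax.swing.table.DefaultTableCellRenderer;

    // Illustrative only: counts how often JTable asks for a renderer component.
    public class CountingCellRenderer extends DefaultTableCellRenderer {
        private long calls = 0;

        @Override
        public Component getTableCellRendererComponent(JTable table, Object value,
                boolean isSelected, boolean hasFocus, int row, int column) {
            calls++; // invoked once per visible cell on every repaint
            if (calls % 10000 == 0) {
                System.out.println("renderer asked " + calls + " times");
            }
            return super.getTableCellRendererComponent(
                    table, value, isSelected, hasFocus, row, column);
        }
    }

Attach it with table.setDefaultRenderer(Object.class, new CountingCellRenderer()) and resize the frame: the call count climbs quickly, which is exactly the short-lived garbage the collector keeps cleaning up.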
Try playing with the -Xmx setting and see how the charts flatten out. I would expect -Xmx64m or -Xmx128m to be suitable (although the amount of data coming out of your database will obviously be an important contributing factor).

You may need more than 4MB for a GUI with an expanded frame if you are using double buffering, which generates multiple images of the UI so that they can be shown on screen quickly. This is usually done on the assumption that you have lots and lots of memory.
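As a rough worked example (the frame size here is illustrative): a single 32-bit ARGB back buffer for a 1920x1080 frame takes 1920 * 1080 * 4 bytes ≈ 8 MB, which on its own already exceeds a 4 MB heap cap (though depending on the JVM and graphics pipeline, some of that storage may live outside the Java heap).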
The sawtooth memory pattern comes from work being done and then garbage collected; this may be a repaint operation or some timer. Is there a timer in your code that checks some process or value for changes? Or have you added code to an object's repaint or some other recurring process?

I think 4MB is too small for anything except a trivial program - for example, lots of GUI libraries (Swing included) need to allocate temporary working space for graphics, and that alone may exceed this amount.
If you want to avoid out of memory errors but also want to avoid over-allocating memory to the JVM, I'd recommend setting a large maximum heap size and a small initial heap size.
- Xmx (the maximum heap size) should generally be quite large, e.g. 256MB.
- Xms (the initial heap size) can be much smaller; 4MB should work, though remember that if the application needs more than this, there will be a temporary performance hit while the heap is resized.
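To watch that growth happen, here is a minimal sketch (run it with, say, -Xms4m -Xmx256m) that prints the Runtime heap figures:

    public class HeapSizes {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // totalMemory() starts near -Xms and grows toward maxMemory() (-Xmx)
            System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
            System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");
            System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
        }
    }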

Related

Java memory leaks, Finalizer, and YourKit

I have a web server that is leaking memory. There is a sudden spike in old-gen usage, and then the server's latency spikes. When I took a heap dump and analyzed it using YourKit, it suggested that Finalizer objects were taking 100% of the memory. But I am not able to understand why the GC usage is high only at some points in time and why it does not happen regularly (it happens, say, once a week).
I also observed that there is a button in YourKit, "Calculate exact retained size"; when I use it, the Finalizer object does not show up in the updated list.
I am attaching screenshots from YourKit.
Also, is there a way to get a list of all the classes from which the Finalizer objects originate in the heap dump?
[Screenshot: before pressing "Calculate exact retained size"]
[Screenshot: after pressing "Calculate exact retained size"]
As per the YourKit docs (Class list page):
On opening the view, estimated retained sizes are shown instead of exact sizes, which cannot be immediately calculated. The exact sizes may be obtained by using "Calculate exact retained sizes" balloon above the "Retained Size" column. However, for most classes the estimation is very close to the exact value, so there is almost no need to run exact size calculation.
java.lang.ref.Finalizer entries are created for objects that override the Object.finalize() method, because those objects have to be collected in the background. Your best bet would be to inspect the heap dump and figure out which classes implement finalize(). Ideally, don't depend on this method, as its behavior tends to be unpredictable.
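As a minimal sketch of how such entries arise (the class name and payload size are illustrative): each instance of a class like this gets a corresponding java.lang.ref.Finalizer, and its memory stays retained until the finalizer thread has processed it.

    // Illustrative: overriding finalize() means each instance needs two GC
    // cycles and a java.lang.ref.Finalizer entry that retains it meanwhile.
    public class LeakyResource {
        private final byte[] payload = new byte[1024 * 1024]; // 1 MB per instance

        @Override
        protected void finalize() throws Throwable {
            try {
                // cleanup work; until the single finalizer thread runs this,
                // the instance and its payload remain reachable
            } finally {
                super.finalize();
            }
        }
    }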
If you have high GC usage only once a week, try to get the GC logs or record the JVM's behaviour with Flight Recorder. Perhaps you get a stop-the-world full GC cycle only once a week? It's impossible to tell without seeing the logs and the JVM configuration.
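For reference, GC logging can be switched on at startup (the flags differ by JDK version):

    java -Xlog:gc*:file=gc.log ...                             # JDK 9 and later
    java -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log ...    # JDK 8 and earlier

Correlating the timestamps in that log with the weekly latency spike will tell you whether a full GC cycle is the culprit.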

Java Heap Size Reduction

BACKGROUND
I recently wrote a Java application that consumes a specified number of MB. I am doing this purposefully to see how another Java application reacts to specific RAM loads (I am sure there are tools for this purpose, but this was the fastest way). The memory-consumer app is very simple: I enter the number of MB I want to consume, and it creates a vector with that many bytes. There is also a reset button that removes the elements of the vector and prompts for a new number of bytes.
QUESTION
I noticed that the heap size of the Java process never shrinks once the vector is cleared. I tried clear(), but the heap remains the same size. It seems the heap grows with the elements, but even after the elements are removed, the size stays the same. Is there a way in Java code to reduce the heap size? Is there a detail about the Java heap that I am missing? I feel this is an important question, because to keep a low memory footprint in any Java application I would need some way to keep the heap from growing, or at least from staying large for long stretches of time.
Try triggering garbage collection by making a call to System.gc().
This might help you: When does System.gc() do anything?
Note that calling the GC extensively is not recommended.
You should set the max heap size with the -Xmx option and watch your app's memory allocation. Also, use weak references for objects that have a short lifecycle, so the GC can remove them automatically.
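A minimal sketch of the weak-reference behaviour (the buffer size and names are illustrative):

    import java.lang.ref.WeakReference;

    // Illustrative: once the strong reference is dropped, the GC is free to
    // reclaim the buffer; only the WeakReference shell remains.
    public class WeakReferenceDemo {
        public static void main(String[] args) {
            byte[] buffer = new byte[50 * 1024 * 1024]; // 50 MB
            WeakReference<byte[]> ref = new WeakReference<>(buffer);

            buffer = null;  // drop the strong reference
            System.gc();    // a hint only; the JVM may ignore it

            System.out.println(ref.get() == null
                    ? "buffer was collected"
                    : "buffer still reachable");
        }
    }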

Reducing Java heap size

I have an application that uses a lot of memory diff'ing the contents of two potentially huge (100k+) directories. It makes sense to me that such an operation would use a lot of memory, but once my diff'ing operation is done, the heap remains the same size.
I basically have code that instantiates a class to store the filename, file size, path, and modification date for each file on the source and target. I save the additions, deletions, and updates in other arrays. I then clear() my source and target arrays (which could be 100k+ each by now), leaving relatively small additions, deletions, and updates arrays left.
After I clear() my target and source arrays, though, the memory usage (as visible via VisualVM and Windows Task Manager) doesn't drop. I'm not experienced enough with VisualVM (or any profiler, for that matter) to figure out what is taking up all this memory. VisualVM's heap dump lists the top few objects with a retained size of a few megabytes.
Anything to help point me in the right direction?
If the used heap goes down after a garbage collection, then it likely works as expected. Java increases its heap when it needs more memory, but does not free it -- it prefers to keep it in case the application needs more memory again. See "Is there a way to lower Java heap when not in use?" for why the heap is not reduced after the amount of used heap drops.
The VM grows or shrinks the heap based on the command-line parameters -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio. It will shrink the heap when the free percentage hits -XX:MaxHeapFreeRatio, whose default is 70.
There is a short discussion of this in Oracle's bug #6498735.
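If shrinking the heap matters to you, those ratios can be tightened on the command line, e.g. (the jar name is a placeholder, and note that not every collector honours these flags):

    java -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -jar dir-diff.jar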
Depending on your code, you might be creating memory leaks that the garbage collector simply can't free up.
I would suggest instrumenting your code to find potential memory leaks. Once leaks are ruled out or fixed, I would start looking at the code itself for possible improvements.
Note, for instance, how you free resources in try/catch/finally blocks. A finally block runs only when control actually leaves the try (and not at all if the thread hangs or the JVM is killed), so if you do your resource freeing there, delayed cleanup might be the answer.
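If the code targets Java 7 or later, try-with-resources sidesteps that concern by closing resources deterministically (a minimal sketch; the file-reading task is illustrative):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class ReadFirstLine {
        public static String firstLine(String path) throws IOException {
            // the reader is closed when the block exits, normally or exceptionally
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                return reader.readLine();
            }
        }
    }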
Nevertheless read up on the subject, for instance here: http://www.toptal.com/java/hunting-memory-leaks-in-java

How to automatically get retained memory while profiling in JProfiler offline mode with triggers

I have a large, memory-intensive, Java-based web application with many different features that will take me a long time to profile. Instead of manually profiling every feature in the entire application with different test data, I'm thinking a more time-efficient approach is to run JProfiler in offline mode and set up triggers to capture data for me. Testing teams will use the software normally, and over time JProfiler will capture the memory-intensive hotspots that we can use to make our application more efficient.
However, if I set up a trigger to just take a snapshot of the heap, then it will only give me the shallow memory -- the memory stats of each class, excluding any referenced objects. But it's not useful to me to know how much memory is consumed by instances of String or char[]. What I really want to know is the retained memory of my classes -- the shallow size of each instance plus everything it keeps alive. In other words, for each class in my software, I want to know how much memory would be freed if all its instances were garbage collected.
So basically I have a few questions:
1) Can JProfiler calculate the retained memory by just triggering snapshots without recording the memory? It seems that you have to actually perform the "record memory" action to calculate the retained memory, but I might be missing something.
2) If I have to record memory to calculate the retained memory information, then my next thought was to set up a trigger to record the information when the overall memory reached a certain threshold. But this raises two more questions: how will I set up a trigger to stop the recording and take a snapshot? And won't the recording miss the most important memory information since we're already past the threshold specified in the trigger?
Number 2 from above leads me to believe that the best way to profile is to trigger snapshots without any recording and calculating of retained memory -- so shallow memory only. However, if the shallow memory shows that most of my memory usage is in char[] (which it does), how can I get useful information out of this? How does this help me track down memory intensive areas of my application?
Any help is greatly appreciated
1) Can JProfiler calculate the retained memory by just triggering snapshots without recording the memory? It seems that you have to actually perform the "record memory" action to calculate the retained memory, but I might be missing something.
You actually need the "Trigger heap dump" action, then the heap walker will be available. The "Start recording" action with "Allocation data" enabled records data for the live views (where only the shallow size is available), but it also provides data for the "Allocations" view of the heap walker, so you can analyze where objects were allocated.
And won't the recording miss the most important memory information since we're already past the threshold specified in the trigger?
The heap dump captures the entire heap at the moment in time when the trigger is fired, so you should see all objects of interest.

How do I predict when I'm going to run out of memory

We have a Swing-based application that does complex processing on data. One of the prerequisites for our software is that any given column cannot have too many unique values. If a column is numeric, the user would need to discretize the data before they could use it in our tool.
Unfortunately, the algorithms we are using are combinatorially expensive in memory, depending on the number of unique values per column. Right now, with the wrong dataset, the app runs out of memory very quickly. Before performing one of these operations, we should be able to calculate roughly how much memory it will need. It would be nice if we could check how much memory the app is currently using, estimate whether the operation will run out of memory, and show an error message accordingly, rather than actually running out of memory. Using java.lang.Runtime, we can find the free memory, total memory, and max memory, but is this really helpful? Even if it appears we won't have enough heap space, it could be that if we wait 30 milliseconds the garbage collector will run and suddenly we have more than enough heap space for our operation. Is there any way to really predict whether we are going to run out of memory?
I have done something similar for a database application where the number of rows to be loaded could not be estimated. In the loop that processes the result set, I call a "MemoryWatcher" method that checks how much memory is free.
If the available memory goes under a certain threshold, the watcher forces a garbage collection and re-checks. If there still isn't enough memory, the watcher method signals this to the caller with an exception. The caller can gracefully recover from that exception - as opposed to an OutOfMemoryError, which sometimes leaves Swing totally unstable.
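A minimal sketch of that watcher idea (the threshold and exception type are illustrative, not the original poster's code):

    // Illustrative sketch of the "MemoryWatcher" approach described above.
    public class MemoryWatcher {
        private static final long THRESHOLD = 32L * 1024 * 1024; // 32 MB, arbitrary

        public static void checkFreeMemory() throws InsufficientMemoryException {
            if (estimatedFree() < THRESHOLD) {
                System.gc(); // a hint; the JVM may or may not collect now
                if (estimatedFree() < THRESHOLD) {
                    throw new InsufficientMemoryException(
                            "free heap below threshold: " + estimatedFree() + " bytes");
                }
            }
        }

        // max heap minus currently used heap = what could still be allocated
        private static long estimatedFree() {
            Runtime rt = Runtime.getRuntime();
            return rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
        }

        // a checked exception lets the caller recover gracefully
        public static class InsufficientMemoryException extends Exception {
            InsufficientMemoryException(String msg) { super(msg); }
        }
    }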
I don't have expertise in this, but I feel you could take an extra step of bytecode analysis using ASM to preempt bugs like null pointer exceptions or out-of-memory errors.
Unless you run your application with the maximum amount of memory you need from the outset (using -Xms) I don't think you can achieve anything useful, since other applications will be able to consume memory before your app needs it.
Have you considered using Soft/WeakReferences, and letting garbage collection reap objects that you could possibly recalculate/regenerate on the fly?
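A minimal sketch of that pattern (the cached value is illustrative): a soft reference lets the GC reclaim the data under memory pressure, and the code simply recomputes it next time.

    import java.lang.ref.SoftReference;

    // Illustrative: cache a recomputable result behind a SoftReference so the
    // GC may reclaim it under memory pressure instead of running out of heap.
    public class RecomputableCache {
        private SoftReference<int[]> cached = new SoftReference<>(null);

        public int[] get() {
            int[] value = cached.get();
            if (value == null) {           // never computed, or cleared by the GC
                value = recompute();
                cached = new SoftReference<>(value);
            }
            return value;
        }

        private int[] recompute() {
            return new int[10000];         // stand-in for expensive work
        }
    }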
