I implemented a heuristic in Java that solves an optimization problem for a given input. The heuristic can run for thousands of iterations and create lots of objects of varying complexity.
In order to test it, I have thousands of test inputs. My main method takes all inputs and sequentially starts the heuristic for each input in a loop. The results are stored in a separate file for each input.
When I run the program, it always stops after producing 218 or 219 output files and throws an OutOfMemoryError. On one run it said Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded and on another Exception in thread "main" java.lang.OutOfMemoryError: Java heap space.
My guess is that the program accumulates objects over time until it runs out of memory while computing the 218th or 219th input. Every input is computed in an independent run, so clearing the memory, i.e. getting rid of all created objects after the result for an input is stored and before the next input is parsed, should solve the problem. Is that correct? I have heard that using System.gc() is bad practice, but what else would you recommend in my case?
Edit:
To clarify what I want: instead of pressing "start" for each input, I implemented the loop to do that for me. However, it doesn't seem to behave the same way; it keeps old objects from previous runs. Can I change my Java code so that it behaves as if the program were started anew for each input? Or do I have to use a shell script that starts my heuristic for each input separately to make it work?
I have never used any JVM parameters and it seems to me like they don't really tackle the problem.
Resolved: There was in fact a memory leak that I discovered and fixed. No System.gc() needed. Thanks for helping anyways!
Yes, leave GC handling to the JVM. You should work through the steps below, in order:
Increase your heap size using the -Xmx parameter.
Set a proper GC algorithm and parameters. If you already have GC parameters, try to tune them.
Use the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<path for heap dump> options when you start your JVM, so you get a heap dump when your JVM runs out of memory. Using the heap dump, you can investigate memory leaks with a profiler like JProfiler, YourKit, or JVisualVM and then rectify them.
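For example (heap size, dump path, and class name are illustrative only):
java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heuristic.hprof HeuristicRunner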
First, when you start a JVM to run your tests, disable the GC overhead limit:
-XX:-UseGCOverheadLimit
I recommend this because you already know you're purposefully stressing the garbage collector, and you don't want it to warn you about GC overhead.
Second, take a look at how you can break up your tests better, in such a way that you're allowing objects from the previous test to be garbage collected. Don't keep active pointers to large structures of objects after each test completes.
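For instance, a minimal sketch of that kind of scoping, where Input, Heuristic, and writeResultFile are hypothetical stand-ins for your own classes:

for (Input input : inputs) {
    Result result = new Heuristic().solve(input); // all state is local to this iteration
    writeResultFile(input, result);               // hypothetical I/O helper
    // Nothing outside the loop retains 'result' or the heuristic's internals,
    // so everything from this iteration is eligible for GC before the next input.
}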
Third, if you still need more memory due to exceeding Java heap space, use:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
If you know you'll be using the memory anyhow, it works best to set both of these to the same value, which prevents thrashing during execution.
Don't bother explicitly calling System.gc(); it's ultimately pointless, because garbage collection will always happen when it's necessary.
Fourth, another JVM setting which could be useful in your circumstances:
-XX:NewRatio=<n> Ratio of old/new generation sizes. The default value is 2.
It's normally not recommended to set this lower than 2 (2/3 old, 1/3 new), but in your situation I might suggest you try setting this to 1 (1/2 old, 1/2 new).
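Putting the pieces together, an invocation might look like this (sizes and class name are illustrative, not a recommendation):
java -Xms2g -Xmx2g -XX:NewRatio=1 -XX:-UseGCOverheadLimit TestRunner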
See also GC overhead limit exceeded and check out Java HotSpot VM Options.
Give this a try:
http://javaandroidandrest.blogspot.de/2012/06/wait-for-jvm-garbage-collector.html
From the site:
Using functions like System.gc(); or Runtime.getRuntime().gc(); only suggest to the JVM that you want to run the garbage collector.
I found a way on the internet not to force the garbage collector but to wait until the garbage collector runs.
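The linked post isn't quoted in full here, but the usual form of that trick is to create a weak reference to a throwaway object and poll until the collector has cleared it. A minimal sketch, assuming that is what the post describes:

import java.lang.ref.WeakReference;

public class GcWaiter {
    public static void waitForGc() throws InterruptedException {
        Object sentinel = new Object();
        WeakReference<Object> ref = new WeakReference<Object>(sentinel);
        sentinel = null;                 // drop the only strong reference
        while (ref.get() != null) {      // cleared only once a GC cycle has actually run
            System.gc();                 // still just a hint to the JVM
            Thread.sleep(10);
        }
    }
}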
Related
I have an interesting problem with Java memory consumption. I have a native C++ application which invokes my Java application.
The application basically does some language translation, parses a few XMLs, and responds to network requests. Most of the application's state doesn't have to be retained, so it is full of methods that take String arguments and return String results.
This application takes more and more memory over time, eventually approaching 2 GB, which made us suspect that there is a leak somewhere in some Hashtable or static variables. On closer inspection we did not find any leaks. Comparing heap dumps over a period of time shows that the char[] and String objects take huge amounts of memory.
However, when we inspect these char[]s and Strings, we find that they have no GC roots, which means they shouldn't be the cause of a leak. Since they are part of the heap, they are simply waiting to be garbage collected. After using various tools (MAT, VisualVM, JHat) and scrolling through a lot of such objects, I used the trial version of YourKit. YourKit gives the answer straight away: 96% of the char[]s and Strings are unreachable, meaning that at the time the dump was taken, 96% of the Strings in the heap were waiting to be garbage collected.
I understand that the GC runs sparingly, but when you check via VisualVM you can actually see it running :-( So how come there are so many unused objects on the heap all the time?
IMO this application should never take more than 400-500 MB of memory, which is where it stays for the first 24 hours, but then it continues to grow the heap :-(
I am running Java 1.6.0-25.
Thanks for any help.
Java doesn't GC when you think it does/should :-) GC is too complex a topic to understand what is going on without spending a couple of weeks really digging into the details. So if you see behavior that you can't explain, that doesn't mean it's broken.
What you see can have several reasons:
You are loading a huge String into memory and keep a reference to a substring. That can keep the whole string in memory (Java doesn't always allocate a new char array for substrings - since Strings are immutable, it simply reuses the original char array and remembers the offset and length).
Nothing has triggered the GC so far. Some C++ developers believe GC is "evil" (anything that you don't understand must be evil, right?) so they configure Java not to run it unless absolutely necessary. This means the VM will eat memory until it hits the maximum, and then it will do one huge GC run.
Build 25 is already pretty old. Try to update to the latest Java build (33, I think). The GC is one of the best-tested parts of the VM, but it does have bugs. Maybe you hit one.
Unless you see an OutOfMemoryError, you don't have a leak. We have an application which eats all the heap you give it. If it gets 16 GB of RAM ("just to be safe"), it will use the whole 16 GB, because we cache what we can. You never see out-of-memory errors, because the cache shrinks as needed, but system admins routinely freak out: "oh god! oh god! It's running out of memory!" Panic! No, it's not. Unless Java tells you so, it's not running out of memory. It's just using it efficiently.
Tuning the GC with command-line options is one of the best ways to break it. Hundreds of people who know a lot more about the topic than you ever will have spent years making the GC efficient. You think you can do better? Good luck. -> Get rid of any "magic" command-line options and calls to System.gc() and your problem might go away.
Try decreasing the heap size to 500 megabytes and see if the software starts garbage collecting or dies. Java isn't too fussy about using the memory given to it. You might also research GC tuning options that will make the GC more prudent about cleaning stuff up.
String reallyLongString = "this is a really long String";
String tinyString = reallyLongString.substring(2, 3); // shares reallyLongString's backing char[]
reallyLongString = null; // the full char[] is still referenced through tinyString
The JVM can't collect the memory allocated for the long string in the above case, since there's a reference to part of it.
If you're doing stuff with Strings and you're suffering from memory issues, this might be the cause of your grief.
Use tinyString = new String(reallyLongString.substring(2, 3)); instead, which copies only the characters you need. (This substring sharing exists in older JVMs, including the Java 6 VM in question; it was removed in Java 7 update 6.)
There might not be a leak at all; a leak would be if the Strings were reachable. If you've allocated as much as 2 GB to the application, there is no reason for the garbage collector to start freeing up memory until you are approaching that limit. If you don't want it taking any more than 500 MB, then pass -Xmx512m when starting the JVM.
You could also try tuning the garbage collector to start cleaning up much earlier.
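For example, with the era-appropriate CMS collector, flags along these lines make collections start earlier (the threshold value is illustrative):
-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly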
First of all, stop worrying about those Strings and char[]s. In almost every Java application I have profiled, they are at the top of the memory consumer list, and in almost none of those applications were they the real problem.
If you have not received an OutOfMemoryError yet, but do worry that 2 GB is too much for your Java process, then try decreasing the -Xmx value you pass to it. If it runs fine with 512m or 1g, then the problem is solved, isn't it?
If you get an OOM, then one more option you can try is to use Plumbr with your Java process. It is a memory leak discovery tool, so it can help you if there really is a memory leak.
I have a very simple class with one integer variable. I just print the value of the variable 'i' to the screen, increment it, and make the thread sleep for 1 second. When I run a profiler against this method, the memory usage increases slowly even though I'm not creating any new variables. After executing this code for around 16 hours, I see that the memory usage has increased to 4 MB (initially 1 MB when I started the program). I'm a novice in Java. Could anyone please explain where I am going wrong, or why the memory usage is gradually increasing even when no new variables are created? Thanks in advance.
I'm using netbeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args)
{
    try
    {
        int i = 1;
        while (true)
        {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    }
    catch (InterruptedException ex)
    {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started: 1569852 bytes.
Memory usage after executing the loop for 16 hours: 4095829 bytes.
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as dictated by the particular strategy.
You can also try to call System.gc().
The objects are probably created in the two Java core functions being called.
It's due to the text displayed in the console, and (a little bit) the size of the integer. Each print appends characters to the console's buffer, so some 56,000 prints of a number (one per second for 16 hours) will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to take a snapshot of your application at the start and another one after some time. With VisualVM you can do this and compare the two snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and continues to work away happily then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
This does not seem to be a leak, as the profiler's graphs also show. The graph drops sharply at certain intervals, i.e. when GC is performed. It would have been a leak had the graph kept climbing steadily. The heap space still in use after that must be taken by Thread.sleep() and also (as mentioned in one of the answers above) by some of the profiler's own code.
You can try running VisualVM located at %JAVA_HOME%/bin and analyzing your application therein. It also gives you the option of performing GC at will and many more options.
I noted that the more features of VisualVM I used, the more memory was consumed (up to 10 MB). So this increase has to be coming from your profiler as well, but it still is not a leak, as the space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.
We have a Swing-based application that does complex processing on data. One of the prerequisites for our software is that any given column cannot have too many unique values. If a column is numeric, the user would need to discretize the data before they could use it in our tool.
Unfortunately, the algorithms we are using are combinatorially expensive in memory depending on the number of unique values per column. Right now, with the wrong dataset, the app runs out of memory very quickly. Before doing one of these operations that would run out of memory, we should be able to calculate roughly how much memory the operation will need. It would be nice if we could check how much memory the app is currently using, estimate whether the operation is going to run out of memory, and show an error message accordingly, rather than actually running out of memory. Using java.lang.Runtime, we can find the free memory, total memory, and max memory, but is this really helpful? Even if it appears we won't have enough heap space, it could be that if we wait 30 milliseconds the garbage collector will run and suddenly we have more than enough heap space for our operation. Is there any way to really predict if we are going to run out of memory?
I have done something similar for a database application where the number of rows to be loaded could not be estimated. So, in the loop that processes the result set, I call a "MemoryWatcher" method that checks how much memory is free.
If the available memory goes under a certain threshold, the watcher forces a garbage collection and re-checks. If there still isn't enough memory, the watcher method signals this to the caller with an exception. The caller can gracefully recover from that exception, as opposed to an OutOfMemoryError, which sometimes leaves Swing totally unstable.
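A minimal sketch of that watcher idea; the threshold and exception type are hypothetical, not the answerer's actual code:

public class MemoryWatcher {
    private static final long THRESHOLD = 10 * 1024 * 1024; // 10 MB, tune for your workload

    // Throws if, even after a forced GC, less than THRESHOLD bytes remain obtainable.
    public static void check() throws MemoryLowException {
        if (potentiallyFree() < THRESHOLD) {
            System.gc(); // only a hint, but worth trying before giving up
            if (potentiallyFree() < THRESHOLD) {
                throw new MemoryLowException(); // caller can recover gracefully
            }
        }
    }

    // Bytes still obtainable: unallocated headroom plus free space in the current heap.
    private static long potentiallyFree() {
        Runtime rt = Runtime.getRuntime();
        return rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    }
}

class MemoryLowException extends Exception {}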
I don't have expertise on this, but I feel you could take the extra step of bytecode analysis using ASM to preempt bugs like null pointer exceptions, out-of-memory errors, etc.
Unless you run your application with the maximum amount of memory you need from the outset (using -Xms) I don't think you can achieve anything useful, since other applications will be able to consume memory before your app needs it.
Have you considered using Soft/WeakReferences and letting garbage collection reap objects that you could possibly recalculate/regenerate on the fly?
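For example, a minimal sketch of a softly referenced, recomputable value (the names and the int[] payload are illustrative):

import java.lang.ref.SoftReference;

public class RecomputableCache {
    private SoftReference<int[]> cached;

    public int[] get() {
        int[] value = (cached != null) ? cached.get() : null;
        if (value == null) {                      // first call, or the GC reclaimed it
            value = recompute();                  // regenerate on the fly
            cached = new SoftReference<int[]>(value);
        }
        return value;
    }

    private int[] recompute() {
        return new int[1024];                     // stand-in for the real computation
    }
}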
I have this class and I'm testing insertions with different data distributions. I'm doing this in my code:
...
AVLTree tree = new AVLTree();
//insert the data from the first distribution
//get results
...
tree = new AVLTree();
//insert the data from the next distribution
//get results
...
I'm doing this for 3 distributions. Each one should be tested an average of 14 times, with the 2 lowest/highest values removed before computing the average. This should be done 2000 times, increasing the element count by 1000 each time. In other words, it goes 1000, 2000, 3000, ..., 2000000.
The problem is, I can only get as far as 100000. When I tried 200000, I ran out of heap space. I increased the available heap space with -Xmx in the command line to 1024m and it didn't even complete the tests with 200000. I tried 2048m and again, it wouldn't work.
What I'm thinking is that the garbage collector isn't getting rid of the old trees once I do tree = new AVLTree(). But why? I thought that the elements of the old trees would no longer be accessible and their memory would be cleaned up.
The garbage collector should have no trouble cleaning up your old tree objects, so I can only assume there's some other allocation that you're doing that's not being cleaned up.
Java has a good tool to watch the GC in progress (or not in your case), JVisualVM, which comes with the JDK.
Just run that and it will show you which objects are taking up the heap, and you can both trigger and see the progress of GC's. Then you can target those for pools so they can be re-used by you, saving the GC the work.
Also look into this option, which will probably stop the error that halts your program, so your program will finish; but it may take a long time, because your app will fill up the heap and then run very slowly.
-XX:-UseGCOverheadLimit
Which JVM are you using, and what JVM parameters have you used to configure GC?
Your explanation suggests there is a memory leak in your code. If you have a tool like JProfiler, use it to find out where the memory leak is.
There's no reason those trees shouldn't be collected, although I'd expect that before you ran out of memory you would see long pauses as the system ran a full GC. Since that's not what you're seeing, you could try running with flags like -XX:+PrintGC, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps to get more information on exactly what's going on, along with perhaps some sort of running count of roughly where you are. You could also explicitly tell the garbage collector to use a different garbage-collection algorithm.
However, it still seems unlikely to me. What other code is running? Is it possible there's something in the AVLTree class itself that's keeping its instances from being GC'd? What about logging from finalize() on that class to ensure that (some of them, at least) are collectible (e.g. make a few and manually call System.gc())?
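A quick sketch of that finalize-logging diagnostic, added to your own AVLTree class (fine as a throwaway check, though it shouldn't ship in production code):

public class AVLTree {
    @Override
    protected void finalize() throws Throwable {
        try {
            System.out.println("AVLTree instance collected"); // proves instances are collectible
        } finally {
            super.finalize();
        }
    }
}

// In a test: create a few trees without keeping references, then call
// System.gc(); if the log lines appear, the trees themselves are collectible.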
GC params here, and a nice reference on garbage collection from Sun here that's well worth reading.
The Java garbage collector isn't guaranteed to collect an object as soon as it becomes unreachable. So if you're writing code that creates and discards a lot of objects, it's possible to exhaust all of the heap space before the GC has a chance to run. Alternatively, Pax's suggestion that there is a memory leak in your code is also a strong possibility.
If you are only doing benchmarking, then you may want to call System.gc() between tests, or even re-run your program for each distribution.
We noticed this in a server product. When making a lot of tiny objects that quickly get thrown away, the garbage collector can't keep up. The problem is more pronounced when the tiny objects have pointers to larger objects (e.g. an object that points to a large char[]). The GC doesn't seem to realize that if it frees up the tiny object, it can then free the larger object. Even when calling System.gc() directly, this was still a huge problem (both in 1.5 and 1.6 VMs)!
What we ended up doing and what I recommend to you is to maintain a pool of objects. When your object is no longer needed, throw it into the pool. When you need a new object, grab one from the pool or allocate a new one if the pool is empty. This will also save a small amount of time over pure allocation because Java doesn't have to clear (bzero) the object.
If you're worried about the pool getting too large (and thus wasting memory), you can either remove an arbitrary number of objects from the pool on a regular basis, or use weak references (for example, using java.util.WeakHashMap). One of the advantages of using a pool is that you can track the allocation frequency and totals, and you can adjust things accordingly.
We're using pools of char[] and byte[], and we maintain separate "bins" of sizes in the pool (for example, we always allocate arrays whose sizes are powers of two). Our product does a lot of string building, and using pools showed significant performance improvements.
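A stripped-down sketch of one such bin (the real pool described above also keeps multiple sizes and tracks allocation statistics):

import java.util.ArrayDeque;
import java.util.Deque;

// One "bin": a pool of char[]s of a single fixed size.
public class CharArrayPool {
    private final Deque<char[]> pool = new ArrayDeque<char[]>();
    private final int arraySize;

    public CharArrayPool(int arraySize) {
        this.arraySize = arraySize;
    }

    // Reuse a pooled array if one is available, otherwise allocate a fresh one.
    public char[] acquire() {
        char[] a = pool.pollFirst();
        return (a != null) ? a : new char[arraySize];
    }

    // Return an array to the pool once the caller is done with it.
    public void release(char[] a) {
        if (a.length == arraySize) {
            pool.addFirst(a);
        }
    }
}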
Note: In general, the GC does a fine job. We just noticed that with small objects that point to larger structures, the GC doesn't seem to clean up the objects fast enough especially when the VM is under CPU load. Also, System.gc() is just a hint to help schedule the finalizer thread to do more work. Calling it too frequently causes a significant performance hit.
Given that you're just doing this for testing purposes, it might just be good housekeeping to invoke the garbage collector directly using System.gc() (thus forcing it to make a pass). It won't help you if there is a memory leak, but if there isn't, it might buy you back enough memory to get through your test.
I am running a JBoss server and deploying my own classes. Now I have started doing some operations on my application, and I would like to know the memory used by my application before and after performing the operations. Please support me in this regard.
By using MemoryMXBean (retrieved by calling ManagementFactory.getMemoryMXBean()) as well as Runtime.getRuntime()'s methods .totalMemory(), .maxMemory(), and .freeMemory().
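A minimal sketch reading both sources; the class name and label are illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class MemorySnapshot {
    // Print heap usage from the MXBean and from Runtime; call before and after the operation.
    public static void print(String label) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        Runtime rt = Runtime.getRuntime();
        System.out.printf("%s: mxbean used=%d, runtime used=%d, max=%d bytes%n",
                label, heap.getUsed(), rt.totalMemory() - rt.freeMemory(), rt.maxMemory());
    }
}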
Note that this is not an exact science: while creating a new object, other temporary ones may be allocated, which will not give you an accurate measurement. As we know, Java garbage collection is not guaranteed to run on request, so you can't necessarily rely on it to eliminate dead objects before you measure.
If you research, you'll see that most code that attempts to do these measurements will have loops of Runtime.gc() calls and sleeps etc to try and ensure that the measurement is accurate. And this will only work on certain JVM implementations...
On an app server/deployed application, you will likely only get gross measurements/usage changes as the heap is allocated and the gc fires, but it should be enough. [I'm presuming that you wouldn't implement gc()'s and sleeps in production code :)]
Get the free memory before doing the operation with Runtime.getRuntime().freeMemory(), then again after finishing the operation; the difference is approximately the memory used by your operation.
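A sketch of that measurement, where doOperation() stands in for whatever you want to measure:

long before = Runtime.getRuntime().freeMemory();
doOperation();                                    // hypothetical operation under test
long used = before - Runtime.getRuntime().freeMemory();
System.out.println("Approximate bytes used: " + used);
// Caveat: if a GC runs in between, or the heap grows (totalMemory() changes),
// this number can be misleading, as noted below.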
You may find the results you get are inconclusive. The GC will clean up used memory at random points in the background, so you might find that if you run the same operations many times, you get different results. You can even appear to have more memory free after performing an operation.