For a school project I have to program different kinds of algorithms. The problem is: I have a working algorithm, but I have to run it several times, and after a while it gives me the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
I know what the error means, but is it possible to have Java reclaim unused space while the program is running? I know the program holds a lot of space that is no longer used at some point: it sets a lot of objects to null and creates a lot of new ones during the run, and because of this it runs out of memory.
So, concretely: is it possible to have the JVM free the space belonging to references that have been set to null, or free some space while the program is running? I know I can give the JVM more space, but sooner or later I will run into the same problem.
If you need my IDE (in case it is IDE specific) it is Eclipse.
Please google 'garbage collection'. Java is always looking to reuse space from objects that you aren't using. If you run out of memory, you either need to use -Xmx to configure a larger heap, or you have to fix your code to retain fewer objects. You may find that a profiler like jvisualvm will help you find wasteful memory usage.
If you're using an Oracle/Sun JVM, I'd recommend that you download Visual VM 1.3.3, install all the plugins, and start it up. It'll show you what's happening in every heap generation, threads, CPU, objects, etc. It can tell you which class is taking up the most heap space.
You'll figure it out quickly if you have data.
I would use a memory profiler to determine where the memory is being used. Setting references to null rarely helps. The GC will always run and free as much space as possible before you get an OOME.
Q: "is it possible to let the JVM free some space that is set to null? Or free some space in the time the program is running?"
A: Yes, a call to System.gc() will request this, but it is unlikely to solve your problem, since the JVM already does this automatically from time to time. You need to find the object that is using all the memory and fix it in your code. It is likely a list that is never cleared and only ever added to.
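For illustration, here is a minimal sketch of the kind of ever-growing collection being described; the class and field names are made up, and the point is only that nothing is ever removed, so every element stays reachable and the GC can never reclaim it:

import java.util.ArrayList;
import java.util.List;

class AccumulatingRuns {
    // Everything added here stays strongly reachable, so the garbage collector
    // can never reclaim it, no matter how often it runs.
    private static final List<int[]> allResults = new ArrayList<int[]>();

    static void runOnce() {
        int[] result = new int[1024];   // work produced by one run of the algorithm
        allResults.add(result);         // leak: the list is only ever added to
        // Clearing the list between runs, or not keeping the results at all,
        // would let the garbage collector reclaim this memory.
    }
}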
I actually encountered this issue while implementing a particularly complicated algorithm that required a massive data structure. I had to come and post a question on this website. It turned out I had to use a completely different type of object altogether in order to avoid the memory error.
Here is that question.
GC will reclaim 'unused' memory automatically, so yes, it is possible to free some space at runtime, but it's crucial to understand what counts as reclaimable.
Basically, an object's space can be reclaimed (garbage collected) if the object itself is unreachable, i.e. there are no references to it. When you say 'setting space to null', you are most likely removing just one link (reference) to the object by setting that reference to null. This will allow the object to be reclaimed only if it was the only reference:
Object first = new Object();       // first object
Object second = new Object();      // second object
Object secondPrim = second;        // second reference to the second object

first = null;
// the first object will be reclaimed (sooner or later)

second = null;
// there is still a reference to the second object via secondPrim,
// so the second object will not be reclaimed
Hope this helps. As for checking what exactly is going on, I would second the advice to profile your program.
I have a memory leak in Java in which I have 9600 ImapClients in my heap dump and only 7800 MonitoringTasks. This is a problem since every ImapClient should be owned by a MonitoringTask, so those extra 1800 ImapClients are leaked.
One problem is I can't isolate them in the heap dump and see what's keeping them alive. So far I've only been able to pinpoint them by using external evidence to guess at which ImapClients are dangling. I'm learning OQL which I believe can solve this but it's coming slowly, and it'll take a while before I can understand how to perform something recursive like this in a new query language.
Determining a leak exists is difficult, so here is my full situation:
This process was spewing OOMEs a week ago. I thought I fixed it, and I'm trying to verify whether my fix worked without waiting another full week to see if it spews OOMEs again.
This task creates 7000-9000 ImapClients on start, then under normal operation connects and disconnects very few of them.
I checked another process running older, pre-OOME code, and it showed numbers of 9000/9100 instead of 7800/9600. I do not know why the old code would differ from the new code, but this is evidence of a leak.
The point of this question is to determine whether there is a leak. There is a business rule that every ImapClient should be referenced by a MonitoringTask. If the query I am asking about comes up empty, there is no leak. If it comes up with objects then, together with this business rule, that is not only evidence of a leak but conclusive proof of one.
Your expectations are incorrect; there is no actual evidence of any leaks occurring.
The garbage collector's goal is to free space when it is needed and only then; anything else is a waste of resources. There is absolutely no benefit in attempting to keep as much free space as possible available all the time, only downsides.
Just because something is a candidate for garbage collection doesn't mean it will ever actually be collected, and there is no way to force garbage collection either.
I don't see any mention of OutOfMemoryError anywhere.
What you are concerned about, you can't control, not directly anyway.
What you should focus on is what is in your control, which is making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for those algorithms.
Java heap memory isn't like manually managed memory in other languages; those rules don't apply.
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely in Java, memory isn't consumed by one single uber-object that is leaking (a dangling reference in other environments).
Intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may decide that there are candidates, but conclude that, because there is still plenty of memory to be had, it would be too expensive to flush them out at that point in time, so it waits until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic; if you are doing degenerate things, it will not work optimally. There is lots of documentation on the internet about the garbage collector settings for all versions of the JVM.
These unreferenced objects may simply not have been around long enough for the garbage collector to decide to expunge them from memory, or there could be references to them held by some other object (a List, for example) that you don't realize still points to them. This is what is most commonly referred to as a leak in Java; more specifically, a reference leak.
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code, the garbage collection system just might not be getting put under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, not unless your program is crashing with OutOfMemoryError. This isn't C, C++, Objective-C, or any other manual memory management language / runtime. You don't get to decide what is in memory or not at the detail level you are expecting you should be able to.
Check your code for finalizers, especially anything relating to ImapClient.
It could be that your MonitoringTasks are being collected easily, whereas your ImapClients are finalized, and therefore stay on the heap (though dead) until the finalizer thread runs.
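As a minimal sketch of the pattern this answer warns about (the class body is hypothetical, not the real ImapClient): an object that overrides finalize() cannot be reclaimed on the first collection cycle; it must first be queued for, and processed by, the finalizer thread.

class FinalizableClientSketch {
    private final byte[] buffer = new byte[64 * 1024];   // keeps real memory alive

    @Override
    protected void finalize() throws Throwable {
        try {
            // Cleanup work would go here. Until the finalizer thread has run this,
            // the instance (and its buffer) stays on the heap even though it is
            // otherwise unreachable.
        } finally {
            super.finalize();
        }
    }
}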
The obvious answer is to add a WeakHashMap<X, Object> (and Y) to your code -- one tracking all instances of X and another tracking all instances of Y (make them static members of the class and insert every object into the map in the constructor with a null 'value'). Then you can at any time iterate over these maps to find all live instances of X and Y and see which Xs are not referenced by Ys. You might want to trigger a full GC first, to ignore objects that are dead and not yet collected.
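A rough sketch of that idea, with hypothetical ImapClient and MonitoringTask classes standing in for X and Y (the field and method names are assumptions, not the real code):

import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.WeakHashMap;

class ImapClient {
    // Weak keys: an entry vanishes once its instance is garbage collected,
    // so the map tracks live instances without keeping them alive.
    static final Map<ImapClient, Object> LIVE =
            Collections.synchronizedMap(new WeakHashMap<ImapClient, Object>());

    ImapClient() {
        LIVE.put(this, null);              // register every instance on construction
    }
}

class MonitoringTask {
    static final Map<MonitoringTask, Object> LIVE =
            Collections.synchronizedMap(new WeakHashMap<MonitoringTask, Object>());

    private final ImapClient client;

    MonitoringTask(ImapClient client) {
        this.client = client;
        LIVE.put(this, null);
    }

    ImapClient client() {
        return client;
    }
}

class LeakCheck {
    // Lists ImapClients that no live MonitoringTask refers to.
    static Set<ImapClient> orphans() {
        System.gc();                       // best effort: drop dead-but-uncollected objects
        Set<ImapClient> orphans = new HashSet<ImapClient>(ImapClient.LIVE.keySet());
        for (MonitoringTask task : MonitoringTask.LIVE.keySet()) {
            orphans.remove(task.client());
        }
        return orphans;
    }
}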
I have an interesting problem with Java memory consumption. I have a native C++ application which invokes my Java application.
The application basically does some language translation, parses a few XMLs, and responds to network requests. Most of the application's state doesn't have to be retained, so it is full of methods that take String arguments and return String results.
This application takes more and more memory over time, and there comes a point where it takes close to 2 GB of memory, which made us suspect there is a leak somewhere in a Hashtable or static variable. On closer inspection we did not find any leaks. Comparing heap dumps over a period of time shows that the char[] and String objects take a huge amount of memory.
However, when we inspect these char[]s and Strings, we find that they do not have GC roots, which means they shouldn't be the cause of the leak. Since they are part of the heap, it means they are waiting to be garbage collected. After using various tools (MAT/VisualVM/JHat) and scrolling through a lot of such objects, I used the trial version of YourKit. YourKit reports straightaway that 96% of the char[]s and Strings are unreachable, which means that at the time the dump was taken, 96% of the Strings in the heap were waiting to be garbage collected.
I understand that the GC runs sparingly, but when you check via VisualVM you can actually see it running :-( So how come there are so many unused objects on the heap all the time?
IMO this application should never take more than 400-500 MB of memory, which is where it stays for the first 24 hours, but then it continues to grow the heap :-(
I am running Java 1.6.0-25.
Thanks for any help.
Java doesn't GC when you think it does/should :-) GC is too complex a topic to understand what is going on without spending a couple of weeks really digging into the details. So if you see behavior that you can't explain, that doesn't mean it's broken.
What you see can have several reasons:
You are loading a huge String into memory and keep a reference to a substring. That can keep the whole string in memory (Java doesn't always allocate a new char array for substrings - since Strings are immutable, it simply reuses the original char array and remembers the offset and length).
Nothing triggered the GC so far. Some C++ developers believe GC is "evil" (anything that you don't understand must be evil, right?) so they configure Java not to run it unless absolutely necessary. This means the VM will eat memory until it hits the maximum and then, it will do one huge GC run.
build 25 is already pretty old. Try to update to the latest Java build (33, I think). The GC is one of the best tested parts of the VM but it does have bugs. Maybe you hit one.
Unless you see an OutOfMemoryError, you don't have a leak. We have an application which eats all the heap you give it. If it gets 16GB of RAM ("just to be safe"), it will use the whole 16GB, because we cache what we can. You never see out-of-memory errors, because the cache will shrink as needed, but system admins routinely freak out: "Oh god! Oh god! It's running out of memory!" Panic! No, it's not. Unless Java tells you so, it's not running out of memory; it's just using it efficiently.
Tuning the GC with command line options is one of the best ways to break it. Hundreds of people who know a lot more about the topic than you ever will have spent years making the GC efficient. You think you can do better? Good luck. -> Get rid of any "magic" command line options and calls to System.gc() and your problem might go away.
Try decreasing the heap size to 500 megabytes and see if the software starts garbage collecting or dies. Java isn't too fussy about using the memory given to it. You might also research GC tuning options which will make the GC more prudent about cleaning stuff up.
String reallyLongString = "this is a really long String";
String tinyString = reallyLongString.substring(2, 3);
reallyLongString = null;
The JVM can't collect the memory allocated for the long string in the above case, since there's a reference to part of it.
If you're doing stuff with Strings and you're suffering from memory issues, this might be the cause of your grief.
Use tinyString = new String(reallyLongString.substring(2, 3)); instead.
There might not be a leak at all; a leak would be if the Strings were reachable. If you've allocated as much as 2GB to the application, there is no reason for the garbage collector to start freeing up memory until you are approaching that limit. If you don't want it taking any more than 500MB, then pass -Xmx512m when starting the JVM.
You could also try tuning the garbage collector to start cleaning up much earlier.
First of all, stop worrying about those Strings and char[]s. In almost every Java application I have profiled, they are at the top of the memory consumer list, and in almost none of those applications were they the real problem.
If you have not received an OutOfMemoryError yet, but worry that 2GB is too much for your Java process, then try decreasing the Xmx value you pass to it. If it runs well with 512m or 1g, then problem solved, isn't it?
If you do get an OOM, then one more option you can try is to use Plumbr with your Java process. It is a memory leak discovery tool, so it can help you if there really is a memory leak.
I'm creating a service that will run constantly; each day at a specified time it will run the main body of the program.
Essentially:
while (true) {
    run();
    Thread.sleep(day);   // day is the sleep interval in milliseconds; sleep() can throw InterruptedException
}
After a while, I'm getting OutOfMemoryError: Java heap space.
After reading about this a little, I'm thinking it's because any objects created inside the run() method will never be garbage collected.
Therefore I have done something like:
public void run() {
    Object a = new Object();
    a.doSomething();
    a = null; // wasn't here before
}
My question is, will this solve my problem? I'm under the impression that once a reference is set to null, the object it previously referenced will be garbage collected? Also, is this a good idea, or should I look at doing something else?
Thanks
Adding a = null will almost certainly be insufficient to fix the problem (since a is about to go out of scope anyway).
My advice would be to use a memory profiler to pinpoint what's leaking and where.
I personally use YourKit. It's very good, but costs money (you can get a free evaluation).
Another recently-released tool is Plumbr. I am yet to try it, but the blurb says:
Try out our Java agent for timely discovery of memory leaks. We'll tell you what is leaking, where the leak originates from and where the leaked objects currently reside - well before the OutOfMemoryError!
That might indeed help; in some circumstances the GC algorithm needs a little help, but it isn't guaranteed to solve your problem, merely to delay it.
My advice:
Simulate the same behavior with a lower time period, so you can force the error to happen.
Run it with a profiler and see where all that memory is going, and work from there.
Your impression is incorrect. Objects created inside the run() method will be garbage collected provided they 1) go out of scope, and 2) have released any native or remote system resources they are using.
What functionality are you actually performing inside your run() method call? Are you reading files, making database calls, writing to sockets? Without knowing the details it's very difficult to provide a better suggestion.
No, you don't need to set the variable to null. The VM knows that you have exited that scope and that the variable a no longer exists, so the object is eligible for garbage collection if it had no other references.
The error is somewhere else.
Whether setting references to null helps depends on whether your object would otherwise stay in scope during a long, time-consuming process; and even though it clears the reference, you cannot guarantee when the object will be garbage collected.
You need to check whether your objects are being held in a long-lived scope somewhere in your code.
Found a nice explanation of setting references to null : Does setting Java objects to null do anything anymore?
In order to track down your issue, you need to profile your application.
Searching SO gave so many pointers on Garbage Collection that I have decided to just place the search string here:
https://stackoverflow.com/search?q=Java+Garbage+collection+and+setting+references+to+null
http://java.sun.com/docs/books/performance/1st_edition/html/JPAppGC.fm.html
Objects referenced only by local variables become eligible for collection once the method returns, so you don't need to write obj = null; (the object itself lives on the heap either way).
You should get a memory dump and analyze it using tools like JConsole or JVisualVM.
The scope of the run() method is left before the Thread.sleep(day); and thus any local variables inside that method are gone. After that, a no longer exists, and the object referenced by that variable will be eligible for garbage collection provided there's no other reference to it.
Analyzing a memory dump should allow you to find any references to those objects if they still exist.
It might as well not be those objects but others that are kept alive and which eat up the memory. That depends on what you're actually doing and might be hard to analyze here. Thus look out for huge object graphs in terms of memory usage.
For instance, we had a problem with database connections that were created frequently (XA recovery mechanism) and we thought they'd be destroyed once the method scope is left. However, the server put those connections into a static list and never cleared it and thus we ended up with no memory really soon. What helped us identify that case was analyzing a memory dump. :)
In the short term, a pragmatic approach to keeping your application stable is to exit the JVM after each execution. Use a batch scheduler (e.g. cron on *nix, at on Windows) to execute your application just once every day. Any memory leaks will be cleaned up for sure when the JVM exits. However, you may have to be careful you're not leaving database connections open, etc.
This will give you time to troubleshoot and fix the underlying memory leak issues while keeping your production code running and not requiring support staff to restart servers, etc.
I'm assuming you're not running out of memory on a single execution.
Is there a way to check whether an object can be reclaimed by the garbage collector?
Somewhere in my code I've got a reference to an object:
MyObject mo = myObject;
Then, via the Eclipse debugger, I get the object's memory location. Afterwards, I set the reference to null:
mo = null;
Is there any way to check whether the previously referenced object is now eligible for garbage collection, or whether there is another reference to it somewhere else?
Thanks a lot,
Stefan
You cannot do this at runtime with an arbitrary object, and in fact it's not fully possible to do this deterministically. However, there are two options that may be suitable depending on your needs:
Take a heap dump after you set the reference to null, and then load it up in a heap analyzer tool such as jhat or a profiler that supports this. These tools should let you traverse the path from the GC roots and thus check if your object is still reachable or not.
Wrap the object in a PhantomReference with a given ReferenceQueue. When the reference is enqueued, you know that the object has been garbage collected. (Unfortunately, if the reference has not been enqueued, it could be because the object is still reachable, or it could be because the GC just hasn't inspected the object yet. As with all GC-related questions, garbage collection is not a deterministic process!)
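A minimal sketch of that PhantomReference approach; the watched object, the timeout, and the printed messages are just illustrative assumptions:

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class CollectionProbe {
    public static void main(String[] args) throws InterruptedException {
        Object mo = new Object();                        // the object we want to watch
        ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
        PhantomReference<Object> ref = new PhantomReference<Object>(mo, queue);

        mo = null;                                       // drop the only strong reference
        System.gc();                                     // request (not force) a collection

        // If the object has been collected, its phantom reference shows up on the queue.
        Reference<?> collected = queue.remove(2000);     // wait up to 2 seconds
        System.out.println(collected == ref
                ? "object was garbage collected"
                : "object not collected yet, or still reachable");
    }
}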
On the whole though, I agree that the best option is to be aware of memory leak issues and design your application to avoid them. If you do have a memory leak it should be obvious enough, and you can then focus your energies on finding the problem (again by dumping and analysing the heap for objects that are incorrectly reachable).
The steps above are relatively time-consuming, and shouldn't be something that you do after every change just to reassure yourself, but rather are tools you'd use to investigate a specific problem.
No. The only thing to do is to be careful and keep in mind that memory leaks can exist in Java when writing your application.
The only thing you can do is to use tools to try to find where memory leaks come from when you notice such a problem. I would strongly recommend Memory Analyzer for this purpose.
I have this class and I'm testing insertions with different data distributions. I'm doing this in my code:
...
AVLTree tree = new AVLTree();
//insert the data from the first distribution
//get results
...
tree = new AVLTree();
//insert the data from the next distribution
//get results
...
I'm doing this for 3 distributions. Each one should be tested an average of 14 times, with the 2 lowest and 2 highest values removed before computing the average. This should be done 2000 times, each time with 1000 more elements. In other words, it goes 1000, 2000, 3000, ..., 2000000.
The problem is, I can only get as far as 100000. When I tried 200000, I ran out of heap space. I increased the available heap space with -Xmx in the command line to 1024m and it didn't even complete the tests with 200000. I tried 2048m and again, it wouldn't work.
What I'm thinking is that the garbage collector isn't getting rid of the old trees once I do tree = new AVLTree(). But why? I thought that the elements of the old trees would no longer be accessible and their memory would be cleaned up.
The garbage collector should have no trouble cleaning up your old tree objects, so I can only assume there's some other allocation that you're doing that's not being cleaned up.
Java has a good tool to watch the GC in progress (or not in your case), JVisualVM, which comes with the JDK.
Just run that and it will show you which objects are taking up the heap, and you can both trigger and watch the progress of GCs. Then you can target those objects for pooling so they can be re-used by you, saving the GC the work.
Also look into this option, which will probably suppress the error that stops the program, so your program will finish, but it may take a long time because your app will fill up the heap and then run very slowly.
-XX:-UseGCOverheadLimit
Which JVM are you using, and which JVM parameters have you used to configure the GC?
Your explanation suggests there is a memory leak in your code. If you have a tool like JProfiler, use it to find out where the memory leak is.
There's no reason those trees shouldn't be collected, although I'd expect that before you ran out of memory you would see long pauses as the system ran a full GC. Since it's been noted here that that's not what you're seeing, you could try running with flags like -XX:+PrintGC, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps to get more information on exactly what's going on, along with perhaps some sort of running count of roughly where you are. You could also explicitly tell the garbage collector to use a different garbage-collection algorithm.
However, it still seems unlikely to me. What other code is running? Is it possible there's something in the AVLTree class itself that's keeping its instances from being GC'd? What about logging calls to finalize() on that class to ensure that (some of them, at least) are collectible (e.g. make a few and manually call System.gc())?
GC parameters here; a nice reference on garbage collection from Sun here that's well worth reading.
The Java garbage collector isn't guaranteed to run as soon as an object becomes unreachable. So if you're writing code that creates and discards a lot of objects, it's possible to use up all of the heap space before the GC has a chance to catch up. Alternatively, Pax's suggestion that there is a memory leak in your code is also a strong possibility.
If you are only doing benchmarking, then you may want to call System.gc() between tests, or even re-run your program for each distribution.
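A small sketch of that suggestion, reusing the AVLTree class from the question; the loop bounds and the insertion/measurement steps are placeholders, not the real benchmark code:

static void runBenchmarks() {
    for (int size = 1000; size <= 2000000; size += 1000) {
        AVLTree tree = new AVLTree();
        // ... insert `size` elements from the current distribution and record the results ...
        tree = null;    // drop the reference to the whole tree
        System.gc();    // hint: collect the old tree before starting the next run
    }
}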
We noticed this in a server product. When making a lot of tiny objects that quickly get thrown away, the garbage collector can't keep up. The problem is more pronounced when the tiny objects have pointers to larger objects (e.g. an object that points to a large char[]). The GC doesn't seem to realize that if it frees up the tiny object, it can then free the larger object. Even when calling System.gc() directly, this was still a huge problem (both in 1.5 and 1.6 VMs)!
What we ended up doing and what I recommend to you is to maintain a pool of objects. When your object is no longer needed, throw it into the pool. When you need a new object, grab one from the pool or allocate a new one if the pool is empty. This will also save a small amount of time over pure allocation because Java doesn't have to clear (bzero) the object.
If you're worried about the pool getting too large (and thus wasting memory), you can either remove an arbitrary number of objects from the pool on a regular basis, or use weak references (for example, using java.util.WeakHashMap). One of the advantages of using a pool is that you can track the allocation frequency and totals, and you can adjust things accordingly.
We're using pools of char[] and byte[], and we maintain separate "bins" of sizes in the pool (for example, we always allocate arrays of size that are powers of two). Our product does a lot of string building, and using pools showed significant performance improvements.
Note: In general, the GC does a fine job. We just noticed that with small objects that point to larger structures, the GC doesn't seem to clean up the objects fast enough especially when the VM is under CPU load. Also, System.gc() is just a hint to help schedule the finalizer thread to do more work. Calling it too frequently causes a significant performance hit.
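A bare-bones sketch of the pooling idea described above; the per-size "bins" and the eviction policy are left out, and the class and method names are made up for illustration:

import java.util.ArrayDeque;
import java.util.Deque;

class CharArrayPool {
    private final Deque<char[]> pool = new ArrayDeque<char[]>();
    private final int arraySize;

    CharArrayPool(int arraySize) {
        this.arraySize = arraySize;
    }

    // Reuse a pooled array if one is available, otherwise allocate a new one.
    synchronized char[] acquire() {
        char[] buf = pool.pollFirst();
        return (buf != null) ? buf : new char[arraySize];
    }

    // Return an array to the pool once the caller is done with it.
    synchronized void release(char[] buf) {
        if (buf != null && buf.length == arraySize) {
            pool.addFirst(buf);
        }
    }
}

A real pool would also cap its size or hold the buffers via weak references, as noted above, so that the pool itself doesn't become a leak.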
Given that you're just doing this for testing purposes, it might just be good housekeeping to invoke the garbage collector directly using System.gc() (thus requesting a pass). It won't help you if there is a memory leak, but if there isn't, it might buy you back enough memory to get through your test.