I am working on an application and my code is throwing an out of memory error. I am not able to see the memory utilisation of the code, so I am confused about where to look.
Also, after a little analysis I found that a private static object is being created, and in the constructor of that class some more objects are created; that class is also multithreaded.
So I want to know: since static objects do not get garbage collected, will all the objects created in that constructor also never be garbage collected?
A static reference is only collected when the class is unloaded, and this only happens when its class loader is no longer used. If you don't have multiple class loaders, it is likely the class will never be unloaded (until your program stops).
However, just because an object was once referenced statically doesn't change how it is collected. If you had a static reference to an object and no longer have a reference to that object, it will be collected as normal.
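A minimal sketch of that point: once the static field is nulled, the old referent is collected like any other object. The class and field names here are made up for illustration, and a WeakReference is used as a probe; note that System.gc() is only a hint to the collector, so the loop retries a few times.

```java
import java.lang.ref.WeakReference;

public class StaticRefDemo {
    // A static field keeps its referent alive only while the field points at it.
    static byte[] cache = new byte[1024];

    public static void main(String[] args) throws InterruptedException {
        WeakReference<byte[]> probe = new WeakReference<>(cache);

        cache = null; // drop the static reference

        // The array is now unreachable; it is collected like any other object.
        for (int i = 0; i < 50 && probe.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("collected = " + (probe.get() == null));
    }
}
```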
Having multiple threads can make finding bugs harder, but it doesn't change how objects are collected either.
You need to take a memory dump of your application and see why memory is building up. It is possible that the objects you are retaining are all needed. In that case you need to either
reduce your memory requirement, or
increase your maximum memory.
You might not have a memory leak - you might simply have surpassed the amount of available RAM your system can provide.
You can add several JVM arguments to limit the amount of RAM allocated to your runtime environment and to control the garbage collector - the trade-off is that this usually consumes more CPU.
You say you are not able to see the memory utilisation?
Have you tried using JVisualVM (in $JAVA_HOME/bin/jvisualvm)?
It can attach to local processes and take heap dumps.
Also, Eclipse Memory Analyzer has some good reports for subsequent analysis
I am in a position where I want to pass byte[] to a native method via JNA. All the examples I've found about this sort of thing either use a Memory instance or use a directly-allocated ByteBuffer and then get a Pointer from that.
However, when I read the docs, they say that the underlying native memory these Java objects consume -- which, as I understand it, is allocated "off the books" outside of the JVM-managed heap -- only gets freed when the objects' finalize() method is called.
But when that finalizer gets called has nothing to do with when the objects go out of scope. They could hang around for a long time before the garbage collector actually finalizes them. So any native memory they've allocated will stay allocated for an arbitrarily long time after they go out of scope. If they are holding on to a lot of memory and/or if there are lots of these objects, it would seem to me you have an effective memory leak, or at least a steady-state memory consumption potentially much higher than it needs to be. In other words, similar behavior to what's described in JNA/ByteBuffer not getting freed and causing C heap to run out of memory.
Is there any way around this problem with JNA? Or will I need to give up on JNA for this and use JNI instead so that I can use JNIEnv::GetByteArrayElements() so there's no need for any "off the books" memory allocations that can persist arbitrarily long? Is it acceptable to subclass Memory in order to get access to the dispose() method and use that to free up the underlying native memory on my timeline instead of the GC's timeline? Or will that cause problems when the finalizer does run?
JNA provides Memory.disposeAll() and Memory.dispose() to explicitly free memory (the latter requires you to subclass Memory), so if you do ever encounter memory pressure for which regular GC is not sufficient, you have some additional control available.
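The subclass-to-expose-dispose() pattern can be sketched in plain Java without the JNA dependency. NativeBlock below is a hypothetical stand-in for com.sun.jna.Memory (whose dispose() is protected), not real JNA API; the point is only the shape of the trick: a subclass widens access to the cleanup method so memory is freed on your schedule, not the finalizer's.

```java
// Hypothetical stand-in for com.sun.jna.Memory: it owns a native resource
// and exposes cleanup only through a protected dispose().
class NativeBlock {
    protected boolean valid = true;
    protected void dispose() { valid = false; } // analogue of Memory.dispose()
}

// The subclass makes cleanup callable (and try-with-resources friendly).
class ManagedBlock extends NativeBlock implements AutoCloseable {
    @Override
    public void close() {
        dispose(); // free the underlying memory deterministically
    }
}

public class DisposeDemo {
    public static void main(String[] args) {
        ManagedBlock block = new ManagedBlock();
        try {
            System.out.println("in use: " + block.valid);
        } finally {
            block.close(); // freed now, not whenever the GC finalizes it
        }
        System.out.println("after close: " + block.valid);
    }
}
```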
When I get an OOM error, how do I decide whether I should increase the heap size or whether there is a memory leak in my code?
Also, how do I decide on the initial heap size of my application? In my current application, we started with 512MB but have now increased to 4GB - and this was done by trial and error. Is there any systematic way to decide the required heap size?
If my question sounds too basic, can anyone please share some references which can help to increase understanding of this?
I don't think Java or the JVM actually define "leak". But your programs are clearly affected by them. I'm sure they define "out of memory".
You have a memory leak, if there is an object that will never be examined again by the application, held by a reference in a scope that is never exited, or held by another object that is examined occasionally. Such a leak may not seriously impact the long term stability of your program.
You have a bad leak if you generate a new object with these properties when some bit of code in the application is run, and that bit of code may run an indefinite number of times. Eventually a program with a bad leak runs out of memory.
It seems to me that 'leak' is a poor term in the GC context. Leaks are exactly what will be garbage-collected.
What won't be GC'd, and what causes OutOfMemoryErrors, is memory that isn't leaked when it should be, i.e. references to objects held beyond their true useful lifetime, typically a reference that is a member variable that should be method-local.
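A minimal illustration of that last point, contrasting a field that outlives its usefulness with a method-local variable. The class names are made up for the sketch:

```java
import java.util.Arrays;

public class ScopeDemo {
    // Leaky shape: the buffer is only needed inside process(), but because it
    // is a member variable it stays reachable (and uncollectable) for as long
    // as the Leaky instance itself does.
    static class Leaky {
        private int[] buffer;
        int process() {
            buffer = new int[1_000_000];
            Arrays.fill(buffer, 1);
            return buffer.length;
        }
    }

    // Fixed shape: the buffer is method-local, so it becomes unreachable the
    // moment process() returns and can be reclaimed on the next GC cycle.
    static class Tidy {
        int process() {
            int[] buffer = new int[1_000_000];
            Arrays.fill(buffer, 1);
            return buffer.length;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Leaky().process());
        System.out.println(new Tidy().process());
    }
}
```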
I have a memory leak in Java in which I have 9600 ImapClients in my heap dump and only 7800 MonitoringTasks. This is a problem since every ImapClient should be owned by a MonitoringTask, so those extra 1800 ImapClients are leaked.
One problem is I can't isolate them in the heap dump and see what's keeping them alive. So far I've only been able to pinpoint them by using external evidence to guess at which ImapClients are dangling. I'm learning OQL which I believe can solve this but it's coming slowly, and it'll take a while before I can understand how to perform something recursive like this in a new query language.
Determining a leak exists is difficult, so here is my full situation:
this process was spewing OOMEs a week ago. I thought I fixed it, and I'm trying to verify whether my fix worked without waiting another full week to see if it spews OOMEs again.
This task creates 7000-9000 ImapClients on start then under normal operation connects and disconnects very few of them.
I checked another process running older pre-OOME code, and it showed numbers of 9000/9100 instead of 7800/9600. I do not know why the old code would differ from the new code, but this is evidence of a leak.
The point of this question is so I can determine if there is a leak. There is a business rule that every ImapClient should be a referee of a MonitoringTask. If this query I am asking about comes up empty, there is not a leak. If it comes up with objects, together with this business rule, it is not only evidence of a leak but conclusive proof of one.
Your expectations are incorrect; there is no actual evidence of any leaks occurring.
The Garbage Collector's goal is to free space when it is needed and only then; anything else is a waste of resources. There is absolutely no benefit in attempting to keep as much free space as possible available all the time, only downsides.
Just because something is a candidate for garbage collection doesn't mean it will ever actually be collected, and there is no way to force garbage collection either.
I don't see any mention of OutOfMemoryError anywhere.
What you are concerned about is not something you can control, not directly anyway.
What you should focus on is what is in your control: making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for them.
Java Heap Memory isn't like manually managed memory in other languages, those rules don't apply
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely in Java the memory isn't consumed by one single uber-object that is leaking (a dangling reference in other environments).
Intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may decide that there are candidates, but because it considers there to be plenty of memory still available, it may judge it too expensive time-wise to flush them out at that point, and wait until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic; if you are doing degenerate things, you will cause it to work sub-optimally. There is lots of documentation on the internet about the garbage collector settings for all versions of the JVM.
These un-referenced objects may just not have reached the point at which the garbage collector decides to expunge them from memory, or there could be references to them held by some other object (a List, for example) that you don't realize still points to them. This is what is most commonly referred to as a leak in Java - more specifically, a reference leak.
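A minimal sketch of such a reference leak, assuming nothing beyond the standard library: a long-lived list that is only ever appended to pins everything it holds, so the collector can never reclaim those objects no matter how much pressure builds.

```java
import java.util.ArrayList;
import java.util.List;

public class ListLeakDemo {
    // A long-lived collection that code keeps adding to but never clears.
    // Every object it holds remains strongly reachable from this static root.
    static final List<byte[]> retained = new ArrayList<>();

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            retained.add(new byte[1024]); // "done" with it, but still referenced
        }
        // None of the 100 arrays is collectable: the static list pins them all.
        System.out.println("still reachable: " + retained.size());
    }
}
```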
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code, the garbage collection system just might not be getting put under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, not unless your program is crashing with OutOfMemoryError. This isn't C, C++, Objective-C, or any other manual memory management language / runtime. You don't get to decide what is in memory or not at the detail level you are expecting you should be able to.
Check your code for finalizers, especially anything relating to ImapClient.
It could be that your MonitoringTasks are being collected easily, whereas your ImapClients are finalized and therefore stay on the heap (though dead) until the finalizer thread runs.
The obvious answer is to add a WeakHashMap<X, Object> (and Y) to your code -- one tracking all instances of X and another tracking all instances of Y (make them static members of the class and insert every object into the map in the constructor with a null 'value'). Then you can at any time iterate over these maps to find all live instances of X and Y and see which Xs are not referenced by Ys. You might want to trigger a full GC first, to ignore objects that are dead and not yet collected.
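A sketch of that tracking idea, with Client and Task standing in for ImapClient and MonitoringTask (names assumed): each constructor registers the instance in a static WeakHashMap, and a sweep over the two maps reports clients that no task references.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.WeakHashMap;

public class TrackingDemo {
    // Stand-in for ImapClient: every instance registers itself on construction.
    static class Client {
        static final Map<Client, Object> LIVE =
                Collections.synchronizedMap(new WeakHashMap<>());
        Client() { LIVE.put(this, null); }
    }

    // Stand-in for MonitoringTask: owns exactly one Client.
    static class Task {
        static final Map<Task, Object> LIVE =
                Collections.synchronizedMap(new WeakHashMap<>());
        final Client client;
        Task(Client c) { LIVE.put(this, null); this.client = c; }
    }

    public static void main(String[] args) {
        Client owned = new Client();
        Task task = new Task(owned);       // this client has an owner
        Client dangling = new Client();    // no Task owns this one

        // Every live Client not referenced by any live Task is a suspect.
        Set<Client> suspects = new HashSet<>(Client.LIVE.keySet());
        for (Task t : Task.LIVE.keySet()) {
            suspects.remove(t.client);
        }
        System.out.println("unowned clients: " + suspects.size());
    }
}
```

Because the maps hold their keys weakly, dead-but-uncollected instances drop out on their own; triggering a full GC before the sweep, as suggested above, just makes that happen sooner.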
I have a Java class that is dynamically reloading groovy classes with a custom classloader and I am seeing some strange behaviour with some classes not being collected, but over time it does not leak memory (e.g. perm gen does not continue to grow indefinitely).
In my java code I am loading classes like so (boilerplate stuff removed for simplicity):
Class clazz = groovyClassLoader.loadClass(className, true, false, true);
Object instance = clazz.newInstance();
And I then reload the groovy classes dynamically by clearing the classloader cache, metaregistry, etc:
for (Class c : groovyClassLoader.getLoadedClasses()) {
    GroovySystem.getMetaClassRegistry().removeMetaClass(c);
}
groovyClassLoader.clearCache();
Now, if I just loop over this code, constantly loading and then re-loading my groovy classes, I see strange behaviour (my test code is literally just looping over the reload process - it's not doing anything with any of the objects created, so instance in the code above is just local and should be eligible for GC).
If I run it with MaxPermSize set to 128m, I get leak behaviour and it OOMs with permgen errors:
However, if I run it again and increase MaxPermSize to 256m, then all is good and it can run forever (this image is 1 hour, but I have run it overnight doing thousands of reloads):
Has anyone come across similar behaviour, or have any ideas? It also seems strange that in the first example the memory usage jumps up in steps rather than increasing steadily.
The saw-tooth pattern you see is typical for memory being allocated and released all the time. You added code to clear the caches, but that does not automatically imply that the class will be collected. The JVM has the tendency to grow memory quite a bit before even attempting normal garbage collection for longer living objects. Removing classes is done even less, often only during full gc runs. That can lead to the annoying situation that permgen is full and an OOME is thrown, even though classes could have been collected. The exact behavior seems to vary from version to version.
Anyway. Just because the class is not referenced anymore, does not mean it is collected right away. Instead permgen may grow to the maximum to then have classes unloaded.
Besides your loadClass calls possibly causing the creation of a new class, and the meta class registry referencing the class somehow, there are more classes with possible references. The call-site caching in Groovy, for example, involves SoftReferences to classes as well. And if something has been called often enough through Reflection (which Groovy may have to do), there might be helper classes generated to speed up Reflection. This latter one is done by the JDK, not Groovy.
One correction I have to make: meta classes are not real classes and cannot take permgen, but they reference classes which do. So if the meta class is hard referenced, the class stays. There has been an interesting "feature" in the IBM JDK that treated a class as not unloadable if it was hard referenced, even when the object holding that reference was itself only reachable through a SoftReference.
To explain the behavior above more completely, I would need the JVM's output for class loading and unloading. I would assume the application can in theory run in 128MB. If you look in the 128MB graph at the low point before 16:17:30 and the one before it, you may notice the earlier one is not as low as the later one. That means more classes were unloaded at the later point than at the earlier one. The JVM is free in deciding when to remove a class and does not always remove all classes that in theory could be unloaded. There is a trade-off between unloading classes as soon as they can be and performance.
For a school project I have to program different kinds of algorithms. The problem is, I have a working algorithm, but I have to run it several times, and after some time it gives me the following error:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
I know what the error means, but is it possible to let Java search for empty space during the run? I know it uses a lot of space which isn't used at some point. The application sets a lot of objects to null during the run and creates a lot of new ones, and because of this it runs out of memory.
So, concretely: is it possible to let the JVM free some space that has been set to null, or free some space while the program is running? I know I can give the JVM more space, but sooner or later I will run into the same problem.
If you need my IDE (in case it is IDE specific): it is Eclipse.
Please google 'garbage collection'. Java always tries to reuse space from objects that you are no longer using. If you run out of memory, you either need to use -Xmx to configure more memory, or you have to fix your code to retain fewer objects. You may find that a profiler like jvisualvm will help you find wasteful memory usage.
If you're using an Oracle/Sun JVM, I'd recommend that you download Visual VM 1.3.3, install all the plugins, and start it up. It'll show you what's happening in every heap generation, threads, CPU, objects, etc. It can tell you which class is taking up the most heap space.
You'll figure it out quickly if you have data.
I would use a memory profiler to determine where the memory is being used. Setting to null rarely helps. The GC will always run and free as much space as possible before you get an OOME.
Q: "is it possible to let the JVM free some space that is set to null? Or free some space in the time the program is running?"
A: Yes, a call to System.gc() will request this, but it is unlikely to solve your problem, as the system does this automatically from time to time. You need to find the object that is using all the memory and fix it in your code - likely a list that is never cleared and only ever added to.
I actually encountered this issue while implementing a particularly complicated algorithm that required a massive data structure. I had to come and post a question on this website. It turned out I had to use a completely different type of object altogether in order to avoid the memory error.
Here is that question.
GC will reclaim 'unused' memory automatically, so yes, it is possible to free some space at runtime, but it's crucial to understand what's classified as possible to be reclaimed.
Basically, an object's space can be reclaimed (garbage collected) once the object is unreachable - when there are no references to it. When you say 'setting space to null' you are most likely removing just one link (reference) to the object by setting it to null. This will allow the object to be reclaimed only if that was the only reference.
Object first = new Object();     // first object
Object second = new Object();    // second object
Object secondPrim = second;      // second reference to the second object

first = null;
// the first object's memory will be reclaimed (sooner or later)

second = null;
// there is still a reference to the second object via secondPrim,
// so the second object will not be reclaimed
Hope this helps. As for checking what exactly is going on, I would second the advice to profile your program.