I was following a tutorial, and below was its example of an autoboxing memory leak.
package com.example.memoryleak;

public class Adder {
    public long addIncremental(long l) {
        Long sum = 0L;   // wrapper type: each addition boxes a new Long
        sum = sum + l;
        return sum;
    }

    public static void main(String[] args) {
        Adder adder = new Adder();
        for (long i = 0; i < 1000; i++) {
            adder.addIncremental(i);
        }
    }
}
Now, I can understand that unnecessary objects would be created because of autoboxing, but how did it cause a memory leak? The way I understand it, a memory leak is caused when you hold a strong reference to a dead object. In this case, once I have come out of the for loop there are no strong references to those Long objects, so how did it cause a memory leak?
Please note that I want to understand how it caused a memory leak; I know those objects were unnecessary.
The other answers are correct: this is not a memory leak.
The code you are showing creates objects at a very high rate, and they become subject to garbage collection immediately. None of these "temp" objects is somehow forgotten; they all become eligible for collection, and the GC will collect them at some point.
A memory leak refers to situations where the used memory keeps increasing, without the objects ever becoming eligible for garbage collection.
Given the comment that asks about the "cache" example that uses a map:
As long as there is a single (strong!) reference to the map object from another object that is "alive" in GC terms, that map is alive, and therefore all objects stored within that map are alive (not eligible for GC).
When the last reference to that map vanishes, the map itself becomes eligible for the GC. The same is true for the values within the map, unless there is some other still-alive reference to such a value.
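A minimal sketch of that kind of leak, assuming a long-lived static map (all names here are illustrative):

import java.util.HashMap;
import java.util.Map;

public class CacheLeak {
    // A static field is a GC root: the map, and every key and value in it,
    // stays strongly reachable for the lifetime of the class.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static byte[] lookup(String key) {
        // Entries are added but never removed, so used memory only grows.
        return CACHE.computeIfAbsent(key, k -> new byte[1024 * 1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            lookup("key-" + i); // a genuine leak: nothing ever becomes eligible for GC
        }
    }
}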
Quoting from the link you provided:
Can you spot the memory leak?
Here I made a mistake. Instead of taking the primitive long for the sum, I took the Long (wrapper class), which is the cause of the memory leak. Due to auto-boxing, sum=sum+l; creates a new object in every iteration, so 1000 unnecessary objects will be created. Please avoid mixing and matching between primitive and wrapper classes. Try to use primitive as much as you can.
Actually, there is no memory leak here. Rather, it produces some redundant memory usage and garbage collection.
If you want to simulate real memory leak refer to this question: Creating a memory leak with Java.
Also, since the result of adder.addIncremental(i); is ignored, the JVM may apply optimizations to this code.
If you take a look at plots of memory you will see that memory usage is quite stable from GC cycle to cycle.
For example:
Can you spot the memory leak?
Here I made a mistake. Instead of taking the primitive long for the sum, I took the Long (wrapper class), which is the cause of the memory leak. Due to auto-boxing, sum=sum+l; creates a new object in every iteration, so 1000 unnecessary objects will be created.
This quote from the tutorial is wrong. In this example you will have no memory leak, just inefficient memory usage.
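If you want to see that for yourself, here is a crude probe (a sketch only): it repeats the boxing mistake on a larger scale and samples the used heap via Runtime. Running with -verbose:gc gives a more reliable picture; the used figure should oscillate but stay bounded.

public class MemoryProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        Long sum = 0L; // the same wrapper-type mistake as in the tutorial
        for (long i = 0; i < 100_000_000L; i++) {
            sum = sum + i; // boxes a new Long on (almost) every iteration
            if (i % 10_000_000 == 0) {
                long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
                System.out.println("used heap: " + usedMb + " MB");
            }
        }
        System.out.println(sum);
    }
}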
Related
import java.util.HashMap;

public class TestProcessor {
    public void fillData() {
        boolean success = true;
        HashMap<String, String> hMap = null;
        if (success) {
            hMap = new HashMap<String, String>();
            hMap.put("one", "java");
            hMap.put("two", "servlet");
        }
        if (hMap != null) {
            processData(hMap);
        }
    }

    public void processData(HashMap<String, String> map) {
        String param1 = map.get("one");
        String param2 = map.get("two");
    }
}
In the above code, if we call the fillData() method multiple times and the if condition is true, then the HashMap object will be created multiple times. Will this cause a memory leak? If a memory leak happens, how can we fix it?
The Java Virtual Machine (JVM) actively and automatically manages the memory your application uses. Some things to keep in mind about Java's memory management:
Memory is automatically allocated in heap memory for objects that your program creates.
When an object can no longer be accessed by your program (usually by falling out of scope so that no variables that reference the object can be accessed) the memory is automatically reclaimed by a process called garbage collection.
Garbage collection is automatic and non-deterministic. Your program can't know (or predict) when garbage collection is going to occur (so you don't know exactly when unreachable objects will be reclaimed).
In the majority of cases, the JVM manages memory so well that no memory leaks occur even when a program runs for a long time (and creates and reclaims many objects).
By the above reasoning, the code snippet you show will not result in any memory leaks. There are only a few situations in which memory leaks will occur in Java:
1. When you do your own memory management. Memory leaks can occur if you implement your own data structures for objects. For example, if you create your own stack implementation, objects that are "popped" out of your stack can still have active references to them. When this happens, the object will not be garbage collected even if it is no longer in the active portion of your stack. This is one of the few cases in which it can be important to actively assign null to elements that may refer to objects that are no longer "being used."
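As an example of (1), here is a minimal sketch of such a leaky stack; the popped slot keeps a strong reference until a later push happens to overwrite it:

import java.util.Arrays;

public class LeakyStack {
    private Object[] elements = new Object[16];
    private int size = 0;

    public void push(Object e) {
        if (size == elements.length) {
            elements = Arrays.copyOf(elements, 2 * size);
        }
        elements[size++] = e;
    }

    public Object pop() {
        Object result = elements[--size];
        // The array still references the popped object. The fix is:
        // elements[size] = null;
        return result;
    }
}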
2. Any time you have a long-lived object that holds a reference to an object that you intend to be short lived. The most common situation in which this can cause memory leaks is in the use of non-static inner classes and anonymous classes (both of which contain a reference to their enclosing instance).
Every non-static inner class has an implicit reference to its surrounding class. Anonymous classes are similar. To successfully create a memory leak, simply pass an inner-class object to a method which keeps references to the objects it is given, and you're done.
Why does this cause a memory leak? Suppose you implement something like a cache. Now follow the execution path to a local object which stores some of its inner-class objects in the cache. Once the local object goes out of scope, it will not be garbage collected! The inner-class object in the cache holds a reference to the surrounding object, so the surrounding object is still reachable and therefore no longer a candidate for garbage collection. The same is true for anonymous classes!
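A small sketch of that scenario, using an invented long-lived cache (none of these names come from the original question):

import java.util.ArrayList;
import java.util.List;

public class InnerClassLeak {
    static final List<Object> CACHE = new ArrayList<>(); // long-lived, reachable from a GC root

    static class BigHolder {
        private final byte[] payload = new byte[10 * 1024 * 1024]; // 10 MB

        // Non-static inner class: every Token holds an implicit
        // reference to its enclosing BigHolder instance.
        class Token { }
    }

    public static void main(String[] args) {
        BigHolder holder = new BigHolder();
        CACHE.add(holder.new Token()); // only the tiny Token seems cached...
        holder = null; // ...but the 10 MB BigHolder stays reachable through
                       // Token's hidden reference to its enclosing instance
    }
}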
It should not create a memory leak, as you will be replacing your existing HashMap, which allows the old one to be garbage collected.
If you are holding references to the objects within in the hashmap externally, then you may cause them to be retained.
*I'm assuming this is Java, judging from your syntax.
With the following method:
Collection#clear
how can I attempt to reclaim the memory that could be freed by an invocation? Code sample:
import java.util.ArrayList;
import java.util.Collection;

public class Foo
{
    private static Collection<Bar> bars;

    public static void main(String[] args) {
        bars = new ArrayList<Bar>();
        for (int i = 0; i < 100000; i++) {
            bars.add(new Bar());
        }
        bars.clear();
        // how to get memory back here?
    }
}
EDIT
What I am looking for is similar to how ArrayList.remove reclaims memory by copying the new smaller array.
It is more efficient to reclaim memory only when you need to. In this case it is much simpler/faster to let the GC do it asynchronously when there is a need to. You can give the JVM a hint using System.gc(), but this is likely to be slower and to complicate your program.
how ArrayList.remove reclaims memory by copying the new smaller array.
It doesn't do this. It never shrinks the array, nor would you need to.
If you really need to make the collection smaller, which I seriously doubt, you can create a new ArrayList which has a copy of the elements you want to keep.
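Assuming the bars field from the question is declared as a List<Bar>, a sketch of that copy-to-shrink approach (the kept range of 10 elements is purely illustrative):

// Copy the survivors into a fresh, right-sized list; the old list and its
// large backing array then become eligible for garbage collection.
List<Bar> survivors = new ArrayList<Bar>(bars.subList(0, 10));
bars = survivors;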
bars = null;
would be the best. clear doesn't guarantee to release any memory, only to reset the logical contents to "empty".
In fact, bars= null ; doesn't guarantee that memory will be immediately released. However, it would make the object previously pointed by bars and all its dependents "ready for garbage collection" ("finalization", really, but let's keep this simple). If the JVM finds itself needing memory, it will collect these objects (other simplification here: this depends on the exact garbage collection algorithm the JVM is configured to use).
You can't.
At some point after there are no more references to the objects, the GC will collect them for you.
EDIT: To force the ArrayList to release its reference to the giant empty array, call trimToSize()
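A sketch of that combination, assuming the bars field from the question (the cast is needed because trimToSize() is declared on ArrayList, not on Collection):

bars.clear();                         // drop the references to the elements
((ArrayList<Bar>) bars).trimToSize(); // replace the large backing array with a tiny one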
You can't force memory reclamation, that will happen when garbage collection occurs.
If you use clear() you will clear the references to objects that were contained in the collection. If there are no other references to those objects, then they will be reclaimed next time GC is run.
The collection itself (which just contains references, not the objects referred to), will not be resized. The only way to get back the storage used by the collection is to set the reference bars to null so it will eventually be reclaimed.
I have to assume that the following method doesn't leak memory:
public final void setData(final Integer p_iData)
{
    data = p_iData;
}
Where data is a property of some class.
Every time the method gets called, a new Integer is replacing the currently existing data reference. So what's happening with the current/old data?
Java has to be doing something under the hood; otherwise we'd have to null-out any objects every time an object is assigned.
Simplistic explanation:
Periodically the garbage collector looks at all the objects in the system, and sees which aren't reachable any more from live references. It frees any objects which are no longer reachable.
Note that your method does not create a new Integer object at all. A reference to the same Integer object could be passed in time and time again, for example.
The reality of garbage collection is a lot more complicated than this:
Modern GCs tend to be generational, assuming that most objects are short-lived, so they don't need to check the whole (possibly large) heap as often; they can just check "recent" objects for liveness frequently
Objects can have finalizers - code to be run before they're garbage collected. This delays garbage collection of such objects by a cycle, and the object could even "resurrect" itself by making itself reachable
Modern GCs can collect in parallel, and have numerous tweaking options
Java is a garbage-collected language.
Once there are no more live references to an object, it becomes eligible for garbage collection. The collector runs from time to time and will reclaim the object's memory.
In a nutshell, your code is 100% correct and is not leaking memory.
It gets garbage collected eventually.
If there is no other reference to the old data, Java's garbage collector will clean it up and free the memory.
Actually, since Integer is an object not a primitive type, the line:
data = p_iData;
is updating a reference.
Now, the old object that this.data used to point to will be examined by the GC to determine if there are no more references to that object. If not, that object is destroyed and the memory is freed (at some later time)
If the object previously referenced by data is no longer referenced by any object structure that is referenced from any running thread it is eligible for garbage collecion. GC is performed by Java in the background to free the memory of unused objects.
I want to show you an example. In some code:

Integer x;
x = 1000;
x = 2000;

Initially I assigned 1000 to x, and then 2000. After the second assignment, the Integer object holding 1000 is no longer referenced, and its memory will be handled by the Java GC. The Java GC runs in the background, finds unreferenced memory, and cleans it up.
I am currently trying to diagnose a slow memory leak in my application. The facts I have so far are as follows.
I have a heap dump from a 4 day run of the application.
This heap dump contains ~800 WeakReference objects which point to objects (all of the same type, which I will call Foo for the purposes of this question) retaining 40mb of memory.
Eclipse Memory Analyzer Tool (MAT) shows that each of the Foo objects referred to by these WeakReferences is not referred to by any other objects. My expectation is that this should make these Foo objects weakly reachable, and thus they should be collected at the next GC.
Each of these Foo objects has a timestamp which shows that they were allocated over the course of the 4 day run. I also have logs during this time which confirm that Garbage Collection was happening.
A huge number of Foo objects are being created by my application and only a very small fraction of them are ending up in this state within the heap dump. This suggests to me that the root cause is some sort of race condition.
My application uses JNI to call through to a native library. The JNI code calls NewGlobalRef 4 times during start of day initialisation to get references to Java classes which it uses.
What could possibly cause these Foo objects to not be collected despite only being referenced by WeakReferences (according to Eclipse Memory Analyzer Tool)?
EDIT1:
#mindas
The WeakReference I am using is equivalent to the following example code.
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class FooWeakRef extends WeakReference<Foo>
{
    public long longA;
    public long longB;
    public String stringA;

    public FooWeakRef(Foo xiObject, ReferenceQueue<Foo> xiQueue)
    {
        super(xiObject, xiQueue);
    }
}
Foo does not have a finalizer and any finalizer would not be a consideration so long as the WeakRefs have not been cleared. An object is not finalizable when it is weakly reachable. See this page for details.
#kasten The weak references are cleared before the object is finalizable. My heap dump shows that this has not happened.
#jarnbjo I refer to the WeakReference Javadoc:
"Suppose that the garbage collector determines at a certain point in time that an object is weakly reachable. At that time it will atomically clear all weak references to that object and all weak references to any other weakly-reachable objects from which that object is reachable through a chain of strong and soft references."
This suggests to me that the GC should be detecting the fact that my Foo objects are "Weakly reachable" and "At that time" clearing the weak references.
EDIT 2
#j flemm - I know that 40mb doesn't sound like much, but I am worried that 40mb in 4 days means 1000mb in 100 days. All of the docs I have read suggest that objects which are weakly reachable should not hang around for several days. I am therefore interested in any other explanations of how an object could be strongly referenced without the reference showing up in a heap dump.
I am going to try allocating some large objects when some of these dangling Foo objects are present and see whether the JVM collects them. However, this test will take a couple of days to set up and complete.
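In the meantime, here is a scaled-down sketch of that experiment (Foo here is a stand-in for the real class, and the allocation sizes are arbitrary):

import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class WeakRefPressureTest {
    static class Foo { byte[] data = new byte[1024]; }

    public static void main(String[] args) {
        ReferenceQueue<Foo> queue = new ReferenceQueue<Foo>();
        WeakReference<Foo> ref = new WeakReference<Foo>(new Foo(), queue);

        // Create allocation pressure until the collector runs and, if the
        // object really is only weakly reachable, clears the reference.
        for (int i = 0; i < 1000 && ref.get() != null; i++) {
            byte[] pressure = new byte[1024 * 1024]; // ~1 MB of garbage per iteration
        }
        System.out.println("cleared: " + (ref.get() == null)
                + ", enqueued: " + (queue.poll() != null));
    }
}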
EDIT 3
#jarnbjo - I understand that I have no guarantee about when the JDK will notice that an object is weakly reachable. However, I would expect that an application under heavy load for 4 days would provide enough opportunities for the GC to notice that my objects are weakly reachable. After 4 days I strongly suspect that the remaining weakly referenced objects have been leaked somehow.
EDIT 4
#j flemm - That's really interesting! Just to clarify, are you saying that GC is happening in your app and is not clearing Soft/Weak refs? Can you give me any more details about what JVM + GC config you are using? My app uses a memory bar at 80% of the heap to trigger GC. I was assuming that any GC of the old gen would clear weak refs. Are you suggesting that a GC only collects weak refs once the memory usage is above a higher threshold? Is this higher limit configurable?
EDIT 5
#j flemm - Your comment about clearing out WeakRefs before SoftRefs is consistent with the Javadoc which states:
SoftRef: "Suppose that the garbage collector determines at a certain point in time that an object is softly reachable. At that time it may choose to clear atomically all soft references to that object and all soft references to any other softly-reachable objects from which that object is reachable through a chain of strong references. At the same time or at some later time it will enqueue those newly-cleared soft references that are registered with reference queues."
WeakRef: "Suppose that the garbage collector determines at a certain point in time that an object is weakly reachable. At that time it will atomically clear all weak references to that object and all weak references to any other weakly-reachable objects from which that object is reachable through a chain of strong and soft references. At the same time it will declare all of the formerly weakly-reachable objects to be finalizable. At the same time or at some later time it will enqueue those newly-cleared weak references that are registered with reference queues."
For clarity, are you saying that the Garbage Collector runs when your app has more than 50% free memory and in this case it does not clear WeakRefs? Why would the GC run at all when your app has >50% free memory? I think your app is probably just generating a very low amount of garbage and when the collector runs it is clearing WeakRefs but not SoftRefs.
EDIT 6
#j flemm - The other possible explanation for your app's behaviour is that the young gen is being collected but that your Weak and Soft refs are all in the old gen and are only cleared when the old gen is being collected. For my app I have stats showing that the old gen is being collected which should mean that WeakRefs get cleared.
EDIT 7
I am starting a bounty on this question. I am looking for any plausible explanations for how WeakRefs could fail to be cleared while GC is happening. If the answer is that this is impossible I would ideally like to be pointed at the appropriate bits of OpenJDK which show WeakRefs being cleared as soon as an object is determined to be weakly reachable and that weak reachability is resolved every time GC runs.
I have finally got round to checking the Hotspot JVM source code and found the following code.
In referenceProcessor.cpp:
void ReferenceProcessor::process_discovered_references(
  BoolObjectClosure*           is_alive,
  OopClosure*                  keep_alive,
  VoidClosure*                 complete_gc,
  AbstractRefProcTaskExecutor* task_executor) {
  NOT_PRODUCT(verify_ok_to_handle_reflists());

  assert(!enqueuing_is_done(), "If here enqueuing should not be complete");
  // Stop treating discovered references specially.
  disable_discovery();

  bool trace_time = PrintGCDetails && PrintReferenceGC;

  // Soft references
  {
    TraceTime tt("SoftReference", trace_time, false, gclog_or_tty);
    process_discovered_reflist(_discoveredSoftRefs, _current_soft_ref_policy, true,
                               is_alive, keep_alive, complete_gc, task_executor);
  }

  update_soft_ref_master_clock();

  // Weak references
  {
    TraceTime tt("WeakReference", trace_time, false, gclog_or_tty);
    process_discovered_reflist(_discoveredWeakRefs, NULL, true,
                               is_alive, keep_alive, complete_gc, task_executor);
  }
The function process_discovered_reflist has the following signature:
void
ReferenceProcessor::process_discovered_reflist(
  DiscoveredList               refs_lists[],
  ReferencePolicy*             policy,
  bool                         clear_referent,
  BoolObjectClosure*           is_alive,
  OopClosure*                  keep_alive,
  VoidClosure*                 complete_gc,
  AbstractRefProcTaskExecutor* task_executor)
This shows that WeakRefs are being unconditionally cleared by ReferenceProcessor::process_discovered_references.
Searching the Hotspot code for process_discovered_reference shows that the CMS collector (which is what I am using) calls this method from the following call stack.
CMSCollector::refProcessingWork
CMSCollector::checkpointRootsFinalWork
CMSCollector::checkpointRootsFinal
This call stack looks like it is invoked every time a CMS collection is run.
Assuming this is true, the only explanation for a long lived weakly referenced object would be either a subtle JVM bug or if the GC had not been run.
You might want to check whether you have a leaked-classloader issue. You can find more on this topic in this blog post.
You need to clarify what the link is between Foo and WeakReference. The case
class Wrapper<T> extends WeakReference<T> {
    private final T referent;
    public Wrapper(T referent) {
        super(referent);
        this.referent = referent;
    }
}
is very different from just
class Wrapper<T> extends WeakReference<T> {
    public Wrapper(T referent) {
        super(referent);
    }
}
or its inlined version, WeakReference<Foo> wr = new WeakReference<Foo>(foo).
So I assume your case is not like I described in my first code snippet.
As you have said you are working with JNI, you might want to check whether you have any unsafe finalizers. Every finalizer should have a finally block calling super.finalize(), and it's easy to slip up.
You probably need to tell us more about the nature of your objects to offer better ideas.
Try SoftReference instead. Javadoc says: All soft references to softly-reachable objects are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError.
WeakReference doesn't have such guarantees, which makes them more suitable for caches, but sometimes SoftReferences are better.
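For illustration, a minimal soft-reference cache sketch (Expensive and loadExpensive() are invented placeholders, not from the question):

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache {
    static class Expensive { byte[] data = new byte[1024 * 1024]; }

    private final Map<String, SoftReference<Expensive>> cache =
            new HashMap<String, SoftReference<Expensive>>();

    public Expensive get(String key) {
        SoftReference<Expensive> ref = cache.get(key);
        Expensive value = (ref == null) ? null : ref.get();
        if (value == null) {            // never cached, or cleared under memory pressure
            value = loadExpensive(key);
            cache.put(key, new SoftReference<Expensive>(value));
        }
        return value;
    }

    private Expensive loadExpensive(String key) { return new Expensive(); }
}

Note that entries cleared by the GC still occupy map slots; registering the references with a ReferenceQueue and pruning on access is the usual refinement.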
#iirekm No: WeakReferences are 'weaker' than SoftReferences, meaning that a WeakReference will always be garbage collected before a SoftReference.
More info in this post: Understanding Java's Reference classes: SoftReference, WeakReference, and PhantomReference
Edit: (after reading comments) Yes surely Weak References are 'Weaker' than SoftReferences, typo. :S
Here are some use cases to throw further light on the subject:
SoftReference: In-memory cache (object stays alive until the VM deems that there's not enough heap memory).
WeakReference: Auto-clearing listeners (object should be cleared on the next GC cycle after being deemed weakly reachable).
PhantomReference: Avoiding out-of-memory errors when handling unusually large objects (when the reference is enqueued in the reference queue, we know the host object is about to be cleared, and it is safe to allocate another large object). Think of it as a finalize() alternative, without the ability to bring dead objects back to life (as you potentially could with finalize).
This being said, nothing prevents the VM (please correct me if I'm wrong) from letting the weakly reachable objects stay alive as long as it is not running out of memory (as in the original author's case).
This is the best resource I could find on the subject: http://www.pawlan.com/monica/articles/refobjs/
Edit 2: Added "to be" in front of cleared in PhantomRef
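To make the PhantomReference use case above concrete, here is a small sketch (it relies on the VM honouring the System.gc() hint, which it need not):

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<byte[]> queue = new ReferenceQueue<byte[]>();
        byte[] big = new byte[50 * 1024 * 1024];
        PhantomReference<byte[]> ref = new PhantomReference<byte[]>(big, queue);

        big = null;    // drop the only strong reference
        System.gc();   // request a collection (a hint only)

        // Wait (up to 10s) for the collector to enqueue the phantom reference;
        // once enqueued, the old buffer is unreachable and a replacement can
        // be allocated without risking OutOfMemoryError.
        if (queue.remove(10000) != null) {
            ref.clear(); // before Java 9, phantom references are not cleared automatically
            byte[] replacement = new byte[50 * 1024 * 1024];
            System.out.println("old buffer collected; allocated " + replacement.length + " bytes");
        } else {
            System.out.println("reference was not enqueued within the timeout");
        }
    }
}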
I am not acquainted with Java, but you may be using a generational garbage collector, which will leave your Foo and FooWeakRef objects alone (not collected) as long as:
they have been promoted to an older generation, and
there is enough memory to allocate new objects in the younger generations.
Does the log that indicates garbage collection occurred discriminate between major and minor collections?
For non-believers who claim that weak references are cleared before soft references:
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;
public class Test {
    /**
     * @param args
     */
    public static void main(String[] args) {
        ReferenceQueue<Object> q = new ReferenceQueue<Object>();
        Map<Reference<?>, String> referenceToId = new HashMap<Reference<?>, String>();
        for (int i = 0; i < 100; ++i) {
            Object obj = new byte[10 * 1024 * 1024]; // 10M
            SoftReference<Object> sr = new SoftReference<Object>(obj, q);
            referenceToId.put(sr, "soft:" + i);
            WeakReference<Object> wr = new WeakReference<Object>(obj, q);
            referenceToId.put(wr, "weak:" + i);
            for (;;) {
                Reference<?> ref = q.poll();
                if (ref == null) {
                    break;
                }
                System.out.println("cleared reference " + referenceToId.get(ref) + ", value=" + ref.get());
            }
        }
    }
}
If you run it with either -client or -server, you'll see that soft references are always cleared before weak references, which also agrees with the Javadoc: http://download.oracle.com/javase/1.4.2/docs/api/java/lang/ref/package-summary.html#reachability
Typically, soft/weak references are used in connection with maps to build caches. If the keys in your map are compared with the == operator (or the unoverridden .equals from Object), then it's best to use a map which operates on SoftReference keys (e.g. from Apache Commons): when the object 'disappears', no other object will ever be equal in the '==' sense to the old one. If the keys of your map are compared with an advanced .equals() method, like String or Date, many other objects may match the 'disappearing' one, so it's better to use the standard WeakHashMap.
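A short sketch of the WeakHashMap behaviour described above (System.gc() is only a hint, so the outcome is not deterministic):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Object, String> cache = new WeakHashMap<Object, String>();
        Object key = new Object();
        cache.put(key, "payload");

        key = null;        // no strong reference to the key remains
        System.gc();       // request a collection
        Thread.sleep(100); // give reference processing a moment

        System.out.println("entries left: " + cache.size()); // usually 0
    }
}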
Given an aggregation of class instances which refer to each other in a complex, circular, fashion: is it possible that the garbage collector may not be able to free these objects?
I vaguely recall this being an issue with the JVM in the past, but I thought it was resolved years ago. Yet, some investigation in jhat has revealed a circular reference as the reason for a memory leak that I am now faced with.
Note: I have always been under the impression that the JVM was capable of resolving circular references and freeing such "islands of garbage" from memory. However, I am posing this question just to see if anyone has found any exceptions.
Only a very naive implementation would have a problem with circular references. Wikipedia has a good article on the different GC algorithms. If you really want to learn more, try (Amazon) Garbage Collection: Algorithms for Automatic Dynamic Memory Management. Java has had a good garbage collector since 1.2 and an exceptionally good one in 1.5 and Java 6.
The hard part for improving GC is reducing pauses and overhead, not basic things like circular reference.
The garbage collector knows where the root objects are: statics, locals on the stack, etc and if the objects aren't reachable from a root then they will be reclaimed. If they are reachable, then they need to stick around.
Ryan, judging by your comment to Circular References in Java, you fell into the trap of referencing objects from a class, which was probably loaded by the bootstrap/system classloader. Every class is referenced by the classloader that loaded the class, and can thus be garbage-collected only if the classloader is no longer reachable. The catch is that the bootstrap/system classloader is never garbage collected, therefore, objects reachable from classes loaded by the system classloader cannot be garbage-collected either.
The reasoning for this behavior is explained in JLS. For example, Third Edition 12.7 http://java.sun.com/docs/books/jls/third_edition/html/execution.html#12.7.
If I remember correctly, then according to the specifications, there are only guarantees about what the JVM can't collect (anything reachable), not what it will collect.
Unless you are working with real-time JVMs, most modern garbage collectors should be able to handle complex reference structures and identify "subgraphs" that can be eliminated safely. The efficiency, latency, and likelihood of doing this improve over time as more research ideas make their way into standard (rather than research) VMs.
No, at least using Sun's official JVM, the garbage collector will be able to detect these cycles and free the memory as soon as there are no longer any references from the outside.
The Java specification says that the garbage collector can collect your object only if it is not reachable from any thread. Reachable means there is a reference, or chain of references, that leads from A to B, and it can go via C, D, ... Z for all it cares.
The JVM not collecting things has not been a problem for me since 2000, but your mileage may vary.
Tip: Java serialization caches objects to make object-mesh transfers efficient. If you have many large, transient objects and all your memory is getting hogged, reset your serializer to clear its cache.
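A sketch of that tip: ObjectOutputStream keeps a handle table of every object it has written (to encode back-references), so a long-lived stream pins them all; reset() discards that table. The out parameter and nextBatch() below are placeholders.

import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;

public class BatchWriter {
    void writeBatches(OutputStream out) throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(out);
        for (int i = 0; i < 1000; i++) {
            oos.writeObject(nextBatch(i));
            oos.reset(); // forget previously written objects so they can be collected
        }
        oos.close();
    }

    private Serializable nextBatch(int i) { return new java.util.ArrayList<Object>(); }
}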
A circular reference happens when one object refers to another, and that other one refers to the first object. For example:
class A {
    private B b;
    public void setB(B b) {
        this.b = b;
    }
}

class B {
    private A a;
    public void setA(A a) {
        this.a = a;
    }
}

public class Main {
    public static void main(String[] args) {
        A one = new A();
        B two = new B();

        // Make the objects refer to each other (creates a circular reference)
        one.setB(two);
        two.setA(one);

        // Throw away the references from the main method; the two objects are
        // still referring to each other
        one = null;
        two = null;
    }
}
Java's garbage collector is smart enough to clean up the objects if there are circular references, but there are no live threads that have any references to the objects anymore. So having a circular reference like this does not create a memory leak.
Just to amplify what has already been said:
The application I've been working on for six years recently changed from Java 1.4 to Java 1.6, and we've discovered that we've had to add static references to things that we didn't even realize were garbage collectable before. We didn't need the static reference before because the garbage collector used to suck, and it is just so much better now.
Reference-counting GCs are notorious for this issue. Notably, Sun's JVM doesn't use a reference-counting GC.
If an object cannot be reached from the root of the heap (typically, at a minimum, through the classloaders if nothing else), then it will be destroyed, as unreachable objects are not copied to the new heap during a typical (copying) Java GC.
The garbage collector is a very sophisticated piece of software; it has been tested with a huge JCK test suite. It is NOT perfect, BUT there is a very good chance that as long as the Java compiler (javac) compiles all of your classes and the JVM instantiates them, you should be good.
Then again, if you are holding references to the root of this object graph, the memory will NOT be freed. BUT if you know what you're doing, you should be OK.