Java String objects not getting garbage collected on time - java

I have an interesting problem with Java memory consumption. I have a native C++ application which invokes my Java application.
The application basically does some language translation, parses a few XMLs, and responds to network requests. Most of the application's state doesn't have to be retained, so it is full of methods that take String arguments and return String results.
This application keeps taking more and more memory over time, and eventually it gets close to 2 GB, which made us suspect that there is a leak somewhere in some Hashtable or static variables. On closer inspection we did not find any leaks. Comparing heap dumps over a period of time shows that char[] and String objects take up huge amounts of memory.
However, when we inspect these char[] and String objects we find that they have no GC roots, which means they shouldn't be the cause of a leak. Since they are part of the heap, it means they are waiting to be garbage collected. After using various tools (MAT, VisualVM, JHat) and scrolling through a lot of such objects, I used the trial version of YourKit. YourKit reports straight away that 96% of the char[] and String objects are unreachable. In other words, at the time the dump was taken, 96% of the Strings on the heap were waiting to be garbage collected.
I understand that the GC runs sparingly, but when you check via VisualVM you can actually see it running :-( so how come there are so many unused objects on the heap all the time?
IMO this application should never take more than 400-500 MB of memory, which is where it stays for the first 24 hours, but then it continues to grow the heap :-(
I am running Java 1.6.0-25.
thanks for any help.

Java doesn't GC when you think it does/should :-) GC is too complex a topic to understand what is going on without spending a couple of weeks really digging into the details. So if you see behavior that you can't explain, that doesn't mean it's broken.
What you see can have several reasons:
You are loading a huge String into memory and keep a reference to a substring. That can keep the whole string in memory (Java doesn't always allocate a new char array for substrings - since Strings are immutable, it simply reuses the original char array and remembers the offset and length).
Nothing triggered the GC so far. Some C++ developers believe GC is "evil" (anything that you don't understand must be evil, right?) so they configure Java not to run it unless absolutely necessary. This means the VM will eat memory until it hits the maximum, and then it will do one huge GC run.
build 25 is already pretty old. Try to update to the latest Java build (33, I think). The GC is one of the best tested parts of the VM but it does have bugs. Maybe you hit one.
Unless you see an OutOfMemoryError, you don't have a leak. We have an application which eats all the heap you give it. If it gets 16GB of RAM ("just to be safe"), it will use the whole 16GB because we cache what we can. You never see out of memory, because the cache will shrink as needed, but system admins routinely freak out: "oh god! oh god! It's running out of memory!" PANIC. No, it's not. Unless Java tells you so, it's not running out of memory. It's just using it efficiently.
Tuning the GC with command line options is one of the best ways to break it. Hundreds of people who know a lot more about the topic than you ever will have spent years making the GC efficient. You think you can do better? Good luck. -> Get rid of any "magic" command line options and calls to System.gc() and your problem might go away.

Try decreasing the heap size to 500 megabytes and see if the software starts garbage collecting or dies. Java isn't too fussy about using the memory given to it. You might also research GC tuning options, which can make the GC more proactive about cleaning stuff up.

String reallyLongString = "this is a really long String";
String tinyString = reallyLongString.substring(2, 3);
reallyLongString = null;
The JVM can't collect the memory allocated for the long string in the above case, since there's a reference to part of it.
If you're doing stuff with Strings and you're suffering from memory issues, this might be the cause of your grief.
Use tinyString = new String(reallyLongString.substring(2, 3)); instead.

There might not be a leak at all - a leak would be if the Strings were reachable. If you've allocated as much as 2GB to the application, there is no reason for the garbage collector to start freeing up memory until you are approaching that limit. If you don't want it taking any more than 500MB, then pass -Xmx512m when starting the JVM.
You could also try tuning the garbage collector to start cleaning up much earlier.

First of all, stop worrying about those Strings and char[]. In almost every Java application I have profiled, they are at the top of the memory consumer list. And in almost none of those applications were they the real problem.
If you have not received an OutOfMemoryError yet, but do worry that 2GB is too much for your Java process, then try to decrease the -Xmx value you pass to it. If it runs well with 512m or 1g, then problem solved, isn't it?
If you get OOM, then one more option you can try is to use Plumbr with your Java process. It is a memory leak discovery tool, so it can help you if there really is a memory leak.

Hadoop edge node Issues [duplicate]

I am getting the following error on execution of a multi-threading program
java.lang.OutOfMemoryError: Java heap space
The above error occurred in one of the threads.
To my knowledge, heap space is occupied by instance variables only. If this is correct, why did this error occur after running fine for some time, given that space for instance variables is allotted at the time of object creation?
Is there any way to increase the heap space?
What changes should I make to my program so that it will grab less heap space?
If you want to increase your heap space, you can use java -Xms<initial heap size> -Xmx<maximum heap size> on the command line. By default, the values are based on the JRE version and system configuration. You can find out more about the VM options on the Java website.
However, I would recommend profiling your application to find out why your heap size is being eaten. NetBeans has a very good profiler included with it. I believe it uses the jvisualvm under the hood. With a profiler, you can try to find where many objects are being created, when objects get garbage collected, and more.
1.- Yes, but it pretty much refers to the whole memory used by your program.
2.- Yes, see the Java VM options:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
E.g.
java -Xmx2g assigns a maximum of 2 gigabytes of RAM to your app.
But you should check whether you have a memory leak first.
3.- It depends on the program. Try to spot memory leaks; this question is too hard to answer in general. Lastly, you can profile using JConsole to try to find out where your memory is going.
You may want to look at this site to learn more about memory in the JVM:
http://developer.streamezzo.com/content/learn/articles/optimization-heap-memory-usage
I have found it useful to use visualgc to watch how the different parts of the memory model fill up, to determine what to change.
It is difficult to tell which part of memory is filling up, hence visualgc: you may want to change only the part that is having a problem, rather than just saying,
Fine! I will give 1G of RAM to the JVM.
Try to be more precise about what you are doing; in the long run you will probably find the program is better for it.
To determine where a memory leak may be, you can use unit tests: check how much memory is in use before the test and after, and if there is too big a change you may want to examine it. Note that you need to do the check while your test is still running.
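As a rough sketch of that idea (the threshold and the workload method here are made-up placeholders, not from the answer above):
public class MemoryCheckExample {
    // Hypothetical threshold, just for illustration.
    private static final long THRESHOLD_BYTES = 10L * 1024 * 1024; // 10 MB

    public static void main(String[] args) {
        long before = usedMemory();
        runWorkload(); // the code whose memory behaviour you want to observe
        long after = usedMemory();
        long growth = after - before;
        System.out.println("Heap growth: " + growth + " bytes");
        if (growth > THRESHOLD_BYTES) {
            System.out.println("Suspicious growth - worth profiling this code path.");
        }
    }

    private static long usedMemory() {
        // Only approximate: the GC may or may not have run in between the two samples.
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    private static void runWorkload() {
        // Placeholder workload.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100000; i++) {
            sb.append(i);
        }
    }
}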
You can get your heap memory size with the program below:
public class GetHeapSize {
    public static void main(String[] args) {
        // totalMemory() is the heap currently reserved by the JVM (not the -Xmx limit)
        long heapsize = Runtime.getRuntime().totalMemory();
        System.out.println("heapsize is :: " + heapsize);
    }
}
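If you also want the configured maximum and the currently free heap, the same Runtime object exposes them (a small additional sketch, not part of the original answer):
long max = Runtime.getRuntime().maxMemory();     // upper limit, e.g. the value set via -Xmx
long total = Runtime.getRuntime().totalMemory(); // heap currently reserved by the JVM
long free = Runtime.getRuntime().freeMemory();   // unused portion of the reserved heap
System.out.println("max=" + max + " total=" + total + " free=" + free);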
Then you can increase the heap size accordingly by using:
java -Xmx2g
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
To increase the heap size you can use the -Xmx argument when starting Java; e.g.
-Xmx256M
To my knowledge, heap space is occupied by instance variables only. If this is correct, why did this error occur after running fine for some time, given that space for instance variables is allotted at the time of object creation?
That means you are continuously creating more objects in your application over time. New objects are stored in heap memory, and that is the reason for the growth in heap memory.
The heap does not only contain instance variables; it stores all non-primitive data (objects). These objects' lifetimes may be short (a method block) or long (as long as the object is referenced somewhere in your application).
Is there any way to increase the heap space?
Yes. Have a look at this oracle article for more details.
There are two parameters for setting the heap size:
-Xms, which sets the initial and minimum heap size
-Xmx, which sets the maximum heap size
What changes should I make to my program so that it will grab less heap space?
It depends on your application.
Set the maximum heap memory as per your application requirement
Don't cause memory leaks in your application
If you find memory leaks in your application, find the root cause with the help of profiling tools like MAT, VisualVM, JConsole, etc. Once you find the root cause, fix the leaks.
Important notes from the Oracle article:
Cause: The detail message Java heap space indicates that an object could not be allocated in the Java heap. This error does not necessarily imply a memory leak.
Possible reasons:
Improper configuration (not allocating sufficient memory)
Application is unintentionally holding references to objects and this prevents the objects from being garbage collected
Applications that make excessive use of finalizers. If a class has a finalize method, then objects of that type do not have their space reclaimed at garbage collection time. If the finalizer thread cannot keep up with the finalization queue, then the Java heap could fill up and this type of OutOfMemoryError exception would be thrown.
On a different note, use better garbage collection algorithms (CMS or G1GC).
Have a look at this question for understanding G1GC
In most cases, the code is not optimized. Release objects that you think will not be needed any further. Avoid creating objects inside your loop on each iteration. Try to use caches. I don't know how your application works, but in programming, one rule of everyday life applies as well:
Prevention is better than cure. "Don't create unnecessary objects."
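For example, a common case of unnecessary object creation is building a String inside a loop; a small illustrative sketch of the usual fix:
// Creates a new intermediate String object on every iteration:
String report = "";
for (int i = 0; i < 10000; i++) {
    report += i + ",";
}

// Reuses one internal buffer instead:
StringBuilder sb = new StringBuilder();
for (int i = 0; i < 10000; i++) {
    sb.append(i).append(',');
}
String report2 = sb.toString();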
Local variables are located on the stack. Heap space is occupied by objects.
You can use the -Xmx option.
Basically, heap space is used up every time you allocate a new object with new and freed some time after the object is no longer referenced. So make sure that you don't keep references to objects that you no longer need.
No, I think you are thinking of stack space. Heap space is occupied by objects. The way to increase it is -Xmx256m, replacing the 256 with the amount you need on the command line.
To avoid that exception, if you are using JUnit and Spring, try adding this to every test class:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
I tried all of the above solutions, but nothing worked.
Solution: In my case the machine had 4 GB of RAM and its usage was already at 98%, so the required amount of memory wasn't available. Please check for this as well; if that is the issue, upgrading the RAM will fix it.
Hope this saves someone time.
In NetBeans, go to the 'Run' toolbar --> 'Set Project Configuration' --> 'Customize' --> 'Run' in the popped-up window --> 'VM Options' --> fill in '-Xms2048m -Xmx2048m'. It could solve the heap size problem.

Java: Clear memory between independent runs

I implemented a heuristic in Java that solves an optimization problem for a given input. The heuristic can run for thousands of iterations and create lots of objects of varying complexity.
In order to test it, I have thousands of test inputs. My main method takes all inputs and sequentially starts the heuristic for each input in a loop. The results are stored in a separate file for each input.
When I run the program, it always stops after producing 218 or 219 result files and throws an OutOfMemoryError. One time it says Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded and another time Exception in thread "main" java.lang.OutOfMemoryError: Java heap space.
My guess is that the program creates too many objects over time, until it runs out of memory when computing the 218th or 219th input. Every instance is computed in an independent run. Hence, clearing the memory and getting rid of all created objects after the result for an input is stored and before the next input is parsed should solve the problem. Is that correct? I heard that using System.gc() is bad practice, but what else would you recommend in my case?
Edit:
To specify what I want: instead of pressing "start" for each input, I implemented the loop to do that for me. However, it seems like it doesn't behave the same way, and it keeps old objects from previous runs. Can I change my Java code in such a way that it behaves similarly to starting the program anew for each input? Or do I have to use a shell script that starts my heuristic for each input separately to make it work?
I have never used any JVM parameters and it seems to me like they don't really tackle the problem.
Resolved: There was in fact a memory leak that I discovered and fixed. No System.gc() needed. Thanks for helping anyways!
Yes, leave GC handling to the JVM. You need to follow the steps below, in order:
Increase your heap size using the -Xmx parameter
Set a proper GC algorithm and parameters. If you already have GC parameters, try to tune them
Try using the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<path for heap dump> options when you start your JVM, so you get a heap dump when your JVM runs out of memory. Using the heap dump, you can use profilers like JProfiler/YourKit/jvisualvm etc. to investigate memory leaks and then rectify them.
First, when you start a JVM to run your tests, disable the GC overhead limit:
-XX:-UseGCOverheadLimit
I recommend this because you already know you're purposefully stressing the garbage collector, and you don't want it to warn you about GC overhead.
Second, take a look at how you can break up your tests better, in such a way that you're allowing objects from the previous test to be garbage collected. Don't keep active pointers to large structures of objects after each test completes.
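For instance, one way to structure such a loop so each run's objects become unreachable before the next one starts (a sketch; Heuristic, Input, Result and writeResult stand in for your own classes):
for (Input input : inputs) {
    // Everything allocated here is scoped to this iteration...
    Heuristic heuristic = new Heuristic();
    Result result = heuristic.solve(input);
    writeResult(input, result);
    // ...so after the iteration ends no live reference to the old
    // structures remains, and the GC is free to reclaim them.
}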
Third, if you still need more memory due to exceeding Java heap space, use:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
If you know you'll be using the memory anyhow, it works best to set both of these to the same value, which prevents thrashing during execution.
Don't bother explicitly calling System.gc(), it's ultimately pointless because garbage collection is always going to happen when it's necessary.
Fourth, another JVM setting which could be useful in your circumstances:
-XX:NewRatio=<n> Ratio of old/new generation sizes. The default value is 2.
It's normally not recommended to set this lower than 2 (2/3 old, 1/3 new), but in your situation I might suggest you try setting this to 1 (1/2 old, 1/2 new).
See also GC overhead limit exceeded and check out Java HotSpot VM Options.
Give this a try:
http://javaandroidandrest.blogspot.de/2012/06/wait-for-jvm-garbage-collector.html
From the site:
Using functions like System.gc(); or Runtime.getRuntime().gc(); only suggests to the JVM that you want to run the garbage collector.
I found a way on the internet not to force the garbage collector, but to wait until the garbage collector has run.
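The usual trick behind that (a minimal sketch, assuming a pattern along the lines the post describes, not copied from it): hold only a WeakReference to a sentinel object, request a GC, and poll until the reference has been cleared.
import java.lang.ref.WeakReference;

public class WaitForGc {
    public static void waitForGc() throws InterruptedException {
        // No strong reference to the sentinel survives this line.
        WeakReference<Object> sentinel = new WeakReference<Object>(new Object());
        System.gc(); // still only a request
        while (sentinel.get() != null) {
            Thread.sleep(10); // keep waiting until a collection has actually cleared it
        }
    }
}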

Forcing Java virtual machine to run garbage collector [duplicate]

I have a complex Java application running on a large dataset. The application performs reasonably fast, but as time goes on it seems to eat lots of memory and slow down. Is there a way to run the JVM garbage collector without restarting the application?
No, you can't force garbage collection.
Even using
System.gc();
you can only make a request for garbage collection; it is up to the JVM whether it actually does it.
Also, garbage collectors are smart enough to collect unused memory when required, so instead of forcing garbage collection you should check whether you are handling objects in a wrong way.
If you are handling objects in a wrong way (like keeping references to unnecessary objects), there is hardly anything the JVM can do to free the memory.
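A typical example of "handling objects in a wrong way" is a collection that only ever grows (a hypothetical sketch, not from the answer above):
import java.util.HashMap;
import java.util.Map;

public class LeakyCache {
    // Grows forever: every entry stays strongly reachable,
    // so the GC can never reclaim the values.
    private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

    public static byte[] load(String key) {
        byte[] data = new byte[1024 * 1024]; // pretend this came from disk
        CACHE.put(key, data);                // nothing is ever removed
        return data;
    }
}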
From the doc:
Calling the gc method suggests that the Java Virtual Machine expend effort toward recycling unused objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to reclaim space from all discarded objects.
Open bug regarding the System.gc() documentation:
The documentation for System.gc() is extremely misleading and fails to make reference to the recommended practice of never calling System.gc().
The choice of language leaves it unclear what the behaviour would be when System.gc() is called and what external factors will influence the behaviour.
A few useful links to visit when you think you should force the JVM to free up some memory:
1. How does garbage collection work
2. When does System.gc() do anything
3. Why is it bad practice to call System.gc()?
They all say:
1. You don't have control over GC in Java; even System.gc() doesn't guarantee it.
2. It is also bad practice, as forcing it may have an adverse effect on performance.
3. Revisit your design and let the JVM do its work :)
You should not rely on System.gc() - if you feel like you need to force the GC to run, it usually means that there is something wrong with your code/design. The GC will run and clear your unused objects if they are ready to be collected - please verify your design and think more about memory management, and look as well for loops in object references.
The
System.gc()
call in Java suggests to the VM that it run garbage collection, though it doesn't guarantee that it will actually do it. Nevertheless, it is the best option you have. As mentioned in other responses, the jvisualvm utility (present in the JDK since JDK 6 update 7) provides garbage collection functionality as well.
EDIT:
Your question whetted my appetite for the topic and I came across this resource:
oracle gc resource
The application performs reasonably fast but as time goes on it seems to eat lots of memory and slow down.
These are classic symptoms of a Java memory leak. It is likely that somewhere in your application there is a data structure that just keeps growing. As the heap gets close to full, the JVM spends an increasing proportion of its time running the GC in a (futile) attempt to claw back some space.
Forcing the GC won't fix this, because the GC can't collect the data structure. In fact forcing the GC to run just makes the application slower.
The cure for the problem is to find what is causing the memory leak, and fix it.
Performance gain/drop depends on how often you need garbage collection, how much memory your JVM has, and how much your program needs.
There is no certainty of garbage collection when you call System.gc() (it's just a hint to the JVM), but there is at least a probability. With a large enough number of calls, you can measure a statistically derived performance multiplier, though only for your particular system setup.
The graph below showed an example program's memory consumption across executions; the JVM was given 1GB (no gc), 1GB (gc), 3GB (gc), and 3GB (no gc) heaps respectively for each trial.
At first, when the JVM was given only 1GB of memory while the program needed 3.75GB, it took more than 50 seconds for the producer thread pool to complete its job, because having less garbage management led to a poor object creation rate.
The second example is about 40% faster, because System.gc() is called between each production of 150MB of object data.
In the third example, the JVM is given 3GB of memory space while keeping the System.gc() calls on. More memory gave more performance, as expected.
But when I turned System.gc() off in the same 3GB environment, it was faster!
Even if we cannot force it, we can get some percentage gain or loss in performance by trying System.gc(), if we try for long enough. At least on my Windows 7 64-bit operating system with the latest JVM.
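A rough sketch of that kind of trial (sizes, iteration count, and structure are illustrative, not the author's actual benchmark):
public class GcTimingTrial {
    public static void main(String[] args) {
        boolean callGc = args.length > 0 && Boolean.parseBoolean(args[0]);
        long start = System.currentTimeMillis();
        for (int i = 0; i < 25; i++) {
            byte[] chunk = new byte[150 * 1024 * 1024]; // ~150 MB of garbage per round
            chunk[0] = 1;                               // touch it so it is actually used
            if (callGc) {
                System.gc();                            // only a hint; the effect varies per JVM
            }
        }
        System.out.println("Elapsed ms: " + (System.currentTimeMillis() - start));
        // Run with different -Xmx values and with/without the gc flag to compare.
    }
}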
The garbage collector runs automatically. You can't force the garbage collector.
I do not suggest that you do this, but to force the garbage collector to run from within your Java code, you can simply use up all the available memory. This works because the garbage collector will run before the JVM throws an OutOfMemoryError...
try {
    List<Object> tempList = new ArrayList<Object>();
    while (true) {
        tempList.add(new byte[Integer.MAX_VALUE]);
    }
} catch (OutOfMemoryError OME) {
    // OK, Garbage Collector will have run now...
}
My answer is going to be different than the others but it will lead to the same point.
Explanation:
YES, it is possible to force the garbage collector with two methods used at the same time and in the same order; these are:
System.gc();
System.runFinalization();
These two method calls ask the garbage collector to execute the finalize() method of any unreachable object and free the memory. However, the performance of the software will drop considerably, because the garbage collector runs in its own thread, which cannot be controlled, and depending on the algorithm used by the garbage collector it can lead to unnecessary extra processing. It is better to check your code, because if you need the garbage collector to behave this way for things to work, something is probably broken.
NOTE: just keep in mind that this only works if the finalize() method does not re-assign a reference to the object; if that happens, the object stays alive and is "resurrected", which is technically possible.

Simple Class - Is it a Memory Leak?

I have a very simple class which has one integer variable. I just print the value of the variable 'i' to the screen, increment it, and make the thread sleep for one second. When I run a profiler against this method, the memory usage increases slowly even though I'm not creating any new variables. After executing this code for around 16 hours, I see that the memory usage has increased to 4 MB (initially 1 MB when I started the program). I'm a novice in Java. Could anyone please help explain where I am going wrong, or why the memory usage is gradually increasing even when there are no new variables created? Thanks in advance.
I'm using netbeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args)
{
    try
    {
        int i = 1;
        while (true)
        {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    }
    catch (InterruptedException ex)
    {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started : 1569852 Bytes.
Memory usage after executing the loop for 16 hours : 4095829 Bytes
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as dictated by the particular strategy in use.
You can also try to call System.gc().
The objects are probably created in the two Java core functions.
It's due to the text displayed in the console, and the size of the integer (a little bit).
Java print functions use 8-bit ASCII; therefore 56000 prints of a number, at a byte per character, will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to make a snapshot of your application at the start and another one after some time. With VisualVM you can do this and compare these to snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and the program continues to work away happily, then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
This does not seem to be a leak, as the profiler's graphs also show. The graph drops sharply at certain intervals, i.e. when a GC is performed. It would have been a leak had the graph kept climbing steadily. The heap space remaining in use after that must be from Thread.sleep() and also (as mentioned in one of the answers above) from some of the profiler's own code.
You can try running VisualVM, located at %JAVA_HOME%/bin, and analyzing your application there. It also gives you the option of performing GC at will, among many other options.
I noted that the more features of VisualVM I used, the more memory was consumed (up to 10MB). So this increase has to be from your profiler as well, but it still is not a leak, as the space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.

Question about java garbage collection

I have this class and I'm testing insertions with different data distributions. I'm doing this in my code:
...
AVLTree tree = new AVLTree();
//insert the data from the first distribution
//get results
...
tree = new AVLTree();
//insert the data from the next distribution
//get results
...
I'm doing this for 3 distributions. Each one should be tested an average of 14 times, with the 2 lowest/highest values removed before computing the average. This should be done 2000 times, each time with 1000 more elements. In other words, it goes 1000, 2000, 3000, ..., 2000000.
The problem is, I can only get as far as 100000. When I tried 200000, I ran out of heap space. I increased the available heap space with -Xmx in the command line to 1024m and it didn't even complete the tests with 200000. I tried 2048m and again, it wouldn't work.
What I'm thinking is that the garbage collector isn't getting rid of the old trees once I do tree = new AVLTree(). But why? I thought that the elements from the old trees would no longer be accessible and their memory would be cleaned up.
The garbage collector should have no trouble cleaning up your old tree objects, so I can only assume there's some other allocation that you're doing that's not being cleaned up.
Java has a good tool to watch the GC in progress (or not in your case), JVisualVM, which comes with the JDK.
Just run that and it will show you which objects are taking up the heap, and you can both trigger and see the progress of GC's. Then you can target those for pools so they can be re-used by you, saving the GC the work.
Also look into this option, which will probably stop the error you're getting from killing the program, so your program will finish, but it may take a long time because your app will fill up the heap and then run very slowly.
-XX:-UseGCOverheadLimit
Which JVM are you using, and what JVM parameters have you used to configure GC?
Your explanation suggests there is a memory leak in your code. If you have a tool like JProfiler, use it to find out where the memory leak is.
There's no reason those trees shouldn't be collected, although I'd expect that before you ran out of memory you would see long pauses as the system ran a full GC. As it's been noted here that that's not what you're seeing, you could try running with flags like -XX:+PrintGC, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps to get some more information on exactly what's going on, along with perhaps some sort of running count of roughly where you are. You could also explicitly tell the garbage collector to use a different garbage-collection algorithm.
However, it still seems unlikely to me. What other code is running? Is it possible there's something in the AVLTree class itself that's keeping its instances from being GC'd? What about manually logging finalize() on that class to ensure that (some of them, at least) are collectible (e.g. make a few and manually call System.gc())?
GC params here, a nice ref on garbage collection from sun here that's well worth reading.
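A minimal sketch of the finalize() logging idea (AVLTree is the asker's class; the override and main method are purely for debugging and should be removed afterwards):
public class AVLTree {
    // ... existing tree implementation ...

    @Override
    protected void finalize() throws Throwable {
        System.out.println("AVLTree instance collected");
        super.finalize();
    }

    // Quick check: create a few, drop the references, request a GC.
    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) {
            new AVLTree();
        }
        System.gc();        // only a suggestion, but usually enough for a quick check
        Thread.sleep(1000); // give the finalizer thread a moment to run
    }
}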
The Java garbage collector isn't guaranteed to run as soon as an object becomes unreachable. So if you're writing code that is just creating and dropping a lot of objects, it's possible to expend all of the heap space before the GC has had a chance to run. Alternatively, Pax's suggestion that there is a memory leak in your code is also a strong possibility.
If you are only doing benchmarking, then you may want to use the Java GC request (System.gc(), in the System class) between tests, or even re-run your program for each distribution.
We noticed this in a server product. When making a lot of tiny objects that quickly get thrown away, the garbage collector can't keep up. The problem is more pronounced when the tiny objects have pointers to larger objects (e.g. an object that points to a large char[]). The GC doesn't seem to realize that if it frees up the tiny object, it can then free the larger object. Even when calling System.gc() directly, this was still a huge problem (both in 1.5 and 1.6 VMs)!
What we ended up doing and what I recommend to you is to maintain a pool of objects. When your object is no longer needed, throw it into the pool. When you need a new object, grab one from the pool or allocate a new one if the pool is empty. This will also save a small amount of time over pure allocation because Java doesn't have to clear (bzero) the object.
If you're worried about the pool getting too large (and thus wasting memory), you can either remove an arbitrary number of objects from the pool on a regular basis, or use weak references (for example, using java.util.WeakHashMap). One of the advantages of using a pool is that you can track the allocation frequency and totals, and you can adjust things accordingly.
We're using pools of char[] and byte[], and we maintain separate "bins" of sizes in the pool (for example, we always allocate arrays of size that are powers of two). Our product does a lot of string building, and using pools showed significant performance improvements.
Note: In general, the GC does a fine job. We just noticed that with small objects that point to larger structures, the GC doesn't seem to clean up the objects fast enough especially when the VM is under CPU load. Also, System.gc() is just a hint to help schedule the finalizer thread to do more work. Calling it too frequently causes a significant performance hit.
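A stripped-down sketch of that kind of pool, assuming power-of-two size bins (the real implementation described above was of course more involved):
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class ByteArrayPool {
    // One bin (a stack of reusable arrays) per power-of-two size.
    private final Map<Integer, Deque<byte[]>> bins = new HashMap<Integer, Deque<byte[]>>();

    public synchronized byte[] acquire(int minSize) {
        int size = 1;
        while (size < minSize) {
            size <<= 1; // round the request up to a power of two
        }
        Deque<byte[]> bin = bins.get(size);
        if (bin != null && !bin.isEmpty()) {
            return bin.pop();  // reuse an existing array (note: contents are not zeroed)
        }
        return new byte[size]; // pool empty for this size: allocate a new one
    }

    public synchronized void release(byte[] array) {
        Deque<byte[]> bin = bins.get(array.length);
        if (bin == null) {
            bin = new ArrayDeque<byte[]>();
            bins.put(array.length, bin);
        }
        bin.push(array);       // make the array available for reuse
    }
}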
Given that you're just doing this for testing purposes, it might just be good housekeeping to invoke the garbage collector directly using System.gc() (thus forcing it to make a pass). It won't help you if there is a memory leak, but if there isn't, it might buy you back enough memory to get through your test.
