I have written a program in Java which has 5 threads. In run() I have a while loop that iterates a very large number of times.
While the program is running, it gradually eats RAM, bringing the program to a crawl. Is there any way I can stop it eating all my RAM?
Edit:
Actually, just thinking about it, it is probably because the loop is creating lots of objects. Should I set those objects to null at the end of the while loop?
If you are creating new objects and saving them in some collection inside your loop, that will fill up memory very quickly.
What you need to do is make sure you aren't holding on to any objects you no longer need.
Also, optimize the logic in your threads to consume as little memory as possible.
While the program is running, it is gradually eating ram bringing the program to a crawl. Is there anyway I can stop it eating all my ram?
Sure. Change your code so that it uses less RAM.
Specific to @Michael Borgwardt's answer: try to minimize the creation of new objects. If you can reuse the same object instead of creating a new one every time, you can save some memory there.
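As a small illustration of that reuse idea (a sketch of my own, not code from any of the posts): a single StringBuilder can be cleared and reused across iterations of a hot loop instead of allocating a fresh one each time.

```java
public class ReuseExample {
    public static void main(String[] args) {
        // One StringBuilder, reused for every iteration.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            sb.setLength(0);              // clear contents without reallocating
            sb.append("row ").append(i);  // build this iteration's value
            System.out.println(sb);
        }
    }
}
```

Reuse like this only matters in genuinely hot loops; in ordinary code, letting short-lived objects die young is exactly what the GC is optimized for.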
But as others have said, without seeing your code, we're just guessing.
You might need a memory analyzer to understand what is "eating RAM". There are many tools available; one commercial tool is JProfiler.
A simple, free option is the profiler supplied with Java VisualVM. If you have JDK 6 installed, you probably already have this program on your computer. Run C:\Program Files\Java\jdk1.6.0_16\bin\jvisualvm.exe, connect to the running Java process, open the Profiler tab and press the Memory button. You can now see what objects you have and how much memory they are using. Press Refresh to update the view.
It's normal for each of your threads to have an infinite while loop, but simply having the loops does not by itself use more RAM.
new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                // do some work
                Thread.sleep(100); // something here must throw InterruptedException for the catch to compile
            }
        } catch (InterruptedException ex) {
            // interrupted: let the thread exit
        }
    }
}).start();
You're doing something to consume the RAM, so you must debug your application and find the cause of the increased RAM usage.
Update:
You don't have to set the objects to null at the end of the loop: if you're not holding on to a reference to those objects, the garbage collector will clean them up. What are you doing with the objects after you create them? How much RAM are you using up?
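To illustrate the point (a minimal sketch, not the asker's code): whether the GC can reclaim the loop's objects depends only on reachability, not on explicit nulling.

```java
import java.util.ArrayList;
import java.util.List;

public class Reachability {
    public static void main(String[] args) {
        // Fine: 'scratch' goes out of scope at the end of each iteration,
        // so every array becomes garbage without any explicit nulling.
        for (int i = 0; i < 1_000; i++) {
            byte[] scratch = new byte[1024];
            scratch[0] = 1; // do some work
        }

        // Leaky: every array stays reachable through the list, so the GC
        // can never reclaim any of them -- this is what "eats RAM".
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            retained.add(new byte[1024]);
        }
        System.out.println("still reachable: " + retained.size());
    }
}
```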
I'm about to start using a 3rd party closed-source library to load a bunch of data and wanted to check how fast it is and how much memory it requires to load one 'set' of data. I wrote this simple harness to invoke a data load to time it and used YourKit to have a quick look at memory usage and delve in to the CPU time.
I'm running it on Windows 7, using Eclipse on JDK8 with no VM args.
public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // There are a few more calls involved, but not much
    BlackBoxDataProvider bd = new BlackBoxDataProvider("c:\\thedata");
    BlackBoxData data = bd.loadTheData();
    System.out.println(System.currentTimeMillis() - start + "ms");
    // Keep the application alive so I can have a quick look at memory usage
    while (true) {
        Thread.sleep(1000);
    }
}
Here's the YourKit snapshot of memory after the load is complete:
I then used YourKit to "Force" Garbage Collection and this happened:
Obviously it's not a real life scenario because I'm stuck inside the main method, on the main thread, so some of my references won't be cleaned up, but I can't figure out why the memory allocation would keep increasing.
Every time I click 'Force System GC', the allocation increases. I got up to 11.9GB before it stopped increasing.
Why is this happening?
System.gc() returns once all objects have been scanned. Objects implementing the finalize() method are added to a queue, to be cleaned up later; this means those objects cannot be reclaimed yet (nor can the queue nodes which hold them), i.e. the act of triggering a GC can temporarily increase memory consumption. This is what might be happening in your case.
In short, not all objects can be cleaned up in one cycle.
I'm trying to write a code that will have a minimal impact on resources and I have come across GC behavior I don't understand.
Apparently Strings are not cleared from memory immediately, even though they are no longer in use.
for (int i = 0; i < 999999999; i++)
    System.out.println("Test");
Memory usage graph
According to the graph, I assume that a new String object is created on every run of the loop but is not cleared automatically on the next run. If that is the case, I would like to know why it happens; and if I'm misreading the situation, I would like to know what is really happening "behind the curtains".
When I add a sleep to the code above, the graph becomes stable. What is the reason for that?
for (int i = 0; i < 999999999; i++) {
    System.out.println("Test");
    try {
        Thread.sleep(1);
    } catch (Exception e) {}
}
Stable graph
Also, I have a few questions about the given case:
Can the GC be forced to be more aggressive? I mean shortening object lifetimes, not reducing the memory allocated to the JVM.
If I assign null to the variable, will it affect the time until it's cleared by the GC?
What is the correct way to work with Strings when I need to run a large number of regex matches on them?
What is the best way to declare a String object "obsolete" so the GC will clear it?
Does the above situation occur because Java does an automatic intern for Strings and if so is there a way to cancel it?
Thank you very much!
I assume that a new String object is created on every run of the loop
No; if it were creating a new String on each iteration, you would get far more garbage.
At this garbage rate it could be the profiler that is allocating some objects.
A String literal is created only once (per JVM).
but it is not cleared automatically on the next run of the loop
Correct; even if it were created on each iteration, the GC only runs when it needs to. Running it on every iteration would be insanely expensive.
When I add Sleep to the code I presented above the graph becomes stable, what is the reason for that?
You have dramatically slowed down your application.
Can GC be forced to be more aggressive?
You can make the Eden space much smaller, but this would slow down your application.
If I plug in a null value to the variable will it affect the time until it's cleared by the GC?
No, this rarely does anything.
What is the correct way to work with Strings when I need to run a large number of regex matches on them
Regexes create a lot of garbage. If you want to reduce allocations and speed up your application, avoid using them.
I recently sped up an application by 3x by replacing some commonly used regexes with direct String handling.
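As an illustration of that kind of replacement (my own sketch, not the answerer's actual change): a digits-only check done with a regex versus a plain character loop. The regex version allocates a Matcher on every call; the direct version allocates nothing.

```java
import java.util.regex.Pattern;

public class DigitsCheck {
    // Compiled once, but still allocates a Matcher per call.
    private static final Pattern DIGITS = Pattern.compile("\\d+");

    static boolean isDigitsRegex(String s) {
        return DIGITS.matcher(s).matches();
    }

    // Direct String handling: no intermediate objects at all.
    static boolean isDigitsDirect(String s) {
        if (s.isEmpty()) return false;
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') return false;
        }
        return true;
    }

    public static void main(String[] args) {
        for (String s : new String[] {"12345", "12a45", ""}) {
            // Both implementations must agree on every input.
            System.out.println("\"" + s + "\" -> "
                    + isDigitsDirect(s) + " / " + isDigitsRegex(s));
        }
    }
}
```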
What is the best way to declare a String object "obsolete" so the GC will clear it?
Use it in a limited scope. When the scope ends so does the reference to it and it can be GCed.
Does the above situation occur because Java does an automatic intern
Once a String is interned it is not recreated.
for Strings and if so is there a way to cancel it?
Sure: force it to create a new String each time. This of course creates more garbage and is much slower (and the code is longer), but you can do it if you want.
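The literal-interning behaviour described above can be observed directly (a small demonstration of my own):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "Test";               // literal: created once and interned
        String b = "Test";               // same interned instance as 'a'
        System.out.println(a == b);      // true: identical object

        String c = new String("Test");   // forces a fresh String object
        System.out.println(c == a);      // false: distinct object
        System.out.println(c.equals(a)); // true: same character content
        System.out.println(c.intern() == a); // true: intern() returns the pooled instance
    }
}
```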
The Garbage Collector collects when it's time to collect, more or less.
Yes, depending on which collector you are using. There are literally dozens of VM properties you can set, some of them influencing each other.
I don't think it does in newer JDKs.
Normally you do not care. When it comes to GC, it's more about not loading tons of gigabytes of data into memory. One specialty of Strings is interning, but Strings are GC'd like other objects too.
When there's no reference to the string/intern anymore (when you exit the braces).
No, the situation occurs because Java's GCs work this way...
I can explain the GC effects based on CMS/ParNew (since I know this combo best); it works like this:
The heap is split into two regions (I'll exclude PermGen for now):
Young and Old.
Young is split into 'eden' and 'copy' (or survivor) spaces.
When you create a new object, it goes into Young->Eden. At some point Eden reaches its maximum size; unreachable objects are then discarded, and objects still having references are copied to Young->Copy.
As the program keeps running, Young->Copy reaches its maximum size too, and the live objects are copied again into the other Young->Copy space.
At some point that can't go on any more, so some objects are moved from Young->Copy to Old, depending on a copy counter (I think). A similar story applies to the old heap.
So what can you tune? First of all, you normally optimize either for throughput (batch jobs) or for low latency (web pages); the ParNew/CMS combo is used for low latency.
Since I know ParNew/CMS best, I'll explain what you can consider tuning first:
You can tune the max memory (more memory means more to manage; the less memory an application needs to run, the better, in general)
You can tune the heap ratio between young and old
You can tune the ratios between eden and copy within young
You can tune the time when CMS starts its collection cycle
And then there's a lot more. From my personal experience, for large applications we used in general the following settings:
Fix min and max memory to the same size (no change of max heap)
New-to-Old ratio of about 1:4 to 1:7
Disable System.gc()
Log a lot of GC information
Put an alert on OutOfMemoryError
Do a weekly analysis of the logs and decide on tuning parameters (only one parameter at a time ;)
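A command line matching those settings might look like this. The flag names are real HotSpot options from the CMS/ParNew era, but the sizes and ratios are invented for illustration and "MyApp" is a placeholder, not a recommendation.

```shell
# Fixed min/max heap, young:old about 1:4, CMS/ParNew collectors,
# System.gc() disabled, verbose GC logging, heap dump on OutOfMemoryError.
java -Xms4g -Xmx4g \
     -XX:NewRatio=4 \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:+DisableExplicitGC \
     -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -XX:+HeapDumpOnOutOfMemoryError \
     MyApp
```

Note that CMS was deprecated and later removed in modern JDKs, so these exact flags only apply to the JDK generations this discussion is about.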
If you really want to know what's behind everything, I'd recommend reading a book, because there's really, really, really a lot going on.
I have a very simple class with one integer variable. I just print the value of the variable 'i' to the screen, increment it, and make the thread sleep for one second. When I run a profiler against this method, the memory usage increases slowly, even though I'm not creating any new variables. After executing this code for around 16 hours, I see that the memory usage has increased to 4 MB (it was 1 MB when I started the program). I'm a novice in Java. Could anyone please explain where I am going wrong, or why the memory usage is gradually increasing when no new variables are created? Thanks in advance.
I'm using NetBeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args)
{
    try
    {
        int i = 1;
        while (true)
        {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    }
    catch (InterruptedException ex)
    {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started : 1569852 Bytes.
Memory usage after executing the loop for 16 hours : 4095829 Bytes
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as determined by the particular strategy.
You can also try to call System.gc().
The objects are probably created inside the two Java core functions being called.
It's due to the text displayed in the console, and the size of the integer (a little bit).
Java's print functions use 8-bit characters, therefore 56,000 prints of a number, at one byte per character, will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to make a snapshot of your application at the start and another one after some time. With VisualVM you can do this and compare these to snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and the program continues to work away happily, then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
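A sketch of such an invocation ("MyApp" is a placeholder class name, and the 32 MB figure is arbitrary):

```shell
# Cap the heap at a deliberately small size: a genuine leak will hit
# OutOfMemoryError quickly, while normal garbage churn keeps running.
java -Xmx32m MyApp
```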
This does not seem to be a leak, as the profiler's graphs also show. The graph drops sharply at certain intervals, i.e. when GC is performed; it would have been a leak had the graph kept climbing steadily. The heap space remaining after that must be used by Thread.sleep() and also (as mentioned in one of the answers above) by some of the profiler's own code.
You can try running VisualVM located at %JAVA_HOME%/bin and analyzing your application therein. It also gives you the option of performing GC at will and many more options.
I noted that the more features of VisualVM I used, the more memory was consumed (up to 10 MB). So this increase has to come from your profiler as well, but it still is not a leak, as the space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.
I just encountered the following code (slightly simplified):
/* periodically requests garbagecollect to improve memory usage and
   garbage collect performance under most JVMs */
static class GCThread implements Runnable {
    public void run() {
        while (true) {
            try {
                Thread.sleep(300000);
            } catch (InterruptedException e) {}
            System.gc();
        }
    }
}

Thread gcThread = new Thread(new GCThread());
gcThread.setDaemon(true);
gcThread.start();
I respect the author of the code, but I no longer have easy access to him to ask him to defend the assertion in the comment at the top.
Is this true? It very much goes against my intuition that this little hack should improve anything; I would expect the JVM to be much better equipped to decide when to perform a collection.
The code is running in a web application inside IBM WebSphere on z/OS.
It depends.
The JVM can completely ignore System.gc() so this code could do absolutely nothing.
Secondly, a GC has a cost. If your program wouldn't otherwise have done a GC (say it doesn't generate much garbage, or it has a huge heap and never needs to collect), then this code is pure added overhead.
If the program would normally run with just minor GCs and this code causes a major GC, you will have a negative impact.
All in all, this kind of optimisation makes absolutely no sense whatsoever unless you have concrete evidence that it provides benefit and you would need to re-evaluate that evidence every time the program materially changed.
I share your assumption too. If this were really an optimization, it would have found its way into the JVM. Calling the garbage collector should be avoided; it can even have a negative effect (because you are "disturbing" the JVM).
JVMs will probably have a setting for the GC interval; see here for Sun's. Hardcoding such a value is rather questionable for anything, and especially for garbage collection.
Maybe it could be a good thing if your application could time the calls so that GC occurs when the application has nothing useful to do, so the memory is clean whenever the next load peak comes. But that is not what this loop does (it simply fires every 5 minutes), so I would advise against it.
But I invite you to test it - run the application with and without this loop for similar work volumes and measure which takes more time (and maybe total memory, if important). Repeat the test some times. Maybe we get some surprising insights.
I am running a JBoss server with my own classes deployed. I have started performing some operations in my application, and I would like to know the memory used by my application before and after performing those operations. Please support me in this regard.
By using MemoryMXBean (retrieved by calling ManagementFactory.getMemoryMXBean()), as well as Runtime.getRuntime()'s methods: .totalMemory(), .maxMemory() and .freeMemory().
Note that this is not an exact art: while creating a new object, other temporary ones may be allocated, which will not give you an accurate measurement. And as we know, garbage collection in Java is not guaranteed, so you can't necessarily rely on it to eliminate dead objects before you measure.
If you research, you'll see that most code that attempts to do these measurements will have loops of Runtime.gc() calls and sleeps etc to try and ensure that the measurement is accurate. And this will only work on certain JVM implementations...
On an app server/deployed application, you will likely only get gross measurements/usage changes as the heap is allocated and the gc fires, but it should be enough. [I'm presuming that you wouldn't implement gc()'s and sleeps in production code :)]
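A sketch of the measurement under those caveats (the class and method names are mine, and the gc-and-sleep loop is the best-effort trick described above, not something to ship in production code):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class MemoryProbe {
    // Best-effort used-heap reading: ask the GC to run a few times,
    // give it a moment, then sample. None of this is guaranteed exact.
    static long usedHeap() throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {
            System.gc();
            Thread.sleep(100);
        }
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) throws InterruptedException {
        long before = usedHeap();

        // The "operation" to measure -- here just a 10 MB allocation.
        byte[] work = new byte[10 * 1024 * 1024];

        long after = usedHeap();
        System.out.println("approx. bytes used by operation: " + (after - before));

        // The MXBean view of the same heap, for comparison.
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        System.out.println("heap used (MXBean): " + mx.getHeapMemoryUsage().getUsed());

        // Keep 'work' reachable so it isn't collected before the second sample.
        System.out.println("work.length = " + work.length);
    }
}
```

Expect the two readings to agree only roughly; on an app server the background churn described above will dominate small operations.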
Get the free memory before performing the operation with Runtime.getRuntime().freeMemory(), then again after finishing the operation, and the difference is the memory used by your operation.
You may find the results you get are inconclusive. The GC cleans up used memory at unpredictable points in the background, so if you run the same operations many times you will get different results; you can even appear to have more memory free after performing an operation.