I've written a program in Java that detects when the computer it's being run on is idle. When the idle time is reset (in other words, the mouse or keyboard is used), the program locks the computer. This program is designed to run when the computer starts and continue to run while the machine is on. My problem is that the program takes up more and more space as it runs longer. I don't see any reason why it should; there's nothing like an ArrayList that's being added to constantly. The program "expands" in memory by about 10 megabytes per hour. Is there some sort of garbage collection I should be doing?
Try setting the heap size to a lower value; the garbage collector should then kick in earlier. Manually calling System.gc() from time to time may also help. If this results in an OutOfMemoryError after a while, or the memory still increases constantly, then you really do have a memory leak somewhere.
It doesn't sound like you even have a problem. 10 MB really isn't that large. It could be that the garbage collector simply hasn't "decided" to run in a while. You can try to call the GC directly by calling System.gc(), but really, I wouldn't worry too much unless you're running out of memory or having performance issues.
Any time your program uses the new operator, the runtime allocates memory that may not be freed until the garbage collector decides it is time to reclaim space. So even if you are not "leaking" memory by adding to a collection that is never cleared, you are still allocating, and your usage will grow over time between collections.
If memory consumption is a concern, consider eliminating calls to new (e.g. by reusing existing objects) or tuning the JVM's heap size settings so that the garbage collector runs more often.
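For instance, here is a rough way to check what heap the running JVM actually has; the -Xms/-Xmx values in the comment are arbitrary placeholders, not recommendations:

// Launch with something like: java -Xms16m -Xmx64m HeapReport
// (values are illustrative; pick ones that suit your application)
public class HeapReport
{
    public static void main(String[] args)
    {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap:     " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("current heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free in heap: " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}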
I am testing heap usage in a Java application running on JDK 1.6, using VisualVM to monitor the heap. I saw a maximum heap usage of around 500 MB for a few minutes. I then used the "Perform GC" option, which calls System.gc(). The first time I used it, the heap dropped to 410 MB; using it again brought it to 130 MB, and the next time to 85 MB. I made all four calls back to back, without any interval. Why doesn't a single call to System.gc() bring the heap down to 85 MB right away? Is there some other reason behind this, or should I try a different approach?
System.gc() returns once all objects have been scanned.
An object that implements finalize() is finalized after the GC has determined it is unreachable. Most objects don't implement this method, but those that do are added to a queue to be cleaned up later by the finalizer thread. This means those objects (and the queue nodes which hold them) cannot be reclaimed yet, i.e. the act of triggering a GC can temporarily increase memory consumption.
Additionally, there may be SoftReferences to objects, which may or may not be cleared by a GC; the assumption is that these should only be cleared when little other memory could be reclaimed.
In short, not all objects can be cleaned up in one cycle.
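If you want to see this for yourself, here is a rough sketch (the class names are mine, and the sleep is only there to give the finalizer thread a chance to run; the exact numbers will vary by JVM):

import java.lang.ref.SoftReference;

public class GcCycles
{
    static class Finalizable
    {
        private final byte[] payload = new byte[1024 * 1024]; // ~1 MB held by this object

        @Override
        protected void finalize()
        {
            // Merely having this method forces the object onto the finalizer
            // queue before its memory can be reclaimed.
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        Object strong = new Finalizable();
        SoftReference<byte[]> soft = new SoftReference<byte[]>(new byte[1024 * 1024]);

        strong = null;     // now unreachable, but it must be finalized first
        System.gc();       // a first cycle can only queue it for finalization
        Thread.sleep(100); // give the finalizer thread a chance to run
        System.gc();       // a later cycle can actually reclaim the memory

        // The softly referenced array may well survive both collections
        // unless the JVM is short of memory.
        System.out.println("soft reference cleared: " + (soft.get() == null));
    }
}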
System.gc() requests that the JVM start garbage collection; if you expect GC to happen the instant you call System.gc(), that is a misconception. Calling it multiple times will not change this, and you cannot map a given System.gc() call to a particular collection. No matter how many times you call System.gc(), the JVM will collect only when it is ready to do so. What may be happening is that the heap does shrink after the first System.gc(), just not immediately: the collection triggered by the first call may still be finishing in the background while your code is already reaching the third System.gc() statement.
If you are sure that only multiple System.gc() calls reduce the heap size, then you need to check which objects are being created in the JVM between the first and last call; other threads may be creating objects in the meantime.
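As a crude way to observe this (the numbers are approximate, and the JVM is free to honour the request later or not at all):

public class GcRequestDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory();

        System.gc();        // only a request; the collection may complete later
        Thread.sleep(500);  // give the collector (and finalizer thread) a moment

        long after = rt.totalMemory() - rt.freeMemory();
        System.out.println("used before: " + before / 1024 + " KB, after: " + after / 1024 + " KB");
    }
}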
One possible reason might be the use of java.lang.ref.Reference types. If the GC is going to break a "Reference" this will happen after the GC proper has completed. Any objects that become unreachable as a result are left for the next GC cycle to deal with.
Finalization works the same way. If an object requires finalization, it and all of the objects reachable from it (only) are likely to only be collectable in the next GC cycle.
Then there is the issue that the GC's algorithm for shrinking the heap is non-aggressive. According to the Java HotSpot VM Options page, the GC only shrinks the heap if more than 70% is free after garbage collection. However, it is not entirely clear if this refers to a full GC or not. So you could get the GC doing a partial GC and shrinking, and then a full GC and shrinking some more.
(Some people infer from the wording of the System.gc() javadocs that it will perform a full GC. However, I suspect that this is actually version / GC dependent.)
But to be honest, this should all be moot. Trying to coerce an application into giving back as much memory as possible is pointless. The chances are that you are just forcing it to throw away cached data, and when the application gets active again it will start reloading its caches.
I have an array, int dom[][] = new int[28][3];, which I move into a different array. How can I free up that array's space? I'm getting high CPU warnings while running it on the Android emulator.
In Java you don't have to free arrays manually; the garbage collector will reclaim the memory for you. You can just set your reference to null: dom = null;.
The CPU warnings don't have anything to do with this; the Android emulator performs some CPU-intensive operations at startup, so your processor will sit at 100% for a while until the emulator has started.
Set dom to null so that the array can be freed the next time the garbage collector runs.
You don't have to do anything: if the array is not reachable any more because you don't have a reference to it, it will be garbage collected (setting the reference to null won't make a difference).
Typically, if this is a local variable and you don't return it from the method where it is declared, it will become eligible for GC as soon as the method exits (i.e. as soon as the array is out of scope).
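A small sketch of that scoping point (the class and method names are made up for illustration):

public class ScopeDemo
{
    static int sum()
    {
        int[][] dom = new int[28][3]; // allocated on the heap
        int total = 0;
        for (int[] row : dom)
        {
            for (int v : row)
            {
                total += v;
            }
        }
        // 'dom' is not returned or stored anywhere, so the array becomes
        // eligible for GC as soon as this method returns.
        return total;
    }

    public static void main(String[] args)
    {
        System.out.println(sum()); // prints 0; the array is now unreachable
    }
}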
And if you are "getting high CPU warnings", then the problem is with the CPU, not with memory.
I'm getting high CPU warnings while running the Android emulator?
It could be anything. However, I'm guessing that you've added some explicit System.gc() calls in an attempt to free up space a bit earlier.
Don't do that!
The virtual machine generally knows when there is lots of potential garbage to collect ... and that is the best time to run the GC. And you can be assured that the GC will be run before the JVM bails out with an OOME.
If you call System.gc() yourself, the chances are that you will just cause the VM to waste CPU cycles to little useful effect.
In most situations, the best strategy is to let the VM schedule the GC as required. Null'ing references can help, but it is usually unnecessary. (And frankly a 28x3 array of integers takes very little space, and is probably not worth nulling.)
I have an interesting problem with Java memory consumption. I have a native C++ application which invokes my Java application.
The application basically does some language translation, parses a few XML documents, and responds to network requests. Most of the application's state doesn't have to be retained, so it is full of methods that take String arguments and return String results.
The application takes more and more memory over time, eventually approaching 2 GB, which made us suspect a leak somewhere in a Hashtable or in static variables. On closer inspection we did not find any leaks. Comparing heap dumps over a period of time shows that char[] and String objects take up huge amounts of memory.
However, when we inspect these char[] and String objects we find that they have no GC roots, which means they shouldn't be the cause of a leak; since they are still part of the heap, they are simply waiting to be garbage collected. After using various tools (MAT, VisualVM, JHat) and scrolling through a lot of such objects, I tried the trial version of YourKit. YourKit reports straight away that 96% of the char[] and String objects are unreachable, which means that at the time the dump was taken, 96% of the Strings in the heap were waiting to be garbage collected.
I understand that the GC runs sparingly, but when you check via VisualVM you can actually see it running :-( so how come there are so many unused objects on the heap all the time?
IMO this application should never take more than 400-500 MB of memory, which is where it stays for the first 24 hours, but then the heap keeps growing :-(
I am running Java 1.6.0-25.
Thanks for any help.
Java doesn't GC when you think it does or should :-) GC is too complex a topic to understand what is going on without spending a couple of weeks really digging into the details. So if you see behavior that you can't explain, that doesn't mean it's broken.
What you see can have several reasons:
You are loading a huge String into memory and keeping a reference to a substring. That can keep the whole string in memory: in older JVMs (including the Java 6 you are running), substring does not allocate a new char array; since Strings are immutable, it simply reuses the original char array and remembers the offset and length. (Later Java 7 updates changed substring to copy the characters.)
Nothing has triggered the GC so far. Some C++ developers believe GC is "evil" (anything you don't understand must be evil, right?), so they configure Java not to run it unless absolutely necessary. This means the VM will eat memory until it hits the maximum, and only then do one huge GC run.
Build 25 is already pretty old. Try updating to the latest Java 6 update (33, I think). The GC is one of the best-tested parts of the VM, but it does have bugs; maybe you hit one.
Unless you see an OutOfMemoryError, you don't have a leak. We have an application that eats all the heap you give it: if it gets 16 GB of RAM ("just to be safe"), it will use the whole 16 GB, because we cache whatever we can. You never see it run out of memory, because the cache shrinks as needed, yet system admins routinely freak out: "Oh god, it's running out of memory!" Panic. No, it's not. Unless Java tells you so, it's not running out of memory; it's just using it efficiently.
Tuning the GC with command-line options is one of the best ways to break it. Hundreds of people who know a lot more about the topic than you ever will have spent years making the GC efficient. You think you can do better? Good luck. So: get rid of any "magic" command-line options and calls to System.gc(), and your problem might go away.
Try decreasing the heap size to 500 megabytes and see whether the software starts garbage collecting or dies. Java isn't too fussy about using the memory given to it. You might also research GC tuning options that make the GC more prudent about cleaning stuff up.
String reallyLongString = "this is a really long String";
String tinyString = reallyLongString.substring(2, 3);
reallyLongString = null;
The JVM can't collect the memory allocated for the long string in the above case, since there's a reference to part of it.
If you're doing stuff with Strings and you're suffering from memory issues, this might be the cause of your grief.
Use tinyString = new String(reallyLongString.substring(2, 3)); instead.
There might not be a leak at all; a leak would mean the Strings were reachable. If you've allowed the application as much as 2 GB, there is no reason for the garbage collector to start freeing up memory until you approach that limit. If you don't want it taking more than 500 MB, pass -Xmx512m when starting the JVM.
You could also try tuning the garbage collector to start cleaning up much earlier.
First of all, stop worrying about those Strings and char[]. In almost every Java application I have profiled they are at the top of the memory-consumer list, and in almost none of those applications were they the real problem.
If you have not received an OutOfMemoryError yet but worry that 2 GB is too much for your Java process, then try decreasing the -Xmx value you pass to it. If it runs fine with 512m or 1g, then the problem is solved, isn't it?
If you do get an OOM, then one more option you can try is to run Plumbr with your Java process. It is a memory-leak discovery tool, so it can help you if there really is a memory leak.
I have a very simple class with one integer variable. I just print the value of the variable 'i' to the screen, increment it, and make the thread sleep for 1 second. When I run a profiler against this method, the memory usage increases slowly, even though I'm not creating any new variables. After running this code for around 16 hours, I see that memory usage has increased to 4 MB (it was 1 MB when I started the program). I'm a novice in Java. Could anyone please explain where I am going wrong, or why the memory usage is gradually increasing even though no new variables are created? Thanks in advance.
I'm using NetBeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args)
{
    try
    {
        int i = 1;
        while (true)
        {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    }
    catch (InterruptedException ex)
    {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started : 1569852 Bytes.
Memory usage after executing the loop for 16 hours : 4095829 Bytes
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as dictated by the particular strategy in use.
You can also try to call System.gc().
The objects are probably being created inside the two core Java functions you call (println and sleep).
It's due to the text displayed in the console, and the size of the integer (a little bit).
Java's print functions use 8-bit character encodings, therefore 56,000 prints of a number, at one byte per character, will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to take a snapshot of your application at the start and another one after some time; with VisualVM you can do this and compare the two snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and the program continues to work away happily, then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
This does not seem to be a leak, as the profiler's graphs also show. The graph drops sharply at certain intervals, i.e. when GC is performed; it would have been a leak had the graph kept climbing steadily. The heap space remaining after that must be used by Thread.sleep() and also (as mentioned in one of the answers above) by the profiler's own code.
You can try running VisualVM, located in %JAVA_HOME%/bin, and analyzing your application there. It also gives you the option of performing GC at will, among many other features.
I noted that the more features of VisualVM I used, the more memory was consumed (up to 10 MB). So this increase must be coming from your profiler as well, but it still is not a leak, as the space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.
I have this class and I'm testing insertions with different data distributions. I'm doing this in my code:
...
AVLTree tree = new AVLTree();
//insert the data from the first distribution
//get results
...
tree = new AVLTree();
//insert the data from the next distribution
//get results
...
I'm doing this for 3 distributions. Each one should be tested 14 times, with the 2 lowest/highest values removed before computing the average. This should be done 2000 times, each time with 1000 more elements; in other words, the sizes go 1000, 2000, 3000, ..., 2000000.
The problem is, I can only get as far as 100000. When I tried 200000, I ran out of heap space. I increased the available heap space with -Xmx in the command line to 1024m and it didn't even complete the tests with 200000. I tried 2048m and again, it wouldn't work.
What I'm thinking is that the garbage collector isn't getting rid of the old trees once I do tree = new AVLTree(). But why? I thought that the elements of the old trees would no longer be accessible and their memory would be cleaned up.
The garbage collector should have no trouble cleaning up your old tree objects, so I can only assume there's some other allocation that you're doing that's not being cleaned up.
Java has a good tool to watch the GC in progress (or not in your case), JVisualVM, which comes with the JDK.
Just run that and it will show you which objects are taking up the heap, and you can both trigger GCs and watch their progress. Then you can target those objects for pooling so they can be reused by you, saving the GC the work.
Also look into this option, which will probably suppress the error that stops your program; the program will then finish, but it may take a long time, because your app will fill up the heap and then run very slowly.
-XX:-UseGCOverheadLimit
Which JVM are you using, and what JVM parameters have you used to configure the GC?
Your explanation suggests there is a memory leak in your code. If you have a tool like JProfiler, use it to find out where the memory leak is.
There's no reason those trees shouldn't be collected, although I'd expect that before you ran out of memory you would see long pauses as the system ran a full GC. Since that's not what you're seeing, you could try running with flags like -XX:+PrintGC, -XX:+PrintGCDetails and -XX:+PrintGCTimeStamps to get more information on exactly what's going on, along with perhaps some sort of running count of roughly where you are. You could also explicitly tell the garbage collector to use a different garbage-collection algorithm.
However, it still seems unlikely to me. What other code is running? Is it possible there's something in the AVLTree class itself that's keeping its instances from being GC'd? What about logging finalize() on that class to ensure that (some of them, at least) are collectible (e.g. make a few and manually call System.gc())?
GC parameters are described here, and there is a nice reference on garbage collection from Sun here that's well worth reading.
The Java garbage collector isn't guaranteed to run as soon as an object becomes unreachable (Java doesn't use reference counting). So if your code is creating and discarding a lot of objects, it's possible to exhaust the heap space before the GC has a chance to run. Alternatively, Pax's suggestion that there is a memory leak in your code is also a strong possibility.
If you are only benchmarking, then you may want to call System.gc() between tests, or even re-run your program for each distribution.
We noticed this in a server product. When making a lot of tiny objects that quickly get thrown away, the garbage collector can't keep up. The problem is more pronounced when the tiny objects have pointers to larger objects (e.g. an object that points to a large char[]). The GC doesn't seem to realize that if it frees up the tiny object, it can then free the larger object. Even when calling System.gc() directly, this was still a huge problem (both in 1.5 and 1.6 VMs)!
What we ended up doing and what I recommend to you is to maintain a pool of objects. When your object is no longer needed, throw it into the pool. When you need a new object, grab one from the pool or allocate a new one if the pool is empty. This will also save a small amount of time over pure allocation because Java doesn't have to clear (bzero) the object.
If you're worried about the pool getting too large (and thus wasting memory), you can either remove an arbitrary number of objects from the pool on a regular basis, or use weak references (for example, using java.util.WeakHashMap). One of the advantages of using a pool is that you can track the allocation frequency and totals, and you can adjust things accordingly.
We're using pools of char[] and byte[], and we maintain separate "bins" of sizes in the pool (for example, we always allocate arrays of size that are powers of two). Our product does a lot of string building, and using pools showed significant performance improvements.
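For what it's worth, here is a very reduced sketch of that kind of pool. It is not our actual implementation, just an illustration of the idea; a real pool would also need to think about size caps, eviction, and contention:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// char[] buffers are kept in per-size "bins" (powers of two) and handed back
// out on request instead of being reallocated every time.
public class CharArrayPool
{
    private final Map<Integer, Deque<char[]>> bins = new HashMap<Integer, Deque<char[]>>();

    // Round the requested size up to the next power of two (the bin size).
    private static int binSize(int minSize)
    {
        int size = 1;
        while (size < minSize)
        {
            size <<= 1;
        }
        return size;
    }

    public synchronized char[] acquire(int minSize)
    {
        int size = binSize(minSize);
        Deque<char[]> bin = bins.get(size);
        if (bin != null && !bin.isEmpty())
        {
            return bin.pop();    // reuse an existing buffer
        }
        return new char[size];   // pool is empty for this size: allocate
    }

    public synchronized void release(char[] buffer)
    {
        Deque<char[]> bin = bins.get(buffer.length);
        if (bin == null)
        {
            bin = new ArrayDeque<char[]>();
            bins.put(buffer.length, bin);
        }
        bin.push(buffer);        // make the buffer available for reuse
    }
}

A caller would acquire(n) a buffer, use it, and release(buffer) it when finished; any buffer that is never released simply becomes garbage in the usual way.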
Note: In general, the GC does a fine job. We just noticed that with small objects that point to larger structures, the GC doesn't seem to clean up the objects fast enough especially when the VM is under CPU load. Also, System.gc() is just a hint to help schedule the finalizer thread to do more work. Calling it too frequently causes a significant performance hit.
Given that you're just doing this for testing purposes, it might just be good housekeeping to invoke the garbage collector directly with System.gc() (thereby asking it to make a pass). It won't help you if there is a memory leak, but if there isn't, it might buy you back enough memory to get through your test.