I am building a Java web app, using the Play! Framework. I'm hosting it on playapps.net. I have been puzzling for a while over the provided graphs of memory consumption. Here is a sample:
The graph comes from a period of consistent but nominal activity. I did nothing to trigger the falloff in memory, so I presume it occurred because the garbage collector ran once the heap had almost reached its allowable memory consumption.
My questions:
Is it fair for me to assume that my application does not have a memory leak, as it appears that all the memory is correctly reclaimed by the garbage collector when it does run?
(from the title) Why is Java waiting until the last possible second to run the garbage collector? I am seeing significant performance degradation as the memory consumption grows toward the top fourth of the graph.
If my assertions above are correct, then how can I go about fixing this issue? The other posts I have read on SO seem opposed to calls to System.gc(), ranging from neutral ("it's only a request to run GC, so the JVM may just ignore you") to outright opposed ("code that relies on System.gc() is fundamentally broken"). Or am I off base here, and should I be looking for defects in my own code that are causing this behavior and intermittent performance loss?
UPDATE
I have opened a discussion on PlayApps.net pointing to this question and mentioning some of the points raised here; specifically @Affe's comment about the settings for a full GC being very conservative, and @G_H's comment about the settings for the initial and max heap size.
Here's a link to the discussion, though you unfortunately need a playapps account to view it.
I will report the feedback here when I get it; thanks so much everyone for your answers, I've already learned a great deal from them!
Resolution
Playapps support, which is still great, didn't have many suggestions for me; their only thought was that if I was using the cache extensively, it might be keeping objects alive longer than necessary, but that isn't the case. I still learned a ton (woo hoo!), and I gave @Ryan Amos the green check as I took his suggestion of calling System.gc() every half day, which for now is working fine.
Any detailed answer is going to depend on which garbage collector you're using, but there are some things that are basically the same across all (modern, sun/oracle) GCs.
Every time you see the usage in the graph go down, that is a garbage collection. The only way heap gets freed is through garbage collection. The thing is, there are two types of garbage collections: minor and full. The heap gets divided into two basic "areas": young and tenured. (There are lots more subgroups in reality.) Anything that is taking up space in young and is still in use when the minor GC comes along to free up some memory is going to get 'promoted' into tenured. Once something makes the leap into tenured, it sits around indefinitely until the heap has no free space and a full garbage collection is necessary.
So one interpretation of that graph is that your young generation is fairly small (by default it can be a fairly small % of total heap on some JVMs) and you're keeping objects "alive" for comparatively very long times. (Perhaps you're holding references to them in the web session?) So your objects 'survive' garbage collections until they get promoted into tenured space, where they stick around indefinitely until the JVM is well and truly out of memory.
Again, that's just one common situation that fits the data you have. You'd need full details of the JVM configuration and the GC logs to tell for sure what's going on.
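For reference, on the HotSpot JVMs of this era you can collect those GC logs with flags along these lines (Java 9+ replaced them with -Xlog:gc; myapp.jar is a placeholder):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log -jar myapp.jar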
Java won't run the garbage collector until it has to, because collection slows things down quite a bit and shouldn't be run that frequently. I think you would be OK to schedule a collection more frequently, such as every 3 hours. If an application never consumes the full memory, there should be no reason to ever run the garbage collector, which is why Java only runs it when the memory is very high.
So basically, don't worry about what others say: do what works best. If you find performance improvements from running the garbage collector at 66% memory, do it.
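If you do go that route, a minimal sketch (the class name is made up; in a Play app you would hook this into a bootstrap job instead) could look like this:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicGc {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Ask for a full collection every 12 hours; System.gc() is only a hint
        // and the JVM is free to ignore it.
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.gc();
            }
        }, 12, 12, TimeUnit.HOURS);
    }
}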
I am noticing that the graph isn't sloping strictly upward until the drop, but has smaller local variations. Although I'm not certain, I don't think memory use would show these small drops if there was no garbage collection going on.
There are minor and major collections in Java. Minor collections occur frequently, whereas major collections are rarer and diminish performance more. Minor collections probably tend to sweep up stuff like short-lived object instances created within methods. A major collection will remove a lot more, which is what probably happened at the end of your graph.
Now, some answers that were posted while I was typing this give good explanations regarding the differences in garbage collectors, object generations and more. But that still doesn't explain why it would take so absurdly long (nearly 24 hours) before a serious collection is done.
Two things of interest that can be set for a JVM at startup are the maximum allowed heap size and the initial heap size. The maximum is a hard limit: once you reach it, further garbage collection doesn't reduce memory usage, and if you need to allocate new space for objects or other data, you'll get an OutOfMemoryError. However, internally there's a soft limit as well: the current heap size. A JVM doesn't immediately gobble up the maximum amount of memory. Instead, it starts at your initial heap size and then increases the heap when it's needed. Think of it a bit like the RAM of your JVM, which can grow dynamically.
If the actual memory use of your application starts to reach the current heap size, a garbage collection will typically be instigated. This might reduce the memory use, so an increase in heap size isn't needed. But it's also possible that the application currently does need all that memory and would exceed the heap size. In that case, it is increased provided that it hasn't already reached the maximum set limit.
Now, what might be happening in your case is that the initial heap size is set to the same value as the maximum. Suppose that is so; then the JVM will immediately claim all that memory. It will then take a very long time before the application has accumulated enough garbage to reach the heap size in memory usage. But at that moment you'll see a large collection. Starting with a small enough heap and allowing it to grow keeps the memory use limited to what's needed.
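For example (MyApp and the sizes are placeholders), the first invocation starts with a 64 MB heap and grows it on demand up to 512 MB, while the second claims the full 512 MB up front and so behaves as described above:

java -Xms64m -Xmx512m MyApp
java -Xms512m -Xmx512m MyApp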
This is assuming that your graph shows heap use and not allocated heap size. If that's not the case and you are actually seeing the heap itself grow like this, something else is going on. I'll admit I'm not savvy enough regarding the internals of garbage collection and its scheduling to be absolutely certain of what's happening here, most of this is from observation of leaking applications in profilers. So if I've provided faulty info, I'll take this answer down.
As you might have noticed, this does not affect you. Garbage collection only kicks in when the JVM feels there is a need for it to run, and this happens for the sake of optimization: there's no point in doing many small collections if you can do a single full collection and clean everything up at once.
The current JVM contains some really interesting algorithms, and the garbage collection itself is divided into three different algorithms; you can find a lot more about this here. Here's a sample:
Three types of collection algorithms
The HotSpot JVM provides three GC algorithms, each tuned for a specific type of collection within a specific generation. The copy (also known as scavenge) collection quickly cleans up short-lived objects in the new generation heap. The mark-compact algorithm employs a slower, more robust technique to collect longer-lived objects in the old generation heap. The incremental algorithm attempts to improve old generation collection by performing robust GC while minimizing pauses.
Copy/scavenge collection
Using the copy algorithm, the JVM reclaims most objects in the new generation object space (also known as eden) simply by making small scavenges -- a Java term for collecting and removing refuse. Longer-lived objects are ultimately copied, or tenured, into the old object space.
Mark-compact collection
As more objects become tenured, the old object space begins to reach maximum occupancy. The mark-compact algorithm, used to collect objects in the old object space, has different requirements than the copy collection algorithm used in the new object space.
The mark-compact algorithm first scans all objects, marking all reachable objects. It then compacts all remaining gaps of dead objects. The mark-compact algorithm occupies more time than the copy collection algorithm; however, it requires less memory and eliminates memory fragmentation.
Incremental (train) collection
The new generation copy/scavenge and the old generation mark-compact algorithms can't eliminate all JVM pauses. Such pauses are proportional to the number of live objects. To address the need for pauseless GC, the HotSpot JVM also offers incremental, or train, collection.
Incremental collection breaks up old object collection pauses into many tiny pauses even with large object areas. Instead of just a new and an old generation, this algorithm has a middle generation comprising many small spaces. There is some overhead associated with incremental collection; you might see as much as a 10-percent speed degradation.
The -Xincgc and -Xnoincgc parameters control how you use incremental collection. The next release of HotSpot JVM, version 1.4, will attempt continuous, pauseless GC that will probably be a variation of the incremental algorithm. I won't discuss incremental collection since it will soon change.
This generational garbage collector is one of the most efficient solutions we have for the problem nowadays.
I had an app that produced a graph like that and acted as you describe. I was using the CMS collector (-XX:+UseConcMarkSweepGC). Here is what was going on in my case.
I did not have enough memory configured for the application, so over time I was running into fragmentation problems in the heap. This caused GCs with greater and greater frequency, but it did not actually throw an OOME or fail out of CMS to the serial collector (which it is supposed to do in that case) because the stats it keeps only count application paused time (GC blocks the world); application concurrent time (GC runs alongside application threads) is ignored for those calculations. I tuned some parameters, mainly gave it a whole crap load more heap (with a very large new space), set -XX:CMSFullGCsBeforeCompaction=1, and the problem stopped occurring.
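For the record, the shape of that tuning was roughly the following; the sizes here are invented and yours would differ:

java -XX:+UseConcMarkSweepGC -Xms6g -Xmx6g -XX:NewSize=2g -XX:MaxNewSize=2g -XX:CMSFullGCsBeforeCompaction=1 MyApp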
You probably do have memory leaks that are cleared every 24 hours.
Related
For example, in a service adapter you might:
a. have an input data model and an output data model, maybe even immutable, with different classes and use Object Mappers to transform between classes and create some short-lived objects along the way
b. have a single data model, some of the classes might be mutable, but the same object that was created for the input is also sent as output
There are other use cases where you'd have to choose between clear code with many objects and less clear code with fewer objects, and I would like to know whether garbage collection still carries weight in this decision.
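To make the trade-off concrete, here is a minimal sketch of the two styles; all class and field names here are hypothetical:

// Style (a): distinct immutable input/output models plus a mapper;
// every call allocates fresh, short-lived objects.
final class OrderEntity {
    final String id;
    final int priceCents;
    OrderEntity(String id, int priceCents) { this.id = id; this.priceCents = priceCents; }
}
final class OrderDto {
    final String id;
    final String displayPrice;
    OrderDto(String id, String displayPrice) { this.id = id; this.displayPrice = displayPrice; }
}
class OrderMapper {
    OrderDto toDto(OrderEntity e) {
        // A new short-lived object per call.
        return new OrderDto(e.id, String.format("%.2f", e.priceCents / 100.0));
    }
}

// Style (b): one mutable model flowing through input and output;
// fewer allocations, but shared mutable state between layers.
class Order {
    String id;
    int priceCents;
    String displayPrice;
    void prepareForOutput() { displayPrice = String.format("%.2f", priceCents / 100.0); }
}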
I should make this a comment as IMO it does not qualify as an answer, but it will not fit.
Even though the answer is most probably going to be "do whatever makes your code more readable" (and to be honest I still follow that all the time), we have faced this GC issue in our code base.
Suppose that you want to create a graph of users (we had to, around 1/2 million) and load all their properties into memory, then do some aggregations and filtering on them (it was not my decision). Because these graph objects were pretty heavy, once loaded the JVM would fail with OOM even with 16GB of heap, or GC would take huge pauses. And it's understandable: lots of data requires lots of memory; you can't run away from it. The solution proposed, which actually worked, was to model all of it with plain BitSets, where each bit is a property and a potential linkage to some other data. This is far from readable and extremely complicated to maintain to this day. Lots of shifts, lots of intrinsic knowledge of the data: you have to know at all times what, say, bit 3 means; there's no getter for usernameIncome, let's say; you have to do quite a lot of shifting and map it to a lookup table, etc. But it kept the GC pretty low, at least in the ranges we were OK with.
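As a toy illustration of that approach (the property indices here are made up), each user collapses into a BitSet instead of a heavy object graph:

import java.util.BitSet;

public class UserBits {
    // Hypothetical bit positions; the real model had many more, plus lookup tables.
    static final int HAS_INCOME = 0;
    static final int IS_ACTIVE = 1;

    public static void main(String[] args) {
        BitSet user = new BitSet();
        user.set(HAS_INCOME);
        user.set(IS_ACTIVE);
        // Reading a "property" means knowing which bit it lives at; there is no getter.
        System.out.println("active? " + user.get(IS_ACTIVE));
    }
}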
So unless you can prove that GC is actually taking up that much of your app's time, you are probably safer simply adding more RAM and increasing the heap (unless you have a leak). I would still go for clear code 99.(99)% of the time.
Newer versions of Java have quite sophisticated mechanisms to handle very short-lived objects, so it's not as bad as it was in the past. With a modern JVM I'd say that you don't need to worry about garbage collection times if you create many objects, which is a good thing since many more of them are now created on the fly than was the case with older versions of Java.
What's still valid is to keep the number of created objects low if their creation comes with high costs, e.g. accessing a database to retrieve data, network operations, etc.
As other people have said I think it's better to write your code to solve the problem in an optimum way for that problem rather than thinking about what the garbage collector (GC) will do.
The key to working with the GC is to look at the lifespan of your objects. The heap is (typically) divided into two main regions called generations to signify how long objects have been alive (thus young and old generations). To minimise the impact of GC you want your objects to become eligible for collection while they are still in the young generation (either in the Eden space or a survivor space, but preferably Eden space). Collection of objects in the Eden space is effectively free, as the GC does nothing with them, it just ignores them and resets the allocation pointer(s) when a minor GC is finished.
Rather than explicitly calling the GC via System.gc() it's much better to tune your heap. For example, you can set the size of the young generation using command line options like -XX:NewRatio=n, where n signifies the ratio of new to old (e.g. setting it to 3 will make the ratio of new:old 1:3 so the young generation will be 1 quarter of the heap). Alternatively, you can set the size explicitly using -XX:NewSize=n and -XX:MaxNewSize=m. The GC may resize the heap during collections so setting these values to be the same will keep it at a fixed size.
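For example (MyApp and the sizes are just placeholders), either of the following fixes the young generation at a quarter of a fixed 512 MB heap:

java -Xms512m -Xmx512m -XX:NewRatio=3 MyApp
java -Xms512m -Xmx512m -XX:NewSize=128m -XX:MaxNewSize=128m MyApp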
You can profile your code to establish the rate of object creation and how long your objects typically live for. This will give you the information to (ideally) configure your heap to minimise the number of objects being promoted into the old generation. What you really don't want is objects being promoted and then becoming garbage shortly thereafter.
Alternatively, you may want to look at the Zing JVM from Azul (full disclosure, I work for them). This uses a different GC algorithm, called C4, which enables compaction of the heap concurrently with application threads and so eliminates most of the impact of the GC on application latency.
I have a Java application that waits for the user to hit a key and then runs a task. Once done, it goes back and waits again. I was looking at memory profile for this application with jvisualvm, and it showed an increasing pattern.
Committed memory size is 16MB.
Used memory, on application startup, was 2.7 MB, and then it climbed with intermediate drops (garbage collection). Once this sawtooth pattern approached close to 16 MB, a major drop occurred and the memory usage fell close to 4 MB. This major drop point has been increasing, though: 4 MB, 6 MB, 8 MB. The usage never goes beyond 16 MB, but the whole sawtooth pattern is climbing towards 16 MB.
Do I have a memory leak?
Since this is my first time posting to StackOverflow, I do not have enough reputation to post an image.
Modern Sun/Oracle JVMs use what is called a generational garbage collector:
When the collector runs, it first tries a partial collection that only releases memory that was allocated recently.
Recently created objects that are still in use get 'promoted'.
Once an object has been promoted a few times, it will no longer get cleaned up by partial collections, even after it is ready for collection.
These objects, called tenured, are only cleaned up when a full collection becomes necessary in order to make enough room for the program to continue running.
So basically, bits of your program that stick around long enough to get missed by the fast 'partial' collections will hang around until the JVM decides it has to do a full collection. If you let it go long enough, you should eventually see the full collection happen and usage drop back down to your original starting point.
If that never happens and you eventually get an Out Of Memory exception, then you probably have a memory leak :)
That kind of sawtooth pattern is commonly observed and is not an indication of memory leak.
Because garbage collecting in big chunks is more efficient than constantly collecting small amounts, the JVM does the collecting in batches. That's why you see this pattern.
As stated by others, this behavior is normal. This is a good description of the garbage collection process. To summarize, the JVM uses a generational garbage collector. The vast majority of objects are very short-lived, and those that survive longer tend to last much longer. Knowing this, the GC checks the newer generation first, to avoid repeatedly checking the older objects, which are less likely to be unreachable. After a period of time, the survivors move to the older generation. This increasing saw-tooth is exactly what you're seeing: the rising troughs are due to the older generation growing larger as survivors are moved to it. If your program ran long enough, eventually checking the newer generation wouldn't free up enough memory and the old generation would have to be GC'd as well.
Hope that helps.
Page 6 of the document Memory Management in the Java HotSpot™ Virtual Machine contains the following paragraphs:

Young generation collections occur relatively frequently and are efficient and fast because the young generation space is usually small and likely to contain a lot of objects that are no longer referenced.

Objects that survive some number of young generation collections are eventually promoted, or tenured, to the old generation. See Figure 1. This generation is typically larger than the young generation and its occupancy grows more slowly. As a result, old generation collections are infrequent, but take significantly longer to complete.
Could someone please define what "frequent" and "infrequent" mean in the statements above? Are we talking microseconds, milliseconds, minutes, days?
It is not possible to give a definite answer to this. It really depends on a lot of factors, including the platform (JVM version, settings, etc), the application, and the workload.
At one extreme, it is possible for an application to never trigger a garbage collector. It might simply sit there doing nothing, or it might perform an extremely long computation in which no objects are created after the JVM initialization and application startup.
At the other extreme, it is theoretically possible for one garbage collection to end and another one to start within a few nanoseconds. For example, this could happen if your application is in the last stages of dying from a full heap, or if it is allocating pathologically large arrays.
So:
Are we talking microseconds, milliseconds, minutes, days?
Possibly all of the above, though the first two would definitely be troubling if you observed them in practice.
A well-behaved application should not run the GC too often. If your application is triggering a young space collection more than once or twice a second, this could lead to performance problems. And too-frequent "full" collections are worse, because their impact is greater. However, it is certainly plausible for a poorly designed / implemented application to behave like this.
There is also the issue that the interval between GC runs is not always meaningful. For instance some of the HotSpot GCs actually have GC threads running concurrently with normal application threads. If you have enough cores, enough RAM and enough memory bus bandwidth, then a constantly running concurrent GC may not appreciably affect application performance.
Terminology note:
Strictly speaking a concurrent GC is one where the GC can run at the same time as the application threads.
Strictly speaking a parallel GC is one where the GC itself uses multiple threads.
A GC can be concurrent without being parallel, and vice versa.
It's a relative term. Young collections could occur many times a second, or as rarely as every few hours. Old generation collections can occur every few seconds, up to daily. You should expect to have many more young collections than old collections in most systems.
It's highly unlikely to be many days. If GC occurs too often, e.g. << 100 ms apart, you can get an OutOfMemoryError: GC overhead limit exceeded, as the JVM prevents that from happening.
As it is, the terms "frequent" and "infrequent" are relative, and the timings are in fact not fixed. It depends on the system in question, on lots of things like:
Your heap size and settings for different parts of the heap (young, old gen, perm gen)
Your application's memory behaviour: how many objects it creates and how fast, how long those objects stay referenced, etc.
If your application is a monster memory eater, GC will run as if it's running for its life. If your application does not demand too much memory, then GC will run at intervals decided by how full the memory is.
TL;DR: "Frequent" and "infrequent" are relative terms that depend on the memory allocation rate and the heap size. If you want a precise answer, you need to measure it yourself for your particular application.
Let's say your app has two modes, mode-1 allocates memory and does computation and mode-2 sits idle.
If mode-1's allocation is smaller than the available heap, no GC needs to occur until it finishes. Maybe it used so little RAM that it could do a second round of mode-1 without a collection. However, eventually you'll run out of free heap, and the JVM will perform an "infrequent" collection.
However, if mode-1's allocation is a significant fraction of, or larger than, the young-generation heap, collection will happen more "frequently". During a young-gen collection, allocations that survive (imagine data that is needed through the entire mode-1 operation) will be promoted to the old gen, giving the young gen more room. Young-gen allocation and collection can then continue. Eventually the old-gen heap will run out and must be collected, thus "infrequently".
So then, how frequent is frequent? It depends on the allocation rate and the heap size. If the JVM is bumping into the heap limit often, it'll collect often. If there is plenty of heap (say, 100 GB), then the JVM doesn't need to collect for a long, long time. The downside is that when it finally does collect, it might take a long time to free 100 GB, stopping the JVM for many seconds (or minutes!). Current JVMs are smarter than that and will occasionally force a collection (preferably in mode-2). And with concurrent collectors, collection can happen all the time if necessary.
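As a toy demonstration of that dependence (the class name is made up), the loop below produces roughly 1 MB of instant garbage per iteration; run it with a small heap such as -Xmx16m and free memory will sawtooth rapidly, while with a huge heap collections become rare:

public class AllocRate {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 100; i++) {
            byte[] garbage = new byte[1000000]; // ~1 MB, unreachable after each iteration
            System.out.println("free KB: " + rt.freeMemory() / 1024);
            Thread.sleep(50);
        }
    }
}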
Ultimately, the frequency is task and heap dependent, as well as how various vm parameters are set. If you want a precise answer, you must measure them yourself for your particular application.
Because the spec only says "relatively frequently" and "infrequent" (regarding the young generation), we can't estimate the frequency in absolute units like microseconds, milliseconds, minutes or days.
The GC has to check and find out which objects can be collected. My question is whether having too many objects to check can cause GC overhead, or whether the GC is somehow smart enough to avoid having to iterate through all the objects to find out which ones are no longer referenced.
Yes, it does matter to the mark-and-sweep collector how many objects you have. As to the size of those objects, that could matter too: a compacting collector would have more work to do if it needed to compact 10GB worth of stuff rather than 10MB of stuff.
Having said this, modern garbage collectors are extremely sophisticated (they operate on multiple heaps, do things in the background, can use multiple cores etc). They are also highly configurable. Furthermore, a typical JVM comes equipped with multiple garbage collectors.
It is therefore hard to give meaningful, precise answers to general questions like this.
One way this kind of thing is optimized is the concept of generational garbage collection. (Look in Section 4.) Java has had generational collection since 1.2.
What this means is that newer objects are likely to die more quickly, a phenomenon known as 'infant mortality'. These newer objects are put in a generation that is collected more aggressively. If an object has been around for an hour, it's likely to be around another 5 minutes, and it is put in a generation that's collected less frequently than the new objects. If an object survives for some time in the more frequently collected areas, it'll be promoted to a less-frequently-collected generation.
This lets you not look at all active objects for each sweep.
It depends on which GC algorithm is being used. In the case of mark-sweep, it does matter, because mark-sweep needs to identify the roots for GC using enumeration. Here is a link on how mark-sweep works.
It has to iterate through all live objects to determine whether an object is still used. The G1 collector divides the heap into regions of around 1 MB (each of which knows all the references into it), but the performance is much the same.
When you get into multi-GB solutions, one option is to use off-heap memory which you manage yourself. Or you can use a solution like Zing, which can handle a heap of tens of GB without significant pauses.
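A direct ByteBuffer is the simplest taste of that off-heap option: the buffer's contents live outside the Java heap, so the GC never scans them. This is only a sketch; real off-heap designs need careful lifecycle management:

import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // 64 MB allocated outside the GC-managed heap.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);
        buf.putLong(0, 42L);
        System.out.println(buf.getLong(0)); // prints 42
    }
}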
I have this class and I'm testing insertions with different data distributions. I'm doing this in my code:
...
AVLTree tree = new AVLTree();
//insert the data from the first distribution
//get results
...
tree = new AVLTree();
//insert the data from the next distribution
//get results
...
I'm doing this for 3 distributions. Each one should be tested an average of 14 times, with the 2 lowest/highest values removed before computing the average. This should be done for 2000 sizes, in steps of 1000 elements. In other words, it goes 1000, 2000, 3000, ..., 2000000.
The problem is, I can only get as far as 100000. When I tried 200000, I ran out of heap space. I increased the available heap space with -Xmx in the command line to 1024m and it didn't even complete the tests with 200000. I tried 2048m and again, it wouldn't work.
What I'm thinking is that the garbage collector isn't getting rid of the old trees once I do tree = new AVLTree(). But why? I thought that the elements from the old trees would no longer be accessible and their memory would be cleaned up.
The garbage collector should have no trouble cleaning up your old tree objects, so I can only assume there's some other allocation that you're doing that's not being cleaned up.
Java has a good tool to watch the GC in progress (or not, in your case): JVisualVM, which comes with the JDK.
Just run that and it will show you which objects are taking up the heap, and you can both trigger and see the progress of GC's. Then you can target those for pools so they can be re-used by you, saving the GC the work.
Also look into this option, which will probably suppress the error that stops your program, so your program will finish; but it may take a long time, because your app will fill up the heap and then run very slowly.
-XX:-UseGCOverheadLimit
Which JVM are you using, and what JVM parameters have you used to configure GC?
Your explanation suggests there is a memory leak in your code. If you have a tool like JProfiler, use it to find out where the memory leak is.
There's no reason those trees shouldn't be collected, although I'd expect that before you ran out of memory you would see long pauses as the system ran a full GC. Since it's been noted here that that's not what you're seeing, you could try running with flags like -XX:+PrintGC, -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps to give you some more information on exactly what's going on, along with perhaps some sort of running count of roughly where you are. You could also explicitly tell the garbage collector to use a different garbage-collection algorithm.
However, it still seems unlikely to me. What other code is running? Is it possible there's something in the AVLTree class itself that's keeping its instances from being GC'd? What about adding logging in finalize() on that class to verify that instances (some of them, at least) are being collected (e.g. make a few and manually call System.gc())?
The GC params are listed here, and there's a nice reference on garbage collection from Sun here that's well worth reading.
The Java garbage collector isn't guaranteed to run as soon as an object becomes unreachable. So if you're writing code that creates and discards a lot of objects, it's possible to exhaust the heap space before the GC has had a chance to run. Alternatively, Pax's suggestion that there is a memory leak in your code is also a strong possibility.
If you are only doing benchmarking, then you may want to call the GC explicitly (via System.gc()) between tests, or even re-run your program for each distribution.
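Something along those lines, as a sketch you could call between distributions (System.gc() remains only a request, and the sleep merely gives a collection a chance to complete):

static void gcBetweenRuns() throws InterruptedException {
    System.gc();       // request a full collection; the JVM may ignore it
    Thread.sleep(100); // give the collector a moment before the next distribution
}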
We noticed this in a server product. When making a lot of tiny objects that quickly get thrown away, the garbage collector can't keep up. The problem is more pronounced when the tiny objects have pointers to larger objects (e.g. an object that points to a large char[]). The GC doesn't seem to realize that if it frees up the tiny object, it can then free the larger object. Even when calling System.gc() directly, this was still a huge problem (both in 1.5 and 1.6 VMs)!
What we ended up doing and what I recommend to you is to maintain a pool of objects. When your object is no longer needed, throw it into the pool. When you need a new object, grab one from the pool or allocate a new one if the pool is empty. This will also save a small amount of time over pure allocation because Java doesn't have to clear (bzero) the object.
If you're worried about the pool getting too large (and thus wasting memory), you can either remove an arbitrary number of objects from the pool on a regular basis, or use weak references (for example, using java.util.WeakHashMap). One of the advantages of using a pool is that you can track the allocation frequency and totals, and you can adjust things accordingly.
We're using pools of char[] and byte[], and we maintain separate "bins" of sizes in the pool (for example, we always allocate arrays of size that are powers of two). Our product does a lot of string building, and using pools showed significant performance improvements.
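A stripped-down sketch of such a pool (not thread-safe, and the binning is simplified; our real version had more bookkeeping):

import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

public class ByteArrayPool {
    private final Map<Integer, ArrayDeque<byte[]>> bins = new HashMap<Integer, ArrayDeque<byte[]>>();

    // Round the request up to a power of two and reuse a pooled array if one exists.
    public byte[] acquire(int minSize) {
        int size = 1;
        while (size < minSize) {
            size <<= 1;
        }
        ArrayDeque<byte[]> bin = bins.get(size);
        byte[] arr = (bin == null) ? null : bin.poll();
        return (arr != null) ? arr : new byte[size];
    }

    // Return an array to its size bin for later reuse.
    public void release(byte[] arr) {
        ArrayDeque<byte[]> bin = bins.get(arr.length);
        if (bin == null) {
            bin = new ArrayDeque<byte[]>();
            bins.put(arr.length, bin);
        }
        bin.push(arr);
    }
}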
Note: In general, the GC does a fine job. We just noticed that with small objects that point to larger structures, the GC doesn't seem to clean up the objects fast enough especially when the VM is under CPU load. Also, System.gc() is just a hint to help schedule the finalizer thread to do more work. Calling it too frequently causes a significant performance hit.
Given that you're just doing this for testing purposes, it might just be good housekeeping to invoke the garbage collector directly using System.gc() (thereby requesting a pass; the JVM may still decline). It won't help you if there is a memory leak, but if there isn't, it might buy you back enough memory to get through your test.