What is a large object for JVM GC? - java

Charlie Hunt says in his presentation that large objects are bad for JVM GC, because:
Large objects are expensive to allocate and initialize.
Large objects of different sizes can cause Java heap fragmentation.
How is a large object defined? How can I know whether an object is a large object? Thanks

The definition depends on the platform, the JVM and the JVM configuration. For instance, here is an excerpt from the How Garbage Collection differs in the three big JVMs blog post by Michael Kopp:
Large and small objects
The JRockit differentiates between large and small objects during
allocation. The limit for when an object is considered large depends
on the JVM version, the heap size, the garbage collection strategy and
the platform used. (italics mine - DL.) It is usually somewhere between 2 and 128 KB. Large
objects are allocated outside the thread-local area and, in case of a
generational heap, directly in the old generation. This makes a lot of
sense when you start thinking about it. The young generation uses a
copy collection. At some point, copying an object becomes more
expensive than traversing it in every garbage collection.
To your second question, I am not sure how to obtain that threshold, but specifically in HotSpot you can set it:
-XX:PretenureSizeThreshold=2m
Refer to the HotSpot JVM garbage collection options cheat sheet by Alexey Ragozin for details on this and many many other -XX options.
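As a quick, hypothetical illustration (the class name and sizes below are my own, not from the cheat sheet), you can watch where a large allocation lands by combining that flag with GC logging; note that the flag is honored by some young-generation collectors (e.g. serial and ParNew) but not all:

// LargeAlloc.java - a minimal sketch; run with e.g.:
//   java -verbose:gc -XX:PretenureSizeThreshold=2m LargeAlloc
public class LargeAlloc {
    public static void main(String[] args) {
        // A 4 MB array: above the 2 MB threshold, so with the flag above
        // it should be allocated directly in the old generation.
        byte[] big = new byte[4 * 1024 * 1024];
        System.out.println("allocated " + big.length + " bytes");
    }
}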

There is no theoretical definition on its size but this will depend upon your JVM configuration for example if young generation is small then even small classes will be causing too many swaps (GC). If your objects are big enough w.r.t your JVM heap then GC will have to do more work to allocate and claim them from heap. This will lead to "stop the world" problem more often.

Large objects, in general, from the GC's point of view, are:
objects which are expensive to allocate,
objects which are expensive to initialize,
e.g. an ArrayList of size 10000 (see the sketch below).
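To make that concrete (this example is my own, not the answerer's): sizing an ArrayList up front allocates its entire backing array in one step, which is exactly the kind of single large allocation being described, while letting it grow produces many smaller allocations plus copying:

import java.util.ArrayList;
import java.util.List;

public class BigList {
    public static void main(String[] args) {
        // One up-front allocation of an Object[10000] backing array...
        List<Integer> presized = new ArrayList<>(10_000);
        // ...versus repeated re-allocation and copying as this list grows.
        List<Integer> growing = new ArrayList<>();
        for (int i = 0; i < 10_000; i++) {
            presized.add(i); // autoboxing also creates many small objects
            growing.add(i);
        }
    }
}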


Does it still make sense to avoid creating objects for the sake of garbage collection?

For example in a service adapter you might:
a. have an input data model and an output data model, maybe even immutable, with different classes and use Object Mappers to transform between classes and create some short-lived objects along the way
b. have a single data model, some of the classes might be mutable, but the same object that was created for the input is also sent as output
There are other use-cases where you'd have to choose between clear code with many objects and less clear code with fewer objects, and I would like to know whether garbage collection still has a weight in this decision.
I should make this a comment as IMO it does not qualify as an answer, but it will not fit.
Even if the answer is most probably going to be "do whatever makes your code more readable" (and to be honest I still follow that all the time), we have faced this GC issue in our code base.
Suppose you want to create a graph of users (we had to, around half a million) and load all their properties into memory to do some aggregations and filtering on them (it was not my decision). Because these graph objects were pretty heavy, once loaded, even with 16 GB of heap the JVM would fail with OOM or the GC would take huge pauses. And it's understandable: lots of data requires lots of memory; you can't run away from it. The solution proposed, and the one that actually worked, was to model all that with simple BitSets, where each bit is a property and a potential linkage to some other data (a rough sketch follows at the end of this answer). This is by far not readable and extremely complicated to maintain to this day. Lots of shifts, lots of intricacies in the data: you have to know at all times what bit 3 means, for example; there is no getter for usernameIncome, let's say; you have to do quite a lot of shifting and map it to a lookup table, etc. But it kept the GC pretty low, at least in the ranges we were OK with.
So unless you can prove that GC is taking up so much of your app's time, you are probably safer simply adding more RAM and increasing the heap (unless you have a leak). I would still go for clear code, like 99.(99)% of the time.
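Here is the promised sketch of the BitSet approach (the bit layout and names are invented for illustration; the real model was far larger):

import java.util.BitSet;

public class UserBits {
    // Hypothetical layout: bit 0 = active, bit 1 = premium,
    // bits 2-4 = income bracket (0-7), mapped via a separate lookup table.
    static final int ACTIVE = 0, PREMIUM = 1, INCOME_SHIFT = 2;

    static void setIncomeBracket(BitSet bits, int bracket) {
        for (int i = 0; i < 3; i++) {
            bits.set(INCOME_SHIFT + i, ((bracket >> i) & 1) == 1);
        }
    }

    static int getIncomeBracket(BitSet bits) {
        int v = 0;
        for (int i = 0; i < 3; i++) {
            if (bits.get(INCOME_SHIFT + i)) v |= 1 << i;
        }
        return v;
    }

    public static void main(String[] args) {
        BitSet user = new BitSet(8); // one tiny object instead of a heavy graph node
        user.set(ACTIVE);
        setIncomeBracket(user, 5);
        System.out.println("active=" + user.get(ACTIVE)
                + " income=" + getIncomeBracket(user));
    }
}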
Newer versions of Java have quite sophisticated mechanisms to handle very short-lived objects, so it's not as bad as it used to be. With a modern JVM I'd say you don't need to worry about garbage collection times if you create many objects, which is a good thing, since many more of them are now created on the fly than was the case with older versions of Java.
What's still valid is to keep the number of created objects low when their creation comes at a high cost, e.g. when it involves accessing a database, network operations, etc.
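For example (a sketch of my own, not from the answer above), such expensive-to-create objects can be cached and reused instead of being recreated per call:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProfileCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Hypothetical expensive creation (imagine a database or network call).
    private String loadProfile(String userId) {
        return "profile-for-" + userId;
    }

    public String getProfile(String userId) {
        // Create (and keep) the object only once per key.
        return cache.computeIfAbsent(userId, this::loadProfile);
    }
}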
As other people have said I think it's better to write your code to solve the problem in an optimum way for that problem rather than thinking about what the garbage collector (GC) will do.
The key to working with the GC is to look at the lifespan of your objects. The heap is (typically) divided into two main regions called generations to signify how long objects have been alive (thus young and old generations). To minimise the impact of GC you want your objects to become eligible for collection while they are still in the young generation (either in the Eden space or a survivor space, but preferably Eden space). Collection of objects in the Eden space is effectively free, as the GC does nothing with them, it just ignores them and resets the allocation pointer(s) when a minor GC is finished.
Rather than explicitly calling the GC via System.gc() it's much better to tune your heap. For example, you can set the size of the young generation using command line options like -XX:NewRatio=n, where n signifies the ratio of new to old (e.g. setting it to 3 will make the ratio of new:old 1:3 so the young generation will be 1 quarter of the heap). Alternatively, you can set the size explicitly using -XX:NewSize=n and -XX:MaxNewSize=m. The GC may resize the heap during collections so setting these values to be the same will keep it at a fixed size.
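For illustration (the sizes and the MyApp class are invented, not recommendations), a full invocation combining these options might look like:
java -Xms512m -Xmx512m -XX:NewSize=128m -XX:MaxNewSize=128m -verbose:gc MyApp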
You can profile your code to establish the rate of object creation and how long your objects typically live for. This will give you the information to (ideally) configure your heap to minimise the number of objects being promoted into the old generation. What you really don't want is objects being promoted and then becoming garbage shortly thereafter.
Alternatively, you may want to look at the Zing JVM from Azul (full disclosure, I work for them). This uses a different GC algorithm, called C4, which enables compaction of the heap concurrently with application threads and so eliminates most of the impact of the GC on application latency.

How does V8 manage its heap?

I know that when V8's garbage collection runs, it traces from the GC roots so that reachable objects are marked and unreachable objects can then be swept. My question is: how does the GC traverse those objects? There must be a data structure storing all objects, reachable or unreachable. A bitmap? A linked list?
BTW, does the JVM do the same?
Google's V8 Heap is organized into a couple of different spaces. There's a great post, "A tour of V8: Garbage Collection" which explains how the V8 heap is organized as:
New-space: Most objects are allocated here. New-space is small and is
designed to be garbage collected very quickly, independent of other
spaces.
Old-pointer-space: Contains most objects which may have pointers to
other objects. Most objects are moved here after surviving in new-space
for a while.
Old-data-space: Contains objects which just contain raw data (no
pointers to other objects). Strings, boxed numbers, and arrays of
unboxed doubles are moved here after surviving in new-space for a
while.
Large-object-space: This space contains objects which are larger than
the size limits of other spaces. Each object gets its own mmap'd region
of memory. Large objects are never moved by the garbage collector.
Code-space: Code objects, which contain JITed instructions, are
allocated here. This is the only space with executable memory (although
Codes may be allocated in large-object-space, and those are executable, too).
Cell-space, property-cell-space and map-space: These spaces contain
Cells, PropertyCells, and Maps, respectively. Each of these spaces
contains objects which are all the same size and has some constraints
on what kind of objects they point to, which simplifies collection.
Conrad's article goes on to explain the V8 GC is built from a flavor of Cheney's Algorithm.
V8's heap implementation resides in heap.cc and heap.h. Initialization of the heap begins at line 5423. The method Address NewSpaceStart(), found on line 615 of heap.h, returns the address where new-space starts; objects are stored there so as to take advantage of temporal locality.
Now for your second question: does the JVM do the same? A fun fact: there are 3 major production JVMs, and they all implement their GC algorithms differently. There's a great performance blog which wrote the article "How Garbage Collection differs in the three big JVMs", which discusses their implementations in further detail.
There are also flavors of GC for particular needs, such as low-latency environments, a JVM rewritten in Scala, and the latency tuning options within the .NET environment.

Why is the default size of PermGen so small?

What would be the purpose of limiting the size of the PermGen space on a Java JVM? Why not always set it equal to the max heap size? Why does Java default to such a small value as 64MB? Are they trying to force people to notice PermGen issues in their code by doing this?
If my app uses 85MB of PermGen, then it might be safe to set it to 96MB, but why set it so small if it's really just part of the main heap? Wouldn't it be efficient to allow the JVM to use as much PermGen as the heap allows?
The PermGen is set to disappear in JDK8.
What would be the purpose of limiting the size of the Permgen space on a Java JVM?
Not exhausting resources.
Why not always set it equal to the max heap size?
The PermGen is not part of the Java heap. Besides, even if it were, it wouldn't be of much help to the application to fill the heap with class metadata and constant Strings, since you'd then get "OutOfMemoryError: Java heap space" errors instead.
Conceptually to the programmer, you could argue that a "Permanent Generation" is largely pointless. If you need to load a class or other "permanent" data and there is memory space left, then in principle you may as well just load it somewhere and not care about calling the aggregate of these items a "generation" at all.
However, the rationale is probably more that:
there is potentially a benefit (e.g. from a processor cache point of view) from having all code/class metadata near together in memory space, and to guarantee this it is easier to allocate fixed sized area(s);
similarly, memory space where code/class metadata is stored potentially has certain "special" properties (notably, you don't want it to get paged out to disk if you can help it), and the system may not be able to set such properties on memory in a very granular way, so that it is more practical to have all "special" objects together in one (or a small number of) contiguous blocks of memory;
having permanent objects all together helps avoid fragmenting the remaining memory space and again, the most practical way to do this is to allocate one contiguous block of memory of fixed size from the outset.
So as I see things, most of the time the reason for allocating a permanent "generation" is more for practical implementation reasons than because the programmer really cares terribly much.
On the other hand, the situation isn't usually terrible for the programmer either: the amount of permanent generation needed is usually predictable, so that you should be able to allocate the required amount with decent leeway. So if you find you are unexpectedly exceeding the allocation, this may well be a signal that "something serious is wrong".
N.B. It is probably the case that some of the issues that the PermGen originally was designed to solve are not such big issues on modern 64-bit processors with larger processor caches. If it is removed in future releases of Java, this is likely a sign that the JVM designers feel it has now "served its purpose".
PermGen is where class data and other static stuff (like string literals) are allocated.
You'd rather allocate memory to the Java heap for your application data (Xms and Xmx), which is where young (short-lived) and tenured objects go (the latter when the JVM realizes they need to stay around longer).
So the historic 64MB PermGen default may be arbitrary, but having to set it explicitly lets you know (and control) how much static data your application is causing the JVM to store.
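For reference (the sizes here are illustrative only), the pre-JDK8 flags for this are:
-XX:PermSize=64m -XX:MaxPermSize=256m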

Java GCs overhead: Does it matter if you have 10mb or 10gb of *referenced* objects?

The GC has to check and find out which objects can be collected. My question is whether having too many objects to check causes GC overhead, or whether the GC is smart enough to avoid iterating through all the objects to find out which ones are no longer referenced.
Yes, it does matter to the mark-and-sweep collector how many objects you have. As to the size of those objects, that could matter too: a compacting collector would have more work to do if it needed to compact 10GB worth of stuff rather than 10MB of stuff.
Having said this, modern garbage collectors are extremely sophisticated (they operate on multiple heaps, do things in the background, can use multiple cores etc). They are also highly configurable. Furthermore, a typical JVM comes equipped with multiple garbage collectors.
It is therefore hard to give meaningful, precise answers to general questions like this.
One way this kind of thing is optimized is through generational garbage collection (look in Section 4). Java has had generational collection since 1.2.
What this means is that newer objects are likely to die more quickly, which is known as 'infant mortality'. These newer objects are put in a generation that is collected more aggressively. If an object has been around for an hour, it's likely to be around for another 5 minutes, so it is put in a generation that's collected less frequently than new objects. If an object survives for some time in the more frequently collected areas, it'll be promoted to a less-frequently-collected generation.
This lets the collector avoid looking at all active objects on every sweep.
It depends on which GC algorithm is being used. In the case of mark-sweep it does matter, because mark-sweep needs to identify the roots for GC and then traverse every reachable object. Here is a link on how mark-sweep works.
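A toy sketch of the idea (my own simplification, nothing like a production collector): the mark phase walks the reference graph from the roots, so its cost grows with the number of live objects; the sweep then discards whatever was never reached:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

public class ToyMarkSweep {
    static class Obj {
        boolean marked;
        List<Obj> refs = new ArrayList<>();
    }

    // Mark: depth-first walk over the object graph from the roots.
    static void mark(Collection<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;
            o.marked = true;
            stack.addAll(o.refs);
        }
    }

    // Sweep: anything never marked is garbage; reset marks for the next cycle.
    static void sweep(List<Obj> allObjects) {
        allObjects.removeIf(o -> !o.marked);
        allObjects.forEach(o -> o.marked = false);
    }

    public static void main(String[] args) {
        Obj root = new Obj(), live = new Obj(), dead = new Obj();
        root.refs.add(live);
        List<Obj> heap = new ArrayList<>(List.of(root, live, dead));
        mark(List.of(root));
        sweep(heap);
        System.out.println("live objects: " + heap.size()); // prints 2
    }
}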
It has to iterate through all active objects to determine whether an object is still used. The G1 collector divides the heap into regions of about 1 MB (each region knows the references into it), but the performance is much the same.
When you get into multi-GB heaps, one option is to use off-heap memory which you manage yourself. Or you can use a solution like Zing, which can handle heaps of tens of GB without significant pauses.
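A minimal sketch of the off-heap idea, using a direct ByteBuffer (real off-heap solutions are far more involved; the layout below is invented):

import java.nio.ByteBuffer;

public class OffHeapDemo {
    public static void main(String[] args) {
        // 256 MB outside the Java heap: the GC never scans this memory,
        // only the small ByteBuffer object that points at it.
        ByteBuffer buf = ByteBuffer.allocateDirect(256 * 1024 * 1024);
        buf.putLong(0, 42L); // you manage the data layout yourself
        System.out.println(buf.getLong(0));
    }
}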

Why does java wait so long to run the garbage collector?

I am building a Java web app, using the Play! Framework. I'm hosting it on playapps.net. I have been puzzling for a while over the provided graphs of memory consumption. Here is a sample: [graph omitted: memory use climbs steadily toward the maximum, then drops sharply about once a day]
The graph comes from a period of consistent but nominal activity. I did nothing to trigger the falloff in memory, so I presume this occurred because the garbage collector ran as it has almost reached its allowable memory consumption.
My questions:
Is it fair for me to assume that my application does not have a memory leak, as it appears that all the memory is correctly reclaimed by the garbage collector when it does run?
(from the title) Why does Java wait until the last possible second to run the garbage collector? I am seeing significant performance degradation as memory consumption grows into the top fourth of the graph.
If my assertions above are correct, then how can I go about fixing this issue? The other posts I have read on SO seem opposed to calls to System.gc(), ranging from neutral ("it's only a request to run GC, so the JVM may just ignore you") to outright opposed ("code that relies on System.gc() is fundamentally broken"). Or am I off base here, and I should be looking for defects in my own code that is causing this behavior and intermittent performance loss?
UPDATE
I have opened a discussion on PlayApps.net pointing to this question and mentioning some of the points here; specifically @Affe's comment regarding the settings for a full GC being set very conservatively, and @G_H's comment about settings for the initial and max heap size.
Here's a link to the discussion, though you unfortunately need a playapps account to view it.
I will report the feedback here when I get it; thanks so much everyone for your answers, I've already learned a great deal from them!
Resolution
Playapps support, which is still great, didn't have many suggestions for me; their only thought was that if I was using the cache extensively this might be keeping objects alive longer than need be, but that isn't the case. I still learned a ton (woo hoo!), and I gave @Ryan Amos the green check as I took his suggestion of calling System.gc() every half day, which for now is working fine.
Any detailed answer is going to depend on which garbage collector you're using, but there are some things that are basically the same across all (modern, sun/oracle) GCs.
Every time you see the usage in the graph go down, that is a garbage collection. The only way heap gets freed is through garbage collection. The thing is there are two types of garbage collections, minor and full. The heap gets divided into two basic "areas." Young and tenured. (There are lots more subgroups in reality.) Anything that is taking up space in Young and is still in use when the minor GC comes along to free up some memory, is going to get 'promoted' into tenured. Once something makes the leap into tenured, it sits around indefinitely until the heap has no free space and a full garbage collection is necessary.
So one interpretation of that graph is that your young generation is fairly small (by default it can be a fairly small % of total heap on some JVMs) and you're keeping objects "alive" for comparatively very long times. (Perhaps you're holding references to them in the web session?) So your objects 'survive' garbage collections until they get promoted into tenured space, where they stick around indefinitely until the JVM is well and truly out of memory.
Again, that's just one common situation that fits with the data you have. Would need full details about the JVM configuration and the GC logs to really tell for sure what's going on.
Java won't run the garbage collector until it has to, because garbage collection slows things down quite a bit and shouldn't be run frequently. I think you would be OK to schedule a collection more frequently, such as every 3 hours. If an application never consumes the full memory, there should be no reason to ever run the garbage collector, which is why Java only runs it when memory use is very high.
So basically, don't worry about what others say: do what works best. If you find performance improvements from running the garbage collector at 66% memory, do it.
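If you do go down that route (as the resolution above eventually did), a sketch of scheduling a collection might look like this; the interval is invented, and keep in mind that System.gc() is only a hint the JVM may ignore:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class GcScheduler {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Request a collection every 12 hours; the JVM is free to ignore it.
        scheduler.scheduleAtFixedRate(System::gc, 12, 12, TimeUnit.HOURS);
    }
}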
I am noticing that the graph isn't sloping strictly upward until the drop, but has smaller local variations. Although I'm not certain, I don't think memory use would show these small drops if there were no garbage collection going on.
There are minor and major collections in Java. Minor collections occur frequently, whereas major collections are rarer and diminish performance more. Minor collections probably tend to sweep up stuff like short-lived object instances created within methods. A major collection will remove a lot more, which is what probably happened at the end of your graph.
Now, some answers that were posted while I'm typing this give good explanations regarding the differences in garbage collectors, object generations and more. But that still doesn't explain why it would take so absurdly long (nearly 24 hours) before a serious cleaning is done.
Two things of interest that can be set for a JVM at startup are the maximum allowed heap size and the initial heap size. The maximum is a hard limit: once you reach it, further garbage collection doesn't reduce memory usage, and if you need to allocate new space for objects or other data, you'll get an OutOfMemoryError. However, internally there's a soft limit as well: the current heap size. A JVM doesn't immediately gobble up the maximum amount of memory. Instead, it starts at your initial heap size and increases the heap when needed. Think of it a bit like the RAM of your JVM, which can increase dynamically.
If the actual memory use of your application starts to reach the current heap size, a garbage collection will typically be instigated. This might reduce the memory use, so an increase in heap size isn't needed. But it's also possible that the application currently does need all that memory and would exceed the heap size. In that case, it is increased provided that it hasn't already reached the maximum set limit.
Now, what might be happening in your case is that the initial heap size is set to the same value as the maximum. If that is so, the JVM will immediately seize all that memory, and it will take a very long time before the application has accumulated enough garbage to reach the heap size in memory usage. But at that moment you'll see a large collection. Starting with a small enough heap and allowing it to grow keeps the memory use limited to what's needed.
This is assuming that your graph shows heap use and not allocated heap size. If that's not the case and you are actually seeing the heap itself grow like this, something else is going on. I'll admit I'm not savvy enough regarding the internals of garbage collection and its scheduling to be absolutely certain of what's happening here, most of this is from observation of leaking applications in profilers. So if I've provided faulty info, I'll take this answer down.
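To see the difference described above, compare two invocations (the values and the MyApp class are illustrative only):
java -Xms64m -Xmx1g MyApp   (heap starts small and grows on demand, with collections along the way)
java -Xms1g -Xmx1g MyApp    (all heap is claimed up front, so the first big collection comes much later)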
As you might have noticed, this does not affect you. Garbage collection only kicks in when the JVM feels there is a need for it to run, and this happens for the sake of optimization: there's no use in doing many small collections if you can do a single full collection and a complete cleanup.
The current JVM contains some really interesting algorithms, and garbage collection itself is divided into 3 different regions. You can find a lot more about this here; here's a sample:
Three types of collection algorithms
The HotSpot JVM provides three GC algorithms, each tuned for a specific type of collection within a specific generation. The copy (also known as scavenge) collection quickly cleans up short-lived objects in the new generation heap. The mark-compact algorithm employs a slower, more robust technique to collect longer-lived objects in the old generation heap. The incremental algorithm attempts to improve old generation collection by performing robust GC while minimizing pauses.
Copy/scavenge collection
Using the copy algorithm, the JVM reclaims most objects in the new generation object space (also known as eden) simply by making small scavenges -- a Java term for collecting and removing refuse. Longer-lived objects are ultimately copied, or tenured, into the old object space.
Mark-compact collection
As more objects become tenured, the old object space begins to reach maximum occupancy. The mark-compact algorithm, used to collect objects in the old object space, has different requirements than the copy collection algorithm used in the new object space.
The mark-compact algorithm first scans all objects, marking all reachable objects. It then compacts all remaining gaps of dead objects. The mark-compact algorithm occupies more time than the copy collection algorithm; however, it requires less memory and eliminates memory fragmentation.
Incremental (train) collection
The new generation copy/scavenge and the old generation mark-compact algorithms can't eliminate all JVM pauses. Such pauses are proportional to the number of live objects. To address the need for pauseless GC, the HotSpot JVM also offers incremental, or train, collection.
Incremental collection breaks up old object collection pauses into many tiny pauses even with large object areas. Instead of just a new and an old generation, this algorithm has a middle generation comprising many small spaces. There is some overhead associated with incremental collection; you might see as much as a 10-percent speed degradation.
The -Xincgc and -Xnoincgc parameters control how you use incremental collection. The next release of HotSpot JVM, version 1.4, will attempt continuous, pauseless GC that will probably be a variation of the incremental algorithm. I won't discuss incremental collection since it will soon change.
This generational garbage collector is one of the most efficient solutions we have for the problem nowadays.
I had an app that produced a graph like that and acted as you describe. I was using the CMS collector (-XX:+UseConcMarkSweepGC). Here is what was going on in my case.
I did not have enough memory configured for the application, so over time I was running into fragmentation problems in the heap. This caused GCs with greater and greater frequency, but it did not actually throw an OOME or fail out of CMS to the serial collector (which it is supposed to do in that case), because the stats it keeps only count application paused time (when GC blocks the world); application concurrent time (when GC runs alongside application threads) is ignored for those calculations. I tuned some parameters, mainly gave it a whole lot more heap (with a very large new space), set -XX:CMSFullGCsBeforeCompaction=1, and the problem stopped occurring.
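For reference (the sizes here are placeholders, not the actual values used), the resulting tuning was along these lines:
java -XX:+UseConcMarkSweepGC -Xmx8g -Xmn2g -XX:CMSFullGCsBeforeCompaction=1 MyApp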
Perhaps you do have a memory leak that's cleared every 24 hours.
