Why is Java 10 recommended if you're using the G1 GC?

Java 10 reduces Full GC pause times by iteratively improving on its existing algorithm.
As I understood it, G1 does not run its collection cycles concurrently with our application. It will still pause the application periodically, and Full GC pauses increase with larger heap sizes.
Then how does it improve performance? Can anyone explain this?

Because it wasn't until Java 10 that G1GC became fully parallel in the stop-the-world full GC cycle. As per JEP 307: Parallel Full GC for G1, this improves the latency of the worst-case scenario:
The G1 garbage collector is designed to avoid full collections, but when the concurrent collections can't reclaim memory fast enough a fall back full GC will occur. The current implementation of the full GC for G1 uses a single threaded mark-sweep-compact algorithm. We intend to parallelize the mark-sweep-compact algorithm and use the same number of threads as the Young and Mixed collections do. The number of threads can be controlled by the -XX:ParallelGCThreads option, but this will also affect the number of threads used for Young and Mixed collections.
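For illustration, a hypothetical launch line pinning the GC thread count (-XX:+UseG1GC and -XX:ParallelGCThreads are standard HotSpot options; the heap sizes and jar name are made up). Since JDK 10, the same thread count drives full collections as well as Young and Mixed ones:
java -XX:+UseG1GC -XX:ParallelGCThreads=8 -Xms4g -Xmx4g -jar app.jar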

In fact, Java 11 is recommended if you use G1GC, because a lot of work was done on it to reduce its footprint and lower its pauses compared to 10.
A summary was done on the hotspot-gc-use mailing list around the various improvements made on 11, 10 and 9 for G1GC, you can find it at this link:
http://mail.openjdk.java.net/pipermail/hotspot-gc-use/2018-June/002759.html
A quick summary from this post on the list:
[...] I would like to point out that overall, with G1, compared to JDK8 it is possible to get 60% lower pause times "for free" on x64 processors (probably more on ARM/PPC due to mentioned specific changes), at a highly reduced memory footprint.

Related

Why is Garbage-First (G1) targeted for multiprocessor machines with large memories?

According to:
9 Garbage-First Garbage Collector
and:
G1: Java's Garbage First Garbage Collector
G1 is targeted for multiprocessor machines with large memories.
Those two papers (and other articles on the web) do not describe why we need:
a. large memories
b. multiprocessors (I assume this need is due to concurrent & parallel execution)
What is the technical explanation for those requirements?
It's the other way around. G1 is not targeted at large memories as such; rather, if your application demands a large heap size, G1 is effective.
Why would your application demand a large heap? That depends on the business requirements and the specific needs of the application. You may load a huge set of master data, or you may use in-memory caching to reduce response times. Think of big data applications (Spark, Hadoop) that process terabytes of data and use memory for processing.
Multiprocessor machines have more processing power and are effective for parallel execution of different tasks. Large-heap applications obviously demand more processing power.
By setting a max pause time goal, you let G1GC try to meet that goal. Compared to other algorithms, G1GC by default spends about 10% of its time in garbage collection activities. You have to fine-tune the parameters properly to achieve your pause time goals.
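For example, a hypothetical launch line setting such a pause time goal (both flags are standard HotSpot options; 200 ms is G1's documented default, shown here explicitly):
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar app.jar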
This related question is helpful to get some more insight into G1GC: Java 7 (JDK 7) garbage collection and documentation on G1
G1 is the only collection algorithm in the HotSpot VM that can deal with very large heaps efficiently. However, a large heap is NOT a requirement; rather, G1 is built for situations where your application needs a very large heap. In low-heap situations, it is still outperformed by older algorithms. The same is true for the number of processors.

Multithreading Performance of JVM's Memory Allocator

I have a multi-threaded program that does heavy memory allocation. The performance is fine on a quad-core i7 CPU and the speed up is around 3.9X. But, when the program is executed on a 12-core Xeon CPU, the speedup value does not go beyond 5.5X.
I should mention that the GC does not seem to be the problem, because VisualGC reports below 1 second of GC time after more than 100 seconds of execution. The main memory usage belongs to the Eden section of the heap, and the other sections are hardly used. The code does massive int array allocations and performs some arithmetic operations on them. It is somewhat like state-space exploration, and the allocation of new instances cannot be avoided.
As you know, the standard memory allocators of both Windows and Linux show unsatisfactory performance for multi-threaded programs, and good alternatives like tcmalloc and Hoard are available for C/C++. Since the parallel section consists of fully independent tasks and the GC time is very low, I suspected that the main reason is the poor performance of the JVM's memory allocator when too many threads compete for allocation.
Does anybody have experience with the JVM's allocator in massively multithreaded programs, and can you give advice on how I can overcome this problem?
P.S. I have tested the code using JVM 6, 7, and 8. The allocation rate is also very high (around 10 million allocations per second), but as I mentioned, the Eden section is heavily used and the working set is less than a gigabyte.
Is it the case that the Eden space is smaller than it should be for your workload? If so, consider using
-XX:NewRatio=1 or another appropriate value.
To ascertain that, use
-XX:+PrintTenuringDistribution
to see the distribution.
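A hypothetical launch line combining both suggestions (standard HotSpot flags; the heap sizes and class name are made up):
java -XX:NewRatio=1 -XX:+PrintTenuringDistribution -Xms2g -Xmx2g MyApp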

Garbage collector for young generation

Have a short question: is it true that all GCs in JDK 7 (other than G1) always use stop-the-world for young generation collection?
Thanks
For OpenJDK, JRockit, IBM JVM, and Sun/Oracle JDK, the young collection is always stop-the-world for every available collector.
The only JVM I know of which does not have a stop-the-world collector is Azul's Zing. (Not free.)
While OpenJDK/HotSpot has CMS, it is only mostly concurrent: there are still stop-the-world portions, and in some cases CMS will fall back to a Full GC, which is stop-the-world.
AFAIK, it is hard to find real-world examples where G1 is faster in terms of pause time than CMS; however, it is improving all the time.
Almost all Java garbage collectors have some sort of stop-the-world phase, where all the Java threads (not native threads) are suspended while waiting for exclusive system operations to complete. This state is sometimes referred to as a safepoint.
Modern garbage collectors run concurrently with the application threads, meaning the garbage collector performs its work at the same time as the application threads are running. During the garbage collection process there are phases where exclusive access to memory is needed; in those phases the application's Java threads go into the safepoint state.
One alternative for getting rid of stop-the-world garbage collections is to go for the Zing JVM with the C4 collector from Azul Systems. Its implementation has a low-pause approach with no stop-the-world collections at all; instead it uses a concurrent compacting approach with no stop-the-world phase.
No, it is not true. Java 7 also supports the older Concurrent Mark Sweep (CMS) collector. CMS is a low-pause collector, just like G1.
UPDATE
Apparently CMS is only for the tenured generation ... according to the blog posting that you found at http://blogs.oracle.com/jonthecollector/entry/our_collectors
So that means that your proposition is in fact true.
One could argue that all of the low-pause collectors:
- need to stop the mutator threads to do some phases of their work, and
- may fall back to a Full GC using the mark/sweep collector when they can't keep up.
However, there is a qualitative difference between "mostly concurrent" collectors like G1 and CMS, and collectors that suspend non-GC threads for the entire duration of the collection process. The latter is what is normally meant by a "stop the world" strategy.
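As an illustration, these flags make the stop-the-world portions visible in the GC log (JDK 7/8 flag names; unified -Xlog logging replaced them in JDK 9; the log file and class name are made up):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime -Xloggc:gc.log MyApp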

Why will a full GC on a small heap take 5 seconds?

I am running a J2EE application on a 3-year-old Solaris system with a used heap of about 300 MB. From the GC logs I have seen that the full GC that is triggered a few times a day takes about 5 seconds and recovers about 200 MB every time. What could be the reason for a full GC taking such a long time on such a small heap?
I run Java 1.6.0_37.
A slow full GC (and minor GC, for that matter) is primarily a result of a poor hardware setup, secondly of software configuration (i.e. GC ergonomics), and lastly of the number of objects residing in the heap.
Looking at the hardware: what CPU model and vendor are you using on your Solaris? Is it an SMP system with more than one core? Do you have more than one thread per core? Does your GC utilize all available virtual processors on the system, i.e. is the garbage collection work distributed across more than one processor?
Another situation that makes a full GC slow is when part of the heap has been swapped out of main memory. In that case the swapped-out pages must be swapped back in during the garbage collection, which can be a rather time-consuming process. If that happens, you do not have sufficient physical memory installed on the machine.
Do any other applications on the system compete for the same physical resources, i.e. CPU and memory?
Looking at the GC ergonomics: which collector are you using? I would recommend the parallel throughput collector or the G1 collector, using multiple collector threads. I would also recommend a NUMA-aware configuration.
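A hypothetical launch line along those lines (all standard HotSpot flags on Java 6; the thread count is a made-up value that should match your hardware):
java -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=8 -XX:+UseNUMA -jar app.jar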
Some general rules:
The better the hardware and GC ergonomics, the faster the individual garbage collections will perform.
The fewer and smaller the objects the application creates, the less often the garbage collector will run.
The fewer long-lived objects created, the less often the full garbage collector will run.
For more information about GC ergonomics:
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html

Java very large heap sizes [closed]

Does anyone have experience with using very large heaps, 12 GB or higher in Java?
Does the GC make the program unusable?
What GC params do you use?
Which JVM, Sun's or BEA's, would be better suited for this?
Which platform, Linux or Windows, performs better under such conditions?
In the case of Windows is there any performance difference to be had between 64 bit Vista and XP under such high memory loads?
If your application is not interactive, and GC pauses are not an issue for you, there shouldn't be any problem for 64-bit Java to handle very large heaps, even in hundreds of GBs. We also haven't noticed any stability issues on either Windows or Linux.
However, when you need to keep GC pauses low, things get really nasty:
Forget the default throughput, stop-the-world GC. It will pause your application for several tens of seconds for moderate heaps (< ~30 GB) and several minutes for large ones (> ~30 GB). And buying faster DIMMs won't help.
The best bet is probably the CMS collector, enabled by -XX:+UseConcMarkSweepGC. The CMS garbage collector stops the application only for the initial marking and remarking phases. For very small heaps (< 4 GB) this is usually not a problem, but for an application that creates a lot of garbage and has a large heap, the remarking phase can take quite a long time - usually much less than a full stop-the-world collection, but it can still be a problem for very large heaps.
When the CMS garbage collector is not fast enough to finish its work before the tenured generation fills up, it falls back to standard stop-the-world GC. Expect pauses of ~30 seconds or more for heaps around 16 GB. You can try to avoid this by keeping the long-lived garbage production rate of your application as low as possible. Note that the more cores your application runs on, the bigger this problem gets, because CMS utilizes only one core. Obviously, beware: there is no guarantee that CMS does not fall back to the STW collector. And when it does, it usually happens at peak load, and your application is dead for several seconds. You would probably not want to sign an SLA for such a configuration.
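As a sketch of how such a CMS setup is commonly tuned (all standard HotSpot flags; the 70% threshold is an assumed example value that starts concurrent cycles earlier, reducing the chance of the fallback described above):
java -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms16g -Xmx16g -jar app.jar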
Well, there is that new G1 thing. It is theoretically designed to avoid the problems with CMS, but we have tried it and observed that:
Its throughput is worse than that of CMS.
It theoretically should avoid collecting the popular blocks of memory first; however, it soon reaches a state where almost all blocks are "popular", and the assumptions it is based on simply stop working.
Finally, the stop-the-world fallback still exists for G1; ask Oracle when that code is supposed to be run. If they say "never", ask them why the code is there. So IMHO G1 really doesn't make the huge-heap problem of Java go away, it only makes it (arguably) a little smaller.
If you have the bucks for a big server with big memory, you probably also have the bucks for good, commercial, hardware-accelerated, pauseless GC technology, like the one offered by Azul. We have one of their servers with 384 GB RAM and it really works fine - no pauses, zero lines of stop-the-world code in the GC.
Write the damn part of your application that requires lots of memory in C++, like LinkedIn did with social graph processing. You still won't avoid all the problems by doing this (e.g. heap fragmentation), but it would definitely be easier to keep the pauses low.
I am CEO of Azul Systems so I am obviously biased in my opinion on this topic! :) That being said...
Azul's CTO, Gil Tene, has a nice overview of the problems associated with Garbage Collection and a review of various solutions in his Understanding Java Garbage Collection and What You Can Do about It presentation, and there's additional detail in this article: http://www.infoq.com/articles/azul_gc_in_detail.
Azul's C4 Garbage Collector in our Zing JVM is both parallel and concurrent, and uses the same GC mechanism for both the new and old generations, working concurrently and compacting in both cases. Most importantly, C4 has no stop-the-world fallback. All compaction is performed concurrently with the running application. We have customers running very large heaps (hundreds of GBytes) with worst-case GC pause times of <10 msec, and depending on the application, often less than 1-2 msec.
The problem with CMS and G1 is that at some point Java heap memory must be compacted, and both of those garbage collectors stop-the-world/STW (i.e. pause the application) to perform compaction. So while CMS and G1 can push out STW pauses, they don't eliminate them. Azul's C4, however, does completely eliminate STW pauses and that's why Zing has such low GC pauses even for gigantic heap sizes.
We have an application that we allocate 12-16 GB for, but it really only reaches 8-10 GB during normal operation. We use the Sun JVM (we tried IBM's and it was a bit of a disaster, but that might just have been ignorance on our part... I have friends who swear by it, and they work at IBM). As long as you give your app breathing room, the JVM can handle large heap sizes without too much GC. Plenty of 'extra' memory is key.
Linux is almost always more stable than Windows, and when it is not stable it is a hell of a lot easier to figure out why. Solaris is rock solid as well, and you get DTrace too :)
With these kinds of loads, why on earth would you be using Vista or XP? You are just asking for trouble.
We don't do anything fancy with the GC params. We do set the minimum allocation equal to the maximum so the JVM is not constantly trying to resize, but that is it.
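In flag form, that minimum-equals-maximum setup looks like this (a sketch; the 12 GB figure is taken from the answer above, and the jar name is made up):
java -Xms12g -Xmx12g -jar app.jar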
I have used over 60 GB heap sizes on two different applications under Linux and Solaris respectively using 64-bit versions (obviously) of the Sun 1.6 JVM.
I never encountered garbage collection problems with the Linux-based application except when pushing up near the heap size limit. To avoid the thrashing problems inherent to that scenario (too much time spent doing garbage collection), I simply optimized memory usage throughout the program so that peak usage was about 5-10% below a 64 GB heap size limit.
With a different application running under Solaris, however, I encountered significant garbage-collection problems which made it necessary to do a lot of tweaking. This consisted primarily of three steps:
Enabling/forcing use of the parallel garbage collector via the -XX:+UseParallelGC -XX:+UseParallelOldGC JVM options, as well as controlling the number of GC threads used via the -XX:ParallelGCThreads option. See "Java SE 6 HotSpot Virtual Machine Garbage Collection Tuning" for more details.
Extensive and seemingly ridiculous setting of local variables to "null" after they are no longer needed. Most of these were variables that should have been eligible for garbage collection after going out of scope, and they were not memory leak situations since the references were not copied. However, this "hand-holding" strategy to aid garbage collection was inexplicably necessary for some reason for this application under the Solaris platform in question.
Selective use of the System.gc() method call in key code sections after extensive periods of temporary object allocation. I'm aware of the standard caveats against using these calls, and the argument that they should normally be unnecessary, but I found them to be critical in taming garbage collection when running this memory-intensive application.
The three above steps made it feasible to keep this application contained and running productively at around 60 GB heap usage instead of growing out of control up into the 128 GB heap size limit that was in place. The parallel garbage collector in particular was very helpful since major garbage-collection cycles are expensive when there are a lot of objects, i.e., the time required for major garbage collection is a function of the number of objects in the heap.
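A minimal, self-contained sketch of the nulling and System.gc() pattern from steps 2 and 3 above (illustrative only; the class name, method, and array size are hypothetical, and the usual caveats about calling System.gc() still apply):
public class HandHolding {
    static long processBatch() {
        int[] scratch = new int[1 << 20];  // large temporary working array
        long sum = 0;
        for (int v : scratch) sum += v;    // some arithmetic over the array
        scratch = null;                    // step 2: explicitly drop the reference
        System.gc();                       // step 3: selective GC hint after a burst of temporaries
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(processBatch());
    }
}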
I cannot comment on other platform-specific issues at this scale, nor have I used non-Sun (Oracle) JVMs.
12 GB should be no problem with a decent JVM implementation such as Sun's HotSpot.
I would advise you to use the Concurrent Mark and Sweep collector (-XX:+UseConcMarkSweepGC) when using a Sun VM. Otherwise you may face long "stop the world" phases, where all threads are stopped during a GC.
The OS should not make a big difference for GC performance.
You will of course need a 64-bit OS and a machine with enough physical RAM.
I also recommend taking a heap dump, to see where memory usage can be improved in your app, and analyzing the dump in something such as Eclipse's MAT. There are a few articles on the MAT page about getting started with looking for memory leaks. You can use jmap to obtain the dump with something such as ...
jmap -heap:format=b pid
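(That is the old jmap syntax; on more recent JDKs a binary heap dump is typically taken with the -dump option instead, e.g. jmap -dump:format=b,file=heap.hprof <pid>, where the file name is arbitrary.)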
As mentioned above, if you have a non-interactive program, the default (compacting) garbage collector (GC) should work well. If you have an interactive program, and you (1) don't allocate memory faster than the GC can keep up, and (2) don't create temporary objects (or collections of objects) that are too big (relative to the total maximum JVM memory) for the GC to work around, then CMS is for you.
You run into trouble if you have an interactive program where the GC doesn't have enough breathing room. That's true regardless of how much memory you have, but the more memory you have, the worse it gets. That's because when you get too low on memory, CMS will run out of memory, whereas the compacting GCs (including G1) will pause everything until all the memory has been checked for garbage. This stop-the-world pause gets bigger the more memory you have. Trust me, you don't want your servlets to pause for over a minute. I wrote a detailed StackOverflow answer about these pauses in G1.
Since then, my company has switched to Azul Zing. It still can't handle the case where your app really needs more memory than you've got, but up until that very moment it runs like a dream.
But, of course, Zing isn't free and its special sauce is patented. If you have far more time than money, try rewriting your app to use a cluster of JVMs.
On the horizon, Oracle is working on a high-performance GC for multi-gigabyte heaps. However, as of today that's not an option.
If you switch to 64-bit you will use more memory: pointers become 8 bytes instead of 4. If you are creating lots of objects this can be noticeable, since every object is accessed through a reference (pointer).
I have recently allocated 15 GB of memory in Java using the Sun 1.6 JVM with no problems, though it is all allocated only once; not much more memory is allocated or released after the initial amount. This was on Linux, but I imagine the Sun JVM will work just as well on 64-bit Windows.
You should try running visualgc against your app. It's a heap visualization tool that's part of the jvmstat download at http://java.sun.com/performance/jvmstat/
It is a lot easier than reading GC logs.
It quickly helps you understand how the parts (generations) of the heap are working. While your total heap may be 10 GB, the various parts of the heap will be much smaller. GCs in the Eden portion of the heap are relatively cheap, while full GCs in the old generation are expensive. Sizing your heap so that the Eden is large and the old generation is hardly ever touched is a good strategy. This may result in a very large overall heap, but what the heck: if the JVM never touches a page, it's just a virtual page and doesn't have to take up RAM.
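For instance, a hypothetical sizing along those lines (standard HotSpot flags; the values are made up, with -Xmn giving the young generation, and thus the Eden, most of the heap):
java -Xms10g -Xmx10g -Xmn8g MyApp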
A couple of years ago, I compared JRockit and the Sun JVM for a 12G heap. JRockit won, and Linux hugepages support made our test run 20% faster. YMMV as our test was very processor/memory intensive and was primarily single-threaded.
Here's an article on GC from one of the Java Champions:
http://kirk.blog-city.com/is_your_concurrent_collector_failing_you.htm
Kirk, the author, writes:
"Send me your GC logs
I'm currently interested in studying Sun JVM produced GC logs. Since these logs contain no business-relevant information, it should ease concerns about protecting proprietary information. All I ask is that with the log you mention the OS, complete version information for the JRE, and any heap/GC-related command line switches that you have set. I'd also like to know if you are running Grails/Groovy, JRuby, Scala or something other than or alongside Java. The best setting is -Xloggc:. Please be aware that this log does not roll over when it reaches your OS size limit. If I find anything interesting I'll be happy to give you a very quick synopsis in return."
An article from Sun on Java 6 can help you: https://www.oracle.com/java/technologies/javase/troubleshooting-javase.html
The max memory that XP can address is 4 GB (here). So you may not want to use XP for that (use a 64-bit OS).
Sun has had an Itanium 64-bit JVM for a while, although Itanium is not a popular destination. The Solaris and Linux 64-bit JVMs should be what you're after.
Some questions:
1) Is your application stable?
2) Have you already tested the app in a 32-bit JVM?
3) Is it OK to run multiple JVMs on the same box?
I would expect Windows' 64-bit OS to get stable in about a year or so, but until then Solaris/Linux might be a better bet.
