Boost JUnit performance using GraalVM

I have read some high-level articles about the new GraalVM and thought it would be a good idea to use it to enhance JUnit test performance, especially for big test suites that run in forked mode.
Following the SO question "Does GraalVM JVM support java 11?", I added the following to the VM arguments of a unit test run configuration in my Eclipse (jee-2019-12, JUnit4):
-XX:+UnlockExperimentalVMOptions -XX:+UseJVMCICompiler
Effect: The unit test takes somewhat longer than without these switches (2800 ms with, 2200 ms without, reproducible).
Did I miss something? Or did I misunderstand the promises of enhanced boot time in GraalVM?

Yes, unfortunately, it feels like there's some misunderstanding at play here. I'll try to elaborate on a number of issues regarding performance and GraalVM.
GraalVM is a polyglot runtime that can run JVM applications. Normally it does so by running the Java HotSpot VM (the same as in OpenJDK, for example) with the top-tier optimizing just-in-time (JIT) compiler replaced by its own GraalVM compiler.
Simplifying somewhat: during the run, the JVM loads the class files, verifies them, starts interpreting them, then compiles them with a series of compilers that tend to go from the fastest to compile to the most optimizing. So the longer your application runs and the more you use the same methods, the more they get progressively compiled into better and better machine code.
The GraalVM compiler is really good at optimizing code, so when your application runs long enough for it to get to work, the result is usually better than other compilers can show. This leads to better peak performance, which is great for medium- and long-running workloads.
Your unit test run takes 2 seconds, which is really not that much time to execute code a lot, gather a profile, and use the optimizing compiler. It might also be that the particular code patterns and the workload are really well suited for C2 (HotSpot's default top-tier JIT), so it's hard to do better. Remember, C2 is an excellent JIT that has been developed for at least two decades, and its results are really good too.
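To make the warm-up effect tangible, here is a minimal, hedged sketch (class and method names are my own, not from the question) that times the same work repeatedly in one JVM; on a typical HotSpot or GraalVM setup the later rounds run noticeably faster once the JIT has kicked in:

public class WarmupDemo {
    // Deliberately non-trivial work for the JIT to optimize.
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += Integer.bitCount(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 10; round++) {
            long start = System.nanoTime();
            long result = work();
            long micros = (System.nanoTime() - start) / 1_000;
            // Early rounds run interpreted or lightly compiled; later rounds
            // execute progressively better machine code.
            System.out.println("round " + round + ": " + micros + " us (result " + result + ")");
        }
    }
}

A 2-second test suite may finish before the later, fully optimized rounds are ever reached, which is exactly the situation described above.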
Now there's also another option that GraalVM gives you: GraalVM native images, which allow you to compile your code ahead of time (AOT) to a native binary that does not depend on the JVM and does not load, verify, or initialize classes, so the startup of such a binary, until it gets to do useful "business" work, is much better.
This is a very interesting option for shorter running workloads or resource constrained environments (the binary doesn't need to do JIT compilation, so it doesn't need resources for it, making runtime resource consumption smaller).
However, to use this approach you need to compile your application with the native image utility from GraalVM, and that compilation alone can take longer than your whole workload that runs in 2 seconds.
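For reference, building such a binary looks roughly like this; a sketch assuming GraalVM's native-image utility is on the PATH, with MyTests being a hypothetical main class (the name of the generated executable can differ between versions):

javac MyTests.java
native-image MyTests
./mytests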
Now, in the setup that you're describing, you're not using the GraalVM distribution, but enabling the GraalVM compiler in your OpenJDK (I assume) distribution. The options you specify turn on the GraalVM compiler as the top-tier JIT compiler. There are two main differences at play compared to what you'd get when running java from the GraalVM distribution:
The compiler is not up-to-date: at some point in time the GraalVM compiler sources are pulled into the OpenJDK project, and that's how they end up in your distribution.
The GraalVM compiler is written in Java, and in your setup it is executed as normal Java code, so it might first need to JIT-compile itself. This leads to a longer warm-up phase, a JIT profile somewhat polluted with the compiler's own code, and so on.
In the GraalVM distribution, which I'd encourage you to try for this experiment, the GraalVM compiler is up-to-date and is by default precompiled as a shared library using the GraalVM native image technology, so at runtime it doesn't need to be JIT-compiled, and its warmup is much closer to the characteristics of C2.
Still, 2 seconds might not be enough time for the optimizing compiler to show major differences. It could also be that the tests run a lot of code only once, so the body of hot code that gets JIT-compiled is not significant enough.

Related

Why is it important that Java (and other JVM languages) is highly portable? [duplicate]

I've been thinking about it lately, and it seems to me that most advantages attributed to JIT compilation should more or less be credited to the intermediate format instead, and that JIT compilation in itself is not a particularly good way to generate code.
So these are the main pro-JIT compilation arguments I usually hear:
Just-in-time compilation allows for greater portability. Isn't that attributable to the intermediate format? I mean, nothing keeps you from compiling your virtual bytecode into native code once you've got it on your machine. Portability is an issue in the 'distribution' phase, not during the 'running' phase.
Okay, then what about generating code at runtime? Well, the same applies. Nothing keeps you from integrating a just-in-time compiler for a real just-in-time need into your native program.
But the runtime compiles it to native code just once anyway, and stores the resulting executable in some sort of cache somewhere on your hard drive. Yeah, sure. But it optimized your program under time constraints, and it's not going to make it any better from there on. See the next paragraph.
It's not like ahead-of-time compilation has no advantages either. Just-in-time compilation has time constraints: you can't keep the end user waiting forever while your program launches, so it has to make a tradeoff somewhere. Most of the time they just optimize less. A friend of mine had profiling evidence that inlining functions and unrolling loops "manually" (obfuscating source code in the process) had a positive impact on performance on his C# number-crunching program; doing the same on my side, with my C program filling the same task, yielded no positive results, and I believe this is due to the extensive transformations my compiler was allowed to make. (A sketch of that kind of manual transformation follows below.)
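To illustrate the kind of manual transformation meant here, a hedged Java sketch of four-way loop unrolling (names are mine; note that reassociating floating-point additions can change results slightly, which is one reason compilers are conservative about doing this themselves):

public class UnrollDemo {
    static double sumPlain(double[] a) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    // Manually unrolled by four, with independent accumulators.
    static double sumUnrolled(double[] a) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        int i = 0;
        for (; i + 3 < a.length; i += 4) {
            s0 += a[i]; s1 += a[i + 1]; s2 += a[i + 2]; s3 += a[i + 3];
        }
        for (; i < a.length; i++) s0 += a[i]; // leftover elements
        return s0 + s1 + s2 + s3;
    }

    public static void main(String[] args) {
        double[] data = new double[1_000_001];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5;
        System.out.println(sumPlain(data) + " vs " + sumUnrolled(data));
    }
}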
And yet we're surrounded by jitted programs. C# and Java are everywhere, Python scripts can compile to some sort of bytecode, and I'm sure a whole bunch of other programming languages do the same. There must be a good reason that I'm missing. So what makes just-in-time compilation so superior to ahead-of-time compilation?
EDIT To clear some confusion, maybe it would be important to state that I'm all for an intermediate representation of executables. This has a lot of advantages (and really, most arguments for just-in-time compilation are actually arguments for an intermediate representation). My question is about how they should be compiled to native code.
Most runtimes (or compilers for that matter) will prefer to either compile them just-in-time or ahead-of-time. As ahead-of-time compilation looks like a better alternative to me because the compiler has more time to perform optimizations, I'm wondering why Microsoft, Sun and all the others are going the other way around. I'm kind of dubious about profiling-related optimizations, as my experience with just-in-time compiled programs displayed poor basic optimizations.
I used an example with C code only because I needed an example of ahead-of-time compilation versus just-in-time compilation. The fact that C code wasn't emitted to an intermediate representation is irrelevant to the situation, as I just needed to show that ahead-of-time compilation can yield better immediate results.
Greater portability: the deliverable (byte-code) stays portable.
At the same time, more platform-specific: because the JIT compilation takes place on the same system that the code runs on, it can be very, very finely tuned for that particular system. If you do ahead-of-time compilation (and still want to ship the same package to everyone), you have to compromise.
Improvements in compiler technology can have an impact on existing programs. A better C compiler does not help you at all with programs already deployed. A better JIT compiler will improve the performance of existing programs: the Java code you wrote ten years ago will run faster today.
Adapting to run-time metrics: a JIT compiler can not only look at the code and the target system, but also at how the code is used. It can instrument the running code and make decisions about how to optimize according to, for example, what values the method parameters usually happen to have.
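As a concrete illustration of that last point, here is a hedged sketch (the types are my own invention): the call below is virtual in the bytecode, but a JIT that has only ever observed one receiver type can devirtualize and inline it, guarded by a cheap type check.

interface Shape { double area(); }

final class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class Devirtualize {
    public static void main(String[] args) {
        Shape[] shapes = new Shape[1_000_000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i % 10);

        double total = 0;
        for (Shape s : shapes) {
            // Virtual in the bytecode, but the profile only ever sees Circle here,
            // so the JIT can inline area() directly; an AOT compiler would need
            // whole-program knowledge to do the same safely.
            total += s.area();
        }
        System.out.println(total);
    }
}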
You are right that JIT adds to start-up cost, and so there is a time constraint for it, whereas ahead-of-time compilation can take all the time that it wants. This makes JIT more appropriate for server-type applications, where start-up time is not so important and a "warm-up phase" before the code gets really fast is acceptable.
I suppose it would be possible to store the result of a JIT compilation somewhere, so that it could be re-used the next time. That would give you "ahead-of-time" compilation for the second program run. Maybe the clever folks at Sun and Microsoft are of the opinion that a fresh JIT is already good enough and the extra complexity is not worth the trouble.
The ngen tool page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Executables that are compiled ahead-of-time typically have the following benefits:
Native images load faster because they don't have as many startup activities, and they require less memory (the memory otherwise required by the JIT compiler);
Native images can share library code, while JIT-compiled images cannot.
Just-in-time compiled executables typically have the upper hand in these cases:
Native images are larger than their bytecode counterpart;
Native images must be regenerated whenever the original assembly or one of its dependencies is modified.
The need to regenerate an image that is ahead-of-time compiled every time one of its components changes is a huge disadvantage for native images. On the other hand, the fact that JIT-compiled images can't share library code can cause a serious memory hit. The operating system can load any native library at one physical location and share the immutable parts of it with every process that wants to use it, leading to significant memory savings, especially with system frameworks that virtually every program uses. (I imagine that this is somewhat offset by the fact that JIT-compiled programs only compile what they actually use.)
The general consideration of Microsoft on the matter is that large applications typically benefit from being compiled ahead-of-time, while small ones generally don't.
Simple logic tells us that compiling a huge, MS Office-sized program, even from byte-code, will simply take too much time. You'd end up with a huge startup time, and that would scare anyone off your product. Sure, you can precompile during installation, but this also has consequences.
Another reason is that not all parts of an application will be used. JIT will compile only those parts that the user cares about, leaving potentially 80% of the code untouched, saving time and memory.
And finally, JIT compilation can apply optimizations that normal compilers can't, like inlining virtual methods or parts of methods with trace trees, which, in theory, can make them faster.
Better reflection support. This could be done in principle in an ahead-of-time compiled program, but it almost never seems to happen in practice.
Optimizations that can often only be figured out by observing the program dynamically. For example, inlining virtual functions, escape analysis to turn heap allocations into stack allocations, and lock coarsening.
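For instance, here is a hedged sketch of an allocation that escape analysis can remove (the names are mine): the Point never escapes the loop body, so a JIT such as HotSpot's C2 can keep its fields in registers instead of allocating on the heap (scalar replacement).

final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

public class EscapeDemo {
    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            Point p = new Point(i, i + 1); // never escapes: candidate for scalar replacement
            sum += p.x + p.y;
        }
        System.out.println(sum);
    }
}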
Maybe it has to do with the modern approach to programming. You know, many years ago you would write your program on a sheet of paper, some other people would transform it into a stack of punched cards and feed into THE computer, and tomorrow morning you would get a crash dump on a roll of paper weighing half a pound. All that forced you to think a lot before writing the first line of code.
Those days are long gone. When using a scripting language such as PHP or JavaScript, you can test any change immediately. That's not the case with Java, though appservers give you hot deployment. So it is just very handy that Java programs can be compiled fast, as bytecode compilers are pretty straightforward.
But there is no such thing as a JIT-only language. Ahead-of-time compilers have been available for Java for quite some time, and more recently Mono introduced one to the CLR. In fact, MonoTouch is possible at all because of AOT compilation, as non-native apps are prohibited in Apple's App Store.
I have been trying to understand this as well, because I saw that Google is moving towards replacing their Dalvik Virtual Machine (essentially another Java virtual machine like HotSpot) with Android Runtime (ART), which is an AOT compiler, while Java usually uses HotSpot, which is a JIT compiler. Apparently, ART is roughly 2x faster than Dalvik... so I thought to myself, "why doesn't Java use AOT as well?".
Anyway, from what I can gather, the main difference is that JIT uses adaptive optimization during run time, which (for example) allows ONLY those parts of the bytecode that are being executed frequently to be compiled into native code, whereas AOT compiles the entire program into native code, and a smaller amount of compiled code generally runs faster than a larger amount.
I have to imagine that most Android apps are composed of a small amount of code, so on average it makes more sense to compile the entire program to native code ahead of time and avoid the overhead associated with interpretation and optimization.
It seems that this idea has been implemented in the Dart language:
https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf
JIT compilation is used during development, using a compiler that is especially fast. Then, when an app is ready for release, it is compiled AOT. Consequently, with the help of advanced tooling and compilers, Dart can deliver the best of both worlds: extremely fast development cycles, and fast execution and startup times.
One advantage of JIT which I don't see listed here is the ability to inline/optimize across separate assemblies/dlls/jars (for simplicity I'm just going to use "assemblies" from here on out).
If your application references assemblies which might change after install (e.g. pre-installed libraries, framework libraries, plugins), then a "compile-on-install" model must refrain from inlining methods across assembly boundaries. Otherwise, when the referenced assembly is updated, we would have to find all such inlined bits of code in referencing assemblies on the system and replace them with the updated code.
In a JIT model, we can freely inline across assemblies because we only care about generating valid machine code for a single run during which the underlying code isn't changing.
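A hedged sketch of what this means in Java terms (class and jar names are hypothetical); both classes are shown in one listing, but imagine them shipped in separately updatable jars:

// Imagine this class shipped in library.jar, updatable independently of the app.
class Library {
    static int getValue() { return 42; }
}

// Imagine this class shipped in app.jar. A compile-on-install model could not
// safely bake getValue()'s body into the loop below, because library.jar might
// be replaced later; a JIT can, since it only has to be correct for this run.
public class App {
    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += Library.getValue();
        }
        System.out.println(total);
    }
}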
The difference between platform-browser-dynamic and platform-browser is the way your Angular app will be compiled.
Using the dynamic platform makes Angular send the just-in-time compiler to the front-end along with your application, which means your application is compiled on the client side.
Using platform-browser, on the other hand, leads to an ahead-of-time precompiled version of your application being sent to the browser, which usually means a significantly smaller package.
The Angular 2 documentation for bootstrapping at https://angular.io/docs/ts/latest/guide/ngmodule.html#!#bootstrap explains it in more detail.
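For reference, the AOT build is typically produced with the Angular CLI (the exact flags vary by CLI version, so treat this as a sketch):

ng build --aot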

Is there a precompiler solution for JVM like NGen?

I was recently comparing the JVM and CLR platforms and was discussing the impact of JIT compilation. The discussion I was having led to .NET's ability to precompile code using the NGen tool, which would eliminate the need for JIT compilation. Is there an equivalent in the Java platform?
The native compiler that I've heard the most about is Excelsior JET - Java Virtual Machine and Native Compiler. See http://www.excelsior-usa.com/jet.html for more details. Basically you can turn your project into a single executable without the hassles of the usual Java packaging.
They have been around for a long time (they were here when I joined the Java ranks in 2001, and they are here now). Just a week ago I heard how a company is using their solution and is happy with it.
A couple of years ago at JavaOne I also met a developer or two of the product, and they told me that they are alive and kicking and doing well.
I'm not aware of any, personally (you can look at options like launch4j, jexe, etc., but they are just wrappers, not like NGen), and I'm not sure how feasible it would be... The answer to this question goes into some details on the subject of compiling to native code: Differences in JIT between Java and .Net
Yes. As of Java 9, you can. The feature you are looking for is called Ahead of Time (AOT) compilation.
There are two main JVMs in use today:
HotSpot (and its variants), which is the main JVM from Oracle.
OpenJ9, originally from IBM.
OpenJ9
In OpenJ9, just run the Java Virtual Machine with the -Xshareclasses:persistent option. This causes the VM to store compiled binaries into a cache, which persists across system boots. It does not force the Java VM to use the precompiled code in all cases, as in many cases Just In Time (JIT) code can execute faster. The JVM will pick whether to use the cached binary or not on a case by case basis. When a cached AOT method is run it might also be optimized further by the Just-In-Time (JIT) compiler.
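A hedged example invocation (the jar and cache names are hypothetical): the first command runs the application while populating a persistent cache named demo, and the second lists the caches present on the system.

java -Xshareclasses:name=demo,persistent -jar myapp.jar
java -Xshareclasses:listAllCaches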
Full details are available at:
https://www.eclipse.org/openj9/docs/aot/
https://www.eclipse.org/openj9/docs/xshareclasses
HotSpot (OpenJDK)
The HotSpot (OpenJDK) builds now include an experimental ahead-of-time compiler based on GraalVM, which can precompile code before the program runs. You manually create native binaries using the jaotc executable. Note: this is currently only available on Linux (and possibly Mac) systems.
Details on how to use the jaotc compiler are at:
https://docs.oracle.com/javase/10/vm/java-hotspot-virtual-machine-performance-enhancements.htm#JSJVM-GUID-F33D8BD0-5C4A-4CE8-8259-FD9D73C7C7C6
https://openjdk.java.net/jeps/295 (this includes an example where you generate and use an AOT library for the java.base module).
Note: jaotc is not as simple to use as the OpenJ9 shared classes flag.
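To give a flavour of the workflow, here is the shape of the example from JEP 295 (HelloWorld stands for your own compiled class):

jaotc --output libHelloWorld.so HelloWorld.class
java -XX:AOTLibrary=./libHelloWorld.so HelloWorld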

Why is java bytecode interpreted?

As far as I understand, Java compiles to Java bytecode, which can then be interpreted by any machine running Java for its specific CPU. Java uses JIT to interpret the bytecode, and I know it's gotten really fast at doing so, but why don't/didn't the language designers just statically compile down to machine instructions once it detects the particular machine it's running on? Is the bytecode interpreted every single pass through the code?
The original design was based on the premise of "compile once, run anywhere". So any implementer of the virtual machine can run the bytecodes generated by a compiler.
In the book Masterminds of Programming, James Gosling explained:
James: Exactly. These days we’re beating the really good C and C++ compilers pretty much always. When you go to the dynamic compiler, you get two advantages when the compiler’s running right at the last moment. One is you know exactly what chipset you’re running on. So many times when people are compiling a piece of C code, they have to compile it to run on kind of the generic x86 architecture. Almost none of the binaries you get are particularly well tuned for any of them. You download the latest copy of Mozilla, and it’ll run on pretty much any Intel architecture CPU. There’s pretty much one Linux binary. It’s pretty generic, and it’s compiled with GCC, which is not a very good C compiler.
When HotSpot runs, it knows exactly what chipset you’re running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the pipeline interlocks work in the CPU. It knows what instruction set extensions this chip has got. It optimizes for precisely what machine you’re on. Then the other half of it is that it actually sees the application as it’s running. It’s able to have statistics that know which things are important. It’s able to inline things that a C compiler could never do. The kind of stuff that gets inlined in the Java world is pretty amazing. Then you tack onto that the way the storage management works with the modern garbage collectors. With a modern garbage collector, storage allocation is extremely fast.
Java is commonly compiled to machine instructions; that's what just-in-time (JIT) compilation is. But Sun's Java implementation by default only does that for code that is run often enough (so startup and shutdown bytecode, that is executed only once, is still interpreted to prevent JIT overhead).
Bytecode interpretation is usually "fast enough" for a lot of cases. Compiling, on the other hand, is rather expensive. If 90% of the runtime is spent in 1% of the code it's far better to just compile that 1% and leave the other 99% alone.
Static compiling can blow up on you, because all the other libraries you use also need to be write-once-run-anywhere (i.e. byte-code), including all of their dependencies. This can lead to a chain of compilations following dependencies that can blow up on you. Compiling only the code that the runtime, while running, discovers it actually needs is the general idea, I think. There may be many code paths you don't actually follow, especially when libraries come into question.
Java bytecode is interpreted because bytecodes are portable across various platforms. The JVM, which is platform-dependent, converts and executes the bytecodes using the specific instruction set of that machine, whether it is Windows, Linux, macOS, etc.
One important difference of dynamic compiling is that it optimises the code based on how it is run. There is an option -XX:CompileThreshold=, which is 10000 by default. You can decrease this so it optimises the code sooner, but if you run a complex application or benchmark, you can find that reducing this number can result in slower code. If you run a simple benchmark, you may not find it makes any difference.
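For example (MyBenchmark is a hypothetical class; -XX:+PrintCompilation logs methods as they get compiled, which is handy for seeing the effect of the lower threshold):

java -XX:CompileThreshold=1000 -XX:+PrintCompilation MyBenchmark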
One example where dynamic compiling has an advantage over static compiling is inlining "virtual" methods, especially those which can be replaced. For example, the JVM can inline up to two heavily used "virtual" methods, which may be in a separate jar compiled after the caller was compiled. The called jar(s) can even be removed from the running system (e.g. with OSGi) and another jar added to replace it. The replacement jar's methods can then be inlined. This can only be achieved with dynamic compiling.

Java JIT compiler for download

I want to download a JIT compiler for Java. Where can I get a good JIT compiler?
The JIT compiler runs as part as the JVM - it's not something you download and run separately.
In the early days of Java, you used to be able to run a JIT as a plugin - but these days there's no need. Any mainstream, modern desktop Java environment will include a JIT.
Most of the JVMs have a JIT built in. Download any of them.
My guess is you are looking for a Java-to-native .exe compiler, in the mistaken belief that this will yield a significant performance difference. For Java this is not the case for most real applications, and it just makes deploying them harder.
Any modern well-performing Java implementation comes with a JIT, and normally you do not have to worry about these kinds of things. The most frequently used is the Oracle Java implementation, available from http://java.com.
If you, however, have a performance problem it is usually a problem with your own code, so use a suitable profiler (jvisualvm in the Sun 6 JDK is a good, free starting point) to identify your bottlenecks, so you can correct them.

What can you not do on the Dalvik VM (Android's VM) that you can in Sun VM?

I know that you can run almost all Java in Dalvik's VM that you can in Java's VM but the limitations are not very clear. Has anyone run into any major stumbling blocks? Any major libraries having trouble? Any languages that compile to Java byte code (Scala, Jython etc...) not work as expected?
There are a number of things that Dalvik will not handle, or will not handle quite the same way as standard Java bytecode, though most of them are quite advanced.
The most severe example is runtime bytecode generation and custom class loading. Let's say you would like to create some bytecode and then use a class loader to load it for you: even if that trick works on your normal machine, it is guaranteed not to work on Dalvik, unless you change your bytecode generation.
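For the curious, here is a hedged sketch of the trick in question (file and class names are hypothetical). On a standard JVM this loads JVM bytecode at runtime; on Dalvik, defineClass with .class bytes fails, because Dalvik executes DEX, not JVM bytecode.

import java.nio.file.Files;
import java.nio.file.Paths;

public class RuntimeLoading {
    // Exposes ClassLoader's protected defineClass for raw bytecode.
    static class ByteArrayClassLoader extends ClassLoader {
        Class<?> fromBytes(String name, byte[] bytecode) {
            return defineClass(name, bytecode, 0, bytecode.length);
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumes Greeter.class was produced earlier, e.g. by javac or a
        // bytecode library such as ASM.
        byte[] bytecode = Files.readAllBytes(Paths.get("Greeter.class"));
        Class<?> cls = new ByteArrayClassLoader().fromBytes("Greeter", bytecode);
        System.out.println("Loaded at runtime: " + cls.getName());
    }
}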
That prevents you from using certain dependency injection frameworks, the best-known example being Google Guice (though I am sure some people are working on that). On the other hand, AspectJ should work, as it uses bytecode instrumentation as a compilation step (though I don't know if anyone has tried).
As for other JVM languages: anything that in the end compiles to standard bytecode and does not use bytecode instrumentation at runtime can be converted to Dalvik and should work. I know people have run Jython on Android and it worked OK.
Another thing to be aware of is that there is no just-in-time compilation. This is not strictly Dalvik's problem (you can always compile any bytecode on the fly if you wish), but Android does not support it and is unlikely to do so. In effect, while microbenchmarking for standard Java was useless (components had different runtime characteristics in tests than as parts of larger systems), microbenchmarks for Android phones totally make sense.
If you see "Dalvik Virtual Machine internals" Google IO session, you can find Dalvik does not support generational GC.
So, it could degrade performance of frequent object creation and deletion. Java VM supports generational GC so, it would show better GC performance for the same situation.
And also, Dalvik uses trace-granuality JIT instead of method granuality JIT.
Another thing that I guess could be added here is that Dalvik apparently does not preserve field order when listing the fields of a class using the reflection API. Now, the reflection API does not make any guarantees about order (so ideally you shouldn't depend on it), but most of the other VMs out there do preserve it.
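A hedged sketch of what that means in practice (the class name is mine): the API makes no promise about the order below, so code that relies on first/second/third coming back in source order may break on Dalvik even though it happens to work on HotSpot.

import java.lang.reflect.Field;

public class FieldOrder {
    int first;
    int second;
    int third;

    public static void main(String[] args) {
        for (Field f : FieldOrder.class.getDeclaredFields()) {
            System.out.println(f.getName()); // order is VM-dependent
        }
    }
}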
Just to add to the conversation, not intended to revive an old thread. I just ran across this in my search, and want to add that Jython does not work out of the box with Dalvik either. Simply trying to do a hello world example will yield the following:
