I was recently comparing the JVM and CLR platforms and was discussing the impact of JIT compilation. The discussion I was having led to .NET's ability to precompile code using the NGen tool, which would eliminate the need for JIT compilation. Is there an equivalent in the Java platform?
The native compiler that I've heard the most about is Excelsior JET - Java Virtual Machine and Native Compiler. See http://www.excelsior-usa.com/jet.html for more details. Basically you can turn your project into a single executable without the hassles of the usual Java packaging.
They have been around for a long time (they were already here when I joined the Java ranks in 2001, and they are still here now). Just a week ago I heard how a company is using their solution and is happy with it.
A couple of years ago at JavaOne I also met a developer or two of the product, and they told me that the product is alive and kicking and doing well.
I'm not aware of any, personally (you can look at options like launch4j, jexe, etc., but they are just wrappers, not NGen equivalents), and I'm not sure how feasible it would be... The answer to this question goes into some detail on the subject of compiling to native code: Differences in JIT between Java and .Net
Yes. As of Java 9, you can. The feature you are looking for is called Ahead of Time (AOT) compilation.
There are two main JVMs in use today:
HotSpot (and its variants), which is the main JVM from Oracle.
OpenJ9, originally from IBM.
OpenJ9
In OpenJ9, just run the Java Virtual Machine with the -Xshareclasses:persistent option. This causes the VM to store compiled binaries in a cache, which persists across system boots. It does not force the JVM to use the precompiled code in all cases, because in many cases Just-In-Time (JIT) code can execute faster; the JVM decides whether to use the cached binary on a case-by-case basis. When a cached AOT method is run, it might also be optimized further by the JIT compiler.
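As a minimal sketch (the jar name and cache name below are hypothetical; -Xscmx just sets the cache size):

    # Run with a persistent shared class cache; AOT-compiled code is stored in it
    java -Xshareclasses:persistent,name=myappcache -Xscmx80m -jar myapp.jar

    # List the caches the VM knows about
    java -Xshareclasses:listAllCaches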
Full details are available at:
https://www.eclipse.org/openj9/docs/aot/
https://www.eclipse.org/openj9/docs/xshareclasses
HotSpot (OpenJDK)
The HotSpot (OpenJDK) builds now include an experimental ahead-of-time compiler built on the Graal compiler (the same compiler behind GraalVM). You manually create precompiled native libraries using the jaotc executable. Note: this is currently only available on Linux (and possibly macOS) systems.
Details on how to use the jaotc compiler are at:
https://docs.oracle.com/javase/10/vm/java-hotspot-virtual-machine-performance-enhancements.htm#JSJVM-GUID-F33D8BD0-5C4A-4CE8-8259-FD9D73C7C7C6
https://openjdk.java.net/jeps/295 (this includes an example where you generate and use an AOT library for the java.base module).
Note: jaotc is not as simple to use as the OpenJ9 shared classes flag.
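For reference, the JEP 295 example boils down to something like this (HelloWorld is a class you have compiled yourself):

    # AOT-compile a class into a shared library
    jaotc --output libHelloWorld.so HelloWorld.class

    # Tell the JVM to use the precompiled library when running
    java -XX:AOTLibrary=./libHelloWorld.so HelloWorld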
Related
I have read some high-level articles about the new GraalVM and thought it would be a good idea to use it to speed up JUnit tests, especially for big test suites which run in forked mode.
Following the SO question "Does GraalVM JVM support java 11?", I added the following to the VM arguments of a unit test run configuration in my Eclipse (jee-2019-12, JUnit 4):
-XX:+UnlockExperimentalVMOptions -XX:+UseJVMCICompiler
Effect: the unit test takes somewhat longer than without these switches (2800 ms with, 2200 ms without, reproducible).
Did I miss something? Or did I misunderstand the promises of enhanced boot time in GraalVM?
Yes, unfortunately, it feels like there's some misunderstanding at play here. I'll try to elaborate on a number of issues regarding performance and GraalVM.
GraalVM is a polyglot runtime that can run JVM applications. Normally it does so by running the Java HotSpot VM (the same as in OpenJDK, for example) with the top-tier optimizing just-in-time (JIT) compiler replaced by its own GraalVM compiler.
Simplifying somewhat: during the run, the JVM loads the class files, verifies them, starts interpreting them, then compiles them with a series of compilers that range from the fastest-to-compile to the most optimizing. So the longer your application runs, and the more you use the same methods, the more those methods get progressively compiled into better and better machine code.
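A quick way to watch this tiered compilation happen on a HotSpot-based JVM is the -XX:+PrintCompilation flag; the class below is just a hypothetical hot loop to give the JIT something to chew on:

    // HotLoop.java -- run with: java -XX:+PrintCompilation HotLoop
    public class HotLoop {
        static long work(long n) {
            long sum = 0;
            for (long i = 0; i < n; i++) {
                sum += i * 31;          // hot method body the JIT will compile
            }
            return sum;
        }

        public static void main(String[] args) {
            long total = 0;
            for (int i = 0; i < 10_000; i++) {
                total += work(100_000); // repeated calls make work() hotter and hotter
            }
            System.out.println(total);
        }
    }

You should see work() show up in the compilation log at increasing tiers as it gets hot.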
The GraalVM compiler is really good at optimizing code, so when your application runs long enough for it to kick in, the result is usually better than other compilers can show. This leads to better peak performance, which is great for medium- and long-running workloads.
Your unit test run takes 2 seconds, which is really not much time to execute code a lot, gather a profile, and use the optimizing compiler. It might also be that the particular code patterns and workload are really well suited to C2 (HotSpot's default top-tier JIT), so it's hard to do better. Remember, C2 is an excellent JIT that has been developed for at least two decades, and its results are really, really good too.
Now there's also another option that GraalVM gives you: GraalVM native images, which let you compile your code ahead of time (AOT) into a native binary that does not depend on the JVM and does not load, verify, or initialize classes, so the startup of such a binary, until it gets to do useful "business" work, is much better.
This is a very interesting option for shorter-running workloads or resource-constrained environments (the binary doesn't need to do JIT compilation, so it doesn't need resources for it, making runtime resource consumption smaller).
However, to use this approach you need to compile your application with the native-image utility from GraalVM, and that compilation itself can easily take longer than a workload that runs in 2 seconds.
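At its simplest, building a native image of a hypothetical Hello class looks roughly like this (exact output names can vary by GraalVM version):

    javac Hello.java
    native-image Hello      # builds a standalone executable, typically named "hello"
    ./hello                 # starts in milliseconds, no JVM required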
Now, in the setup that you're describing, you're not using the GraalVM distribution, but enabling the GraalVM compiler in your OpenJDK (I assume) distribution. The options you specify turn on the GraalVM compiler as the top-tier JIT compiler. There are two main differences at play compared to what you'd get by running java from the GraalVM distribution:
The compiler is not up to date: at some point in time the GraalVM compiler sources are pulled into the OpenJDK project, and that's how they end up in your distribution.
The GraalVM compiler is written in Java, and in your setup it is executed as normal Java code, so it first might need to JIT-compile itself, which leads to a longer warm-up phase, a JIT profile somewhat polluted with the compiler's own code, etc.
In the GraalVM distribution, which I'd encourage you to try for this experiment, the GraalVM compiler is up to date and is by default precompiled as a shared library using the GraalVM native image technology, so at runtime it doesn't need to be JIT-compiled, and its warm-up is much closer to the characteristics of C2.
Still, 2 seconds might not be enough time for the optimizing compiler to show major differences. It could also be that the tests run a lot of code only once, so the body of hot code that gets JIT-compiled is not significant enough.
I have a simple Java program that just prints Hello World. Is it possible to run it without a JVM installed on the machine? A compiler is available.
You can compile Java Byte Code to Machine Code using something like this:
http://en.wikipedia.org/wiki/GNU_Compiler_for_Java
Or you can use any of the MANY Java to C/C++ converters out there to make a C/C++ version, and compile that. (Search Google for "Java C Converter")
Warning: I have never tried any of these, including the linked GNU Compiler, so I cannot make any attestations to their usefulness or quality.
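For what it's worth, the documented GCJ invocation looked roughly like this (untested here, and note that GCJ has since been removed from GCC, so you would need an old toolchain):

    # Compile Java source straight to a native executable with GCJ
    gcj --main=HelloWorld -o helloworld HelloWorld.java
    ./helloworld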
@SplinterReality already pointed you to some compilers (googling "java bytecode to native code compiler" will show you some more options).
I will just expand on that seeing some people are a bit confused:
The JVM is a program that takes Java bytecode, which you get by running javac on your Java source code (or by generating it in some other fashion). Bytecode is just a set of instructions that the JVM understands; it's there to give you a consistent, platform-independent set of opcodes. It's the JVM's job to then map those opcodes to native instructions, which is why JVMs are platform dependent, and why, once you compile your Java source code to bytecode, you can run it on every platform.
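If you want to see those opcodes for yourself, the JDK's javap disassembler will show them (Hello.java here stands for any small class of your own):

    javac Hello.java    # produces Hello.class, the platform-independent bytecode
    javap -c Hello      # disassembles it, printing opcodes like aload_0, invokespecial, return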
Because bytecode is just such an intermediate form, whether you have Java source or bytecode, you can take it and compile it directly to (platform-dependent) native code; in fact, that is sometimes exactly what the JVM, or more precisely the JIT, does: it compiles things to native code when it sees the need. Still, there's more to the JVM than bytecode interpretation; for instance, it also takes care of garbage collection.
So the answer is: yes, but do you really want to do it? Unless you want to be running it on some embedded systems I don't really see any reason to.
Yes, it is possible to run a Java program without a JVM, albeit with limitations. Aside from the http://en.wikipedia.org/wiki/GNU_Compiler_for_Java , there is the GraalVM native-image generator, which could be used: https://www.graalvm.org.
If that program is a single source file (not compiled to bytecode), then you can ;)
An online Java compiler site will be fine as well ;)
Just put the code there and press compile and run.
But it will only work for simple stuff, and you have to have the source code.
I would not be surprised if there were also a site that allowed uploading a .class file from the user's PC.
I want to download a JIT compiler for Java. Where can I get a good JIT compiler?
The JIT compiler runs as part of the JVM - it's not something you download and run separately.
In the early days of Java, you used to be able to run a JIT as a plugin - but these days there's no need. Any mainstream, modern desktop Java environment will include a JIT.
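If you want to convince yourself that the JIT is there, you can dump the VM flags of whatever java you have installed; on a modern HotSpot JVM, tiered JIT compilation is on by default (the grep here is just for filtering the output):

    java -XX:+PrintFlagsFinal -version | grep TieredCompilation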
Most of the JVMs have a JIT built in. Download any of them.
My guess is you are looking for a java to native .exe compiler in the mistaken belief that this will yield a significant performance difference. For Java this is not the case for most real applications and just makes deploying them harder.
Any modern, well-performing Java implementation comes with a JIT, and normally you do not have to worry about this kind of thing. The most common is Oracle's Java implementation, available from http://java.com.
If you do have a performance problem, however, it is usually a problem in your own code, so use a suitable profiler (jvisualvm in the Sun Java 6 JDK is a good, free starting point) to identify your bottlenecks so you can correct them.
Is there any way to compile from Java to standalone (or library) machine code without requiring a JVM?
There used to be a tool called GCJ that was part of GCC, but it has been removed. Now all the links on the GCC site redirect to their non-GCJ equivalents.
NB: the comments all referred to my original answer, which said you can compile Java to native code with GCJ.
Yes!
Oracle has been working on the GraalVm, which supports Native Images. Check here: https://www.graalvm.org/
Native Image
The native image feature of the GraalVM SDK helps improve the startup time of Java applications and gives them a smaller footprint. Effectively, it converts bytecode that runs on the JVM (on any platform) into native code for a specific OS/platform, which is where the performance comes from. It uses aggressive ahead-of-time (AOT) optimizations to achieve good performance.
See more:
Summary
https://www.graalvm.org/docs/getting-started/#native-images
Demos: Native images for faster startup
https://www.graalvm.org/docs/examples/native-list-dir/
Detailed: 'Ahead-of-time Compilation'
https://www.graalvm.org/docs/reference-manual/aot-compilation/
The Micronaut platform uses GraalVM to make native microservices:
https://guides.micronaut.io/latest/micronaut-creating-first-graal-app.html
Excelsior JET is a commercial Java to native code compiler. However, it was discontinued in May 2019.
Yes, the JIT in the JVM does exactly that for you.
In fact it can produce faster code than compiling the code in advance as it can generate code optimised for the specific platform based on how the code is used at runtime.
The JVM is always involved, even if a very high percentage of the code is compiled to native code, because you can still load and run bytecode dynamically.
Another possibility would be RoboVM.
However, it only seems to work on Linux, iOS and Mac OS X.
As of today, the project still seems somewhat alive, contrary to some posts online claiming it is dead.
I know that you can run almost all Java in Dalvik's VM that you can in Java's VM but the limitations are not very clear. Has anyone run into any major stumbling blocks? Any major libraries having trouble? Any languages that compile to Java byte code (Scala, Jython etc...) not work as expected?
There are a number of things that Dalvik will not handle, or will not handle quite the same way as standard Java bytecode, though most of them are quite advanced.
The most severe example is runtime bytecode generation and custom class loading. Let's say you would like to create some bytecode and then use a class loader to load it for you: if that trick works on your normal machine, it is guaranteed not to work on Dalvik, unless you change your bytecode generation.
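As a rough sketch of the trick being described (standard JVM APIs only; "Greeter.class" is a hypothetical, separately compiled class whose bytes stand in for generated bytecode):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Loads raw JVM bytecode at runtime -- works on a standard JVM,
    // but Dalvik expects dex bytecode, so this exact trick does not carry over.
    public class ByteArrayLoader extends ClassLoader {
        public Class<?> defineFromBytes(String name, byte[] bytecode) {
            return defineClass(name, bytecode, 0, bytecode.length);
        }

        public static void main(String[] args) throws IOException {
            // In a real generator these bytes would come from a library like ASM.
            byte[] bytecode = Files.readAllBytes(Paths.get("Greeter.class"));
            Class<?> cls = new ByteArrayLoader().defineFromBytes("Greeter", bytecode);
            System.out.println("Loaded at runtime: " + cls.getName());
        }
    }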
That limitation prevents you from using certain dependency injection frameworks, the best-known example being Google Guice (though I am sure some people are working on that). On the other hand, AspectJ should work, as it uses bytecode instrumentation as a compilation step (though I don't know if anyone has tried).
As for other JVM languages: anything that in the end compiles to standard bytecode and does not use bytecode instrumentation at runtime can be converted to Dalvik and should work. I know people have run Jython on Android and it worked OK.
Another thing to be aware of is that there is no just-in-time compilation. This is not strictly Dalvik's problem (you can always compile any bytecode on the fly if you wish), but Android does not support that and is unlikely to do so. In effect, while microbenchmarking on standard Java was useless (components had different runtime characteristics in tests than as parts of larger systems), microbenchmarks for Android phones totally make sense.
If you watch the "Dalvik Virtual Machine Internals" Google I/O session, you will find that Dalvik does not support generational GC.
So it can degrade performance when objects are frequently created and deleted. The Java VM supports generational GC, so it shows better GC performance in the same situation.
Also, Dalvik uses a trace-granularity JIT instead of a method-granularity JIT.
Another thing that I guess could be added here is that Dalvik apparently does not preserve field order when listing the fields of a class using the reflection API. Now, the reflection API does not make any guarantees about that (so ideally you shouldn't depend on it), but most of the other VMs out there do preserve the order.
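A small Java example of the behaviour in question (nothing Dalvik-specific in the code itself; the point is simply that the printed order is unspecified):

    import java.lang.reflect.Field;

    public class FieldOrderDemo {
        private int first;
        private String second;
        private boolean third;

        public static void main(String[] args) {
            // getDeclaredFields() returns the fields in no particular order;
            // many JVMs happen to use declaration order, Dalvik may not.
            for (Field f : FieldOrderDemo.class.getDeclaredFields()) {
                System.out.println(f.getName());
            }
        }
    }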
Just to add to the conversation, not intending to revive an old thread: I just ran across this in my search, and want to add that Jython does not work out of the box with Dalvik either. Simply trying to run a hello world example will yield the following: