How to induce or help Java compilers to optimize code?

I'm wondering which optimizations by Java compilers are commonly blocked (or not detected) because of unclear or badly written code, and what kinds of common mistakes obfuscate the code for the compiler.

Please understand that modern runtime environments (the actual java command) do not execute the Java bytecode naively, one instruction at a time, but do very heavy processing to compile it to actual machine code.
This means that there is no particular reason to make the bytecode especially smart or optimized, as the JRE gives the same results anyway. For mobile Java devices, where the interpreter is less smart and memory constraints are present, the ProGuard system allows for quite a few optimizing transformations. You might find these interesting.

JIT compilers are typically optimized for common coding patterns and use cases. Your best bet is to adhere to common conventions, patterns, and idioms. Trying to "optimize code for the compiler" might result in code that is actually harder to optimize.
I would advise you to just make your code clear and expressive, and let the compiler do its job.

... I wanted to know if there were very common mistakes that could be avoided with a bit more attention when coding.
For Sun's HotSpot JVMs, the only mistake you can make in a general sense is to try to do things in tricky ways in the (possibly mistaken) belief that it makes your code faster. It is best to just write simple code. I've seen this advice from someone senior on the HotSpot team.
Best practice is to leave optimization to the JIT compiler, and only attempt to micro-optimize if the profiler tells you that you have a problem.
(There are well-known things you should avoid, like using exceptions for flow control, doing string concatenation in a loop, or trying to do your own memory management. But these are probably at a higher level than you are interested in.)
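For instance, here is a hedged sketch of the first anti-pattern, a loop terminated by an exception instead of a bounds check:

    // Anti-pattern: using ArrayIndexOutOfBoundsException as a loop exit.
    // Constructing and catching the exception costs far more than the
    // bounds comparison it replaces, and exception handlers can also
    // block some JIT optimizations on the loop.
    static int sumAll(int[] a) {
        int sum = 0;
        try {
            for (int i = 0; ; i++) {
                sum += a[i];
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            return sum;   // prefer: for (int i = 0; i < a.length; i++)
        }
    }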

Related

A concrete Example of the effect of the JIT in java

So I am aware that Java has just-in-time compilation (the JIT), which gives it an advantage over statically compiled languages like C++. Are there any examples illustrating the Java JIT? Possible examples could be outperforming C or C++ code for a given algorithm, or showing an algorithm's iterations getting faster over time (I am unsure if that would be an instance of the JIT). Or just any example which can show some sort of measurement of the JIT doing this? I ask this question because I have only ever read about the JIT and wish to prove its existence as opposed to just believing in it like some sort of religious God.
Remark - If this question is too opinionated, please comment and let me know why. I am just curious about the JIT; after using Java for a few years, I am to this day unaware of how I benefit from it, and of whether it lives up to the hype of outperforming its statically compiled counterparts.
Additional Information - I have read about when it kicks in; I am not looking for more information that I would just have to take on faith. I want to see something that shows it doing what it is supposed to do.
EDIT - Good, I have a lot of responses. What has been said is that comparing speed alone, JIT-optimized Java vs. C++ is not a good approach, and that a pure Java comparison would be the least horrible. What about an example showing this with Java:
So a JIT-compiled and a non-JIT program doing the same thing are executed. At the start the JIT has not kicked in, and the JIT-compiled program gradually gets quicker while the static one always has the same performance. Then the conditions change at 5.5 seconds or so and the application is used slightly differently. The JIT has the ability to adapt to these changes again: first the time spikes, then it begins optimizing again and can even reach a better optimum, because the application is being used slightly differently.
Would this be an acceptable example to show the JIT? (I will endeavour to achieve this and review everyone's links and videos.)
I do not think you can convincingly prove that Java using a JIT is faster than statically compiled C/C++ code.
You could find some Java code that beats its C/C++ implementation. For that, search for keywords like benchmark, Java, JIT, C, C++.
I have purposely not mentioned any code or links for the above because of my point below.
Most of the time, people show Java code beating statically compiled C/C++ in the following ways:
Find a part where Java is fast compared to C/C++ (e.g. memory allocation) and write code that only highlights it.
Find weak points of C/C++ and try to write Java code that beats the C/C++ code in achieving the result.
Run the code in an environment where you have an advantage, like fast hardware and a good amount of memory.
My point being: you are trying to find exceptions where Java is faster than C/C++ and then generalizing them to the whole language. You could easily find more examples of C/C++ beating Java code just by using pointers in many algorithms.
Such benchmark testing is of no value in real-life application development.
Summarizing (in real-life application development):
Java was slow compared to C/C++ when it first came out, but in the past decade the improvements made in the JVM, coupled with the JIT, HotSpot, etc., have made Java about as good as C/C++. Java is not slow nowadays, but I would not call it faster than C/C++ either. Any difference in real-life application development is negligible, because of language improvements as well as better hardware.
You cannot generalize that Java is faster than C/C++ from beating it one time in a particular environment with a particular algorithm or piece of code.
You might find some interesting info in the following links
https://softwareengineering.stackexchange.com/questions/110634/why-would-it-ever-be-possible-for-java-to-be-faster-than-c
Is Java really slow?
Since the question has been edited to now ask about the performance improvement from using the JIT, I am editing my answer to add a few more points.
My understanding of the JIT is that it compiles the most frequently executed code into a version that runs really fast. Most of the examples of JIT optimization techniques I have come across show transformations that could also be done by the programmer, but that would then hurt the readability of the program, or may not conform to the framework or coding style the programmer has to use.
So what I am trying to say is: if you write a program that can be improved by the JIT, it will do so and you will see an increase in performance. But if you are someone who understands the JVM and writes Java code that is already optimized, then the JIT may not give you much benefit.
So, in effect, if you see a performance improvement when running a program using the JIT, that improvement is not guaranteed for all Java programs; it depends on the program.
These links below show some JIT improvements using code examples.
http://www.infoq.com/articles/Java-Application-Hostile-to-JIT-Compilation
https://plumbr.eu/blog/java/do-you-get-just-in-time-compilation
Anyway, to see the difference the JIT makes, we would run a Java program with the JIT enabled and then run the same program again with the JIT disabled.
This link http://www.javacodegeeks.com/2013/07/java-just-in-time-compilation-more-than-just-a-buzzword.html has a case study on this topic and recommends the following
Assessing the JIT benefits for your application
In order to understand the impact of not using JIT for your Java application, I recommend that you perform the following experiment:
Generate load on your application with JIT enabled and capture some baseline data such as CPU %, response time, # requests, etc.
Disable JIT
Redo the same testing and compare the results.
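For step 2, a simple way to approximate "JIT disabled" on HotSpot is the standard -Xint flag, which forces pure interpretation. A minimal sketch (Bench is a hypothetical class name; timings are illustrative):

    public class Bench {
        public static void main(String[] args) {
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < 100_000_000; i++) {
                sum += i % 7;
            }
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(sum + " computed in " + ms + " ms");
        }
    }

    javac Bench.java
    java Bench          (JIT enabled, the default)
    java -Xint Bench    (interpreter only, typically many times slower)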
This link http://people.cse.iitd.ac.in/~sbansal/csl862-virt/readings/CompileJava97.pdf does benchmark the JIT and shows speed improvements over basic JVM interpretation.
To understand what the JIT does to your code, you could use the tool JITwatch.
https://github.com/AdoptOpenJDK/jitwatch
The links below explain its utility.
http://www.oracle.com/technetwork/articles/java/architect-evans-pt1-2266278.html
http://zeroturnaround.com/rebellabs/why-it-rocks-to-finally-understand-java-jit-with-jitwatch/
First, you want to watch this video. It gives you tools to see the JIT in action.
Where I believe your question is misguided is that you are asking for an example of tailored code where you could potentially measure faster performance in some JVM-based language X vs. some non-JVM-based language Y (where, for instance, X is Java and Y is C).
This is not the way to think about the JIT. Unless you actually write a compiler for a JVM language yourself, or have to debug a serious performance issue (and only after you have considered refactoring your code and seen that fail), you should not need to delve that deep into the details.
But otherwise, the principle is simple: the JIT is your friend and it does things right; all you have to do is write code which just works; if there are ways that the JIT can make it faster at runtime, it will most certainly do so.
There are countless examples on Stack Overflow of questions like "why is my code running faster all of a sudden?" - usually when people try to benchmark their code. The answer is, invariably, because the JIT was able to make optimizations mid-benchmark.
See: How do I write a correct micro-benchmark in Java?, What is going on in this java benchmark?, and Java benchmarking - why is the second loop faster? for some examples.
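A minimal sketch of that warm-up effect (numbers vary by machine, and a serious measurement should use a harness like JMH):

    public class WarmupDemo {
        public static void main(String[] args) {
            long blackhole = 0;
            for (int round = 0; round < 10; round++) {
                long t0 = System.nanoTime();
                blackhole += work();
                long ms = (System.nanoTime() - t0) / 1_000_000;
                System.out.println("round " + round + ": " + ms + " ms");
                // Early rounds run interpreted; once work() is hot,
                // HotSpot compiles it and later rounds get faster.
            }
            System.out.println(blackhole); // keep the result live
        }

        static long work() {
            long sum = 0;
            for (int i = 0; i < 5_000_000; i++) {
                sum += (i * 31L) % 17;
            }
            return sum;
        }
    }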
I have only ever read about the JIT and wish to prove its existence as opposed to just believing in it like some sort of religious God.
This is an unnecessary line of thinking; there's a lot going on between your keyboard and your monitor that you've never noticed or don't understand. The JIT is documented behavior of the JVM, that's all you need to know. It's fine if you don't understand it and want to learn more, but it's not some mythical, ethereal construct.
JIT, Just-In-Time compilation, is compilation performed just before the bytecode is executed. From the Oracle site:
"In theory, the JIT comes into use whenever a Java method is called, and it compiles the bytecode of that method into native machine code, thereby compiling it “just in time” to execute."
The most reliable way to see the effect of the JIT is to compare Java itself with and without the JIT.
JIT (Just-In-Time) compilation was introduced in Java 1.2, so the best demonstration is to execute the same code on Java 1.1 and Java 1.2 and compare the performance.
Prior to Java 1.2, Java was considered a very slow language, and only after the introduction of the JIT did it become extensively used in every field.
It is harder to compare C++ or C with Java. Potentially C++ is faster than Java because, even with a JIT, Java begins as an interpreted language. JIT compilation helps because the code that is executed most often is translated to machine code once, instead of being interpreted each time it is executed.
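On HotSpot you can watch this happening directly: the standard -XX:+PrintCompilation flag logs each method as it is compiled (MyApp below is just a placeholder class name).

    java -XX:+PrintCompilation MyApp

Each output line names a method the JIT has decided to compile; the hottest methods show up first, and on tiered-compilation JVMs you can see the same method being recompiled at higher optimization levels.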
Differences between Java and C++ can involve how libraries are designed, the presence or absence of certain primitive types, how code is compiled, levels of optimization, how the GC is configured in Java's case, and so on.
Note that there can also be differences between Java and Java, even with the same JDK and the same JVM, depending on compilation parameters and execution parameters.
It is not possible to say that Java is faster than C or vice versa; too many parameters are involved in this kind of comparison. Sometimes C++ is faster; sometimes Java is.
Here is a reference from Oracle on JIT compilation: http://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/underst_jit.html

Performance in Java through code? [closed]

First of all I should mention that I'm aware of the fact that performance optimizations can be very project specific. I'm mostly not facing these special issues right now. I'm facing a bunch of performance issues with the JVM itself.
I wonder now:
which code optimizations make sense from a compiler perspective: for example, to support the garbage collector I declared variables as final - very much following PMD's suggestions here from Eclipse.
what best practices there are for vmargs, heap, and other stuff passed to the JVM for initialization. How do I get the right values here? Is there any formula, or is it trial and error?
Java automates a lot, does many optimizations at the bytecode level, and so on. However, I think most of that must be planned by a developer in order to work.
So how do you speed up your programs in Java? :)
Which code-optimization make sense from a compiler perspective: for example to support the garbage collector I declared variables as final - very much following PMD's suggestions here from Eclipse.
Assuming you are talking about potential micro-optimizations you can make to your code, the answer is pretty much none. The best way to increase your application performance is to run a profiler to figure out where the performance bottlenecks are, then figure out if there is anything you can do to speed them up.
All of the classic tricks like declaring classes, variables, and methods final, reorganizing loops, or changing primitive types are pretty much a waste of effort in most cases. The JIT compiler can typically do a much better job than you can. For example, recent JIT compilers will analyse all loaded classes to figure out which method calls are not subject to overriding, without you declaring the classes or methods final. They will then use a quicker call sequence, or even inline the method body.
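As a hedged illustration of that devirtualization point (the class names here are invented for the example):

    // Sketch: no final keyword anywhere, yet HotSpot can still
    // devirtualize and inline the call below.
    interface Shape { double area(); }

    class Circle implements Shape {
        private final double r;
        Circle(double r) { this.r = r; }
        public double area() { return Math.PI * r * r; }
    }

    class Totals {
        static double total(Shape[] shapes) {
            double sum = 0;
            for (Shape s : shapes) {
                // While Circle is the only loaded implementation, the JIT
                // treats this call site as monomorphic and can inline
                // area(), deoptimizing later if another Shape appears.
                sum += s.area();
            }
            return sum;
        }
    }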
Indeed, the Sun experts say that some programmers' attempts at optimization fail because they actually make it harder for the JIT compiler to apply the optimizations it knows about.
On the other hand, higher level algorithmic optimizations are definitely worthwhile ... provided that your profiler tells you that your application is spending a significant amount of time in that area of the code.
Using arrays instead of collections can be a worthwhile optimization in unusual cases, and in rare cases using object pools might be too. But these optimizations 1) will make your code more complicated and bug prone and 2) can slow your application down if used inappropriately. These kinds of optimizations should only be tried as a last resort. For example, if your profiling says that such and such a HashMap<Integer,Integer> is a CPU bottleneck or a memory hog, then it is a better idea to look for an existing specialized Map or Map-like library class than to try and implement the map yourself using arrays. In other words, optimize at the high level.
If you spend long enough or your application is small enough, careful micro-optimization will probably give you a faster application (on a given JVM version / hardware platform) than just relying on the JIT compiler. If you are implementing a smallish application to do large-scale number crunching in Java, the pay-off of micro-optimization may well be considerable. But this is clearly not a typical case! For typical Java applications, the effort is large enough and the performance difference is small enough that micro-optimization is not worthwhile.
(Incidentally, I don't see how declaring a variable final can make any possible difference to GC performance. The GC has to trace a variable every time it is encountered, whether or not it is final. Besides, it is an open secret that final variables can actually change under certain circumstances, so it would be unsafe for the GC to assume that they don't. Unsafe as in "creates a dangling pointer resulting in a JVM crash".)
I see this a lot. The sequence generally goes:
Thinking performance is about compiler optimizations, big-O, and so on.
Designing software using the recommended ideas, lots of classes, two-way linked lists, trees with pointers up, down, left, and right, hash sets, dictionaries, properties that invoke other properties, event handlers that invoke other event handlers, XML writing, parsing, zipping and unzipping, etc. etc.
Since all those data structures were like O(1) and the compiler's optimizing its guts out, the app should be "efficient", right? Well, then, what's that little voice telling one that the startup is slow, the shutdown is slow, the loading and unloading could be faster, and why is the UI so sluggish?
Hand it off to the "performance expert". With luck, that person finds out, all this stuff is done in the recommended way, but that's why it's cranking its heart out. It's doing all that stuff because it's the recommended way to do things, not because it's needed.
With luck, one has the chance to re-engineer some of that stuff, to make it simple, and gradually remove the "bottlenecks". I say, "with luck" because often it's just not possible, so development relies on the next generation of faster processors to take away the pain.
This happens in every language, but more so in Java, C#, and C++, where abstraction has been carried to extremes. So by all means be aware of best practices, but also understand what simple software is. Typically that consists of saving those best practices for the circumstances that really need them.
which code-optimization make sense from a compiler perspective?
All the ones that a compiler can't reason about, because a compiler is very dumb and Java doesn't have "design by contract" (so contracts cannot help the dumb compiler reason about your code).
For example, if you're crunching data using int[] or long[] arrays, you may know something about your data that is IMPOSSIBLE for the compiler to figure out, and you can use low-level bit-packing/compacting to improve the locality of reference in that part of your code.
Been there, done that, saw gigantic speedup. So much for the "super smart compiler".
This is just one example. There are a huge number of cases like this.
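A sketch of the sort of bit-packing meant here, assuming coordinates that each fit in 32 bits: packing (x, y) pairs into one flat long[] keeps the data contiguous in memory, unlike an array of point objects scattered across the heap.

    // Each (x, y) pair occupies one long: x in the high 32 bits,
    // y in the low 32 bits. Works for negative values too.
    final class PackedPoints {
        private final long[] data;

        PackedPoints(int n) { data = new long[n]; }

        void set(int i, int x, int y) {
            data[i] = ((long) x << 32) | (y & 0xFFFFFFFFL);
        }

        int x(int i) { return (int) (data[i] >>> 32); }
        int y(int i) { return (int) data[i]; }
    }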
Remember that a compiler is really stupid: it cannot know that if (Math.abs(42) > 0) will always be true.
This should give some food for thought to people who think these compilers are "smart" (things would be different here if Java had DbC, but it doesn't).
what best practices there are for vmargs, heap and other stuff passed to the JVM for initialization. How do I get the right values here? Is there any formula, or is it trial and error?
The real answer is: there shouldn't be any. Sadly, the situation is so pathetic that such low-level hackery is needed, due to a serious failure on Java's part. Oh, one more "tiny" detail: playing with VM fine-tuning only works for server-side apps; it doesn't work for desktop apps.
Anyone who has worked on Java desktop applications installed on hundreds or thousands of machines, on various OSes, knows all too well what the issue is: full GC pauses making your app look like it's broken. The Apple VM on OS X 10.4 comes to mind as particularly awful, but ALL JVMs are subject to that issue.
What is worse: it is impossible to "fine tune" the GC's parameters across different OSes / VMs / memory configurations when your application is going to be run on hundreds or thousands of different configurations.
Anyone disputing that: please tell me how you "fine tune" your app knowing that it is going to be run both on an octo-core Mac loaded with 20 GB of RAM (I've got users with such setups) and on an old OS X 10.4 PowerBook that has 768 MB of RAM. Please?
But here is the thing: you should not, in the first place, have to be concerned with super-low-level details like GC "fine tuning". The very fact that this is even suggested is testimony to one area where Java has a major issue.
Java fans will keep on saying "the GC is super fast, object creation is cheap" while this is blatantly wrong. There's a reason Trove's TIntIntHashMap runs circles around a HashMap<Integer,Integer>.
There's also a reason why at every new JVM release you'll get countless release notes explaining why -XXGCHyperSteroidMultiTopNotch offers better performance than the last "big JVM param" that every cool Java programmer had to know: maybe the JVM wasn't that great at GC'ing after all.
So to answer your question: how do you speed up Java programs? Easy: do what the Trove guys did. Stop needlessly creating gigantic amounts of objects and stop needlessly auto(un)boxing primitives, because they will kill your app's performance.
A TIntIntHashMap OWNS the default HashMap<Integer,Integer> for a reason, and it is the same reason my apps are now much faster than before.
I stopped believing in crap like "object creation costs nothing" and "the GC is super-optimized, don't worry about it".
I'm using Java to crunch data (I know, I'm a bit crazy) and the one thing that made my app faster was to stop believing all the propaganda surrounding the "cheap object creation" and "amazingly fast GC".
The truth is: INSTEAD OF TRYING TO FINE-TUNE YOUR GC SETTINGS, STOP CREATING THAT MUCH GARBAGE IN THE FIRST PLACE. Put another way: if changing the GC settings radically changes the way your app runs, it may be time to wonder whether all the needless junk objects you're creating are really needed.
Oh, you know what, I'm betting we'll see more and more release notes explaining why Java version x.y.z's GC is faster than version x.y.z-1's GC ;)
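To make the Trove-style point concrete, here is a hedged sketch (a histogram over small int keys) contrasting a boxed HashMap with a plain primitive array:

    import java.util.HashMap;
    import java.util.Map;

    class Counters {
        // Boxed version: (almost) every operation allocates Integer objects.
        static Map<Integer, Integer> boxed(int[] keys) {
            Map<Integer, Integer> counts = new HashMap<>();
            for (int k : keys) {
                counts.merge(k, 1, Integer::sum); // boxes k and the counts
            }
            return counts;
        }

        // Primitive version: one allocation total, zero garbage after that.
        static int[] primitive(int[] keys, int maxKey) {
            int[] counts = new int[maxKey + 1];  // assumes keys in [0, maxKey]
            for (int k : keys) counts[k]++;
            return counts;
        }
    }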
Generally there are two kinds of performance optimizations you need to do with Java:
Algorithmic optimization. Choose an algorithm which behaves like you need to. For instance, a simple algorithm may perform best for small datasets, but the overhead of preparing a smarter algorithm may first pay off for much larger datasets.
Bottleneck identification. Here you need to be familiar with a profiler that can tell you what the problem is (humans always guess wrong) - memory leak?, slow method? etc... A good one to start with is VisualVM which can attach to a running program, and is available in the latest Sun JDK. When you know the problem, you can fix it.
Today's JVMs are surprisingly robust when it comes to performance. Any micro-optimizations you can apply will, in practically all cases, have only a very minor impact on performance. This is easy to understand if you take a look at how typical language constructs (e.g. for vs. while) translate to bytecode: they are almost indistinguishable.
Making methods/variables final has absolutely no impact on performance on a decent JITting JVM. The JIT will keep track of which methods are really polymorphic and optimize away the dynamic dispatch where possible. Static methods can still be faster, since they don't have a this reference, meaning one less local variable (which, at the same time, limits their applicability). The most effective micro-optimizations are not so much Java-specific; for example, code with lots of conditional statements can become very slow due to branch mispredictions by the processor. Sometimes conditionals can be replaced by other, sequential code-flow constructs (often at the cost of readability), reducing the number of mispredicted branches (and this applies to all languages that somehow compile to native code).
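A hedged sketch of that branch-removal idea, clamping negative values to zero (note the JIT may already emit a conditional move for the ternary, so always measure):

    class Clamp {
        static int branchy(int x) {
            return x < 0 ? 0 : x;     // may mispredict on random input
        }

        static int branchless(int x) {
            return x & ~(x >> 31);    // x >> 31 is all ones iff x < 0
        }
    }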
Note that profilers tend to inflate the time spent in short, frequently called methods. This is because profilers need to instrument the code to keep track of invocations; this can interfere with the JIT's ability to inline those methods (and the instrumentation overhead becomes significantly larger than the time spent actually executing the method body). Manual inlining, while apparently very performance-boosting under a profiler, has in most cases no effect under "real world" conditions. Don't rely purely on the profiler's results; verify that the optimizations you make have a real impact under real runtime conditions, too.
Notable performance boosts can only be expected from changes that reduce the amount of work done, from a more cache-friendly data layout, or from superior algorithms. Java partially limits your possibilities for cache-friendly data layouts, since you have no control over where the parts (arrays/objects) that form your data structure will be located in memory relative to each other. Still, there are plenty of opportunities where choosing the right data structure for the job can make a huge difference (e.g. ArrayList vs. LinkedList).
There is little you can do to aid the garbage collector. However, a point worth noting is that while object allocation in Java is very, very fast, there is still the cost of object initialization (which is mostly under your control). Poor performance in applications that create lots of (short-lived) objects is more likely to be attributable to poor cache utilization than to the garbage collector's work.
Different applications types require different optimization strategies - so before asking about specific optimizations, find out where your application really spends its time.
If you are experiencing performance issues with your application, you should seriously consider doing some profiling (e.g. with hprof) to see whether the problem is algorithmic in nature, and also check the GC performance logging (e.g. -verbose:gc) to see if you could benefit from tuning your JVM's GC options.
It is worth noting that the compiler does next to no optimisation, and the JVM doesn't optimise at the bytecode level either. Most of the optimisation is performed by the JIT in the JVM, and it optimises how the code is converted to native machine code.
The best way to optimise your code is to use a profiler which measures how much time and resources your application is using when you give it a realistic data set. Without this information you are just guessing, and you can change a lot of code where it really, really doesn't matter and find you have added bugs in the process.
Many come to the conclusion that it's never worth optimising your code, that it's even counterproductive, as it can waste time and introduce bugs; I would say that is true for 95+% of your code. However, with a profiler you can measure the critical pieces of code and optimise the <5% worth optimising, and, done carefully, you won't get too many issues from trying to optimise your code.
It's hard to answer this too thoroughly because you haven't even mentioned what sort of project you're talking about. Is it a desktop application? A server-side application?
Desktop applications favor application startup time, so the HotSpot client VM is a good start. Client applications don't necessarily need all of their heap space all the time, so a good balance between starting heap and max heap is useful. (Like, maybe -Xms128m -Xmx512m)
Server applications favor overall throughput, which is something the HotSpot server VM is tuned for. You should always allocate the min and max heap sizes the same on a server application. There is an added cost at the system level to it having to malloc() and free() during garbage collection. Use something like -Xms1024m -Xmx1024m.
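A typical server-side invocation along these lines (the jar name is a placeholder):

    java -server -Xms1024m -Xmx1024m -jar myserver.jar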
There are several different garbage collectors also, which are tuned to different application types.
Take a read through the Java SE 6 Performance White Paper if you want more info on the garbage collector and other performance related items from Java 6.

Is the hotspot JVM Bytecode Interpreter a tracing JIT?

The question pretty much says it all. I've been looking around for an answer, even through the VM spec, but it doesn't explicitly state it.
No.
There are some other JVMs with tracing JITs, though: HotPath and Maxine, for example.
Aside: for those who don't know what a tracing JIT is, the following description comes from this page:
Although tracing JITs are a complex technology, the core concept is about optimizing execution of the hot paths in a program. The emphasis is specifically on hot paths that return to the start of a path which sounds very much like a loop. However, the traditional definition of a programming loop is only a subset of these hot paths. The broader definition includes code that spans methods and possibly even modules. This broader definition of a loop is what’s called a trace.
Had to google what a "tracing JIT" was, but apparently it isn't.
> non-tracing JIT implementations (Sun’s Java VM
But it does optimise what you might call "hot spots".
How bytecode is optimised will not be part of the specification for the bytecode.
It's not even a JIT actually, let alone a 'tracing JIT', whatever that might be.

Should I look at the bytecode that is produced by a Java compiler?

No
The JIT compiler may "transform" the bytecode into something completely different anyway.
It will lead you to do premature optimization.
Yes
You do not know which method will be compiled by the JIT, so it is better if you optimize them all.
It will make you a better Java programmer.
I am asking without really knowing (obviously) so feel free to redirect to JIT hyperlinks.
Yes, but to a certain extent -- it's good as an educational opportunity to see what is going on under the hood, but probably should be done in moderation.
It can be a good thing, as looking at the bytecode may help in understanding how the Java source code is compiled into bytecode. Also, it may give some ideas about what kinds of optimizations the compiler performs, and perhaps about the limits on how much optimization the compiler can do.
For example, if a string concatenation is performed, javac will optimize the concatenation into using a StringBuilder and calling append to concatenate the Strings.
However, if the string concatenation is performed in a loop, a new StringBuilder may be instantiated on each iteration, leading to possible performance degradation compared to manually instantiating a StringBuilder outside the loop and only performing appends inside the loop.
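Roughly what this looks like at the source level (the exact bytecode shape varies by javac version):

    class ConcatSketch {
        static String concatInLoop(String[] parts) {
            String result = "";
            for (String part : parts) {
                // javac compiles this line roughly as:
                // result = new StringBuilder().append(result)
                //                             .append(part).toString();
                result += part;
            }
            return result;
        }

        static String builderHoisted(String[] parts) {
            StringBuilder sb = new StringBuilder(); // one builder, reused
            for (String part : parts) {
                sb.append(part);
            }
            return sb.toString();
        }
    }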
On the issue of the JIT: just-in-time compilation is JVM-implementation-specific, so it's not very easy to find out what is actually happening to the bytecode when it is converted to native code, and furthermore, we can't tell which parts are being JITted (at least not without JVM-specific tools to see what kind of JIT compilation is being performed; I don't know any specifics in this area, so I am just speculating).
That said, the JVM is going to execute the bytecode anyway, the way it is being executed is more or less opaque to the developer, and again, JVM-specific. There may be some performance tricks that one JVM performs while another doesn't.
Looking at the generated bytecode, then, comes down to learning what actually happens to the source code when it is compiled: being able to see the kinds of optimizations performed by the compiler, but also understanding the limits on what the compiler can optimize.
All that said, I don't think it's a really good idea to become obsessive about the bytecode generation and trying to write programs that will emit the most optimized bytecode. What's more important is to write Java source code that is readable and maintainable by others.
That depends entirely on what you're trying to do. If you're trying to optimize a method/module, looking at the byte code is going to be a waste of your time. Always profile first to find where your bottlenecks are, then optimize the bottlenecks. If your bottleneck seems as tight as it possibly can be and you need to make it faster, you may have no choice but to rewrite that in native code and interface with JNI.
Trying to optimize the generated bytecode will be of little help, since the JIT compiler will do a lot of work, and you won't have much of an idea of exactly what it's doing.
I wouldn't think so. Short of having to debug the javac compiler or wanting to know as a matter of interest, I cannot think of one good reason why someone would care what bytecode gets generated.
Knowing bytecode won't make you a better Java programmer any more than knowing how an internal combustion engine works will make you a better driver.
Think in terms of abstractions. You don't need to know about the actions of quarks or atoms when trying to calculate the orbits of planets. To be a good Java programmer, you should probably learn ... um .. Java. Yes, Java, that's it :-)
Unless you're developing a high-capacity server of some sort, you'll likely never need to examine the bytecode, except out of curiosity. Source code that adheres to acceptable coding practices in your organization will provide ample performance for most applications.
Don't fret over performance until you've found issues after load-testing your application (or the entire customer service force lynches you for that screen that takes "forever" to load). Then, hammer away at the bottlenecks and leave the rest of the code alone.
Bytecode requires a modest learning curve to understand. Sure, it never hurts to learn more, but pragmatism suggests putting it off until it's necessary. (And should that moment come, I recommend finding someone to mentor you.)

C++/Java Performance for Neural Networks?

I was discussing neural networks (NN) with a friend over lunch the other day and he claimed that the performance of a NN written in Java would be similar to one written in C++. I know that with "just in time" compiler techniques Java can do very well, but somehow I just don't buy it. Does anyone have any experience that would shed light on this issue? This page is the extent of my reading on the subject.
The Hotspot JIT can now produce code faster than C++. The reason is run-time empirical optimization.
For example, it can see that a certain loop takes the "false" branch 99% of the time and reorder the machine code instructions accordingly.
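As a hedged sketch of the kind of code shape this helps, consider a branch the runtime profile shows is almost never taken; HotSpot can lay out the common path as straight-line code, which a static compiler can only match with profile-guided optimization:

    static long process(int[] values) {
        long sum = 0;
        for (int v : values) {
            if (v < 0) {       // rare case, say ~1% of observed inputs
                sum -= v;      // the JIT can move this path out of line
            } else {
                sum += v;      // hot path kept as fall-through code
            }
        }
        return sum;
    }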
There's lots of articles about this. If you want all the details, read Sun's excellent whitepaper. For more informal info, try this one.
I'd be interested in a comparison between the Hotspot JIT and profile-guided-optimized C++.
The problem I see with the Hotspot JIT (and any runtime-profile-optimized JIT compiler) is that statistics must be kept and code modified. While there are isolated cases this will result in faster-running code, I doubt that profile-optimized JIT compilers will run faster than well optimized C or C++ code in most circumstances. (Of course I could be wrong.)
Anyway, usually you're going to be at the mercy of the larger project, using the same language it is written in. Or you'll be at the mercy of the knowledge base of your co-workers. Or you'll be at the mercy of the platform you are targeting (is a JVM available on the architecture you're targeting?). In the rare case that you have complete freedom and you're familiar with both languages, do some comparisons with the tools you have at your disposal. That is really the only way to determine what's best.
The only possible answer is: make a prototype and measure for yourself. If my experience is of any interest, Java and C# were always much slower than C++ for the kind of work I was doing - I believe mostly because of the high memory consumption. Of course, you can come to a completely different conclusion.
This is not strictly about C++ vs. Java performance, but it is nonetheless interesting in that regard: a paper about the performance of programs running in a garbage-collected environment.
If excessive garbage collection is a concern, you can always reuse unused high-churn objects.
Create a factory that keeps a queue of SoftReferences to recycled objects, using those before creating new objects. Then, in code that uses these objects, explicitly return them to the factory for recycling.
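A minimal sketch of such a factory, assuming a Supplier for construction (all names here are invented):

    import java.lang.ref.SoftReference;
    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.function.Supplier;

    // SoftReferences let the GC reclaim pooled objects under memory
    // pressure, so the pool itself can never cause an OutOfMemoryError.
    final class RecyclingFactory<T> {
        private final Queue<SoftReference<T>> pool = new ArrayDeque<>();
        private final Supplier<T> constructor;

        RecyclingFactory(Supplier<T> constructor) {
            this.constructor = constructor;
        }

        synchronized T acquire() {
            SoftReference<T> ref;
            while ((ref = pool.poll()) != null) {
                T obj = ref.get();
                if (obj != null) return obj;   // recycled instance still alive
            }
            return constructor.get();          // pool empty: make a new one
        }

        synchronized void recycle(T obj) {
            pool.add(new SoftReference<>(obj)); // caller must not reuse obj
        }
    }

For example, RecyclingFactory<byte[]> buffers = new RecyclingFactory<>(() -> new byte[8192]) would recycle I/O buffers handed back via recycle().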
Probably C++, although I believe you'll hardly notice the difference besides a slower startup time. Java, however, makes development faster and maintenance easier.
In the grand scheme of things, you're debating maybe a 5% performance difference where you'd get several orders of magnitude increase by moving to CUDA or dedicated hardware.
