Why is it that bytecode might run faster than native code [closed] - java

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
Java is slow.
That's more than an "urban legend"; it seems to be a fact. You don't use it for live-coding because of latency, and you don't use it for clusters/parallel computing. There are thousands of benchmarks out there, especially "Java vs C# vs C++".
http://benchmarksgame.alioth.debian.org/
According to the above site, not only is Java performance almost as good as C (far ahead of the rest), but Scala and Clojure (both functional languages that run on the JVM) perform better than OCaml and Erlang.
And there are a lot of "Java is faster than X" claims out there, too (for instance, a question here on SO: Java Runtime Performance Vs Native C / C++ Code?).
So Java seems to be fast, for certain cases. Can someone explain why?
Why is it that bytecode might run faster than native code, in some cases, given dynamic code (Scala, Clojure) and garbage collection? And if it is faster, why is there still latency?
There seems to be a contradiction here; can someone shed some light?

In the book Masterminds of Programming, James Gosling explained:
James: Exactly. These days we're beating the really good C and C++ compilers pretty much always. When you go to the dynamic compiler, you get two advantages when the compiler's running right at the last moment. One is you know exactly what chipset you're running on. So many times when people are compiling a piece of C code, they have to compile it to run on kind of the generic x86 architecture. Almost none of the binaries you get are particularly well tuned for any of them. You download the latest copy of Mozilla, and it'll run on pretty much any Intel architecture CPU. There's pretty much one Linux binary. It's pretty generic, and it's compiled with GCC, which is not a very good C compiler.

When HotSpot runs, it knows exactly what chipset you're running on. It knows exactly how the cache works. It knows exactly how the memory hierarchy works. It knows exactly how all the pipeline interlocks work in the CPU. It knows what instruction set extensions this chip has got. It optimizes for precisely what machine you're on. Then the other half of it is that it actually sees the application as it's running. It's able to have statistics that know which things are important. It's able to inline things that a C compiler could never do. The kind of stuff that gets inlined in the Java world is pretty amazing. Then you tack onto that the way the storage management works with the modern garbage collectors. With a modern garbage collector, storage allocation is extremely fast.

Fast JVMs use Just-In-Time (JIT) compilation. The bytecode gets translated into native code on the fly at run time. JIT provides many opportunities for optimization. See this Wikipedia article for more info on JIT.
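As a rough illustration (a sketch, not a rigorous benchmark; the class and method names are made up, and actual timings depend heavily on the JVM and hardware), you can often watch a hot method speed up once HotSpot compiles it:

```java
public class JitWarmup {
    // A deliberately simple hot method that the JIT can compile to tight native code.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        // Later runs are often faster than the first, once the
        // interpreter hands the method over to the JIT compiler.
        for (int run = 0; run < 5; run++) {
            long start = System.nanoTime();
            long result = sumOfSquares(10_000_000);
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("run " + run + ": " + micros + " us (result=" + result + ")");
        }
    }
}
```

Proper measurement would use a benchmark harness such as JMH; this only shows the general warm-up effect.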

JVMs come in many flavours!
The fastest ones compile bytecode to native code on the fly, based on performance characteristics collected at run time. All this requires extra memory, so they buy speed at the cost of memory. Also, top speed takes a while to reach, so the benefit does not show for short-lived programs.
Even so, the JamVM interpreter (which is tiny compared to the Oracle JVM) still reaches a reasonable fraction of the speed of the compiling JVMs.
Regarding garbage collection, the speed again comes from having plenty of memory available. Also, the true benefit comes from removing the code's responsibility for keeping track of when an object is no longer in use. This results in simpler, less error-prone programs.
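To make the last point concrete, here is a minimal sketch (class names are illustrative) of the allocation pattern that generational collectors handle well: many short-lived objects that die young, where each allocation is little more than a pointer bump in the young generation:

```java
public class YoungGenDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Allocates n short-lived Point objects; each becomes garbage
    // immediately after use, so minor GCs reclaim them cheaply in bulk.
    static long churn(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1);
            acc += p.x + p.y;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(churn(1_000_000));  // 1000000000000
    }
}
```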

Well, this is an old argument, almost as prevalent as Emacs vs. vi (but definitely not as old). I have seen lots of C++ developers offer plenty of arguments for why most of the performance benchmarks (especially those claiming Java is as fast as C++) are skewed, and to be honest they have a point.
I do not have enough knowledge or time to go deeper into how Java could be as fast as C++, but the following are the reasons why it could be:
1. When you ask two very capable developers to write code in Java and C++ for the same real-world problem, I would be surprised if Java performed faster than C++. Well-written C++ uses a fraction of the memory that Java will use (Java objects are slightly more bloated).
2. Java is inherently meant to be a simpler language, and it was designed to make sub-optimal code hard to write. By abstracting memory management and handling low-level optimization, it is easier to write good code in Java than in C++. (This, I believe, is the most important point: it is difficult to write bad code in Java.) On the flip side, a good C++ developer can handle memory management better than Java's automatic GC. (Java stores every object on the heap, so it uses more memory.)
3. The Java compiler has improved consistently, and ideas like HotSpot have proved to be more than a marketing term. (When JIT was introduced, it was just a marketing term, in my opinion. :))
4. Ergonomics, i.e. tailoring settings based on the underlying operating system's parameters, lets Java handle variation better. So in some environments it is not hard to imagine Java performing as well as C++.
5. Opening up high-level concurrency and parallelism APIs to Java developers is also a reason. The Java concurrency package is arguably the easiest way to write high-performance code that can leverage today's multi-processor environments.
6. As hardware has become cheaper and cheaper, developer competency has become a bigger factor, and that is why I believe a lot of C++ code in the wild is probably slower than Java.
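Regarding point 5, here is a minimal sketch of what the java.util.concurrent package gives you (the class and method names are illustrative): splitting a sum across a fixed thread pool without managing threads by hand:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    // Sums 1..n by splitting the range into one chunk per worker thread.
    static long sum(long n, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            long chunk = n / threads;
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                final long lo = t * chunk + 1;
                final long hi = (t == threads - 1) ? n : (t + 1) * chunk;
                parts.add(pool.submit(() -> {
                    long s = 0;
                    for (long i = lo; i <= hi; i++) s += i;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(sum(1_000_000, 4));  // 500000500000
    }
}
```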

Java bytecode is much easier to optimize than most native code. Because the bytecode is restricted and you can't do certain dangerous things, the JIT can optimize more aggressively. Take pointer aliasing, for instance: http://en.wikipedia.org/wiki/Pointer_aliasing
In C/C++ you can do:
char *somebuffer = getABuffer();
char *ptr = &somebuffer[2];
memcpy(ptr, somebuffer, length);
This makes it difficult to optimize in some cases, because the compiler can't be sure what refers to what (here the source and destination of the memcpy overlap). In Java this kind of thing is not allowed.
In general, the code transformations you can perform at a higher level of abstraction are much more powerful than those possible on object code.
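For contrast, a hypothetical Java sketch of the same idea: since Java has no pointer arithmetic, arrays of different element types can never overlap, so the JIT is free to assume, for example, that writes to an int[] cannot change a value read from a double[]:

```java
import java.util.Arrays;

public class NoAlias {
    // The JIT may keep b[0] in a register for the whole loop:
    // an int[] and a double[] can never alias in Java, whereas a C
    // compiler must often assume two pointer arguments might overlap.
    static void scale(int[] a, double[] b) {
        for (int i = 0; i < a.length; i++) {
            a[i] = (int) (a[i] * b[0]);
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        scale(a, new double[]{2.0});
        System.out.println(Arrays.toString(a));  // [2, 4, 6]
    }
}
```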

Related

Is it possible to get a Java program faster than the same program (optimized) in C? [closed]

Closed 9 years ago.
Given that C/C++ optimizations are applied at compile time while Java optimizations happen at runtime: is it possible to get a Java program faster than the same (optimized) program in C?
I understand that runtime optimizations can be better than compile-time ones. Hence I am wondering whether the gains from these optimizations can outweigh the overhead of running the JVM.
In theory, yes.
In practice, it is highly unlikely.
One of the fundamental assumptions is that C/C++ is compiled once for a generic binary target, while Java is compiled for the specific machine it is running on. This should give an edge to Java. But the reality is that even C/C++ can ship several optimization paths, dynamically selected at run time, and get most of the benefit of target-specific compilation.
Conversely, as stated by Rekin, the JVM needs to dynamically profile the Java program to know what to optimize and how. Profiling is an expensive operation in itself, and that overhead cannot be taken out of the JVM. On the other hand, selecting the right set of optimizations for the current workload can give an edge. In practice, most C programs (but not all :) are well tuned for their task, and there is little left to optimize using profiling techniques.
There are other effects in Java which completely dwarf these compilation issues. Probably first among them is the garbage collector.
The garbage collector's first mission is to ease programming by taking care of, and thereby avoiding, one of the nastiest recurrent bugs of C/C++: memory leaks. This feature alone justifies the wide use of Java in many industrial environments.
However, it has a cost, and a very large one. According to studies, about 5x the strictly necessary amount of memory must be available for the garbage collector to work with minimal overhead. So whenever that much memory is lacking, GC overhead starts to become significant, slowing performance to a crawl.
Conversely, it may happen in some circumstances that freeing the algorithm from the burden of memory management allows you to change the algorithm and adopt a better, faster one. In such circumstances, Java can get the edge and be faster than a C program.
But as you can guess, this is uncommon...
Because C/C++ programs are written for a specific platform and compiled directly into machine code, they are bound to be closer to the software/hardware platform they run on. Hence they should be faster.
Java optimization is built into the JVM, and the best optimization (in terms of execution speed) is achieved with just-in-time (JIT) processing of bytecode, though JIT has proved to be more memory-intensive.
So comparing these strategies suggests that C/C++ native code should be faster, as the JVM still has some overhead converting bytecode into native code, even with JIT.
But this is the price paid for platform independence and Java's greater portability.
Taken from when is java faster than c++ (or when is JIT faster than precompiled)?, here are some scenarios in which Java execution can outperform C/C++:
Lots of little memory allocations/deallocations. The major JVMs have extremely efficient memory subsystems, and garbage collection can be more efficient than requiring explicit freeing (plus it can shift memory addresses and such if it really wants to).
Efficient access through deep hierarchies of method calls. The JVM is very good at eliding anything that is not necessary, usually better in my experience than most C++ compilers (including gcc and icc). In part this is because it can do dynamic analysis at runtime (i.e. it can over-optimize and only deoptimize if it detects a problem).
Encapsulation of functionality into small short-lived objects.
The overhead of the JVM is huge. It has to load several classes, which live inside zip (jar) files and need to be extracted.
For every class loaded, some static analysis is run on it to check for unreachable code, operand-stack type problems and other things.
Then, a profiler runs all the time to decide which parts of the code are worth optimizing, and usually this means those methods need to be called a few thousand times before they are optimized.
Besides all that, you also get the garbage collector.
I can't really imagine a properly written C program, compiled for the platform it's running on, being outperformed by a Java equivalent. Perhaps only if you hit some rare corner-case where the JVM has some optimization and the C compiler doesn't have that specific optimization implemented.
Where I have seen Java run faster than C is when resources are limited and time to market is more important. In this situation you don't have as much time to write the code as efficiently as you could, and in C and C++ you can end up doing things less efficiently than the JVM does them for you; i.e. if you need the sort of things the JVM does already, it can be faster.
If you have machines with enough resources, developer time is more expensive/critical, and you can end up with a working, stable system in less time, with time left to profile/optimise, while a C team might still be fixing all the core dumps. Your mileage will vary.
For resource-limited devices, C still dominates because you have control over resource allocation. BTW, most mobile phones now run Java (or Objective-C) fine.

C++11 (native code) vs Java (bytecode) [closed]

Closed 10 years ago.
Lately I've been thinking about the difference between native code and bytecode.
I did some research, and all the information I read compared C++ code compiled into native code with Java code compiled into bytecode (which runs on the JVM, which takes advantage of multi-core processors). The conclusion was that applications written in Java run faster.
Does C++11 change this?
Do applications written in C++11 (since it adds threads) run faster than Java ones?
The conclusion was that applications written in Java run faster.
That's a big leap to come to. There are so many factors which contribute to the performance of a system that it's very hard to say one approach will always be better or even faster than another.
C++ has always been able to use threads; it just didn't have as much detail specified about how they could be used. I don't believe C++11 is about performance so much as standardising things like the memory model.
IMHO Given any amount of time and expert developers, a C++ program will always be faster than a Java program. However, given a limited amount of time and developers of mixed ability you are more likely to get something working and performing well in Java. Your mileage will vary. ;)
Making my answer clearer...
Does C++11 change this?
No. I don't agree that this is the situation, nor does C++11 change it.
Do applications written in C++11 (since it adds threads) run faster than Java ones?
Yes, but not always, just like earlier versions.
Neither C++ nor Java automatically splits your program into multiple threads. The closest you can get to automatic parallelization in modern languages is to use parallel collections. There are libraries to do that in C++, but there is better support for that kind of thing in more functional languages, e.g. Haskell, Scala, Clojure.
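For reference, modern Java (8+) does ship parallel collections of a sort: parallel streams, which split the work across the common fork/join pool without any explicit thread management. A small sketch:

```java
import java.util.stream.LongStream;

public class ParallelStreams {
    // Sums i^2 for i in 1..n, with the terms computed in parallel
    // on the common fork/join pool.
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()
                         .map(i -> i * i)
                         .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(1_000));  // 333833500
    }
}
```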
Another way to get automatic parallelization is to use an actor library and write your entire program with actors. Erlang was the first language with full support for that but the Akka framework for Scala/Java is also very good.
I would just say: All Your Java Bases Are Belong To C++. The JVM itself is written in C/C++. C/C++ run at native speed on the bare metal of the machine, while bytecode is interpreted by C/C++ code (which is itself running on the metal). One bytecode instruction can translate to roughly 5-10 asm instructions (or more). Hence the execution speed of C/C++ is considered faster than Java's. Of course, if Java's runtime were put directly on the metal and bytecode were interpreted at machine speed, it would be a fair comparison.
That said, see the example in the book "Programming Pearls" where the author runs an interpreted BASIC program on a Radio Shack personal computer which, with sufficient optimizations, runs faster than it does on a supercomputer. The point is that the speed of your program depends on your algorithms and coding/optimization practices.

Haskell vs JVM performance [closed]

Closed 11 years ago.
I want to write a backend system for a web site (it'll be a custom search-style service). It needs to be highly concurrent and fast. Given my wish for concurrency, I was planning on using a functional language such as Haskell or Scala.
However, speed is also a priority. http://benchmarksgame.alioth.debian.org results appear to show that Java is almost as fast as C/C++, Scala is generally pretty good, but Haskell ranges from slower to a lot slower for most tasks.
Does anyone have any performance benchmarks/experience of using Haskell vs Scala vs Java for performing highly concurrent tasks?
Some sites I've seen suggest that Scala has memory leaks which could be terrible for long running services such as this one.
What should I write my service in, or what should I take into account before choosing (performance and concurrency being the highest priorities)?
Thanks
This question is superficially about performance of code compiled with GHC vs code running on the JVM. But there are a lot of other factors that come into play.
People
Is there a team working on this, or just you?
How familiar/comfortable is that team with these languages?
Is this a language you (all) want to invest time in learning?
Who will maintain it?
Behavior
How long is this project expected to live?
When, if ever, is downtime acceptable?
What kind of processing will this program do?
Are there well-known libraries that can aid you in this?
Are you willing to roll your own library? How difficult would this be in that language?
Community
How much do you plan to draw from open source?
How much do you plan to contribute to open source?
How lively and helpful is the community
on StackOverflow
on irc
on Reddit
working on open source components that you might make use of
Tools
Do you need an IDE?
Do you need code profiling?
What kind of testing do you want to do?
How helpful is the language's documentation? And for the libraries you will use?
Are there tools to fill needs you didn't even know you had yet?
There are a million and one other factors that you should consider. Whether you choose Scala, Java, or Haskell, I can almost guarantee that you will be able to meet your performance requirements (meaning, it probably requires approximately the same amount of intelligence to meet your performance requirements in any of those languages). The Haskell community is notoriously helpful, and my limited experience with the Scala community has been much the same as with Haskell. Personally I am starting to find Java rather icky compared to languages that at least have first-class functions. Also, there are a lot more Java programmers out there, causing a proliferation of information on the internet about Java, for better (more likely what you need to know is out there) or worse (lots of noise to sift through).
tl;dr I'm pretty sure performance is roughly the same. Consider other criteria.
You should pick the language that you know the best and which has the best library support for what you are trying to accomplish (note that Scala can use Java libraries). Haskell is very likely adequate for your needs, if you learn enough to use it efficiently, and the same for Scala. If you don't know the language reasonably well, it can be hard to write high-performance code.
My observation has been that one can write moderately faster and more compact high-performance parallel code in Scala than in Haskell. You can't just use whatever most obviously comes to mind in either language, however, and expect it to be blazing fast.
Scala doesn't have actor-related memory leaks any more, except if you use the default actors in a case where either you're CPU-limited (so messages get created faster than they're consumed) or you forget to process all your messages. This is a design choice rather than a bug, but it can be the wrong design choice for certain types of fault-tolerant applications. Akka overcomes these problems by using a different implementation of actors.
Take a look at the head-to-head comparison. For some problems, ghc and java7-server are very close. For equally many there's a 2x difference, and for only one is there a 5x difference. That problem is k-nucleotide, for which the GHC version uses a hand-rolled mutable hashtable since there isn't a good one in the standard libraries. I'd be willing to bet that some of the new data-structures work provides better hashtables than that one by now.
In any case, if your problem is more like the first set of problems (pure computation), then there's not a big performance difference, and if it's more like the second (typically making essential use of mutation), then even with mutation you'll probably notice somewhat of a performance difference.
But again, it really depends on what you're doing. If you're searching over a large data set, you'll tend to be IO-bound. If you're optimizing traversal of an immutable structure, Haskell will be fine. If you're mutating a complex structure, then you may (depending) pay somewhat more.
Additionally, GHC's lightweight green threads can make certain types of server applications extremely efficient. So if the serving/switching itself would tend to be a bottleneck, then GHC may have the leg up.
Speed is well and good to care about, but the real difference is between using any compiled language and any scripting language. Beyond that, only in certain HPC situations are the sorts of differences we're talking about really going to matter.
The shootout benchmark assumes the same algorithm is used in all implementations. This gives the most advantage to C/C++ (the reference implementation in most cases) and languages like it. If you were to use a different approach that suited a different language better, it would be disqualified.
If you start with a problem which is more naturally described in Haskell, it will perform best in that language (or one very much like it).
Often when people talk about using concurrency, they forget that the reason they are doing it is to make the application faster. There are plenty of examples where using multiple threads is not much faster, or is much, much slower. I would start with an efficient single-threaded implementation, profiled and tuned as far as you can take it, and then consider what could be performed concurrently. If it's not faster with more than one CPU, don't make it concurrent.
IMHO: performance is your highest priority (behind correctness); concurrency is only a priority in homework exercises.
Does anyone have any performance benchmarks/experience of using Haskell vs Scala vs Java for performing highly concurrent tasks?
Your specific solution architecture matters - it matters a lot.
I would say Scala, but then I have been experimenting with Scala, so my preference would definitely be Scala. Anyhow, I have seen quite a few high-performance multi-threaded applications written in Java, so I am not sure why this kind of application would mandate going for FP. I would suggest you write a very small module based on what your application needs in both Scala and Haskell and measure the performance on your setup. And may I also add Clojure to the mix? :-) I suspect you may want to stay with Java, unless you are looking to benefit from some other feature of the language you choose.

Why does Android use Java? [closed]

Closed 12 years ago.
OK, this should really be asked of someone from Google, but I just want other opinions.
Even though Android supports native-code applications, the main development tool is still Java. But why? I mean, isn't it too slow to interpret code on a mobile device? When introducing Froyo, Google said the new JIT compiler could achieve 2-5 times faster applications. This means that using Java over native code is 2-x times slower.
Yes, I know that managed code applications are safer in terms of system stability, since the virtual machine has better control over program execution, but still, this performance drop is huge, and I don't see any point in accepting it.
Some points:
Java is a known language; developers know it and don't have to learn it
it's harder to shoot yourself in the foot with Java than with C/C++ code, since it has no pointer arithmetic
it runs in a VM, so there is no need to recompile it for every phone out there, and it is easy to secure
there are a large number of development tools for Java (see point 1)
several mobile phones already used Java ME, so Java was known in the industry
the speed difference is not an issue for most applications; if it were, you should code in a low-level language
On the byte-code level, Android doesn't use Java. The source is Java, but it doesn't use a JVM.
The improvement to system stability is very important on a device like a cell phone.
Security is even more important. The Android environment lets users run semi-trusted apps which could exploit the phone in truly unpleasant ways without excellent security. By running all apps in a virtual machine, you guarantee that no app can exploit the OS kernel unless there is a flaw in the VM implementation. The VM implementation, in turn, is presumably small and has a small, well-defined security surface.
Perhaps most important, when programs are compiled to code for a virtual machine, they do not have to be recompiled for new hardware. The market for phone chips is diverse and rapidly-changing, so that's a big deal.
Also, using Java makes it less likely that the apps people write will be exploitable themselves. No buffer-overruns, mistakes with pointers, etc...
Native code is not necessarily any faster than Java code. Where is your profile data showing that native code could run faster?
Why Java?
Android runs on many different hardware platforms. You would need to compile and optimize your native code for each of these different platforms to see any real benefits.
There are a large number of developers already proficient in Java.
Java has huge open source support, with many libraries and tools available to make developers life easier.
Java protects you from many of the problems inherent in native code, like memory leaks, bad pointer usage, etc.
Java allows them to create sandbox applications, and create a better security model so that one bad App can't take down your entire OS.
First of all, according to Google, Android doesn't use Java. That's why Oracle is suing Google. Oracle claims that Android infringes on some Java technology, but Google says it's Dalvik.
Secondly, I haven't seen a Java byte code interpreter since 1995.
Can you back up your performance conjecture with some actual benchmarks? The scope of your presumptions don't seem justified given the inaccurate background information you provide.
Java has a pretty compelling argument for Google using it in Android: it has a huge base of developers. All these developers are (kind of) ready to develop for their mobile platform.
Keep in mind that, technically speaking, Android does not use pure Java.
As touched on elsewhere, the main issue is that Android is designed as a portable OS, to run on a wide variety of hardware.
It's also building on a framework and language familiar to many existing mobile developers.
Finally, I would say it is a bet on the future: whatever performance issues exist will become irrelevant as hardware improves. Equally, by getting developers to code against an abstraction, Google can rip out and change the underlying OS far more easily than if developers were coding to the POSIX/Unix APIs.
For most applications the overhead of using a VM-based language over native is not significant (the bottleneck for apps consuming web services, like Twitter, is mostly networking). The Palm WebOS also demonstrates this - and that uses JavaScript rather than Java as the main language.
Given that almost all VMs JIT compile down to native code, raw code speed is often comparable with native speed. A lot of delays attributed to higher-level languages are less to do with the VM overhead than other factors (a complex object runtime, 'safety' checking memory access by doing bounds checking, etc).
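The bounds checking mentioned above is easy to see from Java code: every array access is checked, and an out-of-range index raises an exception rather than silently reading arbitrary memory. A small sketch (the class and method names are illustrative):

```java
public class BoundsCheck {
    // Every Java array access is bounds-checked; the JIT can often
    // hoist or eliminate the check inside loops whose bounds it proves.
    static int readOrDefault(int[] a, int i, int dflt) {
        try {
            return a[i];               // checked access: may throw
        } catch (ArrayIndexOutOfBoundsException e) {
            return dflt;               // out-of-range reads cannot corrupt memory
        }
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30};
        System.out.println(readOrDefault(a, 1, -1));  // 20
        System.out.println(readOrDefault(a, 9, -1));  // -1
    }
}
```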
Also remember that regardless of the language used to write an application, a lot of the actual work is done in lower level APIs. The top level language is often just chaining API calls together.
There are, of course, many exceptions to this rule - games, audio and graphics apps that push the limits of phone hardware. Even on the iOS, developers often drop down to C/C++ to get speed in these areas.
The new JIT runs applications 2-5 times faster than the old Dalvik VM (both Java). So the comparison is not C vs. Java, but JIT vs. the Dalvik VM.
First of all, it's about the same with Windows Mobile or the iPhone: the .NET framework needs its own VM, as does Cocoa.
And even if the performance is not the best, because it's an interpretation of bytecode, Android brings the entire Java community as potential developers. More applications, more clients, etc.
Finally, no, performance is not that bad; that's why Java is used even on smaller devices (see Java ME).

Which language/framework for HPC: Java/.Net/Delphi/C/C++/Objective-C? [closed]

Closed 8 years ago.
I have deliberated endlessly over which language/framework best suits the following. I need to develop an HPC framework. All processing will be completely OO. It will pass objects between instances (externally) and internally between threads and engines. The objects will be an extension of Active Messages.
These instances could run on mobile, Windows, Mac, Linux, etc.
The system needs to be able to perform highly parallel computing with speed & efficiency, ideally take advantage of SSE, and ideally support CUDA/OpenCL.
I have considered the following:
Java - it is memory-hungry and doesn't run on Mac (not officially, anyway)
.Net - memory-hungry; limited in platform scope; no native SSE
Delphi - not 64-bit; limited platform scope
C/C++ - not directly supported on Mac; complex to code; however, it is ubiquitous
Objective-C - supported on Mac; appears to be supported elsewhere; works by passing messages, which matches my design requirements; I don't know much about it
Any suggestions?
Here is my run down of some options (in no particular order):
C/C++
If all you care about is performance (and nothing else), these will deliver. Direct access to system-level constructs, such as processor affinity and inline assembly, can certainly have an impact on performance. However, there are two main drawbacks to the C/C++ option. Firstly, neither has a well-defined memory model, so the memory model you are developing against is that of the CPU you are running on (if you don't know what a memory model is or how it applies to concurrent programming, then you shouldn't be doing this). This ties you very tightly to a single platform. Secondly, there is no garbage collector; manual memory management is tricky (but doable) in the simple cases, and with C++ a number of supporting utilities simplify the problem (e.g. auto pointers/smart pointers). When writing concurrent code it is an order of magnitude harder, as the rules for when a certain piece of memory should be released become very hard to define. E.g. when an object is passed to a different thread, who is responsible for releasing it? If using C++, it's important to ensure that you are using thread-safe versions of the classes that help manage memory. E.g. the Boost smart pointer only supports the use of methods declared as "const" across different threads (e.g. dereferencing is OK); non-const methods (e.g. assignment) are not thread-safe.
Java
Java would be my recommendation. It has support on all of the platforms you mentioned (including mobile, e.g. Java ME and Android) as well as CUDA support. Java has a well-defined memory model that is consistent across platforms, a robust and mature JIT and optimiser, and a number of good and improving garbage collectors. Most general-purpose applications written in Java will run just as fast as their C/C++ counterparts. While it is a bit of a memory hog, if you are doing HPC work you are most likely going to throw some decent hardware at the problem. Given that you can address hundreds of GB on commodity hardware, the memory problem is less of an issue than it used to be. Mobile is the only real area where memory usage is constrained, and the specialist runtime environments perform better in this respect (see above). The only main drawback for Java (from an HPC perspective) is the lack of pass-by-copy complex types (like structs in C#), so all complex objects have to be heap-allocated, putting pressure on the GC. Escape analysis is supposed to help with this somewhat; however, it is difficult to get it to work well in very general cases (note that it has jumped in and out of various revisions of the JDK recently).
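A sketch of the escape-analysis point (whether the optimization actually fires depends on the JVM version and flags; the class here is illustrative): the temporary object below never escapes the method, so HotSpot may scalar-replace it instead of allocating it on the heap:

```java
public class EscapeDemo {
    static final class Vec {
        final double x, y;
        Vec(double x, double y) { this.x = x; this.y = y; }
    }

    // 'v' never escapes distance(), so the JIT may break it into its
    // two double fields (scalar replacement) and skip the heap entirely.
    static double distance(double x1, double y1, double x2, double y2) {
        Vec v = new Vec(x2 - x1, y2 - y1);
        return Math.sqrt(v.x * v.x + v.y * v.y);
    }

    public static void main(String[] args) {
        System.out.println(distance(0, 0, 3, 4));  // 5.0
    }
}
```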
Worth mentioning is the broad language support (Scala and Groovy++ have pretty good performance profiles) and the number of message-passing concurrency frameworks (actors, Akka, Kilim).
C#/.Net
Probably the most complete from a language perspective, especially if you include things like F# for approaching problems in a functional manner. However, you are generally pushed toward an MS platform if you want the best performance (and the mobile story here is not great either). Mono provides a pretty good platform feature-wise (caveat: I'm a contributor), but it is still playing catch-up on performance; e.g. an accurate, compacting collector was only recently added and is still a bit experimental.
Google Go
Very new, but interesting in that it is natively compiled (no JIT overhead) yet still has a strong runtime, a memory model, a garbage collector and direct language support for CSP (communicating sequential processes) features (channels). It probably has some way to go: development of a new GC is under way but has not surfaced yet, and there is probably a lot more they can get out of the compiler.
Haskell and other functional languages
Not a lot of experience here, but some functional constructs, such as immutability and persistent collections, can be quite useful for building robust concurrent applications.
The system needs to be able to perform highly parallel computing with speed & efficiency, ideally take advantage of SSE, and ideally support CUDA/OpenCL.
This narrows it down to C or C++. And since you want OO, it's narrowed down to just C++.
Seriously though, pick what you or your developers are best in.
