MonoTouch performance vs. Objective-C/Java - java

I'm in the process of developing a multiplatform game engine and am using MonoTouch to cover Android and iPhone. I'm really interested in the performance side of using MonoTouch for iOS and Android development: does anyone know what performance impact, if any, MonoTouch has compared to developing in Java or Objective-C for their respective platforms? I ask this specifically from a game developer's perspective, so things like drawing code really worry me. From what I've seen, Mono apps run fine, but say you made a game on the level of Angry Birds (artwork, sound, physics processing), would that run well enough through Mono that you wouldn't be at a significant disadvantage compared to using the platform's native language?

First, a clarification: on Android, the code is not executed by the Java runtime, but by Dalvik (a VM written from scratch by Google). Thus, the Java VM's performance is of no relevance to this question.
With this in mind: most programs on Android don't execute native code, but run on the Dalvik VM (which runs the translated Java bytecode). The Mono JIT has been benchmarked against it before and has consistently been found to be faster (see for example http://www.koushikdutta.com/2009/01/dalvik-vs-mono.html ).
On iOS, MonoTouch has to pre-compile the code into a native application before it can be installed on an Apple device (because of license restrictions, which are enforced by the operating system). That said, both the Objective-C compiler and Mono's Ahead-Of-Time (AOT) compiler use the same LLVM backend for generating and optimizing the binary code, so the results you get should be almost identical (with some memory overhead for Mono).
Please remember one important quote from Donald Knuth: "Premature optimization is the root of all evil." Write your code with performance in mind, but remember that maintainability is more important. Optimization should be done only when it's necessary (because usually the compiler will do a much better job than you can).

Related

Why are operating systems not written in Java?

All operating systems to date have been written in C/C++, while there are none in Java. There are tonnes of Java applications, but not an OS. Why?
Because we have operating systems already, mainly. Java isn't designed to run on bare metal, but that's not as big a hurdle as it might seem at first. Just as C compilers provide intrinsic functions that compile to specific instructions, a Java compiler (or JIT, the distinction isn't meaningful in this context) could do the same thing. Handling the interaction of the GC and the memory manager would also be somewhat tricky. But it could be done. The result is a kernel that's 95% Java and ready to run jars. What's next?
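(An aside, to make the intrinsics point concrete: today's HotSpot already does this for a handful of JDK methods. A minimal sketch; the exact instruction emitted depends on the CPU.)

```java
public class IntrinsicDemo {
    public static void main(String[] args) {
        int flags = 0b1011_0010;
        // Integer.bitCount looks like an ordinary library call, but HotSpot's JIT
        // recognizes it as an intrinsic and, on CPUs that support it, replaces the
        // call with a single POPCNT instruction rather than compiling the Java
        // fallback loop.
        System.out.println(Integer.bitCount(flags)); // prints 4
    }
}
```

A bare-metal Java kernel would need the same kind of trick for things like port I/O and page-table manipulation.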
Now it's time to write an operating system. Device drivers, a filesystem, a network stack, all the other components that make it possible to do things with a computer. The Java standard library normally leans heavily on system calls to do the heavy lifting, both because it has to and because running a computer is a pain in the ass. Writing a file, for example, involves the following layers (at least; I'm not an OS guy, so I've surely missed stuff), and the small example after this list shows how much of that a single Java call hides:
The filesystem, which has to find space for the file, update its directory structure, handle journaling, and finally decide what disk blocks need to be written and in what order.
The block layer, which has to schedule concurrent writes and reads to maximize throughput while maintaining fairness.
The device driver, which has to keep the device happy and poke it in the right places to make things happen. And of course every device is broken in its own special way, requiring its own driver.
And all this has to work fine and remain performant with a dozen threads accessing the disk, because a disk is essentially an enormous pile of shared mutable state.
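To put a number on how much of that the standard library hides, here is the entire user-facing side of "writing a file" (a sketch; on Linux, running it under strace shows it bottoming out in openat/write/close system calls that the kernel, not the JVM, actually services):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteDemo {
    public static void main(String[] args) throws Exception {
        // One line of Java; everything in the list above happens below this call.
        Files.write(Path.of("/tmp/hello.txt"), "hello, disk".getBytes());
    }
}
```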
At the end, you've got Linux, except it doesn't work as well, because it doesn't have nearly as much effort invested in functionality and performance, and it only runs Java. Possibly you gain performance from having a single address space and no kernel/userspace distinction, but the gain isn't worth the effort involved.
There is one place where a language-specific OS makes sense: VMs. Let the underlying OS handle the hard parts of running a computer, and the tenant OS handles turning a VM into an execution environment. BareMetal and MirageOS follow this model. Why would you bother doing this instead of using Docker? That's a good question.
Indeed, there is a JavaOS: http://en.wikipedia.org/wiki/JavaOS
And here is a discussion about why there are not many OSes written in Java: Is it possible to make an operating system using Java?
In short: Java needs to run on a JVM, and a JVM needs to run on an OS, so writing an OS in Java is not a good choice.
An OS needs to deal with hardware, which is not doable in Java (except through JNI). That is because the JVM only provides a limited set of operations that Java code can use: arithmetic, method calls and so on. Dealing with hardware requires instructions that operate on registers, memory, the CPU and device drivers directly. These are not supported directly by the JVM, so JNI is needed, and that brings you back to the start: you still end up writing the OS in C/assembly.
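For illustration, this is what the JNI escape hatch mentioned above looks like from the Java side (the library name and method are hypothetical; the actual register-poking body would have to live in C):

```java
public class PortIo {
    static {
        // Hypothetical native library containing the C implementation.
        System.loadLibrary("portio");
    }

    // Declared in Java, implemented in C via JNI; Java itself has no way
    // to touch an I/O port or a device register directly.
    public static native void outb(int port, int value);
}
```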
Hope this helps.
One of the main benefits of using Java is that it abstracts away a lot of low-level details that you usually don't really need to care about. It's exactly those details that are required when you build an OS. So while you could work around this to write an OS in Java, it would have a lot of limitations, and you'd spend a lot of time fighting with the language and its initial design principles.
For operating systems you need to work at a really low level, and that is a pain in Java. You do need, for example, unsigned data types, and Java only has signed ones. You need struct objects that have exactly the memory alignment the driver expects (and no object header like the one Java adds to every object).
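A small example of the unsigned-type pain: a driver reading a status register gets a Java byte, which is signed, so every read ends up wrapped in masking like this:

```java
public class UnsignedPain {
    public static void main(String[] args) {
        byte raw = (byte) 0xF0;            // what the hardware handed us
        System.out.println(raw);           // -16: Java bytes are signed
        int unsigned = raw & 0xFF;         // 240: what the register actually means
        System.out.println(unsigned);
        System.out.println(Byte.toUnsignedInt(raw)); // same masking, since Java 8
    }
}
```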
Even key components of Java itself are no longer written in Java.
And this is by no means a temporary thing. More and more gets rewritten in native code to get better performance. The HotSpot VM adds "intrinsics" for performance-critical native code, and there is work underway to reduce the overall cost of native calls.
For example, JavaFX: the reason it is much faster than AWT/Swing ever were is that it contains/uses a huge amount of native code. It relies on native code for rendering, and if you add the "WebView" browser component, for instance, it actually uses the WebKit C library to provide the browser.
There are a number of things Java does really well. It is a nicely structured language with a fantastic toolchain. Python is much more compact to write, but its toolchain is a mess; its refactoring tools, for example, are disappointing. And where Java shines is at optimizing polymorphism at run time. Where a C++ compiler would have to emit an expensive virtual call, because at compile time it is not known which implementation will be used, HotSpot can aggressively inline code to get better performance. But for operating systems, you do not need this much. You can afford to manually optimize call sites and inlining.
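A sketch of what that buys you: the call below is virtual as far as the source code is concerned, but if HotSpot only ever observes one implementation at that call site, it inlines it directly (with a cheap guard in case another subclass is loaded later), something a C++ compiler generally cannot do across compilation units without profile-guided optimization:

```java
interface Shape {
    double area();
}

final class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    @Override public double area() { return Math.PI * r * r; }
}

public class Devirtualize {
    // s.area() is a virtual call in the bytecode, but if this method only ever
    // receives Circles, HotSpot inlines Circle.area() right here after warm-up.
    static double total(Shape[] shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[100_000];
        for (int i = 0; i < shapes.length; i++) shapes[i] = new Circle(i);
        System.out.println(total(shapes));
    }
}
```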
This answer is not meant to be exhaustive in any way, but I'd like to share my thoughts on the (very vast) topic.
Although it is theoretically possible to write an OS in pure Java, there are practical matters that make this task really difficult. The main problem is that there is no (currently up-to-date and reliable) Java compiler able to compile Java to native machine code. So there is no existing tool that would make writing a whole OS from the ground up feasible in Java, at least as far as my knowledge goes.
Java was designed to run in some implementation of the Java virtual machine. Implementations exist for Windows, Mac, Linux, Android, etc. The design of the language is strongly based on the assumption that the JVM exists and will do some magic for you at runtime (think garbage collection, the JIT compiler, reflection, etc.). This is most likely part of the reason why such a compiler does not exist: where would all this functionality go? Compiled down into the native binary? It's possible, but at this point I believe it would be difficult to do. Even Android, whose SDK is purely Java-based, runs Dalvik (its own VM, which runs a translated form of Java bytecode and supports a subset of the platform) on a Linux kernel.

OpenCV (JavaCV) vs OpenCV (C/C++ interfaces)

I am just wondering whether there would be a significant speed advantage, on a given set of machines, when using JavaCV as opposed to the C/C++ implementation of OpenCV.
Please correct me if I am wrong, but my understanding is that the C/C++ implementation of OpenCV is closer to the machine, whereas the Java interface to OpenCV, JavaCV, would have a slight performance disadvantage (in milliseconds), since there is a virtual machine turning your source code into bytecode, which then gets converted to machine code. With C/C++, it gets converted straight to machine code and thus doesn't carry the overhead of that intermediary virtual-machine step.
Please don't kill me here if I made mistakes; I am just learning and would welcome constructive criticism.
Thank you
I'd like to add a couple of things to @ejbs's answer.
First of all, you are really asking about 2 separate issues:
Java vs. C++ performance
OpenCV vs JavaCV
Java vs. C++ performance is a long, long story. On the one hand, C++ programs are compiled to highly optimized native code. They start quickly and run fast all the time, without pausing for garbage collection or other VM duties (as Java does). On the other hand, once compiled, a C++ program can't change, no matter what machine it runs on, while Java bytecode is compiled "just in time" and is always optimized for the processor architecture it runs on. In the modern world, with so many different devices (and processor architectures), this can be really significant. Moreover, some JVMs (e.g. Oracle HotSpot) can re-optimize even code that has already been compiled to native code: the VM collects data about the program's execution and from time to time tries to rewrite the code so that it is optimized for that specific execution. So in such complicated circumstances, the only real way to compare the performance of implementations in different programming languages is to just run them and see the result.
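If you want to see the "optimize while running" part for yourself, HotSpot can log its JIT decisions. A minimal sketch (the output format varies between JVM versions):

```java
// Run with: java -XX:+PrintCompilation HotLoop
// After enough iterations HotSpot logs sum(long) being compiled to native code,
// and you can watch it move to higher optimization tiers as the loop keeps running.
public class HotLoop {
    static long sum(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int iter = 0; iter < 20_000; iter++) total += sum(10_000);
        System.out.println(total);
    }
}
```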
OpenCV vs. JavaCV is another story. First you need to understand the stack of technologies behind these libraries.
OpenCV was originally created in 1999 in Intel's research labs and was written in C. Since then, it has changed maintainers several times, become open source and reached its 3rd version (an upcoming release at the time of writing). At the moment, the core of the library is written in C++, with a popular Python interface and a number of wrappers for other programming languages.
JavaCV is one such wrapper. So in most cases, when you run a program with JavaCV, you are actually using OpenCV too, just calling it via another interface. But JavaCV provides more than a one-to-one wrapper around OpenCV. In fact, it bundles a whole set of image-processing libraries, including FFmpeg, OpenKinect and others. (Note that in C++ you can bind to these libraries too.)
So, in general, it doesn't matter whether you use OpenCV or JavaCV; you will get just about the same performance. It depends more on your main task: whether Java or C++ is better suited for your needs.
There's one more important point about performance. Using OpenCV (directly or via a wrapper), you will sometimes find that OpenCV functions outperform other implementations by several orders of magnitude. This is because of the heavy use of low-level optimizations in its core. For example, OpenCV's filter2D function is SIMD-accelerated and can thus process several sets of data in parallel. And when it comes to computer vision, such optimizations of common functions can easily lead to a significant speedup.
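As a concrete illustration of "the heavy work stays native", the snippet below uses the official OpenCV Java bindings (not JavaCV, but the principle is the same) to run filter2D; the Java side only shuffles Mat handles, while the SIMD-accelerated convolution runs inside the C++ library:

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class Filter2DDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // the actual OpenCV code is native

        Mat src = new Mat(480, 640, CvType.CV_8UC1, new Scalar(128));
        Mat dst = new Mat();

        // 3x3 box-blur kernel; the convolution itself runs in optimized C++/SIMD code.
        Mat kernel = Mat.ones(3, 3, CvType.CV_32F);
        Core.multiply(kernel, new Scalar(1.0 / 9.0), kernel);

        Imgproc.filter2D(src, dst, -1, kernel);
        System.out.println("filtered: " + dst.rows() + "x" + dst.cols());
    }
}
```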
JavaCV interfaces to OpenCV, so when you call something OpenCV-related there will be some overhead, but in general most of the heavy work still happens on the C++ side, and therefore there won't be a very large performance penalty.
You would have to do performance benchmarks to find out more.
PS. I'm pretty new here but I'm rather sure that this is not a suitable question for StackOverflow.
I would like to add a few more insights on Java as an interface to C++ libraries...
A) Developing:
1) While Java may make it easier to manage large-scale projects and compiles extremely fast, it is very, very hard, next to impossible, to debug native code from Java...
When code crashes on the native side... or leaks memory (something that happens a lot...), you feel kind of helpless...
2) Unless you build the bindings yourself (not an easy task, even with SWIG or the like...), you are dependent on the goodwill/health/time of whoever maintains the bindings...
So in this case I would prefer the official "desktop Java" bindings over JavaCV...
B) Performance:
1) While the bindings may be optimized (memory transfer using direct NIO buffers), as in the JavaCV case, there is still a very small JNI overhead for each native function call; this is negligible in our case, since most OpenCV functions consume 100,000+ times more CPU cycles than the JNI overhead...
2) The BIG problem: the stop-the-world garbage collector (GC).
Java uses a garbage collector that halts all threads, making it unsuitable for real-time applications. There are workarounds I've heard of, like redesigning your app not to produce garbage, using a special GC, or using real-time Java (which costs money...); they all seem like extra work (and all you wanted was a nice, easy path to OpenCV...).
Conclusion: if you want to create a professional real-time app, then go with C++.
Unless you have a huge modular project to manage, just stick with C++ and precompiled headers (they make things compile faster...).
While Java is a pleasure to work with, when it comes to native bindings, hell breaks loose... I know, I've been there...

Suitable replacement for Java to develop a cross-platform application

I've always used Java for developing cross-platform applications; however, this time Java cannot solve my problem. I have to develop an application which is computationally expensive: more precisely, it contains a simulation which is a little too heavy. I made a Java prototype, but it's not fast enough and there is some lag in my simulation, so I started thinking about switching to C++.
My application has a GUI, and I was wondering: if I switch to C++ for a cross-platform application, what should I do about the GUI?
My questions are:
If I use the Qt framework, will my application be significantly faster?
If I package my jar file as a native OS executable (.exe, .app, etc.), will my application be significantly faster?
P.S. Mac OS X, Windows and Ubuntu are the target platforms for my software.
This article may help you; I faced the same questions a couple of years ago. I decided to stick with Java because of my own programming experience, since I'm not that good at C++ and my project was, to be honest, very simple. As you know, Java is very widespread, with tons of docs and libraries ready for you to use; Qt is faster, but you will need to get your hands dirty to do the job. If performance is your goal, go Qt. Or redesign your application to have a Java/Swing GUI with C++ programs on the server side. Anyway, here's the link.
http://turing.iimas.unam.mx/~elena/PDI-Lic/qt-vs-java-whitepaper.pdf
Java/Swing may be appropriate for certain projects, especially those without GUIs or with limited GUI functionality. C++/Qt is an overall superior solution, particularly for GUI applications.
Using C++ instead of Java improves CPU performance, sometimes by as much as 10-30%. However, using multiple threads also increases the amount of CPU you have available. Given that using multiple threads didn't help, I suspect your bottleneck is not the CPU, and switching languages is unlikely to help.
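For reference, this is roughly the shape of "using multiple threads" being discussed: split the simulation into independent chunks and farm them out to a pool sized to the machine (simulateChunk here is a hypothetical stand-in for one slice of the real work):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSim {
    // Hypothetical stand-in for one independent slice of the simulation.
    static double simulateChunk(int chunk) {
        double x = 0;
        for (int i = 0; i < 5_000_000; i++) x += Math.sin(chunk + i * 1e-6);
        return x;
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        List<Future<Double>> parts = new ArrayList<>();
        for (int chunk = 0; chunk < cores; chunk++) {
            final int c = chunk;
            parts.add(pool.submit(() -> simulateChunk(c)));
        }

        double total = 0;
        for (Future<Double> part : parts) total += part.get();
        pool.shutdown();

        System.out.println("simulation result: " + total);
    }
}
```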
Where C can help is in programming graphics cards, e.g. with CUDA. You can get dramatically faster results for certain types of problems using a high-performance processing card: http://www.nvidia.co.uk/object/cuda_home_new_uk.html There are Java bindings such as JCuda (for CUDA) and JOCL (for OpenCL) to drive the GPU from Java, but the code which does the real work is written in a C-like language.
I suggest you determine where your bottleneck really is, as switching to C++ will not increase the size of your cache, your memory bandwidth, your IO bandwidth or the size of your main memory.

Is there an advantage to running JRuby if you don't know any Java?

I've heard great things about JRuby and I know you can run it without knowing any Java. My development skills are strong, Java is just not one of the tools I know. It's a massive tool with a myriad of accompanying tools such as Maven/Ant/JUnit etc.
Is it worth moving my current Rails applications to JRuby for performance reasons alone? Perhaps if I pick up some basic Java alongside, there could be some added benefits that aren't obvious, such as better debugging and performance-optimization tools?
Would love some advice on this one.
I think you pretty much nailed it.
JRuby is just yet another Ruby execution engine, just like MRI, YARV, IronRuby, Rubinius, MacRuby, MagLev, SmallRuby, Ruby.NET, XRuby, RubyGoLightly, tinyrb, HotRuby, BlueRuby, Red Sun and all the others.
The main differences are:
portability: for example, YARV is only officially supported on x86 32 Bit Linux. It is not supported on OSX or Windows or 64 Bit Linux. Rubinius only works on Unix, not on Windows. JRuby OTOH runs everywhere: desktops, servers, phones, App Engine, you name it. It runs on the Oracle JDK, OpenJDK, IBM J9, Apple SoyLatte, RedHat IcedTea and Oracle JRockit JVMs (and probably a couple of others I forgot about) and also on the Dalvik VM. It runs on Windows, Linux, OSX, Solaris, several BSDs, other proprietary and open Unices, OpenVMS and several mainframe OSs, Android and Google App Engine. In fact, on Windows, JRuby passes more RubySpec tests than "Ruby" (meaning MRI or YARV) itself!
extensibility: Ruby programs running on JRuby can use any arbitrary Java library. Through JRuby-FFI, they can also use any arbitrary C library. And with the new C extension support in JRuby 1.6, they can even use a large subset of MRI and YARV C extensions, like Mongrel for example. (And note that "Java" or "C" library does not actually mean written in those languages, it only means with a Java or C API. They could be written in Scala or Clojure or C++ or Haskell.)
tooling: whenever someone writes a new tool for YARV or MRI (like e.g. memprof), it turns out that JRuby already had a tool 5 years ago which does the same thing, only better. The Java ecosystem has some of the best tools for "runtime behavior comprehension" (which is a term I just made up, by which I mean much more than just simple profiling, I mean tools for deeply understanding what exactly your program does at runtime, what its performance characteristics are, where the bottlenecks are, where the memory is going, and most importantly why all of that is happening) and visualization available on the market, and pretty much all of those work with JRuby, at least to some extent.
deployment: assuming that your target system already has a JVM installed, deploying a JRuby app (and I'm not just talking about Rails, I also mean desktop, mobile, other kinds of servers) is literally just copying one JAR (or WAR) and a double-click.
performance: JRuby has much higher startup overhead. In return you get much higher throughput. In practice, this means that deploying a Rails app to JRuby is a good idea, as is running your integration tests, but for developer unit tests and scripts, MRI, YARV or Rubinius are better choices. Note that many Rails developers simply develop and unit test on MRI and integration test and deploy on JRuby. There's no need to choose a single execution engine for everything.
concurrency: JRuby runs Ruby threads concurrently. This means two things: if your locking is correct, your program will run faster, and if your locking is incorrect, your program will break. (Unfortunately, neither MRI nor YARV nor Rubinius run threads concurrently, so there's still some broken multithreaded Ruby code out there that doesn't know it's broken, because obviously concurrency bugs can only show up if there's actual concurrency.)
platforms (this is somewhat related to portability): there are some amazing Java platforms out there, e.g. the Azul JCA with 768 GiBytes of RAM and 864 CPU cores specifically designed for memory-safe, pointer-safe, garbage-collected, object-oriented languages. Android. Google App Engine. All of those run JRuby.
I would modify what Peter said slightly. JRuby may use more memory compared to standard Ruby, but that's usually because you're doing in a single process what would take several processes with standard Ruby.
You should try the Rails.threadsafe! option with a single JRuby runtime (for example, the Trinidad gem with the --threadsafe option). We've heard several stories where it gives you great performance and low memory usage, while leveraging multiple CPU cores with a single process.
JRuby is one of the few implementations that uses native threads. So if you care to do some multithreading, go for it.
As far as hosting is concerned, you have to put your app in some sort of java container, which I personally find to be far less straightforward than using something like passenger (for Rack apps)
I use JRuby for an app as we communicate over JMS and it works fine, but if I wasn't using any Java I would certainly stick to CRuby. My biggest beef is that in testing, running tests takes forever with JRuby as you have to spin up a VM each time you run them. This makes it a lot harder to TDD as it's a significant hit on your testing time.
JRuby has advantages if you're on Windows. It supports 64-bit systems, and you can use a lot of proprietary databases with standard JDBC drivers.
The latest releases are significantly faster than Ruby, but they also use significantly more memory. If that is your only reason for using JRuby, I wouldn't bother unless you have a specific performance need that it solves, simply because, while it is pretty popular, it is less standard for hosting and fewer people use it compared to standard Ruby. That being said, there are many other reasons to use JRuby, such as a need for interoperability with existing Java code, or the need to deploy in environments where Java has been "blessed" by the operations department and Ruby has not.

iPhone VM for Android

I'm considering starting a project to create an iPhone virtual machine for Android 2.0 (read: the Motorola Droid). Before I do so, I have some questions:
Does one already exist that I just missed?
Can the Droid's ARM Cortex-A8, down-clocked to 550 MHz (thanks, Wikipedia), handle an iPhone abstraction layer?
Performance-wise, the best thing to do is write the app in C++, but for the health of the system, would it be better to put the iPhone VM on top of the Dalvik VM? Which approach would be better, and why?
Does one already exist that I just missed?
No.
Can the Droid's ARM Cortex-A8, down-clocked to 550 MHz (thanks, Wikipedia), handle an iPhone?
No, but the CPU is not strictly the issue.
Performance-wise, the best thing to do is write the app in C++, but for the health of the system, would it be better to put the iPhone VM on top of the Dalvik VM? Which approach would be better, and why?
It is conceivable you could create an Objective-C implementation in C/C++ that could run on Android via the Android NDK, but NDK libraries have limited system access, meaning you would not be able to do much in Objective-C.
It is conceivable that your Objective-C implementation could run as a standalone application on rooted hardware, and therefore have access to more of the system, but then you pretty much aren't running Android anymore.
It is inconceivable to create an Objective-C implementation that will run on the Dalvik VM and have performance similar to a native implementation of Objective-C on the iPhone.
Note that I have not even discussed implementing the Cocoa libraries and such, as I have no idea how you could do that in reasonable time without copyright infringement, which will get you sued into oblivion (see: Apple v. Psystar). The only way to avoid this is a total cleanroom implementation, and the WINE folk will point out how they have been trying to do this for Windows for around 17 years and have had incomplete success.
If your goal is to write applications once that run across Android and iPhone, consider PhoneGap, Appcelerator Titanium Mobile, and similar toolkits.
No
No, not even close
It's moot; frankly, regardless of the language you write it in, you won't get anywhere close to a usable speed. I suppose, to actually answer the question: as close to the metal as possible. Again, it's a fool's errand anyway.
