I'm interested in programming languages well suited for embedded programming.
In particular:
Is it possible to program embedded systems in C++?
Or is it better to use pure C?
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
What about Java in this domain?
Thanks.
Is it possible to program embedded systems in C++?
Yes, of course, even on 8-bit systems. C++ has only slightly different run-time initialisation requirements than C, namely that constructors for any static objects must be called before main() is invoked. The overhead of that (not including the constructors themselves, which are within your control) is tiny, though you do have to be careful, since the order of construction across translation units is not defined.
With C++ you only pay for what you use (and much that is useful may be free). That is to say, for example, a piece of C code that is also valid C++ will generally require no more memory and execute no slower when compiled as C++ than when compiled as C. There are some elements of C++ that you may need to be careful with, but many of the most useful features come at little or no cost, and great benefit.
Or is it better to use pure C?
Possibly, in some cases. Some smaller 8-bit and even 16-bit targets have no C++ compiler (or at least not one of any repute); using C will give greater portability should that be an issue. Moreover, on severely resource-constrained targets with small applications, the benefits that C++ can bring over C are minimal. The extra features in C++ (primarily those that enable OOP) make it suited to relatively large and complex software construction.
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
The language features that may be acceptable depend entirely on the target and the application. If you are memory constrained, you might avoid expensive features or libraries, and even then it may depend on whether it is code or data space you are short of (on targets where these are separate). If the application is hard real-time, you would avoid those features and library classes that are non-deterministic.
In general, I suggest that if your target will be 32-bit, always use C++ in preference to C, then cut your C++ down to suit the memory and performance constraints. For smaller targets, be a little more circumspect when choosing C++, though do not discount it altogether; it can make life easier.
If you do choose to use C++, make sure you have decent debugger hardware/software that is C++ aware. The relative ease with which complex software can be constructed in C++ makes a decent debugger even more valuable. Not all tools in the embedded arena are C++ aware or capable.
I always recommend digging in the archives at Embedded.com on any embedded subject; it has a wealth of articles, including a number on just this question, including:
Poor reasons for rejecting C++
Real men program in C
Dive in to C++ and survive
Guidelines for using C++ as an alternative to C in embedded designs
Why C++ is a viable alternative to C in embedded systems design
Better even at the lowest levels
Regarding Java, I am no expert, but it has significant run-time requirements that make it unsuited to resource constrained systems. You will probably constrain yourself to relatively expensive hardware using Java. Its primary benefit is platform independence, but that portability does not extend to platforms that cannot support Java (of which there are many), so it is arguably less portable than a well designed C or C++ implementation with an abstracted hardware interface.
[edit] Coincidentally, I just received this in the TechOnline newsletter: Using C++ Efficiently in Embedded Applications
More often than not in embedded systems, the language you're programming in is determined by which compiler is actually available.
If your hardware only has a C compiler, that's what you're going to use. If it has a decent C++ compiler, then there is really no reason to prefer C over C++.
I would dare say that Java isn't a very popular choice in embedded systems.
Embedded programming these days spans a large range of applications.
Roughly, it goes from sensors/switches up to complete security systems.
You should base your language on the complexity and the hardware resources.
It is one of the choices, next to the hardware (CPU, ...), OS, protocols, ...
possible choices:
switches: assembler
router-like devices: C and/or C++
handhelds: Java or QT/C++
complete systems: combinations C and/or C++ with python
Or is C++ OK only if some features of the language (e.g. RTTI, exceptions and templates) are excluded?
It's good to be thinking along these lines. Compile-time complexity is not a big deal, but runtime complexity has a resource cost.
C++ facilitates class/namespace modularity (e.g. method foo() in more than one context) and instance modularity (member field bar belonging to more than one object), both of which are a big advantage in software design. There are also features like const, references, static casts, and templates, which can help enforce constraints and have little or no runtime cost.
I would not exclude templates. They're complex to think about and you need a compiler that handles them well, but the resource cost is almost all compile time -- what's going to "cost" you is the fact that each time you use a template with different class parameters, you produce a new set of code to instantiate member functions. But you would almost certainly have to do the same thing without templates.
Furthermore, templates allow you to design and test libraries for general circumstances, in separate files that are instantiated at compile time rather than link time. Just to clarify that: templates allow you to have a file A.h that you test. Then you use it with file B.h or B.c to instantiate it at compile time. (A library would be linked in rather than compiled, and this makes it less flexible -- template methods can be optimized out so they do not incur a function call.) I've used templates in embedded systems to implement CRC code and fixed-point math: I can test the general code, put it in version control, and then reuse it multiple times by writing a simple class that derives from a template or has a template member field. The classic example of course is STL.
RTTI and exceptions: these add run-time complexity. I don't have a good idea of the resource cost but I expect RTTI would be fairly simple (just adds a type tag, costing extra space) whereas exceptions are probably beastly, involving stack unwinding.
virtual functions: I used to rule these out because of the memory and execution-time costs (minimal but still there), as well as the complexity of debugging, but they allow you to decouple objects from each other. If you don't use virtual functions, when an instance of one class (e.g. Foo) needs to execute code associated with an instance of another class (e.g. Bar), then the first class needs to know everything about the second (to compile Foo you need to have static linkage to all the methods in Bar) -- this adds a lot of tight coupling.
dynamic memory allocation: this is another big thing (that is in C as well), which we avoid like the plague at my company -- not only are there all sorts of errors that can arise, but the big runtime cost is the allocator/deallocator, and you've got to be willing and able to know what that cost is and accept it.
edit: I would love to use Java instead of C++ in the embedded world. Unfortunately my choices are limited and the runtime resource costs (code size, memory size, garbage-collecting time constraints) are too high in the space that I work in. My reason for using Java is less because of its runtime goodies and more for the fact that its software design is much cleaner, and the tools are much better (OMG! refactoring! woohoo!)... the key to me seems to lie with two things, which make C/C++ feel very clunky in comparison:
that everything is an object and all the methods are virtual, so you can lean heavily on the abstractions of interfaces.
the interface/implementation separation in Java is not this clunky ugly .c/.h file splitting thing that makes compilers so slow (see the sketch below). In contrast, I use Java in Eclipse and it automatically compiles the code right as I edit it. This is huge! I find most of my errors right away. In C/C++ I have to wait for a whole compile cycle.
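For illustration, here is a minimal, hypothetical Java sketch of that separation: the interface and an implementation are ordinary compilation units, with no header file to keep in sync (the names are made up for the example).

```java
// Hypothetical example: contract and implementation are plain source files.
interface TemperatureSensor {
    double readCelsius();
}

class FakeSensor implements TemperatureSensor {
    @Override
    public double readCelsius() {
        return 21.5; // canned value, just for the demo
    }
}

public class SensorDemo {
    public static void main(String[] args) {
        TemperatureSensor sensor = new FakeSensor();
        System.out.println("Temperature: " + sensor.readCelsius() + " C");
    }
}
```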
Someday I hope there will be a language between C/C++ and Java that provides the advantages of Java for software development, without requiring the bells and whistles that make Java so attractive for desktop/server applications but unattractive in most of the embedded world.
Both C and C++ can be used on embedded systems. If you do limit the features of C++ that you use, then it is going to use roughly the same speed and space as C. As for using these additional features, it really depends on your constraints. For example, if you are making a real-time system, then exceptions might not be a good idea, simply because considering the propagation time for exceptions and all the paths through which exceptions can possibly propagate can make hard real-time guarantees quite tough (although, then again, the ACE / Tao real-time CORBA implementation uses exceptions). While templates and RTTI can lead to larger programs, there is a lot of variability in embedded systems, and so depending on your resource constraints, this could be perfectly fine or unacceptable. The same goes for Java. Java (well, technically, the Java programming language with a subset of the Java API) is running on Android, for example, using the Dalvik VM. J2ME runs on a variety of embedded devices. Whether it will or will not work for your application, depends on your particular constraints.
C is the most common language used in embedded systems.
However, I think C++'s future lies in the embedded system domain. I think the C++0x standards will help in that respect. So I wouldn't be surprised to see C++ used a lot more in this field.
I think you already have a great answer by Clifford, but I'll add my experience to it. As was pointed out, the type of embedded system is the main driver. In Defense/Aerospace, C and Ada are the most popular embedded languages I encounter, although as time goes on I am seeing more C++ as model-based development becomes popular with tools such as Rhapsody. In the job listings, seeing requirements for object-oriented design experience also leads me to believe that the market is slowly shifting to follow the trends of mainstream development.
If you're really interested in embedded development, I would start with C. It is pretty universal. Almost every OS, including real time OS's like Integrity, Nucleus, VxWorks, and embedded Linux, has a compiler and tools for it. The things you'll learn about pointers, memory management, etc... will translate into C++ development well in the embedded world at least.
As for Java, if you're interested in development for mobile platforms like smart phones, this is a solid choice (Android comes to mind). But in the world of Real Time Operating Systems, I haven't seen it.
On that note, that brings me to my last point and advice. If I wanted to jump into embedded programming (which I did just 4 years ago), I would focus on learning C from a low level point of view as mentioned above. Then, I would also learn what makes real time programming so difficult/different. I found the book below quite good at teaching you to think like an embedded programmer vs. an application developer. Good luck!
An Embedded Software Primer
It really boils down to what hardware platform you're operating on and hence what software platforms are open to you. For a lot of recent embedded kit - designed around a system-on-chip, a megabyte or two of RAM, a few devices - you really want a small operating system to manage the low level hardware while you concentrate on your application. Your choice of OS then constrains your available language set.
It's certainly possible to use C++ in the embedded space, but the full feature set of the language takes a lot of work to port correctly. For example, eCos is implemented in a mixture of C and what you might call the structural aspects of C++; support for RTTI, exceptions and the STL are lacking in the free version, though there are people working on this (and a commercial port available).
Similarly, it's possible to use Java - I know, I've ported a JVM to an embedded environment; it wasn't fun - though you usually get a cut-down Java language configuration, often something based on J2ME.
I am an embedded programmer and working with an embedded JVM.
This enables running Java files on constrained devices.
These Java files are first compiled to bytecode into .class files which are then further optimized and uploaded to the device which has a micro JVM to run the optimized bytecode.
The micro JVM does not support all features, e.g., no reflection.
The main benefit is obvious: this allows programming in Java for constrained devices.
However, I was thinking that plenty of languages compile to bytecode; some are listed at https://en.wikipedia.org/wiki/Java_bytecode.
So in theory these languages could also be used to program these devices.
I'd like to obtain a list of common languages that compile down to bytecode and was wondering if you could help.
For example, Python has special implementations that compile to Java bytecode, if I'm not mistaken, and things like C to Java virtual machine compilers also exist.
So what languages would you think are logical to try and run on the devices? Any pointers on how to or similar experiences?
Also, I'm not clear what the difference is from reading Wikipedia between (Python) bytecode and Java bytecode, could anybody help explain that?
I agree with you about the overall idea, and it would be nice to develop an embedded application using any language that can run on a JVM. But there are some practical issues that you should consider, and I think that's why none of the major vendors or open source initiatives have any active/serious project on this (as far as I know).
As you mentioned, JVM implementations that can run on embedded devices each have their own constraints and limitations. The most obvious one is that some packages may not be available at runtime. In order to apply such a constraint, you should either control it in the compile process or have a toolchain (sort of an SDK) which accepts the bytecode and checks such constraints.
This situation gets worse when a developer tries to use a third-party library that is available for that specific language. It's not easy to guess whether a library is safe to use against such a framework or not.
One great facility for developers would be to have their IDE check such issues on the fly (something like inspections in IntelliJ IDEA). This makes it much smoother to move toward using such a solution. But again, the problem is that for each such language there needs to be a specific plugin compatible with its syntax.
Also, some JVM languages that are actually implementations of an existing language (e.g. Jython or JRuby) are most of the time out of sync with the original language when it comes to supporting library and syntax changes of that language.
Anyway, I think you could easily find a list of JVM languages on Wikipedia. Maybe you mean those that may be worth considering in this regard by having a large community and tool support. In my opinion, you should focus on the following JVM languages as those worth including in your final list:
Groovy
Kotlin
Scala
These are all pure JVM languages that simply use a different syntax than Java.
Regarding the topic in general, I should say that when you search for embedded JVM implementations, you'll notice that it's also a fairly academic concept, and there are many publications on this topic covering the overall architecture, threading support, toolchain, error handling, memory management, etc. This means that you need a very strong background in both embedded systems and programming language concepts and implementation to be able to devise a proper architecture for such a platform.
About your last question regarding the difference between Python bytecode and Java bytecode (if I understand your question correctly): these are conceptually the same, but each has its own syntax and constraints. Bytecode refers to the compiler output that is a platform-independent representation of the original code and can be run/interpreted at runtime by another software component, the virtual machine. In the Java world, this software is called the Java Virtual Machine (JVM). I'm from the Java world so I don't know what it's called in Python vocabulary, but it should be something similar (e.g. a Python virtual machine).
I think due to the complexity of developing such a toolchain, and also considering the unprecedented development of new IoT and SoC devices, many of them capable of running higher-level operating systems, maybe in the long run most developers will prefer to develop for higher-end devices using higher-level APIs and SDKs. Who knows! In that case, we would have the same situation that we're in today for PCs. Languages like C and assembly are still in use, but they have their own domain of applications. I mean that throughout time, layers of abstraction are added on top of the previous ones. The same thing can happen for embedded devices.
It is great that using Intermediate Language (.Net: MSIL, Java: Bytecode) we can achieve platform independence. But when an application is supposed to run on a single platform only (e.g. Windows), in that case is there any simple way to specify that "I don't need JIT every time just give me the native code."?
Single platform (Windows) doesn't really mean single target. I'm currently running on Windows - and some binaries are x86, and some are x64. Even within the same processor family, different specific chips have different abilities that the JIT could take care of.
On .NET you can use NGEN - but personally I would see how much benefit there is before you actually use it in production. I believe the main benefit is in terms of start-up time rather than performance when actually executing. In fact, I believe there are some optimizations that the "normal" JIT can make which NGEN won't.
One point to note is that although the Hotspot JIT for Java is adaptive as Dolda2000 mentions, the .NET JIT is currently "once only" - it won't re-JIT code putting in more effort if it turns out to be very heavily used, or make assumptions around subclassing and then "undo" them later.
I cannot speak for .Net, but there certainly are native Java compilers, such as GNU GCJ.
More importantly, however, are you really sure that you want to avoid JITing? A JIT compiler, operating as it is on knowledge of the global state of the code, can often make optimizations that static compilers cannot. For example, a JIT compiler can inline virtual methods when it knows that no subclass currently exists that overrides it (whereas a static compiler couldn't know if such a class would be (statically or dynamically) linked in later on). There are many other examples as well, but I don't think the scope of this answer is to list them. :)
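As a rough, hypothetical sketch of that devirtualization point (class names invented): as long as no subclass of Shape that overrides area() has been loaded, a JIT such as HotSpot can treat the call below as monomorphic and inline it, whereas a static compiler cannot safely assume that no overriding subclass will ever be linked in.

```java
class Shape {
    double area() {
        return 1.0;
    }
}

public class JitDemo {
    public static void main(String[] args) {
        Shape shape = new Shape();
        double total = 0.0;
        for (int i = 0; i < 10_000_000; i++) {
            total += shape.area(); // hot virtual call, a candidate for inlining
        }
        System.out.println(total);
    }
}
```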
Another point:
Many current frameworks manipulate the bytecode during classloading. This means that the code on disk is not the code executed. Any Java Framework doing annotation based dependency injection will use this. JPA/Hibernate use this. AOP (Aspect Oriented Programming) usually use this, although the AOP frameworks usually also provide a way to manipulate the class files during the build.
Compiling the code upfront into native code would render these frameworks useless.
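To make that concrete, here is a small, self-contained sketch of the kind of code such frameworks consume. The @Inject-style annotation is defined locally for the example; a real container (Spring, Guice, a JPA provider, an AOP weaver) would populate the annotated field by instrumenting the class at load time, rather than relying on the manual wiring shown in main().

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Made-up marker annotation, standing in for a framework's @Inject.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Inject {
}

class OrderRepository {
    void save(String order) {
        System.out.println("saved " + order);
    }
}

public class OrderService {
    @Inject
    OrderRepository repository; // a container would fill this in at load/startup time

    public static void main(String[] args) {
        OrderService service = new OrderService();
        service.repository = new OrderRepository(); // manual wiring, only for the demo
        service.repository.save("order-42");
    }
}
```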
I have read a few articles mentioning converters from one language to another.
I'm a bit more than skeptical about the usefulness of such tools. Does anyone know of or have experience with, let's say, Visual Basic to Java (or vice versa) converters? To pick just one example,
http://www.tvobjects.com/products/products.html, claims to be the "world leader" or so in that aspect, However if read this:
http://dev.mysql.com/tech-resources/articles/active-grid.html
There the author states:
"The consensus of MySQL users is that automated conversion tools for MS Access do not work. For example, tools that translate existing Access applications to Java often result in 80% complete solutions where finishing the last 20% of the work takes longer than starting from scratch."
Well, we know we need 80% of the time to implement the first 80% of the functionality and another 80% of the time for the other 20%....
So has anyone tried such tools and found them to be worthwhile?
Tried? No, actually built (more than one) language convertor.
Here's one I (and my coworkers) built for the B2 Spirit Stealth Bomber to convert the mission software, coded in a legacy language, JOVIAL, into maintainable C code, with 100% automated conversion. One of the requirements was that we were NOT allowed to see the actual source code. No joke.
You are right: if you get only a medium high conversion rate (e.g., 70-80%), the effort to finish the conversion is still very significant if indeed you can do it at all. We target 95%+ and do better when told to try harder as was the case for the B2. The only reason people accept medium high rate converters is because they can't find (or won't fund!) a better one, insist on starting now, and accept the fact that converting it this way may be painful (usually they don't know how much) but is in fact less painful than rebuilding it from scratch. (I happen to agree with this assessment: in general, projects that try to recode a large system from scratch usually fail and conversions using medium high conversion rate tools don't have as high a failure rate.)
There are lots of bad conversion tools out there, something slapped together with a mountain of PERL code doing regexes on text strings, or some YACC-based parser with code generation essentially one-to-one for each statement in the compilation unit. The former are built by people who had a conversion dropped on them out of the sky. The latter are often built by well-intentioned engineers that don't have decent compiler background.
For a singularly bad example, see my response to this SO question about COBOL migration: Experience migrating legacy Cobol/PL1 to Java, which is exactly a direct statement translator... producing the stuff that gave rise to the term "JOBOL".
To get such high-accuracy conversion rates, you need high-quality parsers, and means to build high-quality translation rules that preserve semantics, and optimize for target-language properties and special cases. In essence, you need what amounts to configurable compiler technology. The reason we succeed, IMHO, is our DMS Software Reengineering Toolkit, which was designed to do this job. (I'm the architect; check out my SO icon/bio).
Lots of careful testing helps, too.
DMS "knows" what the compiler knows about code, by virtue of having a compiler-like front end for the language of interest, and having the ability to build ASTs, symbol tables, control and data flows, call graphs. It uses much of the compiler technology that the compiler community spent the last half-century inventing, because that stuff has been proven to be useful in translation!
DMS knows more than most compilers know, because it can read/analyze/transform the entire application at once; most compilers stick to single compilation units. Thus one can code translation rules that depend on the entire application as opposed to just the current statement. We often add problem- or application-specific knowledge to improve the translation. This often shows up when converting special features of a language, or calls on libraries, where one must recognize the library calls as special idioms, and translate them to calls on compositions of target libraries and language constructs.
This capability is used to build translators (e.g., the JOVIAL translator), or domain-specific code generators.
More often we build complex automated software engineering tools that solve problems specific to customers, such as program analysis tools (dead code, duplicate code, style-broken code, metrics, architecture extraction, ...), and mass change tools (platform [not language] migrations, data layer insertion, API replacement, ...)
It seems to me, as is almost always the case with MS-ACCESS questions having tags that attract the wider StackOverflow population, that the people answering are missing the key question here, which I read as:
Are there any tools that can successfully convert an Access application to any other platform?
And the answer is
ABSOLUTELY NOT
The reason for that is simply that tools in the same family that use similar models for the UI objects (e.g., VB6) lack so many things that Access provides by default (how do you convert an Access continuous subform to VB6 and not lose functionality?). And other platforms don't even share the same core model as VB6 and Access, so those have even more hurdles to clear.
The cited MySQL article is quite interesting, but it really confuses the problems that come with incompetently-developed apps vs. the problems that come with the development tools being used. A bad data schema is not inherent to Access -- it's inherent to [most] novice database users. But the article seems to attribute this problem to Access.
And entirely overlooks the possibility of fixing the schema, upsizing it to MySQL and keeping the front end in Access, which is by far the easiest approach to the problem.
This is exactly what I expect from people who just don't get Access -- they don't even consider that Access as front end to a securable, large-capacity server database engine can be a superior solution to the problem.
That article doesn't even really consider conversion of an Access app, and there's good reason for that. All the tools that I've seen that claim to convert Access applications (to whatever platform) either convert nothing but data (in which case they don't convert the app at all -- morons!), or convert the front end structure slavishly, with a 1:1 correspondence between UI objects in the Access application and in the target app.
This doesn't work.
Access's application design is specific to itself, and other platforms don't support the same set of features. Thus, there has to be translation of Access features into a working substitute for the original feature in the converted application. This is not something that can be done in an automated fashion, in my opinion.
Secondly, when contemplating converting an Access app for deployment in the web browser, the whole application model is different, i.e., from stateful to stateless, and so it's not just a matter of a few Access features that are unsupported, but of a completely different fundamental model of how the UI objects interact with the data. Perhaps a 100% unbound Access app could be relatively easily be converted to a browser-based implementation, but how many of those are there? It would mean an Access app that uses no subforms whatsoever (since they can't be unbound), and an app that uses only a handful of events from the rich event model (most of which work only with bound forms/controls). In short, a 100% unbound Access app would be one that fights against the whole Access development paradigm. Anyone who thinks they want to build an unbound app in Access really shouldn't be using Access in the first place, as the whole point of Access is the bound forms/controls! If you eliminate that, you've thrown out the majority of Access's RAD advantage over other development platforms, and gained almost nothing in return (other than enormous code complexity).
To build an app for deployment in the web browser that accomplishes the same tasks as an Access applications requires from-the-ground-up redesign of the application UI and workflow. There is no conversion or translation that will work because the successful Access application model is antithetical to the successful web application model.
Of course, all of this changes with Access 2010 and Sharepoint Server 2010 with Access Services. In that case, you can build your app in Access (using web objects) and deploy on Sharepoint for users to run it in the browser. The results are functionally 100% equivalent (and 90% visually), and run on all browsers (no IE-specific dependencies here).
So, starting this June, the cheapest way to convert an Access app for deployment in the browser may very well be to upgrade to A2010, convert the design to use all web objects, and then deploy with Sharepoint. That's not a trivial project, as Access web objects have a limited set of features in comparison to client objects (and no VBA, for instance, so you have to learn the new macros, which are much more powerful and safe than the old ones, so that's not the terrible hardship it may seem for those familiar with Access's legacy macros), but it would likely be much less work than a full-scale redesign for deployment on the web.
The other thing is that it won't require any retraining for end users (insofar as the web-object version is the same as the original client version), as it will be the same in the Access client as in the web browser.
So, in short, I'd say conversion is a chimera, and almost always not worth the effort. I'm agreeing with the cited sentiment, in fact (even if I have a lot of problems with the other comments from that source). But I'd also caution that the desire for conversion is often misguided and misses out on cheaper, easier and better solutions that don't require wholesale replacement of the Access app from top to bottom. Very often the dissatisfaction with Jet/ACE as data store confuses people into thinking they have to replace the Access application as well. And it's true that many user-developed Access apps are filled with terrible, unmaintainable compromises and are held together with chewing gum and baling wire. But a badly-designed Access application can be improved in conjunction with the back-end upsizing and revision of the data schema -- it doesn't have to be discarded.
That doesn't mean it's easy -- it's very often not. As I tell clients all the time, it's usually easier to build a new house than to remodel an old one. But one of the reasons we remodel old houses is because they have irreplaceable characteristics that we don't want to lose. It's very often the case that an Access app implicitly includes a lot of business rules and modelling of workflows that should not be lost in a new app (the old Netscape conundrum, pace Joel Spolsky). These things may not be obvious to the outside developer trying to port to a different platform, but for the end user, if the app produces results that are off by a penny in comparison to the old app, they'll be unhappy (and probably should be, since it may mean that other aspects of the app are not producing reliable results, either).
Anyway, I've rambled on for too long, but my opinion is that conversion never works except for the most trivial apps (or for ones that were designed to be converted, e.g., a 100% unbound Access app). I'm all for revision in place of replacement.
But, of course, that's how I make my living, i.e., fixing Access apps.
A couple of issues that affect the success or failure of cross-language conversion are the relative semantic richness of the languages, and their semantic models.
Translation from C++ to C should be relatively easy, but translation of C to idiomatic C++ would be next to impossible, because it is next to impossible to automatically turn a procedural program into an OO program.
Translation of Java to C would be relatively simple, though handling storage management would be messy. Translation of C into Java would be next to impossible if the C program did funky pointer arithmetic or casting between integers and different kinds of pointer.
Translation of a functional language to an imperative language would be much easier, though the result would probably be inefficient and non-idiomatic. Translation of an imperative language to a functional language is probably beyond the state of the art ... unless you implement an interpreter for the imperative language in the functional language.
What this means is that some translators are necessarily going to be more successful than others in terms of:
completeness and accuracy of translation, and
readability and maintainability of the resulting code.
Things You Should Never Do, Part I by Joel Spolsky
"....They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch."
I have a list of MS Access converters on my website. I've never heard anything good about any of them in any postings in the Access related newsgroups I read on a daily basis. And I read a lot of postings on a daily basis.
Also note that there is a significant amount of functionality in Access, such as bound continuous forms or subforms, that is more work to reproduce in other systems. Not necessarily a lot of work but more work. And more troubles when it comes time to distribute and install the app.
I've used an automated converter from C# to Visual Basic.NET. It worked pretty well except for adding some unnecessary If True statements.
I've also attempted to use Shed Skin to convert Python-to-C++, but it didn't work because of its lack of support for new-style division.
I've used tools for converting a VB6 Project into VB.Net - which you would hope would be perhaps one of the simpler examples of this sort of thing. My experience was that everything had to be checked, in fine detail, and half the stuff was missing / wrong.
Certainly I would recommend a migration by hand, or, depending on the language you're targeting, I would consider a complete rewrite if this gives you a chance to make major improvements to your codebase.
Martin
I have only tried free and basic paid for converters. But the main problem is that it is very very hard to have confidence that the conversion is entirely successful.
Usually they are best used to hand convert code section at a time, where you review each piece of code. Often in my experience a rewrite instead of a conversion turns out to be a better option.
I am learning Java.
I have learned and used Ruby. The Ruby books always tell the advantages of Ruby over Java. But there must be some advantages, that's why lots of people (especially companies) use Java and not Ruby.
Please tell the absolute (not philosophical!) advantages of Java over Ruby.
Many more developers experienced with Java than with Ruby.
Many existing libraries in Java (that helps JRuby too).
Static type checking (can be seen as an advantage and as a disadvantage).
Existing codebases that have to be maintained.
Good tool support.
More and deeper documentation and tutorials.
More experience with good practices and pitfalls.
More commercial support. That's interesting for companies.
Many of these advantages are the result of the Java ecosystem being more mature than the one around Ruby. Many of these points are subjective, like static vs. dynamic typing.
I don't know Ruby very well, but I can guess the following points:
Java has more documentation (books, blogs, tutorial, etc.); overall documentation quality is very good
Java has more tools (IDEs, build tools, compilers, etc.)
Java has better refactoring capabilities (due to the static type system, I guess)
Java has more widespread adoption than Ruby
Java has a well-specified memory model
As far as I know, Java has better support for threading and unicode (JRuby may help here)
Java's overall performance is quite good as of late (due to HotSpot, the new G1 garbage collector, etc.)
Nowadays, Java has very attractive and cheap server hosting: App Engine
Please tell the absolute … advantages of Java over Ruby
Programmers should rarely deal in absolutes.
I'll dare it, and say that as a rule, static typing (Java) is an advantage over dynamic typing (Ruby) because it helps recognize errors much more quickly, and without the need for potentially difficult unit tests1).
Harnessed intelligently, a strong type system with static type checking can be a real time-saver.
1) I do not oppose unit testing! But good unit testing is hard and the compiler can be a great help at reducing the sheer number of necessary test cases.
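A trivial sketch of that point (the class and method names are invented for the example): the mismatched call below is rejected by javac, so the error never even reaches a test run.

```java
public class TypeCheckDemo {
    static int addDays(int date, int days) {
        return date + days;
    }

    public static void main(String[] args) {
        System.out.println(addDays(20240101, 7));
        // addDays("2024-01-01", 7); // would not compile: String is not an int
    }
}
```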
Reason #1. There's a lot of legacy Java code out there. Ruby is new, there's not so many programmers who know it and even fewer who are good at it. Similarly, there is a lot more library code available for Java than Ruby.
So there may be Technical reasons Ruby is better than Java, but if you're asking for Business reasons, Java still beats it.
The Java Virtual Machine, which has had over a decade of improvements including:
just in time compilation in the HotSpot compiler (JIT - compiling byte code to native code)
a plethora of garbage collection algorithms and tuning parameters
runtime console support for profiling, management etc. of your application (JConsole, JVisualVM etc)
I like this comparison (found on the link given by Markus, thanks!). Thanks to all; I am also expecting some more discrete advantages.
And it's great!
The language.
My opinion is that the particular properties of the Java language itself lead to the powerful capabilities of the IDEs and tools. These capabilities are especially valuable when you have to deal with a very large code base.
If I try to enumerate these properties it would be:
of course strong static typing
the grammar of the language is an LALR(1) grammar - so it is easy to build a parser
fully qualified names (packages)
What we've got in the IDE so far, for example Eclipse:
great capabilities for exploring very large code bases. You can unambiguously find all references, the call hierarchy, and usages of classes or public and protected members - it is very valuable when you are studying the code of a project or going to change something.
a very helpful code editor. I noticed that when I write code in Eclipse's Java editor I'm actually typing by hand only the names of classes or methods; then I press Ctrl+1 and the editor generates a lot of things for me. And it is especially good that Eclipse encourages you to write the usage of a piece of code first, even before the code is actually written. So you write the method call before you create the method, and then the editor generates the method stub for you. Or you add extra arguments to a method or constructor in the place where you're invoking it - and the editor changes the signature for you. And even more complicated things - you pass some object to a method that accepts some interface, and if the object's class does not implement this interface, the editor can do it for you... and so on. There are a lot of interesting things.
There are a LOT of tools for Java. As an example of one great tool I want to mention Maven. Actually, my opinion is that code reuse is really possible only when we have a tool like Maven. The infrastructure built around it and the integration with the IDE make very interesting things feasible. Example: I have the m2eclipse plugin installed. I have a new empty project in Eclipse. I know that there is a class that I need to use (reuse actually) somewhere in the repositories, let's say StringUtils for example. I write 'StringUtils' in my code, Eclipse's editor tells me that there is no such class in the project and underlines it in red. I press Ctrl+1 and see that there is an option to search for this class in the public repository (actually in the index, not the repository itself). Some libs are found, I choose one of them at a particular version, and the tool downloads the jar, configures my project's classpath, and I already have all that I need.
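Assuming the StringUtils in question is the Apache Commons Lang one (coordinates roughly org.apache.commons:commons-lang3; check the version you actually pick from the index), the reused class is then used like any local one:

```java
import org.apache.commons.lang3.StringUtils;

public class ReuseDemo {
    public static void main(String[] args) {
        // Once the tool has put the jar on the classpath, this is ordinary code.
        System.out.println(StringUtils.isBlank("   "));      // true
        System.out.println(StringUtils.capitalize("maven")); // Maven
    }
}
```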
So it's all about programmer's productivity.
The JVM.
My opinion is that the JVM (Sun's HotSpot particularly) is one of the most interesting pieces of software nowadays. Of course the key point here is performance. But the current implementation of the HotSpot JVM explores very cutting-edge ways to achieve really great performance. It exploits all possible advantages of just-in-time compiling over static compiling, collects statistics on the usage of code before JIT-compiling it, optimises virtual calls when possible, can inline a lot more things than a static compiler can, and so on. And the great thing here is that all this stuff is in the JVM, not in the language itself (in contrast with C#, for example). Actually, if you're just learning the Java language, I strongly encourage you to learn the details of modern JVM implementations, so you know what really hurts performance and what doesn't, do not put unnecessary optimizations in your Java code, and are not afraid to use all the possibilities of the language.
So...
it's all about IDEs and tools actually, but for some reason we have them for Java and not for other languages or platforms (.NET of course is a great competitor in the Windows world).
This has probably been beaten to death, but my personal opinion is that Ruby excels at quickly creating web apps (and frameworks) that are easy to learn, beautiful to read, and more than fast enough for web apps.
Where Java is better suited for raw muscle and speed.
For example, I wrote a Ruby program to convert a 192 MB text file to a MongoDB collection. Ruby took hours to run. And the Ruby code was as simple/optimized as you could get (1.9.2).
I re-wrote it in Java and it runs in 4 minutes. Yes. Hours to 4 minutes. So take that for what it's worth.
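For what it's worth, the Java version of such a job is typically just a streaming loop like the sketch below; the insert() method is a placeholder for whatever MongoDB driver call the original program used, not the author's actual code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileToCollection {
    // Placeholder: parse the line and write a document to the collection.
    static void insert(String line) {
    }

    public static void main(String[] args) throws IOException {
        long count = 0;
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("input.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) { // stream, don't slurp 192 MB
                insert(line);
                count++;
            }
        }
        System.out.println("processed " + count + " lines");
    }
}
```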
Network effect. Java has the advantage of more people using Java. Who themselves use Java because more people use Java.
If you have to build a big software, you'll need to collaborate. By having a lot of programmers out there, you are sure that there will be someone that can be asked to maintain your software even if the original developers have left the company.
Static type checking and a good Java IDE mean there is no magic going on, and that is good for a lot of maintainers, unlike Ruby.
It is not sufficient to indicate that java is statically typed and ruby is dynamically typed.
Correct me if I'm wrong, but does this cover the fact that in Ruby you can add to and even change the program (class definitions, method definitions etc.) at runtime? AFAIK you can have dynamically typed languages that are not "dynamic" (i.e. changeable at runtime).
Because in Ruby you can change the program at runtime you don't know until you've actually run the program how it is going to behave, and even then you don't know if it will behave the same next time because your code may have been changed by some other code that called the code you're writing and testing.
This predictability is, depending on the context, the advantage of Java - one of the contexts where this is an advantage is when you have a lot of developers of varying skill levels working on a fairly large enterprise application.
IMHO, what one person considers an advantage might be a disadvantage for someone else. Some people prefer static typing while others like dynamic. It is quite subjective and depends largely upon the job and the person doing it.
I would say just learn Java and decide for yourself what its strong points are. Knowing both languages yourself beats any comparisons/advice some other person can give. And it's usually a good thing to know another language, so you're not wasting your time.
Negatives for Java:
There is a lot of duplication in libraries and frameworks available for Java.
Java developers/communities tend to create over complicated solutions to simple problems.
There is a lot more legacy in Java to maintain.
Too much pandering to business users has introduced cruft that makes middle managers feel better. In other words, some philosophies in Java are more concerned with BS instead of getting the job done. This is why companies like to use Java.
You'll generally need to write more code in Java than Ruby.
It takes a lot more configuring/installing/setup to get a fully working Java development environment over Ruby.
Positives for Java:
Speed.
Documentation.
Lower level language than Ruby, which could be a good thing or a bad thing, depending on your needs.
None of my points are very scientific, but I think the differences in philosophy and personalities behind Java and Ruby is what makes them very different to each other.
Better performance
There are more choices:
Developers - lots to hire
Libraries - lots of wheels already invented.
IDEs - lots of development environments to choose from. Not just vi/emacs + a shell.
Runtimes - if you for some reason do not like the JVM you use on the system, you can either download or buy another implementation and it will most likely Just Work. How many Ruby implementations are there?
Please note that this has nothing to do with the LANGUAGES as such :)
Reading up on this: Is Ruby as cross-platform as Java? made me realize at least one factual advantage of Java over Ruby:
The J2ME-compatible subset of Java is more portable than Ruby,
at least as long as JRuby won't run on J2ME, which may be forever.
Like the subject of this post suggests, I am looking at developing a suite like Nero which helps burn Blu-ray discs. I am kind of clueless as to where to start. Is there anything in the Java API that lets you do this? If I were to start from scratch, would I need to start with the Blu-ray disc spec? Are there any open source tools which are already doing this? I tried searching at sourceforge.net and found nothing useful. Any help is much appreciated.
To start with the obvious: know your requirements and tools. I'll try guessing them here.
Requirements:
Should burn BluRay discs
Graphical user interface
Preferred tool:
Java
Now, Java, being perhaps the prime example of a VM language from the '90s, achieves its relatively good platform-agnosticism by virtue of its VM. It's a language designed to run on virtual hardware to ease portability to real hardware.
Now, what comes with this fact is that you abstract away many things you would have to care about, like memory-management details and architecture or platform-specifics. Among those things you can't reliably get access to is hardware. After all, you abstracted most of that away.
Now, to burn a BluRay disc you have to access hardware, in particular the BluRay writer. Not that it's impossible but Java is, in my humble opinion, not the right tool for this. You can go out of your way by implementing a library in C or C++ and using JNI/JNA to access that but looking at that, what do you really gain?
Java is usually a choice when you need a fairly modern high-level language with a large standard library and you also need your programs to run on more than one platform. Those are the primary use cases. It's not impossible with other technologies, but perhaps harder to achieve, depending on what exactly you need.
If you implement a native library to talk to the BluRay writer and talk to that from Java, then you necessarily need to re-implement it for other platforms as well (assuming that's what you want—if not, then again: Why Java?).
TL/DR version: My point is that it's not too surprising that you can't find much on exactly that topic. For one, Java wasn't really designed to do that sort of things. Most of the Java/native interop lies in the JVM and that's already an awful lot of code. Don't expect Java to natively support very rare usage scenarios such as CD/DVD/BluRay burning. Secondly, BluRay is a relatively new technology with writers not yet common hardware in computers such as CD/DVD writers, so the lack of libraries and tools may also be a mirror of the current demands of the market.
Low-level hardware access is simply not possible in pure Java unless it's in the standard API, which Bluray isn't.
Therefore, you will have to use non-Java code to access the hardware; at that point you lose the platform-independence of Java, and necessarily have a multi-language system, which is always more painful to program than using just a single language.
However, if you can find (or, I guess, develop) a multi-platform Bluray writing API or command line tool in (most likely) C, then it might still make sense to write the rest of the app in Java as a GUI wrapper with added functionality.
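A minimal sketch of how the Java side of such a split often looks, assuming a hypothetical native library (the names burnlib, BurnerBridge and burnIso are invented; the native part would still have to be written in C/C++ per platform):

```java
public class BurnerBridge {
    static {
        System.loadLibrary("burnlib"); // loads libburnlib.so / burnlib.dll at runtime
    }

    // Implemented in the native library via JNI.
    public native boolean burnIso(String devicePath, String isoPath);

    public static void main(String[] args) {
        BurnerBridge bridge = new BurnerBridge();
        boolean ok = bridge.burnIso("/dev/sr0", "image.iso");
        System.out.println(ok ? "burn finished" : "burn failed");
    }
}
```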