Our company uses an IBM iSeries for a majority of our data processing. All of our internal apps are written in RPG. According to IBM's roadmap, IBM is pushing companies to move to Java/J2EE. We're looking to modernize our internal apps to a GUI interface. We provide an external web presence using ASP.NET web apps, although perhaps greenfield projects could be Java. One option is to use a screen-scraper app while staying on RPG, but I think it may be better to slowly go the way of IBM's roadmap and move to Java. Our goal is to migrate to a GUI interface and to be in line with IBM's roadmap.
Have you been involved with an RPG to Java migration, even if only greenfield projects were Java and the brownfield projects remained RPG?
My management is afraid that:
1) updating the JRE on workstations, particularly thin clients, could cause an administrative nightmare (our company uses 80% thin clients and 20% PCs),
2) Java demands too much overhead from the workstation to run effectively, and
3) incompatibility between JRE versions on clients as we update, potentially breaking other apps requiring the JRE.
Can you shed some light on this? Are there any huge benefits? Any huge gotchas?
CLARIFICATION: I am only interested in a migration to Java. What is the difficulty level and do I lose anything when going from RPG to Java? Are the screens very responsive when migrated to Java?
My company is also attempting to migrate to Java from RPG.
We're not attempting to use a JRE on a thin-client, we're moving to web applications delivered through a browser. This may entail (eventually) replacing our old POS-scanners with some of the newer PC-based ones.
I have been informed (by company architects) that the JVM on the iSeries OS does have some performance issues. I do not personally know what these limitations are. In our case the migration has involved allocating an AIX resource, which is supposed to be much better - talk to your IBM rep about whether you just need to purchase the OS license (I just program on the thing, I don't get involved in hardware).
See response to question 1. In a larger context, where you're trying to update the browser (or any other resource), this is usually handled by having enterprise licenses - most will have options to allow forced, remote updates.
Some other notes:
You should be able to move to just using .NET, although you may need different hardware/partitions to run the environment. You can talk to DB2 that way, at least. The only benefit Java has there is that it will run on the same OS/hardware as the database.
I've seen a screen-scraper application here (it was in VB.NET, but I'm fairly sure the example applies). Screen-scraping was accomplished by getting/putting characters to specific positions on the screens (the equivalent of substring()). That could be just the API we were using - I think I've heard of solutions that were able to read the field names. However, it also relied on the RPG program flow for its logic, and was otherwise not maintainable.
Most of the RPG programs I've seen and written tend to be a violation of MVC, meaning you can't do anything less than integration testing - the history and architecture of the language itself (and some developers) prefer that everything (file access to screen display) be in one file. This will also make attempting to wrap RPG for calling remotely effectively impossible. If you've properly separated everything into Service Programs, you should be able to wrap them up (as the equivalent of a native method call, almost) neatly - unfortunately I haven't seen anything here that didn't tend to rely on one or more tricks that wouldn't hold up for typical Web use (for example, using a file in QTEMP for controlling program execution - the session on the iSeries effectively disappears every time a new page is requested...).
Java as a language tends to promote better separation of code (note that it can be misused just as badly), as it doesn't have quite the history of RPG. In general, it may be helpful to think of Java as a language where everything is a service program, where all parameters are passed with VALUE set, OPTIONS(*nopass : *omit) is disallowed, CONST is generally recommended, and most parameters are of type DS (data structure - a distinct type in RPG) and passed around by pointer. Module-level parameters are frowned upon, in favor of encapsulating everything either in passed data structures or in the service program procedures themselves. STATIC has a somewhat different use in Java, making a variable global, and is not available inside of procedures.
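To make that analogy concrete, here is a tiny, hypothetical sketch (all names are invented for illustration) of a Java class playing the role of a service program with a DS-style parameter:

```java
// Hypothetical sketch: a Java class as the rough equivalent of a service program.
public class CustomerService {                 // ~ the service program
    public static class Customer {             // ~ a DS definition
        public String name;
        public int creditLimit;
    }

    // ~ a procedure: the Customer reference itself is copied (VALUE),
    // but caller and callee still see the same underlying object.
    public static boolean overLimit(Customer c, int orderTotal) {
        return orderTotal > c.creditLimit;
    }
}
```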
RPG is quite a bit simpler than Java, generally, and OO programming is quite a different paradigm. Here are some things that are likely to trip up developers migrating to Java:
Arrays in RPG start at 1. Arrays in Java start at 0.
Java doesn't have 'output' parameters, and all primitive types are passed by value (copied). This means that editing an integer won't be visible in calling methods (see the sketch after this list).
Java doesn't have packed/signed encoding, and so translating to/from numbers/strings is more involved. The Date type in Java also has some serious problems (it includes time, sort of), and is far more difficult to meaningfully change to/from a character representation.
It's harder to read/write files in Java, even when using SQL (and forget about using native I/O directly with Java) - this can be mitigated somewhat with a good framework, however.
There are no ENDxx operators in Java; everything uses braces ({}) to specify the start/end of blocks.
Everything in Java is in free format, and there are no columnar specifications of any sort (although procedure signatures are still required). There is no hard limit on line length, although ~80 characters is still recommended. The tools (the free ones, even) are better, period, and generally far more helpful (although they may take some getting used to for those exposed to SEU). There are also huge, free libraries available for download.
The = sign is not context-sensitive in Java the way it is in RPG, it is always used for assignments. Use the double-equals, == operator for comparisons of values in Java.
Objects (datastructures) cannot be meaningfully compared with == - you will often need to implement a method called equals() instead.
Strings are not mutable, they cannot be changed. All operations performed on strings (either on the class/datastructure itself, or from external libraries) return brand new references. And yes, strings are considered datastructures, not value types, so you can't compare them with == either.
There are no built-in equivalents to the /copy pre-compiler directives. Attempting to implement them is using Java incorrectly. Because these are usually used to deal with 'boilerplate' code (variable definitions or common code), it's better to deal with this in the architecture. Variable (all D-spec, actually) definitions will be handled with import or import static statements, while common-code variants are usually handled by a framework, or by defining a new class.
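To make a couple of these concrete, here is a small, hypothetical sketch of the pass-by-value and == versus equals() gotchas mentioned above:

```java
// Illustrative only: shows why there are no 'output' parameters for primitives,
// and why == compares references while equals() compares contents.
public class Gotchas {
    static void tryToChange(int n, StringBuilder sb) {
        n = 99;                 // changes only the local copy
        sb.append(" world");    // but the object the reference points to is shared
    }

    public static void main(String[] args) {
        int x = 1;
        StringBuilder sb = new StringBuilder("hello");
        tryToChange(x, sb);
        System.out.println(x);              // 1 - the caller never sees 99
        System.out.println(sb);             // hello world

        String a = "ABC";
        String b = new String("ABC");
        System.out.println(a == b);         // false: different objects
        System.out.println(a.equals(b));    // true:  same contents
    }
}
```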
I'm sure there are a number of other things out there, let me know if you have any other questions.
Distributing and managing a fat client would be an absolute nightmare.
The ideal solution is a Java based web application hosted on the iSeries. The workstations access your applications through a web browser just like ASP.NET.
I've been using the Grails Framework to modernize and create new applications and it is working wonderfully.
When IBM says you should move to Java/J2EE, that probably means moving your applications to web applications, like your ASP.NET web apps. You should probably use a feature-rich interface framework like JSF or GWT.
Web applications don't have to worry about JRE problems as you just need a standard browser.
However I don't know RPG and I don't know the suggested migration strategy.
I am a developer involved in AS/400 modernization. So far, from my experience, I can give you my insights.
In addition to Java EE based websites, you can go for JAX-WS based web services, which provide services for the different flat and grid screens.
Clients can consume them in whichever technology they desire. There is some lag, but overall usability is as good as in normal web-based applications.
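As a rough illustration of that approach, a minimal JAX-WS service might look like the sketch below (the class, operation and URL are made up; a real implementation would call into your RPG programs or DB2 rather than return a literal):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical service exposing one "flat screen" inquiry as a SOAP operation.
@WebService
public class OrderInquiryService {
    @WebMethod
    public String orderStatus(String orderNumber) {
        // placeholder: in reality this would call the RPG/DB2 back end
        return "Order " + orderNumber + ": SHIPPED";
    }

    public static void main(String[] args) {
        // publishes the endpoint; the generated WSDL is available at ...?wsdl
        Endpoint.publish("http://localhost:8080/orders", new OrderInquiryService());
    }
}
```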
I'm a pretty skilled Java programmer who has dabbled in web development, but I find that I'm much better at doing desktop-based stuff than at anything related to web development. I've been trying to find an easy way of porting some of my desktop apps to run in the browser but can't seem to find anything. I guess what I'm looking for is something similar to an applet, but they are largely unsupported and get more buggy by the day. Is there anything similar that would allow me to keep my desktop-style mindset and still run in the browser, or should I just break down and rewrite the whole thing in Rails or another common web platform?
Java WebStart has been mentioned by others - It's a technology that aids redistribution of Java applications that then have the full rights of desktop applications, but they also have auto-update support built in. It's basically a launcher that fetches a JAR from the internet and runs it as a desktop application. These don't run within the browser.
Applets are an old technology that can be embedded directly into the web page. They are not buggy, but they have several security restrictions. Also, support is steadily declining because of the number of critical bugs found in the technology. Desktop users that want applet support typically don't have trouble ensuring it, however. Currently, both Chrome and the Java platform itself issue a warning before an applet is allowed to run - and that assumes the Java Runtime Environment is already installed.
Google Web Toolkit is a framework that allows creating single-page applications in Java, which are then compiled to Javascript. GWT handles multiple things behind the scenes, including server-client communication, localisation and internationalisation, and its own layout engine.
When translating an existing application to GWT, you need to:
separate the code into a part that runs on the client and a part that runs on the server. The server does not have direct access to the user, and the client does not have direct access to the database. If your application does not use centralised storage, it probably can run entirely within the web browser. Since client-server communication happens over the internet, you should reduce it to the minimum.
translate the front-end to GWT widgets (see the sketch after this list). Forget Swing or AWT - they are impossible to compile efficiently to Javascript.
remove dependency on other Java classes that GWT does not know how to translate into Javascript in the client part of the application. A large part of java.util is supported, but none of javax (as of Jan 2014). The GWT site hosts the list of supported Java classes. Also, Javascript's regexes are less powerful than those of Java. Lookbehinds, in particular, are not supported. The server side is a full-blown Java environment, but remember - you want to reduce the server-client communication to the minimum.
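For a flavour of the widget side, here is a minimal, hypothetical GWT entry point (it assumes the usual GWT module descriptor and host page; the class name and labels are invented):

```java
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// The GWT compiler turns this Java into Javascript that runs in the browser.
public class HelloModule implements EntryPoint {
    @Override
    public void onModuleLoad() {
        Button button = new Button("Say hello");
        button.addClickHandler(new ClickHandler() {
            @Override
            public void onClick(ClickEvent event) {
                Window.alert("Hello from compiled Java");
            }
        });
        RootPanel.get().add(button);   // attach the widget to the host page
    }
}
```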
But, the most common strategy is to code the client side directly in Javascript.
Javascript is a language very similar in syntax to C/C++ and Java. It uses curly braces to denote blocks of code, and it uses semicolons to separate statements (though Javascript features automatic semicolon insertion, it sometimes understands two lines as a single statement if the first line is not terminated by a semicolon). Its data types include numbers (double-precision floating point), strings, booleans, two types of null, plain objects (which are basically hash-maps [string -> x]), arrays (untyped and dynamically extensible), regexes and functions (named or anonymous), all of which have their own literal syntax.
When coding in Javascript, your mindset should be:
Javascript is single-threaded and event-driven. You don't have to worry about concurrency issues, but you cannot say "now wait for x" either. Since your Java code should be event-driven as well, this should not be an issue.
Lots of things in Javascript are asynchronous. Want to know something from the user? You should paint a dialog, and attach event handlers to its components. Want to get the user's GPS position? Ask for permission, passing it an event handler for when the user decides if the permission should be granted, from which you ask for the position, which also takes an event handler as an argument. Talking to the server? Asynchronous. Do you want to display something before doing a long calculation? You have to actually wait a little before you start computing. Ecmascript 6 improves the syntax a lot, but it's not yet supported in modern browsers.
Browsers only let you do so much. Disk access? Only to a file or folder the user explicitly points to. Clipboard access? The only reliable way is copy/paste into a textbox. Talking to a foreign webserver? Only if that webserver explicitly lets you (and a lot of them don't even know how to). Of course, "foreign" includes a different sub-domain, different port number or a different protocol (http:// or https://). Desktop notifications? Geolocation? Ask for permissions first. Java applets have comparable security restrictions, and for the very same reason.
In Java, everything is a class. In Javascript, you can enjoy bare functions without any class. A typical event handler is just an anonymous function that you pass as an argument to a library function. Also, you can have anonymous objects using a very concise syntax. This makes Javascript code much denser than that of Java, with very few classes, if any. Object-oriented programming is still possible in Javascript, but much less pronounced.
When laying out your display, you need to think in terms of HTML and CSS. The best approach is to modify only the document structure (adding/removing elements or HTML classes) using the Document Object Model (DOM), and leave all CSS in an external file. In any case, you need to know CSS well enough to be able to lay out your page. Modern browsers support canvas, but it has no built-in layout engine - its closest Java relative is JCanvas - just a blank area where you can draw graphics primitives - or a WebGL canvas - where you can place triangles in a 3D space.
When designing your own API, you need to know which operations might need to be asynchronous. If they are, either take a function as an argument (a callback), or return an object that does (a promise). A rough Java analogue of both styles appears after this list.
Except for the this variable, Javascript is function-scoped and lexically scoped and has closures. If a variable exists in a surrounding scope, it can be read from and written to - even from within a function that is only defined in that scope and called much later. In Java, you can't close over non-final function-local variables.
However, you need to be careful about timing - don't think you can just assign to a variable within a callback and use it outside. When you try to use it, it won't have been assigned to yet. Many have tried to cheat the time this way, and failed.
When the user leaves your page, it's a game over. If you want to remember anything past that point, you need to store it somewhere, be it cookies (very little space, outdated API), localStorage (decent amount of space, not supported by very old browsers) or the server (lots of space, but talking to the server when your page is being shut down is tricky).
The DOM API is often criticised, but there are several frameworks and libraries that ease its usage, of which the most popular is jQuery. It also handles browser inconsistencies, improves the AJAX API, supports event delegation (you can't attach an event handler to an element that doesn't yet exist) and includes an animation engine (though modern CSS is almost as powerful and often easier to use).
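To connect the callback/promise point above to a Java background, here is a rough Java analogue of the two asynchronous API styles (the method names are invented for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Illustrative only: the same two shapes an asynchronous Javascript API takes.
public class AsyncStyles {
    // Callback style: the caller passes a function to run when the result is ready.
    static void loadUserName(int id, Consumer<String> callback) {
        new Thread(() -> callback.accept("user-" + id)).start();
    }

    // Promise/future style: return an object the caller attaches handlers to.
    static CompletableFuture<String> loadUserNameAsync(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    public static void main(String[] args) {
        loadUserName(1, name -> System.out.println("callback: " + name));
        loadUserNameAsync(2).thenAccept(name -> System.out.println("promise: " + name)).join();
    }
}
```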
I think Java Web Start could help you:
http://www.java.com/it/download/faq/java_webstart.xml
I suggest you take a look at Java Web Start. It offers the possibility to start your application software for the Java Platform directly from the Internet using a web browser.
For more details see: Java Web Start
Nowadays Web Start is not a good option, since the user needs to have a JVM installed, and with all the vulnerability buzz around Java it is more difficult to convince users to download it. The latest versions of JDK 1.8+ include tools to package your application along with the JVM runtime in just one installer: https://docs.oracle.com/javase/8/docs/technotes/guides/deploy/
To use your application in a browser like an applet, you can use Bck2Brwsr or TeaVM; both can run Java applications in a browser without the Java Plugin. Bck2Brwsr also uses the Java Plugin if it is available.
You can also use GWT to compile your Java application to JavaScript. Note: Swing is not supported.
Regards
This is a general "noob" question about software design, so I apologise if it seems vague,
but I would really appreciate the advice. Note the system described below is purely an example, not a specific product I have in mind.
I often have a need to combine the functionality of several libraries or utilities, written in different languages. For example, if I want to code a high-performance audio processing application for the desktop, I will write it in C / C++. Then, I want to add a nice GUI. But I don't want to learn Qt. I like the look and feel of Adobe Air, and would like to use that. Later, I have a need to access a USB device. But the USB library I have only has an API in Java. How can I combine all these elements together, to take advantage of their relative strengths?
Clearly, I cannot compile these various elements into one single executable. So I need to create and run them separately, and give them a means to communicate. The most common way to do this seems to be using IPC (Inter Process Communication), e.g. shared memory or sockets. I prefer the idea of sockets, as the programs could potentially run on separate machines on a network.
So I decide to create a local client/server system, with a custom API, to allow these elements to communicate. For example, the Air application will receive a message from the C application, telling it to update its UI. The USB application running in Java will use the sockets to stream audio from the USB hardware into the C application.
My question : is using local sockets in this way a typical way to design such a system?
Will the performance be much worse than a truly native application (e.g. everything in Java or C, in a single executable) ? It also seems likely that such an approach would be prone to bugs, and difficult to maintain?
I frequently find myself coming up against the limits of existing software libraries (e.g. a graphics library with a pretty, flexible UI but no way to access low-level hardware, or a media library that can mix many audio streams, but has no support for video playback), and find it very frustrating. If anyone could advise the best way to combine arbitrary software libraries like this, I would really appreciate it.
Thanks in advance!
As you have correctly identified, combining libraries from different language or platforms is hard. There are several ways to do it, but none are ideal. Examples:
Native call interfaces (e.g. JNI / JNA) - very fast but tricky to make work correctly, and you have the problem that the data types used typically don't map cleanly across different platforms. Adds native dependencies.
Socket based IPC with text protocol (XML, JSON, etc) - works OK and common formats are likely to be supported at both ends, but adds a lot of overhead. Can be a pain to maintain custom schema mappings etc. (see the sketch after this list)
Socket based IPC with binary protocol (e.g. Google protocol buffers) - quite efficient, needs a lot of work to get a custom protocol working correctly on both ends
Communication via a 3rd system (e.g. database, message queue, filesystem) - lots of overhead, can get fragile, introduces a major dependency on a 3rd system.
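As a sketch of the text-protocol option, the Java side of such an IPC channel could be as simple as the following (the port number and message format are made up; the C or Air process would connect and write one message per line):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical listener: another process connects over a local socket and
// sends one JSON-ish message per line; we acknowledge each one.
public class UiMessageServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5555)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream(), "UTF-8"));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // e.g. {"event":"levelChanged","value":0.8}
                        System.out.println("received: " + line);
                        out.println("{\"status\":\"ok\"}");
                    }
                }
            }
        }
    }
}
```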
In my experience, it usually isn't worth integrating a new language / platform just to get one specific library or feature. Take your user interface example - no matter how nice Adobe Air looks, I doubt it is worth trying to integrate it with an existing C/C++ application.
Even if you get it to work, it will significantly complicate the future maintenance and development of your application. Builds become more complex. You need to maintain additional communication / "glue" code. You need to manage more dependencies. Your users will get hit by many more configuration issues. Testing becomes more difficult. It becomes harder to teach someone new about how the whole system works. You need to maintain your skills in more languages / frameworks etc.
I'd recommend the following strategy:
Pick a primary platform
Whenever you need a new library or feature, look for something on your primary platform first. Hopefully (usually?) there is something good available - but even if not then it might be worth coding something yourself if the requirement is quite small.
Only if there is no reasonable option on the primary platform should you start to think about integrating a new language/platform.
In terms of primary platform, I'd normally suggest a JVM language like Java, Scala or Clojure since the JVM is very well engineered, offers great performance, is highly portable and has the largest / most cohesive library ecosystem (most of which is open source). The JVM is therefore probably the best "general purpose" choice unless you have some very specific requirement which is unlikely to be possible on the JVM, e.g.:
If you are doing lots of embedded / realtime / systems programming that requires hardware access, you probably need to go for C/C++
If you are coding purely for web-based clients, you probably want to use JavaScript (if you are also writing code on the server side you can consider JavaScript code generation frameworks/libraries that can work on the JVM, e.g. Vaadin or ClojureScript)
The answer pretty much depends on the technologies you're using, and there is no silver-bullet solution for this.
In general, these solutions will fall into one of the following categories:
Some interprocess communication techniques
Integrations provided by the language/platform itself
Database/some common storage (even files :) )
Examples of the first:
Sockets/pipes/whatever your operating system allows.
CORBA - allows you to write distributed code in different languages.
Google protobuf - allows serialization/deserialization of data objects and is language agnostic
For the second it really depends on the language/ecosystem you're using.
Examples for Java:
JNI - Java Native Interface - allows you to execute native code (DLLs/.so) outside the JVM (see the sketch after this list).
JCA - if you're in an enterprise environment, you can write the integration with legacy systems using this.
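For reference, the Java side of a JNI call is just a library load plus a native declaration; here is a minimal, hypothetical sketch (the library and function names are invented, and the native side would be compiled separately from C/C++):

```java
// Illustrative only: calling sampleRate() without the native library present
// would throw UnsatisfiedLinkError.
public class AudioDevice {
    static {
        System.loadLibrary("audio");   // finds audio.dll or libaudio.so on java.library.path
    }

    // Declared in Java, implemented in native code via a JNIEXPORT function.
    public static native int sampleRate();

    public static void main(String[] args) {
        System.out.println("Native sample rate: " + sampleRate());
    }
}
```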
For languages that are compiled into native code it's less tricky - you can write and compile some code, say in Pascal, and then use the DLL in C.
When we're talking about Java, there is also a plethora of languages that have their own syntax and compiler, but whose compiler produces Java bytecode that can run inside the JVM. So if your solution is based on these languages, the integration will be easier. Languages like Scala, Groovy, Clojure, Jython and so on fall into this category.
The last but not the least technology to be mentioned is Web Services. This is a very popular tool for integration of different systems, although it's more used in enterprise environments.
Basically it's an abstraction over the sockets layer that allows you to send data objects in XML/JSON format between processes/servers. Both XML and JSON are language agnostic, so it's not an issue to create an XML document in a program written in C++ and then consume it in Java.
Hope this helps
I'm performing a thought-experiment which, judging by other questions, isn't so novel after all, but I think I'll proceed anyway. I want to sandbox a user-supplied server-side script by, among other things, confining it to a virtual filesystem and setting the root directory, and further mapping certain virtual directories to specific physical ones, inconsistent with the actual directory structure. For example (using PHP string parsing), my preconception is "~$user/..." but the less-semantic "/$user/..." would work fine too; either might map to "users/$user/$script_name/data/...". Yes, under certain circumstances multiple users can be affected by the script.
Since this is a thought-experiment and I therefore don't consider the implementation language an issue, I'm expecting to do it on my localhost and would rather use PHP than install something else. (I also have Python 2 available, and could get mod_wsgi to use it instead. I'd install Python 3 if I had to.) Ideally, I wish a PEAR module would do this - but from what I can see none does.
I tried and failed to find a server module, such as SSJS, that could accomplish this. The closest things to answers that I found were << Looking for a locked down script interpreter >> and << Allowing users to script inline, what inline scripting engines are there for either .net or java? >>. I'll move to Java or, less likely, Mono if I absolutely have to, but I'm not enthusiastic to the idea. (I'm extremely rusty on Java, and have hardly used it server-side at all. Mono is totally alien to me.)
Since they're the most promising options so far, I also wonder how extensive the sandboxing facilities are in Java and Mono. For example, can they do virtual filesystems? Entering APIs from Java user-code into the engine? Are any standard APIs offered to the script, and if so can they be removed?
Clarification
I don't really care which way this goes, but I was actually expecting Java/Mono to be the implementation platform rather than the sandboxed one, based on the questions & answers I linked. I'm a little surprised to see that flipped in the answers, but either way works.
The Java sandbox (in the way implemented for browser applets) does not offer file access at all.
In general, the Java security model has only "allow or not allow" decisions for the security manager in most cases.
Of course, you could design another API instead of the normal file I/O API (and similar), and have your sandboxed script access files that way (and forbid the normal way via a security manager). (I suppose some of this is already implemented in the Java application engines on the market, but I know next to nothing about this.)
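As a rough sketch of that allow-or-deny model (not a hardened sandbox), a custom security manager could veto file writes outside a chosen virtual root; reads are left alone here so that class loading keeps working:

```java
import java.io.FilePermission;
import java.security.Permission;

// Illustrative only: denies write/delete access outside one directory tree.
public class ScriptSandboxManager extends SecurityManager {
    private final String allowedRoot;

    public ScriptSandboxManager(String allowedRoot) {
        this.allowedRoot = allowedRoot;
    }

    @Override
    public void checkPermission(Permission perm) {
        if (perm instanceof FilePermission) {
            String actions = perm.getActions();
            boolean mutating = actions.contains("write") || actions.contains("delete");
            if (mutating && !perm.getName().startsWith(allowedRoot)) {
                throw new SecurityException("File access denied: " + perm.getName());
            }
        }
        // everything else is allowed in this toy example
    }

    public static void main(String[] args) {
        System.setSecurityManager(new ScriptSandboxManager("/sandbox/users"));
        // from here on, file writes outside /sandbox/users throw SecurityException
    }
}
```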
I have never tried to truly sandbox Mono but this might give you a starting point:
http://www.mono-project.com/MonoSandbox
File system access in the sandbox is touched on in that link.
Popular choices for Mono scripting seem to be Boo and Python. Both ship with the latest version of Mono (2.10). Visual Basic, Ruby and F# (OCaml-ish) do as well.
The Mono C# compiler can be easily embedded as a service for scripting. Here is a nice article about it.
If you are partial to PHP, you should check out Phalanger.
There are many other choices. There are new .NET based scripting languages all the time. I came across this one earlier today.
I work with .Net professionally in a lot of different contexts, so it's easy for me to read about new frameworks, runtime internals, advanced techniques/design and put them to use and understand them. In the Java world, I have limited experience and am really only working with it for Android development these days. I've been able to learn the language well enough to build out the functionality I'm looking for, but I want to learn more about good practices and design that the Java guys agree on, whatever modern frameworks everyone's using, and more about the internals of the VM and how my programming choices affect how my code is compiled and executed.
Examples from the .Net world of what I'm looking for are
There's a series of books called Effective C# that outlines 50 items per book of subtle changes to your programming style and how they will make your code cleaner and more performant in specific scenarios.
Entity Framework is a framework from Microsoft for hooking up directly to a data source and building out a configurable entity model automatically
Managed Extensibility Framework is a new framework from Microsoft for writing extensible applications and pluggable libraries by exposing extension points on both ends
There is documentation galore on the internet about how the .Net garbage collector works and how your programming choices affect how this interacts with your applications
What kinds of resources, books, tutorials and frameworks exist like this in the Java world?
There's a book called Effective Java too.
There are different categories of data binding in Java. The most advanced are the object models, like JDO, JPA, etc. They basically use a mapping to move data between objects and tables, and you never touch the database directly as it is all handled transparently. Another is the typical "object binds to a row" technique, of which JDO is a good example. Finally, there is handling the database directly, for which you use JDBC. Use the tool most appropriate to your code logic.
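For the "handle the database directly" option, a minimal JDBC sketch looks like this (the connection URL, table and column names are made up, and the appropriate JDBC driver is assumed to be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative only: query one row with a parameterized statement.
public class DirectJdbcExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:db2://host:50000/MYDB", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, credit_limit FROM customers WHERE id = ?")) {
            ps.setInt(1, 42);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name") + " / " + rs.getInt("credit_limit"));
                }
            }
        }
    }
}
```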
In general, you'll find that with Java it's not a "one solution only" environment. Some of the problems have been solved multiple times in different ways to achieve different results.
It sounds like "Managed Extensibility Framework" is a subtle copy of the J2EE server concept. J2EE has undergone at least three major revisions over the past decade. If you want to use J2EE, remember that it provides services to items within wrappers called "containers". This means you will have to adapt your code to meet the container service agreements. There is a bit of up-front learning involved, but once you understand the environment it isn't hard. You also don't need to use the entire J2EE environment and you can embed your own solutions to those provided by the J2EE server. It's a pick and choose type arrangement, precious little is forced on you.
J2EE also describes a lot of corporate technologies that may live independently of a J2EE server, so if you don't like the J2EE environment (for whatever reason) you can always include the JAR files and use the libraries without the J2EE server.
Some people have decided that the initial J2EE servers were too restrictive, so you have an almost-J2EE server called Spring. The J2EE web containers arrived pretty early on the scene in Java, so you can get "web container only" servers, like Tomcat or Jetty.
With Java, there is probably even more documentation about the garbage collector, but you have to deal with its behaviour less. Java's garbage collector is generally much better behaved, and it doesn't have to deal with pointer support, which is part of what makes .NET's garbage collector something you do need to tend to from time to time.
That said, drop references to anything you want collected. If the logic stores items in a HashMap as a cache, consider using SoftReferences, which will not be treated as strong references during garbage collection. Java doesn't use reference counting, so don't worry about circular references: you can drop references to a cycle of objects and they will all be collected.
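A minimal sketch of such a SoftReference-backed cache (illustrative only, not thread-safe):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Values held only via SoftReference can be reclaimed when memory gets tight.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return (ref == null) ? null : ref.get();   // null if never cached or already collected
    }
}
```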
The algorithm the GC uses changes depending on memory availability. In low memory-utilization situations, it will copy live objects to a new page and free the old page (so compaction is obtained nearly for free). In higher memory situations, it uses a mark, sweep, and compact cycle typical of other garbage collectors. It also stages its memory in three generational segments to order objects by how frequently they should be checked for usage in the currently running program.
All of that said, the real kicker is that Android uses the Java language, but it doesn't run a JVM. It runs an "I-can't-believe-it's-not-Java!" JVM work-alike that makes significant changes to the class loader and class file format. That means that you need to learn how the Dalvik Virtual Machine operates and differs from the JVM.
Have fun! There is a lot more choice in Java land than you're probably accustomed to; however, many of the most popular Java tools have been ported to .NET land, so you won't find the entire landscape foreign.
I have read a few articles mentioning converters from one language to another.
I'm a bit more than skeptical about the usefulness of such tools. Does anyone know of, or have experience with, say, Visual Basic to Java converters (or vice versa)? Just to pick one example:
http://www.tvobjects.com/products/products.html claims to be the "world leader" or so in that respect. However, if you read this:
http://dev.mysql.com/tech-resources/articles/active-grid.html
There the author states:
"The consensus of MySQL users is that automated conversion tools for MS Access do not work. For example, tools that translate existing Access applications to Java often result in 80% complete solutions where finishing the last 20% of the work takes longer than starting from scratch."
Well, we know we need 80% of the time to implement the first 80% of the functionality and another 80% of the time for the other 20%...
So has anyone tried such tools and found them to be worthwhile?
Tried? No, actually built (more than one) language convertor.
Here's one I (and my coworkers) built for the B2 Spirit Stealth Bomber to convert the mission software, coded in a legacy language, JOVIAL, into maintainable C code, with 100% automated conversion. One of the requirements was that we were NOT allowed to see the actual source code. No joke.
You are right: if you get only a medium high conversion rate (e.g., 70-80%), the effort to finish the conversion is still very significant if indeed you can do it at all. We target 95%+ and do better when told to try harder as was the case for the B2. The only reason people accept medium high rate converters is because they can't find (or won't fund!) a better one, insist on starting now, and accept the fact that converting it this way may be painful (usually they don't know how much) but is in fact less painful than rebuilding it from scratch. (I happen to agree with this assessment: in general, projects that try to recode a large system from scratch usually fail and conversions using medium high conversion rate tools don't have as high a failure rate.)
There are lots of bad conversion tools out there, something slapped together with a mountain of PERL code doing regexes on text strings, or some YACC-based parser with code generation essentially one-to-one for each statement in the compilation unit. The former are built by people who had a conversion dropped on them out of the sky. The latter are often built by well-intentioned engineers that don't have decent compiler background.
For a singularly bad example, see my response to this SO question about COBOL migration: Experience migrating legacy Cobol/PL1 to Java, which is exactly a direct statement translator... producing the stuff that gave rise to the term "JOBOL".
To get such high-accuracy conversion rates, you need high-quality parsers, and means to build high-quality translation rules that preserve semantics, and optimize for target-language properties and special cases. In essence, you need what amounts to configurable compiler technology. The reason we succeed, IMHO, is our DMS Software Reengineering Toolkit, which was designed to do this job. (I'm the architect; check out my SO icon/bio).
Lots of careful testing helps, too.
DMS "knows" what the compiler knows about code, by virtue of having a compiler-like front end for the language of interest, and having the ability to build ASTs, symbol tables, control and data flows, call graphs. It uses much of the compiler technology that the compiler community spent the last half-century inventing, because that stuff has been proven to be useful in translation!
DMS knows more than most compilers know, because it can read/analyze/transform the entire application at once; most compilers stick to single compilation units. Thus one can code translation rules that depend on the entire application as opposed to just the current statement. We often add problem- or application-specific knowledge to improve the translation. This often shows up when converting special features of a language, or calls on libraries, where one must recognize the library calls as special idioms, and translate them to calls on compositions of target libraries and language constructs.
This capability is used to build translators (e.g., the JOVIAL translator), or domain-specific code generators.
More often we build complex automated software engineering tools that solve problems specific to customers, such as program analysis tools (dead code, duplicate code, style-broken code, metrics, architecture extraction, ...), and mass change tools (platform [not language] migrations, data layer insertion, API replacement, ...)
It seems to me, as is almost always the case with MS-ACCESS questions having tags that attract the wider StackOverflow population, that the people answering are missing the key question here, which I read as:
Are there any tools that can successfully convert an Access application to any other platform?
And the answer is
ABSOLUTELY NOT
The reason for that is simply that tools in the same family that use similar models for the UI objects (e.g., VB6) lack so many things that Access provides by default (how do you convert an Access continuous subform to VB6 and not lose functionality?). And other platforms don't even share the same core model as VB6 and Access, so those have even more hurdles to clear.
The cited MySQL article is quite interesting, but it really confuses the problems that come with incompetently-developed apps vs. the problems that come with the development tools being used. A bad data schema is not inherent to Access -- it's inherent to [most] novice database users. But the article seems to attribute this problem to Access.
And it entirely overlooks the possibility of fixing the schema, upsizing it to MySQL and keeping the front end in Access, which is by far the easiest approach to the problem.
This is exactly what I expect from people who just don't get Access -- they don't even consider that Access as front end to a securable, large-capacity server database engine can be a superior solution to the problem.
That article doesn't even really consider conversion of an Access app, and there's good reason for that. All the tools that I've seen that claim to convert Access applications (to whatever platform) either convert nothing but data (in which case they don't convert the app at all -- morons!), or convert the front end structure slavishly, with a 1:1 correspondence between UI objects in the Access application and in the target app.
This doesn't work.
Access's application design is specific to itself, and other platforms don't support the same set of features. Thus, there has to be translation of Access features into a working substitute for the original feature in the converted application. This is not something that can be done in an automated fashion, in my opinion.
Secondly, when contemplating converting an Access app for deployment in the web browser, the whole application model is different, i.e., from stateful to stateless, and so it's not just a matter of a few Access features that are unsupported, but of a completely different fundamental model of how the UI objects interact with the data. Perhaps a 100% unbound Access app could be relatively easily be converted to a browser-based implementation, but how many of those are there? It would mean an Access app that uses no subforms whatsoever (since they can't be unbound), and an app that uses only a handful of events from the rich event model (most of which work only with bound forms/controls). In short, a 100% unbound Access app would be one that fights against the whole Access development paradigm. Anyone who thinks they want to build an unbound app in Access really shouldn't be using Access in the first place, as the whole point of Access is the bound forms/controls! If you eliminate that, you've thrown out the majority of Access's RAD advantage over other development platforms, and gained almost nothing in return (other than enormous code complexity).
To build an app for deployment in the web browser that accomplishes the same tasks as an Access applications requires from-the-ground-up redesign of the application UI and workflow. There is no conversion or translation that will work because the successful Access application model is antithetical to the successful web application model.
Of course, all of this changes with Access 2010 and Sharepoint Server 2010 with Access Services. In that case, you can build your app in Access (using web objects) and deploy on Sharepoint for users to run it in the browser. The results are functionally 100% equivalent (and 90% visually), and run on all browsers (no IE-specific dependencies here).
So, starting this June, the cheapest way to convert an Access app for deployment in the browser may very well be to upgrade to A2010, convert the design to use all web objects, and then deploy with Sharepoint. That's not a trivial project, as Access web objects have a limited set of features in comparison to client objects (and no VBA, for instance, so you have to learn the new macros, which are much more powerful and safe than the old ones, so that's not the terrible hardship it may seem for those familiar with Access's legacy macros), but it would likely be much less work than a full-scale redesign for deployment on the web.
The other thing is that it won't require any retraining for end users (insofar as the web-object version is the same as the original client version), as it will be the same in the Access client as in the web browser.
So, in short, I'd say conversion is a chimera, and almost always not worth the effort. I'm agreeing with the cited sentiment, in fact (even if I have a lot of problems with the other comments from that source). But I'd also caution that the desire for conversion is often misguided and misses out on cheaper, easier and better solutions that don't require wholesale replacement of the Access app from top to bottom. Very often the dissatisfaction with Jet/ACE as data store confuses people into thinking they have to replace the Access application as well. And it's true that many user-developed Access apps are filled with terrible, unmaintainable compromises and are held together with chewing gum and baling wire. But a badly-designed Access application can be improved in conjunction with the back-end upsizing and revision of the data schema -- it doesn't have to be discarded.
That doesn't mean it's easy -- it's very often not. As I tell clients all the time, it's usually easier to build a new house than to remodel an old one. But one of the reasons we remodel old houses is because they have irreplaceable characteristics that we don't want to lose. It's very often the case that an Access app implicitly includes a lot of business rules and modelling of workflows that should not be lost in a new app (the old Netscape conundrum, pace Joel Spolsky). These things may not be obvious to the outside developer trying to port to a different platform, but for the end user, if the app produces results that are off by a penny in comparison to the old app, they'll be unhappy (and probably should be, since it may mean that other aspects of the app are not producing reliable results, either).
Anyway, I've rambled on for too long, but my opinion is that conversion never works except for the most trivial apps (or for ones that were designed to be converted, e.g., a 100% unbound Access app). I'm all for revision in place of replacement.
But, of course, that's how I make my living, i.e., fixing Access apps.
A couple of issues that affect the success or failure of cross-language conversion are the relative semantic richness of the languages, and their semantic models.
Translation from C++ to C should be relatively easy, but translation of C to idiomatic C++ would be next to impossible, because it is next to impossible to automatically turn a procedural program into an OO program.
Translation of Java to C would be relatively simple, though handling storage management would be messy. Translation of C into Java would be next to impossible if the C program did funky pointer arithmetic or casting between integers and different kinds of pointer.
Translation of a functional language to an imperative language would be much easier, though the result would probably be inefficient and non-idiomatic. Translation of an imperative language to a functional language is probably beyond the state of the art... unless you implement an interpreter for the imperative language in the functional language.
What this means is that some translators are necessarily going to be more successful than others in terms of:
completeness and accuracy of translation, and
readability and maintainability of the resulting code.
Things You Should Never Do, Part I by Joel Spolsky
"....They did it by making the single worst strategic mistake that any software company can make:
They decided to rewrite the code from scratch."
I have a list of MS Access converters on my website. I've never heard anything good about any of them in any postings in the Access related newsgroups I read on a daily basis. And I read a lot of postings on a daily basis.
Also note that there is a significant amount of functionality in Access, such as bound continuous forms or subforms, that is more work to reproduce in other systems. Not necessarily a lot of work but more work. And more troubles when it comes time to distribute and install the app.
I've used an automated converter from C# to Visual Basic.NET. It worked pretty well except for adding some unnecessary If True statements.
I've also attempted to use Shed Skin to convert Python-to-C++, but it didn't work because of its lack of support for new-style division.
I've used tools for converting a VB6 project into VB.NET - which you would hope would be perhaps one of the simpler examples of this sort of thing. My experience was that everything had to be checked, in fine detail, and half the stuff was missing/wrong.
Certainly I would recommend a migration by hand, or, depending on the language you're targeting, I would consider a complete rewrite if this gives you a chance to make major improvements to your codebase.
Martin
I have only tried free and basic paid-for converters. But the main problem is that it is very, very hard to have confidence that the conversion is entirely successful.
Usually they are best used to hand-convert code one section at a time, where you review each piece of code. Often, in my experience, a rewrite instead of a conversion turns out to be a better option.