We have two code bases, one written in C++ (MS VS 6) and another in Java (JDK 6).
Looking for creative ways to make the two talk to each other.
More Details:
Both applications are GUI applications.
Major rewrites or translations are not an option.
Communication needs to be two-way.
Try to avoid anything involving writing files to disk.
So far the options considered are:
ZeroMQ
RPC
CORBA
JNI
Compiling Java to native code, and then linking
Essentially, apart from the last item, this boils down to a choice between various ways to achieve interprocess communication between a Java application and a C++ application. Still open to other creative suggestions!
If you have attempted this, or something similar before please chime in with your suggestions, lessons learnt, pitfalls to avoid, etc.
Someone will no doubt point out shortly, that there is no one correct answer to this question. I thought I would tap on the collective expertise of the SO community anyway, and hope to get many excellent answers.
Well, it depends on how tightly integrated you want these applications to be and how you see them evolving in the future. If you just want to communicate data between the two of them (e.g. you want one to be able to open a file written by the other, or read a stream directly from the other), then I would say that protocol buffers are your best bet. If you want the window rendered by one of these GUI apps to actually be embedded in a panel of the other GUI app, then you probably want to use the JNI approach. With the JNI approach, you can use SWIG to automate a great deal of it, though it is dangerously magical and comes with a number of caveats (e.g. it doesn't do so well with function overloading).
I strongly recommend against CORBA, RMI, and similar remote-procedure-call implementations, mostly because, in my experience, they tend to be very heavyweight and consume a lot of resources. If you do want something similar to RMI, I would recommend something lighter weight where you pass messages, but not actual objects (as is the case with RMI). For example, you could use protocol buffers as your message format, and then simply serialize these back and forth across normal sockets.
Kit Ho mentioned XML or JSON, but protocol buffers are significantly more efficient than either of those formats and also have notions of backwards-compatibility built directly into the definition language.
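To make the message-passing suggestion concrete, here is a minimal Java-side sketch, assuming a hypothetical `Event` message class generated by `protoc` from a .proto file you define (the fields, host, and port are placeholders); the delimited read/write helpers are part of the protobuf Java runtime and handle length-prefix framing for you.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Event is the hypothetical message class generated by protoc from your .proto
// file (assumed to be in the same package here for brevity).

public class ProtoSocketClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 9000); // C++ peer assumed to listen here
        OutputStream out = socket.getOutputStream();
        InputStream in = socket.getInputStream();

        // Build a message and write it length-prefixed so the C++ side can
        // frame it; writeDelimitedTo() adds the varint length prefix.
        Event request = Event.newBuilder()
                .setType("PING")               // placeholder field
                .setPayload("hello from Java") // placeholder field
                .build();
        request.writeDelimitedTo(out);
        out.flush();

        // Read the length-prefixed reply the same way.
        Event reply = Event.parseDelimitedFrom(in);
        System.out.println("C++ replied: " + reply.getPayload());

        socket.close();
    }
}
```

The C++ peer would read the same varint length prefix (e.g. via protobuf's CodedInputStream) and parse the identical message type, which is what keeps the two sides in sync.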
Use Jacob (http://sourceforge.net/projects/jacob-project), JCom (http://sourceforge.net/projects/jcom), or j-Interop (http://j-interop.org) and communicate over COM.
Since you're using Windows, I'd suggest using DDE (Dynamic Data Exchange). There's a Java library available from Java Parts.
I don't know how much data, or what type of data, you want to transfer and communicate.
But to keep things simple, I suggest using XML or JSON over HTTP.
There are lots of libraries for both applications, so you won't spend much effort implementing or understanding this.
Moreover, if you have additional applications to talk with later, it is not hard to add them, since both technologies are cross-language.
Correct me if I am wrong.
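As a rough sketch of the JSON-over-HTTP idea on the Java side (the C++ application would simply issue HTTP requests with any HTTP client library), using the `com.sun.net.httpserver` package bundled with JDK 6; the `/status` path and the hand-rolled JSON body are placeholders:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class JsonEndpoint {
    public static void main(String[] args) throws IOException {
        // Embedded HTTP server shipped with the JDK; no external dependencies.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/status", new HttpHandler() {
            public void handle(HttpExchange exchange) throws IOException {
                // Hand-rolled JSON for brevity; a real app would use a JSON library.
                byte[] body = "{\"status\":\"ok\",\"queueDepth\":42}".getBytes("UTF-8");
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                OutputStream os = exchange.getResponseBody();
                os.write(body);
                os.close();
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/status");
    }
}
```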
I've been doing some programming in Java and some in C but now I need to sort of use both together.
Here's the situation: I'm using Hadoop/HBase to process and store a lot of data, but I'm using C/CUDA to do number crunching on the data. Is there a stable/mature/common way to take data (it's basically a log file) in Java and pass it to a C program, which processes the data and stores it as a linked list that is then accessible by the Java app?
I might not be searching for the right thing, but so far I found JavaCPP, which is good but seems to tie both programs together. Because Java handles the data flow and C handles the processing of the data, I thought it might be better to keep them as independent programs that can communicate with each other, as opposed to a single program that may become confusing. But I'm totally flexible, so any suggestions/solutions are welcome.
You may find it easier to keep the programs testable and clear if you leave them separate and then use a client-server approach, or simply choose a common file format and have the latter steps poll the output directory for new files to process.
To make it easier to define file formats across different languages, consider a package like Apache Thrift or Google Protocol Buffers.
Here is what I have off the top of my head:
1. Run the C program from the Java app via the command line (see the sketch after this list).
2. Use JNI/JNA
3. Implement your own "client-server" architecture. It sounds complicated but in some cases it may be the best and the simplest solution.
4. Communicate using a web service: SOAP, REST, whatever.
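For option 1, a minimal sketch of launching a native binary from Java and reading back its output could look like this (the `./crunch` executable and its argument are placeholders):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class RunNativeCruncher {
    public static void main(String[] args) throws Exception {
        // Launch the (hypothetical) C/CUDA binary with the log file as an argument.
        ProcessBuilder pb = new ProcessBuilder("./crunch", "/tmp/input.log");
        pb.redirectErrorStream(true); // merge stderr into stdout for simplicity
        Process process = pb.start();

        // Read whatever the C program prints; a real design might use a
        // structured format (CSV, protobuf, etc.) instead of free text.
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println("from C: " + line);
        }
        int exitCode = process.waitFor();
        System.out.println("C program exited with code " + exitCode);
    }
}
```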
I hope this is helpful as a starting point.
You are welcome to ask more specific questions once you have them.
I am thinking of working on a programming problem for which, I suppose, I will need to know a lot of advanced programming concepts. For various reasons I have decided to code it in Java, even though I am not proficient in it.
So I want you to help me with suggestions, guidance, pointers to resources, books, tutorials, or any general advice that you think is pertinent.
Here is the basic nature of my problem:
I need to create a client-server architecture. The server supports multiple concurrent clients. Clients send it simple instructions (maybe the server exposes some kind of API or runs a listener on a specific port); the server executes the instructions and sends results back to the clients.
The main job of the server is to do a huge volume of data processing based on the instructions given to it. It takes data from a backend database / file system. Data volume can easily surge to ~200GB-700GB. Data will usually be streamed to it, but it may need to hold a huge volume of data in an in-memory cache during processing (and if RAM is not enough, page it to disk). Computations are generally numerically intensive (say, taking the inverse of a matrix).
The server should be able to do multithreading (I don't know exactly what this term means in Java; what I want is for the server to be able to distribute the job across multiple parallel sub-processes).
The server itself should be very lightweight. I do NOT need any GUI interface.
It would be great if I could design it in a way that allows me to integrate it later with HPC frameworks like Hadoop.
Now if I am going to do this, what kind of programming do I need to learn? By the way, I have a good understanding of OOP, I am somewhat familiar with data structures and algorithms, and I know basic Java (never done any network or multithreaded programming in Java before, but have used typical OOP concepts, generics, the Comparable interface, etc.). I basically work in database programming, but have also done a lot of C, C++, C#, and Python in the past.
Given the requirements and my background, please suggest:
How should I begin to work on this project? What is the way to architect the project?
Should I create some basic API definitions first and then start working on the details?
Should I follow any particular design pattern? Where to learn them from?
What are the things I need to learn in Java and where to learn them from?
What is the best way to read huge data into memory? Is Java NIO a good solution?
If I instantiate a class with a huge amount of data, would it work? (For example, let's say I have a Vector class to represent a matrix with millions of elements, and the constructor of the class reads a huge data set into memory.) What's the best way to handle that?
You will want to define how the client and server will talk to each other. The easiest way is to use established protocols such as HTTP, by creating REST services that the client can call without much coding.
Most frameworks that support HTTP create several listeners that run in different threads. This gives you multithreading out of the box.
I'd suggest looking into one of these frameworks; I prefer Spring controllers. Spring is fairly lightweight.
If you want to use these frameworks, you will want a quick way to find them and incorporate them into your application for compilation and packaging.
I would suggest looking into Maven for this. It's a big time saver, in particular for using archetypes to create your project's folder structure and for automatically downloading dependencies (and their dependencies).
Finally, my words of wisdom: ensure your services are singleton, stateless services. This means you only create the objects once, and each thread uses the same objects. There is far less garbage collection happening, which makes a huge difference when processing large numbers of requests.
Be careful not to use class-level variables to hold state in these services. If you do, different threads will overwrite each other's data.
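As a sketch of the "singleton stateless service" advice, assuming Spring MVC-style annotations (the `/invert` endpoint, its parameter, and its logic are made up for illustration):

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class MatrixController {

    // Spring creates ONE instance of this controller and reuses it for every
    // request, so keep it stateless: no mutable fields holding per-request data.

    @RequestMapping("/invert")
    @ResponseBody
    public String invert(@RequestParam("size") int size) {
        // All per-request state lives in local variables, which are confined
        // to the thread handling this request, so it is safe under concurrency.
        double[][] matrix = new double[size][size];
        // ... fill and invert the matrix here (omitted) ...
        return "{\"size\":" + size + ",\"status\":\"done\"}";
    }

    // ANTI-PATTERN (don't do this): a field such as
    //   private double[][] currentMatrix;
    // would be shared by all threads, and requests would overwrite each other.
}
```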
The first thing I would like to say is that, from your explanation, you seem to be in pretty good shape to use Java as your server-side language.
The kind of client-server architecture you choose may depend on what kind of clients you are actually serving. Would they be typical GUI- or console-based desktop clients, or web clients?
In the latter case you could use the Spring Framework in the normal fashion, and for the former you could go further and explore Spring's support for RESTful web services. I would advise against going with raw socket or TCP-based solutions using Java's networking APIs directly.
Spring's RESTful support gives you a very nice abstraction over things like networking and multithreading, even for a desktop-based client. In the case of a desktop client you can use JSON/XML as the response format and the HttpClient library for making calls to the server, which is itself a nice abstraction over the underlying networking.
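For instance, a desktop client calling such a REST endpoint with Apache HttpClient (4.x-style API; the URL is a placeholder) might look roughly like this:

```java
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class RestClientExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = new DefaultHttpClient();
        try {
            // Placeholder URL for the server-side REST endpoint.
            HttpGet get = new HttpGet("http://localhost:8080/invert?size=1000");
            HttpResponse response = client.execute(get);

            // The body is expected to be JSON (or XML); parse it with the
            // JSON/XML library of your choice instead of printing it raw.
            String body = EntityUtils.toString(response.getEntity());
            System.out.println("Server said: " + body);
        } finally {
            client.getConnectionManager().shutdown();
        }
    }
}
```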
Furthermore, Spring's design follows a very linear flow of data. A lot of your fundamental design considerations are taken care of by Spring itself through Dependency Injection and Inversion of Control, which are extremely simple to incorporate.
For a detailed analysis of design patterns related to specific requirements, I would suggest reading the book Java Design Patterns: A Tutorial by James W. Cooper (Addison-Wesley).
One more thing about the API design: it would be preferable to first create an API specification and then go on to implement it.
This is a general "noob" question about software design, so I apologise if it seems vague,
but I would really appreciate the advice. Note the system described below is purely an example, not a specific product I have in mind.
I often have a need to combine the functionality of several libraries or utilities, written in different languages. For example, if I want to code a high-performance audio processing application for the desktop, I will write it in C / C++. Then, I want to add a nice GUI. But I don't want to learn Qt. I like the look and feel of Adobe Air, and would like to use that. Later, I have a need to access a USB device. But the USB library I have only has an API in Java. How can I combine all these elements together, to take advantage of their relative strengths?
Clearly, I cannot compile these various elements into one single executable. So I need to create and run them separately, and give them a means to communicate. The most common way to do this seems to be IPC (Inter-Process Communication), e.g. shared memory or sockets. I prefer the idea of sockets, as the programs could potentially run on separate machines on a network.
So I decide to create a local client/server system, with a custom API, to allow these elements to communicate. For example, the Air application will receive a message from the C application, telling it to update its UI. The USB application running in Java will use the sockets to stream audio from the USB hardware into the C application.
My question : is using local sockets in this way a typical way to design such a system?
Will the performance be much worse than a truly native application (e.g. everything in Java or C, in a single executable) ? It also seems likely that such an approach would be prone to bugs, and difficult to maintain?
I frequently find myself coming up against the limits of existing software libraries (e.g. a graphics library with a pretty, flexible UI but no way to access low-level hardware, or a media library that can mix many audio streams, but has no support for video playback), and find it very frustrating. If anyone could advise the best way to combine arbitrary software libraries like this, I would really appreciate it.
Thanks in advance!
As you have correctly identified, combining libraries from different language or platforms is hard. There are several ways to do it, but none are ideal. Examples:
Native call interfaces (e.g. JNI / JNA) - very fast but tricky to make work correctly, and you have the problem that the data types used typically don't map cleanly across different platforms. Adds native dependencies.
Socket based IPC with text protocol (XML, JSON, etc) - works OK and common formats are likely to be supported at both ends, but adds a lot of overhead. Can be a pain to maintain custom schema mappings etc.
Socket based IPC with binary protocol (e.g. Google protocol buffers) - quite efficient, needs a lot of work to get a custom protocol working correctly on both ends
Communication via a 3rd system (e.g. database, message queue, filesystem) - lots of overhead, can get fragile, introduces a major dependency on a 3rd system.
In my experience, it usually isn't worth integrating a new language / platform just to get one specific library or feature. Take your user interface example - no matter how nice Adobe Air looks, I doubt it is worth trying to integrate it with an existing C/C++ application.
Even if you get it to work, it will significantly complicate the future maintenance and development of your application. Builds become more complex. You need to maintain additional communication / "glue" code. You need to manage more dependencies. Your users will get hit by many more configuration issues. Testing becomes more difficult. It becomes harder to teach someone new how the whole system works. You need to maintain your skills in more languages / frameworks, etc.
I'd recommend the following strategy:
Pick a primary platform
Whenever you need a new library or feature, look for something on your primary platform first. Hopefully (usually?) there is something good available - but even if not then it might be worth coding something yourself if the requirement is quite small.
Only if there is no reasonable option on the primary platform should you start to think about integrating a new language/platform.
In terms of primary platform, I'd normally suggest a JVM language like Java, Scala or Clojure since the JVM is very well engineered, offers great performance, is highly portable and has the largest / most cohesive library ecosystem (most of which is open source). The JVM is therefore probably the best "general purpose" choice unless you have some very specific requirement which is unlikely to be possible on the JVM, e.g.:
If you are doing lots of embedded / realtime / systems programming that requires hardware access, you probably need to go for C/C++
If you are coding purely for web-based clients, you probably want to use JavaScript (if you are also writing code on the server side you can consider JavaScript code generation frameworks/libraries that can work on the JVM, e.g. Vaadin or ClojureScript)
The answer pretty much depends on the technologies you're using, and there is no silver-bullet solution for this.
In general, the solutions will fall into one of the following categories:
Some interprocess communication techniques
Integrations provided by the language/platform itself
Database/some common storage (even files :) )
Examples of the first:
Sockets/pipes/whatever your operating system allows.
CORBA - allows you to write distributed code in different languages.
Google protobuf - allows serialization/deserialization of data objects and is language-agnostic.
For the second, it really depends on the language/ecosystem you're using.
Examples for Java:
JNI - Java Native Interface - allows you to execute native code (DLLs/.so libraries) outside the JVM (a minimal Java-side sketch follows these examples).
JCA - if you're in an enterprise environment, you can write the integration with legacy systems using this.
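As a rough Java-side sketch of the JNI approach (the `audioengine` library name and the native method are made up; the matching C/C++ side must export a function following JNI's `Java_...` naming convention):

```java
public class NativeAudioEngine {

    static {
        // Loads libaudioengine.so on Linux or audioengine.dll on Windows
        // from java.library.path. The library name is a placeholder.
        System.loadLibrary("audioengine");
    }

    // Declared in Java, implemented in C/C++. The native implementation must
    // be exported as:
    //   JNIEXPORT jint JNICALL
    //   Java_NativeAudioEngine_mixChannels(JNIEnv*, jobject, jint)
    public native int mixChannels(int channelCount);

    public static void main(String[] args) {
        int result = new NativeAudioEngine().mixChannels(8);
        System.out.println("native call returned " + result);
    }
}
```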
For languages that are compiled to native code it's less tricky - you can write and compile some code, say in Pascal, and then use the resulting DLL from C.
When we're talking about Java, there is also a plethora of languages that have their own syntax and compiler but compile to Java bytecode that runs inside the JVM. If your solution is based on these languages, the integration will be easier. Languages like Scala, Groovy, Clojure, Jython and so on fall into this category.
The last but not least technology to mention is web services. This is a very popular tool for integrating different systems, although it's used more in enterprise environments.
Basically it's an abstraction over the socket layer that lets you send data objects in XML/JSON format between processes/servers. Both XML and JSON are language-agnostic, so it's not an issue to create an XML document in a program written in C++ and then consume it in Java.
Hope this helps
I'm currently considering using java in one of my projects(for reasons unrelated to networking). At the moment I'm using C++ and a custom protocol built on top of UDP. My problem here is that while the added efficiency is nice for sending large amounts of realtime-data, I'd rather have something along the lines of RPCs for pure "logic actions" such as login. RPC's in C++ are hard to do though, since standard C++ itself has no notion of serialization.
In another answer, I found Java's RMI, which seems to be similar to RPCs, but I couldn't find how efficient/responsive it is, nor whether it could be plugged into my existing UDP socket, since I don't want to have two ports open on my server program.
Alternatively, since I think Java has serialization, I could implement RPC's myself, depending on how straightforward deserializing an arbitrary stream of objects in java is. Still, if this would require me to spend days on learning the intrinsics of java, this wouldn't be an option for me.
If you're interested in RPC, there is always XML-RPC and JSON-RPC, both of which have free/open-source C++ implementations. Unfortunately, most of my development has been in Java, so I can't speak to how usable or effective they are, but it might be something to look into since it sounds like you have already done some work in C++ and are comfortable with it. They also have Java implementations, so you might even be able to support both Java and C++ applications with XML-RPC or JSON-RPC, if you want to go down that route.
The only downside is that it looks like most of these use HTTP connections. One of the things you wanted to do was to reuse the existing connection. Now, I haven't looked at all of the implementations, but the two that I looked at might not meet that requirement. Worst case is that perhaps you can get some ideas. Best case is that there might be another implementation out there somewhere that does what you need, and you now have a starting point to find it.
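To give a sense of the Java side with Apache XML-RPC (3.x-style API; the endpoint URL and the `auth.login` method name are placeholders):

```java
import java.net.URL;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

public class LoginRpcClient {
    public static void main(String[] args) throws Exception {
        // Point the client at the (hypothetical) XML-RPC endpoint.
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://localhost:8080/xmlrpc"));

        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);

        // Call a made-up remote method "auth.login" with two string parameters.
        Object result = client.execute("auth.login",
                new Object[] { "alice", "secret" });
        System.out.println("login returned: " + result);
    }
}
```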
The use of RPC as an abstraction does not preclude the use of UDP as the transport layer: RMI is an RPC abstraction that generally uses TCP under the hood (last time I looked).
I'd suggest just coding up a Java layer to talk your UDP protocol: you can use any one of many libraries to do it, and you don't have to discard all your existing work. If you want to wrap an RPC layer around your protocol, there's no reason why you can't: create a login method that sends the login UDP packet, receives the appropriate response, and returns it.
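A minimal sketch of such a hand-rolled UDP login call on the Java side (the packet layout, port, and timeout are assumptions; your existing C++ protocol defines the real wire format):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpLoginClient {

    // Sends a login request over UDP and blocks for the reply.
    public static byte[] login(String user, String pass) throws Exception {
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.setSoTimeout(2000); // don't block forever on a lost packet

            // Placeholder wire format: "LOGIN user pass" as UTF-8 text.
            byte[] request = ("LOGIN " + user + " " + pass).getBytes("UTF-8");
            InetAddress server = InetAddress.getByName("localhost");
            socket.send(new DatagramPacket(request, request.length, server, 7777));

            // Wait for the server's response datagram.
            byte[] buffer = new byte[512];
            DatagramPacket reply = new DatagramPacket(buffer, buffer.length);
            socket.receive(reply);

            byte[] payload = new byte[reply.getLength()];
            System.arraycopy(buffer, 0, payload, 0, reply.getLength());
            return payload;
        } finally {
            socket.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(new String(login("alice", "secret"), "UTF-8"));
    }
}
```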
If it's a remotely serious project, you should probably take a look at Netty.
It's a great library for developing networked systems, has a lot of proven production usage and is well suited for things like TCP or UDP client-server communication. I wouldn't go reinventing this wheel unless you really have to :-)
As a bonus they have some good examples and documentation too.
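To give a flavour of Netty (shown here with the 4.x-style API; the port and the trivial echo behaviour are just for illustration), a minimal TCP server looks roughly like this:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles traffic
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // Echo handler: write every received buffer straight back.
                             ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                 @Override
                                 public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                     ctx.writeAndFlush(msg);
                                 }
                             });
                         }
                     });
            bootstrap.bind(9090).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```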
I know of at least one post that uses almost the same words as this one, but this is not exactly the same as that post. I'm trying to work out a way to "share" data between a .NET and a Java application. I'm not concerned about objects, just plain strings if you like.
I have a .NET application capturing real-time data and a Java application which has the capability to analyze and work on this data. I'm looking for ways to reuse this same Java app without recoding it entirely in .NET.
My problem is that the data is "fairly" REAL-Time (.NET), and so has to be the analysis (Java). I can live with microsecond delays but I can't afford one second delay. WebServices, Queues (as in Messaging Queues), RDBMS are some of the options I can think of. Is there any better way?
Or does anybody have some real performance numbers for the solutions I mentioned above, to help select one of them? And just to get started: RDBMSs are not THAT good at concurrent insertion/update/read across many connections, at least when done the crude way (deadlocks?).
What are "objects" if not a mechanism for describing "data"? But I digress - I suspect I would look at a TCP socket between the two. If the data is very basic, then fine - just write directly to the stream; if there is any complexity, perhaps use something like "protocol buffers" to provide an easy way of reading/writing dense data to a stream without having to write every last byte yourself.
I think microsecond delays are going to be a challenge for any approach here... will millisecond delays do?
For completeness:
Another possibility is to use named pipes; they should be pretty quick, and I'd imagine (being a Java guy, I can only imagine) that .NET has native support for them. The downside is that on Windows you'll have to either write a JNI extension or use a library like JNA to poke around at the Win32 API from Java.
Sounds like a local socket could do it. The latency should be in the low milliseconds or less.
Depending on your program, you may get some mileage out of what @Cowan reports in his answer to 'Any concept of shared memory in Java': Any concept of shared memory in Java
In summary: he says that you can use memory-mapped files between two processes on the same machine. In theory this could work between .NET and Java, assuming .NET has some memory-mapped file support.
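A minimal Java-side sketch of the memory-mapped-file idea (the file path, 4 KB mapping size, and byte layout are arbitrary; the .NET side would map the same file with its own memory-mapped-file support):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryReader {
    public static void main(String[] args) throws Exception {
        // Both processes open and map the same file; the OS shares the pages.
        RandomAccessFile file = new RandomAccessFile("C:/temp/shared.dat", "rw");
        FileChannel channel = file.getChannel();
        MappedByteBuffer buffer =
                channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

        // Placeholder layout: a 4-byte length followed by UTF-8 bytes,
        // written by the .NET producer and read here.
        int length = buffer.getInt(0);
        byte[] data = new byte[length];
        buffer.position(4);
        buffer.get(data);
        System.out.println("latest sample: " + new String(data, "UTF-8"));

        channel.close();
        file.close();
    }
}
```

Note that plain memory-mapped files give you no built-in signalling; you still need some notification mechanism (polling, a socket, or a named event) to tell the reader that new data has arrived.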
Different machines communicate with each other by sending messages over sockets. Please check the link below for an example.
Socket programming in the real world
Answers provided here are great. One idea that might be of interest, but is probably asking for more trouble than it's worth, is to load both VMs in a single process (both the JVM and the CLR can be loaded within a native Windows application) and give them access to native code: Java via JNI, and .NET via the native-interop mechanisms it provides.
You could also leverage native queue semaphores to wake up a thread on one side or the other when data is updated.
While JNI transitions are expensive, they would probably still be faster than a local socket implementation.
How is your Java application currently deployed? It sounds to me like you're willing to make some modification to it, so I'm assuming you have access to the source code.
I know this is a little out there, but could you compile the Java application in the J# compiler, so that your .NET app has native access to it?
You can convert your compiled Java application to .NET with IKVM. After that you can change the logic of your .NET application so it does not transfer data to the Java application at all, but simply calls the data-processing code written in Java as if it had been written and compiled for .NET.
There are a number of JMS servers which support both .NET and Java clients. These can deliver messages in under a millisecond.
However you might like to try an RPC solution like Hessian RPC or Protobuf RPC. These can achieve lower latencies and can give the appearance of direct calls between platforms. These support .NET and Java as well.
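To illustrate the JMS option on the Java (consumer) side, assuming ActiveMQ as the provider (the broker URL and queue name are placeholders; the .NET capture application would publish to the same queue through the provider's .NET client, e.g. Apache NMS for ActiveMQ):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RealtimeConsumer {
    public static void main(String[] args) throws Exception {
        // Placeholder broker URL; the broker runs as a separate process.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("realtime.data"); // placeholder name

            MessageConsumer consumer = session.createConsumer(queue);
            while (true) {
                // Block up to one second waiting for the next captured sample.
                TextMessage message = (TextMessage) consumer.receive(1000);
                if (message != null) {
                    analyze(message.getText());
                }
            }
        } finally {
            connection.close();
        }
    }

    private static void analyze(String sample) {
        System.out.println("analyzing: " + sample); // real analysis goes here
    }
}
```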