According to this Wikipedia entry:
"Protocol Buffers is very similar to Facebook's Thrift protocol, except it does not include a concrete RPC stack to use for defined services. Since Protocol Buffers was open sourced, a number of RPC stacks have emerged to fill this gap."
However, there are no examples of RPC stacks cited. Can anyone suggest a Java-based implementation of an RPC stack?
If you want a Java-based RPC stack, the built-in answer is RMI. However, it doesn't work well cross-platform.
I've been using ProtoBuf to do RPC. You can pretty much simulate an RPC stack by wrapping a protobuf message inside another protobuf that defines the service or call. See my answer to this question for details:
Google Protocol Buffers and HTTP
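That wrapper idea can be sketched in a .proto file; the message and field names here are hypothetical, not from any real library:

```proto
// Hypothetical envelope: the outer message names the method being
// called and carries the serialized inner protobuf as raw bytes.
message RpcRequest {
  required string method_name = 1;     // e.g. "GetWidget"
  required bytes request_payload = 2;  // serialized inner message
}

message RpcResponse {
  optional bytes response_payload = 1; // serialized reply message
  optional string error = 2;           // set on failure
}
```

The transport (raw sockets, HTTP, a message queue) then only ever sees RpcRequest/RpcResponse, and the dispatcher on the server side switches on method_name to decode the inner payload.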
Thrift looks like a very good alternative if you want to support more platforms, such as PHP, Ruby, C#, etc. However, it looks very complex to me compared to ProtoBuf.
Google has open sourced their RPC framework gRPC, which uses Protocol Buffers to define the service and messages. gRPC is cross-platform with support for C, C++, C#, Java, Go, Node.js, Python, Ruby, Objective-C and PHP.
gRPC is based on the HTTP/2 standard that enables new capabilities such as bidirectional streaming, flow control, header compression and multiplexed connections.
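With gRPC, the service is declared alongside the messages in the .proto file; a minimal sketch (service and message names here are illustrative, not from any real API):

```proto
syntax = "proto3";

// Hypothetical service definition; the protoc gRPC plugin generates
// client stubs and server skeletons from it.
service WidgetService {
  // Plain unary call.
  rpc GetWidget (WidgetRequest) returns (WidgetReply);
  // HTTP/2 is what makes bidirectional streaming like this possible.
  rpc WatchWidgets (stream WidgetRequest) returns (stream WidgetReply);
}

message WidgetRequest { string id = 1; }
message WidgetReply   { string name = 1; }
```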
Related
I'm writing an application for mobile phones in Java. Its goal is to send and receive Vector objects to and from the server. But here I've got a problem: there's no ObjectOutputStream support in J2ME. So I have to convert my Vector to a byte array or do something of that kind.
I've been thinking about converting the Vector to a string, transmitting it over the network, and rebuilding the original Vector from the string, but that hardly seems like a workable approach.
Also, I've looked at some frameworks, like J2ME Polish, but unfortunately I failed to find the jar files with the API in the installation folder.
So, any ideas?
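For the byte-array route the question mentions: CLDC does include DataOutputStream/DataInputStream, so one workable sketch is to hand-serialize the Vector's elements, assuming here that the Vector holds Strings:

```java
import java.io.*;
import java.util.Vector;

public class VectorCodec {

    // Serialize a Vector of Strings to a byte array.
    public static byte[] toBytes(Vector v) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(v.size());                    // element count first
        for (int i = 0; i < v.size(); i++) {
            out.writeUTF((String) v.elementAt(i)); // length-prefixed UTF-8
        }
        out.flush();
        return bos.toByteArray();
    }

    // Rebuild the Vector from the byte array on the other side.
    public static Vector fromBytes(byte[] data) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int n = in.readInt();
        Vector v = new Vector();
        for (int i = 0; i < n; i++) {
            v.addElement(in.readUTF());
        }
        return v;
    }

    public static void main(String[] args) throws IOException {
        Vector v = new Vector();
        v.addElement("hello");
        v.addElement("world");
        Vector back = fromBytes(toBytes(v));
        System.out.println(back); // prints [hello, world]
    }
}
```

The same byte array can then be written to the connection's OutputStream on the phone and decoded by the mirror-image code on the server.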
There are two relatively easy ways to serialize and deserialize Java objects to their binary representation to facilitate network communication between a server and a Java ME device:
Protocol Buffers
Protocol Buffers are Google's creation to quickly implement very efficient exchange of data over a network. You define a "protobuf" file that is a specification of the object you want to exchange, and then the protobuf tools create client and server-side stubs to handle the serialization and de-serialization of the objects over a network connection.
Based on the "Third-Party plugin page", there are several Java ME projects for handling protocol buffers on Java ME. There are also a number of other language libraries, so this approach should give you plenty of server-side implementation options too. I personally haven't used this approach on Java ME.
Netbeans' "Mobile Client to Web Application" tools
If your server-side implementation is in Java, you can use Netbeans' "Mobile Client to Web Application" tools to generate server-side and client-side stubs to send Java objects over a binary data stream. The above link is a good tutorial for detailed implementation.
The general steps are:
a) Define your server-side web service in Java EE, including the object you'd like to pass over the network connection.
b) Run the "Mobile Client to Web Application" tool to generate client and server-side stubs.
I've used this approach in several Java ME apps and it works very well. The pro of this approach is that you can see and edit all the source code for the generated stubs, as it's right there in your project. The possible con is that it requires a Java implementation of the server-side code, so it isn't as portable to other platforms as Protocol Buffers.
At the moment I have a solution that uses ZeroMQ to exchange protocol buffer payloads.
The protocol buffer method of serialization is bound to stay as it is, but I can replace ZMQ with a more convenient option.
The things I am not happy about in ZMQ are:
It uses JNI on the Java side, and I've been bitten before by JNI in complex, multi-threaded scenarios. I try to eliminate it whenever I can.
I don't need queuing, I just need RPC.
My requirements (which are mostly covered by ZeroMQ) are:
Support for 32/64 bit *nix, Windows, MacOS.
Support for Java, C++ and C# primarily, and Python, Ruby etc. would be nice.
Language support must be provided by native implementations in the language, not via wrapping native code.
High performance.
Non-viral license; no GPL, AGPL, etc.
I've been thinking about using Thrift as the transport layer over TCP (I guess it supports that) with Protocol Buffers payloads, provided its Java messaging implementation doesn't use JNI.
What options can you think of other than ZMQ for this setup?
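For what it's worth, the framing Thrift's framed transport performs (a 4-byte length prefix per message) is simple to do in plain Java with no JNI at all; a minimal sketch over arbitrary streams, into which any serialized protobuf payload can go:

```java
import java.io.*;

public class Framing {

    // Write one frame: a 4-byte big-endian length, then the payload bytes.
    public static void writeFrame(DataOutputStream out, byte[] payload)
            throws IOException {
        out.writeInt(payload.length);
        out.write(payload);
        out.flush();
    }

    // Read one frame back; the payload could be a serialized protobuf.
    public static byte[] readFrame(DataInputStream in) throws IOException {
        int len = in.readInt();
        byte[] payload = new byte[len];
        in.readFully(payload); // blocks until the whole frame arrives
        return payload;
    }

    public static void main(String[] args) throws IOException {
        // Demonstrated here over in-memory streams; over TCP you would
        // wrap the Socket's input/output streams the same way.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        writeFrame(new DataOutputStream(bos), "ping".getBytes("UTF-8"));
        byte[] back = readFrame(new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray())));
        System.out.println(new String(back, "UTF-8")); // prints ping
    }
}
```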
You should probably have a look at Netty. It's a high-performance Java NIO server framework with built-in support for Protocol Buffers, released under the terms of the Apache License. The framework is well documented, and some examples show how to prototype protocols with Protocol Buffers.
Have you considered something like Storm or Spread?
The original question was asked about a year after JeroMQ was put on GitHub. It is a pure-Java implementation of ZeroMQ. It has seen constant development in the intervening years and seems to be comparable in speed to the C implementation.
What is the "official" Java API for client/server or P2P communication? Java RMI? Some other networking API?
Is this official networking API the standard for both SE and EE?
I'm sure the answer is very context-specific, so let's take a look at a few instances:
You have two Swing clients installed on two machines connected to the same network (or the Internet), and you want either of them to send the other a primitive, such as the integer 4, or some POJO, like a "Widget" object
Same as #1 above, but between a Swing client and a fully-compliant Java EE back-end (implementing managed beans, app servers, the whole nine yards)
I don't have a specific application in mind, I'm just wondering what are the "norms" for client-client and client-server communication in the world of Java.
If being bound to Java isn't a problem, RMI is a pretty abstracted solution for "exchanging" data between client and server (especially when the data consists of Java classes that might be difficult or too much effort to represent as textual data). Just make sure your object implements Serializable, and almost anything can be transmitted over the wire.
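The Serializable part really is all the payload boilerplate RMI requires; a standalone sketch of the same mechanism (ObjectOutputStream is what RMI uses under the hood, and Widget here is a hypothetical POJO):

```java
import java.io.*;

public class WireDemo {

    // A hypothetical POJO; implementing Serializable is the only requirement.
    static class Widget implements Serializable {
        final String name;
        final int size;
        Widget(String name, int size) { this.name = name; this.size = size; }
    }

    // Turn any serializable object into bytes suitable for the wire.
    public static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(o);
        oos.flush();
        return bos.toByteArray();
    }

    // Reconstruct the object on the receiving side.
    public static Object deserialize(byte[] data)
            throws IOException, ClassNotFoundException {
        return new ObjectInputStream(new ByteArrayInputStream(data)).readObject();
    }

    public static void main(String[] args) throws Exception {
        Widget w = (Widget) deserialize(serialize(new Widget("gear", 4)));
        System.out.println(w.name + " " + w.size); // prints gear 4
    }
}
```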
If this doesn't fit the bill and you want to drop down to raw networking, the client-server socket framework Netty is a pretty good choice.
There's no such thing as the most official networking API in J2SE; all J2SE APIs are official in the sense that they are supported by Sun (now Oracle).
That said, you should choose your API based on the following criteria:
Do you (or your team) know how to use a particular API;
How simple/complex is this API to use;
What throughput are you aiming for? For performance-sensitive applications you may be forced to use binary protocol. For the rest of cases, you can use text-based protocol.
Between two clients, for example, a simple text-based protocol will suffice for passing POJOs, using something like Apache MINA or Google Protocol Buffers.
This will work between client and server as well.
Response to Zac's questions in comment:
A binary protocol's performance gain comes from the fact that you don't need to convert everything to text form and back -- you can just pass the binary representation of your application's memory with minimal changes, such as, in the case of the BSD Sockets API, converting from host byte order to network byte order. Unfortunately, I don't know the details of how RMI/Java serialization processes objects, but I'm sure it's still much faster than passing all data in readable form;
Yes, MINA and Protocol Buffers have Java APIs. They're just not part of the Java SE bundle; you have to download them separately. By the way, MINA can use either binary or readable serialization, depending on how you use it.
You should define the notion of 'good' somehow, for example by answering the questions I mentioned above. If you want to pass objects over the network, use RMI. If you don't, Netty or MINA will suffice; pick whichever you find easier to master.
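A rough illustration of the binary-versus-text trade-off described above, encoding one int and one double both ways (note that DataOutputStream already writes big-endian, i.e. network byte order):

```java
import java.io.*;

public class BinaryVsText {

    // Binary encoding: fixed-width fields, no parsing on the other end.
    public static byte[] binary(int id, double price) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(id);       // always 4 bytes, big-endian (network order)
        out.writeDouble(price); // always 8 bytes
        out.flush();
        return bos.toByteArray();
    }

    // A naive text protocol: decimal fields, newline-terminated.
    public static byte[] text(int id, double price) throws IOException {
        return (id + "\n" + price + "\n").getBytes("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        System.out.println(binary(123456789, 3.14159).length); // 12 bytes
        System.out.println(text(123456789, 3.14159).length);   // 18 bytes
    }
}
```

The gap widens further once the receiver also has to parse the text back into numbers, which is where most of the CPU cost of text protocols goes.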
For P2P, Sun at one point pushed JXTA pretty hard.
I wouldn't dare to use RMI for P2P communication.
RMI is pretty much the standard Java-to-Java protocol. It's built in and very simple to use. Most J2EE backends also communicate using RMI, although that's not the only possibility.
For J2SE, the most common choice is probably RMI or raw sockets.
J2EE uses a messaging bus that everyone (servers and clients) subscribes to, which is quite different from RMI-style solutions (although at the lowest level an implementation may still rely on RMI). It helps automate redundancy and failover. If you need this functionality, I believe it can be used in SE as well.
I haven't used J2EE for quite a while now, so this may have changed, but I doubt it. The messaging system was a core component of J2EE.
I'm developing a Java application that consists of a server and a client (possibly multiple clients in future) which may run on different hosts.
For communication between the two I currently use a custom protocol consisting of JSON messages that are sent over network sockets and converted back to Java Bean objects on both sides. However, as the application grows more complex, I notice that this approach doesn't meet my standards and is too unwieldy.
I'm looking for a well established, possibly standardized alternative.
I've looked at Remote Method Invocation (RMI) but read that the protocol is slow (big network overhead).
The technology I'm looking for should be lightweight (protocol- and library-wise), robust, maybe support compression (a big plus if it does!), maybe support encryption, and be well documented and well established (e.g., an Apache project). It should be as easy as calling a method on a remote object with RMI, but without its disadvantages.
What can you recommend?
Avro is an Apache project that is designed for cross-language RPC (see Thrift for its spiritual predecessor). It is fairly new (less than two years old), so it isn't as well-established as RMI, for example. You should still give it a chance, though; large projects like Cassandra are moving to Avro. Avro is also a sub-project under Hadoop and has been receiving healthy support from that community.
It is designed to be fast and to support multiple languages, so you will probably need to introduce another compilation step in which you translate an Avro IDL file into Java, although it isn't strictly necessary. The rest is typical RPC.
One nice thing about Avro is that its transport layers are independent of how data is represented. For example, it comes with various "transceivers" (their base communication class) for raw sockets, HTTP, and even local intra-process calls. HTTPS and SASL transceivers can provide security.
For representing data, there are encoders and decoders of various types, although the default BinaryEncoder generally suffices since Hadoop, Cassandra, etc. focus on efficiency. There is also a JsonEncoder in case you find that useful.
This really all depends on what kind of compatibility you require between client and server. CORBA is a well established and standardized way of communicating between different languages, but it requires a bit more effort to use than Java RMI. If the clients are running from some external, untrusted source, then an HTTP based protocol makes more sense. If you follow a REST approach, then it becomes easier to scale out later as you need to add more servers.
If both client and server are Java, and they are running within a trusted network, RMI meets your requirements for being "well established". Performance overhead of RMI is exaggerated, but very early versions did not pool connections.
If you're willing to toss away both "well established" and "standardized", you can use Dirmi as a substitute for RMI. It's faster, easier, has more features, and it doesn't have the firewall problems RMI has. Like RMI, it supports TLS (encryption), but neither supports built-in compression.
Whatever you choose, beware of lock-in. Try to design your server such that the remote access layer is a thin layer over the core code. This allows you to easily support multiple protocols, perhaps at the same time.
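The thin-layer idea in code: define the core service as a plain interface and keep each protocol binding as a small adapter around it. All names below are hypothetical; an RMI, REST, or Dirmi binding would each wrap the same core object the way the in-process one does here:

```java
// The core logic depends on nothing network-related.
interface QuoteService {
    String quoteFor(String symbol);
}

// Core implementation: protocol-agnostic and easily unit-tested.
class QuoteServiceImpl implements QuoteService {
    public String quoteFor(String symbol) {
        return symbol + "=42.00"; // stub pricing logic for the sketch
    }
}

// One thin transport adapter; its only job is decoding the request
// and delegating to the core. Swapping protocols swaps only this class.
class InProcessBinding {
    private final QuoteService core;
    InProcessBinding(QuoteService core) { this.core = core; }
    String handleRequest(String rawSymbol) {
        return core.quoteFor(rawSymbol.trim());
    }
}

public class ThinLayerDemo {
    public static void main(String[] args) {
        InProcessBinding binding = new InProcessBinding(new QuoteServiceImpl());
        System.out.println(binding.handleRequest(" ACME ")); // prints ACME=42.00
    }
}
```

Because each binding holds only a reference to the interface, several of them can serve the same core object at the same time, which is exactly the "multiple protocols, perhaps at the same time" scenario.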
Maybe CORBA?
Would you consider HTTP/REST?
If so, you can leverage something like Tomcat/Spring and still meet all the requirements you listed (robust, lightweight, well documented, well established).
The RPC-based protocols are simply antiquated.
Seriously, unless you're doing a web app that already requires the web baggage, you really do want RMI or, even better, CORBA. I recommend JacORB (www.jacorb.org).
Ignore general claims of slow/fast and perform your own performance tests.
Keep in mind that a software project is successful because it performs the useful function for which it was designed and intended, not because it uses the latest cool buzzword tech.
Good luck.
The Apache MINA library for client-server communication, combined with EJB3, will suit best.
I am looking to use an RPC framework for internal use. The framework has to be cross-language. I am exploring Apache Thrift right now. Google Protocol Buffers does not exactly provide RPC capabilities. What choices do I have apart from Thrift? (My servers will be primarily Java, and the clients will be Java, Python, and PHP.)
There is also MessagePack, which claims to be faster than Protocol Buffers and to have more features than Thrift.
I would look at REST as a first option because it is ubiquitous and no-nonsense.
If performance and representation really needs to be compact, I have heard good things about Apache AVRO and my fingers are twitching to try it out in anger.
There also seems to be ICE, which uses Google Protocol Buffers for RPC.