I want to use ActiveMQ as a message broker communicating between a C++ component and a Java component in two processes. E.g., the C++ component is the publisher and the Java component is the subscriber (there may be multiple subscribers). I looked at the ActiveMQ website and it mentions OpenWire and ActiveMQ-CPP. However, all the examples on the website use the same language for both producer and consumer.
My questions are:
1. Can ActiveMQ work with a producer and consumer written in different languages?
2. Can they run in different processes? How?
OpenWire is a protocol and hence can theoretically be implemented anywhere, but that doesn't mean full implementations exist for every language. The fine print of the C++ client says:
"As of version 2.0, ActiveMQ-CPP supports the OpenWire v2 protocol, with a few exceptions.
ObjectMessage - We cannot reconstruct the object(s) contained in an ObjectMessage in C++, so if your application is subscribed to a queue or topic that has an ObjectMessage sent to it, you will receive the message but will not be able to extract an Object from it."
So if you want to send data across processes, you write your C++ and Java components to use the API (making sure not to use ObjectMessage types if you're using ActiveMQ-CPP). Then run the ActiveMQ server... tell your programs to connect to it, and it should work.
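For the Java subscriber side, a minimal sketch using the standard JMS API might look like the following; the broker URL tcp://localhost:61616 and the topic name "example.topic" are placeholders for this sketch, not anything mandated by ActiveMQ:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ExampleSubscriber {
    public static void main(String[] args) throws Exception {
        // Broker URL and topic name are placeholders for this sketch.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("example.topic");
        MessageConsumer consumer = session.createConsumer(topic);

        // Block until a message arrives; the C++ publisher would send
        // e.g. a TextMessage or BytesMessage (not an ObjectMessage).
        TextMessage message = (TextMessage) consumer.receive();
        System.out.println("Received: " + message.getText());

        connection.close();
    }
}
```

The C++ publisher would do the equivalent with ActiveMQ-CPP, connecting to the same broker URL and topic.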
But if you're really just trying to do interprocess communication when you control both clients, this could be a bit heavy-handed. You might be interested in the responses to "What is the best approach for IPC between Java and C++?" and "Good alternative to shared memory IPC for Java/C++ apps on Linux".
Directly from ActiveMQ's front page:
Supports a variety of Cross Language Clients and Protocols from Java, C, C++, C#, Ruby, Perl, Python, PHP
* OpenWire for high performance clients in Java, C, C++, C#
* Stomp support so that clients can be written easily in C, Ruby, Perl, Python, PHP, ActionScript/Flash, Smalltalk to talk to ActiveMQ as well as any other popular Message Broker
We have tested it with PHP (using Stomp) and Java (using OpenWire).
Regarding processes: the various producers and consumers can of course be in totally different processes, communicating over e.g. TCP or SSL.
Related
What is the difference between the two (JMS and AMPS), given that both are based on the same methodology of publishers and subscribers exchanging messages via topics/subjects?
I recently came across this post and, as an employee of 60East Technologies, felt that it deserved a more complete response.
If you're asking in terms of "what role in an architecture do these serve", then you're right: both of these fall in the broad category of message-oriented middleware. They're both ways to coordinate and exchange data between processes based on the concept of messages as the units of data exchanged.
JMS is a standard API for Java, and one of the more popular ways of handling messaging. There are multiple implementations from multiple vendors. Since it's a standard, these are all similar in interface and have distinct implementations. Products that support JMS can also support wire standards such as AMQP to provide a level of interoperability for components that aren't written in Java.
AMPS (Advanced Message Processing System) is a bit less widely-known. It's a messaging product developed by 60East Technologies, Inc. Since it's a product rather than a standard, there's one implementation. It's a broker-based system, so in AMPS all message traffic passes through the broker. AMPS supports multiple programming languages (right now, there are clients available for Java, C#/.Net, Python, JavaScript, and C++). AMPS supports a variety of message payload formats (FIX, JSON, XML, Protocol Buffers, MessagePack, etc.). AMPS also supports a few different styles of message delivery: message queues (as JMS does), fan-out publish and subscribe, "query and subscribe" where an application gets current values for a set of records and then receives push updates when the records change, and historical replay ("bookmark subscribe") that exactly replays a stream of messages any number of times. AMPS also provides things like inline message transformation/enrichment, the ability to aggregate messages and project views (similar to the way an RDBMS can project a view of an underlying table).
AMPS was initially designed for very high-volume and low-latency applications (things like crossing engines/crossing networks in the financial sector). AMPS emphasizes performance, and takes a "whole-system" view of performance. That is, performance is considered from the point at which a producer starts to send a message to the point at which a consumer can act on the message, not just with regards to time in the broker.
To sum it up: AMPS is a product rather than a standard, supports multiple programming languages, provides a wide variety of capabilities beyond message queues, and is designed for very high performance.
Ryan
60East Technologies
JMS is a Java-based API for asynchronous messaging supporting both point-to-point and pub-sub semantics. It can be implemented by anyone. Apache ActiveMQ is probably the most popular and well-known JMS implementation, although there are numerous implementations.
AMPS is a proprietary messaging system developed by 60East Technologies which appears to only support pub-sub semantics.
I'm building a multiplayer card game using Flex on the client side and Java on the server side, and I wanted to know whether I must use sockets and the accept method to connect users to the server so that they can join a game room, create one, or chat.
In the past I learned how to build a game server where both sides were Java and the connection used sockets, but nowadays the client side will be in Flex, which has a few ways to connect to a Java server (XML, SOAP, BlazeDS (AMF)), and I find it hard to understand how to write the Java server to implement all the features of a game server, especially managing the rooms and sending data back to the users.
With sockets, when a user connected to the server and opened a room, that room ran on its own thread, and whoever joined the room was connected to the same thread, so sending messages to the right place was easy. The problem is understanding how to do the same using SOAP or BlazeDS.
Any help would be appreciated.
Thanks.
Please make your questions concise; it is difficult to know what is being asked.
If you're asking about the difference between sockets and web services: sockets manage the basic network communication, and over them you can send/receive bytes in whatever format/protocol you choose.
SOAP/web services is just one such format. Its advantage is that it is a standard way of encoding messages, so you can easily write code that connects to your service on most platforms, and the messages are human-readable. The main disadvantage is performance, both in bandwidth and processing power (especially parsing at the receiving end).
If you are just starting out, I would advise designing a format tailored to your application, to keep things simple.
Take a look at RED5 and remoteSharedObjects. Using this tech, you can essentially put your "game" object in a remote shared object, and all the clients will have the same object with real-time updates. Then on top of that you can use AMF (the protocol behind BlazeDS) for your less dynamic data.
Using raw sockets gives you the benefit of control: control of your protocol format (how your message data is structured). Because of this you can tweak your messaging to be more secure, faster, or more robust, depending on your application requirements. All that control comes at the cost of complexity and maintenance: because you get to say exactly what you want to send and how you want to send it, you need to write and debug a lot more code. Another issue with raw socket communication is that it has a significantly greater chance of being blocked by firewalls.
Using web services removes some of the complexity of deciding on a message format (that being their main benefit). You don't have to worry (as much) about things like byte endianness, string encodings, or data conversions. As such, web services really excel at data communication among heterogeneous clients and servers where interoperability is key. The cost is that they are relatively complicated to serialize/deserialize and, as such, slower than binary messaging formats. Web services are good to use when you have to communicate with client applications that you have little control over (not really your case). Web services are traditionally tunneled through HTTP, so there is an additional advantage in being able to worry less about a firewall blocking access to your game.
BlazeDS attempts to bridge both worlds: it gives you some of the robust features of web services (fallback communication options, firewall interoperability, etc.), but uses its own binary format for serialization/deserialization. This gives it some of the speed of raw sockets without a lot of the downsides. I think it's a great candidate to explore, but if you find yourself needing more speed, then raw sockets would be worth messing around with.
Good luck.
Sockets are the programmatic interface to the OSI Level 4 Transport Layer. Everybody uses them; web services, for example, are a Level 7 Application Layer interface that hides the lower levels.
If you need real-time bidirectional data exchange between your client and server you're better off managing your own TCP sockets. Flex still supports sockets.
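For reference, here is a bare-bones sketch of the thread-per-connection socket approach the question describes; the port number is made up, and a trivial echo handler stands in for real room/chat logic:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class SimpleGameServer {
    public static void main(String[] args) throws Exception {
        // Port number is arbitrary for this sketch.
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                // One thread per connected client; a real game server would
                // also track which room each client belongs to and fan
                // messages out to everyone in that room.
                new Thread(() -> handleClient(client)).start();
            }
        }
    }

    private static void handleClient(Socket client) {
        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Echo back; a real server would parse the line as a
                // command (join room, chat, play card) and route it.
                out.println("echo: " + line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```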
What is the "official" Java API for client/server or P2P communication? Java RMI? Some other networking API??
Is this official networking API the standard for both SE and EE?
I'm sure the answer is very context-specific, so let's take a look at a few instances:
You have two Swing clients installed on two machines and connected to the same network (or the Internet), and you want either of them to send the other a primitive, such as the integer 4, or some POJO, like a "Widget" object
Same as #1 above, but between a Swing client and a fully-compliant Java EE back-end (implementing managed beans, app servers, the whole nine yards)
I don't have a specific application in mind, I'm just wondering what are the "norms" for client-client and client-server communication in the world of Java.
If being bound by Java isn't a problem, RMI is a pretty abstracted solution when it comes to the client and server "exchanging" data (especially when the data consists of Java classes that might be difficult or too much effort to represent as textual data). Just make sure your object implements Serializable, and almost anything can be transmitted over the wire.
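As a rough illustration (the Widget and WidgetService names are invented for this sketch):

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Any Serializable object can travel over the wire.
class Widget implements Serializable {
    int value = 4;
}

// The remote interface both sides share.
interface WidgetService extends Remote {
    Widget getWidget() throws RemoteException;
}

public class RmiServer implements WidgetService {
    public Widget getWidget() {
        return new Widget();
    }

    public static void main(String[] args) throws Exception {
        RmiServer server = new RmiServer();
        WidgetService stub =
                (WidgetService) UnicastRemoteObject.exportObject(server, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("WidgetService", stub);
        // A client would then do:
        //   Registry r = LocateRegistry.getRegistry("serverhost", 1099);
        //   WidgetService svc = (WidgetService) r.lookup("WidgetService");
        //   Widget w = svc.getWidget();
    }
}
```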
If this doesn't fit your bill and you want to drop down to the raw networking level, the client-server socket framework Netty is a pretty good choice.
There's no such thing as the most official networking API in J2SE; all J2SE APIs are official in the sense that they are supported by Sun (now Oracle).
That said, you should choose your API based on following criteria:
Do you (or your team) know how to use a particular API;
How simple/complex is this API to use;
What throughput are you aiming for? For performance-sensitive applications you may be forced to use a binary protocol; for the rest, a text-based protocol will do.
For example, between two clients a simple text-based protocol will suffice for passing POJOs, using something like Apache MINA or Google Protocol Buffers.
This will work between client and server as well.
Response to Zac's questions in comment:
A binary protocol's performance gain comes from the fact that you don't need to convert everything to text form and back; you can pass a binary representation of your application's memory with minimal changes, such as converting from host byte order to network byte order in the case of the BSD sockets API (see the sketch after this answer). Unfortunately, I don't know the details of how RMI/Java serialization processes objects, but I'm sure it is still much faster than passing all the data in readable form;
Yes, MINA and Protocol Buffers have Java APIs. They're just not part of the Java SE bundle; you have to download them separately. By the way, MINA can use both binary and readable serialization, depending on how you use it.
You should define the notion of 'good' somehow, for example by answering the questions I mentioned above. If you want to pass objects over the network, use RMI. If you don't, Netty or MINA will suffice; use whichever you find easier to master.
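To illustrate the binary-versus-text point with plain Java (no MINA or Protocol Buffers involved), here is a toy comparison; the value is arbitrary:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.nio.charset.StandardCharsets;

public class BinaryVsText {
    public static void main(String[] args) throws Exception {
        int value = 123456789;

        // Binary: DataOutputStream writes the int as 4 bytes in network
        // byte order (big-endian); no parsing is needed on the other side.
        ByteArrayOutputStream binary = new ByteArrayOutputStream();
        new DataOutputStream(binary).writeInt(value);

        // Text: the same value as a decimal string takes 9 bytes here and
        // must be parsed back with Integer.parseInt at the receiving end.
        byte[] text = Integer.toString(value).getBytes(StandardCharsets.US_ASCII);

        System.out.println("binary: " + binary.size()
                + " bytes, text: " + text.length + " bytes");
    }
}
```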
For P2P, Sun at one point pushed JXTA pretty hard.
I wouldn't dare to use RMI for P2P communication.
RMI is pretty much the standard Java-to-Java protocol. It's built in and very simple to use. Most J2EE backends also communicate using RMI, although that's not the only possibility.
For J2SE, the most common choice is probably RMI or raw sockets.
J2EE uses a messaging bus that everyone (servers and clients) subscribes to, which is quite different from RMI-style solutions (although at the lowest level an implementation may still rely on RMI). It helps automate redundancy and failover. If you need this functionality, I believe it can be used in SE as well.
I haven't used J2EE for quite a while now, so this may have changed, but I doubt it. The messaging system was a core component of J2EE.
I'm developing a Java application that consists of a server and a client (possibly multiple clients in future) which may run on different hosts.
For communication between these two I currently use a custom protocol consisting of JSON messages that are sent over network sockets and converted back to Java Bean objects on both sides. However, as the application gets more complex, I notice that this approach doesn't meet my standards and is too complex.
I'm looking for a well established, possibly standardized alternative.
I've looked at Remote Method Invocation (RMI) but read that the protocol is slow (big network overhead).
The technology I'm looking for should be lightweight (protocol- and library-wise), robust, maybe support compression (big plus if it does!), maybe support encryption, be well documented and well established (e.g. an Apache project). It should be as easy as calling a method on a remote object with RMI but without its disadvantages.
What can you recommend?
Avro is an Apache project that is designed for cross-language RPC (see Thrift for its spiritual predecessor). It is fairly new (less than two years old), so it isn't as well-established as RMI, for example. You should still give it a chance, though; large projects like Cassandra are moving to Avro. Avro is also a sub-project under Hadoop and has been receiving healthy support from that community.
It is designed to be fast and support multiple languages, so you will probably need to introduce another step during compilation in which you translate an Avro IDL file into Java, although it isn't strictly necessary. The rest is typical RPC.
One nice thing about Avro is that its transport layers are independent of how data is represented. For example, it comes with various "transceivers" (their base communication class) for raw sockets, HTTP, and even local intra-process calls. HTTPS and SASL transceivers can provide security.
For representing data, there are encoders and decoders of various types, although the default BinaryEncoder generally suffices, since Hadoop, Cassandra, etc. focus on efficiency. There is also a JsonEncoder in case you find that useful.
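As a rough sketch of the encoding side only (not the RPC transceivers), assuming a trivial record schema invented for this example:

```java
import java.io.ByteArrayOutputStream;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroEncodeSketch {
    public static void main(String[] args) throws Exception {
        // The "Ping" schema is invented for this example.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Ping\","
                + "\"fields\":[{\"name\":\"payload\",\"type\":\"string\"}]}");

        GenericRecord record = new GenericData.Record(schema);
        record.put("payload", "hello");

        // BinaryEncoder produces the compact default representation.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();

        System.out.println("encoded " + out.size() + " bytes");
        // Swapping in EncoderFactory.get().jsonEncoder(schema, out) would
        // produce the JSON representation instead.
    }
}
```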
This really all depends on what kind of compatibility you require between client and server. CORBA is a well established and standardized way of communicating between different languages, but it requires a bit more effort to use than Java RMI. If the clients are running from some external, untrusted source, then an HTTP based protocol makes more sense. If you follow a REST approach, then it becomes easier to scale out later as you need to add more servers.
If both client and server are Java, and they are running within a trusted network, RMI meets your requirements for being "well established". Performance overhead of RMI is exaggerated, but very early versions did not pool connections.
If you're willing to toss away both "well established" and "standardized", you can use Dirmi as a substitute for RMI. It's faster, easier, has more features, and it doesn't have the firewall problems RMI has. Like RMI, it supports TLS (encryption), but neither supports built-in compression.
Whatever you choose, beware of lock-in. Try to design your server such that the remote access layer is a thin layer over the core code. This allows you to easily support multiple protocols, perhaps at the same time.
Maybe CORBA?
Would you consider HTTP/REST?
If so, you can leverage something like Tomcat/Spring and still support all the requirements you listed (robust, lightweight, well documented, well established).
The RPC based protocols are simply antiquated.
Seriously, unless you're doing a web app that already requires the web baggage, you really do want RMI or, even better, CORBA. I recommend JacORB (www.jacorb.org).
Ignore general claims of slow/fast and perform your own performance tests.
Keep in mind that a software project is successful because it performs the useful function for which it was designed and intended, not because it uses the latest cool buzzword tech.
Good luck.
The Apache MINA library for client-server communication plus EJB3 will suit best.
According to this Wikipedia entry:
"Protocol Buffers is very similar to Facebookâs Thrift protocol, except it does not include a concrete RPC stack to use for defined services. Since Protocol Buffers was open sourced, a number of RPC stacks have emerged to fill this gap."
However, there are no examples of RPC stacks cited. Can anyone suggest a Java-based implementation of an RPC stack?
If you want a Java-based RPC stack, it's RMI. However, it doesn't work well cross-platform.
I've been using ProtoBuf to do RPC. You can pretty much simulate an RPC stack by wrapping a protobuf message inside another protobuf that defines the services or calls. See my answer to this question for details:
Google Protocol Buffers and HTTP
Thrift looks like a very good alternative if you want to support more platforms, like PHP, Ruby, C#, etc. However, it looks very complex to me compared to ProtoBuf.
Google has open-sourced its RPC framework, gRPC, which uses Protocol Buffers to define the service and messages. gRPC is cross-platform, with support for C, C++, C#, Java, Go, Node.js, Python, Ruby, Objective-C and PHP.
gRPC is based on the HTTP/2 standard that enables new capabilities such as bidirectional streaming, flow control, header compression and multiplexed connections.
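For a feel of what this looks like in Java, here is a hedged client sketch based on the Greeter service from gRPC's own hello-world example; the GreeterGrpc, HelloRequest and HelloReply classes are generated from that example's .proto file, not part of the gRPC library itself:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Generated by the protobuf compiler from the hello-world .proto in the
// gRPC examples; they are placeholders for your own generated service.
import io.grpc.examples.helloworld.GreeterGrpc;
import io.grpc.examples.helloworld.HelloReply;
import io.grpc.examples.helloworld.HelloRequest;

public class GrpcClientSketch {
    public static void main(String[] args) {
        // Host and port are placeholders for this sketch.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 50051)
                .usePlaintext()
                .build();

        // Blocking stub: a simple unary call over the HTTP/2 channel.
        GreeterGrpc.GreeterBlockingStub stub = GreeterGrpc.newBlockingStub(channel);
        HelloReply reply = stub.sayHello(
                HelloRequest.newBuilder().setName("world").build());
        System.out.println(reply.getMessage());

        channel.shutdown();
    }
}
```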