I'm new to RabbitMQ and am trying to implement an app where RpcClient and RpcServer seem to be a good fit. This is how the app works: when a request comes in, it calls RpcClient to enqueue the request and then waits for the response. On the server side, a listener dequeues the request, processes it, and then enqueues the response using RpcServer. In theory, this should work. I also found a page in the RabbitMQ documentation that explains how to improve performance by using direct reply-to: https://www.rabbitmq.com/direct-reply-to.html. However, I could not tell how to apply this when using com.rabbitmq.client.RpcClient and com.rabbitmq.client.RpcServer to implement my app. Could someone shed some light on this? Thanks!
com.rabbitmq.client.RpcClient and com.rabbitmq.client.RpcServer are two convenience classes that make it easy to implement the RPC pattern.
You can also implement it with the standard classes.
Read this post and also this one (using the standard classes).
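If you do go with the standard classes, direct reply-to boils down to consuming from the pseudo-queue amq.rabbitmq.reply-to in auto-ack mode and publishing your request with that same name as the reply-to property. Here is a minimal client-side sketch, assuming a recent Java client and a request queue named rpc_queue (both assumptions, not anything your server already has):

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DirectReplyToClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            String correlationId = UUID.randomUUID().toString();
            BlockingQueue<String> response = new ArrayBlockingQueue<>(1);

            // Consume from the pseudo-queue in auto-ack mode; no queue declaration is needed.
            channel.basicConsume("amq.rabbitmq.reply-to", true,
                    (consumerTag, delivery) -> {
                        if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                            response.offer(new String(delivery.getBody(), StandardCharsets.UTF_8));
                        }
                    },
                    consumerTag -> { });

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo("amq.rabbitmq.reply-to")   // tells the server where to send the reply
                    .build();

            // The server side reads the replyTo property and publishes its answer to it.
            channel.basicPublish("", "rpc_queue", props, "ping".getBytes(StandardCharsets.UTF_8));
            System.out.println("Got: " + response.take());
        }
    }
}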
Related
I want to use RabbitMQ to communicate between multiple applications which are deployed on different networks and are maintained by different people. As a receiver of a message (consumer), I want to be sure that the sender of the message (producer) is who he claims to be. The best approach I can think of for this would be message signing and verification of those signatures. As this is my first time doing anything with RabbitMQ, I am kind of stuck on how to implement this.
The message senders and receivers are Java applications. I've decided to use the Spring AMQP template to make things somewhat easier for me. In a perfect scenario, I would like to intercept the message when it's already a byte array/stream, sign this blob, and attach the signature as a message header. On the receiving end, I would again like to intercept the message before it's deserialized, verify the signature from the header against the blob, and only deserialize it if everything is OK. But I haven't found any means in Spring-Rabbit for doing this.
There is a concept of MessagePostProcessor in Spring-Rabbit, but when this is invoked, the message is still not fully serialized. This feels like a common problem that someone must have solved somewhere, but my research has left me empty-handed.
Currently I am using AmqpTemplate.convertAndSend for sending messages and @RabbitListener for receiving them. But I am not tied to Spring; I can use whatever I like. It just seemed like an easy way to get going. I am using Jackson for message serialization to/from JSON. The problem is how to intercept sending and receiving in the right place.
The backup plan is to put both the data and the signature in the body and join them with a wrapper, but this would mean double serialization and is not as clean as I would like the solution to be.
So, has anyone got experience with this stuff who can perhaps advise me on how to approach this problem?
There is a concept of MessagePostProcessor in Spring-Rabbit, but when this is invoked, the message is still not fully serialized.
I am not sure what you mean by that; the MessagePostProcessor is exactly what you need. The body is the byte[] that will be sent to RabbitMQ. You can use an overloaded convertAndSend method that takes an MPP, or add your MPP to the template (in the beforeSendMessagePostProcessors).
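As a rough sketch (the header name, key handling, and the choice of HMAC-SHA256 are my own assumptions, not anything prescribed by Spring AMQP), a signing post-processor on the sending side could look like this:

import java.security.GeneralSecurityException;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessagePostProcessor;

public class SigningPostProcessor implements MessagePostProcessor {

    private final SecretKeySpec key;

    public SigningPostProcessor(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    @Override
    public Message postProcessMessage(Message message) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            // message.getBody() is the already-serialized byte[] that goes on the wire
            byte[] signature = mac.doFinal(message.getBody());
            message.getMessageProperties().setHeader("x-signature",
                    Base64.getEncoder().encodeToString(signature));
            return message;
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Could not sign message", e);
        }
    }
}

It can then be passed to the overloaded send call, for example amqpTemplate.convertAndSend(exchange, routingKey, payload, new SigningPostProcessor(secretBytes)).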
On the receiving side, the listener container factory can be configured with afterReceiveMessagePostProcessors. Again, the body is the byte[] received from RabbitMQ.
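A matching verification sketch (same assumed header name and key handling) could look like this; if it throws, the message never reaches the converter or your @RabbitListener:

import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessagePostProcessor;

public class VerifyingPostProcessor implements MessagePostProcessor {

    private final SecretKeySpec key;

    public VerifyingPostProcessor(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    @Override
    public Message postProcessMessage(Message message) {
        try {
            Object header = message.getMessageProperties().getHeaders().get("x-signature");
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] expected = mac.doFinal(message.getBody());   // the raw byte[] from RabbitMQ
            if (!(header instanceof String)
                    || !MessageDigest.isEqual(expected, Base64.getDecoder().decode((String) header))) {
                throw new IllegalStateException("Signature verification failed");
            }
            return message;   // only now does it go on to the converter / @RabbitListener
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException("Could not verify message", e);
        }
    }
}

Registering it is a matter of adding it to the listener container (factory); in recent Spring AMQP versions that is a call along the lines of factory.setAfterReceivePostProcessors(new VerifyingPostProcessor(secretBytes)), but check the exact property name for the version you are on.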
This question might be a little abstract, but I'm trying to do something with Apache Camel and I'm stuck.
The basic scenario is this: I expose a web service A through Camel. In this service, there is content-based routing to decide whether I have to invoke B or C, and I'd like to invoke the right one and have the response from B or C be the response of my service A.
I have already exposed the web service with camel-cxf and it works very well, but I don't know how to go about the routing after that. This is what I have thought of:
from("cxf:bean:myServiceA").choice()
.when(new PredicateForServiceB())
.process(new ProcessorForServiceB())
.when(new PredicateForServiceC())
.process(new ProcessorForServiceC())
.otherwise()
.endChoice()
.to("log:output");
I'm not sure if this is the best way or even if this is correct, but it's what I came up with.
Now I don't know how I would implement those processors. I could just create a normal invocation to the services and build the output, but I'd like to do it with the Camel infrastructure.
Does anyone have any pointers on this? I'd be glad to provide more information if necessary.
Camel provides Bean binding, with which you don't need to touch much of the Camel API and can focus on the business logic in a POJO bean.
If you use the Processor API, you can handle the Exchange yourself; the response is sent back to the client if you set up the out message on the exchange.
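For example, here is a sketch of the route (Camel 2.x Java DSL, reusing your PredicateForServiceB/PredicateForServiceC; the serviceB/serviceC endpoint URIs are placeholders), where each branch simply calls the downstream service and its reply becomes the out message, and therefore the response of service A:

import org.apache.camel.builder.RouteBuilder;

public class ServiceARoute extends RouteBuilder {
    @Override
    public void configure() {
        from("cxf:bean:myServiceA")
            .choice()
                .when(new PredicateForServiceB())
                    .to("cxf:bean:serviceB")      // serviceB's reply becomes the exchange's out message
                .when(new PredicateForServiceC())
                    .to("cxf:bean:serviceC")
                .otherwise()
                    .process(exchange ->
                            exchange.getOut().setBody("no backend matched this request"))
            .end()
            .to("log:output");
    }
}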
I followed this NetBeans tutorial on creating an Enterprise Application using the IDE. I just wanted to know why a message-driven bean is preferred here for the save or persist method, and why not for the other database operations such as findAll?
https://netbeans.org/kb/docs/javaee/maven-entapp.html
Message-driven beans are asynchronous components. To illustrate the concept: asynchronous communication works pretty much like email. You send the email and that's it; you can only hope for the best and expect that the recipient processes your mail as soon as possible and replies back if necessary (in a different communication). On the other hand, synchronous communication works pretty much like a phone call: you get your response during the same communication, without the need to start a new one.
In your case, when a client invokes findAll he is quite likely expecting to get a list of results in the same communication (synchronously: 'server, give me right now all the customers in the system'), in which case an MDB (asynchronous) is simply useless. On the other hand, when a client invokes save he might not want to wait for an answer (asynchronously: 'server, just try to save this info, I don't need to know right now whether you succeeded or not').
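For illustration, the save side could be backed by an MDB roughly like this (the queue name, the text payload, and the Customer entity are assumptions made for the sketch; the caller just sends a JMS message and returns immediately):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/CustomerQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class SaveCustomerMDB implements MessageListener {

    @PersistenceContext
    private EntityManager em;

    @Override
    public void onMessage(Message message) {
        try {
            // The caller has already returned by now; the save happens whenever this is consumed.
            String name = ((TextMessage) message).getText();
            em.persist(new Customer(name));   // Customer is an assumed JPA entity
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}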
There's a lot more info here.
I'm learning how to use the Java Play Framework, and it talks about how you can do asynchronous server programming. By that I mean: if a result takes a long time to produce, you can return a promise of a result, informing the browser that a result will be returned.
Can I ask what this does in HTTP terms and how browsers commonly deal with it?
Also, can a result promise be returned to an AJAX call?
Nothing is returned to the browser before the HTTP response is created by the server. This asynchronicity is purely internal to the Play application and is invisible to the client. It's a bit complicated to explain here; this could help you understand what's going on: http://www.playframework.com/documentation/2.1.x/ThreadPools
If you'd like to learn more, take a look at Akka (Play is based on it): http://akka.io/ or I can also recommend an excellent course: https://www.coursera.org/course/reactive
To answer your second question, yes of course you can handle AJAX requests asynchronously as well.
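For example, with the Java API (this assumes Play 2.2+ with Java 8 lambdas; the method names in the sketch are illustrative), an action can return a promise, and the browser or AJAX caller still just sees an ordinary HTTP response once the promise completes:

import play.libs.F;
import play.mvc.Controller;
import play.mvc.Result;

public class Application extends Controller {

    // The request is suspended without blocking a thread; the response goes out when the promise completes.
    public static F.Promise<Result> report() {
        return F.Promise.promise(() -> expensiveComputation())   // runs on a Play thread pool
                        .map(value -> ok("result: " + value));
    }

    private static int expensiveComputation() {
        return 42;   // placeholder for the long-running work
    }
}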
In my app I have, for example, 3 logical blocks, created by the user in this order:
FirstBlock -> SecondBlock -> ThirdBlock
There is no class inheritance between them (none of them extends any other), but a logical hierarchy exists (for example, Image contains Area, which contains Message). Sorry, I'm not strong on terminology; I hope you'll understand me.
Each block sends requests to the server (to create information about it on the server side) and then handles the responses independently (but using the same implementation of the HTTP client), just like in this image (red lines are responses, black lines are requests).
http://s2.ipicture.ru/uploads/20120121/z56Sr62E.png
Question
Is this a good model? Or would it be better to create some controller class that sends requests on its own and then handles the responses and redirects the results to my blocks? Or should the implementation of the HTTP client be the controller itself?
P.S. If I forgot to provide some information, please tell me. Also, if there are errors in my English, please edit the question.
Here's why I would go with a separate controller class to handle the HTTP requests and responses:
Reduce code duplication (do you really need three separate HTTP implementations?)
If/when the communication protocol between your app and the server changes, you have to rewrite all your classes. Say, for example, you add another field to your response payload and your app isn't built to handle it; you now have to rewrite FirstBlock, SecondBlock, and ThirdBlock. Not ideal.
Modify your implementation of the HTTP client into a controller class such that:
All HTTP requests/responses go through it
It is responsible for routing the responses to the appropriate class.
Advantages?
If/when you change the communication protocol, all the relevant code is in this controller class and you don't have to touch FirstBlock, SecondBlock, or ThirdBlock
Debugging your HTTP requests!
I would suggest that your 3 blocks not deal with HttpClient directly. They should each deal with some interface which handles the remote connection, the sending of the request, and the processing of the results. For example:
public interface FirstBlockConnector {
    public SomeResultObject askForSomeResult(SomeRequestObject request);
}
Then the details of the HTTP request and response live in the connector implementations. You may find that you only need one connector that implements all 3 RPC interfaces. Once you separate out the RPC mechanism, you can factor the code that actually deals with the HttpClient object into a common place. You can also swap out HTTP for another RPC mechanism without changing your block code.
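For instance, a single shared implementation could look roughly like this (the class name, URL, and the use of the older Apache HttpClient API are assumptions; toJson/fromJson stand in for whatever serialization the app already uses):

import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;

public class HttpBlockConnector implements FirstBlockConnector {

    private final HttpClient httpClient = new DefaultHttpClient();
    private final String baseUrl;

    public HttpBlockConnector(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    @Override
    public SomeResultObject askForSomeResult(SomeRequestObject request) {
        try {
            // The one place in the app that knows about HTTP details.
            HttpPost post = new HttpPost(baseUrl + "/first-block");
            post.setEntity(new StringEntity(toJson(request), "UTF-8"));
            String body = EntityUtils.toString(httpClient.execute(post).getEntity());
            return fromJson(body);
        } catch (Exception e) {
            throw new RuntimeException("Request to server failed", e);
        }
    }

    private String toJson(SomeRequestObject request) {
        return "{}";   // placeholder: plug in whatever serialization the app already uses
    }

    private SomeResultObject fromJson(String body) {
        return new SomeResultObject();   // placeholder: plug in the real parsing
    }
}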
As for controllers, I think of them as a web-server-side term rather than something for the client, but maybe you meant a connector like the one above.
Hope this helps.