I am trying to implement the following scenario and could really use some help. I am using ActiveMQ 5.14 with Camel 2.21.
In the queue, each message corresponds to a single machine. The machines connect to the queue through a single polling consumer and are indistinguishable to the consumer. A message should be kept in the queue until a machine acknowledges, via a separate request, that the message has reached the correct machine. After each fetch, the fetched message should be locked for a certain time.
I could not find any ActiveMQ functionality that maps to my problem. My approach would be to send the message after each fetch to a second queue, which would serve as a lock mechanism, and send it back to the fetchable queue after the specified timeout.
Maybe a better approach would be to roll back the session after each fetch if the message has not been acknowledged by the machine.
Do you have any suggestions what a viable solution to this problem would look like?
edit: more details to clarify the situation
The application communicates with the clients by exposing a REST API to the web with two calls: GET and DELETE.
GET fetches the next message from the queue and DELETE deletes the message from the queue. I need to make sure that a message is only fetched once in a given time period and that it makes its way back to the queue if the client doesn't send a DELETE request. Currently I have a route from the REST service to a bean which fetches a message from the queue, returns it to the GET request, and then sends it back to the queue. On a DELETE request I dequeue the message with the given id.
I still need to find a way to ensure that the last fetched message can't be accessed for a specified time period.
I am a bit confused about the part with the indistinguishable machines, but I understood the following:
You have 1 queue with messages
You have 1 consumer
The consumer takes a message and calls a service or similar
If the call is successful the message can be deleted
If the call fails the message must be reprocessed
If these assumptions are correct, you can build a Camel route that consumes messages from the queue (transacted) and calls the service; a sketch follows after this list.
If the Camel route fails to complete (service returns error) and the error is not handled, the broker does a rollback and the message is redelivered (immediately)
If the route fails multiple times (when max redelivery value is reached), the message is sent to the dead letter queue (by the broker) to move it out of the way but save it
If the route succeeds the message consumption is committed to the broker and the message deleted
In such a setup you could also configure more consumers to process the messages in parallel (if the called service allows this)
This behaviour is more or less the default behaviour if
Your broker is configured as persistent (avoid message loss)
You consume messages transacted (a local transaction with the broker is enough)
Camel does not handle the errors (when they are handled, the message is committed because the broker does not "see" any error)
You get an error from the service or you can at least decide if there was a problem and throw the error yourself. The broker must get an Exception so that a rollback is done
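A minimal sketch of such a transacted route in the Camel Java DSL could look like this. The endpoint URI, queue name and the "machineService" bean are placeholders, and the redelivery/DLQ behaviour comes from the broker's redelivery policy rather than from the route itself:

    import org.apache.camel.builder.RouteBuilder;

    // Minimal sketch, assuming an ActiveMQ component registered as "activemq"
    // and a bean named "machineService" in the registry (both placeholders).
    public class TransactedConsumerRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("activemq:queue:machines?transacted=true")
                // an exception thrown by the service is not handled here, so the
                // broker rolls the message back, redelivers it and eventually
                // moves it to the dead letter queue
                .bean("machineService", "process")
                // reaching the end of the route commits the consumption
                .log("processed ${header.JMSMessageID}");
        }
    }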
EDIT
Based on your clarification I understand that it is the other way round than I assumed.
Well, then I would probably see the two request types as "workflow steps", since they are triggered by the clients. A rough route sketch follows after the lists below.
GET
Consume a message, send it to the requestor
Add a timestamp to the message header
Send the message to another queue (let's call it delivered)
DELETE
Dequeue the message from the delivered queue
Not deleted messages
Use the timestamp header and message selectors to consume not deleted messages after a certain amount of time
Move them back to the source queue
With a second queue you gain several advantages:
Messages in processing cannot be consumed again and therefore need no "lock"
The source queue contains only waiting messages, the delivered queue only messages in processing
You could increase message priority when sending not deleted messages back to the source queue so they are re-consumed fast
You could also add a counter header when sending not deleted messages back to the source queue to identify messages that are failed multiple times and process them in another way.
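A rough sketch of this two-queue idea in the Camel Java DSL might look like the following. The queue names, the "deliveredAt" header and the lock period are invented for illustration, not taken from the question:

    import org.apache.camel.ConsumerTemplate;
    import org.apache.camel.Exchange;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.builder.RouteBuilder;

    public class DeliveredQueueRoutes extends RouteBuilder {

        private static final long LOCK_MILLIS = 60_000; // arbitrary lock period

        @Override
        public void configure() {
            // GET: pull one message, stamp it and park it in the "delivered" queue
            // (a real route would also handle the case where no message is available)
            from("direct:get")
                .pollEnrich("activemq:queue:source", 1000)
                .process(e -> e.getIn().setHeader("deliveredAt", System.currentTimeMillis()))
                .to("activemq:queue:delivered");

            // periodically move messages whose lock has expired back to the source queue
            from("timer:requeue?period=10000")
                .process(exchange -> {
                    long cutoff = System.currentTimeMillis() - LOCK_MILLIS;
                    ConsumerTemplate consumer = exchange.getContext().createConsumerTemplate();
                    ProducerTemplate producer = exchange.getContext().createProducerTemplate();
                    Exchange expired;
                    // the JMS selector only matches messages stamped before the cutoff
                    while ((expired = consumer.receive(
                            "activemq:queue:delivered?selector=deliveredAt<" + cutoff, 500)) != null) {
                        producer.send("activemq:queue:source", expired);
                    }
                });
        }
    }

The DELETE side would then consume the acknowledged message from the delivered queue, for example with a selector on JMSMessageID.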
Sometimes, RabbitMQ might become unavailable or the queue you are trying to put messages into won't take the message for any reason.
I'm wondering if there is a way to check the 'transportability' of a RabbitMQ queue without having to actually attempt to put a real message in the queue and clutter it with potentially bad or useless messages.
Is there a trick I might be able to use in order to test that I can transport to a queue without actually transporting a message?
I assume you are using a channel to publish messages into the queue. If so, have you tried checking whether the channel is open with channel.isOpen()? If it returns false, you can try to get another channel or implement a reconnection. I also use a scheduled task that runs every 5 seconds and checks whether the channel is open.
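A minimal sketch of that periodic check with the RabbitMQ Java client; the check interval and the reconnection strategy are only illustrative:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ChannelWatchdog {

        private final ConnectionFactory factory = new ConnectionFactory();
        private Connection connection;
        private volatile Channel channel;

        public void start() throws Exception {
            connection = factory.newConnection();
            channel = connection.createChannel();

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                // if the channel died, try to obtain a fresh one before publishing again
                if (channel == null || !channel.isOpen()) {
                    try {
                        channel = connection.createChannel();
                    } catch (Exception e) {
                        // the connection itself may be gone; a full reconnect would go here
                    }
                }
            }, 5, 5, TimeUnit.SECONDS);
        }

        public Channel channel() {
            return channel;
        }
    }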
I'm developing a websocket application by using Netty. I'd like to know if a message is really delivered from a source to a destination. In particular, let's assume that a client and a server have an open channel and exchange some messages for a while. At a certain point, the client goes down, but the channel is still active in Netty. I tried to use isReachable() before sending the message, but this method seems to be buggy in some scenarios (e.g. a machine with Win7 is up, but isReachable() returns false). Now, my idea is to implement a mechanism using ACKs, namely the server sends the message and the client sends back an ack. To do that, I need a timeout to see if, after a certain interval, the corresponding ack does not arrive. Is there something similar in Netty?
Regarding isReachable() - it's only a best effort API. The documentation points out that it tries to send an ICMP echo request or create a TCP connection to port 7 on the destination host, both of which are highly likely to be blocked by a firewall. Is this happening in your case?
As for the acknowledgement, there's nothing in Netty that provides this as standard, but it shouldn't be too difficult to implement. Firstly, each message needs to be uniquely identifiable by some sort of identifier, possibly a sequence number, although a globally unique identifier means you can potentially recover across disconnections. Then you want to create a combined handler that implements both ChannelInboundHandler and ChannelOutboundHandler (assuming Netty 4). When a message is sent:
add the message to a map indexed by its id
create a timer associated with the message id. Add it to another map indexed by message id
forward the message
When the ACK is received cancel the timer and remove the timer and message from their respective maps. If the timer fires use the associated id to decide what to do with the timer and message (possibly retransmit and reset the timer).
Netty provides a HashedWheelTimer for efficiently managing lots of timers with a resolution suitable for this kind of activity.
You may also want to consider putting a limit on the number of retries so you can stop and raise an error rather than retrying indefinitely.
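A rough sketch of such a combined handler (Netty 4, extending ChannelDuplexHandler, which implements both interfaces). The Message/Ack types, the timeout value and the retransmit strategy are assumptions made for illustration:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.TimeUnit;

    import io.netty.channel.ChannelDuplexHandler;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelPromise;
    import io.netty.util.HashedWheelTimer;
    import io.netty.util.Timeout;

    public class AckHandler extends ChannelDuplexHandler {

        // placeholder message types, assumed to be produced by earlier codec handlers
        public interface Message { String getId(); }
        public interface Ack { String getMessageId(); }

        private final HashedWheelTimer timer = new HashedWheelTimer();
        private final Map<String, Message> pending = new ConcurrentHashMap<>();
        private final Map<String, Timeout> timeouts = new ConcurrentHashMap<>();

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            if (msg instanceof Message) {
                Message message = (Message) msg;
                pending.put(message.getId(), message);
                // if no ACK arrives within 10 seconds, retransmit
                timeouts.put(message.getId(), timer.newTimeout(t -> {
                    Message unacked = pending.get(message.getId());
                    if (unacked != null) {
                        // naive retransmit; a retry counter and an upper limit belong here
                        ctx.writeAndFlush(unacked);
                    }
                }, 10, TimeUnit.SECONDS));
            }
            super.write(ctx, msg, promise); // forward the message down the pipeline
        }

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
            if (msg instanceof Ack) {
                String id = ((Ack) msg).getMessageId();
                Timeout timeout = timeouts.remove(id);
                if (timeout != null) {
                    timeout.cancel(); // ACK arrived in time: drop the timer and the message
                }
                pending.remove(id);
                return; // the ACK itself needs no further processing
            }
            super.channelRead(ctx, msg);
        }
    }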
What is the best technology solution (framework/approach) to put a request queue in front of a REST service, so that I can increase the number of instances of the REST service for higher availability, and so that the request queue in front forms a service/transaction boundary for the service clients?
I need a good and lightweight technology/framework choice for the request queue (Java)
An approach to implement a competing consumer with it.
There's a couple of issues here, depending on your goals.
First, it only promotes availability of the resources on the back end. Consider if you have 5 servers handling queue requests on the back end. If one of those servers goes down, then the queued request should fall back into the queue and be redelivered to one of the remaining 4 servers.
However, while those back end servers are processing, the front end servers are holding on to the actual, initiating requests. If one of those front end servers fails, then those connections are lost completely, and it will be up to the original client to resubmit the request.
The premise perhaps is that simpler front end systems are at a lower risk of failure, and that's certainly true for software-related failures. But network cards, power supplies, hard drives, etc. are pretty agnostic to such false hopes of man and punish all equally. So, consider this when talking about overall availability.
As to design, the back end is a simple process waiting on a JMS message queue and processing each message as it arrives. There are a multitude of examples of this available, and any JMS server will suit at a high level. All you need is to ensure that the message handling is transactional, so that if message processing fails, the message remains in the queue and can be redelivered to another message handler.
Your JMS queue's primary requirement is being clusterable. The JMS server itself is a single point of failure in the system. Lose the JMS server, and your system is pretty much dead in the water, so you'll need to be able to cluster the server and have the consumers and producers handle failover appropriately. Again, this is JMS-server specific; most do it, and it's pretty routine in the JMS world.
The front end is where things get a little trickier, since the front end servers are the bridge from the synchronous world of the REST request to the asynchronous world of the back end processors. A REST request follows a typical RPC pattern of consuming the request payload from the socket, holding the connection open, processing the results, and delivering the results back down the originating socket.
To manifest this hand off, you should take a look at the asynchronous Servlet handling that Servlet 3.0 introduced, which is available in Tomcat 7, the latest Jetty (not sure which version), Glassfish 3.x, and others.
In this case, when the request arrives, you convert the nominally synchronous Servlet call into an asynchronous call using HttpServletRequest.startAsync(request, response).
This returns an AsyncContext and, once started, allows the server to free up the processing thread. You then do several things.
Extract the parameters from the request.
Create a unique ID for the request.
Create a new back end request payload from your parameters.
Associate the ID with the AsyncContext, and retain the context (such as putting it into an application-wide Map).
Submit the back end request to the JMS queue.
At this point, the initial processing is done, and you simply return from doGet (or service, or whatever). Since you have not called AsyncContext.complete(), the server will not close out the connection to the client. Since you have stored the AsyncContext in the map by its ID, it's kept safe for the time being.
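A minimal sketch of that front-end hand-off. The servlet path, the PENDING map, the JmsGateway stub and the complete() hook are all invented names standing in for the application-wide map and the JMS producer described above:

    import java.io.IOException;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/work", asyncSupported = true)
    public class FrontEndServlet extends HttpServlet {

        // application-wide map: request ID -> suspended AsyncContext
        private static final Map<String, AsyncContext> PENDING = new ConcurrentHashMap<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String payload = req.getParameter("payload");          // 1. extract the parameters
            String requestId = UUID.randomUUID().toString();       // 2. unique ID for the request

            AsyncContext ctx = req.startAsync(req, resp);           // 3. suspend the request
            PENDING.put(requestId, ctx);                            // 4. retain the context by ID

            JmsGateway.send(requestId, "server-ABC123", payload);   // 5. hand off to the back end
            // no ctx.complete() here: the response listener does that later
        }

        /** Called by the response-queue listener when the back end result arrives. */
        public static void complete(String requestId, String result) throws IOException {
            AsyncContext ctx = PENDING.remove(requestId);
            if (ctx == null) {
                return; // timed out or server restarted: log and discard the reply
            }
            ctx.getResponse().getWriter().write(result);
            ctx.complete(); // releases the connection back to the container
        }

        // placeholder for the code that publishes the back-end request to the JMS queue,
        // with the request ID and server ID set as message properties
        static class JmsGateway {
            static void send(String requestId, String serverId, String payload) {
            }
        }
    }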
Now, when you submitted the request to the JMS queue, it contained: the ID of the request (that you generated), any parameters for the request, and the identification of the actual server making the request. This last bit is important, as the results of the processing need to return to their origin. The origin is identified by the request ID and the server ID.
When your front end server started up, it also started a thread whose job is to listen to a JMS response queue. When it sets up its JMS connection, it can set up a filter such as "give me only messages for a ServerID of ABC123". Or, you could create a unique queue for each front end server and have the back end server use the server ID to determine the queue to return the reply to.
When the back end processors consume the message, they take the request ID and parameters, perform the work, and then put the result onto the JMS response queue. When they put the result back, they add the originating server ID and the original request ID as properties of the message.
So, if you got the request originally for front end server ABC123, the back end processor will address the results back to that server. Then, that listener thread will be notified when it gets a message. The listener thread's task is to take that message and put it on to an internal queue within the front end server.
This internal queue is backed by a thread pool whose job is to send the response payloads back to the original connection. It does this by extracting the original request ID from the message, looking up the AsyncContext in that internal map discussed earlier, and then sending the results down to the HttpServletResponse associated with the AsyncContext. At the end, it calls AsyncContext.complete() (or a similar method) to tell the server that you're done and to allow it to release the connection.
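A sketch of that filtered response listener. The queue name, the property names and the FrontEndServlet.complete() hook reuse the invented names from the servlet sketch above, and the internal hand-off queue is collapsed into a direct call to keep the example short:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class ResponseListener implements Runnable {

        private final ConnectionFactory factory;
        private final String serverId;

        public ResponseListener(ConnectionFactory factory, String serverId) {
            this.factory = factory;
            this.serverId = serverId;
        }

        @Override
        public void run() {
            try {
                Connection connection = factory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue responses = session.createQueue("responses");
                // message selector: only replies addressed to this front end server
                MessageConsumer consumer =
                        session.createConsumer(responses, "serverId = '" + serverId + "'");
                connection.start();

                while (!Thread.currentThread().isInterrupted()) {
                    Message message = consumer.receive();
                    if (message instanceof TextMessage) {
                        String requestId = message.getStringProperty("requestId");
                        String body = ((TextMessage) message).getText();
                        // in the full design this would go onto the internal queue handled
                        // by the thread pool; the direct call keeps the sketch short
                        FrontEndServlet.complete(requestId, body);
                    }
                }
            } catch (Exception e) {
                // reconnect / failover handling is omitted in this sketch
            }
        }
    }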
For housekeeping, you should have another thread on the front end server whose job is to detect when requests have been waiting in the map for too long. Part of the original message should have been the time the request started. This thread can wake up every second, scan the map for requests, and for any that have been there too long (say 30 seconds), put the request on to another internal queue, consumed by a collection of handlers designed to inform the client that the request timed out.
You want these internal queues so that the main processing logic isn't stuck waiting on the client to consume the data. It could be a slow connection or something, so you don't want to block all of the other pending requests to handle them one by one.
Finally, you'll need to account for the fact that you may well get a message from the response queue for a request that no longer exists in your internal map. For one, the request may have timed out, so it should not be there any longer. For another, that front end server may have stopped and been restarted, so its internal map of pending requests will simply be empty. At this point, if you detect you have a reply for a request that no longer exists, you should simply discard it (well, log it, then discard it).
You can't reuse these requests; there's no such thing, really, as a load balancer going back to the client. If the client allows you to make callbacks via published end points, then, sure, you can just have another JMS message handler make those requests. But that's not a REST kind of thing; REST at this level of discussion is more client/server/RPC.
As to which frameworks support asynchronous Servlets at a higher level than a raw Servlet (such as Jersey for JAX-RS or something like that), I can't say. I don't know what frameworks are supporting it at that level. It seems like this is a feature of Jersey 2.0, which is not out yet. There may well be others; you'll have to look around. Also, don't fixate on Servlet 3.0. Servlet 3.0 is simply a standardization of techniques used in individual containers for some time (Jetty notably), so you may want to look at container-specific options outside of just Servlet 3.0.
But the concepts are the same. The big takeaways are the response queue listener with the filtered JMS connection, the internal map from request ID to AsyncContext, and the internal queues and thread pools to do the actual work within the application.
If you relax your requirement that it must be in Java, you could consider HAProxy. It's very lightweight, very standard, and does a lot of good things (request pooling / keepalives / queueing) well.
Think twice before you implement request queueing, though. Unless your traffic is extremely bursty it will do nothing but hurt your system's performance under load.
Assume that your system can handle 100 requests per second. Your HTTP server has a bounded worker thread pool. The only way a request pool can help is if you are receiving more than 100 requests per second. After your worker thread pool is full, requests start to pile up in your load balancer pool. Since they are arriving faster than you can handle them, the queue gets bigger ... and bigger ... and bigger. Eventually either this pool fills too, or you run out of RAM and the load balancer (and thus the entire system) crashes hard.
If your web server is too busy, start rejecting requests and get some additional capacity online.
Request pooling certainly can help if you can get additional capacity in time to handle the requests. It can also hurt you really badly. Think through the consequences before turning on a secondary request pool in front of your HTTP server's worker thread pool.
The design we use is a REST interface receiving all the requests and dispatching them to a message queue (e.g. RabbitMQ).
Then workers listen for the messages and execute them following certain rules. If everything goes down you would still have the requests in the MQ, and if you have a high number of requests you can just add workers...
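A minimal sketch of such a worker with the RabbitMQ Java client; the queue name and the handleRequest() placeholder stand in for the actual processing rules:

    import java.io.IOException;

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DefaultConsumer;
    import com.rabbitmq.client.Envelope;

    public class Worker {

        public static void main(String[] args) throws Exception {
            Connection connection = new ConnectionFactory().newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare("requests", true, false, false, null);
            channel.basicQos(1); // at most one unacknowledged message per worker

            channel.basicConsume("requests", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) throws IOException {
                    handleRequest(new String(body, "UTF-8"));
                    // ack only after the work succeeded, so a crash leaves the message in the queue
                    channel.basicAck(envelope.getDeliveryTag(), false);
                }
            });
        }

        private static void handleRequest(String payload) {
            // business rules go here
        }
    }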
Check this keynote, it kind of shows the power of this concept!
http://www.springsource.org/SpringOne2GX2012
I'm trying to implement a stateful, multi-client server application and have some questions about the networking/threading design. The problem I'm currently facing is how to exchange messages between the communication layer and the logic layer.
The server handles multiple clients, where each of them can be active in multiple "channels", where each channel has multiple stages and may have multiple clients acting in it. Think of it as something similar to a chat program with multiple rooms.
I have already implemented the receiving of messages on the server side. Each client has its own thread that blocks on the socket, reads the data and decodes it into a message. Now how to proceed? In my opinion, each channel should also have its own thread to easily maintain its state. I could use a BlockingQueue to exchange the received messages with the channel thread, which blocks waiting for new messages on that queue.
But then how to send messages to the clients? The logic in the channel will handle the message, and produce some messages to be sent to one/some/all of the clients. Is it safe to use the channel thread to directly write to the socket? Or should I use another BlockingQueue to transmit the messages to the client handler thread? But how to wake it then, since it's waiting on the socket to read? Or should I use a separate send-thread per client, or even a separate send-socket?
BTW: I know I could use existing libraries for the networking layer, but I want to do it from scratch on plain sockets.
Put a send message method on the communication object that wraps the socket. Synchronize this method so that only one thread can be calling it at once. Then, it doesn't make any difference how many threads call this method. Each message will only be sent one at a time. You also don't have to disturb the thread that's blocking to read. This send method will be a quick enough operation that you don't have to worry about other threads blocking while a thread sends.
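A minimal sketch of such a wrapper; the use of ObjectOutputStream is an assumption, and whatever framing/encoding you already have would go in its place:

    import java.io.IOException;
    import java.io.ObjectOutputStream;
    import java.net.Socket;

    public class ClientConnection {

        private final ObjectOutputStream out;

        public ClientConnection(Socket socket) throws IOException {
            this.out = new ObjectOutputStream(socket.getOutputStream());
        }

        // synchronized: any channel thread may call this, but only one writes at a time
        public synchronized void send(Object message) throws IOException {
            out.writeObject(message);
            out.flush();
        }
    }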
As long as the channel has a reference to the communication objects for each connected client, it can send messages and not worry about it.
If it ever caused problems, you could always modify that send method to enqueue the object to be sent. Then you could have a dedicated send thread block on the queue and write the contents to the socket. But from my experience, this won't be necessary.
What about an event mechanism? When you are done processing the request and there is data available for the client, simply send it with an event to the client socket handler thread. Since the transmission from the client has ended, you can send the reply normally, if I'm thinking correctly.