Atmosphere: Multiple subscriptions over single HttpConnection - java

I'm using Atmosphere in my Spring MVC app to facilitate push, using a streaming transport.
Throughout the lifecycle of my app, the client will subscribe and unsubscribe for many different topics.
Atmosphere seems to use a single HTTP connection per subscription, i.e., every call to $.atmosphere.subscribe(request) creates a new connection. This quickly exhausts the number of connections allowed from the browser to the Atmosphere server.
Instead of creating a new resource each time, I'd like to be able to add and remove the AtmosphereResource to and from broadcasters after its initial creation.
However, as the AtmosphereResource is a one-to-one representation of the inbound request, each time the client sends a request to the server, it arrives on a new AtmosphereResource, meaning I have no way to reference the original resource and append it to the topic's Broadcaster.
I've tried both calling $.atmosphere.subscribe(request) again and calling push(request) on the resource returned from the original subscribe() call. However, neither made a difference.
What is the correct way to approach this?

Here's how I got it working:
First, when the client does its initial connect, ensure that the Atmosphere-specific headers are exposed to the browser before calling suspend():
#RequestMapping("/subscribe")
public ResponseEntity<HttpStatus> connect(AtmosphereResource resource)
{
resource.getResponse().setHeader("Access-Control-Expose-Headers", ATMOSPHERE_TRACKING_ID + "," + X_CACHE_DATE);
resource.suspend();
}
Then, when the client sends additional subscribe requests, although they arrive on a different resource, they contain the ATMOSPHERE_TRACKING_ID of the original resource. This allows you to look it up via the resourceFactory:
@RequestMapping(value="/subscribe", method=RequestMethod.POST)
public ResponseEntity<HttpStatus> addSubscription(AtmosphereResource resource, @RequestParam("topic") String topic)
{
    // The tracking ID identifies the original (suspended) resource
    String atmosphereId = resource.getResponse().getHeader(ATMOSPHERE_TRACKING_ID);
    if (atmosphereId == null || atmosphereId.isEmpty())
    {
        log.error("Cannot add subscription, as the atmosphere tracking ID was not found");
        return new ResponseEntity<HttpStatus>(HttpStatus.BAD_REQUEST);
    }
    // Look up the suspended resource created on the initial connect
    AtmosphereResource originalResource = resourceFactory.find(atmosphereId);
    if (originalResource == null)
    {
        log.error("The provided Atmosphere tracking ID is not associated to a known resource");
        return new ResponseEntity<HttpStatus>(HttpStatus.BAD_REQUEST);
    }
    // lookup(topic, true) creates the broadcaster if it doesn't exist yet
    Broadcaster broadcaster = broadcasterFactory.lookup(topic, true);
    broadcaster.addAtmosphereResource(originalResource);
    log.info("Added subscription to {} for atmosphere resource {}", topic, atmosphereId);
    return getOkResponse();
}
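A matching unsubscribe endpoint can mirror the same lookup. A sketch (the mapping path is an assumption, though Broadcaster.removeAtmosphereResource and the two-argument BroadcasterFactory.lookup are part of the Atmosphere API):
@RequestMapping(value="/unsubscribe", method=RequestMethod.POST)
public ResponseEntity<HttpStatus> removeSubscription(AtmosphereResource resource, @RequestParam("topic") String topic)
{
    // Same lookup as above: resolve the original suspended resource
    String atmosphereId = resource.getResponse().getHeader(ATMOSPHERE_TRACKING_ID);
    AtmosphereResource originalResource = resourceFactory.find(atmosphereId);
    if (originalResource == null)
    {
        return new ResponseEntity<HttpStatus>(HttpStatus.BAD_REQUEST);
    }
    // lookup without create: a missing broadcaster means there is nothing to remove
    Broadcaster broadcaster = broadcasterFactory.lookup(topic, false);
    if (broadcaster != null)
    {
        broadcaster.removeAtmosphereResource(originalResource);
    }
    return getOkResponse();
}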

Related

Using ActiveMQ to handle errors during webservice call

I have a Java process which listens for messages from a queue hosted by ActiveMQ and calls webservices when the received status is COMPLETE.
I was also thinking of handling the webservice calls through ActiveMQ as well. Is there a way I could make better use of ActiveMQ, maybe with another queue?
This might help me handle the scenario where one or more webservice calls fail on the first attempt. But I'm trying to think about what I could do to achieve something like this.
Do I need to forward the webservice parameters to an ActiveMQ queue and then listen for something?
@Autowired
private JavaMailSender javaMailSender;

// Working code with JMS 2.0
@JmsListener(destination = "MessageProducer")
public void processBrokerQueues(String message) throws DaoException {
    ...
    if (receivedStatus.equals("COMPLETE")) {
        // Do something inside COMPLETE
        // Need to call 8-10 webservices (only 2 shown below for brevity)
        ProcessBroker obj = new ProcessBroker();
        ProcessBroker obj1 = new ProcessBroker();
        // Calling webservice 1
        try {
            System.out.println("Testing 1 - Send HTTP POST request #1");
            obj.sendHTTPPOST1();
        } finally {
            obj.close();
        }
        // Calling webservice 2
        try {
            System.out.println("Testing 2 - Send HTTP POST request #2");
            obj1.sendHTTPPOST2();
        } finally {
            obj1.close();
        }
    } else {
        // Do something
    }
}
What I'm looking for:
Basically, if there is a possibility of creating an additional message queue, I would submit 8 messages to it, one corresponding to each webservice endpoint, and have a queue listener dedicated to that queue, as sketched below. If a message could not be delivered successfully, it could be put back on the queue for later delivery.
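A hedged sketch of that idea with Spring JMS (the queue name, the endpointId property, and the callEndpoint helper are assumptions, not anything from the original code): one message is submitted per endpoint, and throwing from the listener causes ActiveMQ to redeliver the message according to its redelivery policy, eventually routing it to the DLQ once retries are exhausted.
@Autowired
private JmsTemplate jmsTemplate;

// Fan out one message per webservice endpoint onto a dedicated queue
public void dispatchWebserviceCalls(String payload) {
    for (int endpoint = 1; endpoint <= 8; endpoint++) {
        final int id = endpoint;
        jmsTemplate.convertAndSend("webservice.calls", payload, m -> {
            m.setIntProperty("endpointId", id); // tells the listener which endpoint to call
            return m;
        });
    }
}

// Dedicated listener: one message = one webservice call
@JmsListener(destination = "webservice.calls")
public void callWebservice(String payload, @Header("endpointId") int endpointId) throws Exception {
    // Hypothetical helper that invokes the right endpoint; throwing here rolls
    // the message back so ActiveMQ redelivers it later.
    callEndpoint(endpointId, payload);
}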

Match response handler with request in VertX

Let's say I have a Load Balancer (LB) in front of 1..n VertX (V) instances, each VertX instance is connected to a queue (Q), and I have 1..m Backends (BE).
A user clicks a button, which makes a POST request or even opens a websocket. The load balancer forwards the request to one of the VertX instances, which fires a request onto the queue; one of the Backends consumes the message and sends a response back. If the correct VertX instance consumes the response, it can look up the response handler and write the response to the user; if the wrong VertX instance consumes it, there won't be a response handler to write to, and the user will wait indefinitely for a response.
See this sketch:
Alternatively, V2 dies and the load balancer reconnects the user to V1, which means that even if I could send the response back to the exact instance that made the request, it's not guaranteed to still be there once the response comes back; the user, however, might still be there awaiting a response via another VertX instance.
What I'm currently doing is generating a GUID for each new connection; as soon as the websocket connects, I store the websocket handler in a hashmap against the GUID, and when the BE wants to respond, it does a fanout to all 1..n VertX instances; the one that currently has the correct GUID in its hashmap can then write the response to the user.
The same approach handles POST / GET in this manner.
Pseudocode:
queue.handler { q ->
    q.handler {
        val handler = someMap.get(q.guid)
        // only respond if handler exists
        if (handler != null) {
            handler.writeResponse(someresponsemessagehere)
        }
    }
}
vertx.createHttpServer().websocketHandler { ws ->
    val guid = generateGUID()
    someMap.put(guid, ws)
    ws.writeFinalTextFrame("guid=${guid}")
    ws.handler {
        val guid = extractGuid(it)
        // send request to BE including generated GUID
        sendMessageToBE(guid, "blahblah")
    }
}.requestHandler { router.accept(it) }.listen(port)
This does, however, mean that if I have 1000 VertX applications running, the backend will need to fan out its message to 1000 frontend instances, of which only one will make use of the message.
VertX seems like it already takes care of async operations very well, is there a way in VertX to identify each websocket connection instead of having to maintain a map of GUIDs mapped to websocket handlers / post handlers?
Also, referring to the picture, is there a way for V3 to consume the message, but still be able to write a response back to the websocket handler that's currently connected to V2?
What you're missing from your diagram is the Vertx EventBus.
Basically you can assume that your V1...Vn are interconnected:
V1<->V2<->...<->Vn
Let's assume that Va receives your outbound Q message (the red line) that is intended for Vb.
It should then send it to Vb using the EventBus:
eventBus.send("Vb UUID", "Message for Vb, also containing WebSocket UUID", ar -> {
    if (ar.succeeded()) {
        // All went well
    } else {
        // Vb died or has other problems. Your choice how to handle this
    }
});
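On the receiving side, each instance would register a consumer on its own UUID address at startup. A rough sketch (the "wsGuid" header and someMap are assumptions carried over from the question's pseudocode):
// Each VertX instance registers a consumer on its own unique address, so a
// message forwarded by Va over the EventBus is delivered straight to Vb.
vertx.eventBus().consumer("Vb UUID", message -> {
    String wsGuid = message.headers().get("wsGuid"); // websocket UUID carried along
    ServerWebSocket ws = someMap.get(wsGuid);
    if (ws != null) {
        ws.writeFinalTextFrame(message.body().toString());
        message.reply("delivered"); // completes the sender's ar.succeeded() callback
    }
});
The EventBus address effectively replaces the fanout: only the instance that owns the websocket receives the message.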

Spring Servlet 3.0 Async Controllers - what thread handles response?

I'm fairly new to Java (I'm using Java SE 7) and the JVM and trying to write an asynchronous controller using:
Tomcat 7
Spring MVC 4.1.1
Spring Servlet 3.0
I have a component that my controller is delegating some work to that has an asynchronous portion and returns a ListenableFuture. Ideally, I'd like to free up the thread that initially handles the controller response as I'm waiting for the async operation to return, hence the desire for an async controller.
I'm looking at returning a DeferredResult -- it seems pretty easy to bridge this with ListenableFuture -- but I can't seem to find any resources that explain how the response is delivered back to the client once the DeferredResult resolves.
Maybe I'm not fully grokking how an asynchronous controller is supposed to work, but could someone explain how the response gets returned to the client once the DeferredResult resolves? There has to be some thread that picks up the job of sending the response, right?
I recently used Spring's DeferredResult to excellent effect in a long-polling situation that I recently coded. Focusing on 'how' the response gets back to the user is, I believe, not the correct way to think about the object. Depending upon where it's used, it returns messages to the user exactly the same way a regular, synchronous call would, only in a delayed, asynchronous manner. The object does not define or propose a delivery mechanism; it is just a way to 'insert' an asynchronous response into existing channels.
Per your query: yes, a container thread eventually picks up the delivery, and the object carries a timeout of the user's specification. If the code completes before the timeout, using setResult, the object returns the code's result. Otherwise, if the timeout fires before the result, the default, also set by the user, is returned. Either way, the object returns nothing (other than the object itself) until one of these mechanisms is called. The object must then be discarded, as it cannot be reused.
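To make the mechanics concrete, a minimal sketch (the worker executor field and the JSON bodies are assumptions, not Spring API): the servlet thread returns immediately, and once setResult is called from any other thread, the Servlet 3.0 async context is dispatched and a container-managed thread writes the stored result to the still-open response.
@RequestMapping(value = "/poll", method = RequestMethod.GET)
@ResponseBody
public DeferredResult<String> poll() {
    // 30s timeout; the default body is returned if nothing sets a result in time
    DeferredResult<String> result = new DeferredResult<>(30_000L, "{\"state\":\"REFRESH\"}");
    // hand the work to another thread; this servlet thread is released right away
    worker.schedule(() -> result.setResult("{\"state\":\"DONE\"}"), 5, TimeUnit.SECONDS);
    return result; // on setResult, the container re-dispatches and sends the response
}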
In my case, I was using an HTTP request/response function that wrapped the returned response in a DeferredResult whose default response asked for another packet from the client (so the browser would not time out) if the computation the code was working on did not return before the timeout. Whenever the computation was complete, it sent the response via setResult. In both situations the HTTP response simply carried a packet back to the user, but in neither case did the response go back immediately.
In practice the object worked flawlessly and allowed me to implement an effective long-polling mechanism.
Here is a snippet of the code in my example:
@RequestMapping(method = RequestMethod.POST, produces = "application/text")
@ResponseBody
// public DeferredResult<String> onMessage(@RequestBody String message, HttpSession session) {
public DeferredResult<String> onMessage(InputStream is, HttpSession session) {
    String message = convertStreamToString(is);
    // HttpSession session = null;
    messageInfo info = getMessageInfo(message);
    String state = info.getState();
    String id = info.getCallID();
    // Created with a timeout and a default "refresh" result for the long poll
    DeferredResult<String> futureMessage =
            new DeferredResult<>(refreshIntervalSecs * msInSec, getRefreshJsonMessage(id));
    if (state != null && id != null) {
        if (state.equals("REFRESH")) {
            // Cache response for future and "swallow" call, as it is a restocking call
            LOG.info("Refresh received for call " + id);
            synchronized (lock) {
                boolean isReplaceable = callsMap.containsKey(id) && callsMap.get(id).isSetOrExpired();
                if (isReplaceable)
                    callsMap.put(id, futureMessage);
                else {
                    LOG.warning("Refresh packet arrived on a non-existent call");
                    futureMessage.setResult(getExitJsonMessage(id));
                }
            }
        } else if (state.equals("NEW")) {
            // Store response for future and pass the call on to the processing logic
            LOG.info("New long-poll call received with id " + id);
            ClientSupport cs = clientSupportMap.get(session.getId());
            if (cs == null) {
                cs = new ClientSupport(this, session.getId());
                clientSupportMap.put(session.getId(), cs);
            }
            callsMap.put(id, futureMessage);
            // *** IMPORTANT ***
            // This method sets up a separate thread to do the work
            cs.newCall(message);
        }
    } else {
        LOG.warning("Invalid call information");
        // Return value immediately when return is called
        futureMessage.setResult("");
    }
    return futureMessage;
}

Non-blocking reverse proxy with netty

I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxy example (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67).
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() { ... });
    }
}
This means that the incoming message is only written if the outbound client connection is already ready. This is obviously not ideal in an HTTP proxy case, so I am wondering what the best way to handle it would be.
I am wondering whether disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then re-enable autoRead on the child socket in the channelActive event of the backend handler. However, I am not sure how many messages I would get in the handler for each read() invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger read() again and re-enable autoRead on the channel).
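As a sketch of that first option (assuming a HexDumpProxy-style setup where the frontend is bootstrapped with .childOption(ChannelOption.AUTO_READ, false) and the backend handler holds a reference to the frontend child channel):
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Backend-side handler: resumes reading from the (suspended) frontend channel
// once the outbound connection becomes active.
class BackendHandler extends ChannelInboundHandlerAdapter {
    private final Channel inboundChannel; // the frontend child channel

    BackendHandler(Channel inboundChannel) {
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // the backend is ready; let data flow in from the client again
        inboundChannel.config().setAutoRead(true);
    }
}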
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        Channel c = future.get();
        if (!currentBackend.compareAndSet(null, c)) {
            pool.release(c);
            throw new IllegalStateException();
        }
    });
}
and then do the copying from input to output through that promise. E.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
    doInBackend(c -> {
        c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                future.channel().read();
            } else {
                pool.release(c);
                frontCtx.close();
            }
        });
    });
}

private void doInBackend(Consumer<Channel> action) {
    Channel c = currentBackend.get();
    if (c == null) {
        backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
    } else {
        action.accept(c);
    }
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed on the right thread... right now I'm using:
backendPromise = group.next().<Channel>newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?
(where group is the same EventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to have the "else { }" optimization in the doInBackend method, which avoids the Promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
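The answer doesn't show the code, but a minimal sketch of the chained-futures idea might look like this (BackendWriter and its members are assumptions, not the author's actual classes): a future completes once the pooled backend channel is acquired, and every frontend message is appended to a chain hanging off that future, so writes stay ordered and are deferred until the connection exists.
import io.netty.channel.Channel;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: defer and order writes until the backend channel exists.
class BackendWriter {
    // completed by the pool-acquire listener once the connection is ready
    private final CompletableFuture<Channel> backendChannel = new CompletableFuture<>();
    // tail of the chain: each write is appended after the previous one
    private CompletableFuture<Channel> tail = backendChannel;

    void onBackendReady(Channel ch) {
        backendChannel.complete(ch);
    }

    synchronized void write(Object msg) {
        // runs immediately if the channel is ready, otherwise when it becomes ready
        tail = tail.thenApply(ch -> {
            ch.writeAndFlush(msg);
            return ch;
        });
    }
}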
I have worked on a similar proxy application based on the MQTT protocol, used to create a real-time chat application. The application I had to design was asynchronous in nature, so I naturally did not face any such problem: in case
outboundChannel.isActive() == false
I can simply keep the messages in a queue or a persistent DB and then process them once the outboundChannel is up. However, since you are talking about an HTTP application, it is synchronous in nature, meaning that the client cannot keep sending packets until the outboundChannel is up and running. So the option you suggest is that a packet will only be read once the channel is active, and you can manually handle the message reads by disabling auto-read in ChannelConfig.
However, what I would like to suggest is that you should check whether the outboundChannel is active or not. If the channel is active, send the packet forward for processing. If the channel is not active, you should reject the packet by sending back a response similar to a 404 error.
Along with this, you should configure your client to keep retrying the packets at certain intervals, and handle accordingly what needs to be done if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern. You should let Netty handle that for you in the most efficient way.
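A hedged sketch of that reject-early suggestion (outboundChannel is assumed to be the field from the HexDumpProxy-style handler; 503 Service Unavailable is used here rather than 404, since it better fits a backend that is not yet up):
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

// If the backend is not connected yet, answer immediately instead of buffering
private void forwardOrReject(ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel != null && outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg);
    } else {
        FullHttpResponse response = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE,
                Unpooled.EMPTY_BUFFER);
        HttpUtil.setContentLength(response, 0);
        // Close after the error is flushed so the client can retry on a new connection
        ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
    }
}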

Why does async-http-client not throttle my requests?

I have an Akka actor that owns an AsyncHttpClient. This actor must handle a lot of asynchronous requests. Because my system cannot handle thousands of requests simultaneously, I need to limit the number of concurrent requests.
Right now, I'm doing this :
AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder()
        .setAllowPoolingConnection(true)
        .addRequestFilter(new ThrottleRequestFilter(32))
        .setMaximumConnectionsPerHost(16)
        .setMaxRequestRetry(5)
        .build();
final AsyncHttpClient httpClient = new AsyncHttpClient(new NettyAsyncHttpProvider(config));
When my actor receives a message, I use the client like this :
Future<Integer> f = httpClient.prepareGet(url).execute(
new AsyncCompletionHandler<Integer>() {
#Override
public Integer onCompleted(Response response) throws Exception {
// handle successful request
}
#Override
public void onThrowable(Throwable t){
// handle failed request
}
}
);
The problem is that requests are never put in the client queue and are all processed as if the configuration didn't matter. Why doesn't this work as it should?
From the maintainer:
setMaxConnectionsPerHost only caps the number of connections that can be open to a given host. There's no built-in queuing mechanism for requests that might need a connection while there's none available.
So basically, it's a hard limit. Also, in versions of the library prior to (I believe) 1.9.10, the maximumConnectionsPerHost field was not properly honored: there was a bug where the client only looked at the maximumConnectionsTotal field.
Link to issue referenced on GitHub
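Given that there is no built-in queueing, one generic workaround (a sketch under assumptions, not part of the library: the permit count and the fetch wrapper are illustrative) is to gate submissions with a Semaphore and release the permit from the completion handler:
import java.util.concurrent.Semaphore;
import com.ning.http.client.AsyncCompletionHandler;
import com.ning.http.client.Response;

// Gate submissions so that at most 32 requests are in flight at once
final Semaphore inFlight = new Semaphore(32);

void fetch(String url) throws InterruptedException {
    inFlight.acquire(); // blocks the submitting thread until a permit frees up
    httpClient.prepareGet(url).execute(new AsyncCompletionHandler<Integer>() {
        @Override
        public Integer onCompleted(Response response) throws Exception {
            inFlight.release(); // free the slot on success
            return response.getStatusCode();
        }

        @Override
        public void onThrowable(Throwable t) {
            inFlight.release(); // free the slot on failure too
        }
    });
}
Inside an actor, a non-blocking tryAcquire with stashing or re-enqueueing of the message is usually preferable to a blocking acquire.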
