long polling using JSON not working - java

I am trying to build a chat application with a long-polling mechanism on a Google App Engine server.
An HTTP request has a default timeout of 30 seconds, so I am sending a polling request to the server every 28 seconds if there is no update from the server (so that I won't miss any messages from other clients).
The first request gets registered, but the second request, sent after 28 seconds, never reaches the server.
function loadPage(query) {
    $.get({
        url: query,
        success: function (events) {
            updated = 1;
            // events data processing
            createServerChannel();
        }
    });
}

function createServerChannel() {
    var query = '/ChatController?&user=' + userName + '&sessionName=' + sessionName + '&register=true';
    loadPage(query);
    updated = 0;
    setInterval(function () { poll(query); }, 28000);
}

function poll(query) {
    if (updated == 0) {
        loadPage(query);
    }
}
I am using thread.wait() to make the request wait on the server. Is there any way to consume the first pending request when the next request from the same client arrives?
Please help.
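For reference, a minimal sketch of the server-side wait described above (assuming a plain servlet; the shared monitor and the 25-second wait are illustrative, not from the original code):

// Hypothetical sketch: the long-poll request parks on a shared monitor
// until another client's message triggers notifyAll() or the wait
// times out just under the App Engine request deadline.
private final Object monitor = new Object();

protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    synchronized (monitor) {
        try {
            monitor.wait(25000); // woken early when a new message arrives
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    // ... write any pending events to resp here
}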

I think WebSockets might be a better approach, as this keeps a continuous connection open to the server and waits for the server to push data to the client.
http://www.html5rocks.com/en/tutorials/websockets/basics/
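For illustration, a minimal server endpoint using the standard Java WebSocket API (JSR 356) could look like the sketch below. Note this is an assumption about the deployment target: the classic App Engine runtime did not support WebSockets.

import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;
import java.io.IOException;

// Sketch: the connection stays open, and the server can push a message
// to any connected client at any time.
@ServerEndpoint("/chat")
public class ChatEndpoint {

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        // Broadcast the incoming message to every open session.
        for (Session peer : session.getOpenSessions()) {
            peer.getBasicRemote().sendText(message);
        }
    }
}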

Related

Stay subscribed to Flux/Mono after client disconnect in Spring WebFlux

I have a service where a couple of requests can be long-running actions. Occasionally we have timeouts for these requests, and that causes bad state, because steps of the flux stop executing after cancel is called when the client disconnects. Ideally we want this action to continue processing to completion.
I've seen WebFlux - ignore 'cancel' signal recommend using the cache method... Are there any better solutions and/or drawbacks to using cache to achieve this?
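For reference, the cache-based pattern from that linked answer looks roughly like this (a sketch; Result and service.process() are hypothetical names):

// Sketch: cache() plus an independent subscribe keeps the upstream work
// running even if the HTTP client cancels its own subscription.
@PostMapping("/long-action")
Mono<Result> longAction() {
    Mono<Result> work = service.process().cache();
    work.subscribe(); // kick off processing detached from the request
    return work;      // the client still gets the result if it stays connected
}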
There are some solutions for that.
One could be to make it asynchronous: when you get the request from the customer, you can put it in a processor,
Sinks.Many<QueueTask<T>> queue = Sinks.many().multicast().onBackpressureBuffer()
and when the request comes in from the customer, you just push it onto the queue, and the queue will process the items in the background.
But in this case the customer will not get any response with the progress of the item, unless you send it over a socket or the customer makes another request after some time.
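A sketch of that queue idea, assuming Reactor's Sinks API and a Runnable task type in place of the QueueTask<T> above:

import reactor.core.publisher.Sinks;
import reactor.core.scheduler.Schedulers;

// Sketch: tasks emitted into the sink are processed on a background
// scheduler, independent of the HTTP request that enqueued them.
Sinks.Many<Runnable> queue = Sinks.many().multicast().onBackpressureBuffer();

// Subscribed once at startup:
queue.asFlux()
     .publishOn(Schedulers.boundedElastic())
     .subscribe(Runnable::run);

// In the request handler: enqueue and return immediately.
queue.tryEmitNext(() -> longRunningAction()); // longRunningAction is hypothetical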
Another one is to use a chunked HTTP response:
@GetMapping(value = "/sms-stream/{s}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> streamResponse(@PathVariable("s") String s) {
    return service.streamResponse(s);
}
In this case the connection will stay open, and you can close it automatically on the server when processing is done.
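The service side of that endpoint could look roughly like this (a sketch with made-up progress events):

// Sketch: emit periodic progress events and complete the Flux when the
// work is done; completing the stream closes the connection automatically.
Flux<String> streamResponse(String s) {
    return Flux.interval(Duration.ofSeconds(1))
               .map(i -> "processing " + s + ": step " + i)
               .take(5)
               .concatWith(Mono.just("done"));
}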

Jersey SSE service takes long time to establish connection with Chrome browser as a client

I implemented a Jersey SSE broadcasting service similar to the Jersey example:
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput listenToBroadcast() {
    final EventOutput eventOutput = new EventOutput();
    this.broadcaster.add(eventOutput);
    return eventOutput;
}
The first client connects without issues, but each additional client takes at least 10 seconds to connect to the service. The delay occurs before the client's request reaches my code. After a client finally connects, it starts to receive events normally.
The connection delay is absent only when no other clients are already connected at registration time.
Using a Java client, the connection is established immediately, but Chrome for some reason takes a very long time to connect and start receiving messages.
Any ideas how to solve this issue?
I am using Jersey 2.24, Tomcat 8.5.9 and Java 8.

Match response handler with request in VertX

Let's say I have a Load Balancer (LB) in front of 1..n Vert.x (V) instances; each Vert.x instance is connected to a queue (Q), and I have 1..m Backends (BE).
A user clicks a button, which makes a POST request or even opens a WebSocket. The load balancer forwards the request to one of the Vert.x instances, which fires a request to the queue; one of the backends consumes the message and sends a response back. If the correct Vert.x instance consumes it, it can look up the response handler and write a response to the user; if the wrong Vert.x instance consumes it, there won't be a response handler to write a response to, and the user will wait indefinitely for one.
See this sketch:
Alternatively, V2 dies and the load balancer reconnects the user to V1, which means that even if I could send the response back to the exact instance that made the request, it's not guaranteed to still be there once the response comes back, while the user might still be waiting for a response via another Vert.x instance.
What I'm currently doing is generating a GUID for each new connection; then, as soon as the WebSocket connects, I store the WebSocket handler in a hashmap against the GUID, and when the BE wants to respond, it does a fanout to all 1..n Vert.x instances; the one that currently has the GUID in its hashmap can then write a response to the user.
I handle POST / GET in the same manner.
Pseudocode:
queue.handler { q ->
    q.handler {
        val handler = someMap.get(q.guid)
        // only respond if handler exists
        if (handler != null) {
            handler.writeResponse(someresponsemessagehere)
        }
    }
}

vertx.createHttpServer().websocketHandler { ws ->
    val guid = generateGUID()
    someMap.put(guid, ws)
    ws.writeFinalTextFrame("guid=${guid}")
    ws.handler {
        val guid = extractGuid(it)
        // send request to BE including generated GUID
        sendMessageToBE(guid, "blahblah")
    }
}.requestHandler { router.accept(it) }.listen(port)
This does, however, mean that if I have 1000 Vert.x instances running, the backend will need to fan out its message to 1000 frontend instances, of which only one will make use of it.
Vert.x seems like it already takes care of async operations very well. Is there a way in Vert.x to identify each WebSocket connection, instead of having to maintain a map of GUIDs mapped to WebSocket handlers / POST handlers?
Also, referring to the picture, is there a way for V3 to consume the message but still be able to write a response back to the WebSocket handler that's currently connected to V2?
What you're missing from your diagram is the Vert.x EventBus.
Basically, you can assume that your V1...Vn are interconnected:
V1<->V2<->...<->Vn
Let's assume that Va receives your outbound Q message (the red line) that is intended for Vb.
It should then send it to Vb using the EventBus:
eventBus.send("Vb UUID", "Message for Vb, also containing WebSocket UUID", ar -> {
    if (ar.succeeded()) {
        // All went well
    } else {
        // Vb died or has other problems. Your choice how to handle this
    }
});
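The receiving side would then register a consumer for its own address at startup, something like the sketch below (reusing the someMap and extractGuid names from the question's pseudocode):

// Sketch: each instance listens on its own address; when a routed message
// arrives, it looks up the local WebSocket by GUID and writes the response.
vertx.eventBus().<String>consumer("Vb UUID", message -> {
    String guid = extractGuid(message.body()); // hypothetical helper
    ServerWebSocket ws = someMap.get(guid);
    if (ws != null) {
        ws.writeFinalTextFrame(message.body());
        message.reply("delivered");
    }
});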

How do I simulate a client aborting a request?

I'm tasked with solving a reported bug, where the logs show a
org.apache.catalina.connector.ClientAbortException: java.io.IOException
...
Caused by: java.io.IOException
at org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:205)
There are several questions here about the ClientAbortException, and my understanding from reading them, and also the Tomcat javadoc, is that the exception is thrown by Tomcat when the client aborts the HTTP request.
I'm having trouble reproducing the error. How can I simulate a client abort?
What I've tried
Adding a Thread.sleep(10000) in the request handler, and then closing the browser while the request is running - but that doesn't do it.
Cancelling the HTTP request from the client side using this technique with angular.
OK, with a bit of experimenting, I've found a way to do it.
What it looks like is that if an HTTP request is cancelled or timed out by the client while the server is writing/flushing the output, then the error will be thrown. (NB: it appears the size of the response also matters - see my note at the end.)
There are three things that can happen:
Condition 1: The server writes and flushes the output before the client times out.
The response is sent back to the client.
Condition 2: The client times out before the server writes and flushes the output.
The client does not receive a response; no server error.
Condition 3: The client times out while the server is writing the output.
The client does not receive a response. The server throws ClientAbortException (java.io.IOException).
To simulate these three conditions, we play with three variables:
The time the client takes to time out.
The time the server burns getting its result.
The size of the server response.
Here is the test code to simulate it:
Server side (This is a Spring MVC controller).
@RequestMapping(value = { "/debugGet" }, method = RequestMethod.GET)
@ResponseBody
public List<String> debugGet(@RequestParam int timeout, int numObjects) throws InterruptedException {
    Thread.sleep(timeout);
    List<String> l = new ArrayList<String>();
    for (int i = 0; i < numObjects; i++) {
        l.add(new String());
    }
    return l;
}
Client side (Angular)
this.debugGet = function(server, client, numObjects) {
    var httpProm = $http({
        method: "GET",
        url: "debugGet",
        timeout: client,
        params: {
            timeout: server,
            numObjects: numObjects
        }
    });
    httpProm.then(function(data) {
        console.log(data);
    }, function(data) {
        console.log("error");
        console.log(data);
    });
};
Using this I can simulate the three conditions with the following params:
             Client Timeout   Server Burn Time   Num Strings
Condition 1: 1000             900                10000
Condition 2: 1000             2000               100
Condition 3: 1000             950                10000
NB: It appears the size of the response also matters.
For example:

             Client Timeout   Server Burn Time   Num Strings
Condition 2: 1000             2000               100
Condition 3: 1000             2000               10000

Here, for the 10000 strings, we get the java.io.IOException even though the flush occurs well after the client has timed out, whereas for the 100 strings we don't.

Spring MVC, best practice for polling the server frequently

I'm working on a web application using the following stack of technologies: Spring, Hibernate, JSP. I have a task to build one of the user social elements - messages. As the reference implementation of a messaging system I take Facebook's. One of the problems I face is polling the server every 1-5 seconds (what period should I use?) to retrieve information about unread messages. I also want to poll the server to retrieve new messages on the conversation page (like a chat). What I did:
Example code for getting the count of unread messages.
Server side:
@RequestMapping(value = "/getCountUserUnreadMessages", method = RequestMethod.POST)
public @ResponseBody Callable<Integer> getCountUserUnreadMessages(@ActiveUser final SmartUserDetails smartUserDetails) {
    // TODO add additional security checks using username and active user
    return new Callable<Integer>() {
        @Override
        public Integer call() throws Exception {
            Integer countUserUnreadMessages = messageService.findCountUserUnreadMessages(smartUserDetails.getSmartUser());
            while (countUserUnreadMessages == 0) {
                Thread.sleep(1000);
                countUserUnreadMessages = messageService.findCountUserUnreadMessages(smartUserDetails.getSmartUser());
            }
            return countUserUnreadMessages;
        }
    };
}
Client side:
(function poll() {
    setTimeout(function () {
        $.ajax({
            type: "post",
            url: "/messages/getCountUserUnreadMessages",
            cache: false,
            success: function (response) {
                $("#countUnreadMessages").text(response);
            },
            dataType: "json",
            complete: poll,
            timeout: 1000
        });
    }, 3000);
})();
So the client sends a request to retrieve the count of unread messages every 3 seconds, with a 1-second timeout (is that a good decision?).
But I think that is the worst Callable code ever :-P
Tell me, how can I do this better? What technique should I use?
Additional information:
Yeah, I expect it would be high-load, a service with many users on the Internet.
Try Spring 4 WebSocket support:
http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html
WebSockets support full-duplex communication over a dedicated TCP connection that you establish over HTTP.
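A minimal sketch of that support (the handler below and countUnread() are hypothetical; see the reference documentation linked above for the full configuration):

// Sketch: register a handler that pushes the unread count whenever the
// client sends a message over the socket, instead of the client polling.
@Configuration
@EnableWebSocket
public class WebSocketConfig implements WebSocketConfigurer {

    @Override
    public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
        registry.addHandler(new TextWebSocketHandler() {
            @Override
            protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
                // countUnread() stands in for messageService.findCountUserUnreadMessages(...)
                session.sendMessage(new TextMessage(String.valueOf(countUnread())));
            }
        }, "/messages/ws");
    }
}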
If you are expecting this application to scale at all, I would make that timing interval more like every 30-90 seconds. Otherwise you are basically designing your own built-in DoS attack on yourself.
You might look into Spring Sockets. It sounds like its long polling option might work better for you.
