Play framework running long blocking tasks, without blocking the client - java

The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
Some of the client requests require my server to execute long blocking tasks. I understand that I can execute them in a separate thread pool.
But I also do not want the client to be blocked. I just want to return an immediate response to the client (e.g. "OK, got your long blocking task"). The client does not care about the result of the task; it just needs to know that I am working on executing it.
How can I implement this behavior in Play?
I think I could create a job queue and use another thread to process it: the Play controller only adds the job to the queue, and the other thread executes the jobs from the queue. Should I do that? Should I use an Akka actor? (I do not know Akka; I would need to learn it.)

Callbacks
It all started with the callbacks.
You have surely seen this:
Something.save(function(err) {
    if (err) {
        // error handling
        return;
    }
    console.log('success');
});
This defines a callback in JavaScript - something that is going to be executed asynchronously. Thanks to their syntax, implementation and what-not, callbacks are not really your friend. Overusing them can lead to the dreaded callback hell.
Promises
In this context: Promises in ES6
Something.save()
    .then(function() {
        console.log('success');
    })
    .catch(function() {
        // error handling
    });
Promises are not an 'ES6 thing'; they have existed for many years - ES6 just brings them to the language. Promises are nice, and you can even chain them:
saveSomething()
.then(updateOtherthing)
.then(deleteStuff)
.then(logResults);
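For Java readers: the same chaining style is available on the JVM through CompletableFuture. A minimal sketch, with made-up stage names standing in for saveSomething / updateOtherthing / deleteStuff above:

```java
import java.util.concurrent.CompletableFuture;

public class PromiseChain {
    // hypothetical async stage, analogous to saveSomething() above
    static CompletableFuture<String> saveSomething() {
        return CompletableFuture.supplyAsync(() -> "saved");
    }

    public static void main(String[] args) {
        String result = saveSomething()
                .thenApply(s -> s + ",updated")  // updateOtherthing
                .thenApply(s -> s + ",deleted")  // deleteStuff
                .join();                         // block only at the very end, for the demo
        System.out.println(result);              // prints "saved,updated,deleted"
    }
}
```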
But enough of the async madness.
WebSocket
WebSocket is something I would recommend:
as of today very well supported
wonderful support in Play 2.x
full duplex TCP
you finally can find time to learn Akka ;)
So you can create a client which opens a WebSocket connection to the Play application. On the server side you can handle WebSocket connections either with Akka actors (which I recommend) or with callbacks on streams. Using actors is easy and also fun: you define an actor, and the moment someone opens a WebSocket connection, an instance of that actor is spawned. Every message received on the WebSocket channel is then delivered to the actor, so you can concentrate on your business logic without thinking about the surroundings, and then send the message back - something Akka excels at.
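If WebSockets feel like overkill, the fire-and-forget approach from the question - the controller enqueues the job and acknowledges immediately - can be sketched in plain Java with an ExecutorService, no Akka required. Class and method names here are hypothetical, not Play API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class JobQueue {
    // dedicated pool for the long blocking tasks
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // what a controller action would do: enqueue the job, then answer right away
    public String submit(Runnable longBlockingTask) {
        workers.submit(longBlockingTask);
        return "OK, got your long blocking task"; // immediate response to the client
    }

    public void shutdown() {
        workers.shutdown();
    }
}
```

The client's request thread never waits on the task; the pool size bounds how many blocking jobs run concurrently.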


Long running processes inside service invoked from actor

We are using Akka actors in our application. Actors invoke services, and in some cases those services call REST APIs exposed by a third party. The third-party APIs are taking a very long time to respond. Because of this, system throughput suffers during peak time, as threads are blocked while they wait for the client API to respond.
Sometimes during peak time, because threads are waiting, messages sit in the Akka mailbox for a long time and are only picked up once the blocked threads become available.
I am looking for a solution that improves the throughput of the system and frees up threads so that actor messages can be processed.
I am thinking of changing the REST API call from blocking to non-blocking using a Future, and creating a dedicated actor for such REST API invocations. The actor would periodically check whether the Future has completed, then send a completion message so the rest of the process can continue. This way the number of blocked threads is reduced, and they become available for actor message processing.
I would also like a separate execution context for the actors that perform blocking operations; as of now we have only the global execution context.
I need further input on this.
Firstly, you are correct that you should not block when processing a message, so you need to start an asynchronous process and return immediately from the message handler. The Actor can then send another message when the process completes.
You don't say what technology you are using for the REST API calls, but libraries like Akka HTTP will return a Future when making an HTTP request, so you shouldn't need to add your own Future around it. E.g.:
def getStatus(): Future[Status] =
    Http().singleRequest(HttpRequest(uri = "http://mysite/status")).flatMap { res =>
        Unmarshal(res).to[Status]
    }
If for some reason you are using a synchronous library, wrap the call in Future(blocking{ ??? }) to notify the execution context that it needs to create a new thread. You don't need a separate execution context unless you need to control the scheduling of threads for this operation.
You also don't need to poll the Future, simply add an onComplete handler that sends an internal message back to your actor which then processes the result of the API call.
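In plain Java (the snippet above is Scala), the same idea looks like this: run the blocking client on a dedicated pool and register a completion callback instead of polling. The slow third-party call is simulated here, and all names are made up:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NonBlockingCall {
    // dedicated pool for blocking I/O, so actor/dispatcher threads stay free
    static final ExecutorService blockingPool = Executors.newCachedThreadPool();

    static CompletableFuture<String> getStatus() {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // stands in for the slow third-party REST call
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return "status-ok";
        }, blockingPool);
    }

    public static void main(String[] args) {
        // no polling: this callback fires when the future completes;
        // in the actor scenario it would send a completion message back to the actor
        getStatus().thenAccept(status -> System.out.println("completed: " + status));
    }
}
```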

gRPC Concurrency for Stubs

In gRPC I would like some more information on the way the server handles requests.
Are requests executed sequentially, or does the server spawn a new thread for each request and execute them in parallel? Is there a way to modify this behavior? I understand that in client-streaming RPCs message order is guaranteed.
If I send Request A followed by Request B to the same RPC, is it guaranteed that A will be executed before B begins processing? Or are they each their own thread, executed in parallel with no guarantee that A finishes before B?
Ideally I would like to send a request to the server, the server acknowledges receipt of the request, and then the request is added to a queue to be processed sequentially, and returns a response once it's been processed. An approach I was exploring is to use an external task queue (like RabbitMQ) to queue the work done by the service but I want to know if there is a better approach.
Also -- on a somewhat related note -- does gRPC have a native retry counter mechanism? I have a particularly error-prone RPC that may have to retry up to 3 times (with an arbitrary delay between retries) before it is successful. This is something that could be implemented with RabbitMQ as well.
grpc-java passes RPCs to the service using the Executor provided by ServerBuilder.executor(Executor), or a cached thread pool if no executor is provided.
There is no ordering between simultaneous RPCs. RPCs can arrive in any order.
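If you do need sequential processing, one generic option (a sketch, not part of the gRPC API) is to hand the work of each RPC to a single-threaded executor, so jobs run strictly in submission order while the RPC handler itself returns quickly:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SequentialProcessor {
    // one worker thread => jobs run one at a time, in submission order
    private final ExecutorService queue = Executors.newSingleThreadExecutor();

    public void enqueue(Runnable job) {
        queue.submit(job); // the RPC handler can acknowledge and return here
    }

    public void shutdownAndWait() throws InterruptedException {
        queue.shutdown();
        queue.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```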
You could use a server-streaming RPC to allow the server to respond twice, once for acknowledgement and once for completion. You can use a oneof in the response message to allow sending the two different responses.
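The two-response shape could look like this in the .proto file (service, message, and field names here are made up for illustration):

```proto
syntax = "proto3";

service Worker {
  // server streaming: first an Ack, later a Result
  rpc Process(WorkRequest) returns (stream WorkResponse);
}

message WorkRequest { string payload = 1; }
message Ack { string id = 1; }
message Result { string output = 1; }

message WorkResponse {
  oneof kind {
    Ack ack = 1;       // sent immediately on receipt
    Result result = 2; // sent once processing finishes
  }
}
```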
grpc-java has experimental retry support. gRFC A6 describes the support. The configuration is delivered to the client via the service config. Retries are disabled by default, so overall you would want something like channelBuilder.defaultServiceConfig(serviceConfig).enableRetry(). You can also reference the hedging example, which is very similar to retries.

How Do Applications handle Asynchronous Responses - via Callback

I have been doing Java for a few years but I have not had much experience with Asynchronous programming.
I am working on an application that makes SOAP web service calls to some synchronous web services, and currently the implementation of my consuming application is synchronous as well, i.e. my application's threads block while waiting for the response.
I am trying to learn how to handle these SOAP calls in an asynchronous way - just for the hell of it - but I have some high-level questions which I can't seem to find any answers to.
I am using CXF but my question is not specifically about CXF or SOAP, but higher-level, in terms of asynchronous application architecture I think.
What I want to know (working thru a scenario) - at a high level - is:
So I have a Thread (A) running in my JVM that makes a call to a remote web service
It registers a callback method and returns a Future
Thread (A) has done its bit and gets returned to its pool once it has returned the Future
The remote web service response returns and Thread (B) gets allocated and calls the callback method (which generally populates the Future with a result I believe)
Q1. I can't get my head out of the blocking thread model - if Thread (A) is no longer listening on that network socket, then how does the response that comes back from the remote service get allocated Thread (B)? Is it simply treated as a new request coming into the server/container, which then allocates a thread to service it?
Q2. Closely related to Q1, I imagine: if no thread has the Future, or the handler (with its callback method), on its stack, then how does the response from the remote web service get associated with the callback method it needs to call?
Or, to ask it another way, how does Thread (B) (now dealing with the response) get a reference to the Future/Callback object?
Very sorry my question is so long - and thanks to anyone who gave their time to read through it! :)
I don't see why you'd add all this complexity using asynchronous threading.
The way to design an asynchronous SOAP service:
You have one service sending out a response to a given client / clients.
Those clients work on the response asynchronously.
When done, they call another SOAP method to return their response.
The response is just stored in a queue (e.g. a database table), without any extra logic. You'd have a "Worker" service working on the incoming tasks. If a response is needed again, another method on the other remote service is called. I would store the requests as events in the database, to be handled asynchronously later by an EventHandler. See
Hexagonal Architecture:
https://www.youtube.com/watch?v=fGaJHEgonKg
Your Q1 and Q2 seem to have more to do with multithreading than they have to do with asynchronous calls.
The magic of asynchronous web service calls is that you don't have to worry about multithreading to handle blocking while waiting for a response.
It's a bit unclear from the question what the specific problem statement is (i.e., what you are hoping to have your application do while blocking or rather than blocking), but here are a couple ways that you could use asynchronous web service calls that will allow you to do other work.
For the following cases, assume that the dispatch() method calls Dispatch.invokeAsync(T msg, AsyncHandler handler) and returns a Future:
1) Dispatch multiple web service requests, so that they run in parallel:
If you have multiple services to consume and they can all execute independently, dispatch them all at once and process the responses when you have received them all.
ArrayList<Future<?>> futures = new ArrayList<Future<?>>();
futures.add(serviceToConsume1.dispatch());
futures.add(serviceToConsume2.dispatch());
futures.add(serviceToConsume3.dispatch());
// now wait until all services return
for (Future<?> f : futures) {
    f.get();
}
// now use responses to continue processing
2) Polling:
Future<?> f = serviceToConsume.dispatch();
while (!f.isDone()) {
    // do other work here
}
// now use response to continue processing
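A third option, not in the original answer, makes the waiting in case 1 itself non-blocking: with CompletableFuture you can attach a callback via allOf that runs once every response has arrived, so no thread sits in get() or spins in a polling loop. The dispatch() method here is a stand-in for the real service call:

```java
import java.util.concurrent.CompletableFuture;

public class AllOfExample {
    // hypothetical stand-in for serviceToConsumeN.dispatch()
    static CompletableFuture<String> dispatch(String name) {
        return CompletableFuture.supplyAsync(() -> name + "-response");
    }

    public static void main(String[] args) {
        CompletableFuture<String> a = dispatch("service1");
        CompletableFuture<String> b = dispatch("service2");
        CompletableFuture<String> c = dispatch("service3");

        // runs once all three responses have arrived; no thread blocks in get()
        CompletableFuture.allOf(a, b, c)
                .thenRun(() -> System.out.println(a.join() + " " + b.join() + " " + c.join()))
                .join(); // only for the demo, to keep main alive
    }
}
```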

WebSocket async send can result in blocked send once queue filled

I have a pretty simple Jetty-based WebSocket server, responsible for streaming small binary messages to connected clients.
To avoid any blocking on the server side I was using the sendBytesByFuture method.
After increasing the load from 2 clients to 20, they stopped receiving any data. During troubleshooting I decided to switch to the synchronous send method and finally got a potential reason:
java.lang.IllegalStateException: Blocking message pending 10000 for BLOCKING
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.lockMsg(WebSocketRemoteEndpoint.java:130)
at org.eclipse.jetty.websocket.common.WebSocketRemoteEndpoint.sendBytes(WebSocketRemoteEndpoint.java:244)
The clients are not doing any calculations upon receiving data, so they can't be slow joiners.
So I am wondering: what can I do to solve this problem?
(using Jetty 9.2.3)
If the error message occurs from a synchronous send, then you have multiple threads attempting to send messages on the same RemoteEndpoint - something that isn't allowed per the protocol. Only 1 message at a time may be sent. (There is essentially no queue for synchronous sends)
If the error message occurs from an asynchronous send, then that means you have messages sitting in a queue waiting to be sent, yet you are still attempting to write more async messages.
Try not to mix synchronous and asynchronous at the same time (it would be very easy to accidentally produce output that becomes an invalid protocol stream).
Using Java Futures:
You'll want to use the Future objects returned by the sendBytesByFuture() and sendStringByFuture() methods to verify whether the message was actually sent (there could have been an error), and if enough unsent messages start to queue up, back off on sending more until the remote endpoint can catch up.
Standard Future behavior and techniques apply here.
Using Jetty Callbacks:
There is also the WriteCallback behavior available in the sendBytes(ByteBuffer,WriteCallback) and sendString(String,WriteCallback) methods that would call your own code on success/error, at which you can put some logic around what you send (limit it, send it slower, queue it, filter it, drop some messages, prioritize messages, etc. whatever you need)
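The "back off when too many are queued" logic can be sketched generically. This is not Jetty API - all names are made up - but the shape is the same: bound the number of in-flight sends with a semaphore that the completion callback releases, and drop (or delay) sends when the bound is hit:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedSender {
    private final Semaphore inFlight;            // caps unsent, queued messages
    private final ExecutorService io = Executors.newSingleThreadExecutor();

    public BoundedSender(int maxQueued) {
        this.inFlight = new Semaphore(maxQueued);
    }

    // returns false (message dropped) when the queue is full - the caller backs off
    public boolean trySend(byte[] message, Runnable onSuccess) {
        if (!inFlight.tryAcquire()) {
            return false;
        }
        io.submit(() -> {
            try {
                onSuccess.run();    // stand-in for the actual websocket write + callback
            } finally {
                inFlight.release(); // completion frees a queue slot
            }
        });
        return true;
    }

    public void shutdown() {
        io.shutdown();
    }
}
```

Instead of dropping, trySend could also block, lower priority, or coalesce messages, depending on what the stream can tolerate.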
Using Blocking:
Or you can just use blocking sends to never have too many messages queue up.

Stateless Blocking Server Design

A little help please.
I am designing a stateless server that will have the following functionality:
Client submits a job to the server.
Client is blocked while the server tries to perform the job.
The server will spawn one or multiple threads to perform the job.
The job either finishes, times out or fails.
The appropriate response (based on the outcome) is created, the client is unblocked and the response is handed off to the client.
Here is what I have thought of so far.
Client submits a job to the server.
The server assigns an ID to the job, places the job on a queue and then places the client on another queue (where it will be blocked).
Have a thread pool that will execute the job, fetch the result and appropriately create the response.
Based on ID, pick the client out of the queue (thereby unblocking it), give it the response and send it off.
Steps 1, 3 and 4 seem quite straightforward; however, any ideas about how to put the client in a queue and then block it? Also, any pointers that would help me design this puppy would be appreciated.
Cheers
Why do you need to block the client? It seems like it would be easier to return (almost) immediately (after performing initial validation, if any) and give the client a unique ID for the job. The client would then be able to either poll using said ID or, perhaps, provide a callback.
Blocking means you're holding on to a socket which obviously limits the upper number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over client code and can't make them poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by common thread pool. The workflow would basically be:
Create a thread pool (such as ThreadPoolExecutor)
For each client request:
If you have any parts of the job that you can execute in parallel, delegate them to the pool.
And / or do them in the current thread.
Wait until pooled job parts complete (if applicable).
Return results to client.
Shutdown the thread pool.
No IDs are needed per se; though you may need to use some sort of latch for 2.1 / 2.3 above.
Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal the submitted job parts (by flipping a flag) when the timeout is reached, and return immediately. You'll have to check said flag periodically and terminate your execution once it's flipped; the pool will then reclaim the thread.
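The block-with-timeout mechanics from the steps above can be sketched as follows (class and method names are hypothetical; the "client" is whichever thread services the connection, which blocks in get() until the job finishes, fails, or times out):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class JobServer {
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // The connection-handling thread blocks here until the job completes,
    // times out, or fails; the returned response then unblocks the client.
    public String handle(Callable<String> job, long timeoutMs) {
        Future<String> result = workers.submit(job);
        try {
            return result.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            result.cancel(true); // interrupt the worker
            return "TIMEOUT";
        } catch (Exception e) {
            return "FAILED";
        }
    }

    public void shutdown() {
        workers.shutdown();
    }
}
```

Note that Future.get(timeout) gives the "more or less precise" timeout without any flag polling, at the cost of keeping the handling thread parked.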
How are you communicating to the client?
I recommend you create an object to represent each job which holds job parameters and the socket (or other communication mechanism) to reach the client. The thread pool will then send the response to unblock the client at the end of job processing.
The timeouts will be somewhat tricky and will have hidden gotchas, but the basic design seems straightforward: write a class that takes a Socket in the constructor. On socket.accept() you just instantiate a new socket-processing object (with great foresight and planning on scalability; if this is a bench-test experiment, the socket-processing class just goes straight to the data-processing work). When it returns, you have some sort of boolean or numeric for the state (a handy place for null, by the way), and it either writes the success to the OutputStream from the socket or informs the client of a timeout, or whatever your business needs are.
If you have to have a scalable, effective design for long-running heavy haulers, go directly to NIO... hand-coded one-off solutions like I describe probably won't scale well, but they provide a fundamental conceptual basis for an NIO design of code-correct work.
(Sorry folks, I think directly in code - design patterns are applied to the code after it is working. What does not hold up gets reworked then, not before.)
