Long running processes inside service invoked from actor - java

We are using Akka actors in our application. Actors invoke services, and in some cases those services call REST APIs exposed by a third party. The third-party APIs are taking a very long time to respond, and during peak time the system throughput suffers because threads are blocked waiting for the client API to respond.
Sometimes during peak time, because threads are waiting, messages just sit in the Akka mailbox for a long time and are only picked up once blocked threads become available again.
I am looking for a solution that improves the throughput of the system and frees up threads so that actor messages can start processing.
I am thinking of changing the REST API call from a blocking to a non-blocking call using a Future, and creating a dedicated actor for this kind of REST API invocation. The actor would periodically check whether the Future has completed and then send a completion message, after which the rest of the process can continue. This way the number of blocked threads would be reduced and they would be available for actor message processing.
I would also like a separate execution context for actors that perform blocking operations; as of now we have a global execution context.
Need further inputs on this.

Firstly, you are correct that you should not block when processing a message, so you need to start an asynchronous process and return immediately from the message handler. The Actor can then send another message when the process completes.
You don't say what technology you are using for the REST API calls, but libraries like Akka HTTP will return a Future when making an HTTP request, so you shouldn't need to add your own Future around it. E.g.:
// Requires an implicit ActorSystem/Materializer and ExecutionContext in scope,
// plus an Unmarshaller for Status (e.g. via akka-http-spray-json).
def getStatus(): Future[Status] =
  Http().singleRequest(HttpRequest(uri = "http://mysite/status")).flatMap { res =>
    Unmarshal(res).to[Status]
  }
If for some reason you are using a synchronous library, wrap the call in Future(blocking{ ??? }) to notify the execution context that it needs to create a new thread. You don't need a separate execution context unless you need to control the scheduling of threads for this operation.
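If your service code is on the Java side (the question is tagged java), a rough equivalent of Future(blocking { ... }) is to run the blocking call on a pool reserved for that purpose; a minimal sketch, where legacyClient is a hypothetical synchronous REST client:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Pool reserved for blocking calls, so the actor dispatcher's threads stay free.
ExecutorService blockingPool = Executors.newFixedThreadPool(16);

// legacyClient.getStatus() is a hypothetical synchronous REST call.
CompletableFuture<Status> statusFuture =
        CompletableFuture.supplyAsync(() -> legacyClient.getStatus(), blockingPool);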
You also don't need to poll the Future: simply add an onComplete handler that sends an internal message back to your actor, which then processes the result of the API call.
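With Akka's Java API the usual shorthand for that handler is the pipe pattern; a minimal sketch, assuming getStatus() has been reworked to return a CompletionStage&lt;Status&gt; and this code runs inside an AbstractActor (on some Akka versions the CompletionStage overload lives in PatternsCS instead):

import akka.pattern.Patterns;

// Forward the eventual result to ourselves as an ordinary message.
// On success the Status arrives as a message; on failure an
// akka.actor.Status.Failure is delivered instead.
Patterns.pipe(getStatus(), getContext().dispatcher()).to(getSelf());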

Related

gRPC Concurrency for Stubs

In gRPC I would like some more information on the way the server handles requests.
Are requests executed sequentially, or does the server spawn a new thread for each request and execute them in parallel? Is there a way to modify this behavior? I understand that in client-streaming RPCs, message order is guaranteed.
If I send Request A followed by Request B to the same RPC, is it guaranteed that A will be executed before B begins processing? Or are they each given their own thread and executed in parallel, with no guarantee that A finishes before B?
Ideally I would like to send a request to the server, have the server acknowledge receipt of the request, and then have the request added to a queue to be processed sequentially, returning a response once it has been processed. An approach I was exploring is to use an external task queue (like RabbitMQ) to queue the work done by the service, but I want to know if there is a better approach.
Also -- on a somewhat related note -- does gRPC have a native retry counter mechanism? I have a particularly error-prone RPC that may have to retry up to 3 times (with an arbitrary delay between retries) before it is successful. This is something that could be implemented with RabbitMQ as well.
grpc-java passes RPCs to the service using the Executor provided by ServerBuilder.executor(Executor), or a cached thread pool if no executor is provided.
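For example, to bound the number of threads handling RPCs rather than relying on the cached default (a sketch; MyServiceImpl stands in for your generated service implementation):

import io.grpc.Server;
import io.grpc.ServerBuilder;
import java.util.concurrent.Executors;

public class GrpcServerMain {
    public static void main(String[] args) throws Exception {
        Server server = ServerBuilder.forPort(8080)
                // RPCs are handed off to this executor instead of a cached pool
                .executor(Executors.newFixedThreadPool(16))
                .addService(new MyServiceImpl()) // hypothetical service implementation
                .build()
                .start();
        server.awaitTermination();
    }
}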
There is no ordering between simultaneous RPCs. RPCs can arrive in any order.
You could use a server-streaming RPC to allow the server to respond twice, once for acknowledgement and once for completion. You can use a oneof in the response message to allow sending the two different responses.
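A sketch of what the handler could look like, assuming hypothetical generated types JobRequest and JobResponse, with the response's oneof carrying either an Ack or a Result:

import io.grpc.stub.StreamObserver;

// Server-streaming handler: the first message acknowledges receipt,
// the second carries the final result, then the stream is closed.
@Override
public void processJob(JobRequest req, StreamObserver<JobResponse> out) {
    out.onNext(JobResponse.newBuilder()
            .setAck(Ack.newBuilder().setJobId(req.getJobId()))
            .build());

    Result result = doWork(req); // hypothetical long-running processing

    out.onNext(JobResponse.newBuilder().setResult(result).build());
    out.onCompleted();
}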
grpc-java has experimental retry support. gRFC A6 describes the design. The configuration is delivered to the client via the service config. Retries are disabled by default, so overall you would want something like channelBuilder.defaultServiceConfig(serviceConfig).enableRetry(). You can also reference the hedging example, which is very similar to retries.
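A sketch of that wiring, following the JSON-style service config described in gRFC A6 (the service name and backoff values are illustrative; note that numeric fields must be doubles, and maxAttempts counts the original call, so 4 allows up to 3 retries):

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import java.util.List;
import java.util.Map;

Map<String, Object> retryPolicy = Map.of(
        "maxAttempts", 4.0,
        "initialBackoff", "0.5s",
        "maxBackoff", "10s",
        "backoffMultiplier", 2.0,
        "retryableStatusCodes", List.of("UNAVAILABLE"));

Map<String, Object> serviceConfig = Map.of(
        "methodConfig", List.of(Map.of(
                "name", List.of(Map.of("service", "my.package.MyService")),
                "retryPolicy", retryPolicy)));

ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8080)
        .defaultServiceConfig(serviceConfig)
        .enableRetry()
        .build();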

Designing non-real time, non-blocking, result-dependent system

Context:
1) We have a scheduler which picks up jobs and processes them by making another REST call in a blocking manner.
2) The scheduler thread needs to wait for the REST call to complete and in turn do another task based upon the result.
3) There is no constraint for this to be real time.
Problem Statement:
1) What we want is to free scheduler threads as soon as an external call is made, since the external call takes significant time to complete.
2) We need to be informed about the result received from the external call, as we have to do some processing based on it.
Idea in my mind:
1) Rather than calling the external system using a synchronous HTTP call, we can push the event to a queue.
2) The API consumer of the other system will read the event from the queue and do the long-running task. After processing, it pushes the result back to the queue on a different topic.
3) Our system can then read the response from the queue (second topic) and take the necessary actions.
This is one of the design approaches that comes to my mind.
I need advice on whether we can improve the design somehow.
1) Can this be done without introducing a queue?
2) Is there a better way to achieve the asynchronous processing?
If you want to avoid using a queue, I can think of two other alternatives:
1) Rather than calling the external system using a synchronous HTTP call, we can push the event to a queue.
alternative a)
you do a synchronous HTTP GET to tell the other system that you want a certain job to be executed (the other system replies quickly with a "200 OK" to confirm that it received the request).
alternative b)
you do a synchronous HTTP GET to tell the other system that you want a certain job to be executed (the other system replies quickly with a "200 OK" and a unique ID identifying the job to be executed).
2) The API consumer of the other system will read the event from the queue and do the long-running task. After processing, it pushes the result back to the queue on a different topic.
3) Our system can then read the response from the queue (second topic) and take the necessary actions.
alternative a)
upon receiving the request, the other system performs the long-running computation, and when it is ready it makes a synchronous HTTP call back to your original system to report that the job is done.
alternative b)
upon receiving the request, the other system performs the long-running computation.
the original system doesn't know whether the job is done, so it polls at intervals (doing a synchronous HTTP GET against a different REST API), providing the job ID, to find out whether the job is ready.
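A rough sketch of alternative b) from the caller's side, using java.net.http.HttpClient (the endpoint path, the submitJob helper, and the "DONE" marker are all hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

HttpClient client = HttpClient.newHttpClient();

String jobId = submitJob(client); // hypothetical call that returns the job ID

// Poll the (hypothetical) status endpoint until the job reports completion.
while (true) {
    HttpResponse<String> res = client.send(
            HttpRequest.newBuilder(URI.create("http://other-system/jobs/" + jobId)).build(),
            HttpResponse.BodyHandlers.ofString());
    if ("DONE".equals(res.body())) {
        break;
    }
    Thread.sleep(5_000); // back off between polls
}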

Play framework running long blocking tasks, without blocking the client

The web client will be blocked while waiting for the response, but nothing will be blocked on the server, and server resources can be used to serve other clients.
Some of the client requests require my server to execute long blocking tasks. I understand that I can execute them in a separate thread pool.
But I also do not want the client to be blocked. I just want to return an immediate response to the client (e.g. "OK, got your long blocking task"). The client does not care about getting the result of the task execution; it just needs to know that I am working on executing it.
How can I implement this behavior in Play?
I think I can create a job queue and use another thread to process it, where the Play controller only adds the job to the queue and the other thread executes jobs from the queue. Should I do that? Should I use an Akka actor? (I do not know Akka; I would need to learn it.)
Callbacks
It all started with the callbacks.
You have surely seen this:
Something.save(function(err) {
    if (err) {
        // error handling
        return;
    }
    console.log('success');
});
This is defining a callback in JavaScript: something which is going to be executed asynchronously. Thanks to their syntax, implementation, and what-not, callbacks are not really your friend. Overusing them can lead to the dreaded callback hell.
Promises
In this context: Promises in ES6
Something.save()
    .then(function() {
        console.log('success');
    })
    .catch(function() {
        // error handling
    });
Promises are not an 'ES6 thing'; they have existed for many years, and ES6 is simply bringing them to you. Promises are nice, and you can even chain them:
saveSomething()
    .then(updateOtherthing)
    .then(deleteStuff)
    .then(logResults);
But enough with async for the insane.
WebSocket
WebSocket is something I would recommend:
as of today very well supported
wonderful support in Play 2.x
full duplex TCP
you finally can find time to learn Akka ;)
So you can create a client which opens a WebSocket connection to the Play application. On the server side you can handle WebSocket connections either with Akka actors (which I recommend) or with callbacks on streams. Using actors is really easy and also fun: you define an actor, and the moment someone opens a WebSocket connection an instance of this actor is spawned. Every message received on the WebSocket channel is then received by the actor, so you can concentrate on your business logic without thinking about the surroundings, and then send the message back, which is something Akka excels at.
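As a rough sketch with Play's Java API (newer Play releases; MyWsActor is a hypothetical actor class, and actorSystem/materializer would be injected into the controller):

import play.libs.streams.ActorFlow;
import play.mvc.WebSocket;

// Each incoming connection spawns one MyWsActor instance; "out" is the
// ActorRef used to push messages back to the client.
public WebSocket socket() {
    return WebSocket.Text.accept(request ->
            ActorFlow.actorRef(out -> MyWsActor.props(out), actorSystem, materializer));
}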

How Do Applications handle Asynchronous Responses - via Callback

I have been doing Java for a few years, but I have not had much experience with asynchronous programming.
I am working on an application that makes SOAP web service calls to some synchronous web services, and currently my consuming application is synchronous as well, i.e. my application's threads block while waiting for the response.
I am trying to learn how to handle these SOAP calls in an asynchronous way, just for the hell of it, but I have some high-level questions which I can't seem to find any answers to.
I am using CXF, but my question is not specifically about CXF or SOAP; it is higher-level, in terms of asynchronous application architecture, I think.
What I want to know (working through a scenario), at a high level, is:
So I have a Thread (A) running in my JVM that makes a call to a remote web service
It registers a callback method and returns a Future
Thread (A) has done its bit and gets returned to its pool once it has returned the Future
The remote web service response returns and Thread (B) gets allocated and calls the callback method (which generally populates the Future with a result I believe)
Q1. I can't get my head out of the blocking-thread model: if Thread (A) is no longer listening on that network socket, then how does the response that comes back from the remote service get allocated to Thread (B)? Is it simply treated as a new request coming into the server/container, which then allocates a thread to service it?
Q2. Closely related to Q1, I imagine: if no thread has the Future, or the handler (with its callback method), on its stack, then how does the response from the remote web service get associated with the callback method it needs to call?
Or, to ask it another way, how does Thread B (now dealing with the response) get a reference to the Future/Callback object?
Very sorry my question is so long - and thanks to anyone who gave their time to read through it! :)
I don't see why you'd add all this complexity using asynchronous threading.
The way to design an asynchronous SOAP service:
You have one service sending out a response to a given client / clients.
Those clients work on the response asynchronously.
When done, they call another SOAP method to return their response.
The response is just stored in a queue (e.g. a database table), without any extra logic. You'd have a "Worker" service working on the incoming tasks. If a response is needed again, another method on the other remote service is called. The requests I would store as events in the database, to be handled asynchronously later by an EventHandler. See
Hexagonal Architecture:
https://www.youtube.com/watch?v=fGaJHEgonKg
Your Q1 and Q2 seem to have more to do with multithreading than with asynchronous calls.
The magic of asynchronous web service calls is that you don't have to worry about multithreading to handle blocking while waiting for a response.
It's a bit unclear from the question what the specific problem statement is (i.e., what you are hoping to have your application do while blocking, or rather than blocking), but here are a couple of ways you could use asynchronous web service calls that allow you to do other work.
For the following cases, assume that the dispatch() method calls Dispatch.invokeAsync(T msg, AsyncHandler<T> handler) and returns a Future:
1) Dispatch multiple web service requests, so that they run in parallel:
If you have multiple services to consume and they can all execute independently, dispatch them all at once and process the responses when you have received them all.
ArrayList<Future<?>> futures = new ArrayList<Future<?>>();
futures.add(serviceToConsume1.dispatch());
futures.add(serviceToConsume2.dispatch());
futures.add(serviceToConsume3.dispatch());
// now wait until all services return
for (Future<?> f : futures) {
    f.get();
}
// now use responses to continue processing
2) Polling:
Future<?> f = serviceToConsume.dispatch();
while (!f.isDone()) {
    // do other work here
}
// now use response to continue processing

Time restricted service

I'm developing an app that makes requests to the MusicBrainz web service. I read in the MusicBrainz manual not to make more than one request per second to the web service, or the client IP will be blocked.
What architecture do you suggest in order to make this restriction transparent to the service client?
I would like to call a method (getAlbuns, for example) and it should only make the request one second after the last request.
I also want to issue 10 requests at once and have the service handle the queueing, returning the results when available (non-blocking).
Thanks!
Because of the required delay between invocations, I'd suggest java.util.Timer or java.util.concurrent.ScheduledThreadPoolExecutor. Timer is very simple and perfectly adequate for this use case, but if additional scheduling requirements are identified later, a single Executor could handle all of them. In either case, use a fixed-delay method, not a fixed-rate method.
The recurring task polls a concurrent queue for a request object. If there is a pending request, the task executes it, and returns the result via a callback. The query for the service and the callback to invoke are members of the request object.
The application keeps a reference to the shared queue. To schedule a request, simply add it to the queue.
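A minimal sketch of that arrangement with a ScheduledThreadPoolExecutor, where Request is a hypothetical type pairing the query with its callback:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

ConcurrentLinkedQueue<Request> queue = new ConcurrentLinkedQueue<>();
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

// Fixed delay, not fixed rate: at most one service request per second.
scheduler.scheduleWithFixedDelay(() -> {
    Request req = queue.poll(); // null when nothing is pending
    if (req != null) {
        Result result = callService(req.query()); // hypothetical blocking call
        req.callback().accept(result);            // deliver the result
    }
}, 0, 1, TimeUnit.SECONDS);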
Just to clarify: if the queue is empty when the scheduled task executes, no request is made. The simple approach would be just to end the task; the scheduler will invoke the task one second later to check again.
However, this means that it could take up to one second to start a task, even if no requests have been processed lately. If this unnecessary latency is intolerable, writing your own thread is probably preferable to using Timer or ScheduledThreadPoolExecutor. In your own timing loop, you have more control over the scheduling if you choose to block on an empty queue until a request is available. The built-in timers aren't guaranteed to wait a full second after the previous execution finished; they generally schedule relative to the start time of the task.
If this second case is what you have in mind, your run() method will contain a loop. Each iteration starts by blocking on the queue until a request is received, then recording the time. After processing the request, the time is checked again; if the difference is less than one second, sleep for the remainder. This setup assumes that the one-second delay is required between the start of one request and the next. If the delay is required between the end of one request and the start of the next, you don't need to check the time; just sleep for one second.
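That loop might look like the following sketch (process() is the hypothetical request handler, queue the shared BlockingQueue):

@Override
public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            Request req = queue.take(); // block until a request arrives
            long start = System.nanoTime();
            process(req); // hypothetical request handler
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs < 1_000) {
                Thread.sleep(1_000 - elapsedMs); // pad the start-to-start gap to one second
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag and exit
        }
    }
}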
One more thing to note is that the service might be able to accept multiple queries in a single request, which would reduce overhead. If it does, take advantage of it by blocking on take() for the first element, then using poll(), perhaps with a very short blocking time (5 ms or so), to see whether the application is making any more requests. If so, these can be bundled up into a single request to the service. If queue is a BlockingQueue<? extends Request>, it might look something like this:
Collection<Request> bundle = new ArrayList<Request>();
bundle.add(queue.take());
while (bundle.size() < BUNDLE_MAX) {
    Request req = queue.poll(EXTRA, TimeUnit.MILLISECONDS);
    if (req == null) {
        break;
    }
    bundle.add(req);
}
/* Now make one service request with the contents of "bundle". */
You need to define a local "proxy service" which your local clients will call.
The local proxy will receive requests and pass them on to the real service, but only at the rate of one message per second.
How you do this depends very much on the technology available to you.
The simplest would be a multithreaded Java service with a static, synchronized "long lastRequestTime;" timestamp variable (although you would need some code acrobatics to keep your requests in sequence).
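That simplest variant might look like this sketch (RealService and its call method are hypothetical):

public class RateLimitedProxy {
    private static final RealService realService = new RealService(); // hypothetical client
    private static long lastRequestTime = 0L;

    // One synchronized gate in front of the real service: callers queue up
    // here and get through at most once per second.
    public static synchronized Response call(Request req) throws InterruptedException {
        long wait = lastRequestTime + 1_000 - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastRequestTime = System.currentTimeMillis();
        return realService.call(req); // hypothetical blocking call
    }
}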
A more sophisticated service could have worker threads receiving the requests and placing them on a queue, with a single thread picking up the requests and passing them on to the real service.
