I have a Controller that must accept a request and return a response immediately, without waiting for the processing to finish.
@PostMapping
@ResponseStatus(HttpStatus.CREATED)
public void processEvent(@RequestBody RequestMyObjectDTO requestMyObjectDTO) {
    MyProcessor.process(requestMyObjectDTO);
}
After I give a response, I must execute the processing.
@Async
public void process(RequestMyObjectDTO myRequestObject) {
    List<TestObject> testObjects = repository.findAllTestObject();
    if (testObjects.isEmpty()) {
        return;
    }
    ...
}
Does it make a difference where I call the database: inside the asynchronous method, or outside it (in my case in the Controller, for example)?
How does it impact behavior, and which approach is better?
Given that I need a check:
List<TestObject> testObjects = repository.findAllTestObject();
if (testObjects.isEmpty()) {
    return;
}
At the same time, I expect that the controller may receive millions of requests.
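For reference, here is how the fragments above fit together in a minimal, compilable sketch (the configuration class, the constructor wiring, and the request path are assumptions added for completeness; RequestMyObjectDTO, TestObject and TestObjectRepository are the types from the snippets above):
import java.util.List;

import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

// @Async is only honored when async support is enabled somewhere in the configuration.
@Configuration
@EnableAsync
class AsyncConfig {
}

@RestController
class EventController {

    private final MyProcessor myProcessor; // injected bean (assumption: process() is an instance method)

    EventController(MyProcessor myProcessor) {
        this.myProcessor = myProcessor;
    }

    @PostMapping("/events") // the path is an assumption; the original snippet did not show one
    @ResponseStatus(HttpStatus.CREATED)
    public void processEvent(@RequestBody RequestMyObjectDTO requestMyObjectDTO) {
        // returns 201 immediately; the actual work continues on the async executor
        myProcessor.process(requestMyObjectDTO);
    }
}

@Service
class MyProcessor {

    private final TestObjectRepository repository; // repository type as used in the question

    MyProcessor(TestObjectRepository repository) {
        this.repository = repository;
    }

    @Async
    public void process(RequestMyObjectDTO myRequestObject) {
        // this query runs on the async executor thread, not on the request-handling thread
        List<TestObject> testObjects = repository.findAllTestObject();
        if (testObjects.isEmpty()) {
            return;
        }
        // ... rest of the processing
    }
}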
Is it possible with Project Reactor to wait in a Mono for an event / condition without needing a blocking thread per Mono? With a CompletableFuture I can pull such a thing off, but I can't see how to do it with Project Reactor.
My problem is that I need to correlate requests with responses. The response time varies wildly, and some will never even get a reply and will time out. On the client side a blocking thread per request isn't a problem, but since this is a server application I don't want to end up spawning a thread per request that blocks waiting for a response.
The API looks something like this:
Mono<Response> doRequest(Mono<Request> request);
Since I don't know how to do it with Reactor I will explain how to do it with a CompletableFuture to clarify what I'm looking for. The API would look like this:
CompletableFuture<Response> doRequest(Request request);
When invoked by a caller, a request to a server is made which contains a correlation ID generated by this method. The caller is returned a CompletableFuture, and the method stores a reference to this CompletableFuture in a map with the correlation ID as the key.
There is also a thread (pool) which receives all the responses from the server. When it receives a response, it takes the correlation ID from the response, uses it to look up the original request (i.e. the CompletableFuture) in the map, and calls complete(response) on it.
In this implementation you don't need a blocking thread per request. This is basically more of a Vert.x / Netty way of thinking. I would like to know how to implement such a thing (if possible) with Project Reactor.
EDIT 25-07-2019:
As requested in the comments, to clarify what I'm getting at, below is an example of how I would implement this with CompletableFutures.
I also noticed I made a mistake which might have been rather confusing: In the CompletableFuture example I passed a Mono as argument. That should have been just a "normal" argument. My apologies and I hope I didn't confuse people too much with it.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class NonBlockingCorrelatingExample {

    /**
     * This example shows how to implement correlating requests with responses without needing a (sleeping)
     * thread per request to wait for the response, with the use of {@link CompletableFuture}s.
     *
     * So the main feat of this example is that there is always a fixed (small) number of threads used, even if one
     * were to fire thousands of requests.
     */
    public static void main(String[] args) throws Exception {
        RequestResponseService requestResponseService = new RequestResponseService();

        Request request = new Request();
        request.correlationId = 1;
        request.question = "Do you speak Spanish?";

        CompletableFuture<Response> responseFuture = requestResponseService.doRequest(request);
        responseFuture.whenComplete((response, throwable) -> System.out.println(response.answer));

        // The blocking call here is just so the application doesn't exit until the demo is completed.
        responseFuture.get();
    }

    static class RequestResponseService {

        /** The key in this map is the correlation ID. */
        private final ConcurrentHashMap<Long, CompletableFuture<Response>> responses = new ConcurrentHashMap<>();

        CompletableFuture<Response> doRequest(Request request) {
            Response response = new Response();
            response.correlationId = request.correlationId;

            CompletableFuture<Response> responseFuture = new CompletableFuture<>();
            responses.put(response.correlationId, responseFuture);

            doNonBlockingFireAndForgetRequest(request);

            return responseFuture;
        }

        private void doNonBlockingFireAndForgetRequest(Request request) {
            // In my case this is where the request would be published on an MQTT broker (message bus) in a request topic.
            // Right now we will just make a call which will simulate a response message coming in after a while.
            simulateResponses();
        }

        private void processResponse(Response response) {
            // There would usually be a (small) thread pool which is subscribed to the message bus which receives messages
            // in a response topic and calls this method to handle those messages.
            CompletableFuture<Response> responseFuture = responses.get(response.correlationId);
            responseFuture.complete(response);
        }

        void simulateResponses() {
            // This is just to make the example work. Not part of the example.
            new Thread(() -> {
                try {
                    // Simulate a delay.
                    Thread.sleep(10_000);

                    Response response = new Response();
                    response.correlationId = 1;
                    response.answer = "Si!";

                    processResponse(response);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }

    static class Request {
        long correlationId;
        String question;
    }

    static class Response {
        long correlationId;
        String answer;
    }
}
Yes, it is possible. You can use the reactor.core.publisher.Mono#create method to achieve it.
For your example:
public static void main(String[] args) throws Exception {
    RequestResponseService requestResponseService = new RequestResponseService();

    Request request = new Request();
    request.correlationId = 1;
    request.question = "Do you speak Spanish?";

    Mono<Request> requestMono = Mono.just(request)
            .doOnNext(rq -> System.out.println(rq.question));

    requestResponseService.doRequest(requestMono)
            .doOnNext(response -> System.out.println(response.answer))
            // The blocking call here is just so the application doesn't exit until the demo is completed.
            .block();
}

static class RequestResponseService {

    private final ConcurrentHashMap<Long, Consumer<Response>> responses =
            new ConcurrentHashMap<>();

    Mono<Response> doRequest(Mono<Request> request) {
        return request.flatMap(rq -> doNonBlockingFireAndForgetRequest(rq)
                .then(Mono.create(sink -> responses.put(rq.correlationId, sink::success))));
    }

    private Mono<Void> doNonBlockingFireAndForgetRequest(Request request) {
        return Mono.fromRunnable(this::simulateResponses);
    }

    private void processResponse(Response response) {
        responses.get(response.correlationId).accept(response);
    }

    void simulateResponses() {
        // This is just to make the example work. Not part of the example.
        new Thread(() -> {
            try {
                // Simulate a delay.
                Thread.sleep(10_000);

                Response response = new Response();
                response.correlationId = 1;
                response.answer = "Si!";

                processResponse(response);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }).start();
    }
}
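Since the question mentions that some requests never get a reply and should time out, the returned Mono can additionally be combined with Reactor's standard timeout operator. This is only a suggestion on top of the answer above (cleaning up the corresponding entry in the responses map is left out here):
Mono<Response> responseWithTimeout = requestResponseService.doRequest(requestMono)
        // errors the Mono with a TimeoutException if no response arrives within 30 seconds
        .timeout(Duration.ofSeconds(30));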
I have an HTTP request that triggers a long-running task (multiple HTTP requests to another service) that is supposed to be completed in the background while the original request completes.
So what I do is
public ResponseEntity<Void> triggerWork(@RequestBody SomeObject someObject) {
    startWorkAndReturn(someObject);
    return new ResponseEntity<>(HttpStatus.OK);
}

public void startWorkAndReturn(SomeObject someObject) {
    Observable.create(observableEmitter -> {
        // do the work with someObject here and at some time call
        observableEmitter.onNext("result");
    }).subscribe(new Observer<Object>() {
        @Override
        public void onSubscribe(Disposable disposable) {
        }

        @Override
        public void onNext(Object o) {
            // called at some unknown time
        }

        @Override
        public void onError(Throwable throwable) {
        }

        @Override
        public void onComplete() {
            // currently not used as all the work is done in onNext but maybe that's a mistake
        }
    });
}
But this seems to block the request until all the work has been done, which already seems odd to me since I never call onComplete (which in itself might be a mistake). But still, I am wondering how to create a request that returns immediately after triggering a background worker.
Are Flowables the solution here? I am going to refactor to those anyway to handle backpressure. Or do I need to create a background worker thread? What is the best practice here?
Thanks
I would use Observable.fromCallable() since you only need to emit a single event; that also takes care of the onComplete call for you. From the information you share I can't tell how you could properly handle the disposable. You should add the subscribeOn() and observeOn() operators, which define on which thread the work is processed and on which thread the result is observed (see the sketch after the doc links below).
Docs ref:
http://reactivex.io/RxJava/javadoc/io/reactivex/Observable.html#fromCallable-java.util.concurrent.Callable-
http://reactivex.io/documentation/operators/subscribeon.html
http://reactivex.io/documentation/operators/observeon.html
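A minimal sketch of what that could look like, assuming RxJava 2 and that doWork(someObject) is the long-running part (the method name and the controller shape are illustrative, not taken from the question):
import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public ResponseEntity<Void> triggerWork(@RequestBody SomeObject someObject) {
    Observable.fromCallable(() -> doWork(someObject))    // emits a single item, then calls onComplete
            .subscribeOn(Schedulers.io())                // the work runs on an io() scheduler thread
            .subscribe(
                    result -> System.out.println("work finished: " + result),
                    Throwable::printStackTrace);

    // subscribe() returns immediately here, so the HTTP response is not delayed by the work
    return new ResponseEntity<>(HttpStatus.OK);
}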
I am trying to implement a generic solution for a third-party API (which works asynchronously), but I have no idea how to expose a synchronous call to the rest of my application. Basically, the API takes a request, processes it, and gives a response when it finishes, while staying open to receive other requests in the meantime. So I put the API response method in a thread that continuously monitors whether there is a response, by calling the API response method at an interval.
The API has an interface to take a request, like:
public void api_execute(String UUID, String request);
API response object:
public class APIReponse {
    private String UUID;
    private String response_string;
    // getters and setters
}
I want to write a wrapper on top of this API with a single method, so that different objects of my application can use it to send a request and receive a response. The UUID will be created by this wrapper class, but I don't see how to make a caller wait until a response is received, nor how to distinguish which caller sent which request. I was thinking of using the observer pattern here, but it doesn't seem to fit this scenario. Can someone give me a hint on how I can implement this?
You can create an async task executor using a thread pool:
ExecutorService threadpool = Executors.newFixedThreadPool(3);

public Future<APIReponse> submitTask(APIRequest request) throws InterruptedException, ExecutionException {
    System.out.println("Submitting Task ...");
    Future<APIReponse> future = threadpool.submit(new Callable<APIReponse>() {
        @Override
        public APIReponse call() throws Exception {
            api_execute(request, UUID);
            return new APIReponse();
        }
    });
    return future;
}
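The caller can then block on the returned Future until the response arrives, or use a timed get; for example (an illustrative fragment only):
Future<APIReponse> future = submitTask(request);
// get() blocks the calling thread until the API has produced a response (here with a 30 second timeout)
APIReponse response = future.get(30, TimeUnit.SECONDS);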
I'm using Square's Retrofit client to make short-lived JSON requests from an Android app. Is there a way to cancel a request? If so, how?
For canceling an async Retrofit request, you can achieve it by shutting down the ExecutorService that performs the async request.
For example I had this code to build the RestAdapter:
Builder restAdapter = new RestAdapter.Builder();
restAdapter.setEndpoint(BASE_URL);
restAdapter.setClient(okClient);
restAdapter.setErrorHandler(mErrorHandler);

mExecutorService = Executors.newCachedThreadPool();
restAdapter.setExecutors(mExecutorService, new MainThreadExecutor());
restAdapter.setConverter(new GsonConverter(gb.create()));
and had this method for forcefully abandoning the requests:
public void stopAll() {
    List<Runnable> pendingAndOngoing = mExecutorService.shutdownNow();
    // probably await for termination.
}
Alternatively, you could make use of ExecutorCompletionService and either poll(timeout, TimeUnit.MILLISECONDS) or take() all ongoing tasks. This avoids shutting the thread pool down, as shutdownNow() would, so you can keep reusing your ExecutorService (see the sketch below).
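A rough sketch of that alternative, assuming the requests are submitted as Callables through the completion service (names and the result type are placeholders):
private final ExecutorService mExecutorService = Executors.newCachedThreadPool();
private final ExecutorCompletionService<Object> mCompletionService =
        new ExecutorCompletionService<>(mExecutorService);

void submitRequest(Callable<Object> requestTask) {
    // tasks must go through the completion service so poll()/take() can see them when they finish
    mCompletionService.submit(requestTask);
}

void drainFinishedRequests() throws Exception {
    // poll(...) waits up to the timeout for the next finished task; take() would block until one is done
    Future<Object> done;
    while ((done = mCompletionService.poll(500, TimeUnit.MILLISECONDS)) != null) {
        Object result = done.get(); // result of the completed request (or rethrows its exception)
        // ... handle result; the pool itself is never shut down, so it can keep being reused
    }
}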
Hope it would be of help for someone.
Edit: As of the OkHttp 2.0 RC1 changelog, performing a .cancel(Object tag) is possible. We should expect the same feature in an upcoming Retrofit release.
You can use the actual Request object to cancel it:
okClient.cancel(request);
or, if you have supplied a tag to the Request.Builder, you have to use:
okClient.cancel(request.tag());
All ongoing, executed, or pending requests are queued inside the Dispatcher, okClient.getDispatcher(). You can call the cancel method on this object too. The cancel method will notify the OkHttp engine to kill the connection to the host, if it is already established.
Edit 2: Retrofit 2 has fully featured request canceling.
Wrap the callback in a delegate object that implements Callback as well. Call some method to clear out the delegate and have it just no-op whenever it gets a response.
Look at the following discussion
https://plus.google.com/107765816683139331166/posts/CBUQgzWzQjS
A better strategy would be canceling the callback execution:
https://stackoverflow.com/a/23271559/1446469
I've implemented a cancelable callback class based on the answer https://stackoverflow.com/a/23271559/5227676
public abstract class CancelableCallback<T> implements Callback<T> {

    private static List<CancelableCallback> mList = new ArrayList<>();

    private boolean isCanceled = false;
    private Object mTag = null;

    public static void cancelAll() {
        Iterator<CancelableCallback> iterator = mList.iterator();
        while (iterator.hasNext()) {
            iterator.next().isCanceled = true;
            iterator.remove();
        }
    }

    public static void cancel(Object tag) {
        if (tag != null) {
            Iterator<CancelableCallback> iterator = mList.iterator();
            CancelableCallback item;
            while (iterator.hasNext()) {
                item = iterator.next();
                if (tag.equals(item.mTag)) {
                    item.isCanceled = true;
                    iterator.remove();
                }
            }
        }
    }

    public CancelableCallback() {
        mList.add(this);
    }

    public CancelableCallback(Object tag) {
        mTag = tag;
        mList.add(this);
    }

    public void cancel() {
        isCanceled = true;
        mList.remove(this);
    }

    @Override
    public final void success(T t, Response response) {
        if (!isCanceled)
            onSuccess(t, response);
        mList.remove(this);
    }

    @Override
    public final void failure(RetrofitError error) {
        if (!isCanceled)
            onFailure(error);
        mList.remove(this);
    }

    public abstract void onSuccess(T t, Response response);

    public abstract void onFailure(RetrofitError error);
}
Usage example
rest.request(..., new CancelableCallback<MyResponse>(TAG) {
    @Override
    public void onSuccess(MyResponse myResponse, Response response) {
        ...
    }

    @Override
    public void onFailure(RetrofitError error) {
        ...
    }
});

// if you need to cancel all
CancelableCallback.cancelAll();
// or cancel by tag
CancelableCallback.cancel(TAG);
This is for Retrofit 2.0: the method call.cancel() is there, which cancels an in-flight call as well. Below is the documentation definition for it.
retrofit2.Call
public abstract void cancel()
Cancel this call. An attempt will be made to cancel in-flight calls, and if the call has not yet been executed it never will be.
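A minimal sketch of how that can be used, assuming a Retrofit 2 service interface with a getUser call (the service and model names are made up for illustration):
Call<User> call = service.getUser("someUser");

call.enqueue(new Callback<User>() {
    @Override
    public void onResponse(Call<User> call, Response<User> response) {
        // handle the response
    }

    @Override
    public void onFailure(Call<User> call, Throwable t) {
        // also invoked when the call is canceled
    }
});

// later, e.g. when the screen is destroyed, cancel the in-flight request
call.cancel();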
Now there is an easy way in the latest version of Retrofit, v2.0.0-beta2. You can implement retry too.
Take a look here: How to cancel ongoing request in retrofit when retrofit.client.UrlConnectionClient is used as client?
According to the Retrofit 2.0 beta 3 changelog at https://github.com/square/retrofit/releases/tag/parent-2.0.0-beta3:
New: isCanceled() method returns whether a Call has been canceled. Use this in onFailure to determine whether the callback was invoked from cancelation or actual transport failure.
This should make stuff easier.
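In practice that check could go inside onFailure of a retrofit2.Callback, for example (an illustrative fragment matching the sketch above):
@Override
public void onFailure(Call<User> call, Throwable t) {
    if (call.isCanceled()) {
        // the call was canceled deliberately, so there is usually nothing to report
        return;
    }
    // an actual transport failure, e.g. no network
    t.printStackTrace();
}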
I might be a bit late, but I've possibly found a solution.
I haven't been able to prevent a request from being executed, but if you're satisfied with the request being performed and then nothing being done with the result, you might check this question and answer, both written by me.