In an async servlet processing scenario, I want to support cancellation of requests.
(I am also hoping to keep this RESTful.)
Say I have code like this:
@RequestMapping("/quotes")
@ResponseBody
public void quotes() {
    //...
    final AsyncContext ac = request.startAsync();
    ac.setTimeout(0);
    RunJob job = new RunJob(ac);
    asyncContexts.add(job);
    pool.submit(job);
}
// In some other application-managed thread with a message-driven bean:
public void onMessage(Message msg) {
    //...
    if (notEndOfResponse) {
        ServletOutputStream out = ac.getResponse().getOutputStream();
        //...
        out.print(message);
    } else {
        ac.complete();
        asyncContexts.remove(ac);
    }
}
If the client decides to cancel this processing on the server side, it needs to send another HTTP request that identifies the earlier request; the server then cancels that request (i.e., stops server-side processing for it and completes its response).
Is there a standard way to do this?
If there is no standard way and each developer does it per their own will and skill, I would like to know whether my (trivial) approach to this problem is OK.
My way (after @Pace's suggestion) is:
Create a "requestId" on the server and return a URL/link as part of the first partial response (because I could get many partial responses for a single request as part of the async processing). The link could be, for example:
.../outstandingRequests/requestId
When it needs to cancel the request, the client does a DELETE on that URL and lets the server figure out how to achieve the cancellation on its end.
Any problems with this approach?
When using long-running operations/tasks in a RESTful sense, it is best to treat the operation itself as a resource. A POST to the operations URL returns a URL you can GET to check the status of that operation (including the results when the operation finishes), and a DELETE to that URL will terminate the operation.
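A minimal sketch of that pattern, assuming Spring MVC (the in-memory operations registry, the URL layout, and cancellation via interruption are illustrative assumptions, not a definitive implementation):

import java.net.URI;
import java.util.UUID;
import java.util.concurrent.*;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/operations")
public class OperationController {

    // hypothetical registry mapping operation ids to cancellable jobs
    private final ConcurrentMap<String, Future<?>> operations = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newCachedThreadPool();

    @PostMapping
    public ResponseEntity<Void> start() {
        String id = UUID.randomUUID().toString();
        operations.put(id, pool.submit(() -> doLongRunningWork(id)));
        // the Location header is the handle the client later GETs for status or DELETEs to cancel
        return ResponseEntity.accepted().location(URI.create("/operations/" + id)).build();
    }

    @DeleteMapping("/{id}")
    public ResponseEntity<Void> cancel(@PathVariable String id) {
        Future<?> job = operations.remove(id);
        if (job == null) {
            return ResponseEntity.notFound().build();
        }
        job.cancel(true); // interrupts the worker; the job must cooperate by checking interruption
        return ResponseEntity.noContent().build();
    }

    private void doLongRunningWork(String id) { /* ... */ }
}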
Related
I have a service where a couple of requests can be long-running actions. Occasionally these requests time out, and that leaves us in a bad state, because steps of the Flux stop executing after cancel is called when the client disconnects. Ideally we want the action to continue processing to completion.
I've seen "WebFlux - ignore 'cancel' signal" recommend using the cache method... Are there any better solutions, and/or drawbacks to using cache to achieve this?
There are some solutions for that.
One is to make it asynchronous: when you get the request from the customer, you can put it into a processor queue, e.g.
Sinks.Many<QueueTask<T>> queue = Sinks.many().multicast().onBackpressureBuffer();
When the request comes in from the customer, you just push it onto the queue, and the queue processes the items in the background. In this case, however, the customer will not get any response with the item's progress, unless you send it over a socket or the client makes another request after some time.
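A minimal sketch of that queue idea with Reactor (QueueTask, process, and the boundedElastic subscription are assumed names and choices, not from the original answer):

import reactor.core.publisher.Sinks;
import reactor.core.scheduler.Schedulers;

public class TaskQueue<T> {

    // buffered multicast sink: tasks are held until the background subscriber consumes them
    private final Sinks.Many<QueueTask<T>> queue =
            Sinks.many().multicast().onBackpressureBuffer();

    public TaskQueue() {
        // a background subscription that keeps running regardless of any HTTP client's cancel signal
        queue.asFlux()
             .publishOn(Schedulers.boundedElastic())
             .subscribe(this::process);
    }

    public void enqueue(QueueTask<T> task) {
        queue.tryEmitNext(task); // check the returned EmitResult in real code
    }

    private void process(QueueTask<T> task) {
        // the long-running work happens here, detached from the request lifecycle
    }

    // hypothetical wrapper for the queued work item
    public static class QueueTask<T> { /* payload, id, ... */ }
}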
Another option is to use a chunked/streaming HTTP response:
@GetMapping(value = "/sms-stream/{s}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> streamResponse(@PathVariable("s") String s) {
    return service.streamResponse(s);
}
In this case the connection stays open, and the server can close it automatically once processing is done.
I'm looking for an example like this but with a synchronous call. My program needs data from an external source and should wait until the response returns (or until a timeout).
The Play WS library is meant for asynchronous requests, and this is good!
Using it ensures that your server is not going to block and wait for some response (your client might be blocked, but that is a different topic).
Whenever possible you should opt for the async WS call. Keep in mind that you still get access to the result of the WS call:
public static Promise<Result> index() {
    final Promise<Result> resultPromise = WS.url(feedUrl).get().map(
        new Function<WS.Response, Result>() {
            public Result apply(WS.Response response) {
                return ok("Feed title: " + response.asJson().findPath("title"));
            }
        }
    );
    return resultPromise;
}
You just need to handle it a bit differently: you provide a mapping function, basically telling Play what to do with the result when it arrives. Then you move on and let Play take care of the rest. Nice, isn't it?
Now, if you really, really want to block, then you have to use another library to make the synchronous request. There is a sync variant of the Apache HTTP Client: https://hc.apache.org/httpcomponents-client-ga/index.html
I also like the Unirest library (http://unirest.io/java.html), which actually sits on top of the Apache HTTP Client and provides a nicer, cleaner API; you can then do things like:
Unirest.post("http://httpbin.org/post")
       .queryString("name", "Mark")
       .field("last", "Polo")
       .asJson();
As both are publicly available, you can add them as dependencies to your project by declaring them in the build.sbt file.
Alternatively, you can simply block on the call and wait for the response, with a timeout if you want:
WS.Response response = WS.url(url)
        .setHeader("Authorization", "BASIC base64str")
        .setContentType("application/json")
        .post(requestJsonNode)
        .get(20000); // 20 sec
JsonNode resNode = response.asJson();
In newer versions of Play, the response no longer has an asJson() method. Instead, Jackson (or any other JSON mapper) must be applied to the body string:
final WSResponse r = ...;
Json.mapper().readValue(r.getBody(), Type.class);
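For example, with the newer play.libs.ws API (the injected WSClient ws and the Quote target class are assumptions for illustration; exception handling omitted):

WSResponse r = ws.url(url)
        .get()
        .toCompletableFuture()
        .get(20, TimeUnit.SECONDS); // blocking here only for brevity
Quote quote = Json.mapper().readValue(r.getBody(), Quote.class);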
I'm playing around with Vert.x and am quite new to servers based on an event loop, as opposed to the thread/connection model.
public void start(Future<Void> fut) {
    vertx
        .createHttpServer()
        .requestHandler(r -> {
            LocalDateTime start = LocalDateTime.now();
            System.out.println("Request received - " + start.format(DateTimeFormatter.ISO_DATE_TIME));
            final MyModel model = new MyModel();
            try {
                for (int i = 0; i < 10000000; i++) {
                    // some simple operation
                }
                model.data = start.format(DateTimeFormatter.ISO_DATE_TIME) + " - "
                        + LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME);
            } catch (Exception e1) {
                e1.printStackTrace();
            }
            r.response().end(new Gson().toJson(model));
        })
        .listen(4568, result -> {
            if (result.succeeded()) {
                fut.complete();
            } else {
                fut.fail(result.cause());
            }
        });
    System.out.println("Server started ..");
}
I'm just trying to simulate a long-running request handler to understand how this model works.
What I've observed is that the so-called event loop is blocked until my first request completes. However little time it takes, a subsequent request is not acted upon until the previous one completes.
Obviously I'm missing a piece here, and that's the question I have.
Edited based on the answers so far:
Isn't accepting all requests supposed to be asynchronous? If a new connection can only be accepted when the previous one is cleared off, how is it async?
Assume a typical request takes anywhere between 100 ms and 1 sec (based on the kind and nature of the request). So it means the event loop can't accept a new connection until the previous request finishes (even if it winds up in a second). And if I as a programmer have to think through all these and push such request handlers to a worker thread, then how does it differ from a thread/connection model?
I'm just trying to understand how this model is better than traditional thread/connection server models. Assume there is no I/O op, or all the I/O ops are handled asynchronously. How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?
Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, aren't I? Context switching between threads?
Edits, and bumping this question for a bounty:
I do not completely understand how this model is claimed to be asynchronous.
Vert.x has an async JDBC client ("asynchronous" is the keyword) which I tried to adapt with RxJava.
Here is a code sample (relevant portions):
server.requestStream().toObservable().subscribe(req -> {
    LocalDateTime start = LocalDateTime.now();
    System.out.println("Request for " + req.absoluteURI() + " received - "
            + start.format(DateTimeFormatter.ISO_DATE_TIME));
    jdbc.getConnectionObservable().subscribe(
        conn -> {
            // Now chain some statements using flatMap composition
            Observable<ResultSet> resa = conn.queryObservable("SELECT * FROM CALL_OPTION WHERE UNDERLYING='NIFTY'");
            // Subscribe to the final result
            resa.subscribe(resultSet -> {
                req.response().end(resultSet.getRows().toString());
                System.out.println("Request for " + req.absoluteURI() + " Ended - "
                        + LocalDateTime.now().format(DateTimeFormatter.ISO_DATE_TIME));
            }, err -> {
                System.out.println("Database problem");
                err.printStackTrace();
            });
        },
        // Could not connect
        err -> err.printStackTrace()
    );
});
server.listen(4568);
The select query there takes approximately 3 seconds to return the complete table dump.
When I fire concurrent requests (tried with just 2), I see that the second request waits entirely for the first one to complete.
If the JDBC select is asynchronous, isn't it a fair expectation that the framework handle the second connection while it waits for the select query to return?
The Vert.x event loop is, in fact, a classical event loop that exists on many platforms. And of course, most explanations and docs can be found for Node.js, as it's the most popular framework based on this architecture pattern. Take a look at one more-or-less good explanation of the mechanics of the Node.js event loop. The Vert.x tutorial has a fine explanation between "Don't call us, we'll call you" and "Verticles" too.
Edit for your updates:
First of all, when you are working with an event loop, the main thread should work very quickly for all requests. You shouldn't do any long job in this loop. And of course, you shouldn't wait for the response to your database call. Instead:
- Schedule the call asynchronously
- Assign a callback (handler) to the result
- The callback will be executed on a worker thread, not the event loop thread. This callback will, for example, return a response to the socket.
So, your operations in the event loop should just schedule all asynchronous operations with callbacks and move on to the next request without awaiting any results.
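A minimal sketch of that shape with the Vert.x 3.x callback API (the file name is illustrative):

vertx.createHttpServer().requestHandler(req -> {
    // schedule the async operation and return immediately; the event loop moves on
    vertx.fileSystem().readFile("data.json", ar -> {
        // this handler runs later, once the I/O has completed
        if (ar.succeeded()) {
            req.response().end(ar.result());
        } else {
            req.response().setStatusCode(500).end();
        }
    });
}).listen(8080);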
Assume a typical request takes anywhere between 100 ms and 1 sec (based on the kind and nature of the request).
In that case, your request has some computationally expensive parts or access to IO; your code in the event loop shouldn't wait for the results of those operations.
I'm just trying to understand how this model is better than traditional thread/connection server models. Assume there is no I/O op, or all the I/O ops are handled asynchronously.
When you have too many concurrent requests and a traditional programming model, you create a thread per request. What will these threads do? They will mostly be waiting for IO operations (for example, a result from the database). It's a waste of resources. In our event loop model, you have one main thread that schedules operations and a preallocated number of worker threads for long tasks. None of these workers actually waits for a response; they can execute other code while waiting for an IO result (this can be implemented as callbacks or by periodically checking the status of IO jobs in progress). I would recommend going through Java NIO and Java NIO.2 to understand how this async IO can actually be implemented inside the framework. Green threads are a closely related concept that is also good to understand. Green threads and coroutines are a kind of shadowed event loop trying to achieve the same thing: fewer threads, because we can reuse a system thread while a green thread is waiting for something.
How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?
For sure we don't wait in the main thread to send the response for the previous request: get a request, schedule the long/IO tasks for execution, move to the next request.
Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, aren't I? Context switching between threads?
If you do everything right, no. Even better, you get good data locality and execution-flow prediction. One CPU core executes your short event loop and schedules async work without context switching, and nothing more. The other cores make calls to the database and return responses, and only that. Switching between callbacks or checking different channels for IO status doesn't actually require a system-thread context switch: it all happens in one worker thread. So we have one worker thread per core, and this one system thread awaits/checks the availability of results from, say, multiple database connections. Revisit the Java NIO concept to understand how it can work this way. (A classic example for NIO is a proxy server that accepts many parallel connections (thousands), proxies requests to other remote servers, listens for responses, and sends responses back to the clients, all using only one or two threads.)
About your code, I made a sample project to demonstrate that everything works as expected:
public class MyFirstVerticle extends AbstractVerticle {

    @Override
    public void start(Future<Void> fut) {
        JDBCClient client = JDBCClient.createShared(vertx, new JsonObject()
                .put("url", "jdbc:hsqldb:mem:test?shutdown=true")
                .put("driver_class", "org.hsqldb.jdbcDriver")
                .put("max_pool_size", 30));

        client.getConnection(conn -> {
            if (conn.failed()) { throw new RuntimeException(conn.cause()); }
            final SQLConnection connection = conn.result();
            // create a table
            connection.execute("create table test(id int primary key, name varchar(255))", create -> {
                if (create.failed()) { throw new RuntimeException(create.cause()); }
            });
        });

        vertx
            .createHttpServer()
            .requestHandler(r -> {
                int requestId = new Random().nextInt();
                System.out.println("Request " + requestId + " received");
                client.getConnection(conn -> {
                    if (conn.failed()) { throw new RuntimeException(conn.cause()); }
                    final SQLConnection connection = conn.result();
                    connection.execute("insert into test values ('" + requestId + "', 'World')", insert -> {
                        // query some data with arguments
                        connection
                            .queryWithParams("select * from test where id = ?", new JsonArray().add(requestId), rs -> {
                                connection.close(done -> { if (done.failed()) { throw new RuntimeException(done.cause()); } });
                                System.out.println("Result " + requestId + " returned");
                                r.response().end("Hello");
                            });
                    });
                });
            })
            .listen(8080, result -> {
                if (result.succeeded()) {
                    fut.complete();
                } else {
                    fut.fail(result.cause());
                }
            });
    }
}
@RunWith(VertxUnitRunner.class)
public class MyFirstVerticleTest {

    private Vertx vertx;

    @Before
    public void setUp(TestContext context) {
        vertx = Vertx.vertx();
        vertx.deployVerticle(MyFirstVerticle.class.getName(),
                context.asyncAssertSuccess());
    }

    @After
    public void tearDown(TestContext context) {
        vertx.close(context.asyncAssertSuccess());
    }

    @Test
    public void testMyApplication(TestContext context) {
        for (int i = 0; i < 10; i++) {
            final Async async = context.async();
            vertx.createHttpClient().getNow(8080, "localhost", "/",
                response -> response.handler(body -> {
                    context.assertTrue(body.toString().contains("Hello"));
                    async.complete();
                })
            );
        }
    }
}
Output:
Request 1412761034 received
Request -1781489277 received
Request 1008255692 received
Request -853002509 received
Request -919489429 received
Request 1902219940 received
Request -2141153291 received
Request 1144684415 received
Request -1409053630 received
Request -546435082 received
Result 1412761034 returned
Result -1781489277 returned
Result 1008255692 returned
Result -853002509 returned
Result -919489429 returned
Result 1902219940 returned
Result -2141153291 returned
Result 1144684415 returned
Result -1409053630 returned
Result -546435082 returned
So, we accept a request, schedule a query to the database, and go to the next request; we consume all of them and send a response to each request only when everything is done with the database.
About your code sample, I see two possible issues. First, it looks like you don't close() the connection, which is important for returning it to the pool. Second, how is your pool configured? If there is only one free connection, these requests will serialize while waiting for it.
I recommend adding a timestamp printout to both requests to find the place where you serialize. You have something that makes the calls in the event loop blocking. Or... check that you send the requests in parallel in your test, not the next one only after getting the previous response.
How is this asynchronous? The answer is in your question itself:
What I've observed is that the so-called event loop is blocked until my first request completes. However little time it takes, a subsequent request is not acted upon until the previous one completes.
The idea is that instead of having a new thread to serve each HTTP request, the same thread is used, which you have blocked with your long-running task.
The goal of the event loop is to save the time spent on context switching from one thread to another and to utilize the idle CPU time while a task is performing IO/network activities. If, while handling your request, it had to do other IO/network operations, e.g. fetching data from a remote MongoDB instance, your thread would not be blocked during that time; instead, another request would be served by the same thread, which is the ideal use case of the event loop model (considering that you have concurrent requests coming to your server).
If you have long-running tasks that do not involve network/IO operations, you should consider using a thread pool instead; if you block the main event loop thread itself, other requests will be delayed. That is, for long-running tasks you are okay paying the price of context switching so that the server stays responsive.
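A minimal sketch of offloading such a task in Vert.x, assuming the 3.x executeBlocking API (longComputation is a placeholder):

vertx.createHttpServer().requestHandler(req -> {
    // run the CPU-bound work on a worker thread so the event loop stays free
    vertx.<String>executeBlocking(future -> {
        future.complete(longComputation()); // placeholder for the long-running task
    }, res -> {
        // this result handler runs back on the event loop
        if (res.succeeded()) {
            req.response().end(res.result());
        } else {
            req.response().setStatusCode(500).end();
        }
    });
}).listen(8080);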
EDIT:
The way a server can handle requests varies:
1) Spawn a new thread for each incoming request (in this model the context switching is high, and there is the additional cost of spawning a new thread every time).
2) Use a thread pool to serve requests (the same set of threads serves requests, and extra requests get queued up).
3) Use an event loop (a single thread for all the requests; context switching is negligible, because only a few threads run, e.g. to queue up the incoming requests).
First of all, context switching is not bad; it is required to keep an application server responsive. But too much context switching can become a problem if the number of concurrent requests gets too high (roughly more than 10k). If you want to understand this in more detail, I recommend reading the C10K article.
Assume a typical request takes anywhere between 100 ms and 1 sec (based on the kind and nature of the request). So it means the event loop can't accept a new connection until the previous request finishes (even if it winds up in a second).
If you need to respond to a large number of concurrent requests (more than 10k), I would consider anything above 500 ms a long-running operation. Secondly, as I said, there are some threads and context switching involved, e.g. for queuing up incoming requests, but context switching among threads is greatly reduced because there are very few threads at a time. Thirdly, if a network/IO operation is involved in resolving the first request, the second request gets a chance to be resolved before the first one finishes; this is where the model plays well.
And if I as a programmer have to think through all these and push such request handlers to a worker thread, then how does it differ from a thread/connection model?
Vert.x is trying to give you the best of both threads and the event loop, so as a programmer you can decide how to make your application efficient under both scenarios, i.e. long-running operations with and without network/IO.
I'm just trying to understand how this model is better than traditional thread/connection server models. Assume there is no I/O op, or all the I/O ops are handled asynchronously. How does it even solve the c10k problem, when it can't start all concurrent requests in parallel and has to wait till the previous one terminates?
The above explanation should answer this.
Even if I decide to push all these operations to a worker thread (pooled), then I'm back to the same problem, aren't I? Context switching between threads?
As I said, both have pros and cons, and Vert.x gives you both models; depending on your use case, you have to choose what is ideal for your scenario.
In this sort of processing engine, you are supposed to turn long-running tasks into asynchronously executed operations, and there is a methodology for doing this, so that the critical thread can complete as quickly as possible and return to perform another task. That is, any IO operations are passed to the framework, which calls you back when the IO is done.
The framework is asynchronous in the sense that it supports you in producing and running these asynchronous tasks, but it doesn't change your code from being synchronous to asynchronous.
I'm fairly new to Java (I'm using Java SE 7) and the JVM, and I'm trying to write an asynchronous controller using:
Tomcat 7
Spring MVC 4.1.1
Servlet 3.0
I have a component that my controller delegates some work to; it has an asynchronous portion and returns a ListenableFuture. Ideally, I'd like to free up the thread that initially handles the controller response while waiting for the async operation to return, hence the desire for an async controller.
I'm looking at returning a DeferredResult (it seems pretty easy to bridge this with ListenableFuture), but I can't seem to find any resources that explain how the response is delivered back to the client once the DeferredResult resolves.
Maybe I'm not fully grokking how an asynchronous controller is supposed to work, but could someone explain how the response gets returned to the client once the DeferredResult resolves? There has to be some thread that picks up the job of sending the response, right?
I recently used Spring's DeferredResult to excellent effect in a long-polling situation that I coded. Focusing on the 'how' of the response getting back to the user is, I believe, not the correct way to think about the object. Depending on where it's used, it returns messages to the user in exactly the same way a regular, synchronous call would, only in a delayed, asynchronous manner. Again, the object neither defines nor proposes a delivery mechanism, just a way to 'insert' an asynchronous response into existing channels.
Per your query: yes, it does so by way of a timeout of the user's specification. If the code completes before the timeout via 'setResult', the object returns the code's result. Otherwise, if the timeout fires before the result, the default, also set by the user, is returned. Either way, the object does not return anything (other than the object itself) until one of these mechanisms is triggered. The object also has to be discarded afterwards, as it cannot be reused.
In my case, I was using an HTTP request/response function that would wrap the returned response in a DeferredResult object providing a default response (asking for another packet from the client so the browser would not time out) if the computation the code was working on did not finish before the timeout. Whenever the computation was complete, it would send the response via the 'setResult' call. In both situations the HTTP response was used to send a packet back to the user; in neither case did the response go back to the user immediately.
In practice the object worked flawlessly and allowed me to implement an effective long-polling mechanism.
Here is a snippet of the code in my example:
@RequestMapping(method = RequestMethod.POST, produces = "application/text")
@ResponseBody
public DeferredResult<String> onMessage(InputStream is, HttpSession session) {
    String message = convertStreamToString(is);
    messageInfo info = getMessageInfo(message);
    String state = info.getState();
    String id = info.getCallID();
    DeferredResult<String> futureMessage =
            new DeferredResult<>(refreshIntervalSecs * msInSec, getRefreshJsonMessage(id));

    if (state != null && id != null) {
        if (state.equals("REFRESH")) {
            // Cache response for the future and "swallow" the call, as it is a restocking call
            LOG.info("Refresh received for call " + id);
            synchronized (lock) {
                boolean isReplaceable = callsMap.containsKey(id) && callsMap.get(id).isSetOrExpired();
                if (isReplaceable) {
                    callsMap.put(id, futureMessage);
                } else {
                    LOG.warning("Refresh packet arrived on a non-existent call");
                    futureMessage.setResult(getExitJsonMessage(id));
                }
            }
        } else if (state.equals("NEW")) {
            // Store response for the future and pass the call on to the processing logic
            LOG.info("New long-poll call received with id " + id);
            ClientSupport cs = clientSupportMap.get(session.getId());
            if (cs == null) {
                cs = new ClientSupport(this, session.getId());
                clientSupportMap.put(session.getId(), cs);
            }
            callsMap.put(id, futureMessage);
            // *** IMPORTANT ***
            // This method sets up a separate thread to do the work
            cs.newCall(message);
        }
    } else {
        LOG.warning("Invalid call information");
        // Return a value immediately by calling setResult
        futureMessage.setResult("");
    }
    return futureMessage;
}
I am using the Oracle Jersey Client and am trying to cancel a long-running GET or PUT operation.
The Client is constructed as:
JacksonJsonProvider provider = new JacksonJsonProvider(new ObjectMapper());
ClientConfig clientConfig = new DefaultClientConfig();
clientConfig.getSingletons().add(provider);
Client client = Client.create(clientConfig);
The following code is executed on a worker thread:
File bigZipFile = new File("/home/me/everything.zip");
WebResource resource = client.resource("https://putfileshere.com");
Builder builder = resource.getRequestBuilder();
builder.type("application/zip").put(bigZipFile); //This will take a while!
I want to cancel this long-running PUT. When I try to interrupt the worker thread, the PUT operation continues to run. From what I can see, the Jersey Client makes no attempt to check Thread.interrupted().
I see the same behavior when using an AsyncWebResource instead of WebResource and calling Future.cancel(true) on the Builder.put(..) call.
So far, the only solution I have come up with to interrupt this is throwing a RuntimeException in a ContainerListener:
client.addFilter(new ConnectionListenerFilter(
    new OnStartConnectionListener() {
        public ContainerListener onStart(ClientRequest cr) {
            return new ContainerListener() {
                public void onSent(long delta, long bytes) {
                    // If the thread has been interrupted, stop the operation
                    if (Thread.interrupted()) {
                        throw new RuntimeException("Upload or Download canceled");
                    }
                    // Report progress otherwise
                }
            }...
I am wondering if there is a better solution (perhaps when creating the Client) that correctly handles interruptible I/O without using a RuntimeException.
I am wondering if there is a better solution (perhaps when creating the Client) that correctly handles interruptible I/O without using a RuntimeException.
Yeah, interrupting the thread will only work if the code is watching for interrupts or calling other methods (such as Thread.sleep(...)) that watch for them.
Throwing an exception out of the listener doesn't sound like a bad idea. I would certainly create your own RuntimeException subclass, such as TimeoutRuntimeException or something, so you can specifically catch and handle it.
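For instance, a minimal sketch of such a custom exception (the name TimeoutRuntimeException is just the suggestion above):

// unchecked, so it can propagate out of the listener callback;
// a dedicated type lets callers catch the cancellation specifically
public class TimeoutRuntimeException extends RuntimeException {
    public TimeoutRuntimeException(String message) {
        super(message);
    }
}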
Another thing to do would be to close the underlying IO stream being written to, which would cause an IOException, but I'm not familiar with Jersey, so I'm not sure if you can get access to the connection.
Ah, here's an idea: instead of putting the File, how about putting some sort of extension of a BufferedInputStream that reads from the File but also has a timeout? Jersey would be reading from the buffer, and at some point it would throw an IOException if the timeout expires.
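A minimal sketch of that idea, assuming a fixed deadline set at construction (the class name and deadline policy are illustrative):

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

// InputStream wrapper that aborts the transfer once a deadline has passed
public class TimeoutFileInputStream extends BufferedInputStream {

    private final long deadlineMillis;

    public TimeoutFileInputStream(String path, long timeoutMillis) throws FileNotFoundException {
        super(new FileInputStream(path));
        this.deadlineMillis = System.currentTimeMillis() + timeoutMillis;
    }

    @Override
    public int read() throws IOException {
        checkDeadline();
        return super.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        checkDeadline();
        return super.read(b, off, len);
    }

    private void checkDeadline() throws IOException {
        // Jersey reads from this stream while uploading; failing here aborts the PUT
        if (System.currentTimeMillis() > deadlineMillis) {
            throw new IOException("Upload timed out");
        }
    }
}

The PUT would then pass this stream as the entity instead of the File itself, e.g. builder.type("application/zip").put(new TimeoutFileInputStream("/home/me/everything.zip", 60000)); (Jersey 1.x provides a standard message body writer for InputStream entities).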
As of Jersey 2.35, the above API has changed. A timeout has been introduced in the client builder, which can set a read timeout. If the server takes too long to respond, the underlying socket will time out. However, once the server starts sending the response, it will not time out. This can be utilized if the server does not start sending a partial response, which depends on the server implementation.
client = (JerseyClient) JerseyClientBuilder
        .newBuilder()
        .connectTimeout(1 * 1000, TimeUnit.MILLISECONDS)
        .readTimeout(5 * 1000, TimeUnit.MILLISECONDS)
        .build();
The current filters and interceptors are for data only, and the solution posted in the original question will not work with filters and interceptors (though I admit I may have missed something there).
Another way is to get hold of the underlying HttpUrlConnection (for the standard Jersey client configuration), which seems to be possible with org.glassfish.jersey.client.HttpUrlConnectorProvider:
HttpUrlConnectorProvider httpConProvider = new HttpUrlConnectorProvider();
httpConProvider.connectionFactory(new CustomHttpUrlConnectionfactory());

public static class CustomHttpUrlConnectionfactory implements
        HttpUrlConnectorProvider.ConnectionFactory {

    @Override
    public HttpURLConnection getConnection(URL url) throws IOException {
        System.out.println("CustomHttpUrlConnectionfactory ..... called");
        return (HttpURLConnection) url.openConnection();
    }
}
I did try the connection-provider approach; however, I could not get it working. The idea would be to keep a reference to the connection by some means (thread id, etc.) and close it if the communication takes too long. The primary problem was that I could not find a way to register the provider with the client. The standard
.register(httpConProvider)
mechanism does not seem to work (or perhaps it is not supposed to work like that), and the documentation is a bit sketchy in that direction.