Small demo application on DeferredResult in Spring - Java

Hi, I just need an application on DeferredResult in Spring MVC that makes its working clear.

It is easier when you understand the concept of DeferredResult:
Your controller is ultimately a function executed by a servlet-container worker thread (for this discussion, let's assume the servlet container is Tomcat). Your service flow starts with Tomcat and ends with Tomcat: Tomcat gets the request from the client, holds the connection, and eventually returns a response to the client. Your code (controller or servlet) is somewhere in the middle.
Consider this flow:
Tomcat gets the client request.
Tomcat executes your controller.
The Tomcat thread is released, but the client connection is kept open (no response is returned yet) while heavy processing runs on a different thread.
When the heavy processing completes, it updates Tomcat with its response, and Tomcat returns that response to the client.
Because the servlet (your code) and the servlet container (Tomcat) are different entities, supporting this flow (releasing the Tomcat thread while keeping the client connection) requires support in their contract, the javax.servlet package, and that support was introduced in Servlet 3.0. Spring MVC uses this Servlet 3.0 capability when the return value of the controller is DeferredResult (and, by the way, also Callable). DeferredResult is a class designed by Spring to allow more options (which I will describe) for asynchronous request processing in Spring MVC, and this class just holds the result (as its name implies), so you need some kind of thread that will run your async code.
What do you get by using DeferredResult as the return value of the controller? DeferredResult has built-in callbacks like onError, onTimeout, and onCompletion, which make error handling very easy.
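The core idea of "return a result holder now, fill it in from another thread later" can be sketched in plain Java with a CompletableFuture. This is just a simulation of the pattern, not Spring's actual implementation; here the main thread stands in for Tomcat picking up the finished response:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DeferredSketch {
    public static void main(String[] args) {
        ExecutorService workerPool = Executors.newFixedThreadPool(4);

        // The "controller" returns immediately with an empty result holder...
        CompletableFuture<String> deferred = new CompletableFuture<>();

        // ...while a worker thread produces the value later.
        workerPool.submit(() -> deferred.complete("Hello from worker thread"));

        // The container (here, the main thread) picks the value up when it is ready.
        System.out.println(deferred.join());
        workerPool.shutdown();
    }
}
```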
Here you can find a simple working example I created.
The main part from the GitHub example:
@RestController
public class DeferredResultController {

    static ExecutorService threadPool = getThreadPool();
    private Request request = new Request();

    @RequestMapping(value = "/deferredResultHelloWorld/{name}", method = RequestMethod.GET)
    public DeferredResult<String> search(@PathVariable("name") String name) {
        DeferredResult<String> deferredResult = new DeferredResult<>();
        // Set the result from a separate thread; the servlet thread has already returned.
        threadPool.submit(() -> deferredResult.setResult(request.runSleepOnOtherService(name)));
        return deferredResult;
    }
}

For useful articles, see the Spring blog post series about Spring async support:
https://spring.io/blog/2012/05/07/spring-mvc-3-2-preview-introducing-servlet-3-async-support
https://spring.io/blog/2012/05/14/spring-mvc-3-2-preview-adding-long-polling-to-an-existing-web-application
and the source code for the long-polling post: https://github.com/spring-projects/spring-amqp-samples/tree/spring-mvc-async

Related

How to make Spring WebFlux wait until a specified condition is met in the server and then return the response

I'm a beginner in Spring WebFlux. I want to have a reactive service, for example named isConfirmed, and this service must wait until another service of my server, for example named confirm, is called. Both services are located in my server, and the first (reactive) service must wait until the second service (confirm) is called and then return the confirmation message. I want no threads to be blocked in my server while waiting for the second service to be called, like an observer pattern. Is it possible with Spring WebFlux?
Update: can we have this feature while the server is using a distributed cache?
I think you could use a CompletableFuture between your two services, something like this:
CompletableFuture<String> future = new CompletableFuture<>();

public Mono<String> isConfirmed() {
    return Mono.fromFuture(future);
}

public void confirm(String confirmation) {
    future.complete(confirmation);
}
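Note that a CompletableFuture can be completed only once; later complete() calls are ignored and return false, so repeated confirm calls would not change the value delivered to the waiting subscriber. In plain Java:

```java
import java.util.concurrent.CompletableFuture;

public class ConfirmDemo {
    public static void main(String[] args) {
        CompletableFuture<String> future = new CompletableFuture<>();

        // The first completion wins...
        boolean first = future.complete("confirmed");
        // ...and later attempts are ignored, returning false.
        boolean second = future.complete("confirmed again");

        System.out.println(first);         // true
        System.out.println(second);        // false
        System.out.println(future.join()); // confirmed
    }
}
```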

Are multiple requests handled by single thread in Spring Boot until certain requests threshold is reached?

In my Spring Boot 2.1.6 project (based on Tomcat) I have a REST controller. I added a default constructor to it which prints something. I thought that in Tomcat-based servers each request is handled in a separate thread, so I expected each request to trigger a new controller object and, as a result, a new print from the constructor. However, I made a test sending 30 requests to the REST controller, and I can see that the print was made only once. So, as far as I understand, the REST controller handles all those requests in one thread.
My question is whether multiple requests are indeed handled in a single thread, or whether there's a certain request threshold beyond which another thread will be opened. I'm using the default Spring Boot configuration; perhaps this is controlled somewhere in the config?
This is the code for my controller:
@RestController
public class TrackingEventController {

    public TrackingEventController() {
        System.out.println("from TrackingEventController");
    }

    @RequestMapping(method = GET, path = trackingEventPath)
    public ResponseEntity<Object> handleTrackingEvent(
            @RequestParam(name = Routes.event) String event,
            @RequestParam(name = Routes.pubId) String pubId,
            @RequestParam(name = Routes.advId) String advId) {
        return new ResponseEntity<>(null, new HttpHeaders(), HttpStatus.OK);
    }
}
You're mixing two orthogonal concepts:
a thread
a controller instance
A single thread could create and/or use one, or several controller instances.
Multiple threads could also create and/or use one, or several controller instances.
The two are unrelated.
And how it actually works is:
Spring beans are singletons by default, so Spring creates a single instance of your controller.
A servlet container uses a pool of threads.
Every time a request comes in, a thread is chosen from the pool, and this thread handles the request. If the request is mapped to your controller, then the appropriate method of the unique controller instance is executed by this thread.
If you want to know which thread is handling the current request, add this to your controller method:
System.out.println(Thread.currentThread().getName());
Spring Boot's Tomcat thread pool default size is 200. You can verify that different threads serve different requests: put a debug point on some REST endpoint, call it multiple times from Postman etc., and check the thread name in the debugger.
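The "one controller instance, many worker threads" model can be simulated in plain Java (a sketch, not Tomcat's real code): a fixed thread pool plays the role of Tomcat's pool, and a single handler object plays the role of the singleton controller, so its constructor prints only once no matter how many requests arrive:

```java
import java.util.Set;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class SingletonHandlerDemo {
    static final AtomicInteger INSTANCES = new AtomicInteger();

    static class Handler {
        Handler() { INSTANCES.incrementAndGet(); } // the "constructor print" happens once
        void handle(int requestId, Set<String> threadNames) {
            threadNames.add(Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Handler singleton = new Handler();                      // one instance, like a Spring bean
        ExecutorService pool = Executors.newFixedThreadPool(8); // stand-in for Tomcat's pool
        Set<String> threadNames = ConcurrentHashMap.newKeySet();

        for (int i = 0; i < 30; i++) {                          // 30 "requests"
            final int id = i;
            pool.submit(() -> singleton.handle(id, threadNames));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        System.out.println("instances created: " + INSTANCES.get()); // 1
        System.out.println("distinct worker threads used: " + threadNames.size());
    }
}
```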

How to use Servlet 3.1 in Spring MVC?

There are 2 different features available:
Servlet 3.0 allows processing a request in a thread different from the container thread.
Servlet 3.1 allows reading from/writing to the socket without blocking the reading/writing thread.
There are a lot of examples on the internet of the Servlet 3.0 feature. We can use it in Spring very easily; we just have to return a DeferredResult or CompletableFuture.
But I can't find an example of using Servlet 3.1 in Spring. As far as I know, we have to register a WriteListener and a ReadListener and do some dirty work inside, but I can't find an example of those listeners. I believe it is not very easy.
Could you please provide an example of the Servlet 3.1 feature in Spring, with an explanation of the listener implementation?
For Servlet 3.1 you can get non-blocking I/O support by using the Reactive Streams bridge. From the Spring WebFlux reference on running in a Servlet 3.1+ container:
To deploy as a WAR to any Servlet 3.1+ container, you can extend and include AbstractReactiveWebInitializer in the WAR. That class wraps an HttpHandler with ServletHttpHandlerAdapter and registers that as a Servlet.
So you should extend AbstractReactiveWebInitializer, which adds async support:
registration.setAsyncSupported(true);
and the support in ServletHttpHandlerAdapter:
AsyncContext asyncContext = request.startAsync();
If you're looking for an example of Spring/Servlet 3.1 non-blocking HTTP API declaration, try the following:
@GetMapping(value = "/asyncNonBlockingRequestProcessing")
public CompletableFuture<String> asyncNonBlockingRequestProcessing() {
    ListenableFuture<String> listenableFuture = getRequest.execute(new AsyncCompletionHandler<String>() {
        @Override
        public String onCompleted(Response response) throws Exception {
            logger.debug("Async Non Blocking Request processing completed");
            return "Async Non blocking...";
        }
    });
    return listenableFuture.toCompletableFuture();
}
Requires Spring Web 5.0+ and Servlet 3.1 support at the Servlet container level (Tomcat 8.5+, Jetty 9.4+, WildFly 10+).
Shouldn't be too hard to chase down some examples. I found one from IBM at WASdev/sample.javaee7.servlet.nonblocking. Working with the javax.servlet API in Spring or Spring Boot is just a matter of asking Spring to inject the HttpServletRequest or HttpServletResponse. So, a simple example could be:
@SpringBootApplication
@Controller
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @RequestMapping(path = "")
    public void writeStream(HttpServletRequest request, HttpServletResponse response) throws IOException {
        ServletOutputStream output = response.getOutputStream();
        AsyncContext context = request.startAsync();
        output.setWriteListener(new WriteListener() {
            @Override
            public void onWritePossible() throws IOException {
                if (output.isReady()) {
                    output.println("WriteListener:onWritePossible() called to send response data on thread : "
                            + Thread.currentThread().getName());
                }
                context.complete();
            }

            @Override
            public void onError(Throwable t) {
                context.complete();
            }
        });
    }
}
This simply creates a WriteListener, attaches it to the response output stream, and returns. Nothing fancy.
EDIT: The point is that the servlet container, e.g., Tomcat, calls onWritePossible when data can be written without blocking. More on this at Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188).
The listeners (and writers) have callback methods that are invoked when the content is available to be read or can be written without blocking.
So therefore onWritePossible is only called when out.println can be called without blocking.
Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O.
Presumably what you have to do is check output.isReady to know whether you can continue to write bytes. It seems that you would have to have some sort of implicit agreement with the sender/receiver about block sizes. I have never used it, so I don't know, but you asked for an example of this in the Spring framework and that's what is provided.
"So therefore onWritePossible is only called when out.println can be called without blocking." That sounds correct, but how can I know how many bytes can be written? How should I control this?
EDIT 2: That is a very good question that I can't give you an exact answer to. I would assume that onWritePossible is called when the server executes the code in a separate (asynchronous) thread from the main servlet. From the example, you check input.isReady() or output.isReady(), and I assume that blocks your thread until the sender/receiver is ready for more. Since this is done asynchronously, the server itself is not blocked and can handle other requests. I have never used this, so I am not an expert.
When I said there would be some sort of implicit agreement with the sender/receiver about block sizes, I meant that if the receiver is capable of accepting 1024-byte blocks, then you would write that amount when output.isReady is true. You would have to know that from reading the documentation; there is nothing in the API about it. Otherwise you could write single bytes, but the example from Oracle uses 1024-byte blocks, which is a fairly standard block size for streaming I/O. The example above would have to be expanded to write bytes in a while loop as shown in the Oracle example. That is an exercise left for the reader.
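The chunked write loop the Oracle example uses can be sketched with plain java.io streams. This is a simulation of the buffering pattern only; real Servlet 3.1 code would additionally check output.isReady() before each write and return from onWritePossible when it goes false:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Arrays;

public class BlockCopy {
    // Copy input to output in 1024-byte blocks, like the Oracle WriteListener example.
    static void copyInBlocks(InputStream in, OutputStream out) throws IOException {
        byte[] block = new byte[1024];
        int read;
        while ((read = in.read(block)) != -1) {
            out.write(block, 0, read); // in Servlet 3.1 code: only while output.isReady()
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[5000]; // deliberately not a multiple of 1024
        for (int i = 0; i < payload.length; i++) payload[i] = (byte) i;

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copyInBlocks(new ByteArrayInputStream(payload), sink);
        System.out.println(Arrays.equals(payload, sink.toByteArray())); // true
    }
}
```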
Project Reactor and Spring WebFlux have the concept of backpressure, which may address this more carefully. That would be a separate question, and I have not looked closely into how it couples senders and receivers (or vice versa).
Servlet 3.0 decouples the container thread from the processing thread: return a DeferredResult or CompletableFuture, and the controller processing can run on a different thread from the server's request-handling thread, so the server thread pool is free to handle more incoming requests.
But the I/O is still blocking: reading from the input stream and writing to the output stream (receiving requests from, and sending responses to, slow clients) still block.
Servlet 3.1 is non-blocking all the way, including I/O. Using Servlet 3.1 in Spring directly means you have to use the ReadListener and WriteListener interfaces, which are cumbersome, and you have to deviate from the synchronous, blocking parts of the Servlet API such as Servlet and Filter. Use Spring WebFlux instead, which uses the Reactive Streams API and supports Servlet 3.1.

Rest Web Service Interface - Multithreaded

Below is a snippet of an existing Rest Interface implementation.
@RestController
@RequestMapping("/login")
public class LoginController {

    @Autowired
    private LoginProcessor loginProcessor;

    @RequestMapping(
            consumes = MediaType.TEXT_XML_VALUE,
            produces = { MediaType.TEXT_XML_VALUE, MediaType.APPLICATION_JSON_VALUE },
            value = "/v1/login",
            method = RequestMethod.POST)
    public LoginResponse loginRequest(@RequestBody String credentials) throws JAXBException {
        return loginProcessor.request(credentials);
    }
}
If the REST call to loginRequest() is initiated from different clients, possibly at the same time:
1) Will a new thread be created to handle each request, so that all requests are processed concurrently?
or
2) Is there one thread to handle all requests, which would mean only one loginRequest() is executing at any one time, with other requests queued up?
I would ideally like the interface to be able to handle multiple requests at any one time.
Thank you for your help in both clarifying and furthering my understanding of the subject.
Pete
You can search Stack Overflow for this type of question, as it has been answered before. You can read these answers:
https://stackoverflow.com/a/7457252/10632970
https://stackoverflow.com/a/17236345/10632970
Good luck with your studies.
Every application runs in a server: either a web server (Tomcat) or an application server (WebLogic). By default, the Tomcat web container has 200 threads (you can adjust this as you wish), so up to 200 requests can be processed concurrently in Tomcat.
Every incoming request is taken by a web-container thread and handed to the DispatcherServlet, and from there to the corresponding controller class.
I suppose you are using the Spring framework (as you have used @Autowired and other annotations). Thus, to answer your question: yes, each concurrent request is handled on its own thread (taken from the container's pool). Kindly refer to this answer, which should resolve your queries:
https://stackoverflow.com/a/17236345/7622687

Threadpool and request handling in Tomcat

Alright, I've already asked one question regarding this, but I needed a bit more info. I'll try to make my question as coherent as I can (since I am not sure of the concepts).
Background
I have a Java web project (dynamic). I am writing RESTful web services. Below is a snippet from my class:
@Path("/services")
class Services {

    static DataSource ds;
    static {
        ds = createNewDataSource; // pseudocode: create the shared DataSource once
    }

    @Path("/serviceFirst")
    @Consumes(Something)
    @Produces(Something)
    public List<Data> doFirst() {
        Connection con = ds.getConnection();
        ResultSet res = con.execute(preparedStatement);
        // iterate over res, and create a list of Data
        return list;
    }
}
This is a very basic sketch of the functionality.
I've got a Tomcat server where I've deployed this. I've heard that Tomcat has a thread pool of size 200 (by default). Now my question is: how exactly does the thread pool work here?
Say I have two requests coming in at the same time. That means that 2 of the threads from the thread pool will get to work. Does this mean that both threads will have an instance of my class Services? Because below is how I understand threads and concurrency.
public class MyThread extends Thread {
    public void run() {
        // do whatever you want to do here
    }
}
In the above, when I call start() on my thread, it will execute the code in the run() method, and all the objects it creates in there will belong to it.
Now, coming back to Tomcat: is there somewhere a run() method written that instantiates the Services class, and is that how the thread pool handles 200 concurrent requests? (Obviously, I understand they would require 200 cores to truly execute in parallel, so ignore that.)
Because otherwise, if Tomcat does not have 200 different threads with the same path of execution (i.e. my Services class), then how exactly will it handle 200 concurrent requests?
Thanks
Tomcat's thread pool works, more or less, like what you would get from an ExecutorService (see Executors). YMMV.
Tomcat listens for requests. When it receives a request, it puts the request in a queue. In parallel, it maintains X threads which continuously attempt to take from this queue. They then prepare the ServletRequest and ServletResponse objects, as well as the FilterChain and the appropriate Servlet to invoke.
In pseudocode, this would look like
public void run() {
    while (true) {
        socket = queue.take();
        ServletRequest request = getRequest(socket.getInputStream());
        ServletResponse response = generateResponse(socket.getOutputStream());
        Servlet servletInstance = determineServletInstance(request);
        FilterChain chain = determineFilterChainWithServlet(request, servletInstance);
        chain.doFilter(request, response); // down the line invokes the servlet#service method
        // do some cleanup, close streams, etc.
    }
}
Determining the appropriate Servlet and Filter instances depends on the URL path in the request and the url-mappings you've configured. Tomcat (and every Servlet container) will only ever manage a single instance of a Servlet or Filter for each declared <servlet> or <filter> declaration in your deployment descriptor.
As such, every thread is potentially executing the service(..) method on the same Servlet instance.
That's what the Servlet Specification and the Servlet API, and therefore Tomcat, guarantee.
As for your Restful webservices, read this. It describes how a resource is typically an application scoped singleton, similar to the Servlet instance managed by a Servlet container. That is, every thread is using the same instance to handle requests.
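The queue-plus-workers model in the pseudocode above can be simulated with plain Java. This is a toy model, not Tomcat's real code: a BlockingQueue stands in for the accepted-connection queue, a fixed pool of worker threads takes from it, and every worker dispatches to the same "servlet" instance, matching the single-instance guarantee described above.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class TomcatModel {
    // Stand-in for the single Servlet instance the container manages.
    static class FakeServlet {
        final AtomicInteger handled = new AtomicInteger();
        void service(int requestId) { handled.incrementAndGet(); }
    }

    static final FakeServlet SERVLET = new FakeServlet(); // one instance, shared by all workers

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(); // the "accepted connections" queue
        ExecutorService pool = Executors.newFixedThreadPool(4);     // the worker pool
        CountDownLatch done = new CountDownLatch(20);

        // Each worker runs the same take-and-dispatch loop as the pseudocode above.
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        int requestId = queue.take(); // like socket = queue.take()
                        SERVLET.service(requestId);   // like chain.doFilter(request, response)
                        done.countDown();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // pool is shutting down
                }
            });
        }

        for (int i = 0; i < 20; i++) queue.put(i); // 20 incoming "requests"
        done.await(5, TimeUnit.SECONDS);
        pool.shutdownNow();

        System.out.println("requests handled by the single servlet instance: " + SERVLET.handled.get());
    }
}
```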
