My question is pretty similar to this question - Java - AsyncHttpClient - Fire and Forget - but I am using Jersey / JAX-RS in my case.
How do you configure Jersey JAX-RS asynchronous calls to achieve "fire-and-forget", where it is imperative not to block the current working thread under any circumstances?
For example, if there are no threads available to process the request, skip it completely and move on; do not block the calling thread.
So given this test client here:
Client client = ClientBuilder.newClient();
Future<Response> future1 = client.target("http://example.com/customers/123")
.request()
.async().get();
Cool, that works great for a GET. But what about a fire-and-forget PUT or POST? How would I change this to act more "fire-and-forget"?
client.target("http://example.com/customers/123")
.request()
.async().put(myCustomer);
A fire-and-forget setup could be configured in many ways - for example, buffering into an in-memory queue up to a configurable size and simply discarding new entries once the queue is full.
Another example would be N worker threads: if they are all busy, you just drop the HTTP request.
What are the common JAX-RS async parameters I should configure? Any gotchas?
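There's no single JAX-RS switch for this, but the "N workers, drop when busy" behavior described above can be sketched with a plain JDK ThreadPoolExecutor and DiscardPolicy fronting the blocking client call. A minimal sketch (pool sizes and the demo Runnables are illustrative, not Jersey configuration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FireAndForget {
    // 1 worker, queue capacity 1; DiscardPolicy silently drops new tasks
    // when both are full, so the calling thread never blocks.
    static int demo() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.DiscardPolicy());

        CountDownLatch release = new CountDownLatch(1);
        AtomicInteger executed = new AtomicInteger();

        // In real code each Runnable would perform the blocking call, e.g.
        // client.target(...).request().put(...)
        pool.execute(() -> {                      // taken by the single worker
            try { release.await(); } catch (InterruptedException ignored) { }
            executed.incrementAndGet();
        });
        pool.execute(executed::incrementAndGet);  // sits in the queue
        pool.execute(executed::incrementAndGet);  // queue full: silently dropped

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return executed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints 2: the third task was dropped
    }
}
```

Swapping DiscardPolicy for CallerRunsPolicy or AbortPolicy gives the other common overflow behaviors.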
I have a service where a couple of requests can be long-running actions. Occasionally we get timeouts for these requests, which causes bad state because steps of the flux stop executing after cancel is called when the client disconnects. Ideally we want the action to continue processing to completion.
I've seen WebFlux - ignore 'cancel' signal recommend using the cache method... Are there any better solutions and/or drawbacks to using cache to achieve this?
There are a few solutions for that.
One could be to make it asynchronous: when you get the request from the client, you put it into a processor, e.g.
Sinks.Many<QueueTask<T>> queue = Sinks.many().multicast().onBackpressureBuffer();
When the request comes in from the client you just push it to the queue, and the queue is processed in the background.
But in this case the client will not get any response with the progress of the item - only if you send it over a socket, or if the client makes another request after some time.
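If Reactor isn't available, the same accept-now-process-later shape can be sketched with a plain BlockingQueue and a daemon worker thread (class and method names here are illustrative):

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundProcessor {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final List<String> processed = new CopyOnWriteArrayList<>();

    BackgroundProcessor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String task = queue.take();   // blocks until a task arrives
                    processed.add(task);          // stand-in for the long-running work
                }
            } catch (InterruptedException ignored) { }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the request handler: enqueue and return immediately, so
    // the HTTP request completes even though the work hasn't run yet.
    void accept(String task) {
        queue.add(task);
    }

    public static void main(String[] args) throws InterruptedException {
        BackgroundProcessor p = new BackgroundProcessor();
        p.accept("job-1");                        // returns immediately
        while (p.processed.isEmpty()) Thread.sleep(10);
        System.out.println(p.processed);          // prints [job-1]
    }
}
```

Because the handler only enqueues, a client disconnect can no longer cancel the work; the trade-off, as noted above, is that the client gets no progress feedback.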
Another option is to use a chunked/streaming HTTP response.
@GetMapping(value = "/sms-stream/{s}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> streamResponse(@PathVariable("s") String s) {
    return service.streamResponse(s);
}
In this case the connection stays open, and the server can close it automatically when processing is done.
I have a list of usernames and want to fetch user details from a remote service without blocking the main thread. I'm using Spring's reactive client, WebClient. For the response I get a Mono, then subscribe to it and print the result.
private Mono<User> getUser(String username) {
    return webClient
            .get()
            .uri(uri + "/users/" + username)
            .retrieve()
            .bodyToMono(User.class)
            .doOnError(e ->
                    logger.error("Error retrieving user details for {}", username));
}
I have implemented the task in two ways:
Using Java stream
usernameList.stream()
        .map(this::getUser)
        .forEach(mono -> mono.subscribe(System.out::println));
Using Flux.fromIterable:
Flux.fromIterable(usernameList)
        .map(this::getUser)
        .subscribe(mono -> mono.subscribe(System.out::println));
It seems the main thread is not blocked in both ways.
What is the difference between Java Stream and Flux.fromIterable in this situation? If both are doing the same thing, which one is recommended to use?
There are no huge differences between the two variants. The Flux.fromIterable variant might give you more options and control over concurrency, retries, etc. - but not really in this case, because calling subscribe here defeats the purpose.
Your question is missing some background about the type of application you're building and in which context these calls are made. If you're building a web application and this is called during request processing, or a batch application - opinions might vary.
In general, I think applications should stay away from calling subscribe, because it disconnects the processing of that pipeline from the rest of the application: if an exception happens, you might not be able to report it, because the resource you would use to send that error message might be gone at that point. Or maybe the application is shutting down and you have no way to make it wait for the completion of that task.
If you're building an application that wants to kick off some work whose result is not needed by the current operation (i.e. it doesn't matter whether that work completes during the lifetime of the current operation), then subscribe might be an option.
In that case, I'd try and group all operations in a single Mono<Void> operation and then trigger that work:
Mono<Void> logUsers = Flux.fromIterable(userNameList)
        .map(name -> getUser(name))
        .doOnNext(user -> System.out.println(user)) // assuming this is non-I/O work
        .then();

logUsers.subscribe(...);
If you're concerned about consuming server threads in a web application, then it's really different - you might want to get the result of that operation to write something to the HTTP response. By calling subscribe, both tasks are now disconnected and the HTTP response might be long gone by the time that work is done (and you'll get an error while writing to the response).
In that case, you should chain the operations with Reactor operators.
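A sketch of that chaining, assuming reactor-core is on the classpath; getUser here is a stand-in for the WebClient call, and flatMap subscribes to each inner Mono as part of the single pipeline, so errors and completion propagate to whoever subscribes to the returned Flux (e.g. WebFlux itself):

```java
import java.util.List;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ChainedFetch {
    // Stand-in for the WebClient-based getUser from the question.
    static Mono<String> getUser(String username) {
        return Mono.just("details-for-" + username);
    }

    // One pipeline, no inner subscribe: flatMap handles the per-user Monos.
    static Flux<String> fetchAll(List<String> usernames) {
        return Flux.fromIterable(usernames)
                .flatMap(ChainedFetch::getUser);
    }

    public static void main(String[] args) {
        List<String> out = fetchAll(List.of("alice", "bob")).collectList().block();
        System.out.println(out);
    }
}
```

Note that flatMap does not guarantee output order when the inner publishers are truly asynchronous; use concatMap if ordering matters.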
I'm calling multiple requests (10) at the same time with HTTP GET. The caller method creates different threads (like Thread 1, Thread 2, ...).
Caller method:

for (int i = 0; i < 10; i++) {
    Thread.currentThread().getId();
    HttpClient httpClient = new HttpClient(url);
    res = httpClient.get(5000);
}

Each request then hits the application entry point, which creates a new thread for every request (like Thread 11, Thread 12, ...).

public void DoProcess() {
    Thread.currentThread().getId(); // new thread for each request
    ...
}

But I want to know which caller thread's request created which application thread, like:
Thread 1 belongs to Thread 11
Thread 2 belongs to Thread 12
Please let me know how to achieve this.
The client connects through TCP, so there is a client socket IP and port involved.
I don't know the HttpClient API by heart, but if there is a getClientPort() then you should be able to print out the time, thread name, and client ip+port. On the server, whatever accepts the socket will have the client ip and port too. If this is a servlet container, the servlet request has getRemoteAddr() and getRemotePort(). There too you can print out the time, ip+port, and thread name. If you pile those events into two tables, you should be able to join by ip+port with a tolerance on the client time vs server time (try less than 2 seconds apart, assuming client and server clocks are synchronized via NTP).
The other trivial way (but it changes the HTTP payload) is to inject an HTTP header from the client into the request, stating the current thread name/tid, e.g. "my_custom_remote_thread_id: Thread-11". This way the server can read the request header to figure out the client thread name/tid.
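The header approach can be sketched with the JDK's java.net.http client (the header name is made up; any unique name works):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ThreadTag {
    // Tag an outgoing request with the caller's thread name and id, so the
    // server can log which client thread issued it next to its own worker thread.
    static HttpRequest tagged(String url) {
        Thread t = Thread.currentThread();
        return HttpRequest.newBuilder(URI.create(url))
                .header("my_custom_remote_thread_id", t.getName() + "/" + t.getId())
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = tagged("http://example.com/customers/123");
        // Server side (servlet): request.getHeader("my_custom_remote_thread_id")
        System.out.println(req.headers().firstValue("my_custom_remote_thread_id").orElse("missing"));
    }
}
```

On the server, logging that header value together with Thread.currentThread().getName() gives you the Thread 1 -> Thread 11 mapping directly, with no timestamp joining.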
I am consuming a REST web service from Java code using the Apache Commons HttpClient API. If no response arrives within the socket timeout value configured in the connection manager parameters, a socket timeout exception occurs. In such cases, as the thread returns the exception to the caller class, even if the REST service returns a response a few seconds later, it will be lost.
Is it possible to create a new thread which will still listen to the service even after the timeout and just logs the response, while the main thread returns the exception to the caller class?
Is there any better way to achieve this?
Thanks.
The pattern you are most likely looking for involves asynchronous requests. For every action you post you create a unique "job" id and with that a specific URL for the job status. After starting the job, you can then query on that specific job instance's status. For example:
POST to /actions
Returns 202 Accepted & includes a Location header to /actions/results/1234
Immediately GET /actions/results/1234 to ascertain its status.
If it returns a 2xx your job is done.
If it returns 404, wait 10 seconds (or whatever) and try again.
Once you are happy with the result, issue a DELETE to /actions/results/1234 to clean up after yourself.
Of course you don't have to return 404 if the job is not done, there are other strategies for checking on the status - the key thing is that it's a subsequent call.
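The poll-until-done loop above can be sketched like this; statusCheck is a stand-in for "GET /actions/results/{id}" returning the HTTP status code (the class and method names are illustrative):

```java
import java.util.function.IntSupplier;

public class JobPoller {
    // Poll until the job reports done (2xx), an unexpected status appears,
    // or we give up after maxAttempts.
    static int pollUntilDone(IntSupplier statusCheck, int maxAttempts, long waitMillis)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            int status = statusCheck.getAsInt();
            if (status >= 200 && status < 300) {
                return status;            // job finished
            }
            if (status != 404) {
                return status;            // unexpected status: stop polling
            }
            Thread.sleep(waitMillis);     // 404 means "not done yet": wait and retry
        }
        return 404;                       // gave up
    }

    public static void main(String[] args) throws InterruptedException {
        // Fake status source: first two checks report "not done" (404), then 200.
        int[] statuses = {404, 404, 200};
        int[] i = {0};
        int result = pollUntilDone(() -> statuses[i[0]++], 10, 1);
        System.out.println(result);       // prints 200
    }
}
```

With this shape, the original request thread can return immediately after the POST, and any thread (or scheduled task) can run the poller later and log the result.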
How can I do long-polling using netty framework? Say for example I fetch http://localhost/waitforx
but waitforx is asynchronous because it has to wait for an event. Say, for example, it fetches something from a blocking queue (it can only fetch when there is data in the queue). When an item is taken from the queue, I would like to send the data back to the client. Hopefully somebody can give me some tips on how to do this.
Many thanks
You could write the response header first, and then send the body (content) later from another thread.
void messageReceived(...) {
    HttpResponse res = new DefaultHttpResponse(...);
    res.setHeader(...);
    ...
    channel.write(res);
}

// In a different thread...
ChannelBuffer partialContent = ...;
channel.write(partialContent);
You can use the netty-socketio project. It's an implementation of a Socket.IO server with long-polling support. On the web side you can use the Socket.IO client JavaScript lib.
You could also do the following in sfnrpc (http://code.google.com/p/sfnrpc):

Object object = RPCClient.getInstance().invoke("#URN1", "127.0.0.1:6878", "echo", true, 60, "", objArr, classArr, sl);

Passing true makes the communication synchronous.