I have a service where a couple of requests can be long-running actions. Occasionally we get timeouts for these requests, and that causes bad state because steps of the Flux stop executing after cancel is called when the client disconnects. Ideally we want this action to continue processing to completion.
I've seen WebFlux - ignore 'cancel' signal recommend using the cache method... Are there any better solutions and/or drawbacks to using cache to achieve this?
There are a couple of solutions for this.
One is to make the processing asynchronous. When you get the request from the client you can push it into a processor, e.g.:
Sinks.Many<QueueTask<T>> queue = Sinks.many().multicast().onBackpressureBuffer();
When a request comes in you just emit it to the sink, and the queue is processed in the background.
But in this case the client will not get any response with the progress of the item, unless you push updates over a socket or the client makes another request later.
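A minimal sketch of that background drain, assuming the queue field above and a hypothetical handleTask method that returns Mono<Void>:

// import reactor.core.publisher.Mono;
// import reactor.core.scheduler.Schedulers;

// Subscribe once at startup. Because this subscription is not tied to the HTTP
// request's reactive chain, items keep being processed even if the client disconnects.
queue.asFlux()
     .publishOn(Schedulers.boundedElastic())        // run the work off the event loop
     .flatMap(task -> handleTask(task)              // handleTask(task) -> Mono<Void> (assumed)
             .onErrorResume(e -> Mono.empty()))     // one failure should not kill the drain
     .subscribe();

// In the controller: emit the task and return an acknowledgement immediately.
queue.tryEmitNext(task);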
Another option is to use a chunked HTTP response (Server-Sent Events):
@GetMapping(value = "/sms-stream/{s}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> streamResponse(@PathVariable("s") String s) {
    return service.streamResponse(s);
}
In this case the connection stays open, and the server can close it automatically when processing is done.
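For illustration, service.streamResponse could look something like the sketch below; TOTAL_STEPS and doStep are assumed placeholders for the real work.

// import reactor.core.publisher.Flux;
// import reactor.core.scheduler.Schedulers;

public Flux<String> streamResponse(String s) {
    return Flux.range(1, TOTAL_STEPS)
               .publishOn(Schedulers.boundedElastic())   // keep blocking work off the event loop
               .map(step -> doStep(s, step))             // one unit of work -> one progress message
               .concatWith(Flux.just("done"));           // final event; the Flux completes and the connection closes
}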
My operation takes 30 minutes to process and is invoked by a REST call. I want to give the client an immediate response saying the operation is in progress, and the processing should happen on another thread. What is the best way to do this? Is DeferredResult the only way?
30 minutes is a long time. I'd suggest using WebSockets to push progress updates and the operation status.
Since you are providing REST services, another approach could be to immediately return 'Accepted' (202) or 'Created' (201) to the client along with a link to another resource that provides updates on the progress of the processing. This way the client is free to decide whether to poll the server for updates or just give the user an 'update status' button.
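A minimal Spring sketch of that pattern; JobService, JobRequest, the status strings, and the /operations path are invented names, not anything from the question.

import java.net.URI;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/operations")
public class OperationController {

    private final JobService jobs;   // assumed service that runs the work on another thread

    public OperationController(JobService jobs) {
        this.jobs = jobs;
    }

    @PostMapping
    public ResponseEntity<Void> start(@RequestBody JobRequest body) {
        String id = jobs.startAsync(body);                    // returns immediately with a job id
        return ResponseEntity.accepted()                      // 202 Accepted
                .location(URI.create("/operations/" + id))    // link the client can poll
                .build();
    }

    @GetMapping("/{id}")
    public ResponseEntity<String> status(@PathVariable String id) {
        String status = jobs.status(id);                      // e.g. "IN_PROGRESS", "DONE", "FAILED"
        return status == null ? ResponseEntity.notFound().build()
                              : ResponseEntity.ok(status);
    }
}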
Use a message queue (ActiveMQ, Redis):
Send the request from the client.
The controller gets the request and posts a message to the queue.
Send a response back to the client saying it is being processed.
Another thread watches the queue for new messages.
Execute the process, updating its status as each step completes (started/running/completed/failed).
You can then show the status of the process at any time using the id of the process in the queue (see the sketch below).
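A heavily simplified, in-process sketch of that flow, using a BlockingQueue and a status map as stand-ins for a real broker such as ActiveMQ or Redis (all names here are assumptions):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

public class JobQueue {

    public enum Status { STARTED, RUNNING, COMPLETED, FAILED }

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Map<String, Status> statuses = new ConcurrentHashMap<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public JobQueue() {
        // Background consumer: takes job ids off the queue and runs them one by one.
        worker.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                String id = queue.take();
                statuses.put(id, Status.RUNNING);
                try {
                    doWork(id);                          // the long-running processing (placeholder)
                    statuses.put(id, Status.COMPLETED);
                } catch (Exception e) {
                    statuses.put(id, Status.FAILED);
                }
            }
            return null;
        });
    }

    // Called by the controller: enqueue and return the id immediately.
    public String submit() {
        String id = UUID.randomUUID().toString();
        statuses.put(id, Status.STARTED);
        queue.add(id);
        return id;
    }

    // Called by the status endpoint to report progress to the client.
    public Status status(String id) {
        return statuses.get(id);
    }

    private void doWork(String id) throws Exception {
        // placeholder for the actual processing
    }
}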
I am using Spring MVC and have created controllers that I access with jQuery Ajax requests. Sometimes the processing can take a long time, so I want to give the user the option to stop the request. I am using jQuery's abort function, which stops the Ajax request successfully, but it does not stop the process started in the controller. Is there any way to do that?
If the request has already been sent to the server, then the server will process the request even if we abort it; the client just will not wait for or handle the response.
Maybe you'll have to write some custom logic where you start a thread in the controller for each such Ajax call and track its thread id. Then, on an abort request, go back to the controller and stop that thread.
Just a basic idea; I haven't implemented it myself, so I'm not sure about the code/logic problems you might face.
Your jQuery abort() will stop the client from listening for the response, but the server will still keep processing.
I can think of something (assuming the process is long enough to be worth stopping):
Continuous polling using a thread. Have a status flag that is read at high frequency, so that when you do an abort you update the status of the request to "stop". If continuous polling is enabled, the processing thread can check that the status is now "abort" and stop itself.
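A minimal sketch of that flag-based cooperative cancellation; the registry map and the method names are assumptions.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancellableJobs {

    // One cancellation flag per request id.
    private final Map<String, AtomicBoolean> cancelled = new ConcurrentHashMap<>();

    // The long-running work polls the flag between steps.
    public void run(String requestId) {
        AtomicBoolean stop = cancelled.computeIfAbsent(requestId, id -> new AtomicBoolean(false));
        try {
            for (int step = 0; step < 1000; step++) {
                if (stop.get()) {
                    return;             // abort requested: stop cooperatively
                }
                doStep(step);           // one unit of work (placeholder)
            }
        } finally {
            cancelled.remove(requestId);
        }
    }

    // Called from a second endpoint when the client wants to abort.
    public void cancel(String requestId) {
        AtomicBoolean stop = cancelled.get(requestId);
        if (stop != null) {
            stop.set(true);
        }
    }

    private void doStep(int step) { /* ... */ }
}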
If you don't use any custom logic, there is no way to stop an ongoing process at the controller level, because the controller and the UI are two different layers whose only connection is the request/response socket.
Calling abort() on the JavaScript side can be a big risk for your application: the backend will complete the process, but the UI won't be aware of it. You can manage the response time on the controller side and return an error, aborting the controller-side work with your own logic. For example, if you have a loop in the controller, add a time check; or if you have another connection, such as a database call, configure it with a timeout parameter.
What you want to do is directly related to your controller implementation.
Declare a variable to store the ajax request
var req = $.ajax({
type: 'GET',
url: '<*url*>',
......
});
We can then abort the request with:
req.abort();
See the jQuery ajax documentation for reference.
I have a multi-client server application in Java. The server keeps receiving connections, and each client is handled by a separate thread; the client/server communication goes on until the socket is closed. Each request received from a client is put in a LinkedBlockingQueue, and another thread processes the requests from that queue. Since the request is queued, I use a ConcurrentHashMap to look up the client socket later, when the request has been processed and the response is ready, so that I can send the response back to the client.
Now I need to implement timeout functionality: if a request is not processed and the response is not ready within a certain time period, some sort of message should be sent to the client saying its request cannot be processed right now. Can anybody tell me the best way to do this in a multithreaded environment? Remember that I have a client map in which the client connection is stored against each request id.
I am thinking of having a separate thread that keeps iterating over the map keys and checking the time, but since requests keep getting added to the map I would like a better way to do it.
Thanks
Guava's LoadingCache can solve the timeout and the concurrent modifications for you: https://code.google.com/p/guava-libraries/wiki/CachesExplained Exchange your request map for a LoadingCache by setting it up like this:
LoadingCache<Request, Connection> requests = CacheBuilder.newBuilder()
.maximumSize(1000)
.expireAfterAccess(1, TimeUnit.MINUTES)
.removalListener(MY_LISTENER)
.build(
new CacheLoader<Request, Connection>() {
public Connection load(Request request) throws AnyException {
return clientConnectionForRequest(request);
}
});
When a request comes in, you load it in the cache:
requests.get(request);
After this, the request will sit there waiting to be processed. If processing is started, then get the connection and invalidate the request, so it is removed from the cache. ①
Connection c = requests.getIfPresent(request);
if (c != null) {
requests.invalidate(request); // remove from the waiting area
// proceed with processing the request
} else {
// the request was evicted from the cache as it expired
}
In the removal listener you need to implement some simple logic that listens for evictions. (If you invalidate explicitly, then wasEvicted() will return false.)
MY_LISTENER = new RemovalListener<Request, Connection>() {
@Override
public void onRemoval(RemovalNotification<Request, Connection> notification) {
if (notification.wasEvicted()) {
Connection c = notification.getValue();
// send timeout response to client
}
}
};
You can order the requests by placing them in a queue and executing the method described at ①. That method will also take care of executing only those requests that have not yet timed out, so you need no additional housekeeping.
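A small sketch of that worker loop; the pending queue and the process method are assumptions, and requests is the LoadingCache from above.

// import java.util.concurrent.BlockingQueue;
// import java.util.concurrent.LinkedBlockingQueue;

// Filled in arrival order by the same code that calls requests.get(request).
BlockingQueue<Request> pending = new LinkedBlockingQueue<>();

Runnable worker = () -> {
    while (!Thread.currentThread().isInterrupted()) {
        try {
            Request request = pending.take();
            Connection c = requests.getIfPresent(request);
            if (c != null) {
                requests.invalidate(request);   // remove from the waiting area
                process(request, c);            // assumed processing method
            }
            // else: the entry expired and the removal listener already sent the timeout response
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
};
new Thread(worker, "request-worker").start();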
Use ConcurrentHashMap. It allows full concurrency for reads and adjustable concurrency for writes. It uses volatile semantics when storing data, so even if a modification is being made by one thread to a bucket, it will be visible to any other thread trying to read data from the same bucket.
In an Async servlet processing scenario, I want to achieve cancellation of requests.
(Am also hoping to keep this RESTful)
Say, I have a code like this:
#RequestMapping("/quotes")
#ResponseBody
public void quotes() {
//...
final AsyncContext ac = request.startAsync();
ac.setTimeout(0);
RunJob job = new RunJob(ac);
asyncContexts.add(job);
pool.submit(job);
}
// In some other application-managed thread with a message-driven bean:
public void onMessage(Message msg) {
//...
if (notEndOfResponse) {
ServletOutputStream out = ac.getResponse().getOutputStream();
//...
out.print(message);
} else {
ac.complete();
asyncContexts.remove(ac);
}
}
If the client decides to cancel this processing at the server side, it needs to send another HTTP request that identifies the previous request; the server then cancels the previous request (i.e. stops server-side processing for that request and completes its response).
Is there a standard way to do this ?
If there is no standard way to do this and each developer does it as per their will and skill, I would like to know if my (trivial) approach to this problem is OK.
My way (after @Pace's suggestion) is:
Create a "requestId" on the server and return a URL/link as part of the first partial responses (because I could get many partial responses for a single request as part of async processing).
The link could be, for ex:
.../outstandingRequests/requestId
When it needs to cancel the request, the client does a DELETE request on that URL and lets the server figure out how to achieve the cancellation at its end.
Any problems with this approach ?
When using long-running operations/tasks in a RESTful sense, it is best to treat the operation itself as a resource. A POST to the operations URL returns a URL you can use to GET the status of that operation (including the results when the operation finishes), and a DELETE to that URL will terminate the operation.
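For the asker's setup, the cancellation side could look roughly like the sketch below; the id registry maps and tracking RunJob as a Future are assumptions layered on top of the original code.

// Assumed bookkeeping, populated when the job is submitted:
//   running.put(id, pool.submit(job));  contexts.put(id, ac);
private final Map<String, Future<?>> running = new ConcurrentHashMap<>();
private final Map<String, AsyncContext> contexts = new ConcurrentHashMap<>();

@RequestMapping(value = "/outstandingRequests/{id}", method = RequestMethod.DELETE)
@ResponseBody
public ResponseEntity<Void> cancel(@PathVariable String id) {
    Future<?> job = running.remove(id);
    AsyncContext ac = contexts.remove(id);
    if (job == null) {
        return new ResponseEntity<>(HttpStatus.NOT_FOUND);   // unknown or already finished
    }
    job.cancel(true);        // interrupts the worker; RunJob must check for interruption
    if (ac != null) {
        ac.complete();       // finish the original async response
    }
    return new ResponseEntity<>(HttpStatus.NO_CONTENT);      // cancelled
}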
I would like some advice on this issue:
I am using JBoss 5.1.0 and EJB 3.0.
I have a system which sends requests via UDP to remote modems and is supposed to wait for an answer from the target modem.
The remote modems support only UDP calls, therefore I need to design an asynchronous mechanism (also because I want to request X modems in parallel).
This is what I am trying to do:
All calls are retrieved from the database, then each call is added as a message to a JMS queue.
Let's say I set X MDBs on that queue, so I can work asynchronously. Each MDB will send a UDP request to the IP address (remote modem) parsed from the queue message.
So basically each MDB that takes a message sends a UDP request to the remote modem and waits for an answer from that modem.
Now here is the bug:
A scenario can happen where an MDB gets an answer, but not from the right modem (the one it requested in the first place).
That bad scenario causes two problems:
a. the sender that sent the message will wait forever, since the answer never returns to it (it got accepted by another MDB).
b. the MDB that received the answer is not the right one; if it was in "listener" mode, it was supposed to wait for an answer from a different sender (otherwise it wouldn't get any messages).
Of course I can handle everything with a retry mechanism, so that both MDBs (the one that got a message from the wrong sender, and the one that never got its answer) will try their operation again, hoping it will succeed next time.
That is my mechanism; maybe you could tell me whether there is a design pattern or any other effective solution for this problem?
Thanks,
ray.
It's tough to define an exact solution without knowing the details, but I will assume that when a response is received from a modem (either the correct one or not), it is possible to determine exactly which modem the response came from.
If this is the case, I would separate out the request handler from the response handler:
RequestMDB receives a message from the [existing] queue, dispatches the request and returns.
A new component (call it the ResponseHandler) handles all incoming responses from the modems. The responding modem is identified (by a modem ID?) and the response is packaged into a JMS message which is sent to a JMS response queue.
A new MDB (ResponseMDB) listens on the JMS Response Queue and processes the response for which the modem ID is now known.
In short, by separating concerns, you remove the need for the response processing MDB to only be able to process responses from a specific modem and can now process any response that is queued by the ResponseHandler.
The ResponseHandler (listening for responses from the modems) would need to be a multithreaded service. You could implement this as a JBoss ServiceMBean with some sort of ThreadPool support. It will need a reference to the JMS QueueConnectionFactory and the JMS response queue.
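A bare-bones sketch of that forwarding step, assuming the connection factory and response queue are injected into the ServiceMBean and that the modem ID can be derived from the incoming UDP packet (error handling and connection pooling omitted):

// Fields assumed to be configured on the ServiceMBean:
//   private QueueConnectionFactory queueConnectionFactory;
//   private Queue responseQueue;

// Called on a ResponseHandler worker thread whenever a UDP response arrives.
public void forwardResponse(byte[] payload, String modemId) throws JMSException {
    Connection connection = queueConnectionFactory.createConnection();
    try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(responseQueue);

        BytesMessage message = session.createBytesMessage();
        message.writeBytes(payload);
        message.setStringProperty("modemId", modemId);  // lets the ResponseMDB pair the answer with its request

        producer.send(message);
    } finally {
        connection.close();  // closing the connection also closes the session and producer
    }
}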
In order to handle request timeouts, I propose you create a scheduled task, one for each modem, named after the modem ID. When a request is sent, the task is scheduled for execution after a delay equal to the timeout period. When a response is received by the ResponseHandler, the ResponseHandler queues the response and then cancels the named task. If the timeout period elapses without a cancellation, the scheduled task executes and queues another request (and reschedules the timeout task).
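A small sketch of that timeout bookkeeping with a ScheduledExecutorService; the class and method names are assumptions.

import java.util.Map;
import java.util.concurrent.*;

public class ModemTimeouts {

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
    private final Map<String, ScheduledFuture<?>> timeouts = new ConcurrentHashMap<>();

    // Called right after a UDP request is sent to a modem.
    public void armTimeout(String modemId, long timeoutMs, Runnable onTimeout) {
        ScheduledFuture<?> task = scheduler.schedule(() -> {
            timeouts.remove(modemId);
            onTimeout.run();            // e.g. re-queue the request and arm a new timeout
        }, timeoutMs, TimeUnit.MILLISECONDS);
        timeouts.put(modemId, task);
    }

    // Called by the ResponseHandler when the modem's answer arrives in time.
    public void cancelTimeout(String modemId) {
        ScheduledFuture<?> task = timeouts.remove(modemId);
        if (task != null) {
            task.cancel(false);
        }
    }
}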
Easier said than done, I suppose, but I hope this helps.
//Nicholas