I'm looking for a way to check a job's status over a period of time:
If during the polling I get a result saying the job has completed, I return it; otherwise I keep polling until the period I set is over and then return a failure result.
I know how to do this using a timer and a while loop.
Is there a better way of doing it?
Thanks in advance
A better way to poll for the result is to use a CompletionService.
Since you are already communicating asynchronously, make your task implement Callable and put your logic in its call() method, much as you would in a run() method.
Then wrap your executor service in a CompletionService and just submit tasks to it. The CompletionService backs the submissions with a blocking queue: as each task completes, its result is placed on the queue, and you can take results from it and do whatever you want with them.
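As a rough sketch (the pool size, job count and timeout below are arbitrary choices for illustration), submitting a few Callables through an ExecutorCompletionService and polling it with a deadline could look like this:

import java.util.concurrent.*;

public class JobPoller {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        // The CompletionService wraps the executor and queues results as they finish.
        CompletionService<String> completionService =
                new ExecutorCompletionService<>(executor);

        for (int i = 0; i < 4; i++) {
            final int jobId = i;
            completionService.submit(() -> {
                Thread.sleep(500L * jobId);        // simulate work
                return "job-" + jobId + " done";
            });
        }

        // poll() with a timeout bounds how long we wait for the next result;
        // a null return means the deadline passed with no completed job.
        for (int i = 0; i < 4; i++) {
            Future<String> done = completionService.poll(3, TimeUnit.SECONDS);
            if (done == null) {
                System.out.println("timed out waiting for a result");
                break;
            }
            System.out.println(done.get());
        }
        executor.shutdown();
    }
}

The timed poll() gives you the bounded-wait behaviour described in the question: if no job completes within the deadline, you get null back and can return a failure result.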
Let's consider the following situation in the producer-consumer pattern:
I cannot have a task sitting around waiting to be performed. I want to produce a task on demand (e.g. with a Supplier) only when a consumer is ready to process it. With a SynchronousQueue I need to have the actual task in hand when I call put(). How can I solve this?
I know that I could solve it by design - just make a set of workers that each produce a task, consume it, and repeat - but I'm looking for another way.
To be more specific:
Suppose I have a remote HTTP resource A. I can fetch a 'task' from it to process in my worker threads, and results are sent back asynchronously. The catch is that I should not fetch a task from A unless I am able to process it right away.
"I want to produce task on demand (eg. with Supplier) when a consumer is ready to process it."
One example of producing data on demand is the Reactive Streams protocol, where the Subscriber (consumer) asks the Publisher (producer) to push the next chunk of data via the Subscription.request() method.
This protocol is implemented in RxJava and other libraries.
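As a minimal illustration of that demand-driven handshake, here is a sketch using the JDK's built-in java.util.concurrent.Flow API rather than RxJava; the class and variable names are invented for the example. The Subscriber requests exactly one item in onSubscribe() and asks for the next one only after it has finished processing the current one:

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class OneAtATime {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);              // signal demand for exactly one task
                }

                @Override
                public void onNext(String task) {
                    System.out.println("processing " + task);
                    subscription.request(1);   // ready again, ask for the next one
                }

                @Override
                public void onError(Throwable t) { t.printStackTrace(); }

                @Override
                public void onComplete() { System.out.println("done"); }
            });

            for (int i = 0; i < 3; i++) {
                publisher.submit("task-" + i); // producer side
            }
        }
        Thread.sleep(500); // crude wait so the asynchronous consumer can finish printing
    }
}

Because the consumer signals demand one item at a time, the producer never gets ahead of what the consumer can actually process.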
If I were you, for this producer-consumer pattern I would not use blocking queues; what you are looking for is a non-blocking, asynchronous queue.
Then everybody is notified just in time.
Or is there some other constraint on the actual tasks? Perhaps I have misunderstood you: which side of the producer-consumer pair goes hungry?
Currently, I am working on a REST API through which a client can submit a forever-running task; the API should return a "started" confirmation if there is no error. The client should be able to submit any number of similar tasks, and there should be functionality to check the status of and cancel any task. I am trying to do this in Java using Spring. How should I approach the problem?
I was able to create an asynchronous task in Spring each time I send a POST request, but so far I have been unable to figure out how to check the status of or cancel a task.
You can use threads here:
Create a job executor class responsible for running the task.
Make the task a Callable, so that you can get the result back through a Future.
Use a BlockingQueue to store the tasks and pick them up from it (you should, given the limited number of processor cores, e.g. dual or quad core).
Use the Runtime class to get the number of available cores and run the threads on cores - 1 of them, i.e. number of threads = available cores - 1.
For cancellation, you can keep a flag mapped to the corresponding stored thread; have the task check that flag and stop its work as soon as cancellation is requested. A sketch of this approach follows below.
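A rough sketch of that idea (the class name and status strings are invented for illustration, and the Spring controller wiring is omitted): keep each submitted Future in a map under a generated id, which gives you both the status check and the cancel operation.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

public class JobExecutor {
    // Size the pool to available cores - 1, as suggested above.
    private final ExecutorService pool = Executors.newFixedThreadPool(
            Math.max(1, Runtime.getRuntime().availableProcessors() - 1));

    // Keep a handle to every submitted job so it can be queried or cancelled later.
    private final Map<String, Future<?>> jobs = new ConcurrentHashMap<>();

    /** Submit a (potentially forever-running) task and return its id. */
    public String submit(Runnable task) {
        String id = UUID.randomUUID().toString();
        jobs.put(id, pool.submit(task));
        return id;
    }

    /** Report a simple status string for the job. */
    public String status(String id) {
        Future<?> f = jobs.get(id);
        if (f == null)       return "UNKNOWN";
        if (f.isCancelled()) return "CANCELLED";
        if (f.isDone())      return "DONE";
        return "RUNNING_OR_QUEUED";
    }

    /** Cancel the job; the task must react to interruption (or check a flag). */
    public boolean cancel(String id) {
        Future<?> f = jobs.get(id);
        return f != null && f.cancel(true);
    }
}

From a Spring controller you would then simply call submit(), status() and cancel() from the corresponding request-mapped handler methods.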
I'm using openejb and asynchronous EJBs. I have a lot of Futures and would like to know if one of the Futures is taking too much time (so I can trace it and eventually cancel it). The problem is that to know how long a Future has been running, I need to know when it started.
The Future interface lets me know whether a Future is finished or cancelled, but if it is neither, I can't tell whether it is running or still waiting for a thread in the pool.
Is it possible to get the Future's status (running / not started)?
Thanks
I believe your intent is:
1) Get the return value from the submitted task using a Future.
2) If it's taking too long, cancel it.
If this is correct, then why not use get() with a timeout value, as sketched below?
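A minimal sketch of that approach, assuming a 30-second budget (the value is arbitrary): wait for the result with a deadline and cancel the Future if it expires.

import java.util.concurrent.*;

public class TimeoutCheck {
    static void awaitOrCancel(Future<String> future) {
        try {
            // Wait at most 30 seconds for the result.
            String result = future.get(30, TimeUnit.SECONDS);
            System.out.println("result: " + result);
        } catch (TimeoutException e) {
            // Took too long: trace it, then cancel (interrupting if already running).
            System.out.println("task exceeded the time budget, cancelling");
            future.cancel(true);
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}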
In the GWT client I can cancel a request by having the method in the Async interface return a Request object, which has a cancel() method I can call.
So far so good, but is there a way to detect this in the Java code running on the server?
My use case is an RPC call which takes a long time to complete on the server, and where the user has a "cancel" button to stop the request.
If the user cancels the request, the server should stop processing, but there seems to be no way to detect that the client has closed the connection.
It is usually not a good idea to use the server's request threads for long-running tasks. You should redesign them to be executed asynchronously (if you have not done so already). You can use the java.util.concurrent tools such as FutureTask and Executors to achieve this. You will also need a thread pool to control the maximum number of concurrent tasks.
When you submit a request for a long task from the client, return a reference key (e.g. a UUID or some other unique string) for your FutureTask as soon as you schedule it for execution. Then, to cancel the task, pass the key back from the client, look up your task, and cancel it:
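Something along these lines (only a sketch; the class, pool size and method names are invented):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

public class LongTaskService {
    private final ExecutorService executor = Executors.newFixedThreadPool(10);
    private final Map<String, Future<?>> tasks = new ConcurrentHashMap<>();

    /** Schedule the long-running work and hand the client a key for it. */
    public String start(Runnable longRunningWork) {
        String key = UUID.randomUUID().toString();
        FutureTask<Void> futureTask = new FutureTask<>(longRunningWork, null);
        tasks.put(key, futureTask);
        executor.execute(futureTask);
        return key;                      // return this key to the GWT client
    }

    /** Called from the RPC endpoint when the user hits "cancel". */
    public void cancel(String key) {
        Future<?> task = tasks.remove(key);
        if (task != null) {
            task.cancel(true);           // interrupts the worker thread
        }
    }
}

Note that the long-running work still has to cooperate: it should periodically check Thread.currentThread().isInterrupted() and stop when the flag is set, otherwise cancel(true) will not actually stop it.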
See javadoc for more details:
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/FutureTask.html
I'm developing an app that makes requests to the Musicbrainz web service. I read in the Musicbrainz manual not to make more than one request per second to the web service, or the client IP will be blocked.
What architecture do you suggest in order to make this restriction transparent to the service client?
I would like to call a method (getAlbuns, for example) and have it make the request only one second after the last request.
I would also like to issue 10 requests at once and have the service handle the queueing, returning the results as they become available (non-blocking).
Thanks!
Because of the required delay between invocations, I'd suggest a java.util.Timer or java.util.concurrent.ScheduledThreadPoolExecutor. Timer is very simple and perfectly adequate for this use case, but if additional scheduling requirements are identified later, a single Executor could handle all of them. In either case, use a fixed-delay method, not a fixed-rate one.
The recurring task polls a concurrent queue for a request object. If there is a pending request, the task executes it, and returns the result via a callback. The query for the service and the callback to invoke are members of the request object.
The application keeps a reference to the shared queue. To schedule a request, simply add it to the queue.
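A sketch of that arrangement (the Request shape and the one-second spacing are assumptions for illustration): a single-threaded scheduler runs a fixed-delay task that forwards at most one queued request per tick.

import java.util.concurrent.*;
import java.util.function.Consumer;

public class ThrottledClient {
    /** A pending request: the query to send plus the callback for its result. */
    record Request(String query, Consumer<String> callback) {}

    private final ConcurrentLinkedQueue<Request> queue = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ThrottledClient() {
        // Fixed delay: the next poll starts one second after the previous one finished,
        // which keeps the outbound rate at or below one request per second.
        scheduler.scheduleWithFixedDelay(this::pollOnce, 0, 1, TimeUnit.SECONDS);
    }

    /** Application side: just enqueue the request; it will run when its turn comes. */
    public void schedule(Request request) {
        queue.add(request);
    }

    private void pollOnce() {
        Request request = queue.poll();
        if (request == null) {
            return;                                     // nothing pending this second
        }
        String result = callService(request.query());   // the actual web service call
        request.callback().accept(result);
    }

    private String callService(String query) {
        // placeholder for the real HTTP call to the rate-limited web service
        return "result-for-" + query;
    }
}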
Just to clarify, if the queue is empty when the scheduled task is executed, no request is made. The simple approach would be just to end the task, and the scheduler will invoke the task one second later to check again.
However, this means that it could take up to one second to start a task, even if no requests have been processed lately. If this unnecessary latency is intolerable, writing your own thread is probably preferable to using Timer or ScheduledThreadPoolExecutor. In your own timing loop, you have more control over the scheduling if you choose to block on an empty queue until a request is available. The built-in timers aren't guaranteed to wait a full second after the previous execution finished; they generally schedule relative to the start time of the task.
If this second case is what you have in mind, your run() method will contain a loop. Each iteration starts by blocking on the queue until a request is received, then recording the time. After processing the request, the time is checked again. If the time difference is less than one second, sleep for the remainder. This setup assumes that the one-second delay is required between the start of one request and the next. If the delay is required between the end of one request and the next, you don't need to check the time; just sleep for one second.
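A sketch of that hand-rolled loop, assuming the delay is measured from the start of one request to the start of the next (Request stands for your own pending-request type, and process() is a placeholder for the actual service call):

import java.util.concurrent.BlockingQueue;

// One dedicated thread that pulls requests from the shared queue and spaces them
// at least one second apart, measured start-to-start.
public class RequestPump implements Runnable {
    private static final long PERIOD_MS = 1000;
    private final BlockingQueue<Request> queue;

    public RequestPump(BlockingQueue<Request> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Request request = queue.take();          // block until work arrives
                long start = System.currentTimeMillis();
                process(request);                        // make the service call
                long elapsed = System.currentTimeMillis() - start;
                if (elapsed < PERIOD_MS) {
                    Thread.sleep(PERIOD_MS - elapsed);   // pad out to the full second
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();          // exit cleanly on interrupt
        }
    }

    private void process(Request request) {
        // placeholder: perform the query and deliver the result via the callback
    }
}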
One more thing to note is that the service might be able to accept multiple queries in a single request, which would reduce overhead. If it does, take advantage of this by blocking on take() for the first element, then using poll(), perhaps with a very short blocking time (5 ms or so), to see if the application is making any more requests. If so, these can be bundled up in a single request to the service. If queue is a BlockingQueue<? extends Request>, it might look something like this:
// Take at least one request (blocking), then grab any extras that arrive
// within a short window so they can all go out in a single bundled call.
Collection<Request> bundle = new ArrayList<Request>();
bundle.add(queue.take());
while (bundle.size() < BUNDLE_MAX) {
    Request req = queue.poll(EXTRA, TimeUnit.MILLISECONDS);
    if (req == null)
        break;
    bundle.add(req);
}
/* Now make one service request with contents of "bundle". */
You need to define a local "proxy service" which your local clients will call.
The local proxy will receive requests and pass them on to the real service, but only at the rate of one message per second.
How you do this depends very much on the technology available to you.
The simplest approach would be a multithreaded Java service with a static, synchronized long lastRequestTime timestamp variable (although you would need some code acrobatics to keep your requests in sequence).
A more sophisticated service could have worker threads receiving the requests and placing them on a queue, with a single thread picking up the requests and passing them on to the real service, as sketched below.
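A minimal sketch of that single-dispatcher variant (the class name and the string request payload are just placeholders):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProxyService {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    public ProxyService() {
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    String request = pending.take();   // wait for the next request
                    callRealService(request);          // forward it
                    Thread.sleep(1000);                // enforce one message per second
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();    // shut down the dispatcher
            }
        }, "proxy-dispatcher");
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    /** Worker threads call this; it never blocks on the remote service. */
    public void enqueue(String request) {
        pending.add(request);
    }

    private void callRealService(String request) {
        // placeholder for the actual call to the rate-limited service
        System.out.println("forwarding: " + request);
    }
}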