I'm using an ExecutorService created with newSingleThreadExecutor to execute my Runnable tasks in serial order; however, serial execution order does not seem to be guaranteed, as sometimes tasks are executed in random order.
executorService = Executors.newSingleThreadExecutor();
executorService.submit(MyTask1);
executorService.submit(MyTask2);
MyTask performs some asynchronous operation and sends the result back to the class from which I'm executing the task.
Though the docs say that with newSingleThreadExecutor() tasks have to be executed serially, I'm not able to find out what I'm missing here. Any help would be appreciated.
Thanks
Since execution order is guaranteed to be sequential, you are probably not using a single-thread executor in the code you are actually running.
As a workaround, submit one task that does two things:
executorService.submit(() -> {MyTask1.run(); MyTask2.run();});
tl;dr
If the results you are watching arrive out of order because they are being produced asynchronously, then all is well. Async work is being done on its own separate timeline. By definition, the async work launched by task # 1 may not complete until long after tasks # 1, 2, & 3 are all done (for example).
Asynchronous means “don’t wait for me”
You mentioned:
MyTask performs some Asynchronous operation
Asynchronous execution means the calling code need not wait for the async operation to complete.
The calling code, your task submitted to the executor service, makes the request for async work to be done, and the task immediately continues. If that task has no further work, the task is complete. So the executor service can move on.
The executor service moves on. The executor service executes the second submitted task. Meanwhile, the async work requested above may not yet be done. Perhaps the async work is waiting on a resource, such as waiting for a call over the network to return, or the async work is waiting for a database query to execute. That, by definition of asynchronous, does not block the task submitted to the executor. The executor service is now running the 2nd submitted task, and maybe a third or fourth, before, finally, your async work completes.
Feature, not a bug
In other words, a feature, not a bug. The ExecutorService returned by Executors.newSingleThreadExecutor() fulfilled its promise that “Tasks are guaranteed to execute sequentially”. The fact that as a by-product one of those tasks spins off async work does not change the fact that the tasks as submitted were indeed executed in their sequential order.
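To see this concretely, here is a minimal sketch (the fetchAsync helper and the sleep durations are made up for illustration): both tasks run strictly in submission order on the single-thread executor, yet the async work they launch finishes on its own timeline, so the results can appear reversed.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncOrderDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newSingleThreadExecutor();

        executorService.submit(() -> fetchAsync("task1", 200)); // runs first, returns immediately
        executorService.submit(() -> fetchAsync("task2", 50));  // runs second, returns immediately

        Thread.sleep(500); // keep the demo alive long enough for both results
        executorService.shutdown();
    }

    // Simulated async operation: "task2" finishes first even though it was submitted second.
    static void fetchAsync(String name, long millis) {
        CompletableFuture.runAsync(() -> {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println(name + " result arrived");
        });
    }
}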
Related
I have an ssh client library implementation. Each connection has a few executors. One is a thread pool using ScheduledThreadPoolExecutor, which is used to queue short-lived tasks and timers. One is the read executor, used to hold a packet receiver task. One is the write executor, serially executing tasks, each of which sends one packet to the server. Of course both the read and write executors are single-threaded, and the write executor is used as something like a message queue.
The problem that I have is: methods to queue a message, and some methods queuing tasks, return a CompletableFuture. I queue stuff with the CompletableFuture.runAsync method. However, the connection may be asynchronously closed, in an orderly or forced manner. In that case some or all pools are shut down using the shutdownNow method.
What to do in the case that some threads, including threads outside of those pools, could be waiting for some task to complete synchronously, and there is a risk of an asynchronous shutdownNow due to anything, including network errors? shutdownNow does not call a future's cancel method. I do not care whether the actual tasks are interrupted or not; I just care that futures will block indefinitely if the executor was shut down while their task was still in the queue.
What is the best practice to handle this situation? What do people do/etc?
Okay, I believe I have an idea. It is the following:
Because a regular shutdown waits for all tasks to complete, and shutdownNow will just discard them without cancelling, and because I actually end up using completable futures all the time, I decided to maintain, per connection, a set of completable futures of all kinds that holds all tasks, including message senders and normal tasks submitted to the task pool. Each method that closes the connection or begins an orderly disconnect will go through the set and complete all futures exceptionally with some exception. That gives better errors than cancellation. Also, nothing bad should happen if a task cancels itself this way.
Instead of using runAsync, or normally creating a completable future in the case of tasks not associated with runnables, I have a special method that creates such a task, adds it to the set, and attaches a function using CompletableFuture.whenComplete() that removes the task from the set when it completes for any reason. I also have a runAsync variant that creates the task using the previously described method and then submits a runnable using CompletableFuture.completeAsync.
That way all waiting threads should unblock on connection close and get a nice exception from all tasks, including sent messages, no matter which method I use to wait for completion, get() or join().
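A minimal sketch of that bookkeeping, with made-up names (ConnectionTasks, newTracked, failAll), could look like this:
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

class ConnectionTasks {
    // All outstanding futures for one connection.
    private final Set<CompletableFuture<?>> tracked = ConcurrentHashMap.newKeySet();

    // Create a future, track it, and untrack it once it completes for any reason.
    <T> CompletableFuture<T> newTracked() {
        CompletableFuture<T> future = new CompletableFuture<>();
        tracked.add(future);
        future.whenComplete((result, error) -> tracked.remove(future));
        return future;
    }

    // Called on connection close: fail every outstanding future so waiters unblock.
    void failAll(Throwable cause) {
        for (CompletableFuture<?> future : tracked) {
            future.completeExceptionally(cause);
        }
    }
}
Completing an already-completed future is a no-op, so calling failAll after a task has finished normally does no harm.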
I had some queries regarding Future usage. Please go through the below example before addressing my queries.
http://javarevisited.blogspot.in/2015/01/how-to-use-future-and-futuretask-in-Java.html
The main purpose of using thread pools & Executors is to execute tasks asynchronously without blocking the main thread. But once you use Future, it blocks the calling thread. Do we have to create a separate new thread/thread pool to analyse the results of Callable tasks? Or is there any other good solution?
Since the Future call blocks the caller, is it worth using this feature? If I want to analyse the result of a task, I can make a synchronous call and check the result of the call without a Future.
What is the best way to handle rejected tasks with a RejectionHandler? If a task is rejected, is it good practice to submit the task to another thread or thread pool, or to submit the same task to the current ThreadPoolExecutor again?
Please correct me if my thought process is wrong about this feature.
Your question is about performing an action when an asynchronous action has been done. Futures, on the other hand, are good if you have an unrelated activity which you can perform while the asynchronous action is running. Then you may regularly poll the action represented by the Future via isDone() and do something else if it is not done, or call the blocking get() if you have no more unrelated work for your current thread.
If you want to schedule an on-completion action without blocking the current thread, you may instead use CompletableFuture which offers such functionality.
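For example (a rough sketch; the pool size and the computation are placeholders), the on-completion action runs without the submitting thread ever calling get():
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallbackDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        CompletableFuture
                .supplyAsync(() -> 6 * 7, pool)   // the work, run on the pool
                .thenAccept(result -> System.out.println("Result: " + result)); // runs when done

        // The current thread is free to do unrelated work here; no blocking get() needed.
        pool.shutdown(); // already-submitted work still runs to completion
    }
}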
CompletableFuture is the solution for queries 1 and 2, as suggested by @Holger.
I want to add an update about the RejectedExecutionHandler mechanism regarding query 3.
Java provides four types of rejection handler policies, as per the javadocs.
In the default ThreadPoolExecutor.AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection.
In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.
In ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply dropped.
In ThreadPoolExecutor.DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.)
CallerRunsPolicy: If you have more tasks in the task queue, using this policy will degrade performance. You have to be careful, since rejected tasks will be executed by the thread that submitted them. If running the rejected task is critical for your application and you have a limited task queue, you can use this policy.
DiscardPolicy: If discarding a non-critical event does not bother you, then you can use this policy.
DiscardOldestPolicy: Discards the oldest queued job and retries the newly rejected one.
If none of them suits your need, you can implement your own RejectionHandler.
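As an illustration only (the class name and the block-on-full behaviour are my assumptions, not a recommendation for every workload), a custom handler might re-queue the rejected task with a blocking put so the submitter waits instead of losing the task:
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

class BlockWhenFullPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable task, ThreadPoolExecutor executor) {
        if (executor.isShutdown()) {
            throw new RejectedExecutionException("Executor has been shut down");
        }
        try {
            // Block until space frees up in the work queue, then enqueue the task.
            executor.getQueue().put(task);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while re-queuing task", e);
        }
    }
}
You would install it (or one of the built-in policies) either through the ThreadPoolExecutor constructor or via setRejectedExecutionHandler(new BlockWhenFullPolicy()).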
In my application, jobs are submitted dynamically, and I need to keep track of the submitted jobs' completion. When I shut down my application, I want to wait until all the submitted jobs have completed. For this I maintain a list of submitted job ids. As soon as a process-completion notification is raised, I remove the id from the list. When shutdown is called, I wait until the list becomes empty.
while (!ids.isEmpty());
Is there a better way to do this than a busy wait?
If you are implementing the job dispatching and running by hand by creating and starting threads, then you need to use Object.wait and Object.notify to implement a condition variable. It is a bit fiddly to get right ...
But a better approach is to use a ThreadPoolExecutor service for running your jobs. That allows you to submit all of the jobs, and then call shutdown and awaitTermination ... which will wait until all of the queued jobs have completed.
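A minimal sketch, assuming your jobs can be expressed as Runnables:
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        List<Runnable> jobs = List.of(
                () -> System.out.println("job 1"),
                () -> System.out.println("job 2"));
        jobs.forEach(pool::submit);

        pool.shutdown();                                   // stop accepting new jobs
        if (!pool.awaitTermination(1, TimeUnit.MINUTES)) { // block until queued jobs finish
            pool.shutdownNow();                            // give up after the timeout
        }
    }
}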
Are you reinventing an ExecutorService? In particular, its awaitTermination() method? And yes, awaitTermination() does not busy wait ...
You could try to do it the other way around:
Each job calls an "exit"-method (in the class that holds the ids) that checks whether this is the last process to die. Then there will be no busy-wait loop. Provide each job with a "TerminationHandlerInterface" that has the exit method.
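A rough sketch of that idea (JobTracker and the method names are made up): the waiting thread blocks on wait() instead of spinning, and the last job to exit wakes it with notifyAll().
import java.util.HashSet;
import java.util.Set;

class JobTracker {
    private final Set<String> ids = new HashSet<>();

    synchronized void register(String id) {
        ids.add(id);
    }

    // Each job calls this as its last action (the "exit"-method).
    synchronized void exit(String id) {
        ids.remove(id);
        if (ids.isEmpty()) {
            notifyAll(); // wake the thread waiting in awaitAllDone()
        }
    }

    // Called during shutdown; blocks without busy-waiting until all jobs have exited.
    synchronized void awaitAllDone() throws InterruptedException {
        while (!ids.isEmpty()) {
            wait();
        }
    }
}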
I try to work with Java's FutureTask, Future, Runnable, Callable and ExecutorService types.
What is the best practice to compose those building blocks?
Given that I have multiple FutureTasks, I want to execute them in sequence.
Of course I could make another FutureTask which submits / waits for the result of each subtask in sequence, but I want to avoid blocking calls.
Another option would be to let those subtasks invoke a callback when they complete, and schedule the next task in the callback. But going that route, how do I create a proper outer FutureTask object which also handles exceptions in the subtasks without producing that much boilerplate?
Do I miss something here?
A very important thing, though usually not described in tutorials:
Runnables to be executed on an ExecutorService should not block. This is because each blocking call ties up a working thread: if the ExecutorService has a limited number of working threads, there is a risk of falling into deadlock (thread starvation), and if the ExecutorService has an unlimited number of working threads, there is a risk of running out of memory. Blocking operations in the tasks simply destroy all the advantages of an ExecutorService, so use blocking operations on ordinary threads only.
FutureTask.get() is a blocking operation, so it can be used on ordinary threads but not from an ExecutorService task. That is, it cannot serve as a building block, but only to deliver the result of execution to the master thread.
The right approach to building execution from tasks is to start the next task when all of its input data is ready, so that the task does not have to block waiting for input. So you need a kind of gate which stores intermediate results and starts a new task when all of its arguments have arrived. Thus tasks do not have to bother with explicitly starting other tasks. Such a gate, which consists of input sockets for the arguments and a Runnable to compute from them, can be considered the right building block for computations on ExecutorServices.
This approach is called dataflow or workflow (if gates cannot be created dynamically).
Actor frameworks like Akka use this approach, but are limited by the fact that an actor is a gate with a single input socket.
I have written a true dataflow library published at https://github.com/rfqu/df4j.
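Using only the standard library, the gate idea can be roughly approximated with CompletableFuture (this is just a sketch of the concept, not the df4j API): the combining step is scheduled only once both arguments have arrived, so no task blocks waiting for input.
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class GateDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Two independent "input sockets", each filled by its own task on the pool.
        CompletableFuture<Integer> left = CompletableFuture.supplyAsync(() -> 2 + 2, pool);
        CompletableFuture<Integer> right = CompletableFuture.supplyAsync(() -> 3 * 3, pool);

        // The "gate": runs only when both arguments are ready.
        left.thenCombineAsync(right, Integer::sum, pool)
            .thenAccept(sum -> System.out.println("sum = " + sum))
            .join(); // joining here only keeps the demo alive; the master thread may block

        pool.shutdown();
    }
}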
I tried to do something similar with a ScheduledFuture, trying to introduce a delay before things were displayed to the user. This is what I came up with: simply use the same ScheduledFuture for all your 'delays'. The code was:
public static final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

public ScheduledFuture<?> delay = null;

// Schedule the first action to run after 1 second...
delay = scheduler.schedule(new Runnable() {
    @Override
    public void run() {
        // do something
    }
}, 1000, TimeUnit.MILLISECONDS);

// ...and reuse the same field for the next delayed action after 2 seconds.
delay = scheduler.schedule(new Runnable() {
    @Override
    public void run() {
        // do something else
    }
}, 2000, TimeUnit.MILLISECONDS);
Hope this helps
Andy
The usual approach is to:
Decide about ExecutorService (which type, how many threads).
Decide about the task queue (how large it can grow while submission stays non-blocking).
If you have some external code that waits for the task result (see the sketch at the end of this answer):
* Submit tasks as Callables (this is non-blocking as long as you do not run out of queue space).
* Call get on the Future.
If you want some actions to be taken automatically after the task is finished:
You can submit as Callables or Runnables.
Just add what you need to do at the end as the last code inside the task. Use
Activity.runOnUiThread if these final actions need to modify the GUI.
Normally, you should not actively check when you can submit one more task, or schedule a callback just in order to submit one. The task queue (blocking, if preferred) will handle this for you.
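A minimal sketch of the Callable-and-get path from the list above (the pool size and the task are placeholders):
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitAndGetDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        Callable<Integer> task = () -> 2 + 2;        // the work whose result is awaited
        Future<Integer> future = pool.submit(task);  // submission does not block

        Integer result = future.get();               // blocks only the external waiting code
        System.out.println("result = " + result);

        pool.shutdown();
    }
}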
I am creating an HTTP proxy server in Java. I have a class named Handler which is responsible for processing the requests and responses going between the web browser and the web server. I also have another class named Copy which copies an InputStream object to an OutputStream object. Both these classes implement the Runnable interface. I would like to use thread pooling in my design; however, I don't know how to go about that. Any hint or idea would be highly appreciated.
I suggest you look at Executor and ExecutorService. They add a lot of good stuff to make it easier to use Thread pools.
...
@Azad provided some good information and links. You should also buy and read the book Java Concurrency in Practice (often abbreviated as JCiP). Note to stackoverflow big-wigs - how about some revenue link to Amazon???
Below is my brief summary of how to use and take advantage of ExecutorService with thread pools. Let's say you want 8 threads in the pool.
You can create one using the full featured constructors of ThreadPoolExecutor, e.g.
ExecutorService service = new ThreadPoolExecutor(8,8, more args here...);
or you can use the simpler but less customizable Executors factories, e.g.
ExecutorService service = Executors.newFixedThreadPool(8);
One advantage you immediately get is the ability to shutdown() or shutdownNow() the thread pool, and to check this status via isShutdown() or isTerminated().
If you don't care much about the Runnables you wish to run, or they are very well written, self-contained, never fail or log any errors appropriately, etc., you can call
execute(Runnable r);
If you do care about either the result (say, it calculates pi or downloads an image from a webpage) and/or you care if there was an Exception, you should use one of the submit methods that returns a Future. That allows you, at some time in the future, to check whether the task isDone() and to retrieve the result via get(). If there was an Exception, get() will throw it (wrapped in an ExecutionException). Note - even if your Future doesn't "return" anything (it is of type Void) it may still be good practice to call get() (ignoring the void result) to test for an Exception.
However, this checking of the Future is a bit of a chicken-and-egg problem. The whole point of a thread pool is to submit tasks without blocking. But Future.get() blocks, and Future.isDone() begs the question of which thread is calling it, and what it does if the task isn't done - do you sleep() and block?
If you are submitting a known chunk of related tasks simultaneously, e.g., you are performing some big mathematical calculation like a matrix multiply that can be done in parallel, and there is no particular advantage to obtaining partial results, you can call invokeAll(). The calling thread will then block until all the tasks are complete, at which point you can call Future.get() on all the Futures.
What if the tasks are more disjointed, or you really want to use the partial results? Use ExecutorCompletionService, which wraps an ExecutorService. As tasks get completed, they are added to a queue. This makes it easy for a single thread to poll and remove events from the queue. JCiP has a great example of a web page app that downloads all the images in parallel and renders them as soon as they become available, for responsiveness.
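A minimal sketch of that pattern (the tasks here are trivial stand-ins for JCiP's image downloads): results are consumed in completion order, so each partial result can be used as soon as it is ready.
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<String> completion = new ExecutorCompletionService<>(pool);

        int count = 3;
        for (int i = 0; i < count; i++) {
            final int n = i;
            completion.submit(() -> "result " + n); // stand-in for downloading image n
        }

        // take() returns whichever Future completes next, not submission order.
        for (int i = 0; i < count; i++) {
            String result = completion.take().get();
            System.out.println("Rendered " + result);
        }

        pool.shutdown();
    }
}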
I hope the below will help you:
Interface Executor
An object that executes submitted Runnable tasks. This interface provides a way of decoupling task submission from the mechanics of how each task will be run, including details of thread use, scheduling, etc. An Executor is normally used instead of explicitly creating threads. For example, rather than invoking new Thread(new RunnableTask()).start() for each of a set of tasks, you might use:
Executor executor = anExecutor;
executor.execute(new RunnableTask1());
executor.execute(new RunnableTask2());
...
class ScheduledThreadPoolExecutor
A ThreadPoolExecutor that can additionally schedule commands to run after a given delay, or to execute periodically. This class is preferable to Timer when multiple worker threads are needed, or when the additional flexibility or capabilities of ThreadPoolExecutor (which this class extends) are required.
Delayed tasks execute no sooner than they are enabled, but without any real-time guarantees about when, after they are enabled, they will commence. Tasks scheduled for exactly the same execution time are enabled in first-in-first-out (FIFO) order of submission.
and
Interface ExecutorService
An Executor that provides methods to manage termination and methods that can produce a Future for tracking progress of one or more asynchronous tasks.
An ExecutorService can be shut down, which will cause it to stop accepting new tasks. After being shut down, the executor will eventually terminate, at which point no tasks are actively executing, no tasks are awaiting execution, and no new tasks can be submitted.
Edited:
You can find examples of how to use Executor and ExecutorService here, here, and here. That question will be useful for you.