I have a task that will run many times with different values. I'd like to prevent two tasks with the same string value from executing at the same time. Below is an example of the strings; these values will change, but for simplicity I have included them in the example. I submit these tasks via an ExecutorService. The tasks run, but the second "hi" blocks the other tasks from running, so only 4 of the 5 tasks run concurrently. Once the lock from the first "hi" is released, the fifth task continues and the other tasks continue fine. Is there a way to prevent this kind of blocking so that the other 3 tasks can run before it, and there is no queuing until there really are 5 tasks running concurrently?
Submission of the tasks:
executor.submit(new Task("hi"));
executor.submit(new Task("h"));
executor.submit(new Task("u"));
executor.submit(new Task("y"));
executor.submit(new Task("hi"));
executor.submit(new Task("p"));
executor.submit(new Task("o"));
executor.submit(new Task("bb"));
The Task is simple. It just prints out the string:
Lock l = getLock(x);
l.lock();
try {
System.out.println(x);
try {
Thread.sleep(5000);
} catch (InterruptedException ex) {
Logger.getLogger(Task.class.getName()).log(Level.SEVERE, null, ex);
}
} finally {
l.unlock();
}
I've updated the post to allow for things to be more clearly understood...
To avoid blocking a thread, you have to ensure that the action doesn't even start before the previous one has completed. For example, you can use a CompletableFuture to chain the action, so that it is only scheduled once the previous one for the same key has completed:
public static void main(String[] args) {
ExecutorService es = Executors.newFixedThreadPool(2);
for(int i = 0; i < 5; i++) submit("one", task("one"), es);
for(int i = 0; i < 5; i++) submit("two", task("two"), es);
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(26));
es.shutdown();
}
static Runnable task(String x) {
return () -> {
System.out.println(x);
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(5));
};
}
static final ConcurrentHashMap<String, CompletableFuture<Void>> MAP
= new ConcurrentHashMap<>();
static final void submit(String key, Runnable task, Executor e) {
CompletableFuture<Void> job = MAP.compute(key,
(k, previous) -> previous != null?
previous.thenRunAsync(task, e): CompletableFuture.runAsync(task, e));
job.whenComplete((v,t) -> MAP.remove(key, job));
}
The ConcurrentHashMap allows us to handle the cases as atomic updates:
If no previous future exists for a key, just schedule the action, creating the future.
If a previous future exists, chain the action so that it is scheduled when the previous one completes; the dependent action becomes the new future.
When a job has completed, the two-arg remove(key, job) will remove it if and only if it is still the current job.
The example in the main method demonstrates how two independent chains of actions can run on a thread pool of two threads, never blocking a thread.
I need a group of threads to run at the same time, and then another group of threads after that. For example, 10 threads start working, and then 10 or 15 other threads.
Of course, the first approach I tried was to create a loop.
while (true) {
for (int i = 0; i < 10; i++) {
Thread thread = new Thread(
new Runnable() {
@Override
public void run() {
System.out.println("hi");
}
});
thread.start();
}
}
But the problem is when a scenario like this happens: imagine that in the first iteration, 8 threads finish their tasks and 2 threads take longer. The next 10 threads won't start until all 10 (completed and not completed) threads finish, whereas I want the 8 finished threads to be replaced by 8 of the waiting-to-start threads.
Bare Threads
It can be done using bare Thread and Runnable without diving into more advanced technologies.
For that, you need to perform the following steps:
define your task (provide an implementation of the Runnable interface);
create a collection of Threads based on this task;
start every thread;
invoke join() on each of these threads (note that we first need to start all the threads).
This is how it might look:
public static void main(String[] args) throws InterruptedException {
Runnable task = () -> System.out.println("hi");
int counter = 0;
while (true) {
System.out.println("iteration: " + counter++);
List<Thread> threads = new ArrayList<>();
for (int i = 0; i < 10; i++) {
threads.add(new Thread(task));
}
for (Thread thread : threads) {
thread.start();
}
for (Thread thread : threads) {
thread.join();
}
Thread.sleep(1000);
}
}
Instead of managing your Threads manually, it would definitely be wise to look at the facilities provided by implementations of the ExecutorService interface.
Things would be a bit easier if you used the Callable interface for your task instead of Runnable. Callable is handier in many cases because it allows you to obtain a result from the worker thread and to propagate an exception if something goes wrong (whereas run() forces you to catch every checked exception). If you have something more interesting in mind than printing a dummy message, you might find Callable to be useful for your purpose.
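For illustration, here is a minimal sketch of that point; the task body, the sleep, and the single-threaded pool are made up for the example, the key part is that the Callable returns a value and lets a checked exception surface through Future.get():

import java.util.concurrent.*;

public class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // The lambda is a Callable<String>: it may throw checked exceptions and returns a value.
        Callable<String> task = () -> {
            Thread.sleep(500); // no try/catch needed inside a Callable
            return "computed by " + Thread.currentThread().getName();
        };
        Future<String> future = executor.submit(task);
        System.out.println(future.get()); // a failure would be rethrown here as ExecutionException
        executor.shutdown();
    }
}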
ExecutorService.invokeAll() + Callable
ExecutorService has a blocking method invokeAll() which expects a collection of callable tasks and returns a list of completed Future objects when all the tasks are done.
To generate a lightweight collection of repeated elements (since we need to fire a bunch of identical tasks) we can use the utility method Collections.nCopies().
Here's a sample code which repeatedly runs a dummy task:
ExecutorService executor = Executors.newWorkStealingPool();
while (true) {
executor.invokeAll(Collections.nCopies(10, () -> {
System.out.println("hi");
return true;
}));
}
To make sure that it does what is expected, we can add a counter of iterations and display it on the console, plus a Thread.sleep() to avoid cluttering the output too quickly (for the same reason, the number of tasks is reduced to 3):
public static void main(String[] args) throws InterruptedException {
ExecutorService executor = Executors.newWorkStealingPool();
int counter = 0;
while (true) {
System.out.println("iteration: " + counter++);
executor.invokeAll(Collections.nCopies(3, () -> {
System.out.println("hi");
return true;
}));
Thread.sleep(1000);
}
}
Output:
iteration: 0
hi
hi
hi
iteration: 1
hi
hi
hi
... etc.
CompletableFuture.allOf().join() + Runnable
Another possibility is to use the CompletableFuture API and its method allOf(), which expects a varargs of submitted tasks in the form of CompletableFutures and returns a single CompletableFuture that completes when all the provided futures are done.
In order to synchronize the execution of the tasks with the main thread, we need to invoke join() on the resulting CompletableFuture instance.
This is how it might be implemented:
public static void main(String[] args) throws InterruptedException {
ExecutorService executor = Executors.newWorkStealingPool();
Runnable task = () -> System.out.println("hi");
int counter = 0;
while (true) {
System.out.println("iteration: " + counter++);
CompletableFuture.allOf(
Stream.generate(() -> task)
.limit(3)
.map(t -> CompletableFuture.runAsync(t, executor))
.toArray(CompletableFuture<?>[]::new)
).join();
Thread.sleep(1000);
}
}
Output:
iteration: 0
hi
hi
hi
iteration: 1
hi
hi
hi
... etc.
ScheduledExecutorService
I suspect you might be interested in scheduling these tasks instead of running them repeatedly in a tight loop. If that's the case, have a look at ScheduledExecutorService and its methods scheduleAtFixedRate() and scheduleWithFixedDelay().
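A minimal sketch of that idea, assuming the goal is simply to fire the dummy task once per second; the pool size and the period are made up:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // scheduleAtFixedRate measures the period from the start of each run;
        // scheduleWithFixedDelay would measure the delay from the end of each run.
        scheduler.scheduleAtFixedRate(() -> System.out.println("hi"), 0, 1, TimeUnit.SECONDS);
        // The scheduler threads are non-daemon, so this demo keeps running until killed.
    }
}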
For adding tasks to a pool of threads and having finished threads replaced, you can use an ExecutorService. You can create one with:
ExecutorService executor = Executors.newFixedThreadPool(10);
I'd like to submit tasks to a thread pool (or executor service), but a task should not be executed concurrently if there's already a task with the same key in the executor.
Specifically, this is for a build tool, to prevent tasks for the same part of a source tree executing concurrently.
This is an example of why I would want this behaviour:
public class Example {
public static void main(String[] args) throws Exception {
ExecutorService service = Executors.newFixedThreadPool(2);
Path resource = Paths.get("tmp");
service.submit(() -> {
Files.write(resource, Collections.singleton("foo"));
Thread.sleep(10);
if (!new String(Files.readAllBytes(resource)).equals("foo")) {
System.err.println("someone changed my stuff");
}
return null;
});
service.submit(() -> Files.write(resource, Collections.singleton("bar")));
service.shutdown();
service.awaitTermination(1, TimeUnit.MINUTES);
}
}
The solution is to use a separate single-threaded executor for each key. Since there can be many keys, creating a thread for each key can be expensive, so we replace the single-threaded executor with a lightweight SerialExecutor which behaves like a single-threaded executor but has no thread of its own, borrowing a thread from a normal backend executor when needed. SerialExecutor is described in the JavaDoc of Executor. An optimized version can be found in my CodeSamples project.
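For reference, this is roughly the sketch given in the Executor JavaDoc (not the optimized version mentioned above): it queues tasks and hands them one at a time to a backing executor, so tasks submitted to the same SerialExecutor never run concurrently.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Executor;

class SerialExecutor implements Executor {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    private final Executor backend;
    private Runnable active;

    SerialExecutor(Executor backend) {
        this.backend = backend;
    }

    @Override
    public synchronized void execute(Runnable r) {
        // Wrap the task so that, when it finishes, the next queued task gets scheduled.
        tasks.add(() -> {
            try {
                r.run();
            } finally {
                scheduleNext();
            }
        });
        if (active == null) {
            scheduleNext();
        }
    }

    private synchronized void scheduleNext() {
        if ((active = tasks.poll()) != null) {
            backend.execute(active);
        }
    }
}

One SerialExecutor per key, all sharing a single backend pool, then gives per-key serialization without a dedicated thread per key.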
See similar question Design pattern to guarantee only one Runnable object of particular id value is being executed by the pool
Try with 2 different executors for your different types of tasks, and when the first pool has completed, start the second one.
ExecutorService executor1 = Executors.newFixedThreadPool(5);
for (int i = 0; i < 10; i++) {
Runnable worker = new FirstTask("" + i);
executor1.execute(worker);
}
executor1.shutdown();
while (!executor1.isTerminated()) {
}
ExecutorService executor2 = Executors.newFixedThreadPool(5);
for (int i = 0; i < 10; i++) {
Runnable worker = new AnotherTask("" + i);
executor2.execute(worker);
}
executor2.shutdown();
I've got a ConcurrentLinkedDeque which I'm using for thread-safe push/pop of elements,
and I've got some async tasks which take one element from the stack and, if this element has neighbors, push them onto the stack.
Example code:
private ConcurrentLinkedDeque<Item> stack = new ConcurrentLinkedDeque<>();
private ExecutorService exec = Executors.newFixedThreadPool(5);
while ((item = stack.pollFirst()) != null) {
Runnable worker = new Solider(this, item);
exec.execute(worker);
}
class Solider implements Runnable {
public void run() {
if (item.hasNeighbors) {
for (Item neighbor : item.neighbors) {
stack.push(neighbor);
}
}
}
}
I would like to have an additional statement in the while loop which answers the question "is any task in the executor still working?"
There isn't a clean way to check if all Runnables are done if you use ExecutorService.execute(Runnable). Unless you build a mechanism to do so in the Runnable itself (which is sloppy in my opinion).
Instead:
Use ExecutorService.submit(Runnable). This method will return a Future<?> which is a handle to the result of a Runnable. Using Futures provides a clean way to check results.
All you have to do is maintain a list of Futures that you submit, and then you can iterate over the whole list of Futures and either:
A) wait for all the futures to be done in a blocking way or
B) check if all the futures are done in a non-blocking way.
Here is a code example:
List<Future<?>> futures = new ArrayList<Future<?>>();
ExecutorService exec = Executors.newFixedThreadPool(5);
// Instead of using exec.execute() use exec.submit()
// because it returns a monitorable future
while((item = stack.pollFirst()) != null){
Runnable worker = new Solider(this, item);
Future<?> f = exec.submit(worker);
futures.add(f);
}
// A) Await all runnables to be done (blocking)
for(Future<?> future : futures)
future.get(); // get will block until the future is done
// B) Check if all runnables are done (non-blocking)
boolean allDone = true;
for(Future<?> future : futures){
allDone &= future.isDone(); // check if future is done
}
Update: with Java 8+ you can manage this with CompletableFuture and its callback methods. First you will need to create all of the CompletableFutures you need, which will also start them running, e.g.:
We need to accumulate all the generated futures in an array in order to pass them later to CompletableFuture.allOf(CompletableFuture...).
So let's say you have a list of people for whom you want to calculate the days until their birthday, asynchronously:
First we create all those needed futures and collect them together in an array:
CompletableFuture<?>[] completables = people.stream()
.map(p -> createCompletableFuture(p))
.toArray(CompletableFuture<?>[]::new);
private CompletableFuture<Void> createCompletableFuture(Person p) {
    // wrap the computation in a Runnable, since runAsync() expects one
    return CompletableFuture.runAsync(() -> daysUntilBirthday(p));
}
Then you pass those completables to a new CompletableFuture:
CompletableFuture<Void> c = CompletableFuture.allOf(completables);
And you can now check if there are still futures running with:
c.isDone()
This may not be the cleanest solution, but you can use ThreadPoolExecutor.getActiveCount() to check how many threads are actively executing tasks.
Implementing this within a while loop with a simple condition to check if the active thread count is zero is a palatable solution.
Here is a code example:
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(5);
for (int x = 0; x < 4; x++) {
Runnable worker = new Solider(this,item);
executor.execute(worker);
}
// Now check for active threads.
while(executor.getActiveCount()!=0)
{
try {
Thread.sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
executor.shutdown();
The while loop directly answers your question:
if the loop is still active, tasks are still being executed.
I'm using a global executor service with a fixed thread pool size. We have a bunch of related tasks that we submit for execution and then wait on the list of futures.
Recently, we faced a high CPU utilization issue, and on debugging I found that an exception occurred while calling get() on one of the items in the list of futures. Currently, we iterate over the list and there is a try/catch surrounding the whole loop.
try{
List<Result> results = new ArrayList<>();
for(Future<Result> futureResult : futureResults) {
Result result = futureResult.get();
results.add(result);
}
} catch(Exception e){
throw new InternalServiceException(e);
}
//Do something with results
I wanted to know the behaviour of the other threads if get() is never called on some of the futures. I tried searching but was not able to find anything.
Also, can this behaviour trigger high CPU utilization ?
http://www.journaldev.com/1650/java-futuretask-example-program
I would still check whether the future isDone, as in the example above.
If you need to run other operations or want to utilize the CPU better, then I would put the collector in a separate thread and perhaps just poll for results every minute or so.
That could be scheduled, or handled with Thread.sleep.
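A rough sketch of that idea; the Result type and the futureResults list are taken from the question's context, the one-minute interval is arbitrary, and the usual java.util and java.util.concurrent imports are assumed:

// Collector thread that polls the futures instead of blocking on get().
Thread collector = new Thread(() -> {
    List<Future<Result>> pending = new ArrayList<>(futureResults);
    List<Result> results = new ArrayList<>();
    while (!pending.isEmpty()) {
        for (Iterator<Future<Result>> it = pending.iterator(); it.hasNext(); ) {
            Future<Result> f = it.next();
            if (f.isDone()) {
                try {
                    results.add(f.get()); // already done, so get() returns immediately
                } catch (InterruptedException | ExecutionException e) {
                    // log the failed task instead of failing the whole batch
                }
                it.remove();
            }
        }
        try {
            Thread.sleep(60_000); // poll roughly once a minute
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
    // hand 'results' off to whoever needs them
});
collector.start();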
The Executors class provides various methods to execute Callable tasks in a thread pool. Since callable tasks run in parallel, we have to wait for the returned object.
Callable tasks return a java.util.concurrent.Future object. Using Future we can find out the status of the Callable task and get the returned object.
Future provides a get() method that waits for the Callable to finish and then returns the result.
There is an overloaded version of get() where we can specify how long to wait for the result; it's useful to avoid the current thread being blocked for too long.
Future also provides a cancel() method to cancel the associated Callable task, and there are isDone() and isCancelled() methods to find out the current status of the task.
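Before the full example below, here is a minimal illustration of cancel(), isCancelled() and isDone(); the long-sleeping task is made up for the example:

ExecutorService es = Executors.newSingleThreadExecutor();
Future<String> f = es.submit(() -> {
    Thread.sleep(10_000);
    return "done";
});
boolean cancelled = f.cancel(true);  // true = interrupt the task if it is already running
System.out.println(cancelled);       // true if the task could be cancelled
System.out.println(f.isCancelled()); // true after a successful cancel
System.out.println(f.isDone());      // also true: a cancelled task counts as done
es.shutdown();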
Here is a simple example of a Callable task that returns the name of the thread executing the task after some random time.
We are using the Executor framework to execute 10 tasks in parallel and Future to get the results of the submitted tasks.
public class FutureObjectTest implements Callable<String>{
@Override
public String call() throws Exception {
long waitTime = (long) (Math.random()*10000);
System.out.println(Thread.currentThread().getName() + " waiting time in MILLISECONDS " + waitTime);
Thread.sleep(waitTime);
return Thread.currentThread().getName() + " exiting call method.";
}
public static void main(String [] args){
List<Future<String>> futureObjectList = new ArrayList<Future<String>>();
ExecutorService executorService = Executors.newFixedThreadPool(5);
Callable<String> futureObjectTest = new FutureObjectTest();
for(int i=0; i<10; i++){
Future<String> futureResult = executorService.submit(futureObjectTest);
futureObjectList.add(futureResult);
}
for(Future<String> futureObj : futureObjectList){
try {
System.out.println(futureObj.get());
} catch (InterruptedException | ExecutionException e) {
e.printStackTrace();
}
}
System.out.println("Starting get method of wait");
////////////get(Timeout) method///////
futureObjectList.clear();
for(int i=0; i<10; i++){
Future<String> futureResult = executorService.submit(futureObjectTest);
futureObjectList.add(futureResult);
}
executorService.shutdown();
for(Future<String> futureObj : futureObjectList){
try {
System.out.println(futureObj.get(2000,TimeUnit.MILLISECONDS));
} catch (InterruptedException | ExecutionException | TimeoutException e) {
e.printStackTrace();
}
}
}
}
I am looking for a way to execute batches of tasks in Java. The idea is to have an ExecutorService based on a thread pool that will allow me to spread a set of Callable tasks among different threads from a main thread. This class should provide a waitForCompletion method that puts the main thread to sleep until all tasks are executed. Then the main thread should be woken up, perform some operations, and resubmit a set of tasks.
This process will be repeated numerous times, so I would like to avoid ExecutorService.shutdown, as that would require creating multiple instances of ExecutorService.
Currently I have implemented it in the following way using a AtomicInteger, and a Lock/Condition:
public class BatchThreadPoolExecutor extends ThreadPoolExecutor {
private final AtomicInteger mActiveCount;
private final Lock mLock;
private final Condition mCondition;
public <C extends Callable<V>, V> Map<C, Future<V>> submitBatch(Collection<C> batch){
...
for(C task : batch){
submit(task);
mActiveCount.incrementAndGet();
}
}
@Override
protected void afterExecute(Runnable r, Throwable t) {
super.afterExecute(r, t);
mLock.lock();
if (mActiveCount.decrementAndGet() == 0) {
mCondition.signalAll();
}
mLock.unlock();
}
public void awaitBatchCompletion() throws InterruptedException {
...
// Lock and wait until there is no active task
mLock.lock();
while (mActiveCount.get() > 0) {
try {
mCondition.await();
} catch (InterruptedException e) {
mLock.unlock();
throw e;
}
}
mLock.unlock();
}
}
Please note that I will not necessarily submit all the tasks of a batch at once, therefore CountDownLatch does not seem to be an option.
Is this a valid way to do it? Is there a more efficient/elegant way to implement that?
Thanks
I think the ExecutorService itself will be able to meet your requirements.
Call invokeAll(...) with your tasks and iterate over the returned Futures; once you can iterate through all of them, all tasks are finished.
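A sketch of that approach; the task bodies and pool size are made up, and the usual java.util and java.util.concurrent imports are assumed:

static int runBatch(ExecutorService service) throws Exception {
    List<Callable<Integer>> batch = Arrays.asList(
            () -> 1,
            () -> 2,
            () -> 3);
    // invokeAll() blocks until every task in the batch has finished.
    List<Future<Integer>> done = service.invokeAll(batch);
    int sum = 0;
    for (Future<Integer> f : done) {
        sum += f.get(); // the futures are already completed, so get() does not block
    }
    return sum; // use the result to decide what the next batch looks like
}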
As the other answers point out, there doesn't seem to be any part of your use case that requires a custom ExecutorService.
It seems to me that all you need to do is submit a batch, wait for them all to finish while ignoring interrupts on the main thread, then submit another batch perhaps based on the results of the first batch. I believe this is just a matter of:
ExecutorService service = ...;
Collection<Future> futures = new HashSet<Future>();
for (Callable callable : tasks) {
Future future = service.submit(callable);
futures.add(future);
}
for(Future future : futures) {
try {
future.get();
} catch (InterruptedException e) {
// Figure out if the interruption means we should stop.
} catch (ExecutionException e) {
// The task itself failed; decide whether that should abort the batch.
}
}
// Use the results of futures to figure out a new batch of tasks.
// Repeat the process with the same ExecutorService.
I agree with @ckuetbach that the default Java Executors should provide you with all of the functionality you need to execute a "batch" of jobs.
If I were you I would just submit a bunch of jobs, wait for them to finish with ExecutorService.shutdown() followed by awaitTermination(), and then just start up a new ExecutorService. Doing this to save on "thread creations" is premature optimization unless you are doing it hundreds of times a second or something.
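A sketch of that approach; moreBatchesToRun, buildNextBatch(), NUM_THREADS, Result and the one-hour timeout are all placeholders, and the loop is assumed to live in a method that declares throws InterruptedException:

while (moreBatchesToRun) {
    ExecutorService service = Executors.newFixedThreadPool(NUM_THREADS);
    for (Callable<Result> task : buildNextBatch()) {
        service.submit(task);
    }
    service.shutdown();                          // stop accepting tasks; queued ones still run
    service.awaitTermination(1, TimeUnit.HOURS); // block until the whole batch has finished
    // examine the results and prepare the next batch here
}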
If you really are stuck on using the same ExecutorService for each of the batches then you can allocate a ThreadPoolExecutor yourself, and be in a loop looking at ThreadPoolExecutor.getActiveCount(). Something like:
BlockingQueue<Runnable> jobQueue = new LinkedBlockingQueue<>();
ThreadPoolExecutor executor = new ThreadPoolExecutor(NUM_THREADS, NUM_THREADS,
0L, TimeUnit.MILLISECONDS, jobQueue);
// submit your batch of jobs ...
// need to wait a bit for the jobs to start
Thread.sleep(100);
while (executor.getActiveCount() > 0 || jobQueue.size() > 0) { // wait while anything is running or still queued
// to slow the spin
Thread.sleep(1000);
}
// continue on to submit the next batch