java: maximum execution time within for-loop

I've got the following code:
List<String> instances2 = Arrays.asList("instances/umps20.txt","instances/umps22.txt","instances/umps24.txt","instances/umps26.txt","instances/umps28.txt","instances/umps30.txt","instances/umps32.txt");
List<Integer> qq1 = Arrays.asList(9,10,11,12,13,14,14);
List<Integer> qq2 = Arrays.asList(4,4,5,5,5,5,6);
for (int i = 0; i < 7; i++) {
    Tournament t = p.process(instances2.get(i));
    int nTeams = t.getNTeams();
    int q1 = qq1.get(i);
    int q2 = qq2.get(i);
    UndirectedGraph graph = g.create(t, q1, q2);
    new Choco(graph, nTeams);
}
}
Now I want to put a limit on each iteration. So after, let's say, 3 h = 10 800 000 ms, I would like everything in the for-loop to stop and the next iteration of the loop to start. Any ideas?
Thanks in advance!
Nicholas

You can get the system time before you start the loop and compare it in each iteration to check whether the elapsed time has exceeded the specified limit, like this:
Before the loop starts:
long start = System.currentTimeMillis();
In each iteration:
if (System.currentTimeMillis() - start >= 10_800_000) {
    start = System.currentTimeMillis();
    i++;
}
and you have to remove the i++ from the for statement:
for (int i = 0; i < 7;) {

You will have to create a new thread which will run your loop; the ExecutorService will run this loop (or whatever code you put into the call() method) for at most the specified amount of time.
Here is a demo of a task which takes 5 seconds to run, it will be interrupted after 3 seconds:
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class QuickTest {

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<String> future = executor.submit(new Task());
        try {
            System.out.println("Started.."); // your task is running
            System.out.println(future.get(3, TimeUnit.SECONDS)); // enter the amount of time you want to allow your code to run
            System.out.println("Finished!"); // the task finished within the given time
        } catch (TimeoutException e) {
            future.cancel(true);
            System.out.println("Terminated!"); // the task took too long and was interrupted
        }
        executor.shutdownNow();
    }
}

class Task implements Callable<String> {

    @Override
    public String call() throws Exception { // enter the code you want to run for x time in here
        Thread.sleep(5000); // Just to demo some code which takes 5 seconds to finish.
        return "Ready!"; // code finished and was not interrupted (you gave it enough time).
    }
}
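Applied to the loop from the question, each iteration could be wrapped in a task and given three hours before it is cancelled. A rough sketch, assuming p, g, instances2, qq1 and qq2 are available as in the question, that the surrounding method declares throws Exception, and that the Choco code actually reacts to interruption:
ExecutorService executor = Executors.newSingleThreadExecutor();
for (int i = 0; i < 7; i++) {
    final int idx = i;
    Future<?> future = executor.submit(new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            Tournament t = p.process(instances2.get(idx));
            UndirectedGraph graph = g.create(t, qq1.get(idx), qq2.get(idx));
            new Choco(graph, t.getNTeams());
            return null;
        }
    });
    try {
        future.get(3, TimeUnit.HOURS);  // wait at most 3 hours for this iteration
    } catch (TimeoutException e) {
        future.cancel(true);            // interrupt it and move on to the next instance
    }
}
executor.shutdownNow();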

There are many ways of implementing the requested functionality.
One approach could be to convert the body of the for loop into a FutureTask object and submit it to an ExecutorService - even one with just 1 thread, if the iterations have to be executed in sequence - e.g.
ExecutorService executor = Executors.newFixedThreadPool(1);
The benefit of having a FutureTask (or any other object implementing the Future interface), is that the cancel() method can be used to make sure that the interrupted iteration will not create any side effects.
For the interrupts, there are numerous alternatives. For example, the javax.swing.Timer class can be used, which fires ActionEvent notifications after the expiry of the timer.
In the above approach, the task (for loop code) will be executed until completion, or until an ActionEvent is received from the timer. In the latter case, a call to cancel() can be used to stop the running task and the next task will start. The counter of the total number of iterations can be maintained at the same place.
For more sophisticated solutions, one can play with the various implementations of ExecutorService and timeout specification options, as in another StackOverflow question.
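A minimal sketch of that combination (assuming the body of one iteration is placed inside the Callable and the surrounding method declares throws Exception):
ExecutorService executor = Executors.newFixedThreadPool(1);
FutureTask<Void> task = new FutureTask<>(new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        // body of one loop iteration goes here
        return null;
    }
});
javax.swing.Timer timer = new javax.swing.Timer(10_800_000, e -> task.cancel(true));
timer.setRepeats(false);
timer.start();
executor.submit(task);
try {
    task.get();                      // returns normally if the iteration finishes in time
} catch (CancellationException e) {
    // the timer fired first; clean up and move on to the next iteration
}
timer.stop();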

Related

How to limit number of threads within a time period

A service I am using starts blocking requests after 5 are made within 1 second.
Using Java in Spring I am looking for a way to queue threads in such a way that up to 5 threads can access the critical section within a second and any other threads are queued up and released once there is bandwidth for them to continue.
Currently I've attempted this with a lock, but it causes the thread to always wait 1/5th of a second, even if we wouldn't be at the max calls per second without sleeping.
Lock l = new ReentrantLock();
try {
    l.lock();
    //critical section
} finally {
    try {
        Thread.sleep(200);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    l.unlock();
}
With this implementation I never exceed the 5 per second, but I also delay the response by 200 ms after everything is ready to be returned to the user.
I need a solution that only delays threads when a delay is needed. In this case the 6th+ call in a second should be delayed but the first 5 do not need to be delayed. Likewise calls 6-11 could all go through at the same time.
This sort of rate-limiting is quite a common problem in microservice architectures, as it is part of the broader issue of addressing cascading failures. There are many libraries around to deal with this issue, and one of the most widely-used modern ones is called Resilience4j, which provides a RateLimiter implementation. You probably want something pretty close to this:
Create the limiter:
RateLimiterConfig config = RateLimiterConfig.custom()
        .limitRefreshPeriod(Duration.ofSeconds(1))
        .limitForPeriod(5)
        .timeoutDuration(Duration.ofSeconds(4)) // or however long you want to wait before failing
        .build();

// Create registry
RateLimiterRegistry rateLimiterRegistry = RateLimiterRegistry.of(config);

// Use registry
RateLimiter rateLimiter = rateLimiterRegistry
        .rateLimiter("someServiceLimiter", config);
Use it:
// Decorate your call to BackendService.doSomething()
CheckedRunnable restrictedCall = RateLimiter
        .decorateCheckedRunnable(rateLimiter, backendService::doSomething);

// Or, you can use an annotation:
@RateLimiter(name = "someServiceLimiter")
public void doSomething() {
    // backend call
}
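To actually run the decorated call, invoke the CheckedRunnable and handle the failure case; assuming a reasonably recent Resilience4j version, RequestNotPermitted (a RuntimeException) is thrown if no permit becomes available within timeoutDuration:
try {
    restrictedCall.run();   // blocks up to timeoutDuration when the limit is exhausted
} catch (Throwable t) {
    // e.g. RequestNotPermitted if the rate limit could not be acquired in time
}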
I think solving it using the Semaphore API would be the best approach.
import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.*;

public class BulkheadSemaphore {

    private Queue<Long> enterQueue = new LinkedList<>();
    private ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
    private Semaphore semaphore;

    public BulkheadSemaphore(final Long timeLimit, final int concurrentThreadsLimit) {
        this.semaphore = new Semaphore(concurrentThreadsLimit);
        executor.scheduleAtFixedRate(() -> {
            final Long now = now();
            while (!enterQueue.isEmpty() && now - enterQueue.peek() >= timeLimit) {
                enterQueue.poll();
                semaphore.release();
            }
        }, timeLimit, 200, TimeUnit.MILLISECONDS);
    }

    private Long now() {
        return System.currentTimeMillis();
    }

    public void acquire() {
        try {
            semaphore.acquire();
        } catch (InterruptedException e) {
            // todo: handle exception
        }
    }

    public void release() {
        semaphore.release();
    }
}
The API is quite simple:
Each thread entering the critical section calls bulkheadSemaphore.acquire().
After the external call execution finishes, it calls bulkheadSemaphore.release().
Why does it solve the problem?
This semaphore releases permits for threads which entered the critical section a long time ago.
It releases its permits at a certain rate (I set it to 200 ms, it can be smaller though). It also guarantees that if a work unit has been done quickly, the next thread will be able to start a new work unit.
Some threads would still face redundant waiting, but it doesn't happen every time and they'd spend 200 ms at most.
As requests take time, I'd set timeLimit to 1.5 seconds to match your 1-second limitation.
P.S. Don't forget to shut down the executor service.
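A minimal usage sketch of the class above (the call site and backendService are hypothetical, not from the question):
BulkheadSemaphore bulkheadSemaphore = new BulkheadSemaphore(1500L, 5);

void callBackend() {
    bulkheadSemaphore.acquire();        // take a permit before entering the critical section
    try {
        backendService.doSomething();   // the rate-limited external call
    } finally {
        bulkheadSemaphore.release();    // give the permit back once the call has finished
    }
}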

Awaiting pool to finish threads

StopWatch sw = new StopWatch();
sw.start();
ExecutorService executor = Executors.newFixedThreadPool(MYTHREADS);
for (int i = 0; i < MYTHREADS; i++) {
    Runnable worker = new SingleConnectionRunnable();
    executor.execute(worker);
}
sw.stop();
System.out.println("total time" + sw.toString());

sw.reset();
sw.start();
for (int i = 0; i < MYTHREADS; i++) {
    Runnable worker2 = new PooledConnectionRunnable();
    executor.execute(worker2);
}
executor.shutdown();
executor.awaitTermination(Integer.MAX_VALUE, TimeUnit.SECONDS);
while (!executor.isTerminated()) {
}
sw.stop();
System.out.println("total time" + sw.toString());
I am trying to run some perf tests on the code above. I am trying to use the same executor on different Runnables and measure the time, but it doesn't quite work: the first "total time" (in milliseconds) is not correct.
I want to print the elapsed time of the first loop and then of the second loop. I am not sure how I can wait for the executor to finish the first batch and then restart it.
What is the correct way to get this done?
First, awaitTermination will block until all tasks terminate. Is there any particular reason that you use a while loop check after waiting potentially 70 years?
Anyways, to answer your question, in order to wait for the first run to finish, you should use a CountDownLatch to signal completion of each thread and wait for them in the main thread until they finish. You can also use a CyclicBarrier to wait until all your threads are ready to go before starting timing, like so:
...
CountDownLatch latch = new CountDownLatch(MYTHREADS);
CyclicBarrier cb = new CyclicBarrier(MYTHREADS, new Runnable() {
    @Override public void run() {
        sw.start();
    }
});
for (...) {
    Runnable worker = ...
    executor.execute(new Runnable() {
        @Override public void run() {
            try {
                cb.await();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            worker.run();
            latch.countDown();
        }
    });
}
latch.await();
sw.stop();
...
I moved the sw.start() into the barrier action so the timing starts only once all threads are ready, which avoids measuring the setup and object allocation overhead (probably not noticeable anyway since it's in ms).
You can also reset the CyclicBarrier to run this an indefinite number of times (a CountDownLatch cannot be reset, so you would create a new one per run).
What you are doing now is:
Start the stopwatch
Start a few threads
Read the stopwatch
You are not waiting for them to finish like you do with the second loop.
This is what you can do to fix this.
Make a callback method in the SingleConnectionRunnable.
This method will be called at the last point of this runnable (when it terminates) and caught by the class that starts the loop (which is not a method shown in the question, but that is fine).
In this callback method you keep track of how many times it has been called.
When it has been called MYTHREADS times, you print the stopwatch time.
Now you know how long it took until all the started threads finished.
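A rough sketch of that callback idea, using an AtomicInteger as the counter (the CompletionListener interface and the constructor parameter are hypothetical additions, not from the question):
// Hypothetical listener that each runnable invokes as its last statement.
interface CompletionListener {
    void taskFinished();
}

AtomicInteger finished = new AtomicInteger();
CompletionListener listener = () -> {
    if (finished.incrementAndGet() == MYTHREADS) { // the last worker reports in
        sw.stop();
        System.out.println("total time" + sw.toString());
    }
};

for (int i = 0; i < MYTHREADS; i++) {
    // assumes SingleConnectionRunnable is changed to accept the listener and call it when done
    Runnable worker = new SingleConnectionRunnable(listener);
    executor.execute(worker);
}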

Java concurrency counter not properly clean up

This is a Java concurrency question. 10 jobs need to be done, each of them with 32 worker threads. Each worker thread increases a counter. Once the counter reaches 32, the job is done and the counter map is cleaned up. From the console output, I expect 10 "done" messages, a pool size of 0 and a countThreadMap size of 0.
The issues are:
Most of the time, "pool size: 0 and countThreadMap size:3" is printed out; even though all the threads are gone, 3 jobs are not finished yet.
Sometimes I see a NullPointerException at line 27. I have used ConcurrentHashMap and AtomicLong, so why do I still get a concurrency exception?
Thanks
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicLong;
public class Test {
final ConcurrentHashMap<Long, AtomicLong[]> countThreadMap = new ConcurrentHashMap<Long, AtomicLong[]>();
final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
final ThreadPoolExecutor tPoolExecutor = ((ThreadPoolExecutor) cachedThreadPool);
public void doJob(final Long batchIterationTime) {
for (int i = 0; i < 32; i++) {
Thread workerThread = new Thread(new Runnable() {
@Override
public void run() {
if (countThreadMap.get(batchIterationTime) == null) {
AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
atomicThreadCountArr[0] = new AtomicLong(1);
atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis()); //start up time
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
} else {
AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
atomicThreadCountArr[0].getAndAdd(1);
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
}
if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
System.out.println("done");
countThreadMap.remove(batchIterationTime);
}
}
});
tPoolExecutor.execute(workerThread);
}
}
public void report(){
while(tPoolExecutor.getActiveCount() != 0){
//
}
System.out.println("pool size: "+ tPoolExecutor.getActiveCount() + " and countThreadMap size:"+countThreadMap.size());
}
public static void main(String[] args) throws Exception {
Test test = new Test();
for (int i = 0; i < 10; i++) {
Long batchIterationTime = System.currentTimeMillis();
test.doJob(batchIterationTime);
}
test.report();
System.out.println("All Jobs are done");
}
}
Let’s dig through all the mistakes of thread-related programming one can make:
Thread workerThread = new Thread(new Runnable() {
…
tPoolExecutor.execute(workerThread);
You create a Thread but don’t start it; instead you submit it to an executor. It’s a historical mistake of the Java API to let Thread implement Runnable for no good reason. Now, every developer should be aware that there is no reason to treat a Thread as a Runnable. If you don’t want to start a thread manually, don’t create a Thread. Just create the Runnable and pass it to execute or submit.
I want to emphasize the latter as it returns a Future which gives you for free what you are attempting to implement: the information when a task has been finished. It’s even easier when using invokeAll which will submit a bunch of Callables and return when all are done. Since you didn’t tell us anything about your actual task, it’s not clear whether you can let your tasks simply implement Callable (may return null) instead of Runnable.
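For illustration, a small sketch of the invokeAll variant (assuming the work of each job can be expressed as a Callable<Void> and the surrounding method declares the InterruptedException):
// Run all 32 tasks of one batch and block until every one of them has finished.
List<Callable<Void>> jobs = new ArrayList<>();
for (int i = 0; i < 32; i++) {
    jobs.add(() -> {
        // the actual work goes here
        return null;
    });
}
cachedThreadPool.invokeAll(jobs); // returns only when all 32 tasks are done
System.out.println("done");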
If you can’t use Callables or don’t want to wait immediately on submission, you have to remember the returned Futures and query them at a later time:
static final ExecutorService cachedThreadPool = Executors.newCachedThreadPool();

public static List<Future<?>> doJob(final Long batchIterationTime) {
    final Random r = new Random();
    List<Future<?>> list = new ArrayList<>(32);
    for (int i = 0; i < 32; i++) {
        Runnable job = new Runnable() {
            public void run() {
                // pretend to do something
                LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(r.nextInt(10)));
            }
        };
        list.add(cachedThreadPool.submit(job));
    }
    return list;
}

public static void main(String[] args) throws Exception {
    Test test = new Test();
    Map<Long, List<Future<?>>> map = new HashMap<>();
    for (int i = 0; i < 10; i++) {
        Long batchIterationTime = System.currentTimeMillis();
        while (map.containsKey(batchIterationTime))
            batchIterationTime++;
        map.put(batchIterationTime, doJob(batchIterationTime));
    }
    // print some statistics, if you really need
    int overAllDone = 0, overallPending = 0;
    for (Map.Entry<Long, List<Future<?>>> e : map.entrySet()) {
        int done = 0, pending = 0;
        for (Future<?> f : e.getValue()) {
            if (f.isDone()) done++;
            else pending++;
        }
        System.out.println(e.getKey() + "\t" + done + " done, " + pending + " pending");
        overAllDone += done;
        overallPending += pending;
    }
    System.out.println("Total\t" + overAllDone + " done, " + overallPending + " pending");
    // wait for the completion of all jobs
    for (List<Future<?>> l : map.values())
        for (Future<?> f : l)
            f.get();
    System.out.println("All Jobs are done");
}
But note that if you don’t need the ExecutorService for subsequent tasks, it’s much easier to wait for all jobs to complete:
cachedThreadPool.shutdown();
cachedThreadPool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
System.out.println("All Jobs are done");
But regardless of how unnecessary the manual tracking of the job status is, let’s delve into your attempt, so you may avoid the mistakes in the future:
if (countThreadMap.get(batchIterationTime) == null) {
The ConcurrentMap is thread safe, but this does not turn your concurrent code into sequential code (that would render multi-threading useless). The above line might be processed by up to all 32 threads at the same time, all finding that the key does not exist yet, so possibly more than one thread will then put the initial value into the map.
AtomicLong[] atomicThreadCountArr = new AtomicLong[2];
atomicThreadCountArr[0] = new AtomicLong(1);
atomicThreadCountArr[1] = new AtomicLong(System.currentTimeMillis());
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
That’s why this is called the “check-then-act” anti-pattern. If more than one thread is going to process that code, they all will put their new value, being confident that this was the right thing as they checked the initial condition before acting; but for all but one thread the condition has changed by the time they act, and they overwrite the value of a previous put operation.
} else {
AtomicLong[] atomicThreadCountArr = countThreadMap.get(batchIterationTime);
atomicThreadCountArr[0].getAndAdd(1);
countThreadMap.put(batchIterationTime, atomicThreadCountArr);
Since you are modifying the AtomicLong which is already stored in the map, the put operation is useless; it will put the very array that it retrieved before. If it weren’t for the mistake that there can be multiple initial values, as described above, the put operation would have no effect.
}
if (countThreadMap.get(batchIterationTime)[0].get() == 32) {
Again, the use of a ConcurrentMap doesn’t turn the multi-threaded code into sequential code. While it is clear that only the last thread will update the counter to 32 (when the initial race condition doesn’t materialize), it is not guaranteed that all other threads have already passed this if statement. Therefore more than one, up to all, threads can still be at this point of execution and see the value of 32. Or…
System.out.println("done");
countThreadMap.remove(batchIterationTime);
One of the threads which have seen the 32 value might execute this remove operation. At this point, there might still be threads that have not executed the above if statement, now not seeing the value 32 but producing a NullPointerException, as the array supposed to contain the AtomicLong is not in the map anymore. This is what happens, occasionally…
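For reference, a sketch of how the per-batch counting could be done without the check-then-act race, by letting the ConcurrentHashMap (Java 8+) perform the initialization atomically; this is not from the original question:
AtomicLong[] arr = countThreadMap.computeIfAbsent(batchIterationTime,
        key -> new AtomicLong[] { new AtomicLong(0), new AtomicLong(System.currentTimeMillis()) });
if (arr[0].incrementAndGet() == 32) { // exactly one thread observes the transition to 32
    System.out.println("done");
    countThreadMap.remove(batchIterationTime);
}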
After creating your 10 jobs, your main thread is still running - it doesn't wait for your jobs to complete before it calls report on the test. You try to overcome this with the while loop, but tPoolExecutor.getActiveCount() is potentially coming out as 0 before the workerThread is executed, and then the countThreadMap.size() is happening after the threads were added to your HashMap.
There are a number of ways to fix this - but I will let another answer-er do that because I have to leave at the moment.

What is the simplest and most efficient way to return a value from Runnables in a Thread Pool Executor?

I have created a DataBaseManager class in my Android app that manages all database operations for my app.
I have different methods to create, update, and retrieve values from the database.
I do it in a Runnable and submit it to the Thread Pool Executor.
In case I have to return some value from this Runnable, how can I achieve it? I know about callbacks, but that will be a little cumbersome for me as the number of methods is large.
Any help will be appreciated!
You need to use Callable: Interface Callable<V>.
Like Runnable, its instances are potentially executed by another thread.
But it is smarter than Runnable: it is capable of returning a result and throwing a checked Exception.
Using it is as simple as using Runnable:
private final class MyTask implements Callable<T> {
    public T call() {
        T t;
        // your code
        return t;
    }
}
I am using T to represent a reference type e.g. String.
Getting the result upon completion:
Using Future<V>: A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion. The result is retrieved using method get() when the computation has completed, blocking if necessary until it is ready.
List<Future<T>> futures = new ArrayList<>(10);
for (int i = 0; i < 10; i++) {
    futures.add(pool.submit(new MyTask()));
}
T result;
for (Future<T> f : futures)
    result = f.get(); // get the result
The disadvantage of the above approach is that if the first task takes a long time to compute and all the other tasks finish before it, the current thread cannot process any result before the first task ends. Hence another solution would be to use a CompletionService.
Using CompletionService<V>: A service that decouples the production of new asynchronous tasks from the consumption of the results of completed tasks. Producers submit tasks for execution. Consumers take completed tasks and process their results in the order they complete. Using it is as simple as follows:
CompletionService<T> pool = new ExecutorCompletionService<T>(threadPool);
And then use pool.take().get() to read the returned result from the callable instances:
for (int i = 0; i < 10; i++) {
    pool.submit(new MyTask());
}
for (int i = 0; i < 10; i++) {
    T result = pool.take().get();
    // your other code
}
Below is sample code for using Callable:
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

public class Test {

    public static void main(String[] args) throws Exception {
        ExecutorService executorService1 = Executors.newFixedThreadPool(4);
        Future<String> f1 = executorService1.submit(new callable());
        Future<String> f2 = executorService1.submit(new callable());
        System.out.println("f1 " + f1.get());
        System.out.println("f2 " + f2.get());
        executorService1.shutdown();
    }
}

class callable implements Callable<String> {

    public String call() {
        System.out.println(" Starting callable Asynchronous task" + Thread.currentThread().getName());
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(" Ending callable Asynchronous task" + Thread.currentThread().getName());
        return Thread.currentThread().getName();
    }
}

Why is BlockingQueue.take() not releasing the thread?

In this simple short program, you will notice that the program hangs forever because the take() does not release the thread. According to my understanding, take() causes the thread to be released even though the task itself is blocked on take().
Edited:
This works (thanks to you all for fixing the autoboxing):
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
public class ProducersConsumers {
private static int THREAD_COUNT = 5;
public static void main(String[] args) throws ExecutionException, InterruptedException {
final ExecutorService executorPool = Executors.newFixedThreadPool(THREAD_COUNT);
final LinkedBlockingQueue<Long> queue = new LinkedBlockingQueue<Long>();
Collection<Future<Long>> collection = new ArrayList<Future<Long>>();
// producer:
for (int i = 0; i < 20; i++) {
collection.add(executorPool.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
for (int i = 100; i >= 0; i--) {
queue.put((long) i);
}
return -1L;
}
}));
}
// consumer:
for (int i = 0; i < 20; i++) {
collection.add(executorPool.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
while (true) {
Long item = queue.take();
if (item.intValue() == 0) {
break;
}
}
return 1L;
}
}));
}
long sum = 0;
for (Future<Long> item : collection) {
sum += item.get();
}
executorPool.shutdown();
System.out.println("sum = " + sum);
}
}
But if you swap the producer and consumer invocations, it will hang:
import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
public class ProducersConsumers {
private static int THREAD_COUNT = 5;
public static void main(String[] args) throws ExecutionException, InterruptedException {
final ExecutorService executorPool = Executors.newFixedThreadPool(THREAD_COUNT);
final LinkedBlockingQueue<Long> queue = new LinkedBlockingQueue<Long>();
Collection<Future<Long>> collection = new ArrayList<Future<Long>>();
// consumer:
for (int i = 0; i < 20; i++) {
collection.add(executorPool.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
while (true) {
Long item = queue.take();
if (item.intValue() == 0) {
break;
}
}
return 1L;
}
}));
}
// producer:
for (int i = 0; i < 20; i++) {
collection.add(executorPool.submit(new Callable<Long>() {
@Override
public Long call() throws Exception {
for (int i = 100; i >= 0; i--) {
queue.put((long) i);
}
return -1L;
}
}));
}
long sum = 0;
for (Future<Long> item : collection) {
sum += item.get();
}
executorPool.shutdown();
System.out.println("sum = " + sum);
}
}
To my understanding, the producer and consumer order should not matter. In other words, there is a notion of task and thread. Threads are independent of the program code, whereas a task is associated with a certain piece of code. Therefore, in my example, when the JVM assigns a thread to execute one of the Callable tasks, if the consumers are instantiated first, then the task will block on take(). Once the JVM discovers that the task is blocked, it will release the thread (or so I understand it, but it is not releasing it) and place it back in the worker thread pool in preparation for processing a runnable task (which in this case are the producers). Consequently, at the end of instantiating all the Callables, there should be 40 tasks but only 5 threads; 20 of those tasks are blocked, 5 of the tasks should be running and 15 should be waiting (to run).
I think you misunderstand how threads and threadpools work. A threadpool typically has a work item queue which contains items to be worked on (in your case Callable<>s).
It also contains a (maximum) number of threads (in your case 5) which can work on those items.
The lifetime of an active thread is defined by the code it executes - usually a method. The thread becomes "alive" when it starts executing the method and it ends when it returns. If the method blocks to wait on some signal, it does not mean that the thread can go away and execute some other method - that's not how threads work. Instead the thread will be blocked until it can continue execution and enable other threads to be run.
The method which is run by a threadpool thread usually looks like this:
void threadloop()
{
    while (!quit)
    {
        Callable<T> item = null;
        synchronized (workQueue)
        {
            if (workQueue.size() == 0)
                workQueue.wait();
            // we could have been woken up for some other reason so check again
            if (workQueue.size() > 0)
                item = workQueue.poll();
        }
        if (item != null)
            item.call();
    }
}
This is more or less pseudo code (I'm not a Java developer) but it should show the concept. Now item.call() executes the method which is supplied by the user of the pool. If that method blocks, then what happens? Well - the thread will be blocked in its execution of item.call() until the method wakes up again. It can't just go away and execute some other code arbitrarily.
From javadoc:
Retrieves and removes the head of this queue, waiting if no elements are present on this queue.
It will wait: you're running in main, so it will stay there.
EDIT: correction: the blocking still happens (in the thread pool threads, not in main). There is no yielding going on: the 20 threads are blocked on the take calls, so no put calls execute, so the Futures never complete, so the program hangs.
I don't know what exactly you mean by "release thread", but once you block on take(), the calling thread is blocked and is not going back to the pool.
I think you've misunderstood what gets "blocked" in a BlockingQueue.
The call to queue.take() blocks the thread that invoked it until something is available in the queue. This means that the thread will wait there endlessly, unless interrupted, until an item is added to the queue.
The second code sample hangs because you are adding 20 tasks that wait for an item to appear in the BlockingQueue, while the executor has just 5 threads in it - thus the first five tasks cause all five of the threads to block. The executor's work queue then fills up with the 15 further consumer tasks.
The second for-loop then adds 20 producer tasks that can never be executed, because all threads in the executor are stuck waiting.
So when you say this:
According to my understanding, take() causes the thread to be released even though the task itself is blocked on take().
You have a misunderstanding because there is no difference here between what the "thread" does and what the "task" does. A thread cannot be "released" while the task is blocked - it is the thread that runs the task. When the thread encounters a blocking call to take(), the thread is blocked, period.
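For illustration, the second ordering would eventually complete if the pool were large enough that at least one producer can run while all 20 consumers are blocked in take(); a hypothetical tweak, not a general fix for the design:
private static int THREAD_COUNT = 21; // 20 blocked consumers + at least 1 runnable producer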
