I have a situation where I need to implement a thread-safe method. The method must be executed by only one thread at a time, and while one thread is executing it, any other threads that try to execute the same method should not wait; they must exit the method immediately.
Plain synchronization won't help here, since waiting threads would just end up executing the method sequentially.
I thought I could achieve this with a ConcurrentHashMap, as in the code below, but I'm not sure this is the right way to implement it.
class Test {
    private final ConcurrentHashMap<String, Object> map = new ConcurrentHashMap<>();

    public void execute() {
        if (map.putIfAbsent("key", new Object()) != null) { // map already has a value for the key, so another thread has entered
            return; // early exit
        }
        try {
            threadSafeMethod();
        } finally {
            map.remove("key"); // release the key even if threadSafeMethod() throws
        }
    }

    private void threadSafeMethod() {
        // my code
    }
}
You can do this without synchronization by using compare-and-set on an AtomicBoolean:

private final AtomicBoolean entered = new AtomicBoolean(false);

public void execute() {
    if (entered.compareAndSet(false, true)) {
        try {
            method();
        } finally {
            entered.set(false);
        }
    }
}
You could use a ReentrantLock and specify a non-positive waiting time for tryLock. A negative timeout is treated as zero, so the call returns immediately instead of waiting when another thread is already executing the code.

// define the lock somewhere as an instance variable
Lock lock = new ReentrantLock();

try {
    var isAvailable = lock.tryLock(-1, TimeUnit.NANOSECONDS);
    if (isAvailable) {
        try {
            System.out.println("do work");
        } finally {
            lock.unlock();
        }
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
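A minimal variant of the same idea, assuming the guarded work is wrapped in a hypothetical doWork() method: the no-argument tryLock() also returns immediately without blocking, and it does not throw InterruptedException, which simplifies the code slightly.

public void execute() {
    if (lock.tryLock()) {   // returns false right away if another thread holds the lock
        try {
            doWork();       // placeholder for the guarded work
        } finally {
            lock.unlock();
        }
    }
    // otherwise simply return without doing the work
}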
I have two methods in Java that I execute in parallel from a scheduled method with a fixed delay. The first task takes a few minutes to complete, while the second one can take some hours. What I want is to restart and re-execute the first task as soon as it ends, instead of waiting for the second one to finish and re-executing both of them.
Can anyone help me with this?
My code is below:
@Scheduled(fixedDelay = 30)
public void scheduled_function() throws IOException, InterruptedException {
    Callable<Void> callableSchedule = new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            getAndUpdateSchedule();
            return null;
        }
    };

    Callable<Void> callableMatches = new Callable<Void>() {
        @Override
        public Void call() throws Exception {
            processMatches();
            return null;
        }
    };

    // add to a list
    List<Callable<Void>> taskList = new ArrayList<Callable<Void>>();
    taskList.add(callableSchedule);
    taskList.add(callableMatches);

    // create a pool executor with threads
    ExecutorService executor = Executors.newFixedThreadPool(2);
    try {
        // start the threads
        executor.invokeAll(taskList);
    } catch (InterruptedException ie) {
        System.out.println("An InterruptedException occurred");
    }
}
You can just keep a boolean variable, let's call it isComplete, that records whether the long task has completed or not. It needs to be an instance variable, since we need it to stay around after scheduled_function() returns. Something like this:
private boolean isComplete = false;
Now, right now this variable is meaningless because we never update it. So, we need to make sure to update this variable when the long task completes:
Callable<Void> callableMatches = new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        processMatches();
        synchronized (MyClass.this) { // MyClass is just a placeholder name
            isComplete = true;
        }
        return null;
    }
};
Notice that where I update the isComplete variable, I put it in a synchronized block. This ensures that the value we are writing is actually going to be updated on the other thread, and it prevents the other thread from reading while we're writing the value. The result is that the other thread always gets the updated value.
This bit is tangential to the answer, but we can actually shorten this piece of code significantly by using lambda syntax. Callable is a functional interface, so this is perfectly legal:
Callable<Void> callableMatches = () -> {
    processMatches();
    synchronized (MyClass.this) { // MyClass is just a placeholder name
        isComplete = true;
    }
    return null;
};
Now all we have to do is check this variable every time we want to start the short task. Since we only have 2 threads, and one of the threads is being used for the long task, we know that this task will always be executed on the same thread. This means there's no point in going back to the executor, we can just put it in a while loop inside the callable. On every iteration of the while loop, we just need to check our isComplete variable, and we'll break out of the loop if the other task has completed.
Callable<Void> callableSchedule = () -> {
    while (true) {
        synchronized (MyClass.this) { // MyClass is just a placeholder name
            if (isComplete) {
                break;
            }
        }
        getAndUpdateSchedule();
    }
    return null;
};
Note that in this example, I've used the lambda syntax and I've put the if statement inside another synchronized block. As I explained above, we don't want to get a stale value here and keep looping after the other task is complete.
Consider the following (simplified) class, designed to allow my entire component to enter some interim state before completely stopping. (The purpose of the interim state is to allow the component to complete its existing tasks, but reject any new ones).
The component might be started and stopped multiple times from any number of threads.
class StopHandler {

    boolean isStarted = false;

    synchronized void start() { isStarted = true; }

    // synchronized, as I do want the client code to block until the component is stopped.
    // I might add some async method as well, but let's concentrate on the sync version only.
    synchronized void stop(boolean isUrgent) {
        if (isStarted) {
            if (!isUrgent) {
                setGlobalState(PREPARING_TO_STOP); // assume it is implemented
                try { Thread.sleep(10_000L); } catch (InterruptedException ignored) {}
            }
            isStarted = false;
        }
    }
}
The problem with the current implementation is that if some client code needs to urgently stop the component while it is in the interim state, it will still have to wait.
For example:
//one thread
stopHandler.stop(false); //not urgent => it is sleeping
//another thread, after 1 millisecond:
stopHandler.stop(true); //it's urgent, "please stop now", but it will wait for 10 seconds
How would you implement it?
I might need to interrupt the sleeping thread, but I don't have the sleeping thread object on which to call 'interrupt()'.
How about storing a reference to the current Thread (returned by Thread.currentThread()) in a field of StopHandler directly before you call sleep? That would allow you to interrupt it in the subsequent urgent call in case the Thread is still alive.
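A rough sketch of that suggestion (my own illustration; the field name and the interrupt-before-locking placement are assumptions, not code from the question):

class StopHandler {

    private boolean isStarted = false;
    private volatile Thread sleepingThread; // set just before sleeping, cleared afterwards

    synchronized void start() { isStarted = true; }

    void stop(boolean isUrgent) {
        if (isUrgent) {
            Thread t = sleepingThread;
            if (t != null) {
                t.interrupt(); // wake a non-urgent stop() out of its sleep
            }
        }
        synchronized (this) {
            if (isStarted) {
                if (!isUrgent) {
                    sleepingThread = Thread.currentThread();
                    try {
                        Thread.sleep(10_000L);
                    } catch (InterruptedException ignored) {
                        // interrupted by an urgent stop; fall through and stop immediately
                    } finally {
                        sleepingThread = null;
                    }
                }
                isStarted = false;
            }
        }
    }
}

The interrupt has to happen before entering the synchronized block, because a non-urgent stop() holds the monitor while it sleeps.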
Couldn't find a better solution than the one suggested by Lars.
Just need to encapsulate the sleep management for completeness.
class SleepHandler {

    private final ReentrantLock sleepingThreadLock;
    private volatile Thread sleepingThread;

    SleepHandler() {
        sleepingThreadLock = new ReentrantLock();
    }

    void sleep(long millis) throws InterruptedException {
        setSleepingThread(Thread.currentThread());
        try {
            Thread.sleep(millis);
        } finally {
            setSleepingThread(null); // clear the reference even if the sleep is interrupted
        }
    }

    void interruptIfSleeping() {
        doWithinSleepingThreadLock(() -> {
            if (sleepingThread != null) {
                sleepingThread.interrupt();
            }
        });
    }

    private void setSleepingThread(@Nullable Thread sleepingThread) {
        doWithinSleepingThreadLock(() -> this.sleepingThread = sleepingThread);
    }

    private void doWithinSleepingThreadLock(Runnable runnable) {
        sleepingThreadLock.lock();
        try {
            runnable.run();
        } finally {
            sleepingThreadLock.unlock();
        }
    }
}
With this helper class, handling of the original problem is trivial:
void stop(boolean isUrgent) throws InterruptedException {
    if (isUrgent) { sleepHandler.interruptIfSleeping(); } // harmless if not sleeping
    try {
        doStop(isUrgent); // all the stuff in the original 'stop(...)' method
    } catch (InterruptedException ignored) {
    } finally {
        Thread.interrupted(); // just in case, clear the 'interrupt' flag, as there is no need to propagate it further
    }
}
Class clazz has two methods methodA() and methodB().
How to ensure that methodB is "blocked" if some threads are in methodA in Java (I am using Java 8)?
By "blocking methodB", I mean that "wait until no threads are in methodA()". (Thanks to #AndyTurner)
Note that the requirement above allows the following situations:
Multiple threads are simultaneously in methodA.
Multiple threads are in methodB while no threads are in methodA.
Threads in methodB do not prevent other threads from entering methodA.
My attempt: I use StampedLock lock = new StampedLock(); (sketched below).
In methodA, call long stamp = lock.readLock().
Create a new method unlockB and call lock.unlockRead(stamp) in it.
In methodB, call long stamp = lock.writeLock() and lock.unlockWrite(stamp).
However, this locking strategy disallows the second and the third situations above.
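Roughly, that attempt looks like this (a sketch under my reading of the description; how the stamp reaches unlockB is left open in the question, so it is passed around explicitly here):

private final StampedLock lock = new StampedLock();

public long methodA() {
    long stamp = lock.readLock(); // many threads may hold read locks at the same time
    // ... work of methodA ...
    return stamp;                 // handed back later via unlockB
}

public void unlockB(long stamp) {
    lock.unlockRead(stamp);
}

public void methodB() {
    long stamp = lock.writeLock(); // blocks until no read locks are held
    try {
        // ... work of methodB ...
    } finally {
        lock.unlockWrite(stamp);
    }
}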
Edit: I realize that I have not clearly specified the requirements for the synchronization between methodA and methodB. The approach given by @JaroslawPawlak works for the current requirement (I accept it), but not for my original intention (maybe I should first clarify it and then post it in another thread).
I think this can do the trick:
private final Lock lock = new ReentrantLock();
private final Semaphore semaphore = new Semaphore(1);
private int threadsInA = 0;

public void methodA() {
    lock.lock();
    threadsInA++;
    semaphore.tryAcquire();
    lock.unlock();

    // your code

    lock.lock();
    threadsInA--;
    if (threadsInA == 0) {
        semaphore.release();
    }
    lock.unlock();
}

public void methodB() throws InterruptedException {
    semaphore.acquire();
    semaphore.release();
    // your code
}
Threads entering methodA increase the count and try to acquire a permit from semaphore (i.e. they take 1 permit if available, but if not available they just continue without a permit). When the last thread leaves methodA, the permit is returned. We cannot use AtomicInteger since changing the count and acquiring/releasing permit from semaphore must be atomic.
Threads entering methodB need to have a permit (and will wait for one if not available), but after they get it they return it immediately allowing others threads to enter methodB.
EDIT:
Another simpler version:
private final int MAX_THREADS = 1_000;
private final Semaphore semaphore = new Semaphore(MAX_THREADS);

public void methodA() throws InterruptedException {
    semaphore.acquire();
    // your code
    semaphore.release();
}

public void methodB() throws InterruptedException {
    semaphore.acquire(MAX_THREADS);
    semaphore.release(MAX_THREADS);
    // your code
}
Every thread in methodA holds a single permit which is released when the thread leaves methodA.
Threads entering methodB wait until all 1000 permits are available (i.e. no threads in methodA), but don't hold them, which allows other threads to enter both methods while methodB is still being executed.
You can't really prevent methodA or methodB from being called while other threads are inside the other method, but you can implement inter-thread communication in such a way that you still achieve what you want.
class MutualEx {

    boolean lock = false;

    public synchronized void methodA() {
        while (lock) { // loop rather than a single check, to guard against spurious wake-ups
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        // do some processing
        lock = true;
        notifyAll();
    }

    public synchronized void methodB() {
        while (!lock) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
        // do some processing
        lock = false;
        notifyAll();
    }
}
Now, for this to work any Thread object you create should have a reference to the same instance of MutualEx object.
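For illustration, a minimal usage sketch (the thread bodies are placeholders I made up):

MutualEx mutualEx = new MutualEx(); // one shared instance

Thread t1 = new Thread(() -> mutualEx.methodA());
Thread t2 = new Thread(() -> mutualEx.methodB());
t1.start();
t2.start();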
Why not use some kind of external orchestrator?
I mean another class that is responsible for calling methodA or methodB when it is allowed.
Multithreading can still be handled via locking, or maybe just with some AtomicBooleans.
Please find below a naive draft of how to do it.
public class MyOrchestrator {

    @Autowired
    private ClassWithMethods classWithMethods;

    private final AtomicBoolean aBoolean = new AtomicBoolean(true);

    public Object callTheDesiredMethodIfPossible(Method method, Object... params) throws Exception {
        if (aBoolean.compareAndSet(true, false)) {
            try {
                return method.invoke(classWithMethods, params);
            } finally {
                aBoolean.set(true);
            }
        }
        if ("methodA".equals(method.getName())) {
            return method.invoke(classWithMethods, params);
        }
        return null;
    }
}
In very simple terms, all you need is to enter methodB only if no thread is inside methodA.
You can simply have a shared counter, initialized to 0, that records the number of threads currently inside methodA(). You should have a lock/mutex assigned to protect the count variable.
Threads entering methodA do count++.
Threads exiting methodA do count--.
Threads entering methodB should first check whether count == 0.
void methodA() {
    mutex.lock();
    count++;
    mutex.unlock();

    // do stuff

    mutex.lock();
    count--;
    mutex.unlock();
}

void methodB() {
    mutex.lock();
    if (count != 0) { // someone is still inside methodA, so bail out
        mutex.unlock();
        return;
    }
    mutex.unlock();

    // do stuff
}
You would need a counter for the threads in methodA, and a ReentrantLock with a Condition to signal all threads waiting in methodB once there are no threads left in methodA:
AtomicInteger threadsInMethodA = new AtomicInteger(0);
Lock threadsForMethodBLock = new ReentrantLock();
Condition signalWaitingThreadsForMethodB = threadsForMethodBLock.newCondition();

public void methodA() {
    threadsInMethodA.incrementAndGet();

    // do stuff

    if (threadsInMethodA.decrementAndGet() == 0) {
        threadsForMethodBLock.lock();
        try {
            signalWaitingThreadsForMethodB.signalAll();
        } finally {
            threadsForMethodBLock.unlock();
        }
    }
}

public void methodB() {
    threadsForMethodBLock.lock();
    try {
        while (!Thread.currentThread().isInterrupted() && threadsInMethodA.get() != 0) {
            try {
                signalWaitingThreadsForMethodB.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("Not sure if you should continue doing stuff in case of interruption");
            }
        }
        signalWaitingThreadsForMethodB.signalAll();
    } finally {
        threadsForMethodBLock.unlock();
    }

    // do stuff
}
So each thread entering methodB first checks whether anybody is in methodA, and signals the previously waiting threads. On the other hand, each thread entering methodA increments the counter to prevent new threads from doing work in methodB, and when it decrements the counter back to zero it releases all the threads waiting to do work in methodB.
I'm using a thread that is continuously reading from a queue.
Something like:
public void run() {
    Object obj;
    while (true) {
        synchronized (objectsQueue) {
            if (objectsQueue.isEmpty()) {
                try {
                    objectsQueue.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            obj = objectsQueue.poll();
        }

        // Do something with the Object obj
    }
}
What is the best way to stop this thread?
I see two options:
1 - Since Thread.stop() is deprecated, I can implement a stopThisThread() method that uses an atomic check-condition variable.
2 - Send a Death Event object or something like that to the queue. When the thread fetches a death event, it exits.
I prefer the 1st way, however, I don't know when to call the stopThisThread() method, as something might be on its way to the queue and the stop signal could arrive first (not desirable).
Any suggestions?
The DeathEvent (or, as it is often called, "poison pill") approach works well if you need to complete all of the work on the queue before shutting down. The problem is that this could take a long time.
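For reference, a minimal poison-pill sketch (the sentinel constant, the queue type, and doSomething() are my own placeholders, not code from the question):

private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();
private static final Object POISON_PILL = new Object(); // sentinel meaning "no more work"

public void run() {
    try {
        while (true) {
            Object obj = queue.take();
            if (obj == POISON_PILL) {
                return; // everything enqueued before the pill has already been processed
            }
            doSomething(obj);
        }
    } catch (InterruptedException ex) {
        // stopped early
    }
}

// to shut down: queue.put(POISON_PILL);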
If you want to stop as soon as possible, I suggest you do this
BlockingQueue<O> queue = ...
...

public void run() {
    try {
        // The following test is necessary to get fast interrupts. If
        // it is replaced with 'true', the queue will be drained before
        // the interrupt is noticed. (Thanks Tim)
        while (!Thread.interrupted()) {
            O obj = queue.take();
            doSomething(obj);
        }
    } catch (InterruptedException ex) {
        // We are done.
    }
}
To stop the thread t that was instantiated with that run method, simply call t.interrupt().
If you compare the code above with other answers, you will notice how using a BlockingQueue and Thread.interrupt() simplifies the solution.
I would also claim that an extra stop flag is unnecessary and, in the big picture, potentially harmful. A well-behaved worker thread should respect an interrupt. An unexpected interrupt simply means that the worker is being run in a context that the original programmer did not anticipate. The best thing is for the worker to do what it is told to do ... i.e. it should stop ... whether or not this fits with the original programmer's conception.
Why not use a scheduler which you can simply stop when required? The standard scheduler supports repeated scheduling and also waits for the worker thread to finish before rescheduling a new run.
ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
service.scheduleWithFixedDelay(myThread, 1, 10, TimeUnit.SECONDS);
This sample runs your task with a delay of 10 seconds, meaning that when one run finishes, the next one starts 10 seconds later. Instead of having to reinvent the wheel, you get

service.shutdown()

and the while(true) loop is no longer necessary.
ScheduledExecutorService Javadoc
In your reader thread, have a boolean variable stop. When you wish for the thread to stop, set this to true and interrupt the thread. Within the reader thread, when safe (when you don't have an unprocessed object), check the status of the stop variable and return out of the loop if it is set, as per below.
public class ReaderThread extends Thread {

    private volatile boolean stop = false;

    public void stopSoon() {
        stop = true;
        this.interrupt();
    }

    public void run() {
        Object obj;
        while (true) {
            if (stop) {
                return;
            }
            synchronized (objectsQueue) {
                if (objectsQueue.isEmpty()) {
                    try {
                        objectsQueue.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    if (stop) {
                        return;
                    }
                }
                obj = objectsQueue.poll();
            }
            // Do something with the Object obj
        }
    }
}

public class OtherClass {

    ReaderThread reader;

    private void start() {
        reader = ...;
        reader.start();
    }

    private void stop() throws InterruptedException {
        reader.stopSoon();
        reader.join(); // Wait for the thread to stop if necessary.
    }
}
Approach 1 is the preferred one.
Simply set a volatile stop field to true and call interrupt() on the running thread. This will force any blocking methods that are waiting (such as wait(), sleep(), or take()) to return with an InterruptedException (and if your library is written correctly this will be handled gracefully).
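A compact sketch of that pattern (the field and method names here are made up):

private volatile boolean stop = false;
private Thread worker; // the running reader thread

public void stopThisThread() {
    stop = true;
    worker.interrupt(); // wakes the worker out of wait() or take()
}

// and inside the worker's loop: while (!stop) { ... }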
I think your two cases actually exhibit the same potential behavior. For the second case, consider that Thread A adds the DeathEvent, after which Thread B adds a FooEvent. When your job thread receives the DeathEvent, there is still a FooEvent behind it, which is the same scenario you are describing in Option 1, unless you try to clear the queue before returning; but then you are essentially keeping the thread alive when what you are trying to do is stop it.
I agree with you that the first option is more desirable. A potential solution would depend on how your queue is populated. If it is part of your worker thread class, you could have your stopThisThread() method set a flag that makes the enqueuing call return an appropriate value (or throw an exception), i.e.:
class MyThread extends Thread {

    volatile boolean running = true; // volatile so the change is visible to other threads

    public void run() {
        while (running) {
            try {
                // process queue...
            } catch (InterruptedException e) {
                ...
            }
        }
    }

    public void stopThisThread() {
        running = false;
        interrupt();
    }

    public boolean enqueue(Object o) {
        if (!running) {
            return false;
            // OR
            // throw new ThreadNotRunningException();
        }
        queue.add(o);
        return true;
    }
}
It would then be the responsibility of the object attempting to enqueue the Event to deal with it appropriately, but at the least it will know that the event is not in the queue, and will not be processed.
I usually put a flag in the class that owns the Thread, and in my thread code I check it. (NOTE: instead of while(true) I use while(flag).)
Then I create a method in the class to set the flag to false:
private volatile boolean flag = true;

public void stopThread() {
    flag = false;
}

public void run() {
    Object obj;
    while (flag) {
        synchronized (objectsQueue) {
            if (objectsQueue.isEmpty()) {
                try {
                    objectsQueue.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            obj = objectsQueue.poll();
        }

        // Do something with the Object obj
    }
}
I have a few asynchronous tasks running and I need to wait until at least one of them is finished (in the future I'll probably need to wait until M out of N tasks are finished).
Currently they are presented as Future, so I need something like
/**
* Blocks current thread until one of specified futures is done and returns it.
*/
public static <T> Future<T> waitForAny(Collection<Future<T>> futures)
throws AllFuturesFailedException
Is there anything like this? Or anything similar, not necessarily for Future? Currently I loop through the collection of futures, check whether one is finished, then sleep for some time and check again. This does not look like the best solution: if I sleep for a long period then an unwanted delay is added, and if I sleep for a short period then it can affect performance.
I could try using new CountDownLatch(1), decrease the countdown when a task is complete, and call countdown.await(), but I found that possible only if I control Future creation. It is possible, but requires a system redesign, because currently the logic of task creation (sending a Callable to an ExecutorService) is separated from the decision about which Future to wait for. I could also override <T> RunnableFuture<T> AbstractExecutorService.newTaskFor(Callable<T> callable) and create a custom implementation of RunnableFuture with the ability to attach a listener that is notified when the task is finished, then attach such a listener to the needed tasks and use a CountDownLatch; but that means I have to override newTaskFor for every ExecutorService I use, and potentially there will be implementations which do not extend AbstractExecutorService. I could also try wrapping a given ExecutorService for the same purpose, but then I have to decorate all methods producing Futures.
All these solutions may work but seem very unnatural. It looks like I'm missing something simple, like WaitHandle.WaitAny(WaitHandle[] waitHandles) in C#. Are there any well-known solutions for this kind of problem?
UPDATE:
Originally I did not have access to Future creation at all, so there was no elegant solution. After redesigning the system I got access to Future creation and was able to add countDownLatch.countDown() to the execution process; now I can call countDownLatch.await() and everything works fine.
Thanks for the other answers. I did not know about ExecutorCompletionService and it can indeed be helpful for similar tasks, but in this particular case it could not be used because some Futures are created without any executor: the actual task is sent to another server over the network, completes remotely, and a completion notification is received.
Simple: check out ExecutorCompletionService.
ExecutorService.invokeAny
Why not just create a results queue and wait on the queue? Or more simply, use a CompletionService since that's what it is: an ExecutorService + result queue.
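A minimal sketch of how the CompletionService approach might look (the Result type, the tasks collection, and the pool size are illustrative assumptions; exception handling is omitted):

ExecutorService executor = Executors.newFixedThreadPool(4);
CompletionService<Result> completionService = new ExecutorCompletionService<>(executor);

// submit the tasks through the completion service rather than the executor directly
for (Callable<Result> task : tasks) {
    completionService.submit(task);
}

// take() blocks until *any* submitted task completes; call it M times to wait for M out of N
Future<Result> firstDone = completionService.take();
Result result = firstDone.get();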
This is actually pretty easy with wait() and notifyAll().
First, define a lock object. (You can use any class for this, but I like to be explicit):
package com.javadude.sample;
public class Lock {}
Next, define your worker thread. It must notify the lock object when it has finished its processing. Note that the notify must be in a synchronized block that locks on the lock object.
package com.javadude.sample;

public class Worker extends Thread {
    private Lock lock_;
    private long timeToSleep_;
    private String name_;

    public Worker(Lock lock, String name, long timeToSleep) {
        lock_ = lock;
        timeToSleep_ = timeToSleep;
        name_ = name;
    }

    @Override
    public void run() {
        // do real work -- using a sleep here to simulate work
        try {
            sleep(timeToSleep_);
        } catch (InterruptedException e) {
            interrupt();
        }
        System.out.println(name_ + " is done... notifying");
        // notify whoever is waiting, in this case, the client
        synchronized (lock_) {
            lock_.notify();
        }
    }
}
Finally, you can write your client:
package com.javadude.sample;

public class Client {
    public static void main(String[] args) {
        Lock lock = new Lock();
        Worker worker1 = new Worker(lock, "worker1", 15000);
        Worker worker2 = new Worker(lock, "worker2", 10000);
        Worker worker3 = new Worker(lock, "worker3", 5000);
        Worker worker4 = new Worker(lock, "worker4", 20000);
        boolean started = false;
        int numNotifies = 0;
        while (true) {
            synchronized (lock) {
                try {
                    if (!started) {
                        // need to do the start here so we grab the lock, just
                        // in case one of the threads is fast -- if we had done the
                        // starts outside the synchronized block, a fast thread could
                        // get to its notification *before* the client is waiting for it
                        worker1.start();
                        worker2.start();
                        worker3.start();
                        worker4.start();
                        started = true;
                    }
                    lock.wait();
                } catch (InterruptedException e) {
                    break;
                }
                numNotifies++;
                if (numNotifies == 4) {
                    break;
                }
                System.out.println("Notified!");
            }
        }
        System.out.println("Everyone has notified me... I'm done");
    }
}
As far as I know, Java has no analogous structure to the WaitHandle.WaitAny method.
It seems to me that this could be achieved through a "WaitableFuture" decorator:
// conceptual sketch: Future is an interface, so in practice this would extend or wrap
// a concrete Future implementation that exposes some doTask() hook
public class WaitableFuture<T>
        extends Future<T> {

    private CountDownLatch countDownLatch;

    WaitableFuture(CountDownLatch countDownLatch) {
        super();
        this.countDownLatch = countDownLatch;
    }

    void doTask() {
        super.doTask();
        this.countDownLatch.countDown();
    }
}
Though this would only work if it can be inserted before the execution code, since otherwise the execution code would not have the new doTask() method. But I really see no way of doing this without polling if you cannot somehow gain control of the Future object before execution.
Or if the future always runs in its own thread, and you can somehow get that thread. Then you could spawn a new thread to join each other thread, then handle the waiting mechanism after the join returns... This would be really ugly and would induce a lot of overhead though. And if some Future objects don't finish, you could have a lot of blocked threads depending on dead threads. If you're not careful, this could leak memory and system resources.
/**
 * Extremely ugly way of implementing WaitHandle.WaitAny for Thread.Join().
 */
public static void joinAny(Collection<Thread> threads, int numberToWaitFor)
        throws InterruptedException {
    CountDownLatch countDownLatch = new CountDownLatch(numberToWaitFor);
    for (Thread thread : threads) {
        new Thread(new JoinThreadHelper(thread, countDownLatch)).start();
    }
    countDownLatch.await();
}

class JoinThreadHelper implements Runnable {
    Thread thread;
    CountDownLatch countDownLatch;

    JoinThreadHelper(Thread thread, CountDownLatch countDownLatch) {
        this.thread = thread;
        this.countDownLatch = countDownLatch;
    }

    @Override
    public void run() {
        try {
            this.thread.join();
            this.countDownLatch.countDown();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
If you can use CompletableFutures instead then there is CompletableFuture.anyOf that does what you want, just call join on the result:
CompletableFuture.anyOf(futures).join()
You can use CompletableFutures with executors by calling the CompletableFuture.supplyAsync or runAsync methods.
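For example, a small sketch (the task bodies, the executor, and the result type are placeholders):

ExecutorService executor = Executors.newFixedThreadPool(2);

CompletableFuture<String> f1 = CompletableFuture.supplyAsync(() -> slowTaskA(), executor);
CompletableFuture<String> f2 = CompletableFuture.supplyAsync(() -> slowTaskB(), executor);

// anyOf takes a varargs array of futures and completes as soon as the first of them does
Object firstResult = CompletableFuture.anyOf(f1, f2).join();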
Since you don't care which one finishes, why not just have a single WaitHandle for all threads and wait on that? Whichever one finishes first can set the handle.
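The closest Java analogue of such a shared handle is probably a single CountDownLatch(1) that every task counts down when it finishes (a sketch, assuming you can add that call to each task):

CountDownLatch anyDone = new CountDownLatch(1);

// at the end of each task:
//     anyDone.countDown();

// in the waiting thread:
anyDone.await(); // returns as soon as the first task counts down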
See this option:
public class WaitForAnyRedux {
    private static final int POOL_SIZE = 10;

    public static <T> T waitForAny(Collection<T> collection) throws InterruptedException, ExecutionException {
        List<Callable<T>> callables = new ArrayList<Callable<T>>();
        for (final T t : collection) {
            Callable<T> callable = Executors.callable(new Thread() {
                @Override
                public void run() {
                    synchronized (t) {
                        try {
                            t.wait();
                        } catch (InterruptedException e) {
                        }
                    }
                }
            }, t);
            callables.add(callable);
        }
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(POOL_SIZE);
        ExecutorService executorService = new ThreadPoolExecutor(POOL_SIZE, POOL_SIZE, 0, TimeUnit.SECONDS, queue);
        return executorService.invokeAny(callables);
    }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        final List<Integer> integers = new ArrayList<Integer>();
        for (int i = 0; i < POOL_SIZE; i++) {
            integers.add(i);
        }
        (new Thread() {
            public void run() {
                Integer notified = null;
                try {
                    notified = waitForAny(integers);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } catch (ExecutionException e) {
                    e.printStackTrace();
                }
                System.out.println("notified=" + notified);
            }
        }).start();

        synchronized (integers) {
            integers.wait(3000);
        }

        Integer randomInt = integers.get((new Random()).nextInt(POOL_SIZE));
        System.out.println("Waking up " + randomInt);
        synchronized (randomInt) {
            randomInt.notify();
        }
    }
}