In my application we use a thread pool and have specified a timeout for it, but the timeout never seems to take effect. Below is the code:
import java.sql.Timestamp;
import java.util.ArrayList;
import java.util.Date;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
public class ThreadPoolDemo extends Thread {

    public void run() {
        System.out.println("Starting---" + new Timestamp((new Date()).getTime()) + "--" + Thread.currentThread().getName());
        try {
            Thread.sleep(30000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Finishing---" + new Timestamp((new Date()).getTime()) + "--" + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        ArrayBlockingQueue<Runnable> threadQueue = new ArrayBlockingQueue<Runnable>(5);
        ThreadPoolExecutor thumbnailGeneratorThreadPool = new ThreadPoolExecutor(1, 3,
                5, TimeUnit.SECONDS, threadQueue);
        thumbnailGeneratorThreadPool.allowCoreThreadTimeOut(true);
        ArrayList fTasks = new ArrayList();
        for (int i = 0; i < 15; i++) {
            System.out.println("Submitting Thread : " + (i + 1) + "- Current Queue size is : " + threadQueue.size());
            ThreadPoolDemo tpd = new ThreadPoolDemo();
            Future future = thumbnailGeneratorThreadPool.submit(tpd);
        }
    }
}
The output of the code is:
Submitting Thread : 1- Current Queue size is : 0
Submitting Thread : 2- Current Queue size is : 0
Submitting Thread : 3- Current Queue size is : 1
Submitting Thread : 4- Current Queue size is : 2
Submitting Thread : 5- Current Queue size is : 3
Submitting Thread : 6- Current Queue size is : 4
Submitting Thread : 7- Current Queue size is : 5
Submitting Thread : 8- Current Queue size is : 5
Submitting Thread : 9- Current Queue size is : 5
Exception in thread "main" java.util.concurrent.RejectedExecutionException
at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1774)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:768)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:656)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:78)
at ThreadPoolDemo.main(ThreadPoolDemo.java:33)
Starting---2016-10-26 14:20:16.254--pool-1-thread-2
Starting---2016-10-26 14:20:16.254--pool-1-thread-3
Starting---2016-10-26 14:20:16.254--pool-1-thread-1
Finishing---2016-10-26 14:20:46.261--pool-1-thread-1
Finishing---2016-10-26 14:20:46.261--pool-1-thread-2
Finishing---2016-10-26 14:20:46.261--pool-1-thread-3
Starting---2016-10-26 14:20:46.261--pool-1-thread-2
Starting---2016-10-26 14:20:46.261--pool-1-thread-3
Starting---2016-10-26 14:20:46.261--pool-1-thread-1
Finishing---2016-10-26 14:21:16.265--pool-1-thread-1
Starting---2016-10-26 14:21:16.265--pool-1-thread-1
Finishing---2016-10-26 14:21:16.265--pool-1-thread-3
Starting---2016-10-26 14:21:16.265--pool-1-thread-3
Finishing---2016-10-26 14:21:16.265--pool-1-thread-2
Finishing---2016-10-26 14:21:46.277--pool-1-thread-1
Finishing---2016-10-26 14:21:46.277--pool-1-thread-3
In the ThreadPoolExecutor, the keepAliveTime is set to 5 seconds. However, as the output shows, each task still runs for the full 30 seconds. I'm not sure why the ThreadPoolExecutor never interrupts the running threads (no InterruptedException is ever thrown).
I would like a mechanism to stop a task if it is still active beyond the specified timeout.
As already indicated in the comments, you did not specify a timeout; you only specified the keep-alive time of the ThreadPoolExecutor.
The keep-alive time does not terminate or interrupt running tasks; it only releases idle threads of the executor (see getKeepAliveTime).
If you want a timeout for your tasks, you have to use the invokeAll or invokeAny methods instead of submit.
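For illustration, a minimal sketch (a simplified pool, not your ThreadPoolDemo) showing how invokeAll applies a timeout: tasks that are still running when the timeout elapses are cancelled, which interrupts their worker threads.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class InvokeAllTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Callable<Void>> tasks = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            tasks.add(() -> {
                Thread.sleep(30000); // gets interrupted once the timeout elapses
                return null;
            });
        }
        // Waits at most 5 seconds; tasks that have not completed by then are cancelled.
        List<Future<Void>> futures = pool.invokeAll(tasks, 5, TimeUnit.SECONDS);
        futures.forEach(f -> System.out.println("cancelled: " + f.isCancelled()));
        pool.shutdown();
    }
}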
See also
Killing thread after some specified time limit in Java and
How to timeout a thread
I want to send n requests to a REST endpoint in parallel. I want to make sure they are executed on different threads for performance, and I need to wait until all n requests finish.
The only way I could come up with is using a CountDownLatch as follows (please check the main() method; this is testable code):
public static void main(String args[]) throws Exception {
    int n = 10; // n is dynamic during runtime
    final CountDownLatch waitForNRequests = new CountDownLatch(n);
    // send n requests
    for (int i = 0; i < n; i++) {
        var r = testRestCall("" + i);
        r.publishOn(Schedulers.parallel()).subscribe(res -> {
            System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" + res.getBody());
            waitForNRequests.countDown();
        });
    }
    waitForNRequests.await(); // wait till all n requests finish before going to the next line
    System.out.println("All n requests finished");
    Thread.sleep(10000);
}

public static Mono<ResponseEntity<Map>> testRestCall(String id) {
    WebClient client = WebClient.create("https://reqres.in/api");
    JSONObject request = new JSONObject();
    request.put("name", "user" + id);
    request.put("job", "leader");
    var res = client.post().uri("/users")
            .contentType(MediaType.APPLICATION_JSON)
            .body(BodyInserters.fromValue(request.toString()))
            .accept(MediaType.APPLICATION_JSON)
            .retrieve()
            .toEntity(Map.class)
            .onErrorReturn(ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE).build());
    return res;
}
This doesn't look good, and I'm sure there is a more elegant solution that doesn't use latches, etc.
I tried the following approach, but I don't know how to resolve these issues:
Flux.merge() and concat() result in all n requests being executed on a single thread.
How do I wait until all n requests finish executing (fork-join)?
List<Mono<ResponseEntity<Map>>> lst = new ArrayList<>();
int n = 10; // n is dynamic during runtime
for (int i = 0; i < n; i++) {
    var r = testRestCall("" + i);
    lst.add(r);
}
var t = Flux.fromIterable(lst).flatMap(Function.identity()); // tried merge() and concat() as well
t.publishOn(Schedulers.parallel()).subscribe(res -> {
    System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" + res.getBody());
    // ??? all requests execute on a single thread. How to parallelize?
});
// ??? How to wait till all n requests finish before going to the next line
System.out.println("All n requests finished");
Thread.sleep(10000);
Update:
I found the reason why the Flux subscriber runs on the same thread: I need to create a ParallelFlux. So the correct order should be:
var t = Flux.fromIterable(lst).flatMap(Function.identity());
t.parallel()
        .runOn(Schedulers.parallel())
        .subscribe(res -> {
            System.out.println(">>>>>>> Thread: " + Thread.currentThread().getName() + " response:" + res.getBody());
        });
Ref: https://projectreactor.io/docs/core/release/reference/#advanced-parallelizing-parralelflux
In reactive programming you think not in terms of threads but in terms of concurrency.
Reactor executes non-blocking/async tasks on a small number of threads, using the Schedulers abstraction to run them. Schedulers have responsibilities very similar to ExecutorService. By default the parallel scheduler has as many threads as CPU cores, but this can be controlled with the `reactor.schedulers.defaultPoolSize` system property.
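For example, a minimal sketch of overriding that default; since the property is read when Reactor initializes its schedulers, it is usually passed as a JVM flag rather than set programmatically:
// Equivalent JVM flag: -Dreactor.schedulers.defaultPoolSize=16
// Must take effect before Schedulers.parallel() is first used.
System.setProperty("reactor.schedulers.defaultPoolSize", "16");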
In your example, instead of creating multiple Monos and then merging them, it is better to use Flux and process the elements in parallel while controlling the concurrency.
Flux.range(1, 10)
.flatMap(this::testRestCall)
By default, flatMap processes up to Queues.SMALL_BUFFER_SIZE (256) in-flight inner sequences.
You can control the concurrency with flatMap(item -> process(item), concurrency), or use the concatMap operator if you want to process elements sequentially. Check flatMap(..., int concurrency, int prefetch) for details.
Flux.range(1, 10)
.flatMap(i -> testRestCall(i), 5)
The following test shows that the calls are executed on different threads:
@Test
void testParallel() {
    var flow = Flux.range(1, 10)
            .flatMap(i -> testRestCall(i))
            .log()
            .then(Mono.just("complete"));

    StepVerifier.create(flow)
            .expectNext("complete")
            .verifyComplete();
}
The resulting log:
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-4] reactor.Mono.FlatMap.3 : | onComplete()
2022-12-30 21:31:25.170 INFO 43383 --- [ctor-http-nio-3] reactor.Mono.FlatMap.2 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-2] reactor.Mono.FlatMap.1 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-8] reactor.Mono.FlatMap.7 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [tor-http-nio-11] reactor.Mono.FlatMap.10 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-7] reactor.Mono.FlatMap.6 : | onComplete()
2022-12-30 21:31:25.169 INFO 43383 --- [ctor-http-nio-9] reactor.Mono.FlatMap.8 : | onComplete()
2022-12-30 21:31:25.170 INFO 43383 --- [ctor-http-nio-6] reactor.Mono.FlatMap.5 : | onComplete()
2022-12-30 21:31:25.378 INFO 43383 --- [ctor-http-nio-5] reactor.Mono.FlatMap.4 : | onComplete()
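To cover the "wait until all n requests finish" part outside of a test, you can also block once at the outer edge of the pipeline instead of using a latch. A minimal sketch, using the testRestCall(String id) method from the question:
// Runs up to 5 calls concurrently and blocks the calling thread
// until every inner Mono has completed.
Flux.range(1, 10)
        .flatMap(i -> testRestCall("" + i), 5)
        .then()   // Mono<Void> that completes when all calls are done
        .block();
System.out.println("All n requests finished");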
I have researched this subject a lot but couldn't find any useful information, so I have decided to ask my first question on this platform. I am using a scheduled executor to repeat a task at a specific period, and everything works, but there is one problem: if a task takes longer than the schedule interval, the executor waits for it to finish before starting the next execution. I want the next execution to start when the scheduled time arrives, without waiting for the previous task to finish. How can I achieve this? I used SwingWorker on a Swing project, but this project is not a Swing project. Thanks for reading.
Main method
LogFactory.log(LogFactory.INFO_LEVEL, Config.MODULE_NAME + " - Available processors for Thread Pool: " + AVAILABLE_PROCESSORS);
ScheduledExecutorService executor = Executors.newScheduledThreadPool(AVAILABLE_PROCESSORS);
LogFactory.log(LogFactory.INFO_LEVEL, Config.MODULE_NAME + " - [ScheduledExecutorService] instance created.");
MainWorker task = new MainWorker();
LogFactory.log(LogFactory.INFO_LEVEL, Config.MODULE_NAME + " - [Main worker] created...");
executor.scheduleWithFixedDelay(task, 0, Config.CHECK_INTERVAL, TimeUnit.SECONDS);
Main Worker
public class MainWorker implements Runnable {

    private final NIFIncomingController controller = new NIFIncomingController();

    @Override
    public void run() {
        LogFactory.log(LogFactory.INFO_LEVEL, Config.MODULE_NAME + " - [Task] executed - [" + Thread.currentThread().getName() + "]");
        controller.run();
    }
}
You can combine several executors to achieve the desired behavior. Please find example code below:
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CombinedThreadPoolsExample {

    private static final DateFormat DATE_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
    private static final int INITIAL_DELAY = 0;
    private static final int FIXED_DELAY_IN_MILLISECONDS = 1000;
    private static final int TASK_EXECUTION_IN_MILLISECONDS = FIXED_DELAY_IN_MILLISECONDS * 2;

    public static void main(String[] args) {
        int availableProcessors = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: [" + availableProcessors + "].");
        ExecutorService fixedThreadPool = Executors.newFixedThreadPool(availableProcessors);

        Runnable runnableThatTakesMoreTimeThanSpecifiedDelay = new Runnable() {
            @Override
            public void run() {
                System.out.println("Thread name: [" + Thread.currentThread().getName() + "], time: [" + DATE_FORMAT.format(new Date()) + "].");
                try {
                    Thread.sleep(TASK_EXECUTION_IN_MILLISECONDS);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };

        ScheduledExecutorService singleThreadScheduledExecutor = Executors.newSingleThreadScheduledExecutor();
        singleThreadScheduledExecutor.scheduleWithFixedDelay(new Runnable() {
            @Override
            public void run() {
                fixedThreadPool.execute(runnableThatTakesMoreTimeThanSpecifiedDelay);
            }
        }, INITIAL_DELAY, FIXED_DELAY_IN_MILLISECONDS, TimeUnit.MILLISECONDS);
    }
}
The first lines of the output are the following for my machine:
Available processors: [8].
Thread name: [pool-1-thread-1], time: [2017-12-25 11:22:00.103].
Thread name: [pool-1-thread-2], time: [2017-12-25 11:22:01.104].
Thread name: [pool-1-thread-3], time: [2017-12-25 11:22:02.105].
Thread name: [pool-1-thread-4], time: [2017-12-25 11:22:03.105].
Thread name: [pool-1-thread-5], time: [2017-12-25 11:22:04.106].
Thread name: [pool-1-thread-6], time: [2017-12-25 11:22:05.107].
Thread name: [pool-1-thread-7], time: [2017-12-25 11:22:06.107].
Thread name: [pool-1-thread-8], time: [2017-12-25 11:22:07.107].
Thread name: [pool-1-thread-1], time: [2017-12-25 11:22:08.108].
Thread name: [pool-1-thread-2], time: [2017-12-25 11:22:09.108].
Thread name: [pool-1-thread-3], time: [2017-12-25 11:22:10.108].
Thread name: [pool-1-thread-4], time: [2017-12-25 11:22:11.109].
However, beware of relying on such a solution when task execution can take a long time compared to the pool size. Let's say we increase the time necessary for a task execution:
private static final int TASK_EXECUTION_IN_MILLISECONDS = FIXED_DELAY_IN_MILLISECONDS * 10;
This won't cause any exceptions during execution, but the specified delay between executions can no longer be maintained, as can be observed in the output after the aforementioned change:
Available processors: [8].
Thread name: [pool-1-thread-1], time: [2017-12-25 11:31:23.258].
Thread name: [pool-1-thread-2], time: [2017-12-25 11:31:24.260].
Thread name: [pool-1-thread-3], time: [2017-12-25 11:31:25.261].
Thread name: [pool-1-thread-4], time: [2017-12-25 11:31:26.262].
Thread name: [pool-1-thread-5], time: [2017-12-25 11:31:27.262].
Thread name: [pool-1-thread-6], time: [2017-12-25 11:31:28.263].
Thread name: [pool-1-thread-7], time: [2017-12-25 11:31:29.264].
Thread name: [pool-1-thread-8], time: [2017-12-25 11:31:30.264].
Thread name: [pool-1-thread-1], time: [2017-12-25 11:31:33.260].
Thread name: [pool-1-thread-2], time: [2017-12-25 11:31:34.261].
Thread name: [pool-1-thread-3], time: [2017-12-25 11:31:35.262].
This can be achieved with a single scheduled executor service. Say your schedule interval is Y.
Identify the worst-case time the task can take, say X. If that is not under your control, enforce a timeout inside the task implementation.
If X > Y, create another task object and double the schedule period.
So the code may look like:
executorService.scheduleAtFixedRate(taskObject1, 0 /* initial delay */, 2 * Y, timeUnit);
executorService.scheduleAtFixedRate(taskObject2, Y /* initial delay */, 2 * Y, timeUnit);
More generally, if X > nY, create n + 1 task objects scheduled with a period of (n + 1)Y and initial delays of 0, Y, 2Y, ..., nY respectively.
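A minimal, runnable sketch of the staggering idea (assuming, for illustration, Y = 1 second and a task that can run for up to about 2 seconds):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StaggeredScheduleSketch {
    public static void main(String[] args) {
        long y = 1; // schedule interval Y in seconds (illustrative value)
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);

        Runnable task = () -> {
            System.out.println("Started at " + System.currentTimeMillis()
                    + " on " + Thread.currentThread().getName());
            try {
                Thread.sleep(2000); // simulated work that takes longer than Y
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        // Two copies of the task, each with period 2Y, offset by Y,
        // so combined they still fire every Y seconds.
        executor.scheduleAtFixedRate(task, 0, 2 * y, TimeUnit.SECONDS);
        executor.scheduleAtFixedRate(task, y, 2 * y, TimeUnit.SECONDS);
    }
}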
I'm trying to use a resizer with a round-robin-pool in Akka routing, but it is not creating new routee instances; it only uses the number of instances specified as the lower-bound. I'm following the documentation for Akka version 2.5.3.
My configuration:
akka.actor.deployment {
  /round-robin-resizer {
    router = round-robin-pool
    resizer {
      lower-bound = 4
      upper-bound = 30
      pressure-threshold = 0
      rampup-rate = 0.5
      messages-per-resize = 1
    }
  }
}
Actor class:
@Override
public Receive createReceive() {
    return receiveBuilder()
            .match(Integer.class, msg -> {
                System.out.println("Message : " + msg + " Thread id : " + Thread.currentThread().getId());
                Thread.sleep(5000);
            })
            .matchAny(msg -> {
                System.out.println("Error Message : " + msg + " Thread id : " + Thread.currentThread().getId());
            }).build();
}
Creating the actor:
ActorRef roundRobin = system.actorOf(FromConfig.getInstance().props(Props.create(RoutingActor.class)), "round-robin-resizer");
for (int i = 0; i < 15; i++) {
    roundRobin.tell(i, ActorRef.noSender());
}
Output:
Message : 2 Thread id : 18
Message : 1 Thread id : 16
Message : 0 Thread id : 15
Message : 3 Thread id : 17
Message : 7 Thread id : 17
Message : 4 Thread id : 15
Message : 6 Thread id : 18
Message : 5 Thread id : 16
Message : 11 Thread id : 17
Message : 9 Thread id : 16
Message : 10 Thread id : 18
Message : 8 Thread id : 15
Message : 13 Thread id : 16
Message : 14 Thread id : 18
Message : 12 Thread id : 15
After every 4 results it waits 5 seconds for the previous instances to finish their work.
Look at the thread IDs. Each message handler sleeps for a while, so while messages are queuing up, a new instance should be allocated and run on a different thread. But this only happens for the first few instances; after that the resizer does not create any new instances, and the messages are simply processed in the normal round-robin flow.
You are confusing the thread id with the actual actor instance; the number of actor instances does not have to match the number of threads. Please refer to this answer to a similar question: Akka ConsistentHashingRoutingLogic not routing to the same dispatcher thread consistently
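If you want to see which routee instance actually handled a message (rather than which thread), a sketch of how the RoutingActor's receive could log the actor path instead; routees created by the pool get generated names such as $a, $b, $c under /user/round-robin-resizer:
@Override
public Receive createReceive() {
    return receiveBuilder()
            .match(Integer.class, msg -> {
                // getSelf() identifies this routee instance; the thread name only
                // shows which dispatcher thread happened to execute the message.
                System.out.println("Message : " + msg
                        + " actor : " + getSelf().path().name()
                        + " thread : " + Thread.currentThread().getName());
                Thread.sleep(5000);
            })
            .build();
}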
I need help making an observable start on the main thread and then hand items off to a pool of threads, allowing the source to continue emitting new items regardless of whether earlier items are still being processed in the pool.
This is my example:
public static void main(String[] args) {
    Observable<Integer> source = Observable.range(1, 10);

    source.map(i -> sleep(i, 10))
            .doOnNext(i -> System.out.println("Emitting " + i + " on thread " + Thread.currentThread().getName()))
            .observeOn(Schedulers.computation())
            .map(i -> sleep(i * 10, 300))
            .subscribe(i -> System.out.println("Received " + i + " on thread " + Thread.currentThread().getName()));

    sleep(-1, 30000);
}

private static int sleep(int i, int time) {
    try {
        Thread.sleep(time);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return i;
}
which always prints:
Emitting 1 on thread main
Emitting 2 on thread main
Emitting 3 on thread main
Received 10 on thread RxComputationScheduler-1
Emitting 4 on thread main
Emitting 5 on thread main
Emitting 6 on thread main
Received 20 on thread RxComputationScheduler-1
Emitting 7 on thread main
Emitting 8 on thread main
Emitting 9 on thread main
Received 30 on thread RxComputationScheduler-1
Emitting 10 on thread main
Received 40 on thread RxComputationScheduler-1
Received 50 on thread RxComputationScheduler-1
Received 60 on thread RxComputationScheduler-1
Received 70 on thread RxComputationScheduler-1
Received 80 on thread RxComputationScheduler-1
Received 90 on thread RxComputationScheduler-1
Received 100 on thread RxComputationScheduler-1
Although items are emitted on the main thread as intended, I want them handed off to the computation/IO thread pool afterwards so that they are processed in parallel, with the source free to keep emitting. The desired output would interleave the "Emitting" lines with "Received" lines spread across several pool threads.
I don't think you were slowing down the source emissions enough; they were emitted so quickly that all items were produced before observeOn() had a chance to schedule them onto another thread.
Try sleeping for 500ms instead of 10ms in the source. You will then see the interleaving you expect.
public class JavaLauncher {

    public static void main(String[] args) {
        Observable<Integer> source = Observable.range(1, 10);

        source.map(i -> sleep(i, 500))
                .doOnNext(i -> System.out.println("Emitting " + i + " on thread " + Thread.currentThread().getName()))
                .observeOn(Schedulers.computation())
                .map(i -> sleep(i * 10, 250))
                .subscribe(i -> System.out.println("Received " + i + " on thread " + Thread.currentThread().getName()));

        sleep(-1, 30000);
    }

    private static int sleep(int i, int time) {
        try {
            Thread.sleep(time);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return i;
    }
}
OUTPUT
Emitting 1 on thread main
Emitting 2 on thread main
Emitting 3 on thread main
Received 10 on thread RxComputationThreadPool-3
Emitting 4 on thread main
Received 20 on thread RxComputationThreadPool-3
Emitting 5 on thread main
Emitting 6 on thread main
Received 30 on thread RxComputationThreadPool-3
Emitting 7 on thread main
Emitting 8 on thread main
Received 40 on thread RxComputationThreadPool-3
Emitting 9 on thread main
Emitting 10 on thread main
Received 50 on thread RxComputationThreadPool-3
Received 60 on thread RxComputationThreadPool-3
Received 70 on thread RxComputationThreadPool-3
Received 80 on thread RxComputationThreadPool-3
Received 90 on thread RxComputationThreadPool-3
Received 100 on thread RxComputationThreadPool-3
UPDATE - Parallelized Version
public class JavaLauncher {

    public static void main(String[] args) {
        Observable<Integer> source = Observable.range(1, 10);

        source.map(i -> sleep(i, 250))
                .doOnNext(i -> System.out.println("Emitting " + i + " on thread " + Thread.currentThread().getName()))
                .flatMap(i ->
                        Observable.just(i)
                                .subscribeOn(Schedulers.computation())
                                .map(i2 -> sleep(i2 * 10, 500))
                )
                .subscribe(i -> System.out.println("Received " + i + " on thread " + Thread.currentThread().getName()));

        sleep(-1, 30000);
    }

    private static int sleep(int i, int time) {
        try {
            Thread.sleep(time);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return i;
    }
}
OUTPUT
Emitting 1 on thread main
Emitting 2 on thread main
Emitting 3 on thread main
Received 10 on thread RxComputationThreadPool-3
Emitting 4 on thread main
Received 20 on thread RxComputationThreadPool-4
Received 30 on thread RxComputationThreadPool-1
Emitting 5 on thread main
Received 40 on thread RxComputationThreadPool-2
Emitting 6 on thread main
Received 50 on thread RxComputationThreadPool-3
Emitting 7 on thread main
Received 60 on thread RxComputationThreadPool-4
Emitting 8 on thread main
Received 70 on thread RxComputationThreadPool-1
Emitting 9 on thread main
Received 80 on thread RxComputationThreadPool-2
Emitting 10 on thread main
Received 90 on thread RxComputationThreadPool-3
Received 100 on thread RxComputationThreadPool-4
We've performed a performance test with Oracle Advanced Queue on our Oracle DB environment. We've created the queue and the queue table with the following script:
BEGIN
DBMS_AQADM.create_queue_table(
queue_table => 'verisoft.qt_test',
queue_payload_type => 'SYS.AQ$_JMS_MESSAGE',
sort_list => 'ENQ_TIME',
multiple_consumers => false,
message_grouping => 0,
comment => 'POC Authorizations Queue Table - KK',
compatible => '10.0',
secure => true);
DBMS_AQADM.create_queue(
queue_name => 'verisoft.q_test',
queue_table => 'verisoft.qt_test',
queue_type => dbms_aqadm.NORMAL_QUEUE,
max_retries => 10,
retry_delay => 0,
retention_time => 0,
comment => 'POC Authorizations Queue - KK');
DBMS_AQADM.start_queue('q_test');
END;
/
We've published 1,000,000 messages at 2380 TPS using a PL/SQL client, and we've consumed them at 292 TPS using the Oracle JMS API client.
The consumer rate is almost 10 times slower than the publisher rate, and that speed does not meet our requirements.
Below is the piece of Java code that we use to consume messages:
if (q == null) initializeQueue();
System.out.println(listenerID + ": Listening on queue " + q.getQueueName() + "...");

MessageConsumer consumer = sess.createConsumer(q);
for (Message m; (m = consumer.receive()) != null;) {
    new Timer().schedule(new QueueExample(m), 0);
}

sess.close();
con.close();
Do you have any suggestions on how we can improve performance on the consumer side?
Your use of Timer may be your primary issue. The Timer documentation reads:
Corresponding to each Timer object is a single background thread that is used to execute all of the timer's tasks, sequentially. Timer tasks should complete quickly. If a timer task takes excessive time to complete, it "hogs" the timer's task execution thread. This can, in turn, delay the execution of subsequent tasks, which may "bunch up" and execute in rapid succession when (and if) the offending task finally completes.
I would suggest using a thread pool instead.
// My executor.
ExecutorService executor = Executors.newCachedThreadPool();

public void test() throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
        final int n = i;
        // Instead of using Timer, create a Runnable and pass it to the Executor.
        executor.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("Run " + n);
            }
        });
    }
    executor.shutdown();
    executor.awaitTermination(1, TimeUnit.DAYS);
}
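Applied to your consumer loop, it could look roughly as follows (a sketch that reuses the sess, q, con and QueueExample objects from your snippet; QueueExample only needs to be a Runnable, which TimerTask already is):
ExecutorService executor = Executors.newCachedThreadPool();

MessageConsumer consumer = sess.createConsumer(q);
for (Message m; (m = consumer.receive()) != null;) {
    // Hand each message off to the pool instead of creating a new Timer per message.
    executor.submit(new QueueExample(m));
}

executor.shutdown();
executor.awaitTermination(1, TimeUnit.DAYS);
sess.close();
con.close();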