I am using a ScheduledExecutorService to schedule and process jobs across several threads. In my application, a job can schedule a new job (on the same ScheduledExecutorService), as some kind of follow-up action.
In the main thread, I want to wait until all jobs are finished, as a synchronization point. There are the shutdown() and awaitTermination() methods, but those prevent any running or pending job from scheduling a new job. In my case, I actually want to allow this, accepting the risk that we may never finish (or hit some timeout).
How do I wait for all jobs and possibly their follow-up jobs to finish?
It's possible by keeping track of the number of active jobs. Effectively, each job that you submit must be wrapped as follows (pseudo code):
increase active jobs by one
try {
run actual job
} finally {
decrease active jobs by one
}
Of course the ScheduledExecutorService needs to be fully encapsulated for this to work.
The next step is to find the right concurrent mechanism for this, that doesn't introduce any busy waiting.
A quick and dirty attempt using a Lock with a Condition that signals no more jobs:
private final Lock lock = new ReentrantLock();
private final Condition noJobs = lock.newCondition();
private long jobCount = 0;

private void increaseJobCount() {
    lock.lock();
    try {
        jobCount++;
    } finally {
        lock.unlock();
    }
}

private void decreaseJobCount() {
    lock.lock();
    try {
        jobCount--;
        if (jobCount == 0) {
            noJobs.signalAll();
        }
    } finally {
        lock.unlock();
    }
}

public void awaitNoJobs() throws InterruptedException {
    lock.lock();
    try {
        while (jobCount > 0) {
            noJobs.await();
        }
    } finally {
        lock.unlock();
    }
}
I checked some other well-known concurrent classes, but apart from a BlockingQueue / BlockingDeque I couldn't find any that fit. Those take up more memory though, as you'd have to add and remove an element for each job that's submitted. A CountDownLatch comes close, but it doesn't allow increasing the count.
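To make the wrapping concrete, here is a rough sketch of how an encapsulating class might tie the counter above to a ScheduledExecutorService. The class and method names (TrackingScheduler, schedule) are only illustrative, and increaseJobCount() / decreaseJobCount() are the methods from the snippet above:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TrackingScheduler {

    private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);

    // lock / noJobs / jobCount and the three counting methods from the snippet above go here

    public void schedule(Runnable job, long delay, TimeUnit unit) {
        increaseJobCount();                 // count the job before handing it off, so awaitNoJobs() can't miss it
        executor.schedule(() -> {
            try {
                job.run();                  // the job itself may call schedule(...) again for follow-up work
            } finally {
                decreaseJobCount();         // always decrement, even if the job throws
            }
        }, delay, unit);
    }
}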
I was reading through the java.util.concurrent API, and found that
CountDownLatch: A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
CyclicBarrier: A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point.
To me both seem similar, but I am sure there is much more to it.
For example, a CountDownLatch's count cannot be reset, whereas a CyclicBarrier can be reused.
Is there any other difference between the two?
What are the use cases where someone would want to reset the value of countdown?
There's another difference.
When using a CyclicBarrier, the assumption is that you specify the number of waiting threads that trigger the barrier. If you specify 5, you must have at least 5 threads to call await().
When using a CountDownLatch, you specify the number of calls to countDown() that will result in all waiting threads being released. This means that you can use a CountDownLatch with only a single thread.
"Why would you do that?", you may say. Imagine that you are using a mysterious API coded by someone else that performs callbacks. You want one of your threads to wait until a certain callback has been called a number of times. You have no idea which threads the callback will be called on. In this case, a CountDownLatch is perfect, whereas I can't think of any way to implement this using a CyclicBarrier (actually, I can, but it involves timeouts... yuck!).
I just wish that CountDownLatch could be reset!
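For instance, waiting for that callback to fire three times could look roughly like this (mysteriousApi and addListener are made up for the example):

import java.util.concurrent.CountDownLatch;

CountDownLatch latch = new CountDownLatch(3);

// register with the callback-based API; we don't know (or care) which threads will invoke it
mysteriousApi.addListener(event -> latch.countDown());

latch.await();   // blocks this thread until countDown() has been called 3 times, by any threads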
One major difference is that CyclicBarrier takes an (optional) Runnable task which is run once the common barrier condition is met.
It also allows you to get the number of clients waiting at the barrier and the number required to trigger the barrier. Once triggered the barrier is reset and can be used again.
For simple use cases - services starting etc. - a CountDownLatch is fine. A CyclicBarrier is useful for more complex coordination tasks. An example of such a thing would be a parallel computation, where multiple subtasks are involved - kind of like MapReduce.
One point that nobody has yet mentioned is that, in a CyclicBarrier, if a thread has a problem (timeout, interrupted...), all the others that have reached await() get an exception. See Javadoc:
The CyclicBarrier uses an all-or-none breakage model for failed synchronization attempts: If a thread leaves a barrier point prematurely because of interruption, failure, or timeout, all other threads waiting at that barrier point will also leave abnormally via BrokenBarrierException (or InterruptedException if they too were interrupted at about the same time).
I think that the JavaDoc has explained the differences explicitly.
Most people know that a CountDownLatch cannot be reset, whereas a CyclicBarrier can. But this is not the only difference, or CyclicBarrier could simply have been named ResettableCountDownLatch.
We should tell the differences apart from the perspective of their goals, which are described in the JavaDoc:
CountDownLatch: A synchronization aid that allows one or more threads to wait until a set of operations being performed in other threads completes.
CyclicBarrier: A synchronization aid that allows a set of threads to all wait for each other to reach a common barrier point.
With CountDownLatch, one or more threads wait for a set of other threads to complete. In this situation there are two kinds of threads: one kind is waiting, the other kind is doing something, and after finishing their tasks the latter may wait themselves or simply terminate.
With CyclicBarrier, there is only one kind of thread: they wait for each other, they are equal.
The main difference is documented right in the Javadoc for CountDownLatch. Namely:
A CountDownLatch is initialized with a given count. The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately. This is a one-shot phenomenon -- the count cannot be reset. If you need a version that resets the count, consider using a CyclicBarrier.
source 1.6 Javadoc
A CountDownLatch is used for one-time synchronization. While using a CountDownLatch, any thread is allowed to call countDown() as many times as they like. Threads which called await() are blocked until the count reaches zero because of calls to countDown() by other unblocked threads. The javadoc for CountDownLatch states:
The await methods block until the current count reaches zero due to invocations of the countDown() method, after which all waiting threads are released and any subsequent invocations of await return immediately.
...
Another typical usage would be to divide a problem into N parts, describe each part with a Runnable that executes that portion and counts down on the latch, and queue all the Runnables to an Executor. When all sub-parts are complete, the coordinating thread will be able to pass through await. (When threads must repeatedly count down in this way, instead use a CyclicBarrier.)
In contrast, the cyclic barrier is used for multiple synchronization points, e.g. if a set of threads are running a loop/phased computation and need to synchronize before starting the next iteration/phase. As per the javadoc for CyclicBarrier:
The barrier is called cyclic because it can be re-used after the waiting threads are released.
Unlike the CountDownLatch, each call to await() belongs to some phase and can cause the thread to block until all parties belonging to that phase have invoked await(). There is no explicit countDown() operation supported by the CyclicBarrier.
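A rough sketch of that loop/phased style, assuming four workers and three phases (the numbers and the "work" are placeholders), where the same barrier is awaited at the end of every iteration:

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class PhasedComputation {

    private static final int WORKERS = 4;
    private static final int PHASES = 3;
    private static final CyclicBarrier barrier =
            new CyclicBarrier(WORKERS, () -> System.out.println("phase finished"));

    public static void main(String[] args) {
        for (int i = 0; i < WORKERS; i++) {
            new Thread(() -> {
                try {
                    for (int phase = 0; phase < PHASES; phase++) {
                        // do this worker's share of the phase here
                        barrier.await();   // wait until every worker reaches the end of the phase
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}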
This question has been adequately answered already, but I think I can value-add a little by posting some code.
To illustrate the behaviour of cyclic barrier, I have made some sample code. As soon as the barrier is tipped, it is automatically reset so that it can be used again (hence it is "cyclic"). When you run the program, observe that the print outs "Let's play" are triggered only after the barrier is tipped.
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierCycles {

    static CyclicBarrier barrier;

    public static void main(String[] args) throws InterruptedException {
        barrier = new CyclicBarrier(3);

        new Worker().start();
        Thread.sleep(1000);
        new Worker().start();
        Thread.sleep(1000);
        new Worker().start();
        Thread.sleep(1000);

        System.out.println("Barrier automatically resets.");

        new Worker().start();
        Thread.sleep(1000);
        new Worker().start();
        Thread.sleep(1000);
        new Worker().start();
    }
}

class Worker extends Thread {
    @Override
    public void run() {
        try {
            CyclicBarrierCycles.barrier.await();
            System.out.println("Let's play.");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (BrokenBarrierException e) {
            e.printStackTrace();
        }
    }
}
When I was studying latches and cyclic barriers, I came up with these metaphors.
CyclicBarrier: Imagine a company has a meeting room. In order to start the meeting, a certain number of meeting attendees have to arrive (to make it official). The following is the code of a normal meeting attendee (an employee):
class MeetingAtendee implements Runnable {

    CyclicBarrier myMeetingQuorumBarrier;

    public MeetingAtendee(CyclicBarrier myMileStoneBarrier) {
        this.myMeetingQuorumBarrier = myMileStoneBarrier;
    }

    @Override
    public void run() {
        try {
            System.out.println(Thread.currentThread().getName() + " i joined the meeting ...");
            myMeetingQuorumBarrier.await();
            System.out.println(Thread.currentThread().getName() + " finally meeting started ...");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (BrokenBarrierException e) {
            System.out.println("Meeting canceled! every body dance <by chic band!>");
        }
    }
}
The employee joins the meeting and waits for the others to arrive so it can start; he also bails out (via a BrokenBarrierException) if the meeting gets canceled :) Then we have THE BOSS, who does not like to wait for others to show up, and if he loses his patience, he cancels the meeting:
class MeetingAtendeeTheBoss implements Runnable {

    CyclicBarrier myMeetingQuorumBarrier;

    public MeetingAtendeeTheBoss(CyclicBarrier myMileStoneBarrier) {
        this.myMeetingQuorumBarrier = myMileStoneBarrier;
    }

    @Override
    public void run() {
        try {
            System.out.println(Thread.currentThread().getName() + "I am THE BOSS - i joined the meeting ...");
            // the boss does not like to wait too much!! he/she waits for 1 second and then we END the meeting
            myMeetingQuorumBarrier.await(1, TimeUnit.SECONDS);
            System.out.println(Thread.currentThread().getName() + " finally meeting started ...");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (BrokenBarrierException e) {
            System.out.println("what WHO canceled The meeting");
        } catch (TimeoutException e) {
            System.out.println("These employees waste my time!!");
        }
    }
}
On a normal day, employees come to the meeting and wait for the others to show up, and if some attendees don't come they have to wait indefinitely! In this particular meeting the boss arrives, and he does not like to wait (5 people are needed to start the meeting, but only the boss and one enthusiastic employee show up), so he cancels the meeting (angrily):
CyclicBarrier meetingAtendeeQuorum = new CyclicBarrier(5);
Thread atendeeThread = new Thread(new MeetingAtendee(meetingAtendeeQuorum));
Thread atendeeThreadBoss = new Thread(new MeetingAtendeeTheBoss(meetingAtendeeQuorum));
atendeeThread.start();
atendeeThreadBoss.start();
Output:
//Thread-1I am THE BOSS - i joined the meeting ...
// Thread-0 i joined the meeting ...
// These employees waste my time!!
// Meeting canceled! every body dance <by chic band!>
There is another scenario in which an outside thread (an earthquake) cancels the meeting (by calling the reset() method). In this case all the waiting threads get woken up by an exception:
class NaturalDisasters implements Runnable {

    CyclicBarrier someStupidMeetingAtendeeQuorum;

    public NaturalDisasters(CyclicBarrier someStupidMeetingAtendeeQuorum) {
        this.someStupidMeetingAtendeeQuorum = someStupidMeetingAtendeeQuorum;
    }

    void earthQuakeHappening() {
        System.out.println("earth quaking.....");
        someStupidMeetingAtendeeQuorum.reset();
    }

    @Override
    public void run() {
        earthQuakeHappening();
    }
}
Running the code will result in funny output:
// Thread-1I am THE BOSS - i joined the meeting ...
// Thread-0 i joined the meeting ...
// earth quaking.....
// what WHO canceled The meeting
// Meeting canceled! every body dance <by chic band!>
You can also add a secretary to the meeting room; if a meeting is held she will document everything, but she is not part of the meeting:
class MeetingSecretary implements Runnable {
    @Override
    public void run() {
        System.out.println("preparing meeting documents");
        System.out.println("taking notes ...");
    }
}
Latches: if the angry boss wants to hold an exhibition for company customers, everything needs to be ready (resources). We provide a to-do list; every worker (thread) does his job, and we check the to-do list (some workers do painting, others prepare the sound system ...). When all the items on the to-do list are complete (the resources are provided), we can open the doors to the customers.
public class ExhibitionVisitor implements Runnable {

    CountDownLatch exhibitonDoorlatch = null;

    public ExhibitionVisitor(CountDownLatch latch) {
        exhibitonDoorlatch = latch;
    }

    public void run() {
        try {
            exhibitonDoorlatch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("customer visiting exhibition");
    }
}
And the workers who are preparing the exhibition:
class Worker implements Runnable {

    CountDownLatch myTodoItem = null;

    public Worker(CountDownLatch latch) {
        this.myTodoItem = latch;
    }

    public void run() {
        System.out.println("doing my part of job ...");
        System.out.println("My work is done! remove it from todo list");
        myTodoItem.countDown();
    }
}
CountDownLatch preperationTodoList = new CountDownLatch(3);

// exhibition preparation workers
Worker electricalWorker = new Worker(preperationTodoList);
Worker paintingWorker = new Worker(preperationTodoList);
Worker soundSystemWorker = new Worker(preperationTodoList); // third worker, so all 3 to-do items get counted down

// Exhibition Visitors
ExhibitionVisitor exhibitionVisitorA = new ExhibitionVisitor(preperationTodoList);
ExhibitionVisitor exhibitionVisitorB = new ExhibitionVisitor(preperationTodoList);
ExhibitionVisitor exhibitionVisitorC = new ExhibitionVisitor(preperationTodoList);

new Thread(electricalWorker).start();
new Thread(paintingWorker).start();
new Thread(soundSystemWorker).start();

new Thread(exhibitionVisitorA).start();
new Thread(exhibitionVisitorB).start();
new Thread(exhibitionVisitorC).start();
In a nutshell, just to understand the key functional differences between the two:
public class CountDownLatch {

    private final Object mutex = new Object();
    private int count;

    public CountDownLatch(int count) {
        this.count = count;
    }

    public void await() throws InterruptedException {
        synchronized (mutex) {
            while (count > 0) {      // waiters sleep until the count has been driven to zero
                mutex.wait();
            }
        }
    }

    public void countDown() {
        synchronized (mutex) {
            if (--count == 0) {      // only the call that reaches zero releases the waiters
                mutex.notifyAll();
            }
        }
    }
}
and
public class CyclicBarrier {

    private final Object mutex = new Object();
    private int count;

    public CyclicBarrier(int count) {
        this.count = count;
    }

    public void await() throws InterruptedException {
        synchronized (mutex) {
            count--;                 // each arriving thread counts itself down...
            while (count > 0) {      // ...and waits until the last party arrives
                mutex.wait();
            }
            mutex.notifyAll();       // the last thread releases everyone
        }
    }
}
except, of course, for features like non-blocking use, timed waiting, diagnostics, and everything that has been explained in detail in the answers above.
The above classes are, however, fully functional and equivalent, within the provided functionality, to their corresponding namesakes.
On a different note, CountDownLatch's inner class subclasses AQS, while CyclicBarrier uses a ReentrantLock (my suspicion is that it could be the other way around, or both could use AQS, or both could use a Lock, without any loss of performance efficiency).
With CountDownLatch, threads wait for other threads to complete their execution. With CyclicBarrier, worker threads wait for each other to complete their execution.
You cannot reuse the same CountDownLatch instance once the count reaches zero and the latch is open; on the other hand, a CyclicBarrier can be reused by resetting the barrier once it is broken.
One obvious difference is that only N threads can await on a CyclicBarrier of N to be released in one cycle. But an unlimited number of threads can await on a CountDownLatch of N; the count-down decrement can be done by one thread N times, by N threads once each, or by any combination.
In the case of CyclicBarrier, as soon as ALL child threads have called barrier.await(), the barrier-action Runnable is executed. The barrier.await() in each child thread may take a different length of time to return, but they are all released at the same moment.
CountDownLatch is a count-down of anything; CyclicBarrier is a count-down of threads only.
Assume there are 5 worker threads and one shipper thread, and the shipper will ship items out once the workers have produced 100 items.
For CountDownLatch, the counter can be on the workers or on the items.
For CyclicBarrier, the counter can only be on the workers.
If a worker falls into an infinite sleep, then with a CountDownLatch on items the shipper can still ship; with a CyclicBarrier, however, the shipper can never be called.
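A minimal sketch of the "counter on items" case: the latch is initialized to 100 and every produced item counts it down, so the shipper is released as soon as 100 items exist, regardless of which workers produced them (the numbers and class name are illustrative):

import java.util.concurrent.CountDownLatch;

public class ShippingExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch itemsReady = new CountDownLatch(100);

        // 5 workers, each producing 20 items; the latch counts items, not threads
        for (int w = 0; w < 5; w++) {
            new Thread(() -> {
                for (int i = 0; i < 20; i++) {
                    // produce one item ...
                    itemsReady.countDown();
                }
            }).start();
        }

        itemsReady.await();            // shipper waits for 100 items, no matter who made them
        System.out.println("shipping 100 items");
    }
}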
@Kevin Lee and @Jon: I tried CyclicBarrier with the optional Runnable. It looks like it runs in the beginning and after the CyclicBarrier is tipped. Here are the code and output:
static CyclicBarrier barrier;

public static void main(String[] args) throws InterruptedException {
    barrier = new CyclicBarrier(3, new Runnable() {
        @Override
        public void run() {
            System.out.println("I run in the beginning and after the CyclicBarrier is tipped");
        }
    });

    new Worker().start();
    Thread.sleep(1000);
    new Worker().start();
    Thread.sleep(1000);
    new Worker().start();
    Thread.sleep(1000);

    System.out.println("Barrier automatically resets.");

    new Worker().start();
    Thread.sleep(1000);
    new Worker().start();
    Thread.sleep(1000);
    new Worker().start();
}
Output
I run in the beginning and after the CyclicBarrier is tipped
Let's play.
Let's play.
Let's play.
Barrier automatically resets.
I run in the beginning and after the CyclicBarrier is tipped
Let's play.
Let's play.
Let's play.
I created a workflow to wait for all the threads I created. This example works in 99% of cases, but sometimes the method waitForAllDone finishes sooner than all the threads are completed. I know this because after waitForAllDone I am closing a stream that a created thread is still using, so this exception occurs:
Caused by: java.io.IOException: Stream closed
My threads start with:
@Override
public void run() {
    try {
        process();
    } finally {
        Factory.close(this);
    }
}
closing:
protected static void close(final Client client) {
    clientCount--;
}
When I create a thread I call this:
public RobWSClient getClient() {
    clientCount++;
    return new Client();
}
and the clientCount variable inside the factory:
private static volatile int clientCount = 0;
wait:
public void waitForAllDone() {
    try {
        while (clientCount > 0) {
            Thread.sleep(10);
        }
    } catch (InterruptedException e) {
        LOG.error("Error", e);
    }
}
You need to protect the modification and reading of clientCount via synchronization. The main issue is that clientCount-- and clientCount++ are NOT atomic operations, so two threads could execute them concurrently and end up with the wrong result.
Simply using volatile as you do above would ONLY work if ALL operations on the field were atomic. Since they are not, you need some locking mechanism. As Anton states, an AtomicInteger is an excellent choice here. Note that the field holding it should be final (or volatile) so that the reference is safely published and visible to all threads.
That being said, the general rule post Java 1.5 is to use an ExecutorService instead of raw Threads. Using this in conjunction with Guava's Futures class could make waiting for everything to complete as simple as:
Future<List<?>> future = Futures.successfulAsList(myFutureList);
future.get();
// all processes are complete
Futures.successfulAsList
I'm not sure that the rest of your code has no issues, but you can't increment a volatile variable like this: clientCount++. Use an AtomicInteger instead.
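For illustration, a sketch of the question's factory with the counter switched to an AtomicInteger (this fixes the lost updates; the polling loop in waitForAllDone stays as in the question):

import java.util.concurrent.atomic.AtomicInteger;

private static final AtomicInteger clientCount = new AtomicInteger(0);

public RobWSClient getClient() {
    clientCount.incrementAndGet();      // atomic: no updates are lost between threads
    return new Client();
}

protected static void close(final Client client) {
    clientCount.decrementAndGet();      // atomic decrement from the finally block in run()
}

public void waitForAllDone() {
    try {
        while (clientCount.get() > 0) { // still polling, as in the question; only the atomicity is fixed
            Thread.sleep(10);
        }
    } catch (InterruptedException e) {
        LOG.error("Error", e);
    }
}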
The best way to wait for threads to terminate, is to use one of the high-level concurrency facilities.
In this case, the easiest way would be to use an ExecutorService.
You would 'offer' a new task to the executor in this way:
...
ExecutorService executor = Executors.newFixedThreadPool(POOL_SIZE);
...
Client client = getClient(); // assuming Client implements Runnable
executor.submit(client);
...
public void waitForAllDone() throws InterruptedException {
    executor.shutdown();                              // stop accepting new tasks
    executor.awaitTermination(30, TimeUnit.SECONDS);  // wait up to 30 seconds for the submitted tasks to finish
    ...
}
In this way, you don't waste valuable CPU cycles in busy waits or sleep/awake cycles.
See ExecutorService docs for details.
I am looking for a way to execute batches of tasks in java. The idea is to have an ExecutorService based on a thread pool that will allow me to spread a set of Callable among different threads from a main thread. This class should provide a waitForCompletion method that will put the main thread to sleep until all tasks are executed. Then the main thread should be awaken, and it will perform some operations and resubmit a set of tasks.
This process will be repeated numerous times, so I would like to avoid using ExecutorService.shutdown(), as that would require creating multiple instances of ExecutorService.
Currently I have implemented it in the following way using a AtomicInteger, and a Lock/Condition:
public class BatchThreadPoolExecutor extends ThreadPoolExecutor {

    private final AtomicInteger mActiveCount;
    private final Lock mLock;
    private final Condition mCondition;

    public <C extends Callable<V>, V> Map<C, Future<V>> submitBatch(Collection<C> batch) {
        ...
        for (C task : batch) {
            submit(task);
            mActiveCount.incrementAndGet();
        }
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        mLock.lock();
        if (mActiveCount.decrementAndGet() == 0) {
            mCondition.signalAll();
        }
        mLock.unlock();
    }

    public void awaitBatchCompletion() throws InterruptedException {
        ...
        // Lock and wait until there is no active task
        mLock.lock();
        while (mActiveCount.get() > 0) {
            try {
                mCondition.await();
            } catch (InterruptedException e) {
                mLock.unlock();
                throw e;
            }
        }
        mLock.unlock();
    }
}
Please note that I will not necessarily submit all the tasks of a batch at once, therefore a CountDownLatch does not seem to be an option.
Is this a valid way to do it? Is there a more efficient/elegant way to implement it?
Thanks
I think the ExecutorService itself will be able to meet your requirements.
Call invokeAll(...) with your tasks: it blocks until all of them have completed and returns their Futures, so once you can iterate over all the Futures, all tasks are finished.
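Something along these lines (a sketch; the result type, pool size and batch-building are placeholders, and exception handling is simplified):

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

ExecutorService executor = Executors.newFixedThreadPool(4);

void runBatch(List<Callable<Integer>> batch) throws Exception {
    // invokeAll blocks until every task in the batch has completed (or this thread is interrupted)
    List<Future<Integer>> futures = executor.invokeAll(batch);

    for (Future<Integer> f : futures) {
        Integer result = f.get();   // no blocking here; just surfaces the result or the task's exception
        // use the results to decide what goes into the next batch
    }
    // then build the next batch and call runBatch(...) again on the same executor
}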
As the other answers point out, there doesn't seem to be any part of your use case that requires a custom ExecutorService.
It seems to me that all you need to do is submit a batch, wait for them all to finish while ignoring interrupts on the main thread, then submit another batch perhaps based on the results of the first batch. I believe this is just a matter of:
ExecutorService service = ...;
Collection<Future> futures = new HashSet<Future>();
for (Callable callable : tasks) {
Future future = service.submit(callable);
futures.add(future);
}
for(Future future : futures) {
try {
future.get();
} catch (InterruptedException e) {
// Figure out if the interruption means we should stop.
}
}
// Use the results of futures to figure out a new batch of tasks.
// Repeat the process with the same ExecutorService.
I agree with @ckuetbach that the default Java Executors should provide you with all of the functionality you need to execute a "batch" of jobs.
If I were you I would just submit a bunch of jobs, wait for them to finish with the ExecutorService.awaitTermination() and then just start up a new ExecutorService. Doing this to save on "thread creations" is premature optimization unless you are doing this 100s of times a second or something.
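That per-batch approach could look roughly like this (NUM_THREADS, the batch source and the timeout are placeholders, and interruption handling is omitted):

// one fresh executor per batch
for (List<Runnable> batch : batches) {
    ExecutorService executor = Executors.newFixedThreadPool(NUM_THREADS);
    for (Runnable job : batch) {
        executor.submit(job);
    }
    executor.shutdown();                              // no new tasks accepted; submitted ones keep running
    executor.awaitTermination(1, TimeUnit.HOURS);     // block until the whole batch is done (or times out)
    // inspect results, build the next batch, loop around with a new executor
}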
If you really are stuck on using the same ExecutorService for each of the batches then you can allocate a ThreadPoolExecutor yourself, and be in a loop looking at ThreadPoolExecutor.getActiveCount(). Something like:
BlockingQueue<Runnable> jobQueue = new LinkedBlockingQueue<Runnable>();
ThreadPoolExecutor executor = new ThreadPoolExecutor(NUM_THREADS, NUM_THREADS,
        0L, TimeUnit.MILLISECONDS, jobQueue);
// submit your batch of jobs ...
// need to wait a bit for the jobs to start
Thread.sleep(100);
// keep waiting while tasks are still running or still queued
while (executor.getActiveCount() > 0 || jobQueue.size() > 0) {
    // to slow the spin
    Thread.sleep(1000);
}
// continue on to submit the next batch
I would like to ask a basic question about Java threads. Let's consider a producer-consumer scenario. Say there is one producer and n consumers. Consumers arrive at random times, and once they are served they go away, meaning each consumer runs on its own thread. Should I still use a run-forever condition for the consumer?
public class Consumer extends Thread {
    public void run() {
        while (true) {
        }
    }
}
Won't this keep thread running forever ?
I wouldn't extend Thread, instead I would implement Runnable.
If you want the thread to run forever, I would have it loop forever.
A common alternative is to use
while(!Thread.currentThread().isInterrupted()) {
or
while(!Thread.interrupted()) {
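Putting that together, a consumer loop that stops cleanly when another thread calls interrupt() might look like this sketch:

public class InterruptibleConsumer implements Runnable {

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // serve the next customer here; blocking calls such as queue.take()
            // would throw InterruptedException instead, which should also end the loop
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new InterruptibleConsumer());
        t.start();
        Thread.sleep(1000);
        t.interrupt();   // flips the loop condition; run() returns and the thread ends
    }
}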
It will, so you might want to do something like
while(beingServed)
{
//check if the customer is done being served (set beingServed to false)
}
This way you'll escape the loop when it's meant to die.
Why not use a boolean that represents the presence of the Consumer?
public class Consumer extends Thread {

    private volatile boolean present;

    public Consumer() {
        present = true;
    }

    public void run() {
        while (present) {
            // Do Stuff
        }
    }

    public void consumerLeft() {
        present = false;
    }
}
First, you can create a thread for each consumer; when the consumer finishes its job, the run method ends and the thread dies, so there is no need for an infinite loop. However, creating a thread per consumer is not a good idea, since thread creation is quite expensive from a performance point of view; threads are expensive resources. In addition, I agree with the answers above that it is better to implement Runnable rather than extend Thread; extend Thread only when you wish to customize the thread itself.
I strongly suggest you use a thread pool, with the consumer as the Runnable object that is run by a thread from the pool.
The code should look like this:
public class ConsumerMgr {

    int poolSize = 2;
    int maxPoolSize = 2;
    long keepAliveTime = 10;
    ThreadPoolExecutor threadPool = null;
    final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(5);

    public ConsumerMgr() {
        threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize,
                keepAliveTime, TimeUnit.SECONDS, queue);
    }

    public void runTask(Runnable task) {
        // System.out.println("Task count.." + threadPool.getTaskCount());
        // System.out.println("Queue Size before assigning the task.." + queue.size());
        threadPool.execute(task);
        // System.out.println("Queue Size after assigning the task.." + queue.size());
        // System.out.println("Pool Size after assigning the task.." + threadPool.getActiveCount());
        // System.out.println("Task count.." + threadPool.getTaskCount());
        System.out.println("Task count.." + queue.size());
    }
}
It is not a good idea to extend Thread (unless you are coding a new kind of thread - i.e. essentially never).
The best approach is to pass a Runnable to the Thread's constructor, like this:
public class Consumer implements Runnable {
    public void run() {
        while (true) {
            // Do something
        }
    }
}
new Thread(new Consumer()).start();
In general, while(true) is OK, but you have to handle being interrupted, either by normal wake or by spurious wakeup. There are many examples out there on the web.
I recommend reading Java Concurrency in Practice.
For the producer-consumer pattern you are better off using wait() and notify(). See this tutorial. This is far more efficient than a busy while (true) loop.
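A minimal wait()/notify() hand-off between a producer and a consumer might look like this sketch (a one-slot buffer, purely for illustration):

public class Buffer {

    private Object item;   // null means empty

    public synchronized void put(Object newItem) throws InterruptedException {
        while (item != null) {
            wait();                // producer sleeps until the consumer has taken the item
        }
        item = newItem;
        notifyAll();               // wake consumers blocked in take()
    }

    public synchronized Object take() throws InterruptedException {
        while (item == null) {
            wait();                // consumer sleeps until the producer has put an item
        }
        Object taken = item;
        item = null;
        notifyAll();               // wake producers blocked in put()
        return taken;
    }
}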
If you want your threads to process messages until you kill them (or they are killed in some way), then inside while (true) there would be some synchronized call to your producer (or a blocking queue, or other queuing system) which blocks until a message becomes available. Once a message is consumed, the loop restarts and waits again.
If you want to manually instantiate a bunch of threads which pull a message from a producer just once and then die, don't use while (true).