I have to read a huge text file, around 3 GB (about 40 million lines). Just reading it is really fast:
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    String line;
    while ((line = br.readLine()) != null) {
        // Nothing here
    }
}
For each line read by the code above I do some parsing of the string and then process it further (a heavy task). I tried to do that with multiple threads.
A) I have tried a BlockingQueue like this:
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    String line;
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
    int numThreads = 5;
    Consumer[] consumer = new Consumer[numThreads];
    for (int i = 0; i < consumer.length; i++) {
        consumer[i] = new Consumer(queue);
        consumer[i].start();
    }
    while ((line = br.readLine()) != null) {
        queue.put(line);
    }
    queue.put("exit");
} catch (FileNotFoundException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException | InterruptedException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
}
class Consumer extends Thread {
    private final BlockingQueue<String> queue;

    Consumer(BlockingQueue q) {
        queue = q;
    }

    public void run() {
        while (true) {
            try {
                String result = queue.take();
                if (result.equals("exit")) {
                    queue.put("exit");
                    break;
                }
                System.out.println(result);
            } catch (InterruptedException ex) {
                Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
}
This approach took more time than normal single-threaded processing.
I am not sure why - what am I doing wrong?
B) I have tried ExecutorService:
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    String line;
    ExecutorService pool = Executors.newFixedThreadPool(10);
    while ((line = br.readLine()) != null) {
        pool.execute(getRunnable(line));
    }
    pool.shutdown();
} catch (FileNotFoundException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
}
private static Runnable getRunnable(String run) {
    Runnable task = () -> {
        System.out.println(run);
    };
    return task;
}
This approach also takes more time than printing directly inside the while loop. What am I doing wrong?
What is the correct way to do it?
How can I efficiently process the read line with multiple threads?
Answering one part here - why is the BlockingQueue option slower.
It is important to understand that threads don't come for "free". There is always a certain overhead required to get them up and to "manage" them.
And of course, when you are actually using more threads than your hardware can support "natively", context switching is added to the bill.
Beyond that, the BlockingQueue doesn't come for free either. You see, in order to preserve order, that ArrayBlockingQueue probably has to synchronize somewhere. Worst case, that means locking and waiting. Yes, the JVM and JIT are usually pretty good about such things; but again, a certain "percentage" gets added to the bill.
But just for the record, that shouldn't matter. From the javadoc:
This class supports an optional fairness policy for ordering waiting producer and consumer threads. By default, this ordering is not guaranteed. However, a queue constructed with fairness set to true grants threads access in FIFO order. Fairness generally decreases throughput but reduces variability and avoids starvation.
As you are not setting "fairness"
BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
that shouldn't affect you. On the other hand: I am pretty sure you expected those lines to be processed in order, so you would actually want to go for
BlockingQueue<String> queue = new ArrayBlockingQueue<>(100, true);
which would further slow down the whole thing.
Finally: I agree with the comments given so far. Benchmarking such things is a complex undertaking, and many aspects influence the results. The most important question is definitely: where is your bottleneck?! Is it IO performance (then more threads don't help much!) - or is it really the overall processing time (and then the "correct" number of threads for processing should definitely speed things up)?
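For illustration, a rough sketch of how you could check that: time a pass that only reads, then a pass that reads and does the work single-threaded. The file name and the parse(...) method below are placeholders for your real code; if the two numbers come out close, you are IO bound and more threads won't buy much.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class BottleneckCheck {
    public static void main(String[] args) throws IOException {
        System.out.println("read only:      " + time(false) + " ms");
        System.out.println("read + process: " + time(true) + " ms");
    }

    private static long time(boolean process) throws IOException {
        long start = System.nanoTime();
        try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
            String line;
            while ((line = br.readLine()) != null) {
                if (process) {
                    parse(line); // only the second pass does the real work
                }
            }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    private static void parse(String line) {
        // placeholder for the actual per-line parsing/processing
    }
}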
And regarding "how to do this in the correct way" - I suggest checking out this question on softwareengineering.SE.
How to process contents from a large text file using multiple threads?
If your computer has enough RAM, I would do the following:
read the entire file into a variable (an ArrayList<String>, for example), using only one thread to read the whole file.
then launch one ExecutorService (with a thread pool that uses no more than the maximum number of cores that your computer can run simultaneously)
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService executorService = Executors.newFixedThreadPool(cores);
finally, divide the lines read among a limited number of callables/runnables and submit those callables/runnables to your ExecutorService (so all of them can execute simultaneously in your ExecutorService); a minimal sketch follows below.
unless your processing of lines uses I/O, I assume that you will reach near 100% CPU utilization, and none of your threads will be in a waiting state.
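Here is a minimal sketch of that idea, assuming the file fits in RAM; the file name and the process(...) method are placeholders for your actual work:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ReadThenProcess {
    public static void main(String[] args) throws Exception {
        // single reader thread; needs enough heap for the whole file
        List<String> lines = Files.readAllLines(Paths.get("file.txt"));

        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService executorService = Executors.newFixedThreadPool(cores);

        int chunk = (lines.size() + cores - 1) / cores; // ceiling division: one slice per core
        for (int i = 0; i < lines.size(); i += chunk) {
            List<String> slice = lines.subList(i, Math.min(i + chunk, lines.size()));
            executorService.submit(() -> slice.forEach(ReadThenProcess::process));
        }

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void process(String line) {
        // placeholder for the real per-line work
    }
}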
Do you want even faster processing?
Scaling vertically is the easiest option: buy more RAM, a better CPU (with more cores), and use a solid-state drive.
Maybe all threads are accessing the same shared resource concurrently, which makes it more contentious.
One thing you can try: have the reader thread assign each line a key and submit the work in a partitioned way, so there is less contention.
public void execute(Runnable command) {
    // Runnable itself has no getKey(); this assumes a keyed task type
    // (see the KeyedRunnable sketch after the worker code below).
    final int key = ((KeyedRunnable) command).getKey();
    // some code to check if it is already running
    final int index = key != Integer.MIN_VALUE ? Math.abs(key) % size : 0;
    workers[index].execute(command);
}
Create each worker with its own queue, so that tasks which must run sequentially are routed to the same worker.
private final AtomicBoolean scheduled = new AtomicBoolean(false);
private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maximumQueueSize);

public void execute(Runnable command) {
    long timeout = 0;
    TimeUnit timeUnit = TimeUnit.SECONDS;
    if (command instanceof TimeoutRunnable) {
        TimeoutRunnable timeoutRunnable = ((TimeoutRunnable) command);
        timeout = timeoutRunnable.getTimeout();
        timeUnit = timeoutRunnable.getTimeUnit();
    }
    boolean offered;
    try {
        if (timeout == 0) {
            offered = workQueue.offer(command);
        } else {
            offered = workQueue.offer(command, timeout, timeUnit);
        }
    } catch (InterruptedException e) {
        throw new RejectedExecutionException("Thread is interrupted while offering work");
    }
    if (!offered) {
        throw new RejectedExecutionException("Worker queue is full!");
    }
    schedule();
}

private void schedule() {
    // if it is already scheduled, we don't need to schedule it again.
    if (scheduled.get()) {
        return;
    }
    if (!workQueue.isEmpty() && scheduled.compareAndSet(false, true)) {
        try {
            executor.execute(this);
        } catch (RejectedExecutionException e) {
            scheduled.set(false);
            throw e;
        }
    }
}

public void run() {
    try {
        Runnable r;
        do {
            r = workQueue.poll();
            if (r != null) {
                r.run();
            }
        } while (r != null);
    } finally {
        scheduled.set(false);
        schedule();
    }
}
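For context, here is a minimal sketch of the shape those snippets assume; the KeyedRunnable interface, the workers array and the single-thread executors are hypothetical names I made up to show the routing idea, not part of the original code:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Tasks expose a key, and the key picks one single-threaded worker,
// so tasks with the same key run in submission order without contending.
interface KeyedRunnable extends Runnable {
    int getKey();
}

class PartitionedExecutor {
    private final ExecutorService[] workers;

    PartitionedExecutor(int size) {
        workers = new ExecutorService[size];
        for (int i = 0; i < size; i++) {
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    public void execute(KeyedRunnable command) {
        int key = command.getKey();
        // guard against Math.abs(Integer.MIN_VALUE) being negative
        int index = key != Integer.MIN_VALUE ? Math.abs(key) % workers.length : 0;
        workers[index].execute(command);
    }

    public void shutdown() {
        for (ExecutorService worker : workers) {
            worker.shutdown();
        }
    }
}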
As suggested above, there is no fixed rule for thread pool size. But there are some suggestions and best practices that can be used, depending upon your use case.
CPU-Bound Tasks
For CPU-bound tasks, Goetz (2002, 2006) recommends
threads = number of CPUs + 1
IO-Bound Tasks
Working out the optimal number for IO-bound tasks is less obvious. During an IO-bound task, a CPU will be left idle (waiting or blocking). This idle time can be better used to initiate another IO-bound request.
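A small sketch of both rules of thumb; the IO-bound formula threads = cores * (1 + wait time / compute time) is the commonly quoted one from Goetz, and the wait/compute figures below are made-up values you would have to measure for your own task:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound work: roughly one thread per core, plus one (per the suggestion above)
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores + 1);

        // IO-bound work: threads = cores * (1 + waitTime / computeTime).
        // 50 ms / 5 ms are illustrative numbers only; measure your own task.
        double waitTimeMs = 50;
        double computeTimeMs = 5;
        int ioThreads = (int) (cores * (1 + waitTimeMs / computeTimeMs));
        ExecutorService ioPool = Executors.newFixedThreadPool(ioThreads);

        System.out.println("CPU pool size: " + (cores + 1) + ", IO pool size: " + ioThreads);

        cpuPool.shutdown();
        ioPool.shutdown();
    }
}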
Related
I'm writing a sliding window protocol:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ABC {
    static boolean status_1 = true;

    public static void main(String[] args) {
        BlockingQueue<String> block1 = new LinkedBlockingQueue<String>(7); // size 7
        Thread a1 = new Thread(new receive(block1));
        Thread a2 = new Thread(new send(block1));
        a2.start();
        a1.start();
    }
}

class receive implements Runnable {
    BlockingQueue<String> block;

    public receive(BlockingQueue<String> block) {
        this.block = block;
    }

    @Override
    public void run() {
        while (true) {
            try {
                System.out.println("out: " + block.size() + " " + block.take());
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}
class send implements Runnable {
    BlockingQueue<String> block;

    public send(BlockingQueue<String> block) {
        this.block = block;
    }

    @Override
    public void run() {
        InputStreamReader in = new InputStreamReader(System.in);
        BufferedReader bufferedReader = new BufferedReader(in);
        int i = 0;
        String e;
        while (true) {
            try {
                e = "" + i++;
                System.out.println(e);
                block.put(e);
                if (i == 1000) {
                    break; // test with 1000 numbers
                }
            } catch (InterruptedException f) {
                // TODO Auto-generated catch block
                f.printStackTrace();
            }
        }
    }
}
In my example I used a BlockingQueue to do the task, but it lags a lot; the queue the receive thread sees is always at its full size.
Is there any queue in Java that could do this with better performance for real-time UDP?
The thing is that you have no guarantee that the "sending" thread will work at the "same speed" as the "receiving" thread. Parallel thread execution is non-deterministic, but you assume otherwise.
You have introduced a logical synchronization between the threads by assuming that both will work at the same speed: 1 item put, 1 item taken, 1 item put, 1 item taken, and so on.
So this accidentally works for you with put but not with offer. According to the docs (https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html), put will block until the queue has space to accept the new element; that synchronization is missing when you use offer, resulting in dropped packets.
So basically what is happening here is that the sending thread occupies more CPU time, producing more data than the receiver can consume, which results in dropped data.
It is not about queue performance at all.
Basically your code is not a great example, as there is no natural network latency etc. This code might work up to a point if we introduced network latency. You can emulate that by adding Thread.sleep(ms) in the producer thread between put calls.
As a side note, stick to Java's naming conventions.
I think your problem is the measuring method. You use block.size() to determine the queue's fill level. The size() method is a relatively long-running operation on the queue, which leads to the send thread running away.
If you remove the size output you will see a fairly evenly distributed output between send and receive.
By the way, using System.out also disturbs your experiment because of the synchronization on the console output stream. A better approach would be to use independent output streams with some timing information.
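For example, a minimal measurement sketch (the class name and the 1000-item count are just assumptions) that keeps println off the hot path and reports a single timing at the end:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueThroughputProbe {
    public static void main(String[] args) throws InterruptedException {
        final int items = 1000;
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(7);
        CountDownLatch done = new CountDownLatch(items);

        Thread receiver = new Thread(() -> {
            try {
                for (int i = 0; i < items; i++) {
                    queue.take();        // no println and no size() on the hot path
                    done.countDown();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        long start = System.nanoTime();
        for (int i = 0; i < items; i++) {
            queue.put(String.valueOf(i));
        }
        done.await(); // wait until the receiver has taken everything
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(items + " items transferred in " + elapsedMs + " ms");
    }
}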
In my app there are 2 phases: one downloads some big data, and the other manipulates it.
So I created 2 classes which implement Runnable, ImageDownloader and ImageManipulator, and they share a downloadedImagesBlockingQueue:
public class ImageDownloader implements Runnable {
    private ArrayBlockingQueue<ImageBean> downloadedImagesBlockingQueue;
    private ArrayBlockingQueue<String> imgUrlsBlockingQueue;

    public ImageDownloader(ArrayBlockingQueue<String> imgUrlsBlockingQueue, ArrayBlockingQueue<ImageBean> downloadedImagesBlockingQueue) {
        this.downloadedImagesBlockingQueue = downloadedImagesBlockingQueue;
        this.imgUrlsBlockingQueue = imgUrlsBlockingQueue;
    }

    @Override
    public void run() {
        while (!this.imgUrlsBlockingQueue.isEmpty()) {
            try {
                String imgUrl = this.imgUrlsBlockingQueue.take();
                ImageBean imageBean = doYourThing(imgUrl);
                this.downloadedImagesBlockingQueue.add(imageBean);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
public class ImageManipulator implements Runnable {
    private ArrayBlockingQueue<ImageBean> downloadedImagesBlockingQueue;
    private AtomicInteger capacity;

    public ImageManipulator(ArrayBlockingQueue<ImageBean> downloadedImagesBlockingQueue,
                            AtomicInteger capacity) {
        this.downloadedImagesBlockingQueue = downloadedImagesBlockingQueue;
        this.capacity = capacity;
    }

    @Override
    public void run() {
        while (capacity.get() > 0) {
            try {
                ImageBean imageBean = downloadedImagesBlockingQueue.take(); // <- HERE I GET THE DEADLOCK
                capacity.decrementAndGet();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // ....
        }
    }
}
public class Main {
    public static void main(String[] args) {
        String[] imageUrls = new String[]{"url1", "url2"};
        int capacity = imageUrls.length;
        ArrayBlockingQueue<String> imgUrlsBlockingQueue = initImgUrlsBlockingQueue(imageUrls, capacity);
        ArrayBlockingQueue<ImageBean> downloadedImagesBlockingQueue = new ArrayBlockingQueue<>(capacity);

        ExecutorService downloaderExecutor = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            Runnable worker = new ImageDownloader(imgUrlsBlockingQueue, downloadedImagesBlockingQueue);
            downloaderExecutor.execute(worker);
        }
        downloaderExecutor.shutdown();

        ExecutorService manipulatorExecutor = Executors.newFixedThreadPool(3);
        AtomicInteger manipulatorCapacity = new AtomicInteger(capacity);
        for (int i = 0; i < 3; i++) {
            Runnable worker = new ImageManipulator(downloadedImagesBlockingQueue, manipulatorCapacity);
            manipulatorExecutor.execute(worker);
        }
        manipulatorExecutor.shutdown();

        while (!downloaderExecutor.isTerminated() && !manipulatorExecutor.isTerminated()) {
        }
    }
}
The deadlock happens because of this scenario:
t1 checks capacity; it's 1.
t2 checks; it's 1.
t3 checks; it's 1.
t2 takes, sets capacity to 0, continues with the flow and eventually exits.
t1 and t3 are now deadlocked, because nothing more will be added to the downloadedImagesBlockingQueue.
Eventually I want something like this: when the capacity is reached and the queue is empty, break out of the while loop and terminate gracefully.
Setting "is the queue empty" as the only condition won't work, because at the start it is empty, until some ImageDownloader puts an ImageBean into the queue.
There are a couple of things you can do to prevent deadlock:
Use a LinkedBlockingQueue which has a capacity
Use offer to add to the queue which does not block
Use drainTo or poll to take items from the queue which are not blocking
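As a minimal sketch of that non-blocking style (the queue size, the timeouts and the String stand-in for ImageBean are all assumptions):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class NonBlockingHandoff {
    public static void main(String[] args) throws InterruptedException {
        // stand-in for downloadedImagesBlockingQueue; String replaces ImageBean here
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(2);

        // producer side: offer with a timeout instead of a blocking put/add
        boolean accepted = queue.offer("image-1", 1, TimeUnit.SECONDS);
        if (!accepted) {
            System.out.println("queue stayed full; handle it (retry, drop, log)");
        }

        // consumer side: poll with a timeout instead of a blocking take
        String next = queue.poll(1, TimeUnit.SECONDS);
        if (next == null) {
            System.out.println("nothing arrived in time; re-check the termination condition");
        } else {
            System.out.println("got " + next);
        }
    }
}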
There are also some tips you might want to consider:
Use a ThreadPool:
final ExecutorService executorService = Executors.newFixedThreadPool(4);
If you use a fixed-size ThreadPool you can add "poison pills" (one per worker thread) when you have finished adding data to the queue, and check for them when you poll.
Using a ThreadPool is as simple as this:
final ExecutorService executorService = Executors.newFixedThreadPool(4);
final Future<?> result = executorService.submit(new Runnable() {
    @Override
    public void run() {
    }
});
There is also the lesser-known ExecutorCompletionService which abstracts this whole process. More info here.
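A small sketch of how ExecutorCompletionService could look in a download/manipulate setup like this one (the simulated tasks and URL strings are placeholders):
import java.util.concurrent.*;

public class CompletionServiceExample {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        CompletionService<String> completion = new ExecutorCompletionService<>(pool);

        // submit the "download" tasks; here they just simulate work
        String[] urls = {"url1", "url2", "url3"};
        for (String url : urls) {
            completion.submit(() -> "downloaded " + url);
        }

        // take() hands results back in completion order, so the "manipulate"
        // phase never has to guess whether anything is still coming
        for (int i = 0; i < urls.length; i++) {
            String result = completion.take().get();
            System.out.println(result);
        }
        pool.shutdown();
    }
}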
You don't need the capacity in your consumer. It's now read and updated in multiple threads, which causes the synchronization issue.
initImgUrlsBlockingQueue creates the url blocking queue with capacity number of URL items. (Right?)
ImageDownloader consumes the imgUrlsBlockingQueue and produces images. It terminates when all the URLs are downloaded, or, if capacity means the number of images that should be downloaded (because there may be some failures), when it has added capacity images.
Before ImageDownloader terminates, it adds a marker to the downloadedImagesBlockingQueue, for example a dedicated sentinel instance: static final ImageBean marker = new ImageBean(). (A null element won't work, since a BlockingQueue does not accept nulls.)
Each ImageManipulator drains the queue using the following construct, and when it sees the marker element, it adds it back to the queue and terminates.
// use identity comparison
while ((imageBean = downloadedImagesBlockingQueue.take()) != marker) {
// process image
}
downloadedImagesBlockingQueue.add(marker);
Note that the BlockingQueue promises that each of its method calls is atomic; however, if you check the capacity first and then consume an element based on that check, the combined action is not atomic.
Well, I used some of the features suggested, but this is the complete solution for me, the one which does not busy-wait and instead waits until the Downloader notifies it.
public ImageManipulator(LinkedBlockingQueue<ImageBean> downloadedImagesBlockingQueue,
                        LinkedBlockingQueue<ImageBean> manipulatedImagesBlockingQueue,
                        AtomicInteger capacity,
                        ManipulatedData manipulatedData,
                        ReentrantLock downloaderReentrantLock,
                        ReentrantLock manipulatorReentrantLock,
                        Condition downloaderNotFull,
                        Condition manipulatorNotFull) {
    this.downloadedImagesBlockingQueue = downloadedImagesBlockingQueue;
    this.manipulatedImagesBlockingQueue = manipulatedImagesBlockingQueue;
    this.capacity = capacity;
    this.downloaderReentrantLock = downloaderReentrantLock;
    this.manipulatorReentrantLock = manipulatorReentrantLock;
    this.downloaderNotFull = downloaderNotFull;
    this.manipulatorNotFull = manipulatorNotFull;
    this.manipulatedData = manipulatedData;
}
@Override
public void run() {
    while (capacity.get() > 0) {
        downloaderReentrantLock.lock();
        if (capacity.get() > 0) { // re-checks that the value is still up to date
            ImageBean imageBean = downloadedImagesBlockingQueue.poll();
            if (imageBean != null) { // will be null if no downloader has finished its work yet (successfully downloaded or not)
                capacity.decrementAndGet();
                if (capacity.get() == 0) { // signal all the manipulators to wake up and stop waiting for downloaded images
                    downloaderNotFull.signalAll();
                }
                downloaderReentrantLock.unlock();
                if (imageBean.getOriginalImage() != null) { // the downloader sets it to null if it fails to download it
                    // business logic
                }
                manipulatedImagesBlockingQueue.add(imageBean);
                signalAllPersisters(); // signal the persisters (which have the same lock/unlock pattern as this manipulator)
            } else {
                try {
                    downloaderNotFull.await(); // the manipulator waits for a downloaded image - the downloader signals all manipulators (same as signalAllPersisters() here) when an imageBean is inserted into the queue
                    downloaderReentrantLock.unlock();
                } catch (InterruptedException e) {
                    logger.log(Level.ERROR, e.getMessage(), e);
                }
            }
        }
    }
    logger.log(Level.INFO, "Manipulator: " + Thread.currentThread().getId() + " Ended Gracefully");
}
private void signalAllPersisters() {
    manipulatorReentrantLock.lock();
    manipulatorNotFull.signalAll();
    manipulatorReentrantLock.unlock();
}
For full flow you can check this project on my github: https://github.com/roy-key/image-service/
Your issue is that you are trying to use a counter to track queue elements and aren't composing operations that need to be atomic. You are doing check, take, decrement. This allows the queue size and counter to desynchronize, and your threads block forever. It would be better to write a synchronization primitive that is 'closeable' so that you don't have to keep an associated counter. However, a quick fix would be to change it so you get and decrement the counter atomically:
while (capacity.getAndDecrement() > 0) {
    try {
        ImageBean imageBean = downloadedImagesBlockingQueue.take();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
In this case if there are 3 threads and only one element left in the queue then only one thread will atomically decrement the counter and see that it can take without blocking. Both other threads will see 0 or <0 and break out of the loop.
You also need to make all of your class instance variables final so that they have the correct memory visibility. You should also determine how you are going to handle interrupts rather than relying on the default print trace template.
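For instance, here is a minimal sketch of one conventional way to handle the interrupt (the class and queue names are made up): restore the interrupt flag and leave the loop, so the worker shuts down cooperatively instead of just printing a stack trace and looping again.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class InterruptAwareWorker implements Runnable {
    private final BlockingQueue<String> queue;

    public InterruptAwareWorker(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String item = queue.take();
                // process item ...
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve the interrupt status
                break;                              // stop gracefully instead of printing a trace
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        Thread worker = new Thread(new InterruptAwareWorker(queue));
        worker.start();
        queue.put("one item");
        Thread.sleep(100);
        worker.interrupt(); // the worker exits its loop instead of logging and spinning forever
        worker.join();
    }
}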
Below is my code to extract text from a text file and display it on the console.
Could someone please tell me how to make this program run on multiple threads simultaneously?
I would also like to know whether multiple threads are actually being used in performing the task, since the time taken to run it varies every time I run it.
//Code
import java.io.*;
import java.util.*;

class Extract {
    static int i = 0;
    FileInputStream in;
    BufferedReader br;
    ArrayList<String> stringList;
    String li;

    Extract() throws FileNotFoundException {
        in = new FileInputStream("C:\\Users\\sputta\\workspace\\Sample\\src\\threads.txt"); // assign the field, don't shadow it with a local
        br = new BufferedReader(new InputStreamReader(in));
        stringList = new ArrayList<String>();
        li = " ";
    }

    void call() {
        try {
            while (li != null) {
                String str = br.readLine();
                stringList.add(str);
                li = stringList.get(i);
                if (li != null) {
                    System.out.println(li);
                    i++;
                }
            }
            Thread.sleep(1000);
            in.close();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
class Caller implements Runnable {
    Extract target;
    Thread t;

    public Caller(Extract targ) {
        target = targ;
        t = new Thread(this);
        t.start();
        System.out.println(t.isAlive());
    }

    public void run() {
        synchronized (target) { // synchronized block
            target.call();
        }
    }
}
public class Sample {
    public static void main(String args[]) throws FileNotFoundException {
        long startTime = System.currentTimeMillis();
        System.out.println(startTime);
        Extract target = new Extract();
        Caller ob1 = new Caller(target);
        Caller ob2 = new Caller(target);
        Caller ob3 = new Caller(target);
        try {
            ob1.t.join();
            ob2.t.join();
            ob3.t.join();
        } catch (InterruptedException e) {
            System.out.println("Interrupted");
        }
    }
}
It does not make much sense performance-wise to have multiple threads reading from the same file, due to the inevitable input/output (I/O) bottleneck.
Two things that can be done to improve the situation:
"Split" the file into smaller pieces and assign each such "split" to a different thread. This is the approach followed by Hadoop, but it does require copying each "split" before processing, so it is only beneficial for large files (say, at least 100 MB each, or much more).
Use 1 thread to read from the file into a "prefetch" buffer, in memory, and then process the input from the buffer, via multiple other threads. A variation of this approach would be for the prefetch thread to "feed" each of the "consumer" threads with data, before each of them starts. Obviously, the relative allocation of prefetch vs. processing across the threads, will yield varying results, so further tuning would be necessary, depending on the application.
Both approaches have limitations and do not guarantee performance improvements in all cases.
Reading a text file line-by-line from a single thread can be done at a speed of over 1 million lines/sec, but still the bottleneck will remain in I/O, as already discussed.
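As an illustration of the second approach, here is a sketch with one prefetching reader thread and several consumers; the file name, the batch size, the thread count and the process(...) placeholder are all assumptions, and lines are handed over in chunks so the queue isn't hit once per line:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PrefetchingProcessor {
    private static final List<String> POISON = new ArrayList<>(); // end-of-input marker

    public static void main(String[] args) throws Exception {
        BlockingQueue<List<String>> queue = new ArrayBlockingQueue<>(64);
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                List<String> batch;
                while ((batch = queue.take()) != POISON) {
                    for (String line : batch) {
                        process(line);
                    }
                }
                queue.put(POISON); // pass the marker on so the other workers also stop
                return null;
            });
        }

        // the "prefetch" thread: here simply the main thread reading ahead of the workers
        try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
            List<String> batch = new ArrayList<>(1000);
            String line;
            while ((line = br.readLine()) != null) {
                batch.add(line);
                if (batch.size() == 1000) { // hand over lines in chunks, not one by one
                    queue.put(batch);
                    batch = new ArrayList<>(1000);
                }
            }
            if (!batch.isEmpty()) {
                queue.put(batch);
            }
        }
        queue.put(POISON);

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void process(String line) {
        // placeholder for the actual parsing/processing work
    }
}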
The producer is finite, as should be the consumer.
The problem is when to stop, not how to run.
Communication can happen over any type of BlockingQueue.
Can't rely on poisoning the queue (PriorityBlockingQueue)
Can't rely on locking the queue (SynchronousQueue)
Can't rely on offer/poll exclusively (SynchronousQueue)
There are probably even more exotic queues in existence.
Creates a queued seq on another (presumably lazy) seq s. The queued seq will produce a concrete seq in the background, and can get up to n items ahead of the consumer. n-or-q can be an integer n buffer size, or an instance of java.util.concurrent BlockingQueue. Note that reading from a seque can block if the reader gets ahead of the producer.
http://clojure.github.com/clojure/clojure.core-api.html#clojure.core/seque
My attempts so far + some tests: https://gist.github.com/934781
Solutions in Java or Clojure appreciated.
class Reader {
    private final ExecutorService ex = Executors.newSingleThreadExecutor();
    private final List<Object> completed = new ArrayList<Object>();
    private final BlockingQueue<Object> doneQueue = new LinkedBlockingQueue<Object>();
    private int pending = 0;

    public synchronized Object take() {
        removeDone();
        queue();
        Object rVal;
        if (completed.isEmpty()) {
            try {
                rVal = doneQueue.take();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            pending--;
        } else {
            rVal = completed.remove(0);
        }
        queue();
        return rVal;
    }

    private void removeDone() {
        Object current = doneQueue.poll();
        while (current != null) {
            completed.add(current);
            pending--;
            current = doneQueue.poll();
        }
    }

    private void queue() {
        while (pending < 10) {
            pending++;
            ex.submit(new Runnable() {
                @Override
                public void run() {
                    doneQueue.add(compute());
                }

                private Object compute() {
                    // do actual computation here
                    return new Object();
                }
            });
        }
    }
}
Not exactly an answer I'm afraid, but a few remarks and more questions. My first answer would be: use clojure.core/seque. The producer needs to communicate end-of-seq somehow for the consumer to know when to stop, and I assume the number of produced elements is not known in advance. Why can't you use an EOS marker (if that's what you mean by queue poisoning)?
If I understand your alternative seque implementation correctly, it will break when elements are taken off the queue outside your function, since channel and q will be out of step in that case: channel will hold more #(.take q) elements than there are elements in q, causing it to block. There might be ways to ensure channel and q are always in step, but that would probably require implementing your own Queue class, and it adds so much complexity that I doubt it's worth it.
Also, your implementation doesn't distinguish between normal EOS and abnormal queue termination due to thread interruption - depending on what you're using it for you might want to know which is which. Personally I don't like using exceptions in this way — use exceptions for exceptional situations, not for normal flow control.
In NetBeans, there's a new hint that says: Thread.sleep called in loop.
Question 1: How/when can it be a problem to sleep in a loop?
Question 2: If it's a problem, what should I do instead?
UPDATE: Question 3: Here's some code. Tell me in this case if I should be using something else instead of Thread.sleep in a loop. In short, this is used by a server which listens for client TCP connections. The sleep is used here in case the max number of sessions with clients is reached. In this situation, I want the application to wait until a free session becomes available.
public class SessionManager {
    private static final int DEFAULT_PORT = 7500;
    private static final int SLEEP_TIME = 200;

    private final DatabaseManager database = new DatabaseManager();
    private final ServerSocket serverSocket = new ServerSocket(DEFAULT_PORT);

    public SessionManager() throws IOException, SQLException {
    }

    public void listen() {
        while (true)
            if (Session.getSessionCount() < Session.getMaxSessionCount())
                try {
                    new Thread(new Session(database, serverSocket.accept())).start();
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            else
                try {
                    Thread.sleep(SLEEP_TIME);
                } catch (InterruptedException ex) {
                    ex.printStackTrace();
                }
    }

    public static void main(String[] args) throws IOException, SQLException {
        new SessionManager().listen();
    }
}
Calling sleep in a loop typically leads to poor performance. For example:
while (true) {
    if (stream.available() > 0) {
        // read input
    }
    sleep(MILLISECONDS);
}
If MILLISECONDS is too large, then this code will take a long time to realize that input is available.
If MILLISECONDS is too small, then this code will waste a lot of system resources checking for input that hasn't arrived yet.
Other uses of sleep in a loop are typically questionable as well. There's usually a better way.
If it's a problem, what should I do instead?
Post the code and maybe we can give you a sensible answer.
EDIT
IMO, a better way to solve the problem is to use a ThreadPoolExecutor.
Something like this:
public void listen() {
    BlockingQueue<Runnable> queue = new SynchronousQueue<>();
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, Session.getMaxSessionCount(), 100, TimeUnit.SECONDS, queue);
    while (true) {
        try {
            executor.submit(new Session(database, serverSocket.accept()));
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
This configures the executor to match the way your code currently works. There are a number of other ways you could do it; see the javadoc link above.
As others have said it depends on the usage. A legitimate use would be a program that is designed to do something every 10 seconds (but is not so critical that exact timing is needed). We have lots of these "utility apps" that import data and other such tasks every few minutes. This is an easy way to perform these tasks and we typically will set the sleep interval to be very low and use a counter so that the program stays responsive and can exit easily.
int count = 0;
while (true) {
    try {
        // Wait for 1 second.
        Thread.sleep(1000);
    } catch (InterruptedException ex) {
    }

    // Check to see if the program should exit due to other conditions.
    if (shouldExit())
        break;

    // Is 10 seconds up yet? If not, just loop back around.
    count++;
    if (count < 10) continue;

    // 10 seconds is up. Reset the counter and do something important.
    count = 0;
    this.doSomething();
}
I think I came across one completely legitimate use of the sleep() method in a loop.
We have a one-way connection between server and client. So when the client wants to achieve asynchronous communication with the server, it sends a message to the server and then periodically polls for a response. There needs to be some timeout interval.
Response resp = null;
for (int i = 0; i < POLL_REPEAT && resp == null; i++) {
    try {
        Thread.sleep(POLL_INTERVAL);
    } catch (InterruptedException ie) {
    }
    resp = server.getResponse(workflowId);
}
POLL_REPEAT * POLL_INTERVAL ~ TIMEOUT interval
How/when can it be a problem to sleep in a loop?
People sometimes employ it in place of proper synchronization methods (like wait/notify).
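As a tiny illustration of the wait/notify alternative (the Mailbox class below is a made-up example, not from the question): the consumer blocks until it is notified, instead of waking up on a timer to re-check.
public class Mailbox {
    private String message;

    public synchronized void put(String m) {
        message = m;
        notifyAll();                // wake any thread blocked in get()
    }

    public synchronized String get() throws InterruptedException {
        while (message == null) {   // guard against spurious wake-ups
            wait();                 // releases the lock and blocks until notified
        }
        String m = message;
        message = null;
        return m;
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox box = new Mailbox();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got: " + box.get());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        Thread.sleep(200);          // simulate the producer being slow
        box.put("hello");
        consumer.join();
    }
}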
If it's a problem, what should I do instead?
It depends on what you're doing. Although it's difficult for me to imagine a situation where doing this is the best approach, I guess that's possible too.
You can check Sun's concurrency tutorial on this subject.