NetBeans / Java / New hint: Thread.sleep called in loop - java

In NetBeans, there's a new hint that says: Thread.sleep called in loop.
Question 1: How/when can it be a problem to sleep in a loop?
Question 2: If it's a problem, what should I do instead?
UPDATE: Question 3: Here's some code. Tell me whether, in this case, I should be using something other than Thread.sleep in a loop. In short, this is used by a server which listens for client TCP connections. The sleep is used here in case the maximum number of sessions with clients is reached; in this situation, I want the application to wait until a free session becomes available.
public class SessionManager {

    private static final int DEFAULT_PORT = 7500;
    private static final int SLEEP_TIME = 200;

    private final DatabaseManager database = new DatabaseManager();
    private final ServerSocket serverSocket = new ServerSocket(DEFAULT_PORT);

    public SessionManager() throws IOException, SQLException
    {
    }

    public void listen()
    {
        while (true)
            if (Session.getSessionCount() < Session.getMaxSessionCount())
                try
                {
                    new Thread(new Session(database, serverSocket.accept())).start();
                }
                catch (IOException ex) { ex.printStackTrace(); }
            else
                try
                {
                    Thread.sleep(SLEEP_TIME);
                }
                catch (InterruptedException ex) { ex.printStackTrace(); }
    }

    public static void main(String[] args) throws IOException, SQLException
    {
        new SessionManager().listen();
    }
}

Calling sleep in a loop typically leads to poor performance. For example:
while (true) {
    if (stream.available() > 0) {
        // read input
    }
    sleep(MILLISECONDS);
}
If MILLISECONDS is too large, then this code will take a long time to realize that input is available.
If MILLISECONDS is too small, then this code will waste a lot of system resources checking for input that hasn't arrived yet.
Other uses of sleep in a loop are typically questionable as well. There's usually a better way.
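For the polling example above, one alternative is to rely on the blocking behavior of the read itself rather than checking available() and sleeping. A minimal sketch, assuming a dedicated reader thread and an ordinary InputStream:

import java.io.IOException;
import java.io.InputStream;

class BlockingReader {
    // read() blocks until data arrives or the stream ends, so no sleep/poll loop is needed.
    static void consume(InputStream stream) throws IOException {
        byte[] buffer = new byte[4096];
        int n;
        while ((n = stream.read(buffer)) != -1) {
            // process buffer[0..n) here
        }
    }
}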
If it's a problem, what should I do instead?
Post the code and maybe we can give you a sensible answer.
EDIT
IMO, a better way to solve the problem is to use a ThreadPoolExecutor.
Something like this:
public void listen() {
    BlockingQueue<Runnable> queue = new SynchronousQueue<Runnable>();
    ThreadPoolExecutor executor = new ThreadPoolExecutor(
            1, Session.getMaxSessionCount(), 100, TimeUnit.SECONDS, queue);
    while (true) {
        try {
            executor.submit(new Session(database, serverSocket.accept()));
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
This configures the executor to match the way your code currently works. There are a number of other ways you could do it; see the javadoc link above.
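If the goal is specifically to block until a session slot frees up (rather than handing tasks to an executor), a counting semaphore sized to the session limit is another option. This is only a sketch: it reuses Session, DatabaseManager and the port from the question, and it assumes a session gives its slot back when run() returns.

import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.Semaphore;

public class BoundedListener {

    private final Semaphore slots = new Semaphore(Session.getMaxSessionCount());
    private final DatabaseManager database = new DatabaseManager();
    private final ServerSocket serverSocket = new ServerSocket(7500);

    public BoundedListener() throws Exception {
    }

    public void listen() throws InterruptedException {
        while (true) {
            slots.acquire(); // blocks until a session slot is free; no sleep loop needed
            try {
                Session session = new Session(database, serverSocket.accept());
                new Thread(() -> {
                    try {
                        session.run();
                    } finally {
                        slots.release(); // free the slot when the session finishes
                    }
                }).start();
            } catch (IOException ex) {
                slots.release(); // give the slot back if accept failed
                ex.printStackTrace();
            }
        }
    }
}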

As others have said it depends on the usage. A legitimate use would be a program that is designed to do something every 10 seconds (but is not so critical that exact timing is needed). We have lots of these "utility apps" that import data and other such tasks every few minutes. This is an easy way to perform these tasks and we typically will set the sleep interval to be very low and use a counter so that the program stays responsive and can exit easily.
int count = 0;
while (true) {
    try {
        // Wait for 1 second.
        Thread.sleep(1000);
    }
    catch (InterruptedException ex) {}

    // Check to see if the program should exit due to other conditions.
    if (shouldExit())
        break;

    // Is 10 seconds up yet? If not, just loop back around.
    count++;
    if (count < 10) continue;

    // 10 seconds is up. Reset the counter and do something important.
    count = 0;
    this.doSomething();
}

I think I came across one completely legitimate use of the sleep() method in a loop.
We have a one-way connection between server and client. So when the client wants to achieve asynchronous communication with the server, it sends a message to the server and then periodically polls for a response. There needs to be some timeout interval.
Response resp = null;
for (int i = 0; i < POLL_REPEAT && resp == null; i++) {
    try {
        Thread.sleep(POLL_INTERVAL);
    } catch (InterruptedException ie) {
    }
    resp = server.getResponse(workflowId);
}
POLL_REPEAT * POLL_INTERVAL ~ TIMEOUT interval
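A variant of the same polling loop with an explicit deadline, so the overall timeout does not depend on how long each getResponse call takes. It uses the server, workflowId, Response and POLL_INTERVAL names from the snippet above, assumes java.util.concurrent.TimeUnit is imported, and picks 30 seconds as an arbitrary timeout:

Response resp = null;
long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(30);
while (resp == null && System.nanoTime() < deadline) {
    try {
        Thread.sleep(POLL_INTERVAL);
    } catch (InterruptedException ie) {
        Thread.currentThread().interrupt(); // preserve the interrupt and stop waiting
        break;
    }
    resp = server.getResponse(workflowId);
}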

How/when can it be a problem to sleep in a loop?
People sometimes employ it in place of proper synchronization methods (like wait/notify).
If it's a problem, what should I do instead?
Depends on what you're doing. Although it's difficult for me to imagine a situation where doing this is the best approach, I guess that's possible too.
You can check Sun's concurrency tutorial on this subject.
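As a rough illustration of the wait/notify alternative mentioned above, here is a minimal sketch of one thread blocking until a condition holds instead of sleeping in a loop (the class and names are invented for the example):

class SlotGate {
    private int freeSlots;

    SlotGate(int slots) { this.freeSlots = slots; }

    // Blocks until a slot is available instead of polling with sleep().
    synchronized void acquire() throws InterruptedException {
        while (freeSlots == 0) {
            wait(); // releases the lock and waits for a notify
        }
        freeSlots--;
    }

    // Called when a slot is given back; wakes up waiting threads.
    synchronized void release() {
        freeSlots++;
        notifyAll();
    }
}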

Related

java multithread performance on sync object

I am trying to test multithreading performance with a synchronized object. However,
with 1 thread or with 16 threads the execution time is the same.
The rest of the code is here.
https://codeshare.io/5oJ6Ng
public void run() {
    start = new Date().getTime();
    System.out.println(start);
    while (threadlist.size() < 9000) { //16 or more
        // try{Thread.sleep(100);}catch (Exception f){}
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (add(1,3) < 1000000);
                end = new Date().getTime();
                System.out.println((end-start));
            }
        });
        threadlist.add(t);
        while( threadlist.iterator().hasNext()){
            threadlist.iterator().next().start();
            try{threadlist.iterator().next().join();}catch (Exception a){}
        }
    }
}
There are some issues with your code. First:
public void run() {
    while (true) {
        add(1, 3);
    }
}
Those threads never stop working; I would suggest rewriting your logic to:
public void run() {
    while (add(1,3) < 1000000);
    System.out.println("now 1000000");
}

public int add(int val1, int val2) {
    synchronized (this) {
        this.sum1 += val1;
        this.sum2 += val2;
        return this.sum1 + this.sum2;
    }
}
You start the threads, but you never call join; eventually you will need to do that.
You are only creating 1 thread instead of the 16 that you wanted:
if (threadlist.size() < 1)
you want
if (threadlist.size() < 16)
Finally, do not expect any performance gain with this code, since you are synchronizing on the object:
synchronized (this){...}
So basically your add method is being run sequentially and not in parallel, since threads will wait on synchronized (this) and only one thread at a time runs inside the block of code wrapped by the synchronized statement.
Try to measure your time by adding start = new Date().getTime(); before the parallel region, and end = new Date().getTime(); after.
You can simplify your code to:
public void run() {
    start = new Date().getTime();
    System.out.println(start);
    while (threadlist.size() < 16) {
        Thread t = new Thread(() -> {
            while (add(1,3) < 1000000);
            System.out.println("now 1000000");
        });
        threadlist.add(t);
    }
    threadlist.forEach(Thread::start);
    threadlist.forEach(thr -> {
        try { thr.join(); }
        catch (InterruptedException e) { e.printStackTrace(); }
    });
    end = new Date().getTime();
    System.out.println("Time taken : " + (end - start));
}

public int add(int val1, int val2) {
    synchronized (this) {
        this.sum1 += val1;
        this.sum2 += val2;
        return this.sum1 + this.sum2;
    }
}
You've significantly updated your code since @dreamcrash answered.
The current version has the following issues:
while( threadlist.iterator().hasNext()) {
    threadlist.iterator().next().start();
    try{threadlist.iterator().next().join();}catch (Exception a){}
}
This starts a thread and then immediately sits around, twiddling its thumbs until that thread is completely done with its job, and only then fires up another thread. Therefore, you never have more than 1 active thread at a time.
catch (Exception a){}
You're learning / debugging, and you do this? Oh dear. Don't. Don't ever write a catch block like that. Update your IDE or your muscle memory: the right "I don't want to think about exceptions right now" code is catch (Exception a) { throw new RuntimeException("Unhandled", a); }. To be clear, this isn't the problem, but this is such a bad habit, it needed to be called out.
synchronized (this) {
Even if you fix the 'join' issue I mentioned earlier, I really doubt this will ever run any faster. This synchronized call is important, but it also causes so much blockage that you're likely to see zero actual benefit here.
More generally, the calculation you are trying to speed up involves an accumulator.
An accumulator is another way of saying 'parallelising is utterly impossible here, it is hopeless'.
The algorithm cannot involve a single shared accumulator if you want to parallelise it, which is what multithreading (at least, if the aim of the multiple threads is to speed things up) is about. This algorithm cannot be made any faster with threads, period.
Usually algorithms can be rewritten to stop relying on accumulators. But this is clearly an exercise to see an effect, so just find something else, really. Don't lock on a single object for the entire calculation: only one thread is ever actually doing work, all the others are just waiting.
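As a rough illustration of rewriting away the shared accumulator, here is a sketch where each thread keeps a private partial sum and the results are only combined once at the end (the thread count and workload numbers are just picked for the example):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class PartialSums {
    public static void main(String[] args) throws Exception {
        int threads = 16;
        int additionsPerThread = 1_000_000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        // Each task sums into a local variable; no lock is needed while it runs.
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            results.add(pool.submit((Callable<Long>) () -> {
                long local = 0;
                for (int j = 0; j < additionsPerThread; j++) {
                    local += 4; // the same work as add(1, 3)
                }
                return local;
            }));
        }

        // Combine the partial results once, after the parallel work is done.
        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();
        }
        pool.shutdown();
        System.out.println("total = " + total);
    }
}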

How to interrupt thread to do work and then sleep after doing work?

I want to have a thread which does some I/O work when it is interrupted by the main thread and then goes back to sleep/wait until it is interrupted again.
So I have come up with an implementation, but it does not seem to work. The code snippet is below.
Note: flag is a public variable that can be accessed from both the thread class and the main class.
// in the main function this is how I am calling it
if (!flag) {
    thread.interrupt();
}
//this is how my thread class is implemented
class IOworkthread extends Thread {
    @Override
    public void run() {
        while (true) {
            try {
                flag = false;
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                flag = true;
                try {
                    // doing my I/O work
                } catch (Exception e1) {
                    // print the exception message
                }
            }
        }
    }
}
In the above snippet, the second try-catch block is also catching the InterruptedException. This means that both the first and the second try-catch blocks are catching the interrupt, but I only intended the interrupt to happen during the first try-catch block.
Can you please help me with this?
EDIT
If you feel that there can be another solution for my objective, I will be happy to know about it :)
If it's important to respond fast to the flag you could try the following:
class IOworkthread extends Thread { // implements Runnable would be better here, but that's another story
    @Override
    public void run() {
        while (true) {
            try {
                flag = false;
                Thread.sleep(1000);
            }
            catch (InterruptedException e) {
                flag = true;
            }
            // after the catch block the interrupted state of the thread should be reset and there should be no exceptions here
            try {
                // doing I/O work
            }
            catch (Exception e1) {
                // print the exception message
                // here of course other exceptions could appear but if there is no Thread.sleep() used here there should be no InterruptedException in this block
            }
        }
    }
}
This should behave differently because when the InterruptedException is caught, the interrupted flag of the thread is reset, so after the catch block the I/O work can run without being hit by the interrupt again.
It does sound like a producer/consumer construct. You seem to have it somewhat the wrong way around; the IO should be driving the algorithm. Since you stay very abstract about what your code actually does, I'll need to stick to that.
So let's say your "distributed algorithm" works on data of type T; that means that it can be described as a Consumer<T> (the method name in this interface is accept(T value)). Since it can run concurrently, you want to create several instances of that; this is usually done using an ExecutorService. The Executors class provides a nice set of factory methods for creating one, let's use Executors.newFixedThreadPool(parallelism).
Your "IO" thread runs to create input for the algorithm, meaning it is a Supplier<T>. We can run it in an Executors.newSingleThreadExecutor().
We connect these two using a BlockingQueue<T>; this is a FIFO collection. The IO thread puts elements in, and the algorithm instances take out the next one that becomes available.
This makes the whole setup look something like this:
void run() throws InterruptedException {
    int parallelism = 4; // or whatever
    ExecutorService algorithmExecutor = Executors.newFixedThreadPool(parallelism);
    ExecutorService ioExecutor = Executors.newSingleThreadExecutor();

    // this queue will accept up to 4 elements
    // this might need to be changed depending on performance of each
    BlockingQueue<T> queue = new ArrayBlockingQueue<T>(parallelism);

    ioExecutor.submit(new IoExecutor(queue));

    // take element from queue
    T nextElement = getNextElement(queue);
    while (nextElement != null) {
        final T element = nextElement;
        algorithmExecutor.submit(() -> new AlgorithmInstance().accept(element));
        nextElement = getNextElement(queue);
    }

    // wait until algorithms have finished running and clean up
    algorithmExecutor.shutdown();
    algorithmExecutor.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    ioExecutor.shutdown(); // the io thread should have terminated by now already
}
T getNextElement(BlockingQueue<T> queue) {
    int timeOut = 1; // adjust depending on your IO
    T result = null;
    while (result == null) {
        try {
            // poll returns null if nothing arrived within the timeout, so just retry
            result = queue.poll(timeOut, TimeUnit.SECONDS);
        } catch (InterruptedException e) {} // retry indefinitely, we will get a value eventually
    }
    return result;
}
Now this doesn't actually answer your question because you wanted to know how the IO thread can be notified when it can continue reading data.
This is achieved by the capacity limit of the BlockingQueue, which will not accept elements once it has been reached, meaning the IO thread can just keep reading and trying to put in elements; put() simply blocks until space is free.
abstract class IoExecutor<T> implements Runnable {
    private final BlockingQueue<T> queue;

    public IoExecutor(BlockingQueue<T> q) { queue = q; }

    public void run() {
        try {
            while (hasMoreData()) {
                T data = readData();
                // this will block if the queue is full, so IO will pause
                queue.put(data);
            }
            // signal the end of input (note: a real BlockingQueue rejects null, so a dedicated sentinel element would be needed here)
            queue.put(null);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    protected abstract boolean hasMoreData();
    protected abstract T readData();
}
As a result, during runtime you should at all times have 4 threads of the algorithm running, as well as (up to) 4 items in the queue waiting for one of the algorithm threads to finish and pick them up.

How to process contents from large text file using multiple threads?

I have to read a huge file containing text, around 3 GB (and 40 million lines). Just reading it is really fast:
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    while ((line = br.readLine()) != null) {
        //Nothing here
    }
}
For each line read by the above code I do some parsing on the string and process it further (a huge task). I tried to do that with multiple threads.
A) I have tried BlockingQueue like this
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    String line;
    BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
    int numThreads = 5;
    Consumer[] consumer = new Consumer[numThreads];
    for (int i = 0; i < consumer.length; i++) {
        consumer[i] = new Consumer(queue);
        consumer[i].start();
    }
    while ((line = br.readLine()) != null) {
        queue.put(line);
    }
    queue.put("exit");
} catch (FileNotFoundException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException | InterruptedException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
}
class Consumer extends Thread {
    private final BlockingQueue<String> queue;

    Consumer(BlockingQueue q) {
        queue = q;
    }

    public void run() {
        while (true) {
            try {
                String result = queue.take();
                if (result.equals("exit")) {
                    queue.put("exit");
                    break;
                }
                System.out.println(result);
            } catch (InterruptedException ex) {
                Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
}
This approach took more time than normal single threaded processing.
I am not sure why - what am I doing wrong?
B) I have tried ExecutorService:
try (BufferedReader br = new BufferedReader(new FileReader("file.txt"))) {
    String line;
    ExecutorService pool = Executors.newFixedThreadPool(10);
    while ((line = br.readLine()) != null) {
        pool.execute(getRunnable(line));
    }
    pool.shutdown();
} catch (FileNotFoundException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
} catch (IOException ex) {
    Logger.getLogger(ReadFileTest.class.getName()).log(Level.SEVERE, null, ex);
}

private static Runnable getRunnable(String run) {
    Runnable task = () -> {
        System.out.println(run);
    };
    return task;
}
This approach also takes more time than printing directly inside the while loop. What am I doing wrong?
What is the correct way to do it?
How can I efficiently process the read line with multiple threads?
Answering one part here - why is the BlockingQueue option slower.
It is important to understand that threads don't come for "free". There is always certain overhead required to get them up and "manage" them.
And of course, when you are actually using more threads than your hardware can support "natively" then context switching is added to the bill.
Beyond that, the BlockingQueue doesn't come for free either. You see, in order to preserve order, that ArrayBlockingQueue probably has to synchronize somewhere. Worst case, that means locking and waiting. Yes, the JVM and JIT are usually pretty good about such things; but again, a certain "percentage" gets added to the bill.
But just for the record, that shouldn't matter. From the javadoc:
This class supports an optional fairness policy for ordering waiting producer and consumer threads. By default, this ordering is not guaranteed. However, a queue constructed with fairness set to true grants threads access in FIFO order. Fairness generally decreases throughput but reduces variability and avoids starvation.
As you are not setting "fairness"
BlockingQueue queue = new ArrayBlockingQueue<>(100);
that shouldn't affect you. On the other hand: I am pretty sure you expected those lines to be processed in order, so you would actually want to go for
BlockingQueue<String> queue = new ArrayBlockingQueue<>(100, true);
thereby further slowing down the whole thing.
Finally: I agree with the comments given so far. Benchmarking such things is a complex undertaking, and many aspects influence the results. The most important question is definitely: where is your bottleneck?! Is it IO performance (then more threads don't help much!) - or is it really the overall processing time (and then the "correct" number of threads for processing should definitely speed things up)?
And regarding "how to do this in the correct way" - I suggest checking out this question on softwareengineering.SE.
How to process contents from large text file using multiple threads?
If your computer has enough RAM, I would do the following:
read the entire file into a variable (an ArrayList for example) - using only one thread to read the whole file.
then launch one ExecutorService (with a thread pool that uses no more than the maximum number of cores that your computer can run simultaneously)
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService executorService = Executors.newFixedThreadPool(cores);
finally, divide the lines read among a limited number of callables/runnables and submit those callables/runnables to your ExecutorService (so all of them can execute simultaneously).
Unless your processing of lines uses I/O, I assume that you will reach near 100% CPU utilization, and none of your threads will be in a waiting state.
Do you want even faster processing?
Scaling vertically is the easiest option: buy even more RAM, a better CPU (with more cores), and use a solid state drive.
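A minimal sketch of that read-then-partition approach, assuming the whole file fits in memory and using a hypothetical processLine method as a stand-in for the real per-line work:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ChunkedFileProcessor {
    public static void main(String[] args) throws Exception {
        // 1. Read the whole file with a single thread.
        List<String> lines = Files.readAllLines(Paths.get("file.txt"), StandardCharsets.UTF_8);

        // 2. One worker thread per core.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService executorService = Executors.newFixedThreadPool(cores);

        // 3. Give each worker a contiguous chunk of lines.
        int chunkSize = (lines.size() + cores - 1) / cores;
        for (int start = 0; start < lines.size(); start += chunkSize) {
            List<String> chunk = lines.subList(start, Math.min(start + chunkSize, lines.size()));
            executorService.submit(() -> chunk.forEach(ChunkedFileProcessor::processLine));
        }

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.HOURS);
    }

    // Hypothetical stand-in for the real parsing/processing of a single line.
    private static void processLine(String line) {
        // parse and process the line here
    }
}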
Maybe all threads are accessing the same shared resource concurrently, which makes things more contentious.
One thing you can try: have the reader thread assign each line a key and submit the work in a partitioned way, so there is less contention.
public void execute(Runnable command) {
final int key= command.getKey();
//Some code to check if it is runing
final int index = key != Integer.MIN_VALUE ? Math.abs(key) % size : 0;
workers[index].execute(command);
}
Create a worker with its own queue, so that tasks which need to run sequentially are executed in order:
private final AtomicBoolean scheduled = new AtomicBoolean(false);
private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(maximumQueueSize);

public void execute(Runnable command) {
    long timeout = 0;
    TimeUnit timeUnit = TimeUnit.SECONDS;
    if (command instanceof TimeoutRunnable) {
        TimeoutRunnable timeoutRunnable = ((TimeoutRunnable) command);
        timeout = timeoutRunnable.getTimeout();
        timeUnit = timeoutRunnable.getTimeUnit();
    }

    boolean offered;
    try {
        if (timeout == 0) {
            offered = workQueue.offer(command);
        } else {
            offered = workQueue.offer(command, timeout, timeUnit);
        }
    } catch (InterruptedException e) {
        throw new RejectedExecutionException("Thread is interrupted while offering work");
    }
    if (!offered) {
        throw new RejectedExecutionException("Worker queue is full!");
    }
    schedule();
}

private void schedule() {
    // if it is already scheduled, we don't need to schedule it again.
    if (scheduled.get()) {
        return;
    }

    if (!workQueue.isEmpty() && scheduled.compareAndSet(false, true)) {
        try {
            executor.execute(this);
        } catch (RejectedExecutionException e) {
            scheduled.set(false);
            throw e;
        }
    }
}

public void run() {
    try {
        Runnable r;
        do {
            r = workQueue.poll();
            if (r != null) {
                r.run();
            }
        } while (r != null);
    } finally {
        scheduled.set(false);
        schedule();
    }
}
As suggested above, there is no fixed rule for thread pool size, but there are some suggestions and best practices that can be used depending on your use case.
CPU Bound Tasks
For CPU bound tasks, Goetz (2002, 2006) recommends
threads = number of CPUs + 1
IO Bound Tasks
Working out the optimal number for IO bound tasks is less obvious. During an IO bound task, a CPU will be left idle (waiting or blocking). This idle time can be better used in initiating another IO bound request.
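One commonly cited sizing rule for mixed IO/CPU workloads (from Java Concurrency in Practice) estimates the pool size from the ratio of wait time to compute time. A small sketch of that arithmetic, with the wait and compute times as assumed example values:

class PoolSizeEstimate {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        double targetUtilization = 1.0; // fraction of the CPUs you want to keep busy
        double waitTime = 50.0;         // assumed milliseconds blocked on IO per task
        double computeTime = 5.0;       // assumed milliseconds of CPU work per task

        // threads = cores * utilization * (1 + wait/compute)
        int threads = (int) (cpus * targetUtilization * (1 + waitTime / computeTime));
        System.out.println("suggested pool size: " + threads);
    }
}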

Best queue for sliding windows in Java

I'm writing a sliding window protocol:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class ABC {
    static boolean status_1 = true;

    public static void main(String[] args) {
        BlockingQueue<String> block1 = new LinkedBlockingQueue<String>(7); // size 7
        Thread a1 = new Thread(new receive(block1));
        Thread a2 = new Thread(new send(block1));
        a2.start();
        a1.start();
    }
}
class receive implements Runnable {
    BlockingQueue<String> block;

    public receive(BlockingQueue<String> block) {
        this.block = block;
    }

    @Override
    public void run() {
        while (true) {
            try {
                System.out.println("out: " + block.size() + " " + block.take());
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
}
class send implements Runnable {
    BlockingQueue<String> block;

    public send(BlockingQueue<String> block) {
        this.block = block;
    }

    @Override
    public void run() {
        InputStreamReader in = new InputStreamReader(System.in);
        BufferedReader bufferedReader = new BufferedReader(in);
        int i = 0;
        String e;
        while (true) {
            try {
                e = "" + i++;
                System.out.println(e);
                block.put(e);
                if (i == 1000) {
                    break; //Test 1000 number
                }
            } catch (InterruptedException f) {
                // TODO Auto-generated catch block
                f.printStackTrace();
            }
        }
    }
}
In my example I used a BlockingQueue to do the task, but it lags a lot; the receive thread keeps seeing a full queue.
Is there any queue in Java that could do this task with better performance for real-time UDP?
The thing is that you have no guarantees that the "sending" thread will work at the "same speed" as the "receiving" thread. Parallel thread execution is non-deterministic; you assume otherwise.
You have introduced a logical synchronization between the threads by assuming that both will work at the same speed: 1 item put, 1 item taken, 1 item put, 1 item taken, and so on.
So by accident this works for you with put but not with offer. According to the docs https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html, put will block until the queue has space to accept the new element; that synchronization is missing when you use offer, resulting in dropped packets.
So basically what is happening here is that the sending thread occupies more CPU time, thus producing more data than the receiver can consume, resulting in dropped data.
It is not about queue performance at all.
Basically your code is not a great example, as there is no natural network latency etc. This code might work to some extent if we introduced network latency. You can emulate that by adding Thread.sleep(ms) in the producer thread between put calls.
As a side note, stick to Java's naming conventions.
I think your problem is the measuring method. You use block.size() to determine the queue fill level. The size() method is a relatively long-running operation on the queue, which leads to the send thread running away.
If you remove the size output you will see a fairly evenly distributed output between send and receive.
By the way, using System.out also disturbs your experiment because of the synchronization on the console output stream. A better approach would be to use independent output streams with some timing information.
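A rough sketch of that kind of measurement: run the same put/take exchange, but print a single timing summary at the end instead of logging every element through the shared System.out (the names and the 1000-element count are just taken over from the example above):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueTiming {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(7);
        final int count = 1000;

        Thread receiver = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    queue.take();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread sender = new Thread(() -> {
            try {
                for (int i = 0; i < count; i++) {
                    queue.put("" + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        long start = System.nanoTime();
        receiver.start();
        sender.start();
        receiver.join();
        sender.join();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // One summary line instead of logging every element through System.out.
        System.out.println(count + " elements transferred in " + elapsedMs + " ms");
    }
}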

Delaying an exception

I have a method that periodically (e.g. once every 10 secs) tries to connect to a server and read some data from it. The server might not be available all the time. If the server is not available, the method throws an exception.
What would be the best way to implement a wrapper method that doesn't throw an exception except if the server wasn't available for at least one minute?
Keep track of when the last time you successfully reached the server was. If the server throws an exception, catch it and compare to the last time you reached the server. If that time is more than a minute, rethrow the exception.
In pseudocode.
//Create Timer
//Start Timer
bool connected = false;
while (!connected) {
    try {
        //Connect To DB
        connected = true;
    }
    catch (Exception ex) {
        if (more than 1 minute has passed)
            throw new Exception(ex);
    }
}
You will have to record the time at which you originally tried to connect to the server and then catch the exception. If the time at which the exception is caught is more than the original time + 1 minute, rethrow the exception. If not, retry.
Ideally you can put a timeout on the call to the server. Failing that, do a Thread.sleep(600) in the catch block and try it again, and fail if the second attempt doesn't return.
Remember that exception handling is just a very specialized use of the usual "return" system. (For more technical details, read up on "monads".) If the exceptional situation you want to signal does not fit naturally into Java's exception handling system, it may not be appropriate to use exceptions.
You can keep track of error conditions the usual way: Keep a state variable, update it as needed with success/failure info, and respond appropriately as the state changes.
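A small sketch of that state-variable idea, tracking the time of the last successful call and only rethrowing once failures have lasted a minute (connect() and the one-minute threshold are stand-ins for the real call and policy):

import java.io.IOException;
import java.time.Duration;
import java.time.Instant;

class TolerantServerReader {
    private Instant lastSuccess = Instant.now();

    // Called periodically (e.g. every 10 seconds).
    void readFromServer() throws IOException {
        try {
            connect();                   // the real call that may fail
            lastSuccess = Instant.now(); // reset the failure window on success
        } catch (IOException e) {
            if (Duration.between(lastSuccess, Instant.now()).toMinutes() >= 1) {
                throw e;                 // server unreachable for at least a minute
            }
            // otherwise swallow the failure and let the next attempt try again
        }
    }

    private void connect() throws IOException {
        // placeholder for the actual server call
    }
}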
You could have a retry count, and if the desired count (6 in your case) has been reached, then throw an exception:
int count = 0;
CheckServer(count);

public void CheckServer(int count) {
    try
    {
        // connect to server
    }
    catch(Exception e)
    {
        if (count < MAX_ATTEMPTS) {
            // wait 10 seconds
            CheckServer(count + 1);
        }
        else {
            throw e;
        }
    }
}
You can set a boolean variable for whether or not the server connection has succeeded, and check it in your exception handler, like so:
class ServerTester : public Object
{
    private bool failing;
    private ServerConnection serverConnection;
    private Time firstFailure;

    public ServerTester(): failing(false)
    {
    }

    public void TestServer() throws ServerException
    {
        try
        {
            serverConnection.Connect();
            failing = false;
        }
        catch (ServerException e)
        {
            if (failing)
            {
                if (Time::GetTime() - firstFailure > 60)
                {
                    failing = false;
                    throw e;
                }
            }
            else
            {
                firstFailure = Time::GetTime();
                failing = true;
            }
        }
    }
}
I don't know what the actual time APIs are, since it's been a while since I last used Java. This will do what you ask, but something about it doesn't seem right. Polling for exceptions strikes me as a bit backwards, but since you're dealing with a server, I can't think of any other way off the top of my head.
