I have a thread sleep problem. Inside the thread's run method I have a synchronized block and a sleep.
Each thread increments or decrements the shared class's "value" field five times, one unit per iteration, sleeping after each update.
public class borr {
    public static void main(String[] args) {
        int times = 5;
        int sleeptime = 1000;
        int initial = 50;
        Shared shared = new Shared(initial);
        ThreadClass tIncrement = new ThreadClass(shared, times, sleeptime, true);
        ThreadClass tDecrement = new ThreadClass(shared, times, sleeptime, false);
        tIncrement.start();
        tDecrement.start();
    }
}

class Shared {
    int value = 0;

    public Shared(int value) {
        super();
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }
}
class ThreadClass extends Thread {
    Shared shared;
    int times = 0;
    int sleeptime = 0;
    boolean inc;

    public ThreadClass(Shared shared, int times, int sleeptime, boolean inc) {
        super();
        this.shared = shared;
        this.times = times;
        this.sleeptime = sleeptime;
        this.inc = inc;
    }

    public void run() {
        int aux;
        if (inc) {
            for (int i = 0; i < times; i++) {
                synchronized (shared) {
                    aux = shared.getValue() + 1;
                    shared.setValue(aux);
                    System.out.println("Increment, new value" + shared.getValue());
                    try {
                        Thread.sleep(sleeptime);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        } else {
            for (int i = 0; i < times; i++) {
                synchronized (shared) {
                    aux = shared.getValue() - 1;
                    shared.setValue(aux);
                    System.out.println("Decrement, new value" + shared.getValue());
                    try {
                        Thread.sleep(sleeptime);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    }
}
With the sleep inside the synchronized block, the output is all five increments followed by all five decrements: when one thread stops sleeping and starts a new iteration of the loop, shouldn't the other thread try to enter? Instead, it keeps looping until it is finished. But if I move the Thread.sleep out of the synchronized block, like this, the output alternates: increment, decrement, increment, decrement:
for (int i = 0; i < times; i++) {
    synchronized (shared) {
        aux = shared.getValue() - 1;
        shared.setValue(aux);
        System.out.println("Decrement, new value" + shared.getValue());
    }
    try {
        Thread.sleep(sleeptime);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This is bad:
for (...) {
    synchronized (some_lock_object) {
        ...
    }
}
The reason it's bad is: once some thread, A, gets into that loop, then every time it unlocks the lock, the very next thing it does is lock it again.
If the loop body takes any significant amount of time to execute, then any other thread, B, that's waiting for the lock will be put into a wait state by the operating system. Each time thread A releases the lock, thread B will start to wake up, but thread A will be able to re-acquire it before thread B gets a chance.
This is a classic example of starvation.
One way around the problem would be to use a ReentrantLock with a fair ordering policy instead of using a synchronized block. When threads compete for a fair lock, the winner always is the one that's been waiting the longest.
But, fair locks are expensive to implement. A far better solution is to always keep the body of any synchronized block as short and as sweet as possible. Usually, a thread should keep a lock locked for no longer than it takes to assign a small number of fields in some object.
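For completeness, here is a minimal sketch of what the fair-lock variant could look like for the code in the question; the class name and the lambda-based threads are mine, not the original poster's. With fairness enabled, the longest-waiting thread gets the lock next, so the two loops should (mostly) alternate even though each one re-requests the lock immediately.
import java.util.concurrent.locks.ReentrantLock;

public class FairCounterLoop {
    private static final ReentrantLock lock = new ReentrantLock(true); // true = fair ordering policy
    private static int value = 50; // same initial value as in the question

    static void loop(int times, int delta, String label) {
        for (int i = 0; i < times; i++) {
            lock.lock();
            try {
                value += delta;
                System.out.println(label + ", new value " + value);
            } finally {
                lock.unlock(); // always release in finally
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> loop(5, +1, "Increment")).start();
        new Thread(() -> loop(5, -1, "Decrement")).start();
    }
}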
In variant A you use two threads that ...

repeat 5 times
    enter a sync block
        increment
        wait 1 second

repeat 5 times
    enter a sync block
        decrement
        wait 1 second

In variant B you use two threads that ...

repeat 5 times
    enter a sync block
        increment
    wait 1 second

repeat 5 times
    enter a sync block
        decrement
    wait 1 second
In variant A both threads are active (= stay in a sync block) all the time.
In variant B both threads are sleeping most of the time.
As there is absolutely no guarantee which thread is executed next, it is not surprising that variants A and B behave so differently. While in A both threads could - in theory - be active in parallel, the second thread has little chance to become active, as leaving a synchronized block does not guarantee that a context switch happens at that moment (and that another thread is run). In variant B it is completely different: as both threads sleep most of the time, the runtime environment has no choice but to run the other thread while one is sleeping. A sleep will trigger a switch to another thread, as the VM tries to make the best use of the available CPU resources.
Nevertheless: the result AFTER both threads have run will be exactly the same. This is the only determinism you can rely on. Everything else depends on implementation details of how the VM handles threads and synchronized blocks, and can vary from OS to OS or from one VM implementation to another.
But if I move the Thread.sleep out of the synchronized block, like this, the output is increment, decrement, increment, decrement. The sleep is still inside each iteration of the loop, so shouldn't the result be the same in both cases?
When it stops sleeping and starts a new iteration of the loop, shouldn't the other thread try to enter?
They both try to enter.
And the other one is already in a wait state (i.e. not actively running) because it tried to enter before. Whereas the thread that has just released the lock can run on and grab the now-uncontested lock right back.
This is a race condition. When both threads want the lock at the same time, the system is free to choose one. It seems it picks the one that released it just a few instructions ago. Maybe you can change this by yield()ing. Maybe not. But either way, it is not specified/deterministic/fair. If you care about execution order, you need to explicitly schedule things yourself (as in the sketch below).
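If you really do want a guaranteed increment/decrement order, you have to encode it yourself, for example with wait/notifyAll on the shared object. A minimal sketch, with an illustrative "turn" flag that is not part of the original code:
class AlternatingCounter {
    private int value = 50;
    private boolean incrementsTurn = true; // illustrative: whose turn it is

    synchronized void step(boolean increment) throws InterruptedException {
        while (increment != incrementsTurn) {
            wait(); // not our turn: release the monitor and sleep until notified
        }
        value += increment ? 1 : -1;
        System.out.println((increment ? "Increment" : "Decrement") + ", new value " + value);
        incrementsTurn = !incrementsTurn; // hand the turn to the other thread
        notifyAll();
    }

    public static void main(String[] args) {
        AlternatingCounter c = new AlternatingCounter();
        Runnable inc = () -> { try { for (int i = 0; i < 5; i++) c.step(true); } catch (InterruptedException ignored) { } };
        Runnable dec = () -> { try { for (int i = 0; i < 5; i++) c.step(false); } catch (InterruptedException ignored) { } };
        new Thread(inc).start();
        new Thread(dec).start();
    }
}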
Related
I am learning multi-threads programming in java recently. And I don't understand why the following test case will fail. Any explanation will be much appreciated.
Here is MyCounter.java.
public class MyCounter {
    private int count;

    public synchronized void incrementSynchronized() throws InterruptedException {
        int temp = count;
        wait(100); // <-----
        count = temp + 1;
    }

    public int getCount() {
        return count;
    }
}
This is my unit test class.
public class MyCounterTest {
    @Test
    public void testSummationWithConcurrency() throws InterruptedException {
        int numberOfThreads = 100;
        ExecutorService service = Executors.newFixedThreadPool(10);
        CountDownLatch latch = new CountDownLatch(numberOfThreads);
        MyCounter counter = new MyCounter();
        for (int i = 0; i < numberOfThreads; i++) {
            service.submit(() -> {
                try {
                    counter.incrementSynchronized();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                latch.countDown();
            });
        }
        latch.await();
        assertEquals(numberOfThreads, counter.getCount());
    }
}
But if I remove wait(100) from the synchronized method incrementSynchronized, the test will succeed. I don't understand why wait(100) will affect the result.
Solomon's suggestion to use sleep is a good one. If you use sleep instead of wait, you should see the test pass.
Using wait causes the thread to relinquish the lock, allowing other threads to proceed and overwrite the value in count. When the thread's wait times out, it acquires the lock again, then writes a value to count that may be stale by now.
The typical usage of wait is when your thread can't do anything useful until some condition is met. Some other thread eventually satisfies that condition and a notification gets sent that will inform the thread it can resume work. In the meantime, since there is nothing useful the thread can do, it releases the lock it is holding (because other threads need the lock in order to make progress meeting the condition that the thread is waiting for) and goes dormant.
Sleep doesn't release the lock so there won't be interference from other threads. For either the sleeping case or the case where you delete the wait call, the lock is held for the duration of the operation, nothing else can change count, so it is threadsafe.
Be aware that in real life, outside of learning exercises, sleeping with a lock held is usually not a great idea. You want to minimize the time that a task holds a lock so you can get more throughput. Threads denying each other the use of a lock is not helpful.
Also be aware that getCount needs to be synchronized as well, since it is reading a value written by another thread.
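Putting those points together, here is a minimal corrected sketch of the counter (assuming the intent is simply to pause inside the critical section): sleep instead of wait, and a synchronized getter.
public class MyCounter {
    private int count;

    public synchronized void incrementSynchronized() throws InterruptedException {
        int temp = count;
        Thread.sleep(100); // sleep keeps the lock, so no other thread can touch count here
        count = temp + 1;
    }

    // also synchronized, so a reader sees the value written by the worker threads
    public synchronized int getCount() {
        return count;
    }
}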
ReentrantReadWriteLock has a fair and a non-fair (default) mode, but the documentation is hard for me to understand.
How can I understand it? It would be great if there were a code example to demonstrate it.
UPDATE
If I have one writing thread and many, many reading threads, which mode is better to use? If I use non-fair mode, is it possible that the writing thread will have little chance to get the lock?
Non-fair means that when the lock is ready to be obtained by a new thread, the lock gives no guarantees to the fairness of who obtains the lock (assuming there are multiple threads requesting the lock at the time). In other words, it is conceivable that one thread might be continuously starved because other threads always manage to arbitrarily get the lock instead of it.
Fair mode acts more like first-come-first-served, where threads are guaranteed some level of fairness that they will obtain the lock in a fair manner (e.g. before a thread that started waiting long after).
Edit
Here is an example program that demonstrates the fairness of locks (in that write lock requests for a fair lock are first come, first served). Compare the results when FAIR = true (the threads are always served in order) versus FAIR = false (the threads are sometimes served out of order).
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairLocking {
    public static final boolean FAIR = true;
    private static final int NUM_THREADS = 3;
    private static volatile int expectedIndex = 0;

    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock.WriteLock lock = new ReentrantReadWriteLock(FAIR).writeLock();
        // we grab the lock to start to make sure the threads don't start until we're ready
        lock.lock();
        for (int i = 0; i < NUM_THREADS; i++) {
            new Thread(new ExampleRunnable(i, lock)).start();
            // a cheap way to make sure that runnable 0 requests the first lock
            // before runnable 1
            Thread.sleep(10);
        }
        // let the threads go
        lock.unlock();
    }

    private static class ExampleRunnable implements Runnable {
        private final int index;
        private final ReentrantReadWriteLock.WriteLock writeLock;

        public ExampleRunnable(int index, ReentrantReadWriteLock.WriteLock writeLock) {
            this.index = index;
            this.writeLock = writeLock;
        }

        public void run() {
            while (true) {
                writeLock.lock();
                try {
                    // this sleep is a cheap way to make sure the previous thread loops
                    // around before another thread grabs the lock, does its work,
                    // loops around and requests the lock again ahead of it.
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    // ignored
                }
                if (index != expectedIndex) {
                    System.out.printf("Unexpected thread obtained lock! " +
                            "Expected: %d Actual: %d%n", expectedIndex, index);
                    System.exit(0);
                }
                expectedIndex = (expectedIndex + 1) % NUM_THREADS;
                writeLock.unlock();
            }
        }
    }
}
Edit (again)
Regarding your update: with non-fair locking it's not that a thread has a permanently low chance of getting the lock, but rather that there's a small chance it will have to wait a while.
Now, typically as the starvation period increases, the probability of that length of time actually occurring decreases... just as flipping a coin "heads" 10 consecutive times is less likely than flipping it "heads" 9 consecutive times.
But if the selection algorithm for multiple waiting threads were something non-randomized, like "the thread with the alphabetically-first name always gets the lock", then you might have a real problem, because the probability does not necessarily decrease as the thread gets more and more starved... if a coin is weighted towards "heads", 10 consecutive heads is essentially as likely as 9 consecutive heads.
I believe that in implementations of non-fair locking a somewhat "fair" coin is used. So the question really becomes fairness (and thus, latency) vs throughput. Using non-fair locking typically results in better throughput but at the expense of the occasional spike in latency for a lock request. Which is better for you depends on your own requirements.
When several threads are waiting for a lock and the lock has to select one of them to give access to the critical section:
In non-fair mode, it selects a thread without any criteria.
In fair mode, it selects the thread that has been waiting the longest.
Note: take into account that the behaviour explained above only applies to the lock() and unlock() methods. As the tryLock() method doesn't put the thread to sleep when the lock is in use, the fair attribute doesn't affect its functionality.
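A small sketch of that last point (the class name is illustrative): tryLock() returns immediately instead of joining the wait queue, so there is no waiting order for the fair policy to apply to.
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // fair mode
        lock.lock(); // main thread holds the lock
        try {
            Thread t = new Thread(() -> {
                // tryLock() does not wait, it just reports whether the lock was free
                // at this instant, so the fair queue never comes into play
                boolean acquired = lock.tryLock();
                System.out.println("acquired = " + acquired); // prints "acquired = false"
                if (acquired) {
                    lock.unlock();
                }
            });
            t.start();
            t.join();
        } finally {
            lock.unlock();
        }
    }
}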
public class MyStack2 {
    private int[] values = new int[10];
    private int index = 0;

    public synchronized void push(int x) {
        if (index <= 9) {
            values[index] = x;
            Thread.yield();
            index++;
        }
    }

    public synchronized int pop() {
        if (index > 0) {
            index--;
            return values[index];
        } else {
            return -1;
        }
    }

    public synchronized String toString() {
        String reply = "";
        for (int i = 0; i < values.length; i++) {
            reply += values[i] + " ";
        }
        return reply;
    }
}
public class Pusher extends Thread {
    private MyStack2 stack;

    public Pusher(MyStack2 stack) {
        this.stack = stack;
    }

    public void run() {
        for (int i = 1; i <= 5; i++) {
            stack.push(i);
        }
    }
}
public class Test {
    public static void main(String args[]) {
        MyStack2 stack = new MyStack2();
        Pusher one = new Pusher(stack);
        Pusher two = new Pusher(stack);
        one.start();
        two.start();
        try {
            one.join();
            two.join();
        } catch (InterruptedException e) {
        }
        System.out.println(stack.toString());
    }
}
Since the methods of MyStack2 class are synchronised, I was expecting the output as
1 2 3 4 5 1 2 3 4 5. But the output is indeterminate. Often it gives : 1 1 2 2 3 3 4 4 5 5
As per my understanding, when thread one is started it acquires a lock on the push method. Inside push(), thread one yields for some time. But does it release the lock when yield() is called? Now when thread two is started, would thread two acquire the lock before thread one completes execution? Can someone explain when thread one releases the lock on the stack object?
A synchronized method will only stop other threads from executing it while it is being executed. As soon as it returns other threads can (and often will immediately) get access.
The scenario to get your 1 1 2 2 ... could be:
Thread 1 calls push(1) and is allowed in.
Thread 2 calls push(1) and is blocked while Thread 1 is using it.
Thread 1 exits push(1).
Thread 2 gains access to push and pushes 1 but at the same time Thread 1 calls push(2).
Result 1 1 2 - you can clearly see how it continues.
When you say:
As per my understanding, when thread one is started it acquires a lock on the push method.
that is not quite right, in that the lock isn't just on the push method. The lock that the push method uses is on the instance of MyStack2 that push is called on. The methods pop and toString use the same lock as push. When a thread calls any of these methods on an object, it has to wait until it can acquire the lock. A thread in the middle of calling push will block another thread from calling pop. The threads are calling different methods to access the same data structure, using the same lock for all the methods that access the structure prevents the threads from accessing the data structure concurrently.
Once a thread gives up the lock on exiting a synchronized method the scheduler decides which thread gets the lock next. Your threads are acquiring locks and letting them go multiple times, every time a lock is released there is a decision for the scheduler to make. You can't make any assumptions about which will get picked, it can be any of them. Output from multiple threads is typically jumbled up.
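To make the shared-lock point concrete: a synchronized instance method is just shorthand for synchronizing on this, so every synchronized method of the same object contends for the same monitor. A minimal sketch (the Counter class is illustrative, not the poster's MyStack2):
class Counter {
    private int value;

    public synchronized void increment() { // implicit synchronized (this)
        value++;
    }

    public void decrement() {              // explicit form, exactly the same lock
        synchronized (this) {
            value--;
        }
    }

    public synchronized int get() {        // also the same lock, so reads are consistent
        return value;
    }
}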
It seems like you may have some confusion on exactly what the synchronized and yield keywords mean.
Synchronized means that only one thread can enter that code block at a time. Imagine it as a gate and you need a key to get through. Each thread as it enters takes the only key, and returns it when they are done. This allows the next thread to get the key and execute the code inside. It doesn't matter how long they are in the synchronized method, only one thread can enter at a time.
Yield suggests (and yes, it's only a suggestion) to the thread scheduler that the current thread can give up its allotted time and another thread can begin execution. It doesn't always happen that way, however.
In your code, even though the current thread suggests to the scheduler that it can give up its execution time, it still holds the key to the synchronized methods, and therefore the new thread cannot enter.
The unpredictable behavior comes from the yield not giving up the execution time as you predicted.
Hope that helped!
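To see that yield does not release the monitor, here is a small standalone sketch (class and variable names are mine): the second thread stays blocked at the synchronized block no matter how often the first one yields.
public class YieldHoldsLock {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                for (int i = 0; i < 5; i++) {
                    System.out.println("holder yields, but still owns the lock");
                    Thread.yield(); // gives up CPU time, NOT the monitor
                }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                // only reached after the holder leaves its synchronized block
                System.out.println("waiter finally got the lock");
            }
        });
        holder.start();
        Thread.sleep(100); // crude way to let the holder enter its block first
        waiter.start();
    }
}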
The program creates thread t0, which spawns thread t1, and subsequently threads t2 and t3 are created. After the execution of thread t3, the application never returns to the other threads spawned earlier (t0, t1, t2) and they are left stuck.
Why are the threads t0, t1, and t2 suspended?
public class Cult extends Thread
{
    private String[] names = {"t1", "t2", "t3"};
    static int count = 0;

    public Cult(String s)
    {
        super(s);
    }

    public void run()
    {
        for (int i = 0; i < 100; i++)
        {
            if (i == 5 && count < 3)
            {
                Thread t = new Cult(names[count++]);
                t.start();
                try {
                    Thread.currentThread().join();
                }
                catch (InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }

    public static void main(String[] args)
    {
        new Cult("t0").start();
    }
}
The most important point you missed:
Thread.currentThread().join();
The join method's source code uses the isAlive method.
public final synchronized void join(long millis) {
    ...
    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    }
    ...
}
It means that Thread.currentThread().join() will return only when Thread.currentThread() is dead.
But in your case that is impossible, because the code running in Thread.currentThread() itself contains
this piece of code: Thread.currentThread().join(). That's why, after thread t3 completes, your program hangs and nothing happens thereafter.
Why are the threads t0, t1, and t2 suspended? The execution of thread t3 completes.
t3 completes because it is not trying to fork a 4th thread and therefore is not trying to join() with its own thread. The following line will never return, so t0, t1, and t2 all stop there and wait forever:
Thread.currentThread().join();
This asks the current thread to wait for itself to finish, which doesn't work. I suspect that you meant to say t.join();, which waits for the thread that was just forked to finish.
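For illustration, a minimal standalone sketch of that fix (the class and variable names are mine, not from the question): the parent joins on the thread it just started, not on itself.
public class JoinChildSketch {
    public static void main(String[] args) throws InterruptedException {
        // Sketch of the intended pattern: join on the child thread (t.join()),
        // not on the current thread (Thread.currentThread().join()).
        Thread child = new Thread(() -> System.out.println("child running"));
        child.start();
        child.join();   // returns once the child thread has finished
        System.out.println("parent continues after the child is done");
    }
}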
Here are some other thoughts about your code in no apparent order:
You should consider implements Runnable instead of extends Thread. See here: "implements Runnable" vs. "extends Thread"
You are using the shared static variable count in multiple threads without any locking protection. The best solution is to use an AtomicInteger instead of an int (see the sketch after this list). You probably don't have a problem here because each thread is modifying count and then forking another thread, but if you tried to fork 2 threads, this would be a real problem because of data race conditions.
I'm not sure why you are only spawning another thread if(i == 5 && count < 3). i is only going to be 5 once in that loop. Is that really what you intended?
String[] names = {"t1", "t2", "t3"}; fields are recommended to be declared at the top of classes. Otherwise they get buried in the code and get lost.
In main you start a Cult thread and then the main thread finishes. This is unnecessary and you can just call cult.run(); in main instead and use the main thread.
Cult(String s) { super(s); } there is no point in having a constructor that calls the super constructor with the same arguments. This can be removed.
This is debatable but I tend to put main method at the top of the class and not bury it since it is the "entrance" method. Same thing with constructors. Those should be above the run() method.
catch(Exception e) {} is a really bad pattern. At the very least you should do an e.printStackTrace(); or log it somehow. Catching and just dropping exceptions hides a lot of problems. Also, catching Exception should be changed to catch(InterruptedException e). You want to restrict your catch blocks to just the exceptions thrown by the block, otherwise this may again hide problems in the future if you copy and paste that block somewhere.
More of a good practice, but never use constants like 3 that have to match another data item. In this case it would be better to use names.length, which is 3. This means that you don't need to change 2 places in the code if you want to increase the number of threads. You could also have the name be "t" + count and get rid of the names array altogether.
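Regarding the AtomicInteger suggestion above, a minimal standalone sketch (class name and loop counts are illustrative, not from the original code):
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCountSketch {
    // the counter can be read and incremented from several threads without explicit locking
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                count.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println(count.get()); // always 2000
    }
}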
I know that similar questions have been discussed on this site, but they have not helped me further with this specific example. I can grasp the difference between notify() and notifyAll() regarding thread awakening in theory, but I cannot perceive how they influence the behaviour of a program when one is used instead of the other. Therefore I set up the following code and I would like to know what the impact of using each of them is. I can say from the start that they give the same output (Sum is printed 3 times).
How do they differ in practice? How could someone modify the program so that applying notify or notifyAll plays a crucial role in its functionality (gives different results)?
Task:
class MyWidget implements Runnable {
    private List<Integer> list;
    private int sum;

    public MyWidget(List<Integer> l) {
        list = l;
    }

    public synchronized int getSum() {
        return sum;
    }

    @Override
    public void run() {
        synchronized (this) {
            int total = 0;
            for (Integer i : list)
                total += i;
            sum = total;
            notifyAll();
        }
    }
}
Thread:
public class MyClient extends Thread {
    MyWidget mw;

    public MyClient(MyWidget wid) {
        mw = wid;
    }

    public void run() {
        synchronized (mw) {
            while (mw.getSum() == 0) {
                try {
                    mw.wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.println("Sum calculated from Thread "
                    + Thread.currentThread().getId() + " : " + mw.getSum());
        }
    }

    public static void main(String[] args) {
        Integer[] array = { 4, 6, 3, 8, 6 };
        List<Integer> integers = Arrays.asList(array);
        MyWidget wid = new MyWidget(integers);
        Thread widThread = new Thread(wid);
        Thread t1 = new MyClient(wid);
        Thread t2 = new MyClient(wid);
        Thread t3 = new MyClient(wid);
        widThread.start();
        t1.start();
        t2.start();
        t3.start();
    }
}
UPDATE:
I write it explicitly. The result is the same whether one uses notify or notifyAll:
Sum calculated from Thread 12 : 27
Sum calculated from Thread 11 : 27
Sum calculated from Thread 10 : 27
Therefore my question: What is the difference?
The difference is subtler than your example aims to provoke. In the words of Josh Bloch (Effective Java 2nd Ed, Item 69):
... there may be cause to use notifyAll in place of notify. Just as placing the wait invocation in a loop protects against accidental or malicious notifications on a publicly accessible object, using notifyAll in place of notify protects against accidental or malicious waits by an unrelated thread. Such waits could otherwise “swallow” a critical notification, leaving its intended recipient waiting indefinitely.
So the idea is that you must consider other pieces of code entering wait on the same monitor you are waiting on, and those other threads swallowing the notification without reacting in the designed way.
Other pitfalls apply as well and can result in thread starvation: for example, several threads may be waiting for different conditions, but notify always happens to wake the same one, a thread whose condition is not yet satisfied.
Even though not immediately related to your question, I feel it is important to quote this conclusion as well (emphasis by original author):
In summary, using wait and notify directly is like programming in “concurrency assembly language,” as compared to the higher-level language provided by java.util.concurrent. There is seldom, if ever, a reason to use wait and notify in new code. If you maintain code that uses wait and notify, make sure that it always invokes wait from within a while loop using the standard idiom. The notifyAll method should generally be used in preference to notify. If notify is used, great care must be taken to ensure liveness.
This is made clear in all sorts of docs. The difference is that notify() selects (randomly) one thread, waiting for a given lock, and starts it. notifyAll() instead, restarts all threads waiting for the lock.
Best practice suggests that threads always wait in a loop, exited only when the condition on which they are waiting is satisfied. If all threads do that, then you can always use notifyAll(), guaranteeing that every thread whose wait condition has been satisfied, is restarted.
Edited to add hopefully enlightening code:
This program:
import java.util.concurrent.CountDownLatch;

public class NotifyExample {
    static final int N_THREADS = 10;
    static final char[] lock = new char[0];
    static final CountDownLatch latch = new CountDownLatch(N_THREADS);

    public static void main(String[] args) {
        for (int i = 0; i < N_THREADS; i++) {
            final int id = i;
            new Thread() {
                @Override public void run() {
                    synchronized (lock) {
                        System.out.println("waiting: " + id);
                        latch.countDown();
                        try { lock.wait(); }
                        catch (InterruptedException e) {
                            System.out.println("interrupted: " + id);
                        }
                        System.out.println("awake: " + id);
                    }
                }
            }.start();
        }
        try { latch.await(); }
        catch (InterruptedException e) {
            System.out.println("latch interrupted");
        }
        synchronized (lock) { lock.notify(); }
    }
}
produced this output, in one example run:
waiting: 0
waiting: 4
waiting: 3
waiting: 6
waiting: 2
waiting: 1
waiting: 7
waiting: 5
waiting: 8
waiting: 9
awake: 0
None of the other 9 threads will ever awaken, unless there are further calls to notify.
notify wakes (any) one thread in the wait set, notifyAll wakes all threads in the waiting set. notifyAll should be used most of the time. If you are not sure which to use, then use notifyAll.
In some cases, all waiting threads can take useful action once the wait finishes. An example would be a set of threads waiting for a certain task to finish; once the task has finished, all waiting threads can continue with their business. In such a case you would use notifyAll() to wake up all waiting threads at the same time.
Another case, for example mutually exclusive locking, only one of the waiting threads can do something useful after being notified (in this case acquire the lock). In such a case, you would rather use notify(). Properly implemented, you could use notifyAll() in this situation as well, but you would unnecessarily wake threads that can't do anything anyway.
Javadocs on notify.
Javadocs on notifyAll.
As long as only one thread is waiting for the sum to become non-zero, there is no difference. If there are several threads waiting, notify will wake up only one of them, and all the others will wait forever.
Run this test to better understand the difference:
public class NotifyTest implements Runnable {
    @Override
    public void run() {
        synchronized (NotifyTest.class) {
            System.out.println("Waiting: " + this);
            try {
                NotifyTest.class.wait();
            } catch (InterruptedException ex) {
                return;
            }
            System.out.println("Notified: " + this);
        }
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 10; i++)
            new Thread(new NotifyTest()).start();

        Thread.sleep(1000L); // Let them go into wait ()

        System.out.println("Doing notify ()");
        synchronized (NotifyTest.class) {
            NotifyTest.class.notify();
        }

        Thread.sleep(1000L); // Let them print their messages

        System.out.println("Doing notifyAll ()");
        synchronized (NotifyTest.class) {
            NotifyTest.class.notifyAll();
        }
    }
}
I found out what is going on with my program. The three threads print the result even with notify(), because they never manage to enter the waiting state. The calculation in widThread finishes quickly enough that the other threads never see mw.getSum() == 0 (the while-loop condition), so they never call wait().
If the while loop is removed and widThread is started after the other threads, then with notify() only one thread prints the result and the others wait forever, as the theory and the other answers indicate.
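For reference, a sketch of that modification, assuming the MyWidget/MyClient classes above with MyWidget.run() calling notify() instead of notifyAll(): the clients wait unconditionally and widThread is started last, so all three clients are already waiting when the notification arrives, and only one of them wakes up.
// modified MyClient.run(): no while loop, wait unconditionally
public void run() {
    synchronized (mw) {
        try {
            mw.wait();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Sum calculated from Thread "
                + Thread.currentThread().getId() + " : " + mw.getSum());
    }
}

// modified main: start widThread only after the clients are (very likely) waiting
public static void main(String[] args) throws InterruptedException {
    Integer[] array = { 4, 6, 3, 8, 6 };
    MyWidget wid = new MyWidget(Arrays.asList(array));
    Thread t1 = new MyClient(wid);
    Thread t2 = new MyClient(wid);
    Thread t3 = new MyClient(wid);
    t1.start();
    t2.start();
    t3.start();
    Thread.sleep(500);         // crude way to let all three clients reach wait()
    new Thread(wid).start();   // with notify() one client prints, with notifyAll() all three do
}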