public class MyStack2 {
    private int[] values = new int[10];
    private int index = 0;

    public synchronized void push(int x) {
        if (index <= 9) {
            values[index] = x;
            Thread.yield();
            index++;
        }
    }

    public synchronized int pop() {
        if (index > 0) {
            index--;
            return values[index];
        } else {
            return -1;
        }
    }

    public synchronized String toString() {
        String reply = "";
        for (int i = 0; i < values.length; i++) {
            reply += values[i] + " ";
        }
        return reply;
    }
}
public class Pusher extends Thread {
    private MyStack2 stack;

    public Pusher(MyStack2 stack) {
        this.stack = stack;
    }

    public void run() {
        for (int i = 1; i <= 5; i++) {
            stack.push(i);
        }
    }
}
public class Test {
    public static void main(String args[]) {
        MyStack2 stack = new MyStack2();
        Pusher one = new Pusher(stack);
        Pusher two = new Pusher(stack);
        one.start();
        two.start();
        try {
            one.join();
            two.join();
        } catch (InterruptedException e) {
        }
        System.out.println(stack.toString());
    }
}
Since the methods of the MyStack2 class are synchronized, I was expecting the output to be
1 2 3 4 5 1 2 3 4 5. But the output is indeterminate; often it gives: 1 1 2 2 3 3 4 4 5 5
As per my understanding, when thread one is started it acquires a lock on the push method. Inside push() thread one yields for some time. But does it release the lock when yield() is called? Now when thread two is started, would thread two acquire a lock before thread one completes execution? Can someone explain when thread one releases the lock on the stack object?
A synchronized method only stops other threads from executing it while it is being executed. As soon as it returns, other threads can (and often will immediately) get access.
The scenario to get your 1 1 2 2 ... could be:
Thread 1 calls push(1) and is allowed in.
Thread 2 calls push(1) and is blocked while Thread 1 is using it.
Thread 1 exits push(1).
Thread 2 gains access to push and pushes 1 but at the same time Thread 1 calls push(2).
Result 1 1 2 - you can clearly see how it continues.
When you say:
As per my understanding, when thread one is started it acquires a lock on the push method.
that is not quite right, in that the lock isn't just on the push method. The lock that the push method uses is on the instance of MyStack2 that push is called on. The methods pop and toString use the same lock as push. When a thread calls any of these methods on an object, it has to wait until it can acquire the lock. A thread in the middle of calling push will block another thread from calling pop. The threads are calling different methods to access the same data structure; using the same lock for all the methods that access the structure prevents the threads from accessing it concurrently.
Once a thread gives up the lock on exiting a synchronized method, the scheduler decides which thread gets the lock next. Your threads are acquiring locks and letting them go multiple times; every time a lock is released there is a decision for the scheduler to make. You can't make any assumptions about which thread will get picked; it can be any of them. Output from multiple threads is typically jumbled up.
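If the goal really is to see 1 2 3 4 5 1 2 3 4 5, each pusher has to hold the stack's monitor for its whole loop, not just for each individual push. A minimal sketch of that idea, reusing MyStack2 from the question (the class name AtomicPusher is just illustrative):

public class AtomicPusher extends Thread {
    private MyStack2 stack;

    public AtomicPusher(MyStack2 stack) {
        this.stack = stack;
    }

    public void run() {
        // Hold the same lock the synchronized methods use for the whole loop,
        // so the other pusher cannot interleave its pushes with ours.
        synchronized (stack) {
            for (int i = 1; i <= 5; i++) {
                stack.push(i);
            }
        }
    }
}

Because intrinsic locks are reentrant, the synchronized push() calls inside the block still work; the outer block just keeps the monitor held between them.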
It seems like you may have some confusion on exactly what the synchronized and yield keywords mean.
Synchronized means that only one thread can enter that code block at a time. Imagine it as a gate and you need a key to get through. Each thread takes the only key as it enters and returns it when it is done. This allows the next thread to get the key and execute the code inside. It doesn't matter how long a thread stays in the synchronized method; only one thread can enter at a time.
Yield suggests (and yes, it's only a suggestion) to the thread scheduler that the current thread can give up its allotted time and another thread can begin execution. It doesn't always happen that way, however.
In your code, even though the current thread suggests to the scheduler that it can give up its execution time, it still holds the key to the synchronized methods, and therefore the other thread cannot enter them.
The unpredictable behavior comes from yield() not giving up execution time in the way you predicted.
Hope that helped!
I am referencing an article from Baeldung.com. Unfortunately, the article does not explain why this code is not thread safe.
My goal is to understand how to create a thread safe method with the synchronized keyword.
My actual result is: The count value is 1.
package NotSoThreadSafe;
public class CounterNotSoThreadSafe {
    private int count = 0;

    public int getCount() { return count; }

    // synchronized specifies that the method can only be accessed by 1 thread at a time.
    public synchronized void increment() throws InterruptedException { int temp = count; wait(100); count = temp + 1; }
}
My expected result is: The count value should be 10 because of:
I created 10 threads in a pool.
I executed Counter.increment() 10 times.
I make sure I only test after the CountDownLatch has reached 0.
Therefore, it should be 10. However, if you release the lock of synchronized using Object.wait(100), the method becomes not thread safe.
package NotSoThreadSafe;
import org.junit.jupiter.api.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import static org.junit.jupiter.api.Assertions.assertEquals;
class CounterNotSoThreadSafeTest {
    @Test
    void incrementConcurrency() throws InterruptedException {
        int numberOfThreads = 10;
        ExecutorService service = Executors.newFixedThreadPool(numberOfThreads);
        CountDownLatch latch = new CountDownLatch(numberOfThreads);
        CounterNotSoThreadSafe counter = new CounterNotSoThreadSafe();
        for (int i = 0; i < numberOfThreads; i++) {
            service.execute(() -> {
                try { counter.increment(); } catch (InterruptedException e) { e.printStackTrace(); }
                latch.countDown();
            });
        }
        latch.await();
        assertEquals(numberOfThreads, counter.getCount());
    }
}
This code has both of the classical concurrency problems: a race condition (a semantic problem) and a data race (a memory model related problem).
Object.wait() releases the object's monitor, so another thread can enter the synchronized block/method while the current one is waiting. Obviously, the author's intention was to make the method atomic, but Object.wait() breaks the atomicity. As a result, if we call increment() from, say, 10 threads simultaneously and each thread calls the method 100_000 times, we almost always get count < 10 * 100_000, which is not what we want. This is a race condition, a logical/semantic problem. We can rephrase the code: since we release the monitor (which is equivalent to exiting the synchronized block), the code works as if it were two separate synchronized parts:
public void increment() {
    int temp = incrementPart1();
    incrementPart2(temp);
}

private synchronized int incrementPart1() {
    int temp = count;
    return temp;
}

private synchronized void incrementPart2(int temp) {
    count = temp + 1;
}
and therefore our increment() does not increment the counter atomically. Now, let's assume that the 1st thread calls incrementPart1, then the 2nd one calls incrementPart1, then the 2nd one calls incrementPart2, and finally the 1st one calls incrementPart2. We made 2 calls to increment(), but the result is 1, not 2.
Another problem is a data race. There is the Java Memory Model (JMM) described in the Java Language Specification (JLS). The JMM introduces a happens-before (HB) order between actions like volatile memory writes/reads, object monitor operations, etc. https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5 HB gives us guarantees that a value written by one thread will be visible to another one. The rules for how to get these guarantees are also known as the Safe Publication rules. The most common/useful ones are:
Publish the value/reference via a volatile field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5), or as the consequence of this rule, via the AtomicX classes
Publish the value/reference through a properly locked field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5)
Use the static initializer to do the initializing stores
(http://docs.oracle.com/javase/specs/jls/se11/html/jls-12.html#jls-12.4)
Initialize the value/reference into a final field, which leads to the freeze action (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.5).
So, to have the counter correctly visible (as the JMM defines it), we must either make it volatile
private volatile int count = 0;
or do the read under synchronization on the same object monitor
public synchronized int getCount() { return count; }
I'd say that in practice, on Intel processors, you will usually read the correct value with just a plain read and none of these additional efforts, because they implement TSO (Total Store Ordering). But on a more relaxed architecture, like ARM, you can get the problem. Follow the JMM formally to be sure your code is really thread-safe and doesn't contain any data races.
Why is int temp = count; wait(100); count = temp + 1; not thread-safe? One possible flow:
First thread reads count (0), saves it in temp for later, and waits, allowing the second thread to run (lock released);
second thread reads count (also 0), saves it in temp, and waits, eventually allowing the first thread to continue;
first thread increments value from temp and saves in count (1);
but second thread still holds the old value of count (0) in temp - eventually it will run and store temp+1 (1) into count, not incrementing its new value.
very simplified, just considering 2 threads
In short: wait() releases the lock, allowing other (synchronized) methods to run.
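For contrast, a minimal sketch of a version that stays atomic: keep the artificial delay if you want it, but use Thread.sleep(), which does not release the monitor, and read count under the same lock. This mirrors the fixes discussed above; the class name CounterThreadSafe is just illustrative, and the delay is only there to simulate work:

public class CounterThreadSafe {
    private int count = 0;

    // Reading under the same monitor gives the visibility guarantee
    // (alternatively, count could be declared volatile).
    public synchronized int getCount() { return count; }

    public synchronized void increment() throws InterruptedException {
        int temp = count;
        Thread.sleep(100);   // sleeps while still HOLDING the lock, unlike wait(100)
        count = temp + 1;
    }
}

With this version the test above asserts 10; it just takes about a second, because the increments are serialized.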
Consider the following simple example:
public class Example extends Thread {
    private int internalNum;

    public void getNum() {
        if (internalNum > 1)
            System.out.println(internalNum);
        else
            System.out.println(1000);
    }

    public synchronized void modifyNum() {
        internalNum += 1;
    }

    public void run() {
        // Some code
    }
}
Let's say code execution is split in two threads. Hypothetically, the following sequence of events occurs:
First thread accesses the getNum method and caches the internalNum which is 0 at the moment.
At the very same time second thread accesses modifyNum method acquiring the lock, changes the internalNum to 1 and exits releasing the lock.
Now, the first thread continues its execution and prints the internalNum.
The question is what will get printed on the console?
My guess is that this hypothetical example will result in 1000 being printed on the console, because read and write flushes are only forced on a particular thread when entering or leaving a synchronized block. Therefore, the first thread will happily use its cached value, not knowing it was changed.
I am aware that making internalNum volatile would solve the possible issue; however, I am only wondering whether it is really necessary.
Let's say code execution is split in two threads.
It doesn't exist. However, a resource (a method or a field) may be accessed concurrently by two threads.
I think you are mixing things up. Your class extends Thread, but your question is about concurrent threads accessing a resource of the same instance.
Here is the code adapted to your question.
A shared resource between threads:
public class SharedResource {
    private int internalNum;

    public void getNum() {
        if (internalNum > 1)
            System.out.println(internalNum);
        else
            System.out.println(1000);
    }

    public synchronized void modifyNum() {
        internalNum += 1;
    }
}
Threads and running code:
public class ThreadForExample extends Thread {
    private SharedResource resource;

    public ThreadForExample(SharedResource resource) {
        this.resource = resource;
    }

    public void run() {
        // Illustrative body: each thread exercises the shared resource.
        resource.modifyNum();
        resource.getNum();
    }

    public static void main(String[] args) {
        SharedResource resource = new SharedResource();
        ThreadForExample t1 = new ThreadForExample(resource);
        ThreadForExample t2 = new ThreadForExample(resource);
        t1.start();
        t2.start();
    }
}
Your question:
Hypothetically, the following sequence of events occurs: First thread accesses the getNum method and caches the internalNum which is 0 at the moment. At the very same time second thread accesses modifyNum method acquiring the lock, changes the internalNum to 1 and exits releasing the lock. Now, first thread continues its execution and prints the internalNum.
In your scenario you give the impression that executing the modifyNum() method blocks other threads from accessing non-synchronized methods, but that is not the case.
getNum() is not synchronized, so threads don't need to acquire the lock on the object to execute it. In this case, the output simply depends on which thread executes its instruction first:
internalNum += 1;
or
System.out.println(internalNum);
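If the intent is that getNum() must observe the value written by modifyNum(), the usual fixes (as in the earlier answers) are to synchronize the reader on the same monitor or to make the field volatile. A minimal sketch, assuming the SharedResource class above (the name SharedResourceSafe is just illustrative):

public class SharedResourceSafe {
    private int internalNum;

    // Reader and writer synchronize on the same object monitor,
    // so the write happens-before the read.
    public synchronized void getNum() {
        if (internalNum > 1)
            System.out.println(internalNum);
        else
            System.out.println(1000);
    }

    public synchronized void modifyNum() {
        internalNum += 1;
    }
}

Alternatively, declaring the field as private volatile int internalNum; gives the visibility guarantee without blocking, but it does not make compound operations like internalNum += 1 atomic; for that you would still need synchronized or an AtomicInteger.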
I have written a program using a synchronized block that locks on the .class object, and my program executes thread by thread. But when I write the same code using a synchronized method, the output is entirely different.
The synchronized block program is given below:
public class SyncBlock {
    public static void main(String[] args) {
        final Thread t1 = new SimpleThread("First Thread");
        final Thread t2 = new SimpleThread("Second Thread");
        t1.start();
        t2.start();
    }
}

class SimpleThread extends Thread {
    public SimpleThread(String str) {
        super(str);
    }

    public void run() {
        synchronized (SyncBlock.class) {
            for (int i = 0; i < 5; i++) {
                System.out.println(getName() + " says " + i);
                try {
                    sleep((long) (Math.random() * 1000));
                } catch (InterruptedException e) {
                }
            }
            System.out.println(getName() + " is done.");
        }
    }
}
The output is:
First Thread says 0
First Thread says 1
First Thread says 2
First Thread says 3
First Thread says 4
First Thread is done.
Second Thread says 0
Second Thread says 1
Second Thread says 2
Second Thread says 3
Second Thread says 4
Second Thread is done.
Now I am writing the same program using a synchronized method, but it behaves differently. Could you please explain whether the two are supposed to behave differently, or whether there is a way to get the same output using both the synchronized block and the synchronized method.
Using a synchronized method:
Now I synchronize the run method and replace the code above with:
public synchronized void run() {
    for (int i = 0; i < 5; i++) {
        System.out.println(getName() + " says " + i);
        try {
            sleep((long) (Math.random() * 1000));
        } catch (InterruptedException e) {
        }
    }
    System.out.println(getName() + " is done.");
}
Here the output is different:
First Thread says 0
Second Thread says 0
Second Thread says 1
First Thread says 1
First Thread says 2
Second Thread says 2
First Thread says 3
Second Thread says 3
First Thread says 4
First Thread is done.
Second Thread says 4
Second Thread is done.
In your synchronized block you are locking on the class object, which blocks execution of the run method on other instances while one instance holds the lock. But when you synchronize the run method, you lock on the instance, not the class, so it does not block another thread from executing the same method on another instance. Hence both threads execute in parallel. If you want to achieve the same execution as with the synchronized block, you can have a synchronized static method that executes the steps that are in run and call it from the run method, as sketched below.
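A minimal sketch of that suggestion (the method name doRun is just illustrative); a static synchronized method locks on SimpleThread.class, so it again serializes the two threads:

class SimpleThread extends Thread {
    public SimpleThread(String str) {
        super(str);
    }

    public void run() {
        doRun(getName());
    }

    // static synchronized locks on SimpleThread.class, so both instances
    // contend for the same lock instead of each locking on itself.
    private static synchronized void doRun(String name) {
        for (int i = 0; i < 5; i++) {
            System.out.println(name + " says " + i);
            try {
                Thread.sleep((long) (Math.random() * 1000));
            } catch (InterruptedException e) {
            }
        }
        System.out.println(name + " is done.");
    }
}

This gives the same one-thread-at-a-time output as the synchronized (SyncBlock.class) block, just with SimpleThread.class as the lock instead of SyncBlock.class.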
When you use synchronized (SyncBlock.class), your code works fine because you are locking on the SyncBlock class object, so the other thread cannot acquire that lock until the first one releases it.
In the second case, you are locking on the current instance of SimpleThread (this); the lock will be different for the two threads because each is locking on its own SimpleThread instance. So the lock itself is ineffective, and the JVM might as well remove the synchronization code (since JDK 6u23, escape analysis was introduced to optimize such things).
In the case of the synchronized block, say the first thread enters first:
synchronized (SyncBlock.class) { // here the first thread takes the lock; no other thread can enter
Now when the first thread reaches here:
System.out.println(getName() + " is done.");
} // here the first thread releases the lock, which gives a chance to the other threads waiting for it; in your case the second thread takes it, executes, and releases it in turn when it reaches the same point. Note: this behavior is not guaranteed.
Threads can execute in any order; it depends on the CPU scheduling policy.
With the synchronized method, as soon as one thread enters the method it will complete its task and then release the lock; after that, the other thread gets the chance to execute.
Also note that sleep() does not release the lock; the thread is sleeping (timed waiting) while still holding it.
None of the other answers here is wrong, but none of them really speaks to the heart of the matter.
When you write synchronized, your code synchronizes on an Object, and the JVM guarantees that no two threads can be synchronized on the same object at the same time.
In your first example, the SimpleThread.run() method synchronizes on the unique SyncBlock class object. That prevents both threads from entering run() at the same time because they both are trying to synchronize on the same object: there is only one SyncBlock class object.
In your second example, the SimpleThread.run() method synchronizes on this. That does not prevent the two threads from entering run() at the same time because the two threads are synchronizing on two different objects: You create two instances of SimpleThread.
The program creates thread t0, which spawns thread t1, and subsequently threads t2 and t3 are created. After thread t3 finishes executing, the application never returns to the threads spawned earlier (t0, t1, t2), and they are left stuck.
Why are the threads t0, t1, and t2 suspended?
public class Cult extends Thread
{
    private String[] names = {"t1", "t2", "t3"};
    static int count = 0;

    Cult(String s)
    {
        super(s);
    }

    public void run()
    {
        for (int i = 0; i < 100; i++)
        {
            if (i == 5 && count < 3)
            {
                Thread t = new Cult(names[count++]);
                t.start();
                try {
                    Thread.currentThread().join();
                }
                catch (InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }

    public static void main(String[] args)
    {
        new Cult("t0").start();
    }
}
The most important point you missed:
Thread.currentThread().join();
The join method in the JDK source code uses the isAlive method:
public final synchronized void join(long millis)
    ...
    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    }
    ...
}
It means that Thread.currentThread().join() will return only when Thread.currentThread() is dead.
But in your case that is impossible, because the code running in Thread.currentThread() is itself the code that calls Thread.currentThread().join(). That's why, after thread t3 completes, your program hangs and nothing happens thereafter.
Why are the threads t0, t1, and t2 suspended? The execution of thread t3 completes.
t3 completes because it is not trying to fork a 4th thread and therefore is not trying to join() with its own thread. The following line will never return, so t0, t1, and t2 all stop there and wait forever:
Thread.currentThread().join();
This is asking the current thread to wait for itself to finish, which doesn't work. I suspect that you meant to say t.join();, which waits for the thread that was just forked to finish.
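For reference, a minimal sketch of the corrected spot, assuming the rest of the run() method from the question stays the same:

Thread t = new Cult(names[count++]);
t.start();
try {
    t.join();   // wait for the thread we just started, not for ourselves
} catch (InterruptedException e) {
    e.printStackTrace();
}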
Here are some other thoughts about your code in no apparent order:
You should consider implements Runnable instead of extends Thread. See here: "implements Runnable" vs. "extends Thread"
You are using the shared static variable count in multiple threads without any locking protection. The best solution is to use an AtomicInteger instead of an int (see the sketch after this list). You probably don't have a problem here because each thread modifies count and then forks another thread, but if you tried to fork 2 threads, this would be a real problem because of data races.
I'm not sure why you are only spawning another thread if(i == 5 && count < 3). i is only going to be 5 once in that loop. Is that really what you intended?
String[] names = {"t1", "t2", "t3"}; fields are recommended to be declared at the top of classes. Otherwise they get buried in the code and get lost.
In main you start a Cult thread and then the main thread finishes. This is unnecessary and you can just call cult.run(); in main instead and use the main thread.
Cult(String s) { super(s); } there is no point in having a constructor that calls the super constructor with the same arguments. This can be removed.
This is debatable but I tend to put main method at the top of the class and not bury it since it is the "entrance" method. Same thing with constructors. Those should be above the run() method.
catch(Exception e) {} is a really bad pattern. At the very least you should do a e.printStackTrace(); or log it somehow. Catching and just dropping exceptions hides a lot of problems. Also, catching Exception should be changed to catch(InterruptedException e). You want to restrict your catch blocks just the exceptions thrown by the block otherwise this may again hide problems in the future if you copy and paste that block somewhere.
More of a good practice, but never use constants like 3 that have to match another data item. In this case it would be better to use names.length, which is 3. This means that you don't need to change 2 places in the code if you want to increase the number of threads. You could also have the name be "t" + count and get rid of the names array altogether.
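Putting a few of those suggestions together (Runnable instead of Thread, AtomicInteger for the shared counter, names.length instead of 3, and t.join() instead of joining the current thread), a minimal sketch; this is only meant to illustrate the review points, not a drop-in replacement for the original program:

import java.util.concurrent.atomic.AtomicInteger;

public class Cult implements Runnable {
    private static final String[] NAMES = {"t1", "t2", "t3"};
    private static final AtomicInteger COUNT = new AtomicInteger();

    public static void main(String[] args) {
        // reuse the main thread instead of forking an extra one for "t0"
        Thread.currentThread().setName("t0");
        new Cult().run();
    }

    public void run() {
        for (int i = 0; i < 100; i++) {
            int next = COUNT.get();
            if (i == 5 && next < NAMES.length) {
                if (COUNT.compareAndSet(next, next + 1)) {
                    Thread t = new Thread(new Cult(), NAMES[next]);
                    t.start();
                    try {
                        t.join();   // wait for the thread we just forked
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }
}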
So I have the following code:
import java.lang.Thread;
import java.lang.Integer;
class MyThread extends Thread {
    private int id;

    MyThread(int i) {
        id = i;
    }

    public void run() {
        while (true) {
            try {
                synchronized (Global.lock) {
                    Global.lock.wait();
                    if (Global.n == 0) {
                        System.out.println(id);
                        Global.lock.notify();
                        break;
                    }
                    --Global.n;
                    System.out.println("I am thread " + id + "\tn is now " + Global.n);
                    Global.lock.notify();
                }
            } catch (Exception e) {
                break;
            }
        }
    }
}

class Global {
    public static int n;
    public static Object lock = new Object();
}

public class Sync2 {
    public static final void main(String[] sArgs) {
        int threadNum = Integer.parseInt(sArgs[0]);
        Global.n = Integer.parseInt(sArgs[1]);
        MyThread[] threads = new MyThread[threadNum];
        for (int i = 0; i < threadNum; ++i) {
            threads[i] = new MyThread(i);
            threads[i].start();
        }
        synchronized (Global.lock) { Global.lock.notify(); }
    }
}
Two parameters are entered: a number n and the number of threads to be created. Every thread decreases n by one and then passes control. All threads should stop when n is 0. It seems to work fine so far, but the problem is that in most cases all threads except one terminate, and that one hangs forever. Any idea why?
And yes, this is part of a homework, and that is what I've done so far (I was not provided with the code). The task also explicitly restricts me to a synchronized block and only the wait() and notify() methods.
EDIT: modified the synchronized block a bit:
synchronized (Global.lock) {
    Global.lock.notify();
    if (Global.n == 0) { break; }
    if (Global.next != id) { Global.lock.wait(); continue; }
    --Global.n;
    System.out.println("I am thread " + id + "\tn is now " + Global.n);
    Global.next = ++Global.next % Global.threadNum;
}
Now the threads act strictly in the order they were created. It's pretty unclear from the task wording, but this might be the right thing.
You have a race condition. Think about what happens with a single worker thread. Global.n is set to 1 and then the thread starts. It immediately goes into a wait state. Suppose, though, that notify() had already been called by the main thread. Since the worker thread hadn't yet entered a wait state, it wasn't notified. Then, when it finally does call wait(), there are no other threads around to call notify(), and it stays in the wait state forever. You need to fix up your logic to avoid this race condition.
Also, do you really want a single worker thread to decrement Global.n more than once? That can easily happen with your while (true) ... loop.
EDIT
You also have another logic problem with a single thread. Suppose it enters the wait state and then the notify() in main is called. It wakes the worker thread which decrements Global.n to 0, calls notify(), and then goes back to waiting. The problem is that notify() didn't wake any other thread because there were no other threads to wake. So the one worker thread will wait forever. I haven't analyzed it fully, but something like this might also happen with more than one worker thread.
You should never have a naked wait() call: notifications in Java are not queued or remembered, and wait() can also wake up spuriously. wait() should always be nested in some sort of
while (condition you are waiting for is not yet true)
    obj.wait();
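Applied to the homework above, a minimal sketch of what that guarded-wait pattern could look like inside the worker thread; the overall structure of the question's code is kept, the key changes are that each thread checks Global.n before deciding to wait, and notifyAll() is used so no waiter is left behind. This is one assumed way to satisfy the task, not the only one:

public void run() {
    synchronized (Global.lock) {
        // Guarded wait: re-check the condition every time we wake up,
        // instead of waiting unconditionally before looking at Global.n.
        while (Global.n > 0) {
            --Global.n;
            System.out.println("I am thread " + id + "\tn is now " + Global.n);
            Global.lock.notifyAll();        // wake the other workers
            if (Global.n > 0) {
                try {
                    Global.lock.wait();     // give another thread a turn
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
        Global.lock.notifyAll();            // make sure nobody stays asleep once n == 0
        System.out.println(id);
    }
}

With this structure the main thread no longer needs its own notify() at startup, because the first worker checks Global.n before it ever waits, which removes the race condition described above.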