Java MultiThreading skips loop and gives wrong result
package Threading;

class DemoThread extends Thread { // thread class
    static int count = 0; // shared variable incremented by both threads

    public DemoThread(String name) {
        super(name);
    }

    public void run() {
        for (int i = 0; i < 100000; i++) {
            count++;
            System.out.println(Thread.currentThread() + "Count" + count); // print which thread is operating on count
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class MyThreadClass {
    public static void main(String[] args) {
        DemoThread t1 = new DemoThread("T1");
        DemoThread t2 = new DemoThread("T2");
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join(); // allow both threads to complete before the main thread continues
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Main Thread ends" + DemoThread.count); // final value of count
    }
}
The final value of count should be 200000 (each thread increments it 100000 times), but the program does not produce that result.
Why do the threads appear to skip loop iterations?
This happens because threads T1 and T2 update count at the same time (concurrently), producing interleavings like this:
Thread[T1,5,main]Count10
Thread[T2,5,main]Count10
Thread[T1,5,main]Count12
Thread[T2,5,main]Count12
Thread[T2,5,main]Count14
Thread[T1,5,main]Count14
Thread[T1,5,main]Count15
Thread[T2,5,main]Count16
You should use AtomicInteger and update your code as follows:
change static int count=0; to static AtomicInteger count = new AtomicInteger();
change count++; to count.incrementAndGet();
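For context, a minimal sketch of DemoThread with those two changes applied (printing the return value so each line shows the increment that this thread actually performed):

import java.util.concurrent.atomic.AtomicInteger;

class DemoThread extends Thread {
    static AtomicInteger count = new AtomicInteger(); // atomic shared counter

    public DemoThread(String name) {
        super(name);
    }

    public void run() {
        for (int i = 0; i < 100000; i++) {
            int current = count.incrementAndGet(); // atomic read-modify-write
            System.out.println(Thread.currentThread() + "Count" + current);
        }
    }
}

With this change the final value printed in main is always 200000, because incrementAndGet() performs the read, increment, and write as a single atomic operation.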
You must use java.util.concurrent.atomic.AtomicInteger and not a shared static int variable.
Problem
In Java, multithreading introduces asynchronous behavior into your program, so you must enforce synchronization where you need it.
Without synchronization, nothing stops two threads from calling the same method on the same object at the same time. This is known as a race condition, because the threads race each other through the method.
In the output of your program, the first line printed 2. This is because t1 incremented count but was preempted before it could print it; by the time the print ran, the other thread had already incremented count again. Note that a thread does not need to sleep in order to be preempted; the OS can preempt it at any time.
If you compare the 1st and 4th lines of the output you can see the inconsistency. This kind of inconsistency becomes unpredictable in large programs.
Running the program multiple times gives different end results. It should not be taken for granted that a wrong result is always produced; sometimes the result is correct. In other words, the result is unpredictable.
What was misunderstood?
Did a thread skip a loop iteration? No. Different threads read the same value, but before one thread could write its incremented value back, the other thread wrote its value and moved on. As a result, the next read picked up a value that was missing an update.
This is a form of the classic readers-writers problem: a lost update caused by an unsynchronized read-modify-write.
Solution
In your program, t1 and t2 read and modify the shared count variable at the same time. To prevent the two increments from interleaving, the increment has to happen while holding a lock, for example by marking the method as synchronized:
synchronized public void run() { ... }
The synchronized keyword guards shared state against race conditions: once a thread has entered a block or method synchronized on a given monitor, no other thread can enter a block synchronized on that same monitor until the first thread exits. Note, however, that a synchronized instance method locks on this, and t1 and t2 are two different DemoThread objects, so to actually serialize the increments the lock must be shared between them (for example DemoThread.class, or a dedicated static lock object as shown in the Bonus section below).
With synchronization in place, the output is correct.
Note: I used 200 as the loop bound for this run.
Bonus
If you are calling methods that operate on shared data, you can wrap those calls in a synchronized block:
synchronized (objRef) {
    // method calls that touch the shared data
}
objRef is a reference to the object whose monitor is used as the lock; every thread that accesses the shared data must synchronize on that same object.
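Applied to the question's code, a minimal sketch (keeping the plain int counter and introducing a dedicated shared lock object, here called LOCK, a name invented for this example):

class DemoThread extends Thread {
    static int count = 0;
    private static final Object LOCK = new Object(); // one lock object shared by all DemoThread instances

    public DemoThread(String name) {
        super(name);
    }

    public void run() {
        for (int i = 0; i < 100000; i++) {
            synchronized (LOCK) { // only one thread at a time executes this block
                count++;          // the read-modify-write can no longer interleave
            }
        }
    }
}

Synchronizing only the increment, rather than the whole run() method, keeps the critical section small, so the two threads still run mostly in parallel.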
As recommended in the comments
[Recommended Solution]
You should use AtomicInteger from the java.util.concurrent.atomic package instead of a plain int.
Reference to java.util.concurrent.atomic
Instead of static int count = 0; use static AtomicInteger count = new AtomicInteger(0); and instead of count++; use count.incrementAndGet();
AtomicInteger performs its read-modify-write operations atomically, so the counter needs no additional synchronization.
AtomicInteger Reference Java Docs
Related
My understanding of synchronizing on a static object held in a variable is that while one thread holds the lock, another thread cannot acquire it.
class T3
{
    static Integer i = 0;

    static void callStatic()
    {
        synchronized (T3.class)
        {
            System.out.println(i++);
            while (true);
        }
    }

    public void notStatic()
    {
        System.out.println(i++);
        while (true);
    }
}

class T2 implements Runnable
{
    public void run()
    {
        System.out.println("calling nonstatic");
        new T3().notStatic();
    }
}

class T implements Runnable
{
    public void run()
    {
        System.out.println("calling static");
        T3.callStatic();
    }
}

public class Test
{
    public static void main(String[] args)
    {
        new Thread(new T()).start();
        try
        {
            Thread.sleep(100);
        }
        catch (InterruptedException e)
        {
        }
        new Thread(new T2()).start();
    }
}
But this demo program produces the output:
calling static
0
calling nonstatic
1
Is my understanding wrong, or am I missing something?
I tried synchronizing the callStatic method, and also synchronizing on the T3.class class object, but neither worked the way I expected.
Note: I thought 1 would not be printed, because callStatic holds the lock on the variable i and is stuck in an infinite loop.
You don't synchronize on variables, you synchronize on objects.
callStatic synchronizes on 1 and then sets i to 2. If notStatic were to enter a synchronized(i) block at this point, it would synchronize on 2. No other thread has locked 2, so it proceeds.
(Actually, 1 and 2 aren't objects, but Integer.valueOf(1) and Integer.valueOf(2) return objects, and the compiler automatically inserts the Integer.valueOf calls to convert ints to Integers)
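A tiny sketch illustrating that point: incrementing an Integer rebinds the variable to a different boxed object, so any lock taken on the old object is irrelevant to the new one (the class name BoxingDemo is invented for this example).

public class BoxingDemo {
    public static void main(String[] args) {
        Integer i = 0;
        Integer before = i;   // keep a reference to the old boxed object
        i++;                  // effectively i = Integer.valueOf(i.intValue() + 1)
        System.out.println(before == i);       // false: i now refers to a different Integer object
        System.out.println(before + " " + i);  // prints "0 1"
    }
}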
In your code, notStatic doesn't actually synchronize at all. It's true that only one thread can be in a synchronized block for a particular object at a particular time, but that has no effect on other threads that are not trying to enter a synchronized block.
Note: This answer relates to the original question, which had synchronized(i), not synchronized(T3.class), in callStatic. The edit really changes the question.
synchronized acts on an object, not on the variable or field that holds it. There are a couple of important things going on in your code.
synchronized (i) does indeed synchronize access to i, provided that the other code using i also synchronizes. It has no effect on code that doesn't synchronize. Suppose thread A enters synchronized (i) and holds the lock (your infinite loop); then thread B executes System.out.println(i);. Thread B can happily read i; nothing stops it. Thread B would have to do
synchronized (i) {
    System.out.println(i);
}
...in order to be affected by thread A's synchronized (i). Your code is (attempting to) synchronize the mutation, but not the access.
i++; on an Integer is effectively equivalent to i = Integer.valueOf(i.intValue() + 1), because Integer is immutable. It creates a different Integer object and stores that in the i field, so anything synchronizing on the old Integer object has no effect on code synchronizing on the new one. Even if your code synchronized both access and mutation, it wouldn't matter, because the lock would be held on the old object.
This means that the code in your callStatic is synchronizing on one Integer instance and then repeatedly creating other instances, which it is not synchronizing on.
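A minimal sketch of a correct variant: keep the Integer counter, but lock on a separate, never-reassigned object (here called LOCK, a name invented for this example) and synchronize both the reader and the writer.

class Counter {
    private static final Object LOCK = new Object(); // stable lock object, never reassigned
    static Integer i = 0;

    static void increment() {
        synchronized (LOCK) {   // the writer holds the shared lock
            i++;                // rebinding i is fine; the lock object itself never changes
        }
    }

    static void print() {
        synchronized (LOCK) {   // the reader takes the same lock, so it cannot interleave with increment()
            System.out.println(i);
        }
    }
}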
Synchronized blocks on static and non-static members don't block each other. You need to understand how synchronized works to see why. Synchronization is always done on an object, never on a variable: when you synchronize, the thread takes a lock on the monitor of the object you put inside the parentheses of the synchronized statement.
A synchronized block on a static reference (like the one in your code) locks the .class object of the class, not any instance of that class, so static and non-static synchronized blocks don't block each other.
In your code the notStatic method doesn't synchronize on anything at all, whereas callStatic synchronizes on T3.class (on the Integer object i in the original version of the question). So they don't block each other.
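A small sketch of that distinction, with invented method names, assuming one static and one instance synchronized method on the same class:

class Locks {
    // synchronized static method: the lock is the Locks.class object, shared across all instances
    static synchronized void staticWork() {
        System.out.println("holding the monitor of Locks.class");
    }

    // synchronized instance method: the lock is 'this', one monitor per instance
    synchronized void instanceWork() {
        System.out.println("holding the monitor of " + this);
    }
}

A thread inside staticWork() and a thread inside instanceWork() never block each other, because they hold two different monitors (the Class object versus the instance).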
I can't seem to get a final counter value of 20000. What is wrong with this code?
public class Synchronize2 {
    public static void main(String[] args) {
        Threading t1 = new Threading();
        Threading t2 = new Threading();
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Threading.counter);
    }
}

class Threading extends Thread {
    static int counter;

    public synchronized void incrementer() {
        counter++;
    }

    public void run() {
        for (int i = 0; i < 10000; i++) {
            incrementer();
        }
    }
}
Your synchronized incrementer method locks on the object it is called on. But you have 2 different objects, each locking on itself, so the method isn't thread safe; both threads can still execute incrementer at the same time.
Additionally, the post-increment operation isn't thread safe because it's not atomic; there is a read operation and an increment operation, and a thread can be interrupted between the two. This presents a race condition: thread one reads the value, thread two reads the value, then thread one increments and thread two increments, yet only the last write "wins" and one increment is lost. This shows up as an ending value less than 20000.
Make the method static as well; since it is synchronized, it will then lock on the Class object of the class, which is the proper shared lock here.
public static synchronized void incrementer() {
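For context, a minimal sketch of the Threading class with that single change applied:

class Threading extends Thread {
    static int counter;

    // static + synchronized: the lock is Threading.class, shared by both threads
    public static synchronized void incrementer() {
        counter++;
    }

    public void run() {
        for (int i = 0; i < 10000; i++) {
            incrementer();
        }
    }
}

With both threads contending for the same Threading.class monitor, the final value printed in main is reliably 20000.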
You synchronize on two different objects. Your incrementer method is shorthand for this:

public void incrementer() {
    synchronized (this) {
        counter++;
    }
}

But the two this references are not the same object, so the two threads never contend for the same lock and effectively you do not synchronize at all. Try it this way:

private static final Object sync = new Object();

public void incrementer() {
    synchronized (sync) {
        counter++;
    }
}

You should also consider making the variable counter volatile. It is not strictly necessary here, because you only access it inside synchronized blocks, but in real code you might read it outside such a block, and then you will run into problems: non-volatile variables may be read from a thread-local cache instead of from main memory.
The program creates thread t0, which spawns thread t1, and subsequently threads t2 and t3 are created. After thread t3 finishes executing, the application never returns to the threads spawned earlier (t0, t1, t2) and they are left stuck.
Why are the threads t0, t1, and t2 suspended?
public class Cult extends Thread
{
    private String[] names = {"t1", "t2", "t3"};
    static int count = 0;

    Cult(String s)
    {
        super(s);
    }

    public void run()
    {
        for(int i = 0; i < 100; i++)
        {
            if(i == 5 && count < 3)
            {
                Thread t = new Cult(names[count++]);
                t.start();
                try
                {
                    Thread.currentThread().join();
                }
                catch(InterruptedException e)
                {
                    e.printStackTrace();
                }
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }

    public static void main(String[] args)
    {
        new Cult("t0").start();
    }
}
The most important point you missed:
Thread.currentThread().join();
The source code of the join method uses the isAlive method:
public final synchronized void join(long millis) throws InterruptedException {
    ...
    if (millis == 0) {
        while (isAlive()) {
            wait(0);
        }
    }
    ...
}
It means that Thread.currentThread().join() will return only when Thread.currentThread() has died. But in your case that is impossible, because the code running in Thread.currentThread() is itself the code that calls Thread.currentThread().join(). That's why, after thread t3 completes, your program hangs and nothing happens thereafter.
Why are the threads t0, t1, and t2 suspended? The execution of thread t3 completes.
t3 completes because it does not try to fork a 4th thread, and therefore never tries to join() its own thread. The following line will never return, so t0, t1, and t2 all stop there and wait forever:
Thread.currentThread().join();
This asks the current thread to wait for itself to finish, which can never happen. I suspect you meant t.join();, which waits for the thread that was just forked to finish (see the sketch after the list of suggestions below).
Here are some other thoughts about your code, in no particular order:
You should consider implements Runnable instead of extends Thread. See here: "implements Runnable" vs. "extends Thread"
You are using the shared static variable count in multiple threads without any locking. The best solution is to use an AtomicInteger instead of an int. You probably don't have a problem here because each thread modifies count and then forks another thread, but if you tried to fork 2 threads at once this would be a real problem because of data races.
I'm not sure why you are only spawning another thread if(i == 5 && count < 3). i is only going to be 5 once in that loop. Is that really what you intended?
String[] names = {"t1", "t2", "t3"}; fields are recommended to be declared at the top of classes. Otherwise they get buried in the code and get lost.
In main you start a Cult thread and then the main thread finishes. This is unnecessary and you can just call cult.run(); in main instead and use the main thread.
Cult(String s) { super(s); } is a pure pass-through constructor; once you switch to implements Runnable (or pass the thread name directly to the Thread constructor), it can be removed.
This is debatable but I tend to put main method at the top of the class and not bury it since it is the "entrance" method. Same thing with constructors. Those should be above the run() method.
catch(Exception e) {} is a really bad pattern. At the very least you should call e.printStackTrace(); or log the exception somehow. Catching and silently dropping exceptions hides a lot of problems. Also, catch(Exception e) should be changed to catch(InterruptedException e); you want to restrict your catch blocks to just the exceptions thrown by the block, otherwise this may again hide problems in the future if you copy and paste that block somewhere.
More of a good practice, but never use constants like 3 that have to match another data item. In this case it would be better to use names.length, which is 3. This means you don't need to change 2 places in the code if you want to increase the number of threads. You could also make the name "t" + count and get rid of the names array altogether.
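Putting several of these suggestions together (Runnable instead of extends Thread, AtomicInteger for count, "t" + count instead of the names array, and t.join() instead of Thread.currentThread().join()), a hedged sketch of what the cleaned-up class might look like:

import java.util.concurrent.atomic.AtomicInteger;

public class Cult implements Runnable {

    static final int MAX_THREADS = 3;                       // replaces the magic constant 3
    static final AtomicInteger count = new AtomicInteger(); // thread-safe shared counter

    public static void main(String[] args) {
        Thread.currentThread().setName("t0"); // reuse the main thread in place of a separate "t0" thread
        new Cult().run();
    }

    public void run() {
        for (int i = 0; i < 100; i++) {
            if (i == 5 && count.get() < MAX_THREADS) {
                Thread t = new Thread(this, "t" + count.incrementAndGet());
                t.start();
                try {
                    t.join(); // wait for the thread we just forked, not for ourselves
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            System.out.print(Thread.currentThread().getName() + " ");
        }
    }
}

With t.join() each parent waits for the child it forked, so all four threads finish and none of them is left stuck.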
As far as I know, a volatile write happens-before a subsequent volatile read, so we always see the freshest data in a volatile variable. My question basically concerns the term happens-before and where it actually takes effect. I wrote a piece of code to clarify my question.
class Test {
    volatile int a;

    public static void main(String ... args) {
        final Test t = new Test();
        new Thread(new Runnable(){
            @Override
            public void run() {
                Thread.sleep(3000);
                t.a = 10;
            }
        }).start();
        new Thread(new Runnable(){
            @Override
            public void run() {
                System.out.println("Value " + t.a);
            }
        }).start();
    }
}
(the try/catch block is omitted for clarity)
In this case I always see the value 0 printed on the console. Without Thread.sleep(3000); I always see the value 10. Is this a case of a happens-before relationship, or does it print 'Value 10' simply because thread 1 starts a bit earlier than thread 2?
It would be great to see an example where the behaviour of the code with and without the volatile variable differs on every run, because the result of the code above depends (at least in my case) only on the order in which the threads run and on the sleep.
You see the value 0 because the read is executed before the write. And you see the value 10 because the write is executed before the read.
If you want to have a test with more unpredictable output, you should have both of your threads await a CountDownLatch, to make them start concurrently:
final CountDownLatch latch = new CountDownLatch(1);
new Thread(new Runnable(){
    @Override
    public void run() {
        try {
            latch.await();
            t.a = 10;
        }
        catch (InterruptedException e) {
            // end the thread
        }
    }
}).start();
new Thread(new Runnable(){
    @Override
    public void run() {
        try {
            latch.await();
            System.out.println("Value " + t.a);
        }
        catch (InterruptedException e) {
            // end the thread
        }
    }
}).start();
Thread.sleep(321); // go
latch.countDown();
Happens-before really means that a write which happens before a subsequent read is visible to that read. If the write has not occurred yet, there is no such relationship. Since the writer thread is sleeping, the read is executed before the write occurs.
To observe the relationship in action, you can use two variables, one volatile and one not. The JMM guarantees that a write to a non-volatile variable made before a volatile write is visible after a subsequent read of that volatile variable.
For instance
volatile int a = 0;
int b = 0;

Thread 1:
    b = 10;
    a = 1;

Thread 2:
    while (a != 1);
    if (b != 10)
        throw new IllegalStateException();
The Java Memory Model says that b will always equal 10 at the check, because the non-volatile store occurs before the volatile store, and all writes that occur in one thread before a volatile store are visible after a subsequent load of that volatile variable.
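A self-contained sketch of that pattern (the class and field names are invented for this example, not taken from the original post):

// Sketch: a non-volatile write "piggybacks" on a later volatile write for visibility.
public class HappensBeforeDemo {

    static volatile int a = 0; // volatile flag
    static int b = 0;          // plain field, published via the volatile write to 'a'

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (a != 1) { }              // spin until the volatile flag is set
            if (b != 10) {
                throw new IllegalStateException("b = " + b);
            }
            System.out.println("b = " + b); // always prints 10
        });
        reader.start();

        b = 10; // non-volatile write...
        a = 1;  // ...made visible by this volatile write, which happens-before the reader's volatile read

        reader.join();
    }
}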
I've rephrased the happens-before rule mentioned in the first sentence of your question as below, so that it can be understood better:
"a write of the value of a volatile variable to main memory happens-before any subsequent read of that variable from main memory".
It is also important to note that volatile writes/reads always go to/from main memory, and NOT to/from thread-local resources such as registers or processor caches.
The practical implication of the above happens-before rule is that all threads sharing a volatile variable will always see a consistent value of that variable; no two threads see different values of it at any given point in time.
On the contrary, threads sharing a non-volatile variable may see different values at any given point in time unless access to it is synchronized by some other mechanism, such as a synchronized block/method or the final keyword.
Now, coming back to your question about this happens-before rule, I think you've slightly misunderstood it. The rule does not dictate that the writing code will always execute before the reading code. Rather, it dictates that if a volatile write is executed in one thread before a read in another thread, then the effect of the write must have reached main memory before the read executes, so that the read sees the latest value.
In the absence of volatile (or any other synchronization mechanism), this happens-before relationship is not guaranteed, and hence a reader thread might see a stale value of a non-volatile variable even though it was recently written by another thread, because the writer thread may keep the value in a local copy without flushing it to main memory.
Hope the above explanation is clear :)
Don't get too hung up on the term 'happens-before'. It is a relation between events that the JVM uses when ordering read/write operations; at this stage it won't help you understand volatile. The point is: the JVM orders all read/write operations, and it can order them however it wants (while, of course, obeying synchronized, lock, wait, etc.).
And now: if a variable is volatile, then any read will see the result of the latest write. If the variable is not volatile, that is not guaranteed (across different threads). That's all.
piotrek is right, here is the test:
class Test {
    volatile int a = 0;

    public static void main(String ... args) {
        final Test t = new Test();
        new Thread(new Runnable(){
            @Override
            public void run() {
                try {
                    Thread.sleep(3000);
                } catch (Exception e) {}
                t.a = 10;
                System.out.println("now t.a == 10");
            }
        }).start();
        new Thread(new Runnable(){
            @Override
            public void run() {
                while(t.a == 0) {}
                System.out.println("Loop done: " + t.a);
            }
        }).start();
    }
}
With volatile: the program always terminates.
Without volatile: the reading thread may spin forever, because it is never guaranteed to see the write (in practice the JIT can keep t.a in a register, so the loop typically never ends).
From wiki:
In Java specifically, a happens-before relationship is a guarantee that memory written to by statement A is visible to statement B, that is, that statement A completes its write before statement B starts its read.
So if thread A writes t.a with the value 10 and thread B reads t.a some time later, the happens-before relationship guarantees that thread B must read the value 10 written by thread A, not some other value. That is natural, just as when Alice buys milk and puts it in the fridge, and Bob then opens the fridge and sees the milk. However, a running computer usually does not access main memory directly; that is too slow. Instead, it reads data from a register or cache to save time, and loads from main memory only on a cache miss. That is where the problem arises.
Let's see the code in the question:
class Test {
volatile int a;
public static void main(String ... args) {
final Test t = new Test();
new Thread(new Runnable(){ //thread A
#Override
public void run() {
Thread.sleep(3000);
t.a = 10;
}
}).start();
new Thread(new Runnable(){ //thread B
#Override
public void run() {
System.out.println("Value " + t.a);
}
}).start();
}
}
Thread A writes 10 into t.a and thread B tries to read it. Suppose thread A writes before thread B reads; then, when thread B reads, it loads the value from memory (it has not cached it in a register or cache yet), so it always gets the 10 written by thread A. And if thread A writes after thread B reads, thread B reads the initial value (0). So this example doesn't really show how volatile works or what difference it makes. But if we change the code like this:
class Test {
    volatile int a;

    public static void main(String ... args) {
        final Test t = new Test();
        new Thread(new Runnable(){ // thread A
            @Override
            public void run() {
                Thread.sleep(3000);
                t.a = 10;
            }
        }).start();
        new Thread(new Runnable(){ // thread B
            @Override
            public void run() {
                while (true) {
                    System.out.println("Value " + t.a);
                }
            }
        }).start();
    }
}
Without volatile, the printed value may stay at the initial value (0) even for reads that happen after thread A writes 10 into t.a, which breaks the expected happens-before behaviour. The reason is that the compiler may optimize the code and keep t.a in a register, using the register value every time instead of reading from cache or memory, which is of course much faster. But that also causes the visibility problem, because thread B never picks up the new value after another thread updates it.
In the above example, "volatile write happens-before volatile read" means that with volatile, thread B will see the new value of t.a as soon as thread A updates it. The compiler guarantees that every time thread B reads t.a, it reads from cache or memory instead of reusing a stale register value.