Multi-threading program to print numbers from 1 to 50? - java

I'm trying to write a program in which two threads are created and the output alternates: the first thread prints 1, the second thread prints 2, the first thread prints 3, and so on. I'm a beginner, so please explain clearly. I thought threads share the same memory, so they would share the i variable and print accordingly, but in the output I get Thread1: 1, Thread2: 1, Thread1: 2, Thread2: 2, and so on. Please help. Here is my code:
class me extends Thread
{
    public int name, i;

    public void run()
    {
        for (i = 1; i <= 50; i++)
        {
            System.out.println("Thread" + name + " : " + i);
            try
            {
                sleep(1000);
            }
            catch (Exception e)
            {
                System.out.println("some problem");
            }
        }
    }
}

public class he
{
    public static void main(String[] args)
    {
        me a = new me();
        me b = new me();
        a.name = 1;
        b.name = 2;
        a.start();
        b.start();
    }
}

First off, you should read the Java code conventions: http://www.oracle.com/technetwork/java/codeconventions-135099.html.
Secondly, instance fields are not shared memory: each `me` object gets its own copy. You need to explicitly pass a shared object (such as the counter) to both thread objects so that it actually becomes shared. Even that is still not enough, though: the shared value can be cached by each thread, so you will have race conditions. To solve this you need a lock, or an AtomicInteger.
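For the AtomicInteger route, here is a minimal sketch (class and thread names are illustrative, not from the original post). It guarantees that each number from 1 to 50 is printed exactly once, but it does not force the two threads to take strict turns, and the println calls may occasionally appear slightly out of order; for strict alternation you need a lock, as the next answer shows.

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(1);

        Runnable task = () -> {
            while (true) {
                int current = counter.getAndIncrement(); // atomic read-and-bump, so no value is handed out twice
                if (current > 50) {
                    break;
                }
                System.out.println(Thread.currentThread().getName() + " : " + current);
            }
        };

        new Thread(task, "Thread1").start();
        new Thread(task, "Thread2").start();
    }
}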

It seems what you want to do is:
Write all numbers from 1 to 50 to System.out
without any number being printed multiple times
with the numbers being printed in order
Have this execution be done by two concurrent threads
First, let's look at what is happening in your code: Each number is printed twice. The reason for this is that i is an instance variable of me, your Thread. So each Thread has its own i, i.e., they do not share the value.
To make the two threads share the same value, we need to pass the same value when constructing me. Now, doing so with the primitive int won't help us much, because by passing an int we are not passing a reference, hence the two threads will still work on independent memory locations.
Let us define a new class, Value which holds the integer for us: (Edit: The same could also be achieved by passing an array int[], which also holds the reference to the memory location of its content)
class Value {
    int i = 1;
}
Now, main can instantiate one object of type Value and pass the reference to it to both threads. This way, they can access the same memory location.
class Me extends Thread {
    final Value v;

    public Me(Value v) {
        this.v = v;
    }

    public void run() {
        for (; v.i < 50; v.i++) {
            // ...
        }
    }

    public static void main(String[] args) {
        Value valueInstance = new Value();
        Me a = new Me(valueInstance);
        Me b = new Me(valueInstance);
        a.start();
        b.start();
    }
}
Now i isn't printed twice each time. However, you'll notice that the behavior is still not as desired. This is because the operations are interleaved: a may read i, let's say, the value is 5. Next, b increments the value of i, and stores the new value. i is now 6. However, a did still read the old value, 5, and will print 5 again, even though b just printed 5.
To solve this, we must lock the instance v, i.e., the object of type Value. Java provides the keyword synchronized, which will hold a lock during the execution of all code inside the synchronized block. However, if you simply wrap your method body in a synchronized block, you still won't get what you desire. Assuming you write:
public void run() {
    synchronized (v) {
        for (; v.i < 50; v.i++) {
            // ...
        }
    }
}
Your first thread will acquire the lock, but never release it until the entire loop has been executed (which is when i has reached 50). Hence, you must release the lock somehow when it is safe to do so. Well... the only code in your run method that does not depend on i (and hence does not need to hold the lock) is sleep, which luckily is also where the thread spends most of its time.
Since everything is in the loop body, a simple synchronized block won't do. We can use Semaphore to acquire a lock. So, we create a Semaphore instance in the main method, and, similar to v, pass it to both threads. We can then acquire and release the lock on the Semaphore to let both threads have the chance to get the resource, while guaranteeing safety.
Here's the code that will do the trick:
import java.util.concurrent.Semaphore;

public class Me extends Thread {
    public int name;
    final Value v;
    final Semaphore lock;

    public Me(Value v, Semaphore lock) {
        this.v = v;
        this.lock = lock;
    }

    public void run() {
        try {
            lock.acquire();
            while (v.i <= 50) {
                System.out.println("Thread" + name + " : " + v.i);
                v.i++;
                lock.release();
                sleep(100);
                lock.acquire();
            }
            lock.release();
        } catch (Exception e) {
            System.out.println("some problem");
        }
    }

    public static void main(String[] args) {
        Value v = new Value();
        Semaphore lock = new Semaphore(1);
        Me a = new Me(v, lock);
        Me b = new Me(v, lock);
        a.name = 1;
        b.name = 2;
        a.start();
        b.start();
    }

    static class Value {
        int i = 1;
    }
}
Note: Since we are acquiring the lock at the end of the loop, we must also release it after the loop, or the resource will never be freed. Also, I changed the for-loop to a while loop, because we need to update i before releasing the lock for the first time, or the other thread can again read the same value.
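For comparison, the same alternation can also be expressed with the built-in synchronized/wait/notify mechanism instead of a Semaphore. This is only a sketch of that alternative (class names are illustrative, and it ignores spurious wake-ups for brevity):

public class AlternatePrinter extends Thread {
    private final Value v;
    private final int name;

    public AlternatePrinter(Value v, int name) {
        this.v = v;
        this.name = name;
    }

    @Override
    public void run() {
        synchronized (v) {
            while (v.i <= 50) {
                System.out.println("Thread" + name + " : " + v.i);
                v.i++;
                v.notify();                 // wake the other thread, which is waiting for its turn
                if (v.i <= 50) {
                    try {
                        v.wait();           // release the lock and wait until it is our turn again
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
            v.notify();                     // let the other thread leave its final wait
        }
    }

    static class Value { int i = 1; }

    public static void main(String[] args) {
        Value v = new Value();
        new AlternatePrinter(v, 1).start();
        new AlternatePrinter(v, 2).start();
    }
}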

Check the link below for the solution. Using multiple threads we can print the numbers in ascending order:
http://cooltekhie.blogspot.in/2017/06/#987628206008590221

Related

Java Selling Tickets with Multithreading

I have two threads to sell tickets.
public class MyThread {
    public static void main(String[] args) {
        Ticket ticket = new Ticket();

        Thread thread1 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                ticket.sell();
            }
        }, "A");
        thread1.start();

        Thread thread2 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                ticket.sell();
            }
        }, "B");
        thread2.start();
    }
}
class Ticket {
    private Integer num = 20;
    private Object obj = new Object();

    public void sell() {
        // why shouldn't I use "num" as a monitor object ?
        // I thought "num" is unique among two threads.
        synchronized (num) {
            if (this.num >= 0) {
                System.out.println(Thread.currentThread().getName() + " sells " + this.num + "th ticket");
                this.num--;
            }
        }
    }
}
The output will be wrong if I use num as a monitor object.
But if I use obj as a monitor object, the output will be correct.
What's the difference between using num and using obj ?
===============================================
And why does it still not work if I use (Object)num as a monitor object ?
class Ticket {
    private int num = 20;
    private Object obj = new Object();

    public void sell() {
        // Can I use (Object)num as a monitor object ?
        synchronized ((Object) num) {
            if (this.num >= 0) {
                System.out.println(Thread.currentThread().getName() + " sells " + this.num + "th ticket");
                this.num--;
            }
        }
    }
}
Integer is a boxed value. It contains a primitive int, and the compiler deals with autoboxing/autounboxing that int. Because of this, the statement this.num-- is actually:
num = Integer.valueOf(num.intValue() - 1)
That is, the num instance containing the lock is lost once you perform that update.
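To see that reassignment happen, here is a small illustrative check (not part of the original answer) comparing object identity before and after the decrement:

public class BoxedDecrementDemo {
    public static void main(String[] args) {
        Integer num = 20;
        Integer before = num;      // the object a synchronized (num) block would have locked on
        num--;                     // compiles to num = Integer.valueOf(num.intValue() - 1)
        System.out.println(before == num);  // false: num now refers to a different object,
                                            // so its monitor is a different monitor
    }
}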
The fundamental problem here is synchronizing on a non-final value.
The most important thing to understand about the Java Memory Model - that is, what values a thread sees whilst executing a Java program - is the happens-before relationship.
In the specific case of a synchronized block, actions done in one thread before exiting the synchronized block happen before actions done inside the synchronized block in another thread - so, if the first thread increments a variable inside that synchronized block, the second thread sees that updated value.
This goes over and above the well-known fact that a synchronized block can only be entered by one thread at a time: only one thread at a time and you get to see what the previous thread did.
// Thread 1:
synchronized (monitor) {
    num = 1;
}   // exiting the monitor ...
    //    ... *happens before* ...
    //        ... entering the monitor

// Thread 2:
synchronized (monitor) {
    int n = num;  // guaranteed to see n = 1 (provided no other thread has entered
                  // a block synchronized on monitor and changed it first)
}
There is a very important caveat to this guarantee: it only holds if the two executions of the synchronized block use the same monitor. And that's not the same variable, it's the same actual concrete object on the heap (variables don't have monitors, they're just pointers to a value in the heap).
So, if you reassign the monitor inside the synchronized block:
synchronized (num) {
    if (num > 0) {
        num--; // This is the same as `num = Integer.valueOf(num.intValue() - 1);`
    }
}
then you are destroying the happens-before guarantee, because the next thread to arrive at that synchronized block is entering the monitor of a different object (*).
Once you do, the behavior of your program is ill-defined: if you're lucky, it fails in an obvious way; if you're very unlucky, it can seem to work, and then start failing mysteriously at a later date.
Your code is just broken.
This isn't something that's specific to Integers either: this code would have the same problem.
// Assume `Object someObject = new Object();` is defined as a field.
synchronized (someObject) {
    someObject = new Object();
}
(*) Actually, you still get a happens-before relationship for the new object: it's just not for the things inside this synchronized block, it's for things that happened in some other synchronized block that used the object as the monitor. Essentially, it's impossible to reason about what this means, so you may as well just consider it "broken".
The correct way to do it is to synchronize on a field that you can't (not just don't) reassign. You could simply synchronize on this (which can't be reassigned):
synchronized (this) {
    if (num > 0) {
        num--; // This is the same as `num = Integer.valueOf(num.intValue() - 1);`
    }
}
Now it doesn't matter that you're reassigning num inside the block, because you're not synchronizing on it any more. You get the happens-before guarantee from the fact that you're always synchronizing on the same thing.
Note, however, that you must always access num from inside a synchronized block - for example, if you have a getter to get the number of tickets remaining, that must also synchronize on this, in order to get the happens-before guarantee that the value changed in the sell() method is visible in that getter.
This works, but it may not be entirely desirable: anybody who has access to a reference to your Ticket instance can also synchronize on it. This means they can potentially deadlock your code.
Instead, it is a common practice to introduce a private field which is used purely for locking: this is what the obj field gives you. The only modification from your code should be to make it final (and give it a better name than obj):
private final Object obj = new Object();
This can't be accessed outside your class, so nefarious clients cannot cause a deadlock for you directly.
Again, this can't be reassigned inside your synchronized block (or anywhere else), so there is no risk of you breaking the happens-before guarantee by reassigning it.
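Putting the advice together, a minimal corrected sketch of the Ticket class could look like this (the lock field name and the getter are illustrative additions): lock on a private final object and keep every access to num inside a block synchronized on it.

class Ticket {
    private int num = 20;
    private final Object lock = new Object();

    public void sell() {
        synchronized (lock) {
            if (num > 0) {
                System.out.println(Thread.currentThread().getName()
                        + " sells " + num + "th ticket");
                num--;
            }
        }
    }

    public int remaining() {
        synchronized (lock) {   // same monitor, so the latest num written in sell() is visible here
            return num;
        }
    }
}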

Understanding Multi-Threading in Java

I am learning multithreading in Java. The problem statement is: suppose there is a data structure that can contain millions of Integers, and I want to search it for a key. I want to use 2 threads, so that if either thread finds the key, it sets a shared boolean flag and both threads stop further processing.
Here is what I am trying:
public class Test implements Runnable {
    private List<Integer> list;
    private Boolean value;
    private int key = 27;

    public Test(List<Integer> list, boolean value) {
        this.list = list;
        this.value = value;
    }

    @Override
    public void run() {
        synchronized (value) {
            if (value) {
                Thread.currentThread().interrupt();
            }
            for (int i = 0; i < list.size(); i++) {
                if (list.get(i) == key) {
                    System.out.println("Found by: " + Thread.currentThread().getName());
                    value = true;
                    Thread.currentThread().interrupt();
                }
                System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
            }
        }
    }
}
And main class is:
public class MainClass {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>(101);
        for (int i = 0; i <= 100; i++) {
            list.add(i);
        }
        Boolean value = false;
        Thread t1 = new Thread(new Test(list.subList(0, 49), value));
        t1.setName("Thread 1");
        Thread t2 = new Thread(new Test(list.subList(50, 99), value));
        t2.setName("Thread 2");
        t1.start();
        t2.start();
    }
}
What I am expecting:
Both threads will run concurrently, and when either thread encounters 27, both threads will be interrupted. So thread 1 should not be able to process all of its input, and likewise thread 2.
But, what is happening:
Both threads are completing the loop and thread 2 is always starting after Thread 1 completes.
Please highlight the mistakes, I am still learning threading.
My next practice question will be: accessing a shared resource one thread at a time.
You are wrapping your whole block of code in a synchronized block on the object value. What this means is that, once execution reaches the synchronized block, the first thread will hold the monitor of object value and any subsequent thread will block until the monitor is released.
Note how the whole block:
synchronized (value) {
    if (value) {
        Thread.currentThread().interrupt();
    }
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            value = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
is wrapped within a synchronized block meaning that only one thread can run that block at once, contrary to your objective.
In this context, I believe you are misunderstanding the principles behind synchronization and "sharing variables". To clarify:
static - the modifier that makes a variable belong to the class rather than to each object (i.e. a class variable), so every instance shares the same static variable.
volatile - the modifier that guarantees visibility: reads and writes of the variable go to main memory, so threads always see the latest written value. Note that you can still access a variable without this modifier from different threads (this is, however, dangerous and can lead to race conditions). Threads have no effect on the scope of variables (unless you use a ThreadLocal).
I would just like to add that you can't put volatile everywhere and expect code to be thread-safe; volatile does not make compound operations such as check-then-act atomic. I suggest you read Oracle's guide on synchronization for a more in-depth review of how to establish thread safety.
In your case, I would remove the synchronization block and declare the shared boolean as a:
private static volatile Boolean value;
Additionally, the task you are trying to perform right now is something a Fork/Join pool is built for. I suggest reading this part of Oracle's java tutorials to see how a Fork/Join pool is used in a divide-and-conquer approach.
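To make that concrete, here is a hedged Fork/Join sketch (class, method, and threshold names are illustrative, not from the original post): split the list until the pieces are small, search the halves in parallel, and combine the results.

import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

class SearchTask extends RecursiveTask<Boolean> {
    private static final int THRESHOLD = 1_000;
    private final List<Integer> list;
    private final int key;

    SearchTask(List<Integer> list, int key) {
        this.list = list;
        this.key = key;
    }

    @Override
    protected Boolean compute() {
        if (list.size() <= THRESHOLD) {
            return list.contains(key);                  // small enough: search directly
        }
        int mid = list.size() / 2;
        SearchTask left = new SearchTask(list.subList(0, mid), key);
        SearchTask right = new SearchTask(list.subList(mid, list.size()), key);
        left.fork();                                    // search the left half asynchronously
        return right.compute() || left.join();          // search the right half, then combine
    }

    public static void main(String[] args) {
        List<Integer> data = IntStream.rangeClosed(0, 1_000_000)
                .boxed().collect(Collectors.toList());
        boolean found = ForkJoinPool.commonPool().invoke(new SearchTask(data, 27));
        System.out.println("found = " + found);
    }
}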
By wrapping the main logic of your thread in a synchronized block, execution of the code in that block becomes mutually exclusive. Thread 1 will enter the block, acquiring a lock on "value" and run the entire loop before returning the lock and allowing Thread 2 to run.
If you were to wrap only the checking and setting of the flag "value", then both threads should run the code concurrently.
EDIT: As other people have discussed making "value" a static volatile boolean within the Test class, and not using the synchronized block at all, would also work. This is because access to volatile variables occurs as if it were in a synchronized block.
Reference: https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
You should not obtain a lock on the found flag - that will just make sure only one thread can run. Instead make the flag static so it is shared and volatile so it cannot be cached.
Also, you should check the flag more often.
private List<Integer> list;
private int key = 27;
private static volatile boolean found;

public Test(List<Integer> list, boolean value) {
    this.list = list;
    this.found = value;
}

@Override
public void run() {
    for (int i = 0; i < list.size(); i++) {
        // Has the other thread found it?
        if (found) {
            // note: interrupt() only sets this thread's interrupt flag;
            // by itself it does not break out of the loop
            Thread.currentThread().interrupt();
        }
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            // I found it!
            found = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
BTW: Both of your threads start at 0 and walk up the array - I presume you do this in this code as a demonstration and you either have them work from opposite ends or they walk at random.
Make boolean value static so both threads can access and edit the same variable. You then don't need to pass it in. Then as soon as one thread changes it to true, the second thread will also stop since it is using the same value.

Multiple Threads accessing instance method from different Instances should cause a race condition?

I am trying to understand synchronized in Java.
I understand that if I access a synchronized method on the same object from 2 different threads, only one of them can execute it at a time.
But I think that if the same method is called on 2 different instances, both objects should be able to execute it in parallel, which would cause a race condition when accessing/modifying a static member variable from the method. However, I am not able to see the race condition happening in the code below.
Could someone please explain what's wrong with the code or with my understanding.
For reference code is accessible at : http://ideone.com/wo6h4R
class MyClass
{
    public static int count = 0;

    public int getCount()
    {
        System.out.println("Inside getcount()");
        return count;
    }

    public synchronized void incrementCount()
    {
        count = count + 1;
    }
}

class Ideone
{
    public static void main(String[] args) throws InterruptedException {
        final MyClass test1 = new MyClass();
        final MyClass test2 = new MyClass();

        Thread t1 = new Thread() {
            public void run()
            {
                int k = 0;
                while (k++ < 50000000)
                {
                    test1.incrementCount();
                }
            }
        };

        Thread t2 = new Thread() {
            public void run()
            {
                int l = 0;
                while (l++ < 50000000)
                {
                    test2.incrementCount();
                }
            }
        };

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        //System.out.println(t2.getState());
        int x = 500000000 + 500000000;
        System.out.println(x);
        System.out.println("count = " + MyClass.count);
    }
}
You're right that the race condition exists. But the racy operations are so quick that they're unlikely to happen -- and the synchronized keywords are likely providing synchronization "help" that, while not required by the JLS, hide the races.
If you want to make it a bit more obvious, you can "spell out" the count = count + 1 code and put in a sleep:
public synchronized void incrementCount()
{
    int tmp = count + 1;
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    count = tmp;
}
That should show the races more easily. (My handling of the interrupted exception is not good for production code, btw; but it's good enough for small test apps like this.)
The lesson learned here is: race conditions can be really hard to catch through testing, so it's best to really understand the code and prove to yourself that it's right.
Since synchronized instance methods actually synchronize on this, methods called on different instances will lock on different objects; the threads therefore don't block each other, and you get the race condition.
You probably have to make your own lock object and lock on that.
class MyClass
{
    public static int count = 0;

    // this is what you lock on
    private static Object lock = new Object();

    public int getCount()
    {
        synchronized (lock) {
            System.out.println("Inside getcount()");
            return count;
        }
    }

    public void incrementCount()
    {
        synchronized (lock) {
            count = count + 1;
        }
    }

    // etc
Now when you run your main, this gets printed out:
1000000000
count = 100000000
Here's the relevant section of the Java specification:
"A synchronized method acquires a monitor (ยง17.1) before it executes. For a class (static) method, the monitor associated with the Class object for the method's class is used. For an instance method, the monitor associated with this (the object for which the method was invoked) is used."
However I fail to see where the MyClass' instances are actually incrementing "count" so what exactly are you expecting to show as a race condition?
(Taken originally from this answer)

why is this thread safe?

Because it always prints out '3'. Is no synchronization needed? I am testing this simple thing because I am having trouble in a real multithreaded program, which is too large to be good for illustrating the problem. This is a simplified version to showcase the situation.
class Test {
    public static int count = 0;

    class CountThread extends Thread {
        public void run()
        {
            count++;
        }
    }

    public void add() {
        CountThread a = new CountThread();
        CountThread b = new CountThread();
        CountThread c = new CountThread();

        a.start();
        b.start();
        c.start();

        try {
            a.join();
            b.join();
            c.join();
        } catch (InterruptedException ex) {
            ex.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Test test = new Test();
        System.out.println("START = " + Test.count);
        test.add();
        System.out.println("END: Account balance = " + Test.count);
    }
}
Because it always prints out '3'. No synchronization needed?
It is not thread safe and you are just getting lucky. If you run this 1000 times, or on different architectures, you will see different output -- i.e. not 3.
I would suggest using AtomicInteger instead of a static field ++ which is not synchronized.
public static AtomicInteger count = new AtomicInteger();
...
public void run() {
    count.incrementAndGet();
}
...
It seems to me that count++ is fast enough to finish before run is invoked on the other thread, so basically it runs sequentially.
But if this were a real-life example and two different threads were using CountThread in parallel, then yes, you would have a synchronization problem.
To verify that, you can print some test output before count++ and after; then you'll see whether b.start() invokes count++ before a.start() has finished. The same goes for c.start().
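A minimal sketch of that kind of instrumentation (the messages are illustrative) would be to modify CountThread like this:

class CountThread extends Thread {
    public void run() {
        // print the value seen before and after the unsynchronized increment
        System.out.println(getName() + " read  count = " + Test.count);
        Test.count++;
        System.out.println(getName() + " wrote count = " + Test.count);
    }
}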
Consider using AtomicInteger instead, which is way better than synchronizing when possible. From its Javadoc:
public final int incrementAndGet()
    Atomically increments by one the current value.
This code is not thread-safe:
public static int count = 0;

class CountThread extends Thread {
    public void run()
    {
        count++;
    }
}
You can run this code a million times on one system and it might pass every time. This does not mean is it is thread-safe.
Consider a system where the value in count is copied to multiple processor caches. They all might be updated independently before something forces one of the caches to be copied back to main RAM. Consider that ++ is not an atomic operation. The order of reading and writing of count may cause data to be lost.
The correct way to implement this code (using Java 5 and above):
public static java.util.concurrent.atomic.AtomicInteger count =
new java.util.concurrent.atomic.AtomicInteger();
class CountThread extends Thread {
public void run()
{
count.incrementAndGet();
}
}
It's not thread safe just because the output is right. Creating a thread causes a lot of overhead on the OS side of things, and after that it's just to be expected that that single line of code will be done within a single timeslice. It's not thread safe by any means, just not enough potential conflicts to actually trigger one.
It is not thread safe.
It just happened to be way too short to have a measurable chance of showing the issue. Consider counting to a much higher number (1000000?) in run to increase the chance of two operations on different threads overlapping.
Also make sure your machine is not a single-core CPU...
To make the class thread-safe, use AtomicInteger, or rewrite it like this (my preference). Note that merely making count volatile is not enough here: volatile only forces memory fences between threads (visibility), while count++ remains a separate read and write, so increments can still be lost.
class CountThread extends Thread {
    private static final Object lock = new Object();

    public void run()
    {
        synchronized (lock) {
            count++;
        }
    }
}

Volatile in java

As far as I know volatile write happens-before volatile read, so we always will see the freshest data in volatile variable. My question basically concerns the term happens-before and where does it take place? I wrote a piece of code to clarify my question.
class Test {
    volatile int a;

    public static void main(String ... args) {
        final Test t = new Test();

        new Thread(new Runnable() {
            @Override
            public void run() {
                Thread.sleep(3000);
                t.a = 10;
            }
        }).start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("Value " + t.a);
            }
        }).start();
    }
}
(try catch block is omitted for clarity)
In this case I always see the value 0 printed on the console. Without Thread.sleep(3000); I always see the value 10. Is this a case of the happens-before relationship, or does it print 'Value 10' simply because thread 1 starts a bit earlier than thread 2?
It would be great to see an example where the behaviour of the code with and without a volatile variable differs on every run, because the result of the code above depends only (at least in my case) on the order in which the threads run and on the sleep.
You see the value 0 because the read is executed before the write. And you see the value 10 because the write is executed before the read.
If you want to have a test with more unpredictable output, you should have both of your threads await a CountDownLatch, to make them start concurrently:
final CountDownLatch latch = new CountDownLatch(1);

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            latch.await();
            t.a = 10;
        }
        catch (InterruptedException e) {
            // end the thread
        }
    }
}).start();

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            latch.await();
            System.out.println("Value " + t.a);
        }
        catch (InterruptedException e) {
            // end the thread
        }
    }
}).start();

Thread.sleep(321); // go
latch.countDown();
The happens-before relationship really means that a write happens before any subsequent read. If the write has not occurred yet, there is no relationship. Since the writer thread is sleeping, the read is executed before the write occurs.
To observe the relationship in action you can use two variables, one volatile and one not. The JMM says that a write to a non-volatile variable made before a volatile write happens-before the subsequent volatile read (and is therefore visible after it).
For instance
volatile int a = 0;
int b = 0;

// Thread 1:
b = 10;
a = 1;

// Thread 2:
while (a != 1);
if (b != 10)
    throw new IllegalStateException();
The Java Memory Model says that b should always equal 10 because the non-volatile store occurs before the volatile store. And all writes that occur in one thread before a volatile store happen-before all subsequent volatile loads.
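Here is a runnable version of that sketch (the class name and messages are illustrative). With a declared volatile it should never throw; if you remove volatile, the reader may spin forever or, in principle, observe a stale b.

public class HappensBeforeDemo {
    static volatile int a = 0;
    static int b = 0;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            b = 10;   // ordinary write...
            a = 1;    // ...published by the volatile write that follows it
        });
        Thread reader = new Thread(() -> {
            while (a != 1) { }          // spin until the volatile write becomes visible
            if (b != 10) {
                throw new IllegalStateException("saw stale b = " + b);
            }
            System.out.println("b = " + b);
        });
        reader.start();
        writer.start();
    }
}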
I've re-phrased (changes in bold fonts) the happens-before rule mentioned in the first sentence of your question as below so that it could be understood better -
"write of the value of a volatile variable to the main memory happens-before any subsequent read of that varible from main memory".
Also it is important to note that volatile writes/reads always happen to/from main memory and NOT to/from any local memory resources like registers, processor caches etc.
The practical implication of the above happens-before rule is that all threads sharing a volatile variable will always see a consistent value for it: no two threads see different values of that variable at any given point in time.
On the contrary, threads sharing a non-volatile variable may see different values at a given point in time unless access to it is synchronized by some other mechanism, such as a synchronized block/method, the final keyword, etc.
Now coming back to your question on this happens-before rule, I think you've slightly misunderstood it. The rule does not dictate that the write code always executes before the read code. Rather, it dictates that if the write code (a volatile write) is executed in one thread before the read code in another thread, then the effect of that write must have reached main memory before the read code executes, so that the read sees the latest value.
In the absence of volatile (or any other synchronization mechanism), this happens-before is not mandatory, so a reader thread might see a stale value of a non-volatile variable even though it was recently written by a different writer thread, because the writer thread can keep the value in its local copy and need not have flushed it to main memory.
Hope the above explanation is clear :)
Don't get stuck on the term 'happens-before'. It is a relation between events that the JVM uses when scheduling read/write operations; at this stage it won't help you understand volatile. The point is: the JVM orders all read/write operations and may order them however it wants (obeying, of course, all synchronized, lock, wait, etc. constraints).
And now: if a variable is volatile, then any read operation will see the result of the latest write operation. If the variable is not volatile, that is not guaranteed (across different threads). That's all.
piotrek is right, here is the test:
class Test {
    volatile int a = 0;

    public static void main(String ... args) {
        final Test t = new Test();

        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(3000);
                } catch (Exception e) {}
                t.a = 10;
                System.out.println("now t.a == 10");
            }
        }).start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                while (t.a == 0) {}
                System.out.println("Loop done: " + t.a);
            }
        }).start();
    }
}
with volatile: it will always end
without volatile: it will never end
From wiki:
In Java specifically, a happens-before relationship is a guarantee that memory written to by statement A is visible to statement B, that is, that statement A completes its write before statement B starts its read.
So if thread A writes t.a with the value 10 and thread B reads t.a some time later, the happens-before relationship guarantees that thread B must read the value 10 written by thread A, not any other value. It's natural, just like Alice buying milk and putting it in the fridge, then Bob opening the fridge and seeing the milk. However, a running program usually doesn't access main memory directly; that's too slow. Instead, it gets data from a register or cache to save time, and loads from memory only when a cache miss happens. That is where the problem arises.
Let's see the code in the question:
class Test {
    volatile int a;

    public static void main(String ... args) {
        final Test t = new Test();

        new Thread(new Runnable() { // thread A
            @Override
            public void run() {
                Thread.sleep(3000);
                t.a = 10;
            }
        }).start();

        new Thread(new Runnable() { // thread B
            @Override
            public void run() {
                System.out.println("Value " + t.a);
            }
        }).start();
    }
}
Thread A writes 10 into t.a and thread B tries to read it. Suppose thread A writes before thread B reads: when thread B reads, it loads the value from memory (it hasn't cached it in a register or cache yet), so it always gets the 10 written by thread A. If thread A writes after thread B reads, thread B reads the initial value (0). So this example doesn't show how volatile works or what difference it makes. But if we change the code like this:
class Test {
    volatile int a;

    public static void main(String ... args) {
        final Test t = new Test();

        new Thread(new Runnable() { // thread A
            @Override
            public void run() {
                Thread.sleep(3000);
                t.a = 10;
            }
        }).start();

        new Thread(new Runnable() { // thread B
            @Override
            public void run() {
                while (true) {
                    System.out.println("Value " + t.a);
                }
            }
        }).start();
    }
}
Without volatile, the printed value could stay at the initial value (0) even after thread A has written 10 into t.a, which violates the happens-before relationship. The reason is that the compiler may optimize the code and keep t.a in a register, using the register value every time instead of reading from cache or memory. That is of course much faster, but it also breaks the happens-before relationship, because thread B cannot see the right value after another thread updates it.
In the example above, "volatile write happens-before volatile read" means that with volatile, thread B will see the correct value of t.a as soon as thread A has updated it: the compiler guarantees that every time thread B reads t.a, it reads from cache or memory instead of reusing a stale register value.
