Java Selling Tickets with Multithreading

I have two threads to sell tickets.
public class MyThread {
    public static void main(String[] args) {
        Ticket ticket = new Ticket();
        Thread thread1 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                ticket.sell();
            }
        }, "A");
        thread1.start();
        Thread thread2 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                ticket.sell();
            }
        }, "B");
        thread2.start();
    }
}
class Ticket {
    private Integer num = 20;
    private Object obj = new Object();

    public void sell() {
        // why shouldn't I use "num" as a monitor object?
        // I thought "num" is unique among the two threads.
        synchronized (num) {
            if (this.num >= 0) {
                System.out.println(Thread.currentThread().getName() + " sells " + this.num + "th ticket");
                this.num--;
            }
        }
    }
}
The output will be wrong if I use num as a monitor object.
But if I use obj as a monitor object, the output will be correct.
What's the difference between using num and using obj?
===============================================
And why does it still not work if I use (Object)num as a monitor object?
class Ticket {
    private int num = 20;
    private Object obj = new Object();

    public void sell() {
        // Can I use (Object)num as a monitor object?
        synchronized ((Object) num) {
            if (this.num >= 0) {
                System.out.println(Thread.currentThread().getName() + " sells " + this.num + "th ticket");
                this.num--;
            }
        }
    }
}

Integer is a boxed value. It contains a primitive int, and the compiler deals with autoboxing/unboxing that int. Because of this, the statement this.num-- is actually:
num = Integer.valueOf(num.intValue() - 1);
That is, the num instance whose monitor you locked is lost once you perform that update.
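A quick standalone sketch (not part of the ticket code) makes the reassignment visible: the object referenced by num changes identity after the decrement.
public class BoxedDecrement {
    public static void main(String[] args) {
        Integer num = 20;
        Integer before = num;   // keep a reference to the object the lock would have been taken on
        num--;                  // desugars to num = Integer.valueOf(num.intValue() - 1)
        System.out.println(before == num);  // false: num now refers to a different Integer object
        System.out.println(num);            // 19
    }
}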

The fundamental problem here is synchronizing on a non-final value.
The most important thing to understand about the Java Memory Model - that is, what values a thread sees whilst executing a Java program - is the happens-before relationship.
In the specific case of a synchronized block, actions done in one thread before exiting the synchronized block happen before actions done inside the synchronized block in another thread - so, if the first thread increments a variable inside that synchronized block, the second thread sees that updated value.
This goes over and above the well-known fact that a synchronized block can only be entered by one thread at a time: only one thread at a time and you get to see what the previous thread did.
// Thread 1:
synchronized (monitor) {
    num = 1;
}
// Exiting the monitor in thread 1 *happens before* entering the monitor in thread 2.

// Thread 2:
synchronized (monitor) {
    int n = num; // Guaranteed to see n == 1 (provided no other thread has entered a block
                 // synchronized on monitor and changed it first).
}
There is a very important caveat to this guarantee: it only holds if the two executions of the synchronized block use the same monitor. And that's not the same variable, it's the same actual concrete object on the heap (variables don't have monitors, they're just pointers to a value in the heap).
So, if you reassign the monitor inside the synchronized block:
synchronized (num) {
    if (num > 0) {
        num--; // This is the same as `num = Integer.valueOf(num.intValue() - 1);`
    }
}
then you are destroying the happens-before guarantee, because the next thread to arrive at that synchronized block is entering the monitor of a different object (*).
Once you do, the behavior of your program is ill-defined: if you're lucky, it fails in an obvious way; if you're very unlucky, it can seem to work, and then start failing mysteriously at a later date.
Your code is just broken.
This isn't something that's specific to Integers either: this code would have the same problem.
// Assume `Object someObject = new Object();` is defined as a field.
synchronized (someObject) {
    someObject = new Object();
}
(*) Actually, you still get a happens-before relationship for the new object: it's just not for the things inside this synchronized block, it's for things that happened in some other synchronized block that used the object as the monitor. Essentially, it's impossible to reason about what this means, so you may as well just consider it "broken".
The correct way to do it is to synchronize on a field that you can't (not just don't) reassign. You could simply synchronize on this (which can't be reassigned):
synchronized (this) {
    if (num > 0) {
        num--; // This is the same as `num = Integer.valueOf(num.intValue() - 1);`
    }
}
Now it doesn't matter that you're reassigning num inside the block, because you're not synchronizing on it any more. You get the happens-before guarantee from the fact that you're always synchronizing on the same thing.
Note, however, that you must always access num from inside a synchronized block - for example, if you have a getter to get the number of tickets remaining, that must also synchronize on this, in order to get the happens-before guarantee that the value changed in the sell() method is visible in that getter.
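For example, such a getter (getRemainingTickets is just an illustrative name, not from the original code) would look like this, using the same monitor as sell():
public int getRemainingTickets() {
    synchronized (this) {  // same monitor as sell(), so the decrement is guaranteed to be visible here
        return num;
    }
}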
This works, but it may not be entirely desirable: anybody who has access to a reference to your Ticket instance can also synchronize on it. This means they can potentially deadlock your code.
Instead, it is a common practice to introduce a private field which is used purely for locking: this is what the obj field gives you. The only modification from your code should be to make it final (and give it a better name than obj):
private final Object obj = new Object();
This can't be accessed outside your class, so nefarious clients cannot cause a deadlock for you directly.
Again, this can't be reassigned inside your synchronized block (or anywhere else), so there is no risk of you breaking the happens-before guarantee by reassigning it.
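Putting the advice together, here is a minimal sketch of a corrected Ticket class; it uses a plain int (to avoid the boxing issue entirely), a private final lock object, and assumes the count is meant to stop at zero:
class Ticket {
    private int num = 20;                      // plain int: no boxing, no accidental reassignment of a monitor
    private final Object lock = new Object();  // dedicated, final lock object

    public void sell() {
        synchronized (lock) {
            if (num > 0) {
                System.out.println(Thread.currentThread().getName() + " sells ticket " + num);
                num--;
            }
        }
    }

    public int remaining() {
        synchronized (lock) {                  // reads use the same lock to get the visibility guarantee
            return num;
        }
    }
}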


Why does wait(100) cause synchronized method to fail in multi threaded?

I am referencing an article from Baeldung.com. Unfortunately, the article does not explain why this code is not thread-safe.
My goal is to understand how to create a thread-safe method with the synchronized keyword.
My actual result is: The count value is 1.
package NotSoThreadSafe;

public class CounterNotSoThreadSafe {
    private int count = 0;

    public int getCount() { return count; }

    // synchronized specifies that the method can only be accessed by 1 thread at a time.
    public synchronized void increment() throws InterruptedException {
        int temp = count;
        wait(100);
        count = temp + 1;
    }
}
My expected result is: The count value should be 10 because:
I created 10 threads in a pool.
I executed Counter.increment() 10 times.
I make sure I only test after the CountDownLatch has reached 0.
Therefore, it should be 10. However, if you release the lock of synchronized using Object.wait(100), the method becomes not thread-safe.
package NotSoThreadSafe;

import org.junit.jupiter.api.Test;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CounterNotSoThreadSafeTest {
    @Test
    void incrementConcurrency() throws InterruptedException {
        int numberOfThreads = 10;
        ExecutorService service = Executors.newFixedThreadPool(numberOfThreads);
        CountDownLatch latch = new CountDownLatch(numberOfThreads);
        CounterNotSoThreadSafe counter = new CounterNotSoThreadSafe();
        for (int i = 0; i < numberOfThreads; i++) {
            service.execute(() -> {
                try {
                    counter.increment();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                latch.countDown();
            });
        }
        latch.await();
        assertEquals(numberOfThreads, counter.getCount());
    }
}
This code has both of the classical concurrency problems: a race condition (a semantic problem) and a data race (a memory-model-related problem).
Object.wait() releases the object's monitor, so another thread can enter the synchronized block/method while the current one is waiting. The author's intention was obviously to make the method atomic, but Object.wait() breaks that atomicity. As a result, if we call .increment() from, say, 10 threads simultaneously and each thread calls the method 100_000 times, we almost always get count < 10 * 100_000, which isn't what we want. This is a race condition, a logical/semantic problem. We can rephrase the code: since we release the monitor (which is equivalent to exiting the synchronized block), the code effectively works as two separate synchronized parts:
public void increment() {
    int temp = incrementPart1();
    incrementPart2(temp);
}

private synchronized int incrementPart1() {
    int temp = count;
    return temp;
}

private synchronized void incrementPart2(int temp) {
    count = temp + 1;
}
and, therefore, our increment does not increment the counter atomically. Now, assume the 1st thread calls incrementPart1, then the 2nd thread calls incrementPart1, then the 2nd thread calls incrementPart2, and finally the 1st thread calls incrementPart2. We made 2 calls to increment(), but the result is 1, not 2.
Another problem is a data race. The Java Memory Model (JMM) is described in the Java Language Specification (JLS): https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5. The JMM introduces a happens-before (HB) order between actions such as volatile memory writes/reads and operations on an object's monitor. HB gives us the guarantee that a value written by one thread will be visible to another. The rules for obtaining these guarantees are also known as the safe publication rules. The most common/useful ones are:
Publish the value/reference via a volatile field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5), or, as a consequence of this rule, via the AtomicX classes
Publish the value/reference through a properly locked field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5)
Use a static initializer to do the initializing stores (http://docs.oracle.com/javase/specs/jls/se11/html/jls-12.html#jls-12.4)
Initialize the value/reference into a final field, which leads to the freeze action (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.5)
So, to make the counter correctly visible (as defined by the JMM), we must either make it volatile
private volatile int count = 0;
or do the read under the same object monitor's synchronization:
public synchronized int getCount() { return count; }
I'd say that in practice, on Intel processors, you will read the correct value even without any of these additional efforts, with just a plain read, because TSO (Total Store Ordering) is implemented there. But on a more relaxed architecture, such as ARM, you will get the problem. Follow the JMM formally to be sure your code is really thread-safe and doesn't contain any data races.
Why is int temp = count; wait(100); count = temp + 1; not thread-safe? One possible flow:
The first thread reads count (0), saves it in temp for later, and waits, releasing the lock and allowing the second thread to run;
the second thread reads count (also 0), saves it in temp, and waits, eventually allowing the first thread to continue;
the first thread increments the value from temp and saves it in count (1);
but the second thread still holds the old value of count (0) in temp - eventually it will run and store temp + 1 (1) into count, not incrementing the new value.
(Very simplified, just considering 2 threads.)
In short: wait() releases the lock, allowing other (synchronized) methods to run.
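For comparison, a minimal sketch of a counter that stays thread-safe; the key point from the answers above is to keep the read-modify-write entirely inside the lock and drop the wait(100) (if a pause is really wanted, it belongs outside the synchronized section):
public class CounterThreadSafe {
    private int count = 0;

    public synchronized void increment() {
        count++;              // the read-modify-write happens entirely under the lock
    }

    public synchronized int getCount() {
        return count;         // synchronized read: guaranteed to see the latest value
    }
}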

In which cases we need to synchronize a method?

Let's say I have the following code in Java
public class SynchronizedCounter {
    private int c = 0;

    public synchronized void increment() {
        c++;
    }
}
And I create two threads T1 and T2
Thread T1 = new Thread(c1);
Thread T2 = new Thread(c2);
Where c1 and c2 are two different instances of the class SynchronizedCounter.
Is it really needed to synchronize the method increment? I know that when we use a synchronized method, the thread holds a lock on the object, so other threads cannot acquire the lock on the same object, but threads "associated" with other objects can execute that method without problems. Now, since I have only one thread associated with the object c1, is it still necessary to use the synchronized method, even if no other thread associated with the same object exists?
In your specific example, synchronized is not needed because each thread has its own instance of the class, so there is no data "sharing" between them.
If you change your example to:
Thread T1 = new Thread(c);
Thread T2 = new Thread(c);
Then you need to synchronize the method because the ++ operation is not atomic and the instance is shared between threads.
The bottom line is that your class is not thread safe without synchronized. If you never use a single instance across threads it doesn't matter. There are plenty of legitimate use cases for classes which are not thread safe. But as soon as you start sharing them between threads all bets are off (i.e. vicious bugs may appear randomly).
The given code/example does not need synchronization since it uses two distinct instances (and thus two distinct variables). But if you have one instance shared between two or more threads, synchronization is needed, despite comments stating otherwise.
Actually, it is very simple to create a program that shows this behavior (I removed synchronized and added code to call the method from two threads):
public class SynchronizedCounter {
    private int c = 0;

    public void increment() {
        c++;
    }

    public static void main(String... args) throws Exception {
        var counter = new SynchronizedCounter();
        var t1 = create(100_000, counter);
        var t2 = create(100_000, counter);
        t1.start();
        t2.start();
        // wait termination of both threads
        t1.join();
        t2.join();
        System.out.println(counter.c);
    }

    private static Thread create(int count, SynchronizedCounter counter) {
        return new Thread(() -> {
            for (var i = 0; i < count; i++) {
                counter.increment();
            }
            System.out.println(counter.c);
        });
    }
}
Eventually (often?) this will result in weird numbers like:
C:\TMP>java SynchronizedCounter.java
122948
136644
136644
Add synchronized back and the output should always end with 200000:
C:\TMP>java SynchronizedCounter.java
170134
200000
200000
Apparently the posted code is not complete: the incremented variable is private and there is no method to retrieve the incremented value, so it is impossible to really know whether the method must be synchronized or not.
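For completeness, a counter like this can also be made thread-safe without synchronized by using AtomicInteger; a minimal sketch:
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger c = new AtomicInteger();

    public void increment() {
        c.incrementAndGet();   // atomic read-modify-write, no explicit lock needed
    }

    public int get() {
        return c.get();
    }

    public static void main(String... args) throws Exception {
        AtomicCounter counter = new AtomicCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());   // always prints 200000
    }
}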

Understanding Multi-Threading in Java

I am learning multithreading in Java. The problem statement is: Suppose there is a data structure that can contain millions of Integers; I want to search it for a key. I want to use 2 threads so that if either thread finds the key, it sets a shared boolean flag and both threads stop further processing.
Here is what I am trying:
public class Test implements Runnable {
    private List<Integer> list;
    private Boolean value;
    private int key = 27;

    public Test(List<Integer> list, boolean value) {
        this.list = list;
        this.value = value;
    }

    @Override
    public void run() {
        synchronized (value) {
            if (value) {
                Thread.currentThread().interrupt();
            }
            for (int i = 0; i < list.size(); i++) {
                if (list.get(i) == key) {
                    System.out.println("Found by: " + Thread.currentThread().getName());
                    value = true;
                    Thread.currentThread().interrupt();
                }
                System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
            }
        }
    }
}
And main class is:
public class MainClass {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>(101);
        for (int i = 0; i <= 100; i++) {
            list.add(i);
        }
        Boolean value = false;
        Thread t1 = new Thread(new Test(list.subList(0, 49), value));
        t1.setName("Thread 1");
        Thread t2 = new Thread(new Test(list.subList(50, 99), value));
        t2.setName("Thread 2");
        t1.start();
        t2.start();
    }
}
What I am expecting:
Both threads will run concurrently, and when either thread encounters 27, both threads will be interrupted. So thread 1 should not be able to process all of its inputs, and similarly for thread 2.
But, what is happening:
Both threads complete the loop, and thread 2 always starts after thread 1 completes.
Please highlight the mistakes, I am still learning threading.
My next practice question will be: accessing a shared resource one thread at a time.
You are wrapping your whole block of code in a synchronized block on the object value. This means that once execution arrives at the synchronized block, the first thread will hold the monitor of the object value and any subsequent thread will block until the monitor is released.
Note how the whole block:
synchronized (value) {
    if (value) {
        Thread.currentThread().interrupt();
    }
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            value = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
is wrapped in a synchronized block, meaning that only one thread can run that block at a time, contrary to your objective.
In this context, I believe you are misunderstanding the principles behind synchronization and "sharing variables". To clarify:
static - the variable modifier used to make a variable global across objects (i.e. a class variable), such that every instance shares the same static variable.
volatile - the variable modifier used to guarantee that writes to a variable are visible to other threads. Note that you can still access a variable without this modifier from different threads (this is, however, dangerous and can lead to race conditions). Threads have no effect on the scope of variables (unless you use a ThreadLocal).
I would just like to add that you can't put volatile everywhere and expect code to be thread-safe. I suggest you read Oracle's guide on synchronization for a more in-depth review of how to establish thread-safety.
In your case, I would remove the synchronized block and declare the shared boolean as:
private static volatile Boolean value;
Additionally, the task you are trying to perform right now is something a Fork/Join pool is built for. I suggest reading this part of Oracle's java tutorials to see how a Fork/Join pool is used in a divide-and-conquer approach.
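For illustration, here is a rough sketch of what a Fork/Join version of the search could look like (ContainsTask and the THRESHOLD value are illustrative choices, not from the original post):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ContainsTask extends RecursiveTask<Boolean> {
    private static final int THRESHOLD = 1_000;   // below this size, just search sequentially
    private final List<Integer> list;
    private final int key;

    public ContainsTask(List<Integer> list, int key) {
        this.list = list;
        this.key = key;
    }

    @Override
    protected Boolean compute() {
        if (list.size() <= THRESHOLD) {
            return list.contains(key);
        }
        int mid = list.size() / 2;
        ContainsTask left = new ContainsTask(list.subList(0, mid), key);
        ContainsTask right = new ContainsTask(list.subList(mid, list.size()), key);
        left.fork();                             // search the left half asynchronously
        return right.compute() || left.join();   // search the right half in this thread
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 1_000_000; i++) {
            list.add(i);
        }
        boolean found = ForkJoinPool.commonPool().invoke(new ContainsTask(list, 27));
        System.out.println("found = " + found);
    }
}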
By wrapping the main logic of your thread in a synchronized block, execution of the code in that block becomes mutually exclusive. Thread 1 will enter the block, acquire the lock on "value", and run the entire loop before releasing the lock and allowing Thread 2 to run.
If you were to wrap only the checking and setting of the flag "value", then both threads should run the code concurrently.
EDIT: As other people have discussed, making "value" a static volatile boolean within the Test class, and not using the synchronized block at all, would also work. This is because reads and writes of volatile variables establish the same visibility guarantees as a synchronized block.
Reference: https://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
You should not obtain a lock on the found flag - that will just ensure only one thread can run at a time. Instead, make the flag static so it is shared and volatile so it cannot be cached.
Also, you should check the flag more often.
private List<Integer> list;
private int key = 27;
private static volatile boolean found;

public Test(List<Integer> list, boolean value) {
    this.list = list;
    this.found = value;
}

@Override
public void run() {
    for (int i = 0; i < list.size(); i++) {
        // Has the other thread found it?
        if (found) {
            Thread.currentThread().interrupt();
        }
        if (list.get(i) == key) {
            System.out.println("Found by: " + Thread.currentThread().getName());
            // I found it!
            found = true;
            Thread.currentThread().interrupt();
        }
        System.out.println(Thread.currentThread().getName() + ": " + list.get(i));
    }
}
BTW: Both of your threads start at index 0 and walk up their lists - I presume you do this here as a demonstration, and that in practice you would either have them work from opposite ends or walk at random.
Make boolean value static so both threads can access and edit the same variable. You then don't need to pass it in. Then as soon as one thread changes it to true, the second thread will also stop since it is using the same value.
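Pulling these suggestions together, here is a minimal runnable sketch of the search with an early-exit flag. It shares an AtomicBoolean through the constructor (an alternative to the static volatile field suggested above) and simply returns instead of calling interrupt():
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class SearchTask implements Runnable {
    private final List<Integer> list;
    private final int key;
    private final AtomicBoolean found;   // shared between both threads

    public SearchTask(List<Integer> list, int key, AtomicBoolean found) {
        this.list = list;
        this.key = key;
        this.found = found;
    }

    @Override
    public void run() {
        for (int value : list) {
            if (found.get()) {           // the other thread already found the key
                return;
            }
            if (value == key) {
                System.out.println("Found by: " + Thread.currentThread().getName());
                found.set(true);
                return;
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i <= 100; i++) {
            list.add(i);
        }
        AtomicBoolean found = new AtomicBoolean(false);
        Thread t1 = new Thread(new SearchTask(list.subList(0, 50), 27, found), "Thread 1");
        Thread t2 = new Thread(new SearchTask(list.subList(50, 101), 27, found), "Thread 2");
        t1.start();
        t2.start();
    }
}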

Multi-threading program to print numbers from 1 to 50?

I'm trying to write a program in which two threads are created, and the output should be: the 1st thread prints 1, the next thread prints 2, the 1st thread again prints 3, and so on. I'm a beginner, so please explain clearly. I thought threads share the same memory, so they would share the i variable and print accordingly. But in the output I get thread1: 1, thread2: 1, thread1: 2, thread2: 2 and so on. Here is my code:
class me extends Thread {
    public int name, i;

    public void run() {
        for (i = 1; i <= 50; i++) {
            System.out.println("Thread" + name + " : " + i);
            try {
                sleep(1000);
            } catch (Exception e) {
                System.out.println("some problem");
            }
        }
    }
}

public class he {
    public static void main(String[] args) {
        me a = new me();
        me b = new me();
        a.name = 1;
        b.name = 2;
        a.start();
        b.start();
    }
}
First off, you should read the Java code conventions: http://www.oracle.com/technetwork/java/codeconventions-135099.html.
Secondly, class member variables are not shared memory. You need to explicitly pass an object (such as the counter) to both objects so that it becomes shared. However, this will still not be enough: the shared memory can be cached by the threads, so you will have race conditions. To solve this you will need to use a Lock or an AtomicInteger.
It seems what you want to do is:
Write all numbers from 1 to 50 to System.out
without any number being printed multiple times
with the numbers being printed in order
Have this execution be done by two concurrent threads
First, let's look at what is happening in your code: Each number is printed twice. The reason for this is that i is an instance variable of me, your Thread. So each Thread has its own i, i.e., they do not share the value.
To make the two threads share the same value, we need to pass the same value when constructing me. Now, doing so with the primitive int won't help us much, because by passing an int we are not passing a reference, hence the two threads will still work on independent memory locations.
Let us define a new class, Value which holds the integer for us: (Edit: The same could also be achieved by passing an array int[], which also holds the reference to the memory location of its content)
class Value {
    int i = 1;
}
Now, main can instantiate one object of type Value and pass the reference to it to both threads. This way, they can access the same memory location.
class Me extends Thread {
    final Value v;

    public Me(Value v) {
        this.v = v;
    }

    public void run() {
        for (; v.i < 50; v.i++) {
            // ...
        }
    }

    public static void main(String[] args) {
        Value valueInstance = new Value();
        Me a = new Me(valueInstance);
        Me b = new Me(valueInstance);
    }
}
Now i isn't printed twice each time. However, you'll notice that the behavior is still not as desired. This is because the operations are interleaved: a may read i and see, let's say, the value 5. Next, b reads 5, prints it, increments i, and stores the new value, so i is now 6. However, a had already read the old value, 5, and will print 5 again, even though b just printed it.
To solve this, we must lock the instance v, i.e., the object of type Value. Java provides the keyword synchronized, which holds a lock during the execution of all code inside the synchronized block. However, if you simply put synchronized around your method body, you still won't get what you desire. Assuming you write:
public void run() {
    synchronized (v) {
        for (; v.i < 50; v.i++) {
            // ...
        }
    }
}
Your first thread will acquire the lock, but never release it until the entire loop has been executed (which is when i has the value 50). Hence, you must release the lock somehow, when it is safe to do so. Well... the only code in your run method that does not depend on i (and hence does not need to hold the lock) is sleep, which luckily is also where the thread spends most of its time.
Since everything is in the loop body, a simple synchronized block won't do. We can use a Semaphore to acquire a lock. So, we create a Semaphore instance in the main method and, similar to v, pass it to both threads. We can then acquire and release the lock on the Semaphore to let both threads have a chance to get the resource, while guaranteeing safety.
Here's the code that will do the trick:
import java.util.concurrent.Semaphore;

public class Me extends Thread {
    public int name;
    final Value v;
    final Semaphore lock;

    public Me(Value v, Semaphore lock) {
        this.v = v;
        this.lock = lock;
    }

    public void run() {
        try {
            lock.acquire();
            while (v.i <= 50) {
                System.out.println("Thread" + name + " : " + v.i);
                v.i++;
                lock.release();
                sleep(100);
                lock.acquire();
            }
            lock.release();
        } catch (Exception e) {
            System.out.println("some problem");
        }
    }

    public static void main(String[] args) {
        Value v = new Value();
        Semaphore lock = new Semaphore(1);
        Me a = new Me(v, lock);
        Me b = new Me(v, lock);
        a.name = 1;
        b.name = 2;
        a.start();
        b.start();
    }

    static class Value {
        int i = 1;
    }
}
Note: Since we are acquiring the lock at the end of the loop, we must also release it after the loop, or the resource will never be freed. Also, I changed the for-loop to a while loop, because we need to update i before releasing the lock for the first time, or the other thread can again read the same value.
Check the link below for a solution. Using multiple threads we can print the numbers in ascending order:
http://cooltekhie.blogspot.in/2017/06/#987628206008590221
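If you prefer not to rely on an external link, here is one common way to have two threads print 1 to 50 in strict alternation, using wait()/notifyAll() on a shared lock (a sketch with illustrative names, not taken from the linked post):
public class AlternatePrinter {
    private int current = 1;
    private static final int MAX = 50;
    private final Object lock = new Object();

    // Thread 1 prints the odd numbers, thread 2 prints the even numbers.
    public void print(int threadId) throws InterruptedException {
        synchronized (lock) {
            while (current <= MAX) {
                if (current % 2 == threadId % 2) {
                    System.out.println("Thread" + threadId + " : " + current);
                    current++;
                    lock.notifyAll();   // wake the other thread: it is its turn now
                } else {
                    lock.wait();        // not our turn: release the lock and wait
                }
            }
            lock.notifyAll();           // let the other thread see that we are done
        }
    }

    public static void main(String[] args) {
        AlternatePrinter printer = new AlternatePrinter();
        for (int id = 1; id <= 2; id++) {
            final int threadId = id;
            new Thread(() -> {
                try {
                    printer.print(threadId);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}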

Are java getters thread-safe?

Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Consider:
public class A {
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            a += 1;
        }
    }

    public int getA() {
        return a;
    }
}
I've heard people argue that it's possible for getA() and incrementA() to be called at roughly the same time and have getA() return the wrong thing. However it seems like, in the case that they're called at the same time, even if the getter is synchronized you can get the wrong thing. In fact the "right thing" doesn't even seem defined if these are called concurrently. The big thing for me is that the state remains consistent.
I've also heard talk about JIT optimizations. Given an instance of the above class and the following code (the code depends on a being set in another thread):
while (myA.getA() < 10)
{
    // incrementA is not called here
}
it is apparently a legal JIT optimization to change this to:
int temp = myA.getA();
while (temp < 10)
{
    // incrementA is not called here
}
which can obviously result in an infinite loop.
Why is this a legal optimization? Would this be illegal if a was volatile?
Update
I did a little bit of testing into this.
public class Test {
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            a += 1;
        }
    }

    public int getA() {
        return a;
    }

    public static void main(String[] args) {
        final Test myA = new Test();
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    myA.incrementA();
                }
            }
        });
        t.start();
        while (myA.getA() < 15) {
            System.out.println(myA.getA());
        }
    }
}
Using several different sleep times, this worked even when a was not volatile. This of course isn't conclusive; the optimization may still be legal. Does anyone have an example that could trigger such JIT behaviour?
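For what it's worth, the classic way to provoke this behaviour is a tight loop that reads a non-volatile flag and does nothing else inside the loop body (no println and no sleep, since those tend to defeat the optimization). A sketch along those lines, which may or may not hang depending on the JVM and JIT settings:
public class HoistingDemo {
    static boolean stop = false;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            long spins = 0;
            while (!stop) {        // the JIT is allowed to hoist this read out of the loop
                spins++;
            }
            System.out.println("stopped after " + spins + " spins");
        });
        reader.start();
        Thread.sleep(1000);
        stop = true;               // without volatile, the reader may never see this write
        reader.join(5000);
        System.out.println(reader.isAlive() ? "reader is (probably) spinning forever" : "reader terminated");
    }
}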
Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
It depends on the particulars. It is important to realize that synchronization does two important things: it is not just about atomicity, it is also required for memory synchronization. If one thread updates the a field, other threads may not see the update because of memory caching on the local processor. Making the int a field volatile solves this problem. Making both the get and the set methods synchronized will as well, but it is more expensive.
If you want to be able to change and read a from multiple threads, the best mechanism is to use an AtomicInteger.
private AtomicInteger a = new AtomicInteger(5);

public void setA(int a) {
    // no need to synchronize because of the magic of the AtomicInteger code
    this.a.set(a);
}

public int getA() {
    // AtomicInteger also takes care of the memory synchronization
    return a.get();
}
I've heard people argue that it's possible for getA() and setA() to be called at roughly the same time and have getA() return the wrong thing.
This is true but you can get the wrong value if getA() is called after setA() as well. A bad cache value can stick forever.
which can obviously result in an infinite loop. Why is this a legal optimization?
It is a legal optimization because threads running asynchronously with their own memory cache is one of the important reasons why you see performance improvements with them. If all memory accesses were synchronized with main memory then the per-CPU memory caches would not be used and threaded programs would run a lot slower.
Would this be illegal if a was volatile?
It is not legal if there is some way for a to be altered - possibly by another thread. If a were final then the JIT could make that optimization. If a were volatile, or the get method were marked as synchronized, then it would certainly not be a legal optimization.
It's not thread safe because that getter does not ensure that a thread will see the latest value, as the value may be stale. Having the getter be synchronized ensures that any thread calling the getter will see the latest value instead of a possible stale one.
You basically have two options:
1) Make your int volatile
2) Use an atomic type like AtomicInteger
Using a normal int without synchronization is not thread-safe at all.
Your best solution is to use an AtomicInteger, they were basically designed for exactly this use case.
If this is more of a theoretical "could this be done" question, I think something like the following would be safe (but still not perform as well as an AtomicInteger):
public class A {
    private volatile int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            final int tmp = a + 1;
            a = tmp;
        }
    }

    public int getA() {
        return a;
    }
}
The short answer is your example will be thread-safe, if
the variable is declared as volatile, or
the getter is declared as synchronized.
The reason that your example class A is not thread-safe is that one can create a program using it that doesn't have a "well-formed execution" (see JLS 17.4.7).
For instance, consider
// in thread #1
int a1 = A.getA();
Thread.sleep(...);
int a2 = A.getA();
if (a1 == a2) {
    System.out.println("no increment");
}

// in thread #2
A.incrementA();
in the scenario that the increment happens during the sleep.
For this execution to be well-formed, there must be a "happens before" (HB) chain between the assignment to a in incrementA called by thread #2, and the subsequent read of a in getA called by thread #1.
If the two threads synchronize using the same lock object, then there is a HB between one thread releasing the lock and a second thread acquiring the lock. So we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #2 releases lock --HB-->
thread #1 acquires lock --HB-->
thread #1 reads a
If two threads share a volatile variable, there is a HB between any write and any subsequent read (without an intervening write). So we typically get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #1 reads a
Note that incrementA needs to be synchronized to avoid race conditions with other threads calling incrementA.
If neither of the above is true, we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a // No HB!!
thread #1 reads a
Since there is no HB between the write by thread #2 and the subsequent read by thread #1, the JLS does not guarantee that the latter will see the value written by the former.
Note that this is a simplified version of the rules. For the complete version, you need to read all of JLS Chapter 17.
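To make the "synchronized getter" option from the short answer concrete, here is a minimal sketch (the volatile variant is shown in an earlier answer). The getter acquires the same lock that incrementA releases, which creates exactly the happens-before chain shown above:
public class A {
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            a += 1;
        }
    }

    public int getA() {
        synchronized (lock) {   // same lock as incrementA: release --HB--> acquire --HB--> read
            return a;
        }
    }
}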
