Can Synchronized Methods Have Race Conditions? [Example] - java

I am learning about synchronized methods as a means of preventing race conditions and unwanted behavior in Java. I was presented with the following example, and told the race condition is quite subtle:
public class Messages {
    private String message = null;
    private int count = 2;
    // invariant 0 <= count && count <= 2

    public synchronized void put(String message) throws InterruptedException {
        while (count < 2)
            this.wait();
        this.message = message;
        this.count = 0;
        this.notifyAll();
    }

    public synchronized String getMessage() throws InterruptedException {
        while (this.count == 2)
            this.wait();
        String result = this.message;
        this.count += 1;
        this.notifyAll();
        return result;
    }
}
Subtle or not, I think I have a fundamental misunderstanding of what synchronized methods do. I was under the impression they restrict access to threads through use of a lock token (or similar), and thus can never race. How, then, does this example have a race condition, if its methods are synchronized? Can anyone help clarify?

I presume that what the author had in mind is that, since count goes from 0 to 2, two threads might call put() in sequence, and the reader threads would thus miss one of the messages.
It's indeed a race condition: readers and putters compete for the same lock, and which messages end up being read depends on which waiting thread wins the race after notifyAll().

Think about ways that count could become > 2...
That code has a bad smell, too. What is count supposed to be counting? Why does get increment it and put reset it? Why the unnecessary use of 'this'? If I saw code like that in a project, I would look at it very carefully...

Multithreading is when you use new Thread(runnable).start(); this starts a new thread which executes the run() method. The runnable is any class that implements Runnable (or extends Thread). A synchronized method makes sure that if these threads want to read data changed by the synchronized method, they will see the change; otherwise the data might appear unchanged, or worse, half-changed.
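As a minimal sketch of what that looks like (the names here are illustrative, not from the original post):

public class StartThreadExample {
    public static void main(String[] args) throws InterruptedException {
        // any class that implements Runnable works; a lambda is the shortest form
        Runnable task = () -> System.out.println("running in " + Thread.currentThread().getName());
        Thread t = new Thread(task);
        t.start(); // start() schedules the new thread, which then invokes run()
        t.join();  // wait for the new thread to finish
    }
}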

Java's synchronized methods buy you mutual exclusion between the two methods, which means that you can assume they will not interleave.
However, you still have a race condition because you can get different behavior depending on which method runs first.
As JB Nizet suggested in his answer, consider what happens with each of the two orderings (assume they are running in different threads).
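For example, a small hypothetical driver (not from the original material) makes the ordering dependence visible. Because getMessage() only waits while count == 2, a reader that calls it twice in a row may consume the same message twice, and which reader sees which message depends entirely on scheduling:

public class MessagesDemo {
    public static void main(String[] args) {
        Messages m = new Messages();
        Runnable reader = () -> {
            try {
                for (int i = 0; i < 2; i++) {
                    System.out.println(Thread.currentThread().getName() + " got " + m.getMessage());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(reader, "reader-1").start();
        new Thread(reader, "reader-2").start();
        new Thread(() -> {
            try {
                m.put("A");
                m.put("B");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "writer").start();
    }
}

Run it a few times: sometimes each reader gets "A" and "B", sometimes one reader gets the same message twice and the other never sees it, even though every method is synchronized.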

A race condition occurs whenever two entities compete for a single resource and the outcome depends on the order in which they acquire it, which can cause unpredictable behavior. When you use notifyAll(), all waiting threads are woken up and race to re-acquire the lock they were waiting on, and it's impossible to say which will run next.

I don't think having a count value > 2 is the problem if the code works as expected.
Since both put() and getMessage() are synchronized, both methods can't be called at the same time. So if a thread calls getMessage() and sees count == 2, another thread can't call put() to set count = 0 and notify the waiting thread. There is too much synchronizing, which causes a deadlock. So the waiting part shouldn't be synchronized, which could be solved like this:
public void put(String message) throws InterruptedException {
    while (count < 2)
        this.wait();
    synchronized (this) {
        this.message = message;
        this.count = 0;
        this.notifyAll();
    }
}

public String getMessage() throws InterruptedException {
    while (this.count == 2)
        this.wait();
    String result;
    synchronized (this) {
        result = this.message;
        this.count += 1;
        this.notifyAll();
    }
    return result;
}


Why does wait(100) cause a synchronized method to fail in a multithreaded test?

I am referencing this article from Baeldung.com. Unfortunately, the article does not explain why this code is not thread-safe.
My goal is to understand how to create a thread-safe method with the synchronized keyword.
My actual result is: The count value is 1.
package NotSoThreadSafe;
public class CounterNotSoThreadSafe {
    private int count = 0;

    public int getCount() { return count; }

    // synchronized specifies that the method can only be accessed by 1 thread at a time.
    public synchronized void increment() throws InterruptedException {
        int temp = count;
        wait(100);
        count = temp + 1;
    }
}
My expected result is: The count value should be 10, because:
I created 10 threads in a pool.
I executed counter.increment() 10 times.
I make sure I only assert after the CountDownLatch has reached 0.
Therefore, it should be 10. However, because Object.wait(100) releases the synchronized lock, the method is not thread-safe.
package NotSoThreadSafe;

import org.junit.jupiter.api.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import static org.junit.jupiter.api.Assertions.assertEquals;

class CounterNotSoThreadSafeTest {
    @Test
    void incrementConcurrency() throws InterruptedException {
        int numberOfThreads = 10;
        ExecutorService service = Executors.newFixedThreadPool(numberOfThreads);
        CountDownLatch latch = new CountDownLatch(numberOfThreads);
        CounterNotSoThreadSafe counter = new CounterNotSoThreadSafe();
        for (int i = 0; i < numberOfThreads; i++) {
            service.execute(() -> {
                try { counter.increment(); } catch (InterruptedException e) { e.printStackTrace(); }
                latch.countDown();
            });
        }
        latch.await();
        assertEquals(numberOfThreads, counter.getCount());
    }
}
This code has both of the classical concurrency problems: a race condition (a semantic problem) and a data race (a memory model related problem).
Object.wait() releases the object's monitor, so another thread can enter the synchronized block/method while the current one is waiting. Obviously, the author's intention was to make the method atomic, but Object.wait() breaks that atomicity. As a result, if we call increment() from, let's say, 10 threads simultaneously and each thread calls the method 100_000 times, we almost always get count < 10 * 100_000, which isn't what we want. This is a race condition, a logical/semantic problem. Since releasing the monitor is equivalent to exiting the synchronized block, the code effectively behaves as two separate synchronized parts:
public void increment() {
    int temp = incrementPart1();
    incrementPart2(temp);
}

private synchronized int incrementPart1() {
    int temp = count;
    return temp;
}

private synchronized void incrementPart2(int temp) {
    count = temp + 1;
}
and, therefore, increment() does not increment the counter atomically. Now assume that the 1st thread calls incrementPart1, then the 2nd thread calls incrementPart1, then the 2nd thread calls incrementPart2, and finally the 1st thread calls incrementPart2. We made 2 calls to increment(), but the result is 1, not 2.
Another problem is a data race. The Java Memory Model (JMM) is described in the Java Language Specification (JLS). The JMM defines a happens-before (HB) order between actions such as volatile memory writes/reads, object monitor operations, etc. (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5). HB guarantees that a value written by one thread will be visible to another. The rules for obtaining these guarantees are also known as the safe publication rules. The most common/useful ones are:
Publish the value/reference via a volatile field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5), or, as a consequence of this rule, via the AtomicX classes
Publish the value/reference through a properly locked field (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.4.5)
Use a static initializer to do the initializing stores (http://docs.oracle.com/javase/specs/jls/se11/html/jls-12.html#jls-12.4)
Initialize the value/reference into a final field, which leads to the freeze action (https://docs.oracle.com/javase/specs/jls/se11/html/jls-17.html#jls-17.5)
So, to make the counter correctly visible (as the JMM defines it), we must either make it volatile
private volatile int count = 0;
or read it under synchronization on the same object monitor
public synchronized int getCount() { return count; }
I'd say that in practice, on Intel processors, you would read the correct value even without these additional measures, with just a plain read, because of the TSO (Total Store Ordering) model they implement. But on a more relaxed architecture, like ARM, you would get the problem. Follow the JMM formally to be sure your code is really thread-safe and doesn't contain any data races.
Why is int temp = count; wait(100); count = temp + 1; not thread-safe? One possible flow:
The first thread reads count (0), saves it in temp for later, and waits, allowing the second thread to run (lock released);
the second thread also reads count (0), saves it in temp, and waits, eventually allowing the first thread to continue;
the first thread increments the value from temp and stores it in count (1);
but the second thread still holds the old value of count (0) in temp; eventually it will run and store temp + 1 (1) into count, not incrementing the new value.
(Very simplified, considering just 2 threads.)
In short: wait() releases the lock, allowing the other (synchronized) method to run.
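If the delay in increment() was only meant to simulate slow work, one minimal fix (a sketch, not the article's code) is to keep holding the monitor for the whole read-modify-write, for example with Thread.sleep(), which, unlike wait(), does not release the lock:

public class CounterThreadSafe {
    private int count = 0;

    // reading under the same monitor also gives the visibility guarantee
    public synchronized int getCount() { return count; }

    public synchronized void increment() throws InterruptedException {
        int temp = count;
        Thread.sleep(100); // simulated work; the monitor is NOT released here
        count = temp + 1;
    }
}

With the lock held across the whole method and getCount() synchronized on the same monitor, the test above should consistently report 10.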

What is the difference between synchronized fields and ReadWriteLocks?

I just want to know how the two code snippets below, which provide the same functionality, differ.
Code 1:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadWriteCounter {
    ReadWriteLock lock = new ReentrantReadWriteLock();
    private Integer count = 0;

    public Integer incrementAndGetCount() {
        lock.writeLock().lock();
        try {
            count = count + 1;
            return count;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public Integer getCount() {
        lock.readLock().lock();
        try {
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }
}
Code 2:
class ReadWriteCounter {
    private Integer count = 0;

    public Integer getCount() {
        synchronized (count) {
            return count;
        }
    }

    public void setCount(Integer i) {
        synchronized (count) {
            count = i;
        }
    }
}
The purpose is to ensure that when count is modified no other threads access it for reading, and while it is being read no other threads access it for writing. Which is the better solution, and why? Also, I will be using this in a class with fields that need to be edited. Please offer your suggestions.
ReentrantReadWriteLock is the best way to implement what you have in mind.
synchronized would let only one thread in if two or more threads attempt to read count.
With a read lock, all of them can read the value concurrently.
Both of your solutions work; however, there is a bug in the way you are implementing locking.
First, the difference between the two approaches:
The ReentrantReadWriteLock is mainly used in situations where you have many more reads than writes, typically in ratios around 10 reads : 1 write. It allows reads to happen concurrently without blocking each other; however, when a write starts, all reads are blocked. So performance is the primary reason to choose it.
Bug in your approach:
The object you are locking on should be final. In setCount() you are effectively swapping the object out (count refers to a new Integer after the assignment), which can cause a dirty read at that moment.
Also, never expose the object that you are locking on. The object you lock on should be private and final. The reason is that if you expose it, a caller may use the returned object for its own locking, in which case you will run into contention issues with components outside this class.
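A minimal sketch of the second snippet with that advice applied (assuming plain mutual exclusion is all you need here):

class ReadWriteCounter {
    private final Object lock = new Object(); // private, final, never exposed
    private int count = 0;

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }

    public void setCount(int i) {
        synchronized (lock) {
            count = i;
        }
    }
}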

Implementing a cyclicbarrier in java using semaphores

The question is as follows: since the barrier is only entered via down(), which waits for the n threads to arrive and then lets all n threads execute together in the critical region, how do I inform the threads calling barrier.down() that they can move on now? I tried adding notifyAll() after phase2() and that doesn't work. Help? :)
public class cyclicBarrier {
    private int n;
    private int count;
    private semaphore mutex;
    private semaphore turnstile;
    private semaphore turnstile2;

    public cyclicBarrier(int n) {
        this.n = n;
        this.count = 0;
        this.mutex = new semaphore(1);
        this.turnstile = new semaphore(0);
        this.turnstile2 = new semaphore(0);
    }

    public synchronized void down() throws InterruptedException {
        this.phase1(); // waits for n threads to arrive
        this.phase2(); // waits for n threads to execute
    }

    private synchronized void phase1() throws InterruptedException {
        this.mutex.down();
        this.count++;
        if (this.count == this.n) {
            for (int i = 0; i < this.n; i++) {
                this.turnstile.signal(); // when n threads have arrived, move on to phase 2
            }
        }
        this.mutex.signal();
        this.turnstile.down(); // keeps waiting till I get n threads
    }

    private synchronized void phase2() throws InterruptedException {
        this.mutex.down();
        this.count--;
        if (this.count == 0) {
            for (int i = 0; i < this.n; i++) {
                this.turnstile2.signal(); // reset the barrier for reuse
            }
        }
        this.mutex.signal();
        this.turnstile2.down(); // keeps waiting till n threads get executed
    }
}

public class semaphore {
    private int counter;

    public semaphore(int number) {
        if (number > 0) {
            this.counter = number;
        }
    }

    public synchronized void signal() {
        this.counter++;
        notifyAll();
    }

    public synchronized void down() throws InterruptedException {
        while (this.counter <= 0) {
            wait();
        }
        this.counter--;
    }
}
I see you're using the solution from The Little Book of Semaphores. One main point of the book is that you can solve many coordination problems using semaphores as the only coordination primitive. It is perfectly fine to use synchronized to implement a semaphore, since that is necessary to do it correctly. It misses the point, however, to use synchronized in the methods which solve a puzzle that is supposed to be solved with semaphores.
Also, I think it doesn't work in your case: don't you get a deadlock at this.turnstile.down()? You block on a semaphore while holding an exclusive lock (through synchronized) on the object and method that would allow that semaphore to get released.
Addressing the question as stated: you signal to threads that they can proceed by returning from barrier.down(). You ensure that you don't return too soon by doing turnstile.down().
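If the deadlock hypothesis above is right, the minimal change is to drop synchronized from the barrier methods and let the semaphores do all the coordination. A sketch (these methods would replace the ones in the question's cyclicBarrier class; the field names are reused from there):

public void down() throws InterruptedException {
    phase1(); // waits for n threads to arrive
    phase2(); // waits for n threads to execute
}

private void phase1() throws InterruptedException {
    mutex.down();
    count++;
    if (count == n) {
        for (int i = 0; i < n; i++) {
            turnstile.signal();
        }
    }
    mutex.signal();
    turnstile.down(); // blocks here, but no monitor is held at this point
}

private void phase2() throws InterruptedException {
    mutex.down();
    count--;
    if (count == 0) {
        for (int i = 0; i < n; i++) {
            turnstile2.signal();
        }
    }
    mutex.signal();
    turnstile2.down();
}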
Aside: Semaphore implementation
Your semaphore implementation looks correct, except that you only allow non-negative initial values, which is at least non-standard. Is there some motivation for doing this that I can't see? If you think negative initial values are wrong, why not throw an error instead of silently doing something else?
Aside: Other synchronization primitives
Note that the java constructs synchronized, .wait() and .notify() correspond to the Monitor coordination primitive. It may be instructive to solve the puzzles with monitors (or other coordination primitives) instead of semaphores, but I would recommend keeping those efforts separate. I've had a bit of fun trying to solve a puzzle using Haskell's Software Transactional Memory.
Aside: On runnability
You say you have tried things, which indicates that you have some code that allows you to run the code in the question. It would have been helpful if you had included that code, so we could easily run it too. I probably would have checked that my hypothesized deadlock actually occurs.
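For completeness, a hypothetical harness of the kind meant here (the class name is taken from the question, everything else is illustrative). If the deadlock occurs, the "released" lines are never printed:

public class BarrierDemo {
    public static void main(String[] args) {
        cyclicBarrier barrier = new cyclicBarrier(3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("thread " + id + " arriving");
                    barrier.down();
                    System.out.println("thread " + id + " released");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}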

Java threading synchronized block behavior - synchronized vs synchronized()? [duplicate]

This question already has answers here:
Is there an advantage to use a Synchronized Method instead of a Synchronized Block?
(23 answers)
Closed 9 years ago.
I have a simple question but have had trouble finding an answer to it.
The question is whether a synchronized method is equivalent to synchronized(this), i.e., whether they do the same locking.
I want to write thread-safe code with reduced locking (I don't want to always use synchronized methods; sometimes I want to synchronize only the critical sections).
Could you explain whether this code is equivalent or not, and briefly why (the examples are simplified to show the atomicity problem)?
Examples
Is this mixed locking code equivalent to the brute-force code below?
public class SynchroMixed {
    int counter = 0;

    synchronized void writer() {
        // some not locked code
        int newCounter = counter + 1;
        // critical section
        synchronized (this) {
            counter = newCounter;
        }
    }

    synchronized int reader() {
        return counter;
    }
}
Brute-force code (each method is fully locked, including the non-critical section):
public class SynchroSame {
    int counter = 0;

    synchronized void writer() {
        int newCounter = counter + 1;
        counter = newCounter;
    }

    synchronized int reader() {
        return counter;
    }
}
Or should I write this code (this is certainly valid, but more fine-grained and less clear)?
public class SynchroMicro {
    int counter = 0;

    void writer() {
        // some not locked code
        int newCounter = counter + 1;
        // critical section
        synchronized (this) {
            counter = newCounter;
        }
    }

    int reader() {
        synchronized (this) {
            return counter;
        }
    }
}
A synchronized method and synchronized(this) mean exactly the same thing and use the same mutex behind the scenes. Which notation to prefer is mostly a question of taste.
Personally I prefer synchronized(this), because it explicitly specifies the scope of the mutex lock, which can be smaller than the whole method.
All three examples are equivalent. Using synchronized on a method is the same as wrapping its entire body in synchronized(this) {}.
So by using synchronized(this) {} around some statements inside an already synchronized method, the thread is only re-acquiring a lock it already owns, which is pointless here.
There is definitely no point in synchronized(this) within a synchronized method, since entering the method already implicitly does synchronized(this).
That was just a syntax mistake on your part, since you clearly intend to reduce the scope of the critical section; but the reduced scope introduces a data race into your code: you must both read and write the shared variable within the same synchronized block.
In addition, even if a method only reads the shared variable, it still must do so in a synchronized block; otherwise it may never observe any writes by other threads. These are the basic semantics of Java's Memory Model.
Now, if what you are showing is really representative of your full problem, then you shouldn't even be using synchronized, but a simple AtomicInteger, which will give you the best concurrent performance.
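A sketch of what that could look like for this counter; AtomicInteger gives you the atomic read-modify-write and the visibility guarantees without any explicit locking:

import java.util.concurrent.atomic.AtomicInteger;

public class SynchroAtomic {
    private final AtomicInteger counter = new AtomicInteger(0);

    void writer() {
        counter.incrementAndGet(); // atomic read-modify-write
    }

    int reader() {
        return counter.get(); // always sees the latest written value
    }
}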
Synchronized methods and synchronized blocks are equivalent from a functional point of view. They both do the same task, i.e. prevent concurrent access to a particular method or block of code within a method.
A synchronized() block is more flexible and handy when you have a long method and only part of it needs to be synchronized. You need not lock the entire method, and as we know, synchronization has a performance cost. Hence it is always recommended to synchronize only the part of the code that needs it, and not the entire method (if not required).

How to correctly create a SynchronizedStack class?

I made a simple synchronized Stack object in Java, just for training purposes.
Here is what I did:
import java.util.ArrayDeque;
import java.util.Iterator;

public class SynchronizedStack {
    private ArrayDeque<Integer> stack;

    public SynchronizedStack() {
        this.stack = new ArrayDeque<Integer>();
    }

    public synchronized Integer pop() {
        return this.stack.pop();
    }

    public synchronized int forcePop() {
        while (isEmpty()) {
            System.out.println(" Stack is empty");
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return this.stack.pop();
    }

    public synchronized void push(int i) {
        this.stack.push(i);
        notifyAll();
    }

    public boolean isEmpty() {
        return this.stack.isEmpty();
    }

    public synchronized void pushAll(int[] d) {
        for (int i = 0; i < d.length; i++) {
            this.stack.push(d[i]);
        }
        notifyAll();
    }

    @Override
    public synchronized String toString() {
        String s = "[";
        Iterator<Integer> it = this.stack.iterator();
        while (it.hasNext()) {
            s += it.next() + ", ";
        }
        s += "]";
        return s;
    }
}
Here are my questions:
Is it OK not to synchronize the isEmpty() method? I figured it was, because even if another thread is modifying the stack at the same time, it would still return a coherent result (there is no intermediate state in which isEmpty would return something that is neither the initial nor the final value). Or is it better design to have all the methods of a synchronized object synchronized?
I don't like the forcePop() method. I just wanted to create a thread that waits until an item is pushed onto the stack before popping an element, and I thought the best option was to do the loop with the wait() in the run() method of the thread, but I can't because it throws an IllegalMonitorStateException. What is the proper way to do something like this?
Any other comment/suggestion?
Thank you!
Stack itself is already synchronized, so it doesn't make sense to apply synchronization again (use ArrayDeque if you want a non-synchronized stack implementation)
It's NOT OK (aside from the previous point), because the lack of synchronization may cause memory visibility effects.
forcePop() is pretty good. However, it should let InterruptedException propagate instead of catching it, to follow the contract of an interruptible blocking method. That would allow you to interrupt a thread blocked in a forcePop() call by calling Thread.interrupt().
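A sketch of forcePop() with that change applied (the rest of the class stays as in the question):

public synchronized int forcePop() throws InterruptedException {
    while (isEmpty()) {
        wait(); // releases the monitor until push() calls notifyAll()
    }
    return this.stack.pop();
}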
Assuming that stack.isEmpty() doesn't need synchronization might be true, but you are relying on an implementation detail of a class you have no control over.
The javadocs of ArrayDeque (the class actually backing your stack) state that it is not thread-safe, so you should synchronize all access.
I think you're mixing idioms a little. You are backing your SynchronizedStack with java.util.Stack, which in turn is backed by java.util.Vector, which is synchronized. I think you should encapsulate the wait() and notify() behavior in another class.
The only problem with not synchronizing isEmpty() is that you don't know what's happening underneath. While your reasoning is, well, reasonable, it assumes that the underlying Stack is also behaving in a reasonable manner. Which it probably is in this case, but you can't rely on it in general.
As for the second part of your question, there's nothing wrong with a blocking pop operation; see this for a complete implementation of all the possible strategies.
And one other suggestion: if you're creating a class that is likely to be re-used in several parts of an application (or even several applications), don't use synchronized methods. Do this instead:
public class Whatever {
    private final Object lock = new Object();

    public void doSomething() {
        synchronized (lock) {
            ...
        }
    }
}
The reason for this is that you don't really know if users of your class want to synchronize on your Whatever instances or not. If they do, they might interfere with the operation of the class itself. This way you've got your very own private lock which nobody can interfere with.
