Is this class thread-safe?
Is it possible to see inconsistent values? Let's say a's value is initially 80. Thread 1 calls setA(100) and enters the method but has not yet called a.set(100), and Thread 2 concurrently calls getA(). Is it possible for Thread 2 to see 80?
import java.util.concurrent.atomic.AtomicInteger;

public class A {
    // initialised so the field is never null; 80 is the starting value from the question
    private final AtomicInteger a = new AtomicInteger(80);

    public int getA() {
        return a.get();
    }

    public void setA(int newVal) {
        a.set(newVal);
    }
}
I know that synchronizing it would guarantee that Thread 2 sees 100, but I'm not sure whether AtomicInteger does.
Is this class thread-safe?
Yes it is.
Thread 1 calls setA(100) and enters the function but did not yet call a.set(100) and Thread 2 concurrently calls getA(). Is it possible for Thread 2 to see 80?
Yes. Until the write to the volatile field inside AtomicInteger completes, the race can return either 80 or 100.
Thread 1 could even be inside the AtomicInteger.set method, just before the assignment to the inner volatile field, and AtomicInteger.get could still return 80.
There are no guarantees about when the values will become visible to other threads. What is guaranteed is that when get() completes, you get the most recently written value, and when set() completes, all other threads will see the update.
There is no guarantee as to the timing of getter and setter calls in different threads.
As @Gray noted, there is a possibility for a race condition here.
Calling get and then set is not an atomic operation. The Atomic* classes offer a lock-free atomic conditional update operation, compareAndSet - you should use that one for thread safety.
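For illustration, here is a minimal sketch of a check-then-act update done with compareAndSet; the class and method names (BoundedCounter, incrementUpTo) are made up for the example:

    import java.util.concurrent.atomic.AtomicInteger;

    public class BoundedCounter {
        private final AtomicInteger count = new AtomicInteger(0);

        // Atomically increments the counter, but never past the given limit.
        // A plain get() followed by set() would be a check-then-act race.
        public boolean incrementUpTo(int limit) {
            while (true) {
                int current = count.get();
                if (current >= limit) {
                    return false;              // limit reached, give up
                }
                // Only succeeds if no other thread changed the value in between;
                // otherwise loop and retry with the fresh value.
                if (count.compareAndSet(current, current + 1)) {
                    return true;
                }
            }
        }
    }

AtomicInteger's own getAndIncrement() performs a similar atomic read-modify-write for the plain increment case.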
Here's the class:
@NotThreadSafe
public class MutableInteger {
    private int value;

    public int get() { return value; }

    public void set(int value) { this.value = value; }
}
Here's the post-condition I have come up with: the value returned by get() is equal to the most recent value set by set(), or 0 if set() has never been called.
It is easy to see that the above post-condition will not always hold true. Take two threads A and B for instance. Suppose A sets value to 5 and B sets it to 8 after that. Doing a get() in thread A would return 8. It should have returned 5. It is a simple race condition situation.
How can I make this class thread safe? In the book Java Concurrency in Practice, the authors guard value and both methods with the same lock. I fail to understand how this helps with the race condition at all. First of all, set() is not a compound action, so why do we need to synchronise it? And even then, the race condition does not disappear: as soon as the lock is released when a thread exits the set() method, another thread can acquire the lock and set a new value. Doing a get() in the initial thread will return the new value, breaching the post-condition.
(I understand that the author is guarding the get() for visibility, but I am not sure how it eliminates the race condition.)
First of all, set() is not a compound action. So, why do we need to synchronise it?
You're not synchronising the set() in its own right, you're synchronising both the get() and set() methods against the same object (assuming you make both these methods synchronised.)
If you didn't do this, and the value variable isn't marked as volatile, then you can't guarantee that a thread will see the correct value because of per-thread caching. (Thread a could update it to 5, then thread b could still potentially see 8 even after thread a has updated it. That's what's meant by lack of thread safety in this context.)
You're correct that int assignments (like reference assignments) are atomic, so there's no worry about a torn value in this scenario.
And even after that, the race condition does not disappear. As soon as the lock is released once a thread exits from the set() method, another thread can acquire the lock and set a new value.
New threads setting new values (or new code setting new values) isn't an issue at all in terms of thread safety - that's designed, and expected. The issue is if the results are inconsistent, or specifically if multiple threads are able to concurrently view the object in an inconsistent state.
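For reference, here is a minimal sketch of roughly what the guarded version from the book looks like (annotations omitted), with both methods locking on the same object:

    public class SynchronizedInteger {
        private int value;   // guarded by the intrinsic lock on "this"

        // Both methods synchronise on the same object (this), so a get()
        // that runs after a completed set() is guaranteed to see its value.
        public synchronized int get() {
            return value;
        }

        public synchronized void set(int value) {
            this.value = value;
        }
    }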
If I have two threads A and B that use the same class C.java, what happens if thread A uses a synchronized method (synchro()) that calls another method of the class (myMethod()), and a millisecond or so later thread B tries to use myMethod()? Will it wait until thread A has finished, or will it access myMethod() directly? Thread A and thread B use the same class instance.
Synchronization is not implicitly transitive. It is merely a lock on the object to execute a block of code. It does not lock the objects that are used inside the code block.
Thread B will have access to the unsynchronized method. Since it's not synchronized, Thread B doesn't need to wait to acquire an object monitor.
An implicit mutex lock will be introduced only for the method synchro(), but not for myMethod(), since myMethod() is not synchronized. As a consequence, access to myMethod() will not be synchronized between multiple threads.
It will access it.
Only synchronized methods or synchronized blocks on the same object cannot be executed concurrently: Synchronized Methods
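To make the question's scenario concrete, here is a rough sketch using the names from the question (the method bodies are invented):

    public class C {
        // Thread A calls this; it holds the lock on the C instance while running.
        public synchronized void synchro() {
            myMethod();   // calling an unsynchronized method does NOT lock it for others
        }

        // Thread B can call this at any time, even while thread A is inside synchro(),
        // because this method never tries to acquire the object's monitor.
        public void myMethod() {
            System.out.println(Thread.currentThread().getName() + " is in myMethod()");
        }
    }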
There is no such thing as a synchronized method.
Repeat after me: There is no such thing as a synchronized method.
When you write this:
synchronized Foobar myFunk() { ... }
That's just syntactic sugar that saves you from having to write this instead:
Foobar myFunk() {
    synchronized (this) { ... }
}
But the second one makes it more obvious what is really going on: It's not the method that is synchronized, it's the object.
The JVM will not allow two different threads to synchronize the same object at the same time. That's all it means. Synchronizing an object does not "lock" the object (other threads can still modify it). Synchronizing an object does not prevent other threads from calling the same method. All it does is prevent other threads from synchronizing the same object at the same time.
How you use that feature is up to you.
Normally, you use it to protect invariants. An invariant is an assertion that you make about some value or some group of values. (e.g., the length of list L is always even). If one thread must temporarily break the invariant (e.g., by first adding one thing to the list, and then adding another), and some other thread will crash and burn if it sees the broken invariant; then you need synchronization.
The first thread only breaks the invariant while inside a synchronized block, and any other thread only looks at the data when it is synchronized on the same object. That way, the thread that looks can never see the invariant in the broken state.
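To illustrate the list-length invariant described above, here is a hedged sketch (class and method names invented):

    import java.util.ArrayList;
    import java.util.List;

    public class PairList {
        private final List<String> items = new ArrayList<>();  // invariant: size is always even

        // Temporarily breaks the invariant between the two add() calls,
        // but only while holding the lock, so no reader can observe it.
        public void addPair(String first, String second) {
            synchronized (this) {
                items.add(first);
                items.add(second);
            }
        }

        // Readers synchronize on the same object, so they never see an odd size.
        public int size() {
            synchronized (this) {
                return items.size();
            }
        }
    }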
Here is a simple example:
private long counter = 0;
// note this method is NOT synchronized
// this will be called by thread A
public void increment() { counter++; }
// note this method IS synchronized
// this will be called by thread B
public synchronized long value() { return counter; }
So I just want to read a good value for counter, not a stale value stuck in the CPU cache because the variable is non-volatile. The goal is NOT to make counter volatile, so that it does not slow down thread A doing the increments; it only affects thread B, and I don't care about thread B's speed when it reads the variable.
Just for the record, I plan to read the value of counter from thread B when thread A has already finished anyways...
No, the synchronized block in thread B does not ensure that it will read the actual current value of counter. You would need synchronized blocks in both threads to do that. From a practical perspective, your code ensures that the processor running thread B invalidates its cache and reads the value of counter from main memory, but it does not ensure that the processor running thread A flushes its current value to main memory, so the value in main memory may be stale.
Since using a volatile variable is cheaper than synchronized blocks in both threads, making counter volatile is likely the correct solution. This is what volatile variables are for.
Edit: if thread A is going to complete before thread B reads the final value, you could enclose the entire execution of thread A in a single synchronized block or have thread B join thread A before reading the counter, ensuring that thread A completes before the counter is read. That would result in one cache flush at the end of Thread A's execution, which would have negligible impact on performance.
A long assignment is not guaranteed to be atomic, so not only could B read a stale value, it could also read a half-written value.
For proper visibility you need to make counter volatile. Note that even then, calling increment n times from several threads may not increment counter by n.
You could use an AtomicLong to simplify your problem.
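A minimal sketch of that alternative, assuming the counter only ever needs to be incremented and read:

    import java.util.concurrent.atomic.AtomicLong;

    public class Counter {
        private final AtomicLong counter = new AtomicLong();

        // Called by thread A: the increment is atomic and the write is visible to other threads.
        public void increment() {
            counter.incrementAndGet();
        }

        // Called by thread B: always reads a fully written, up-to-date value.
        public long value() {
            return counter.get();
        }
    }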
No, synchronized only guarantees visibility of changes that were made within synchronized blocks on the same lock:
synchronized (this) {
    counter++;
}
or before them (as defined by transitive nature of happens-before relationship):
// Thread A
counter++;
synchronized (this) {
    finished = true;
}

// Thread B
synchronized (this) {
    if (finished) {
        // you can read counter here
    }
}
Note, however, that counter is guaranteed to be visible if you read it after you have positively determined that Thread A has finished (for example, using join()):
threadA.join();
// you can read counter here
No, there is no guarantee that Thread B will always see the latest value, since increment() is not synchronized and value() is.
Note that:
While a thread is inside a synchronized method of an object, all other threads that wish to execute this synchronized method or any other synchronized method of the object will have to wait.
This restriction does not apply to the thread that already has the lock and is executing a synchronized method of the object. Such a method can invoke other synchronized methods of the object without being blocked. The non-synchronized methods of the object can of course be called at any time by any thread.
I know that in a program that works with multiple threads it's necessary to synchronize the methods because it's possible to have problems like race conditions.
But I cannot understand why we also need to synchronize methods that only read a shared variable.
Look at this example:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConcurrentIntegerArray {

    private final Lock lock = new ReentrantLock();
    private final int[] arr;

    public ConcurrentIntegerArray(final int size) {
        arr = new int[size];
    }

    public void set(final int index, final int value) {
        lock.lock();
        try {
            arr[index] = value;
        } finally {
            lock.unlock();
        }
    }

    public int get(final int index) {
        lock.lock();
        try {
            return arr[index];
        } finally {
            lock.unlock();
        }
    }
}
They took a lock in the get method and also in the set method. For the set method I understand why. For example, suppose Thread1 wants to put the number 5 at index = 3 and, a few milliseconds later, Thread2 has to put the number 6 at index = 3. Could it happen that index = 3 of my array still holds a 5 instead of a 6 (if I don't synchronize the set method)? This is because Thread1 could be context-switched, so Thread2 enters the same method and puts its value, and afterwards Thread1 assigns the value 5 to the same position, so instead of a 6 I have a 5.
But I don't understand why we also need to synchronize the get method (see the example). I'm asking because get only reads from memory and does not write, so why does the get method also need synchronization? Can someone give me a very simple example?
Both methods need to be synchronized. Without synchronization on the get method, this sequence is possible:
get is called, but the old value isn't returned yet.
Another thread calls set and updates the value.
The first thread that called get now examines the returned value and sees a value that is already outdated.
Synchronization would disallow this scenario by guaranteeing that another thread can't just call set and invalidate the get value before it even returns. It would force a thread that calls set to wait for the thread that calls get to finish.
If you do not lock in the get method, then a thread might keep a local copy of the array and never refresh it from main memory. So it's possible that a get never sees a value that was updated by the set method. The lock forces visibility.
Each thread may maintain its own cached copy of the value. synchronized ensures that coherency is maintained between different threads. Without it, one can never be sure whether anyone has modified the value. Alternatively, one can declare the variable volatile, which has the same memory-visibility effects as synchronized.
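As a sketch, applying the volatile alternative to a single int field (note that it would not cover writes to individual elements of an array reached through a volatile array reference):

    public class VolatileInteger {
        // volatile guarantees that a read sees the most recent completed write,
        // without any locking. This applies to the field itself; element writes
        // through a volatile array reference do not get the same guarantee.
        private volatile int value;

        public int get() {
            return value;
        }

        public void set(final int value) {
            this.value = value;
        }
    }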
The locking action also guarantees memory visibility. From the Lock doc:
All Lock implementations must enforce the same memory synchronization semantics as provided by the built-in monitor lock, [...]:
A successful lock operation has the same memory synchronization effects as a successful Lock action.
A successful unlock operation has the same memory synchronization effects as a successful Unlock action.
Without acquiring the lock, due to memory consistency errors, there's no reason a call to get needs to see the most updated value. Modern processors are very fast, access to DRAM is comparatively very slow, so processors store values they are working on in a local cache. In concurrent programming this means one thread might write to a variable in memory but a subsequent read from a different thread gets a stale value because it is read from the cache.
The locking guarantees that the value is actually read from memory and not from the cache.
Can anyone tell me if I'm right or not? I have two threads which will run in parallel.
class MyThread extends Thread {

    MyThread() {
    }

    void method1() {
    }

    void method2() {
    }

    void method3() {
    }

    // approach (1):
    public void run() {
        method1();
        method2();
        method3();
    }

    // approach (2): an alternative run() used instead of the one above
    // public void run() {
    //     // the code of method1 is here (no method calling)
    //     // the code of method2 is here (no method calling)
    //     // the code of method3 is here (no method calling)
    // }
}

class Test {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        t1.start();
        MyThread t2 = new MyThread();
        t2.start();
    }
}
method1, method2 and method3 don't access global shared data, but their code does write to local variables inside the method body, so I guess I cannot allow overlapping execution within the method body.
Thereby:
in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right?
in approach(2): No need to synchronize the code sections, right?
If I'm right in both approaches, using approach (2) will give better performance, right?
Short answer: you don't need the synchronization. Both approaches are equivalent from a thread safety perspective.
Longer answer:
It may be worthwhile taking a step back and remembering what the synchronized block does. It does essentially two things:
makes sure that if thread A is inside a block that's synchronized on object M, no other thread can enter a block that's synchronized on the same object M until thread A is done with its block of code
makes sure that if thread A has done work within a block that's synchronized on object M, and then finishes that block, and then thread B enters a block that's also synchronized on M, then thread B will see everything that thread A had done within its synchronized block. This is called establishing the happens-before relationship.
Note that a synchronized method is just shorthand for wrapping the method's code in synchronized (this) { ... }.
In addition to those two things, the Java Memory Model (JMM) guarantees that within one thread, things will happen as if they had not been reordered. (They may actually be reordered for various reasons, including efficiency -- but not in a way that your program can notice within a single thread. For instance, if you do "x = 1; y = 2" the compiler is free to switch that such that y = 2 happens before x = 1, since a single thread can't actually notice the difference. If multiple threads are accessing x and y, then it's very possible, without proper synchronization, for another thread to see y = 2 before it sees x = 1.)
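Here is a hedged sketch of that x/y example as a deliberately racy program; whether the reordered view is actually observed on a given run depends on the JIT and the hardware:

    public class ReorderingDemo {
        // Neither field is volatile and there is no synchronization,
        // so the writes may become visible to the reader in either order.
        static int x = 0;
        static int y = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread writer = new Thread(() -> {
                x = 1;
                y = 2;
            });
            Thread reader = new Thread(() -> {
                // Without a happens-before edge, it is legal for this thread
                // to observe y == 2 while still seeing x == 0.
                if (y == 2 && x == 0) {
                    System.out.println("Saw y = 2 before x = 1");
                }
            });
            writer.start();
            reader.start();
            writer.join();
            reader.join();
        }
    }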
So, getting back to your original question, there are a couple interesting notes.
First, since a synchronized method is shorthand for putting the whole method inside a "synchronized (this) { ... }" block, t1's methods and t2's methods will not be synchronized against the same reference, and thus will not be synchronized relative to each other. t1's methods will only be synchronized against the t1 object, and t2's will only be synchronized against t2. In other words, it would be perfectly fine for t1.method1() and t2.method1() to run at the same time. So, of those two things the synchronized keyword provides, the first one (the exclusivity of entering the block) isn't relevant. Things could go something like:
t1 wants to enter method1. It needs to acquire the t1 monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the t2 monitor, which is not contended -- so it acquires it and enters the block
t1 finishes method1 and releases its hold on the t1 monitor
t2 finishes method1 and releases its hold on the t2 monitor
As for the second thing synchronization does (establishing happens-before), making method1() and method2() synchronized will basically be ensuring that t1.method1() happens-before t1.method2(). But since both of those happen on the same thread anyway (the t1 thread), the JMM anyway guarantees that this will happen.
So it actually gets even a bit uglier. If t1 and t2 did share state -- that is, synchronization would be necessary -- then making the methods synchronized would not be enough. Remember, a synchronized method means synchronized (this) { ... }, so t1's methods would be synchronized against t1, and t2's would be against t2. You actually wouldn't be establishing any happens-before relationship between t1's methods and t2's.
Instead, you'd have to ensure that the methods are synchronized on the same reference. There are various ways to do this, but basically, it has to be a reference to an object that the two threads both know about.
Assume t1 and t2 both know about the same reference, LOCK. Both have methods like:
void method1() {
    synchronized (LOCK) {
        // do whatever
    }
}
Now things could go something like this:
t1 wants to enter method1. It needs to acquire the LOCK monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the LOCK monitor, which is already held by t1 -- so t2 is put on hold.
t1 finishes method1 and releases its hold on the LOCK monitor
t2 is now able to acquire the LOCK monitor, so it does, and starts on the meat of method1
t2 finishes method1 and releases its hold on the LOCK monitor
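Putting the walkthrough together, here is a rough sketch (the Worker class and the shared array are invented for illustration) of two threads synchronizing on the same LOCK reference:

    public class Worker implements Runnable {
        // One object shared by every Worker instance; both threads synchronize on it.
        private static final Object LOCK = new Object();

        private final int[] sharedData;

        public Worker(int[] sharedData) {
            this.sharedData = sharedData;
        }

        @Override
        public void run() {
            method1();
        }

        private void method1() {
            synchronized (LOCK) {
                // Only one thread at a time can be in here, regardless of which
                // Worker instance it is running on, because they share the same monitor.
                sharedData[0]++;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            int[] data = new int[1];
            Thread t1 = new Thread(new Worker(data));
            Thread t2 = new Thread(new Worker(data));
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(data[0]); // always 2
        }
    }

If method1() were simply declared synchronized instead, each thread would lock its own Worker instance and the two would never block each other, as described above.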
You are saying your methods don't access global shared data and write only to local variables, so there is no need to synchronize them, because both threads will have their own copies of the local variables. They will not overlap.
This kind of problem arises with static/class variables. If multiple threads try to change the value of a static variable at the same time, then there is a problem, and that is where we need to synchronize.
If the methods you're calling don't write to global shared data, you don't have to synchronize them.
In a multithreaded program, each thread has its own call stack. The local variables of each method will be separate in each thread, and will not overwrite one another.
Therefore, approach 1 works fine, does not require synchronization overhead, and is much better programming practice because it avoids duplicated code.
Thread-wise you're OK. Local variables within methods are not shared between threads, as each instance running in a thread will have its own stack.
You won't see any speed difference between the two approaches; approach (1) is just a better organisation of the code (shorter methods are easier to understand).
If each method is independent of the others, you may want to consider whether they belong in the same class. If you want a performance gain, create 3 different classes and execute multiple threads for each method (performance gains depend on the number of available cores, the CPU/IO ratio, etc.).
Thereby: in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right? in approach(2): No need to synchronize the code sections, right?
Whether methods are in-lined or invoked as separate methods doesn't determine whether a method should be synchronized. I'd recommend you read this and then ask for more clarification.
If I'm right in both approach, using the approach(2) will give better performance, right?
At the cost of collapsing the methods into a single god method? Sure, but you would be looking at a "very" minuscule improvement compared to the lost code readability, which is definitely not recommended.
method1, 2 and 3 won't be executed concurrently, so if the variables that they read/write are not shared with other threads outside the class while they're running, then no synchronization is required and there is no need to inline.
If they modify data that other threads will read at the same time that they're running then you need to guard access to that data.
If they read data that other threads will write at the same time that they're running then you need to guard access to that data.
If other threads are expected to read data modified by method1, 2, or 3, then you need to make the run method synchronized (or wrap them in a synchronized block) to set up a gate, so that the JVM establishes a memory barrier and ensures that other threads can see the data after method1, 2 and 3 are done.