Method call and atomicity - java

I have a method with a single atomic operation, like this one
int value;

public void setValue(int value) {
    this.value = value;
}
then I call it in the obvious way, like
foo.setValue(10);
The question is: would this be an atomic operation? If not, what atomic operations will be executed? How can I test this on my computer (if I can)?

Yes, the
this.value = value;
operation is atomic. See the Java Language Specification: Threads and Locks.
Note though that threads are allowed to cache their own values of non-volatile variables, so it is not guaranteed that a successive get-operation would yield the last set value.
To get rid of this kind of data race you need to synchronize access to the variable somehow. This can be done by
making the method synchronized,
declaring the variable volatile, or
using AtomicInteger from the java.util.concurrent package (the preferred way, in my opinion); see the sketches below.
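For illustration, here is a minimal sketch of all three options; the class names are placeholders invented for this example:

import java.util.concurrent.atomic.AtomicInteger;

// Option 1: synchronized accessors - reads and writes go through the same lock
class SynchronizedHolder {
    private int value;
    public synchronized void setValue(int value) { this.value = value; }
    public synchronized int getValue() { return value; }
}

// Option 2: volatile field - atomic assignment plus visibility across threads
class VolatileHolder {
    private volatile int value;
    public void setValue(int value) { this.value = value; }
    public int getValue() { return value; }
}

// Option 3: AtomicInteger - same guarantees, plus atomic read-modify-write methods
class AtomicHolder {
    private final AtomicInteger value = new AtomicInteger();
    public void setValue(int value) { this.value.set(value); }
    public int getValue() { return value.get(); }
}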
It should also be noted that the operation would not be atomic if you changed from int to long or double. Here is a relevant section from the JLS:
17.4 Non-atomic Treatment of double and long
If a double or long variable is not declared volatile, then for the purposes of load, store, read, and write actions they are treated as if they were two variables of 32 bits each: wherever the rules require one of these actions, two such actions are performed, one for each 32-bit half.
Some useful links:
Wikipedia article on the Java Memory Model
Java Language Specification, Interaction with the Memory Model

It is atomic, because it is just a primitive 32-bit value.
Hence when you read it, there is a guarantee that you will see a value set by one of the threads, but you won't know which one it was.
If it were a long, you wouldn't have the same guarantee, although in practice most VM implementations do treat long writes as atomic operations.
This is what the JLS has to say on the issue:
VM implementors are encouraged to avoid splitting their 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications.
But with ints you are safe. The question is: is this very weak guarantee enough for you? More often than not, the answer is no.

First of all, assignment to all primitive types (except 64-bit ones) in Java is atomic according to the Java specification. But for instance auto-increment isn't thread-safe, no matter which type you use.
But the real problem with this code is not atomicity, but visibility. If two threads are modifying the value, they might not see the changes made by each other. Use the volatile keyword or, even better, AtomicInteger to guarantee correct synchronization and visibility.
Please note that the synchronized keyword also guarantees visibility, which means that if some modification happens inside a synchronized block, it is guaranteed to be visible to other threads.
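For illustration, a minimal sketch of why auto-increment is not thread-safe even on a volatile field, and the AtomicInteger alternative (the counter class is invented for this example):

import java.util.concurrent.atomic.AtomicInteger;

class Counters {
    private volatile int plainCount;                       // volatile: visible, but ++ is not atomic
    private final AtomicInteger atomicCount = new AtomicInteger();

    void unsafeIncrement() {
        plainCount++;                      // read-modify-write: concurrent calls can lose updates
    }

    void safeIncrement() {
        atomicCount.incrementAndGet();     // the whole read-modify-write is a single atomic step
    }
}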

Related

Atomic variables over Volatile

Since Atomic variables are volatile, are there any disadvantages to always using an Atomic variable, even if you only need the volatility aspect?
From a concurrency perspective there is no difference between:
final AtomicInteger foo1 = new AtomicInteger();
And
volatile int foo2;
A foo1.get/set is the same as reading or writing foo2. Both will provide atomicity, visibility and ordering guarantees. If you look at the code of e.g. AtomicInteger, you will see a volatile int field.
The primary use case for an Atomic is that it makes read-modify-write operations, like incrementing a counter, very easy, and that it gives you access to more relaxed forms of ordering like getAcquire and setRelease. But you can do the same things on a plain field using the Atomic*FieldUpdater classes or VarHandles (although the syntax is less pretty); see the sketch below.
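For illustration, a rough sketch of acquire-ordered access and a read-modify-write on a field through a VarHandle (class and field names are made up; Java 9+ assumed):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class RelaxedCounter {
    private volatile int count;
    private static final VarHandle COUNT;

    static {
        try {
            COUNT = MethodHandles.lookup()
                    .findVarHandle(RelaxedCounter.class, "count", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void increment() {
        COUNT.getAndAdd(this, 1);              // atomic read-modify-write, no wrapper object needed
    }

    int readAcquire() {
        return (int) COUNT.getAcquire(this);   // acquire-ordered read, weaker than a full volatile read
    }
}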
One drawback of atomic is extra memory usage and indirection.
A variable by itself cannot be atomic; there is a clear difference between atomicity and volatility.
Atomicity: if only one thread can execute a set of instructions at a given time, the operation is called atomic.
Volatility: volatile ensures visibility. If a thread modifies some volatile state, the other threads see the most recently updated state.
Example:
volatile boolean flag;

public void flipTheFlag() {
    if (flag == true) {
        flag = false;
    } else {
        flag = true;
    }
}
If multiple threads call flipTheFlag, the resulting value of flag will be uncertain even though flag is a volatile variable. That is why the flipTheFlag operation needs to be atomic. We can make it atomic simply by adding the synchronized keyword; see the sketches below.
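Two hedged sketches of making the flip atomic: one with synchronized, as suggested, and one with AtomicBoolean and a compare-and-set loop (the class name is invented for the example):

import java.util.concurrent.atomic.AtomicBoolean;

class Flags {
    private boolean flag;
    private final AtomicBoolean atomicFlag = new AtomicBoolean();

    public synchronized void flipTheFlag() {
        flag = !flag;                          // check and update now happen under one lock
    }

    public void flipAtomically() {
        boolean current;
        do {
            current = atomicFlag.get();
        } while (!atomicFlag.compareAndSet(current, !current));
    }
}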
When, after creation of a final Atomic object, different threads use the object to change its internal state, everything works as if the field were volatile.
However, there is an extra object instance around, which costs memory and some performance. The reference should in this case be constant/effectively final, and it should be created before any other thread gets access to it.
Another aspect - whose correctness I do not actually remember from the Java reference, but did read elsewhere - is that with several fields, when one volatile field is modified, the other fields written before it also become visible to other threads.
Atomic constants (or similar one-element constant arrays) still have mutable state and are sometimes abused to collect aggregated results in a Stream operation, which can only access (effectively) final variables. This shows that Atomic does not imply multithreaded usage.
In x = x + c; (with volatile x) you will read the latest x, but after adding c another thread might change x, and you will still assign a stale sum to x. Here atomicity is required. The same goes for if (x > 0) x = c;.
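A minimal sketch of how an AtomicInteger covers those two cases (the names are placeholders):

import java.util.concurrent.atomic.AtomicInteger;

class CasExamples {
    private final AtomicInteger x = new AtomicInteger();

    int add(int c) {
        return x.addAndGet(c);            // atomic equivalent of x = x + c
    }

    void setIfPositive(int c) {           // atomic equivalent of "if (x > 0) x = c"
        int current;
        do {
            current = x.get();
            if (current <= 0) {
                return;                   // condition no longer holds, give up
            }
        } while (!x.compareAndSet(current, c));
    }
}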
So to answer the question: depending on the context they are somewhat interchangeable. I can see why you would prefer Atomic, but there are simple cases where volatile is considerably more performant, especially in fine-grained concurrency.
A last remark: I am not totally confident, whether I am entirely correct here.

unsynchronized read/write of variables may cause data race?

in Java Performance Tuning by Jack Shirazi it writes:
This means that access and update of variables are automatically synchronized (as long as they are not longs or doubles). If a method consists solely of a variable access or assignment, there is no need to make it synchronized for thread safety, and every reason not to do so for performance. Thread safety extends further to any set of statements that are accessing or assigning to a variable independently of any other variable values.
According to the description above, an operation like flag = true is always atomic and does not need synchronization.
However, here comes another article that regards the following situation as a data race:
class DataRaceExample {
    static boolean flag = false; // w0

    static void raiseFlag() {
        flag = true; // w1
    }

    public static void main(String... args) {
        ForkJoinPool.commonPool().execute(DataRaceExample::raiseFlag);
        while (!flag); // r_i, where i ∈ [1, k), k may be infinite
        System.out.print(flag); // r
    }
}
and the author says:
Now, all executions have data races, because the flag is not volatile
The conflict between the two articles confused me a lot.
Jack Shirazi is wrong.
Access and update of a primitive variable such as int is atomic, but not synchronized.
Because it is atomic, it can be made fully thread-safe by making it volatile. Without that, other threads running on a different core may see stale values, because the CPU cache hasn't been refreshed.
The point that Jack Shirazi is trying to make is that non-volatile accesses to primitive types other than double and long are guaranteed to be performed atomically according to the JMM. Thus, synchronization is unnecessary to prevent, for example, torn reads and writes in the presence of concurrent accesses.
The confusion arises because his book predates JSR-133 and he uses terms like "automatically synchronized" which is not in line with modern notions of synchronization within the JMM.
In your second example, the loop will either not run or run forever.
The reason for this is that the JIT is allowed to read the variable flag just once, when it is first checked, and hoist it out of the loop.
If flag is volatile, then it is read from memory each time. This allows another thread to change the value of flag and the loop will see it.
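For completeness, a sketch of the example with flag declared volatile, which removes the data race so the loop is guaranteed to see the write eventually:

import java.util.concurrent.ForkJoinPool;

class DataRaceFixed {
    static volatile boolean flag = false;   // volatile: every iteration re-reads the shared value

    static void raiseFlag() {
        flag = true;
    }

    public static void main(String... args) {
        ForkJoinPool.commonPool().execute(DataRaceFixed::raiseFlag);
        while (!flag);                      // now terminates once raiseFlag has run
        System.out.print(flag);
    }
}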

Methods that don't change a variable's value need to be synchronized if they access the variable [duplicate]

private double value;

public synchronized void setValue(double value) {
    this.value = value;
}

public double getValue() {
    return this.value;
}
In the above example is there any point in making the getter synchronized?
I think it's best to cite Java Concurrency in Practice here:
It is a common mistake to assume that synchronization needs to be used only when writing to shared variables; this is simply not true.
For each mutable state variable that may be accessed by more than one
thread, all accesses to that variable must be performed with the same
lock held. In this case, we say that the variable is guarded by that
lock.
In the absence of synchronization, the compiler, processor, and runtime can do some downright weird things to the order in which operations appear to execute. Attempts to reason about the order in which memory actions "must" happen in insufficiently synchronized multithreaded programs will almost certainly be incorrect.
Normally, you don't have to be so careful with primitives, so if this were an int or a boolean it might be that:
When a thread reads a variable without synchronization, it may see a
stale value, but at least it sees a value that was actually placed
there by some thread rather than some random value.
This, however, is not true for 64-bit operations, for instance on long or double if they are not declared volatile:
The Java Memory Model requires fetch and
store operations to be atomic, but for nonvolatile long and double
variables, the JVM is permitted to treat a 64-bit read or write as two
separate 32-bit operations. If the reads and writes occur in different
threads, it is therefore possible to read a nonvolatile long and get
back the high 32 bits of one value and the low 32 bits of another.
Thus, even if you don't care about stale values, it is not safe to use
shared mutable long and double variables in multithreaded programs
unless they are declared volatile or guarded by a lock.
Let me show you by example what is a legal way for a JIT to compile your code. You write:
while (myBean.getValue() > 1.0) {
    // perform some action
    Thread.sleep(1);
}
JIT compiles:
if (myBean.getValue() > 1.0)
    while (true) {
        // perform some action
        Thread.sleep(1);
    }
In just slightly different scenarios even the Java compiler could produce similar bytecode (it would only have to eliminate the possibility of dynamic dispatch to a different getValue). This is a textbook example of hoisting.
Why is this legal? The compiler has the right to assume that the result of myBean.getValue() can never change while executing above code. Without synchronized it is allowed to ignore any actions by other threads.
The reason here is to guard against any other thread updating the value while a thread is reading it, and thus to avoid acting on a stale value.
Here the get method will acquire the intrinsic lock on "this", so any other thread that attempts to set/update the value via the setter method will have to wait to acquire the lock on "this", which is already held by the thread performing the get.
This is why it's recommended to follow the practice of using the same lock when performing any operation on a mutable state.
Making the field volatile would also work here, as there are no compound statements (see the sketch below).
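A sketch of that volatile alternative, assuming a simple value holder like the one in the question:

class ValueHolder {
    private volatile double value;     // volatile: atomic even for double, and writes are visible to readers

    public void setValue(double value) {
        this.value = value;            // a plain assignment, no read-modify-write, so no lock needed
    }

    public double getValue() {
        return this.value;
    }
}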
It is important to note that synchronized methods use the intrinsic lock, which is "this". So get and set both being synchronized means any thread entering either method will have to acquire the lock on this.
When performing non-atomic 64-bit operations, special consideration should be taken. An excerpt from Java Concurrency in Practice helps to understand the situation:
"The Java Memory Model requires fetch and store operations to be atomic, but for non-volatile long and double variables, the JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations. If the reads and writes occur in different threads, it is therefore possible to read a non-volatile long and get back the high 32 bits of one value and the low 32 bits of another. Thus, even if you don't care about stale values, it is not safe to use shared mutable long and double variables in multi-threaded programs unless they are declared volatile or guarded by a lock."
Maybe this code looks awful to some, but it works very well.
private Double value;

public void setValue(Double value) {
    updateValue(value, true);
}

public Double getValue() {
    return updateValue(null, false);
}

private Double updateValue(Double value, boolean set) {
    synchronized (MyClass.class) {
        if (set)
            this.value = value;
        return this.value;   // the read also happens inside the lock, so the getter is synchronized too
    }
}

using volatile keyword in java4 and java5

What is the difference in using the volatile keyword in Java 4 and Java 5 onwards?
And, related to that:
Read/write operations on non-atomic variables (long/double) are atomic when they are declared as volatile.
Is this also true for Java 4, or is it only valid from Java 5 onwards?
Yes, there is a difference.
Up to Java 4, a volatile access could be re-ordered by the compiler with respect to surrounding non-volatile reads and writes, leading to subtle concurrency bugs, e.g. making it impossible to implement double-checked locking (a very common idiom for a singleton).
This was fixed in Java 5.0, which introduces a new memory model and extends the semantics of volatile so that it can no longer be reordered with respect to the surrounding reads and writes in that way (see the sketch below). You can read about Double-Checked Locking for an example reference.
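For illustration, the classic double-checked-locking sketch that is only correct under the Java 5+ volatile semantics (Singleton is just an example class):

class Singleton {
    private static volatile Singleton instance;    // volatile is what makes this idiom safe on Java 5+

    static Singleton getInstance() {
        Singleton result = instance;                // one volatile read on the common, already-initialized path
        if (result == null) {
            synchronized (Singleton.class) {
                result = instance;
                if (result == null) {
                    instance = result = new Singleton();
                }
            }
        }
        return result;
    }
}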
This site gives a good explanation of the differences: http://www.javamex.com/tutorials/synchronization_volatile.shtml
They also give an explanation of the behavior of volatile in Java 5 on a separate page: http://www.javamex.com/tutorials/synchronization_volatile_java_5.shtml
People have provided good points and references responding to the first part of my question.
Going specifically to the second part of the question, this is what I read on some forum:
A volatile declared long is atomic (pre-Java 5 also) in the sense that
it guarantees (for all JVM implementations) a read or write go
directly to main memory instead of two 32-bit registers.
And
Pre-Java 5, volatile was supposed to provide such guarantees for long
and double. However things did not work out this way in practice, and
implementations frequently violated this guarantee. As I recall the
issue seemed to get fixed around JDK 1.4, but as they were still
working on the whole memory model thing, they didn't really make any
clear announcements about it until JDK 5, when the new rules were
announced, and memory guarantees actually meant something.
And this is from the Java Language Specification, Second Edition:
17.4 Nonatomic Treatment of double and long
The load, store, read, and write actions on volatile variables are atomic,
even if the type of the variable is double or long.
What is the difference in using volatile keyword in java4 and java5 onwards?
The JMM before JDK 5 is broken, and using volatile on JDK 4 may not provide the intended result. For more, check this:
http://www.ibm.com/developerworks/library/j-jtp02244/
Read/write operations on non-atomic variables(long/double) are atomic when they are declared as volatile.
Reads and writes of long/double may happen as two separate 32-bit operations. If the reads and writes occur in different threads, it is possible to read back the high 32 bits of one value and the low 32 bits of another. In short, a read/write of a long is not an atomic operation, unlike the other primitives.
Declaring the long/double volatile is supposed to provide that guarantee: volatile reads and writes are treated as single atomic operations and are not re-ordered by the compiler, and volatile also provides a visibility guarantee. But again, this may not work reliably on JDK 4 or before (see the sketch below).
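A minimal sketch of the difference (the field names are made up):

class LongHolder {
    long plainValue;               // the JVM may split reads/writes into two 32-bit halves
    volatile long safeValue;       // reads/writes are required to be atomic (reliable from Java 5 onwards)
}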

What operations are atomic operations

I am a little confused...
Is it true that, when reading/writing from several threads, all types except long and double are atomic operations, and that volatile is only needed with long and double?
It sounds like you're referring to this section of the JLS. It is guaranteed for all primitive types -- except double and long -- that all threads will see some value that was actually written to that variable. (With double and long, the first four bytes might have been written by one thread, and the last four bytes by another thread, as specified in that section of the JLS.) But they won't necessarily see the same value at the same time unless the variable is marked volatile.
Even using volatile, x += 3 is not atomic, because it's x = x + 3, which does a read and a write, and there might be writes to x between the read and the write. That's why we have things like AtomicInteger and the other utilities in java.util.concurrent.
Let's not confuse atomic with thread-safe. Long and double writes are not atomic underneath because each is two separate 32-bit stores. Storing and loading non-long/double fields is perfectly atomic, assuming they are not compound writes (i++, for example).
By atomic I mean you will not read some garbled object as a result of many threads writing different objects to the same field.
From Java Concurrency In Practice 3.1.2
Out-of-thin-air safety: When a thread reads a variable without synchronization, it may see a stale value, but at least it sees a value that was actually placed there by some thread rather than some random value. This is true for all variables, except 64-bit long and double variables that are not declared volatile. The JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations, which are not atomic.
That doesn't sound right.
An atomic operation is one that forces all threads to wait to access a resource until another thread is done with it. I don't see why some data types would be atomic and others not.
volatile has other semantics than just writing the value atomically:
it means that other threads can see the updated value immediately (and that the read can't be optimized away).
