Why does JDK source code take a `final` copy of `volatile` instances? - java

I read the JDK's source code about ConcurrentHashMap.
But the following code confused me:
public boolean isEmpty() {
    final Segment<K,V>[] segments = this.segments;
    ...
}
My question is:
"this.segments" is declared:
final Segment<K,V>[] segments;
So, at the beginning of the method, a local reference of the same type is declared, pointing to the same object.
Why did the author write it like this instead of using this.segments directly? Is there some reason?

This is an idiom typical for lock-free code involving volatile variables. At the first line you read the volatile once and then work with it. In the meantime another thread can update the volatile, but you are only interested in the value you initially read.
Also, even when the member variable in question is not volatile but final, this idiom has to do with CPU caches as reading from a stack location is more cache-friendly than reading from a random heap location. There is also a higher chance that the local var will end up bound to a CPU register.
For this latter case there is actually some controversy, since the JIT compiler will usually take care of those concerns, but Doug Lea is one of the guys who sticks with it on general principle.
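The read-once idiom can be sketched in isolation. Below is a minimal, hypothetical Snapshot class (not the actual ConcurrentHashMap code), assuming a volatile array field that writer threads may republish wholesale:

```java
// Hypothetical sketch of the read-once idiom (not the actual JDK source).
class Snapshot {
    // Republished wholesale by writer threads; each access is a volatile read.
    private volatile int[] data = new int[0];

    int sum() {
        final int[] d = this.data; // single volatile read; d is stable afterwards
        int total = 0;
        for (int v : d) {
            total += v;            // every iteration sees the same array
        }
        return total;
    }

    void publish(int[] fresh) {
        this.data = fresh;         // volatile write publishes the new array
    }
}
```

Even if another thread calls publish() while sum() is running, sum() keeps working on the array it initially read, so it returns an internally consistent result.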

I guess it's for performance reasons, so that the field value only needs to be retrieved once.
You can refer to the lazy-initialization idiom in Effective Java by Joshua Bloch.
His example is:
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) {
        synchronized(this) {
            result = field;
            if (result == null)
                field = result = computeFieldValue();
        }
    }
    return result;
}
and he wrote:
This code may appear a bit convoluted. In particular, the need for the
local variable result may be unclear. What this variable does is to
ensure that field is read only once in the common case where it’s
already initialized. While not strictly necessary, this may improve
performance and is more elegant by the standards applied to low-level
concurrent programming. On my machine, the method above is about 25
percent faster than the obvious version without a local variable.

It may reduce bytecode size - accessing a local variable takes fewer bytecode instructions than accessing an instance variable.
Runtime optimization overhead may be reduced too.
But none of these are significant. It's more about code style. If you feel comfortable with instance variables, by all means use them. Doug Lea probably feels more comfortable dealing with local variables.
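The bytecode-size point can be made concrete with a small, hypothetical class (the comments show the typical javap output; exact constant-pool indices vary by class):

```java
// Hypothetical illustration of local-vs-field access cost at the bytecode level.
class Sizes {
    final Object field = new Object();

    Object viaField() {
        return field;         // aload_0; getfield #x  -> 4 bytes per access
    }

    Object viaLocal() {
        Object local = field; // one getfield up front, then...
        return local;         // aload_1               -> 1 byte per access
    }
}
```

With a single access the two are a wash; the savings only show up when the same field is read repeatedly in one method.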

Related

Why Double checked locking is 25% faster in Joshua Bloch Effective Java Example

Below is a snippet from Effective Java 2nd Edition. The author claims that the following piece of code is 25% faster than a code in which you do not use the result variable.
According to the book, "What this variable does is to ensure that field is read only once in the common case where it’s already initialized."
I am not able to understand why this code would be faster after the value is initialized compared to if we do not use the local variable result. In either case you will have only one volatile read after initialization whether you use the local variable result or not.
// Double-check idiom for lazy initialization of instance fields
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) {  // First check (no locking)
        synchronized(this) {
            result = field;
            if (result == null)  // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
Once field has been initialised, the code is either:
if (field == null) {...}
return field;
or:
result = field;
if (result == null) {...}
return result;
In the first case you read the volatile variable twice whereas in the second you only read it once. Although volatile reads are very fast, they can be a little slower than reading from a local variable (I don't know if it is 25%).
Notes:
volatile reads are about as cheap as normal reads on recent processors (at least on x86) and JVMs, i.e. there is no difference.
however the compiler can better optimise code without volatile, so you could get efficiency from better compiled code.
25% of a few nanoseconds is still not much anyway.
it is a standard idiom that you can find in many classes of the java.util.concurrent package - see for example this method in ThreadPoolExecutor (there are many of them)
Without using a local variable, in most invocations we have effectively
if(field!=null) // true
return field;
so there are two volatile reads, which is slower than one volatile read.
Actually, the JVM can merge the two volatile reads into one volatile read and still conform to the JMM. But we expect the JVM to perform a good-faith volatile read every time it's told to, not to be a smartass and try to optimize away volatile reads. Consider this code

volatile boolean ready;

do {} while (!ready); // busy wait

we expect the JVM to really load the variable repeatedly.
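A runnable version of that busy wait, using a hypothetical SpinFlag class (Thread.onSpinWait() is a Java 9+ hint; the volatile read still happens on every pass):

```java
// Busy-wait visibility sketch: the volatile read in the loop must not be hoisted.
class SpinFlag {
    volatile boolean ready;

    // Spins until another thread sets `ready`; returns once the write is visible.
    void await() {
        while (!ready) {
            Thread.onSpinWait(); // CPU hint only; does not replace the volatile read
        }
    }
}
```

If `ready` were not volatile, the JIT would be free to hoist the read out of the loop and spin forever on a stale value.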

Access to volatile fields through local variables

This question is somewhat a continuation and expansion of this one, which I think asks it perfectly: How does assigning to a local variable help here?
This question is based on Item 71 of Effective Java, where it is suggested to speed up performance by introducing a local variable for volatile field access:
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) {  // First check (no locking)
        synchronized(this) {
            result = field;
            if (result == null)  // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
So, my question is more general:
should we always access volatile fields by assigning their values to local variables (in order to achieve the best performance)?
I.e. some idiom:
we have some volatile field, call it just volatileField;
if we want to read its value in a multi-threaded method, we should:
create a local variable of the same type: localVolatileVariable
assign the value of the volatile field: localVolatileVariable = volatileField
read the value from this local copy, e.g.:
if (localVolatileVariable != null) { ... }
You must assign volatile variables to local variables if you plan on doing any sort of multi-step logic (assuming, of course, that the field is mutable).
for instance:
volatile String _field;

public int getFieldLength() {
    String tmp = _field;
    if (tmp != null) {
        return tmp.length();
    }
    return 0;
}
if you did not use a local copy of _field, then the value could change between the "if" test and the "length()" method call, potentially resulting in an NPE.
this is besides the obvious benefit of a speed improvement by not doing more than one volatile read.
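For contrast, here is the racy counterpart wrapped in a hypothetical RacyHolder class (illustration of the bug only; single-threaded it behaves identically, the hazard only appears under concurrent writes):

```java
// Racy version: two separate volatile reads of _field instead of one.
class RacyHolder {
    volatile String _field;

    public int getFieldLength() {
        if (_field != null) {         // volatile read #1
            // Another thread may set _field = null right here...
            return _field.length();   // volatile read #2: can throw NullPointerException
        }
        return 0;
    }
}
```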
There are two sides to this coin.
On the one hand, assignment to a volatile works like a memory barrier, and it's very unlikely that the JIT will reorder the assignment with the computeFieldValue invocation.
On the other hand, in theory this code breaks the JMM, because some JVM would be allowed to reorder computeFieldValue with the assignment, so you could see a partially initialized object. This is possible as long as the variable read is not ordered with the variable write.
field = result = computeFieldValue();
does not happen-before
if (result == null) { // First check (no locking)
As long as Java code is supposed to be "write once, run everywhere", DCL is a bad practice and should be avoided. This code is broken and is not a point of consideration.
If you have multiple reads of a volatile variable in a method, assigning it to a local variable first minimizes such reads, which are more expensive. But I don't think you get a performance boost; this is likely to be a theoretical improvement. Such optimization should be left to the JIT and is not a point of developer consideration. I agree with this.
Instead of potentially doing TWO reads of a volatile variable, it does just one.
Reading a volatile is probably a bit slower than a usual variable. But even if so, we are talking about nanoseconds here.

Equivalent of AtomicReference but without the volatile synchronization cost

What is the equivalent of:
AtomicReference<SomeClass> ref = new AtomicReference<SomeClass>( ... );
but without the synchronization cost. Note that I do want to wrap a reference inside another object.
I've looked at the classes extending the Reference abstract class but I'm a bit lost amongst all the choices.
I need something really simple: not weak, nor phantom, nor any of the other references besides a plain strong one. Which class should I use?
If you want a reference without thread safety you can use an array of one.

MyObject[] ref = { new MyObject() };
MyObject mo = ref[0];     // read
ref[0] = new MyObject();  // write
If you are simply trying to store a reference in an object, can't you create a class with a field? The field would be a strong reference, which should achieve what you want.
You shouldn't create a StrongReference class (because it would be silly), but to demonstrate it:

public class StrongReference {
    Object reference;

    public void set(Object ref) {
        this.reference = ref;
    }

    public Object get() {
        return this.reference;
    }
}
Since Java 9 you can now use AtomicReference.setPlain() and AtomicReference.getPlain().
JavaDoc on setPlain:
"Sets the value to newValue, with memory semantics of setting as if the variable was declared non-volatile and non-final."
AtomicReference does not have the cost of synchronization in the sense of traditional synchronized sections. It is implemented as non-blocking, meaning that threads that wait to "acquire the lock" are not context-switched, which makes it very fast in practice. Probably for concurrently updating a single reference, you cannot find a faster method.
If you still want to use AtomicReference but don't want to incur the cost of the volatile write you can use lazySet
The write doesn't issue a memory barrier that a normal volatile write does, but the get still invokes a volatile load (which is relatively cheap)
AtomicReference<SomeClass> ref = new AtomicReference<SomeClass>();
ref.lazySet(someClass);
I think all you want is:
public class MyReference<T> {
    T reference;

    public void set(T ref) {
        this.reference = ref;
    }

    public T get() {
        return this.reference;
    }
}
You might consider adding delegating equals(), hashCode(), and toString().
Using java.util.concurrent.atomic.AtomicReference feels wrong to me too for simply sharing a reference to an object. Besides the "atomicity costs", AtomicReference is full of methods that are irrelevant for your use case and may raise wrong expectations in the user.
But I haven't encountered such an equivalent class in the JDK yet.
Here is a summary of your options - choose what fits best for you:
A self-written value container like the proposed StrongReference or MyReference from the other answers
MutableObject from Apache Commons Lang
Array with length == 1 or a List with size == 1
setPlain(V) and getPlain() in AtomicReference since Java 9
all provided classes extending Reference have some special functionality attached, from atomic CAS to allowing the referenced object to be collected even though a reference to it still exists
you can create your own StrongReference as John Vint explained (or use an array with length == 1), but there aren't that many uses for that
There is no synchronization cost to AtomicReference. From the description of the java.util.concurrent.atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables.
EDIT
Based on your comments to your original post, it seems that you used the term "synchronization cost" in a non-standard way to mean thread-local cache flushing in general. On most architectures, reading a volatile is nearly as cheap as reading a non-volatile value. Any update to a shared variable is going to require cache flushing of at least that variable (unless you are going to abolish thread-local caches entirely). There isn't anything cheaper (performance-wise) than the classes in java.util.concurrent.atomic.
If your value is immutable, java.util.Optional looks like a great option.

Why doesn't it use the instance field directly, instead of assigning it to a local variable? [duplicate]

This question already has answers here:
In ArrayBlockingQueue, why copy final member field into local final variable?
Java local vs instance variable access speed
I'm reading the source of java.util.concurrent.ArrayBlockingQueue, and found some code I don't understand:
private final ReentrantLock lock;

public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        if (count == items.length)
            return false;
        else {
            insert(e);
            return true;
        }
    } finally {
        lock.unlock();
    }
}
Notice this line:
final ReentrantLock lock = this.lock;
Why doesn't it use this.lock directly, instead of assigning it to a local variable?
Could it be for optimization purposes?
Possibly a local variable could more easily be directly allocated to a register with a JIT compiler.
At least in Android, for the first versions of the API, accessing a local variable was cheaper than accessing an instance variable (can't speak for newer versions). It could be that plain Java is the same, and in some cases it makes sense to use a local.
Actually, found a thread confirming this here. Extract:
It's a coding style made popular by Doug Lea. It's an extreme
optimization that probably isn't necessary; you can expect the JIT to
make the same optimizations. (you can try to check the machine code
yourself!) Nevertheless, copying to locals produces the smallest
bytecode, and for low-level code it's nice to write code that's a
little closer to the machine.
Since it's just copying the reference, and the lock is on the object itself (which is the same object either way), it shouldn't matter.
The instance variable lock is also declared final, so really, I don't see any point in doing a reference copy.
As JRL pointed out, it's an optimization, but it's really such a tiny micro-optimization that I still don't see much point in doing it, especially for just one read.
Better safe than sorry?
I guess, when writing fundamental libraries, you'd better go for the safest solution, if it's cheap enough. And this one is extremely cheap.
Concerning performance, a local variable should be faster than a field access. I guess any JVM worth its name will do such a trivial optimization(*) itself, but what about interpreted code and possibly C1 (the first, fast, low-quality compiler in the Oracle JVM)? It won't make much difference, but the saved microseconds add up across millions of users. What about Java running on exotic platforms with a JVM yet to be written...
The final is not exactly final in reality: the field may get changed using reflection. I can't imagine any reason for doing this, and anyone doing such funny things just gets what they deserve. OTOH, debugging such problems may take days, and fool-proof programming is a good habit when writing such fundamental stuff.
(*) I believe to have read an article by someone reputable claiming the opposite, but I can't find it now.

Java volatile reference vs. AtomicReference

Is there any difference between a volatile Object reference and AtomicReference in case I would just use get() and set()-methods from AtomicReference?
Short answer is: No.
From the java.util.concurrent.atomic package documentation. To quote:
The memory effects for accesses and updates of atomics generally follow the rules for volatiles:
get has the memory effects of reading a volatile variable.
set has the memory effects of writing (assigning) a volatile variable.
By the way, that documentation is very good and everything is explained.
AtomicReference::lazySet is a newer (Java 6+) operation with semantics unachievable through volatile variables. See this post for more information.
No, there is not.
The additional power provided by AtomicReference is the compareAndSet() method and friends. If you do not need those methods, a volatile reference provides the same semantics as AtomicReference.set() and .get().
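To illustrate that extra power, here is a minimal sketch of a lock-free update loop built on compareAndSet (hypothetical Tally class, my own example, not from the question): an atomic read-modify-write that a plain volatile field cannot express.

```java
import java.util.concurrent.atomic.AtomicReference;

// Classic CAS retry loop: read, compute, and only install the new value if no
// other thread changed the reference in between; otherwise retry.
class Tally {
    private final AtomicReference<Integer> total = new AtomicReference<>(0);

    void add(int delta) {
        Integer old;
        do {
            old = total.get();                        // read current value
        } while (!total.compareAndSet(old, old + delta)); // install or retry
    }

    int get() {
        return total.get();
    }
}
```

With a volatile Integer field, `field = field + delta` would be two separate operations and could lose updates under contention; the CAS loop cannot.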
There are several differences and tradeoffs:
Using an AtomicReference get/set has the same JMM semantics as a volatile field (as the javadoc states), but the AtomicReference is a wrapper around a reference, so any access to the field involves a further pointer chase.
The memory footprint is multiplied (assuming a compressed OOPs environment, which is true for most VMs):
volatile ref = 4b
AtomicReference = 4b + 16b (12b object header + 4b ref field)
AtomicReference offers a richer API than a volatile reference. You can regain the API for the volatile reference by using an AtomicFieldUpdater, or with Java 9 a VarHandle. You can also reach straight for sun.misc.Unsafe if you like running with scissors. AtomicReference itself is implemented using Unsafe.
So, when is it good to choose one over the other:
Only need get/set? Stick with a volatile field, simplest solution and lowest overhead.
Need the extra functionality? If this is a performance-sensitive (speed/memory overhead) part of your code, make a choice between AtomicReference/AtomicFieldUpdater/Unsafe, where you tend to pay in readability and risk for your performance gain. If this is not a sensitive area, just go for AtomicReference. Library writers typically use a mix of these methods depending on targeted JDKs, expected API restrictions, memory constraints and so on.
JDK source code is one of the best ways to answer confusions like this. If you look at the code in AtomicReference, it uses a volatile variable for object storage.
private volatile V value;
So, obviously if you are going to just use get() and set() on AtomicReference it is like using a volatile variable. But as other readers commented, AtomicReference provides additional CAS semantics. So, first decide if you want CAS semantics or not, and if you do only then use AtomicReference.
AtomicReference provides additional functionality which a plain volatile variable does not provide. As you will know from reading the API Javadoc, it also provides atomic operations such as compareAndSet, which can be useful for some operations.
However, unless you need this additional functionality I suggest you use a plain volatile field.
Sometimes even if you only use gets and sets, AtomicReference might be a good choice:
Example with volatile:

private volatile Status status;
...
public void setNewStatus(Status newStatus) {
    status = newStatus;
}

public void doSomethingConditionally() {
    if (status.isOk()) {
        // status might not be OK anymore: in the meantime someone may have
        // called setNewStatus(); setNewStatus would need to be synchronized
        System.out.println("Status is ok: " + status);
    }
}
The implementation with AtomicReference would give you copy-on-write synchronization for free.

private AtomicReference<Status> statusWrapper;
...
public void doSomethingConditionally() {
    Status status = statusWrapper.get();
    if (status.isOk()) {
        // even if someone called setNewStatus() in the meantime,
        // we're still referring to the old one
        System.out.println("Status is ok: " + status);
    }
}
One might say that you could still have a proper copy in the volatile version by first taking a local copy:
Status statusCopy = status;
However, that extra line is more likely to be removed by someone accidentally in the future during "code cleaning", whereas with AtomicReference the get() call is unavoidable.
