Access to volatile fields through local variables - java

This question is a continuation and expansion of this one, which I think is an excellent question: How does assigning to a local variable help here?
This question is based on Item 71 of Effective Java, where it is suggested that performance can be improved by introducing a local variable for volatile field access:
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) { // First check (no locking)
        synchronized(this) {
            result = field;
            if (result == null) // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
So, my question is more general:
should we always access volatile fields by assigning their values to local variables (in order to achieve the best performance)?
I.e. the idiom would be:
we have some volatile field, call it volatileField;
if we want to read its value in a multi-threaded method, we should:
create a local variable of the same type: localVolatileVariable
assign it the value of the volatile field: localVolatileVariable = volatileField
read the value from this local copy, e.g.:
if (localVolatileVariable != null) { ... }
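The steps above can be sketched as a small runnable class (a minimal sketch; Config, endpoint, and endpointLength are hypothetical names, not from the question):

```java
class Config {
    private volatile String endpoint; // may be written by other threads

    void setEndpoint(String e) { endpoint = e; }

    // One volatile read into a local; all later logic uses the stable copy,
    // so the null check and the length() call see the same value.
    int endpointLength() {
        String local = endpoint;              // single volatile read
        return (local != null) ? local.length() : 0;
    }
}
```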

You must assign volatile fields to local variables if you plan on doing any sort of multi-step logic (assuming, of course, that the field is mutable).
For instance:
volatile String _field;

public int getFieldLength() {
    String tmp = _field;
    if (tmp != null) {
        return tmp.length();
    }
    return 0;
}
If you did not use a local copy of _field, the value could change between the "if" test and the length() call, potentially resulting in an NPE.
This is in addition to the obvious speed benefit of not doing more than one volatile read.

There are two sides to this coin.
On the one hand, assignment to a volatile works like a memory barrier, and it's very unlikely that the JIT will reorder the assignment with the computeFieldValue invocation.
On the other hand, in theory this code breaks the JMM, because some JVM could be allowed to reorder computeFieldValue with the assignment, so you would see a partially initialized object. This is possible as long as the variable read is not ordered with the variable write.
field = result = computeFieldValue();
does not happen-before
if (result == null) { // First check (no locking)
Since Java code is supposed to be "write once, run everywhere", DCL is bad practice and should be avoided. This code is broken and not worth considering.
If you have multiple reads of a volatile variable in a method, assigning it to a local variable first minimizes such reads, which are more expensive. But I don't think you get a performance boost; this is likely a theoretical improvement. Such optimization should be left to the JIT and is not something for the developer to worry about. I agree with this.

Instead of potentially doing TWO reads of a volatile variable, he does just one.
Reading a volatile is probably a bit slower than reading a usual variable, but even if so, we are talking about nanoseconds here.

Related

Understanding Java source code: Why is this null-handling logic so complicated? [duplicate]

While looking through the Java API source code I often see method parameters reassigned to local variables. Why is this ever done?
void foo(Object bar) {
    Object baz = bar;
    //...
}
This is in java.util.HashMap
public Collection<V> values() {
    Collection<V> vs = values;
    return (vs != null ? vs : (values = new Values()));
}
This is about thread safety and better performance. values (inherited from AbstractMap, where it was declared volatile in older JDKs) is a shared field. If you assign it to a local variable, that copy becomes a local stack variable, which is automatically thread-safe because it is confined to the current thread. Moreover, reading the local copy doesn't impose any happens-before ordering, so there is no memory-ordering penalty when using it (as opposed to a volatile field, where each write/read has release/acquire semantics; note that a volatile access does not actually acquire or release a lock).
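The same cache-a-view idiom can be sketched on a made-up class (Registry and itemView are hypothetical names, not JDK code); a single read of the shared field into a local decides whether the view must be created:

```java
import java.util.List;

class Registry {
    private volatile List<String> view;        // lazily created, cached view
    private final String[] items = {"a", "b"};

    // Mirrors HashMap.values(): read the field once into a local,
    // return it if present, otherwise create and publish the view.
    List<String> itemView() {
        List<String> vs = view;
        return (vs != null) ? vs : (view = List.of(items));
    }
}
```

A benign race is possible here: two threads may each build a view, but both views are equal, so callers never notice.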
I'd have to look at some real examples, but the only reason I can think of to do this is if the original value needs to be preserved for some computation at the end of the method. In that case, declaring one of the "variables" final would make this clear.

Why Double checked locking is 25% faster in Joshua Bloch Effective Java Example

Below is a snippet from Effective Java, 2nd Edition. The author claims that the following piece of code is 25% faster than code that does not use the result variable.
According to the book "What this variable does is to ensure that field is read only once in the common case where it’s already initialized." .
I am not able to understand why this code would be faster after the value is initialized compared to if we do not use the local variable result. In either case you will have only one volatile read after initialization whether you use the local variable result or not.
// Double-check idiom for lazy initialization of instance fields
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) { // First check (no locking)
        synchronized(this) {
            result = field;
            if (result == null) // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
Once field has been initialised, the code is either:
if (field == null) {...}
return field;
or:
result = field;
if (result == null) {...}
return result;
In the first case you read the volatile variable twice whereas in the second you only read it once. Although volatile reads are very fast, they can be a little slower than reading from a local variable (I don't know if it is 25%).
Notes:
volatile reads are as cheap as normal reads on recent processors (at least x86) and JVMs, i.e. there is no difference.
however, the compiler can better optimise code without volatile, so you could gain efficiency from better-compiled code.
25% of a few nanoseconds is still not much anyway.
it is a standard idiom that you can find in many classes of the java.util.concurrent package - see for example this method in ThreadPoolExecutor (there are many of them)
Without using a local variable, in most invocations we effectively have
if(field!=null) // true
return field;
so there are two volatile reads, which is slower than one volatile read.
Actually the JVM can merge the two volatile reads into one volatile read and still conform to the JMM. But we expect the JVM to perform a good-faith volatile read every time it's told to, not to be a smartass and optimize volatile reads away. Consider this code:
volatile boolean ready;
do {} while (!ready); // busy wait
we expect JVM to really load the variable repeatedly.
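Made runnable (a sketch; Flag and awaitReady are hypothetical names), the busy-wait relies on volatile forcing a fresh read on every iteration:

```java
class Flag {
    volatile boolean ready;          // volatile: every read observes other threads' writes

    void awaitReady() {
        while (!ready) {             // the JIT may not hoist this read out of the loop
            Thread.onSpinWait();     // spin-loop hint (Java 9+); harmless otherwise
        }
    }
}
```

Without volatile, the JIT would be free to read ready once, conclude it is false, and compile the loop into an infinite spin.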

Why does JDK sourcecode take a `final` copy of `volatile` instances

I read the JDK's source code about ConcurrentHashMap.
But the following code confused me:
public boolean isEmpty() {
    final Segment<K,V>[] segments = this.segments;
    ...
}
My question is:
"this.segments" is declared:
final Segment<K,V>[] segments;
So here, at the beginning of the method, a reference of the same type is declared, pointing to the same memory.
Why did the author write it like this? Why didn't they use this.segments directly? Is there some reason?
This is an idiom typical for lock-free code involving volatile variables. At the first line you read the volatile once and then work with it. In the meantime another thread can update the volatile, but you are only interested in the value you initially read.
Also, even when the member variable in question is not volatile but final, this idiom has to do with CPU caches, as reading from a stack location is more cache-friendly than reading from a random heap location. There is also a higher chance that the local variable will end up bound to a CPU register.
For this latter case there is actually some controversy, since the JIT compiler will usually take care of those concerns, but Doug Lea is one of the guys who sticks with it on general principle.
I guess it's for performance reasons, so that we only need to retrieve the field value once.
You can refer to the lazy initialization idiom from Effective Java by Joshua Bloch.
The code is:
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) {
        synchronized(this) {
            result = field;
            if (result == null)
                field = result = computeFieldValue();
        }
    }
    return result;
}
and he wrote:
This code may appear a bit convoluted. In particular, the need for the
local variable result may be unclear. What this variable does is to
ensure that field is read only once in the common case where it’s
already initialized. While not strictly necessary, this may improve
performance and is more elegant by the standards applied to low-level
concurrent programming. On my machine, the method above is about 25
percent faster than the obvious version without a local variable.
It may reduce bytecode size: accessing a local variable takes fewer bytecode instructions than accessing an instance variable.
Runtime optimization overhead may be reduced too.
But none of these are significant; it's more about code style. If you feel comfortable with instance variables, by all means use them. Doug Lea probably feels more comfortable dealing with local variables.

Java Double Locking - Can someone explain more simply why intuition wouldn't work? [duplicate]

This question already has answers here:
Why is volatile used in double checked locking
(8 answers)
Closed 4 years ago.
I found the following code here: http://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java
I am trying to understand why there are certain cases where this would not work. I read the explanation of the "subtle" problems, and that using volatile will fix the issue, but I'm a bit confused.
// Broken multithreaded version
// "Double-Checked Locking" idiom
class Foo {
    private Helper helper = null;

    public Helper getHelper() {
        if (helper == null) {
            synchronized(this) {
                if (helper == null) {
                    helper = new Helper();
                }
            }
        }
        return helper;
    }
    // other functions and members...
}
Basically, am I right to assume this would fail because the helper == null check in the synchronized block has a chance to fail, since helper could be "partially" constructed at that point? Does Java not return null if an object is partially constructed? Is that the issue?
Anyway, I know it's not great practice to do double-checked locking, but I was just curious why, in theory, the above code fails, and why volatile (plus the addition of assigning to a local variable) fixes it. Here's some code I got from somewhere.
// Double-check idiom for lazy initialization of instance fields
private volatile FieldType field;

FieldType getField() {
    FieldType result = field;
    if (result == null) { // First check (no locking)
        synchronized(this) {
            result = field;
            if (result == null) // Second check (with locking)
                field = result = computeFieldValue();
        }
    }
    return result;
}
I know there are a thousand posts about this already, but the explanations mention changes in the memory model after 1.5, and I don't quite get what that has to do with it :-(.
Thanks in advance!
am I right to assume this would fail due to the fact that the helper == null check in the synchronized block has a chance to fail because it could be "partially" constructed at that point?
Yes, you are right. This is explained under out-of-order writes. helper = new Helper() consists of 3 steps: memory allocation, the call to the constructor, and the assignment. The JIT compiler is free to reorder instructions and perform the assignment after memory allocation (which yields the reference to the new object) but before the constructor invocation. Using volatile prevents this reordering.
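The three steps, written out as pseudocode:

```
tmp = allocate(Helper)     // step 1: reserve raw memory, obtain the reference
Helper.<init>(tmp)         // step 2: run the constructor on that memory
helper = tmp               // step 3: publish the reference to the field
// Without volatile, the JIT may emit step 3 before step 2, so another
// thread can observe helper != null while the object is still unconstructed.
```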
You need to declare the field volatile because that forces writes to the field to be "flushed" to main memory. Otherwise the JVM specification allows each thread to keep its own local view of the field and never communicate its writes to other threads. This is generally nice because it allows aggressive optimizations in the JVM.
Hope that helps! Otherwise I can recommend getting a really strong cup of coffee, a very quiet room, and then reading the Java Memory Model, which explains how it works and the interaction between threads. I think you will be surprised at how few situations require a thread to communicate its writes to (shared) memory to other threads, and at the reordering of reads and writes that the JVM can perform!
An exciting read!

Lazy initialization without synchronization or volatile keyword

The other day Howard Lewis Ship posted a blog entry called "Things I Learned at Hacker Bed and Breakfast", one of the bullet points is:
A Java instance field that is assigned exactly once via lazy
initialization does not have to be synchronized or volatile (as long
as you can accept race conditions across threads to assign to the
field); this is from Rich Hickey
On the face of it this seems at odds with the accepted wisdom about visibility of changes to memory across threads, and if this is covered in the Java Concurrency in Practice book or in the Java language spec then I have missed it. But this was something HLS got from Rich Hickey at an event where Brian Goetz was present, so it would seem there must be something to it. Could someone please explain the logic behind this statement?
This statement sounds a little cryptic. However, I guess HLS refers to the case where you lazily initialize an instance field and don't care if several threads perform the initialization more than once.
As an example, I can point to the hashCode() method of the String class:
private int hashCode;

public int hashCode() {
    int hash = hashCode;
    if (hash == 0) {
        if (count == 0) {
            return 0;
        }
        final int end = count + offset;
        final char[] chars = value;
        for (int i = offset; i < end; ++i) {
            hash = 31*hash + chars[i];
        }
        hashCode = hash;
    }
    return hash;
}
As you can see, access to the hashCode field (which holds the cached value of the computed String hash) is not synchronized, and the field isn't declared volatile. Any thread that calls the hashCode() method will still receive the same value, though the hashCode field may be written more than once by different threads.
This technique has limited usability. IMHO it's usable mostly for cases like this example: a cached primitive or immutable object that is computed from other final/immutable fields, where computing it in the constructor would be overkill.
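A minimal runnable sketch of the same benign-race caching, on a hypothetical immutable Point class (not from the JDK):

```java
final class Point {
    private final int x, y;   // final fields: safely published with the object
    private int hash;         // cached hash; 0 doubles as "not yet computed"

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public int hashCode() {
        int h = hash;         // read the cached field once
        if (h == 0) {
            h = 31 * x + y;   // deterministic: every thread computes the same value
            hash = h;         // racy but benign write of a single 32-bit value
        }
        return h;
    }
}
```

At worst, several threads each compute and store the hash; since the computation depends only on final fields, they all store the same value.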
Hrm. As I read it, this is technically incorrect but okay in practice, with some caveats. Only final fields can safely be initialized once and accessed from multiple threads without synchronization.
Lazily initialized fields can suffer from synchronization issues in a number of ways. For example, you can have constructor race conditions where the reference to the object is published before the object itself has been fully initialized.
I think it highly depends on whether you have a primitive field or an object. Primitive fields that can be initialized multiple times, where you don't mind multiple threads doing the initialization, would work fine. However, HashMap-style initialization in this manner may be problematic. Even long values on some architectures may be stored as two separate words in multiple operations, so a thread could observe half of the value, although I suspect a long would never cross a memory page and therefore this would never happen.
I think it also depends highly on whether the application has any memory barriers: synchronized blocks or accesses to volatile fields. The devil is certainly in the details here, and code that does lazy initialization may work fine on one architecture with one set of code and not in a different thread model or in an application that synchronizes rarely.
Here's a good piece on final fields as a comparison:
http://www.javamex.com/tutorials/synchronization_final.shtml
As of Java 5, one particular use of the final keyword is a very important and often overlooked weapon in your concurrency armoury. Essentially, final can be used to make sure that when you construct an object, another thread accessing that object doesn't see that object in a partially-constructed state, as could otherwise happen. This is because when used as an attribute on the variables of an object, final has the following important characteristic as part of its definition:
Now, even if the field is marked final, if it refers to an object, you can still modify the fields within that object. This is a different issue, and you must still have synchronization for it.
This works fine under some conditions:
it's okay to try to set the field more than once;
it's okay if individual threads see different values.
Often, when you create an object which is not changed afterwards, e.g. loading a Properties from disk, having more than one copy for a short amount of time is not an issue.
private static Properties prop = null;

public static Properties getProperties() {
    if (prop == null) {
        Properties p = new Properties();
        try {
            p.load(new FileReader("my.properties"));
        } catch (IOException e) {
            throw new AssertionError(e);
        }
        prop = p; // publish only after the load has completed
    }
    return prop;
}
In the short term this is less efficient than using locking, but in the long term it could be more efficient. (Properties has a lock of its own, but you get the idea. ;)
IMHO, it's not a solution that works in all cases.
Perhaps the point is that you can use more relaxed memory consistency techniques in some cases.
I think the statement is untrue. Another thread can see a partially initialized object: the reference can become visible to another thread even though the constructor hasn't finished running. This is covered in Java Concurrency in Practice, section 3.5.1:
public class Holder {
    private int n;

    public Holder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false.");
    }
}
This class isn't thread-safe.
If the visible object is immutable, then you are OK, because the semantics of final fields mean you won't see them until the constructor has finished running (section 3.5.2).
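A sketch of that final-field remedy (SafeHolder is a hypothetical name; JCiP's fix is simply declaring n final, which freezes its value before the object can be published):

```java
class SafeHolder {
    private final int n;      // final: frozen before the object is published

    SafeHolder(int n) { this.n = n; }

    public void assertSanity() {
        if (n != n)           // can no longer fire: n has one well-defined value
            throw new AssertionError("This statement is false.");
    }
}
```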
