How does Java manage the visibility of a volatile field?

This question is looking for specific details on how exactly Java makes a volatile field visible.
The volatile keyword in Java makes a variable's value visible to its readers right after a write operation on it completes. This is one form of happens-before relationship: the results of a write are exposed to whoever accesses that variable's memory location afterwards. It also makes single read/write operations on that variable atomic, including for long and double; reads and writes of every other variable type are already atomic.
I'm looking to find out what Java does to make a variable's value visible after a write operation.
E.g., the following code is from one of the answers in this discussion:
public class Foo extends Thread {
    private volatile boolean close = false;

    public void run() {
        while (!close) {
            // do work
        }
    }

    public void close() {
        close = true;
        // interrupt here if needed
    }
}
Reads and writes of boolean variables are atomic; if the method close() above is invoked, setting close to true is an atomic operation even if the field isn't declared volatile.
What more volatile is doing in this code is making sure that a change to this value is seen by the reading thread the moment it happens.
How exactly does volatile achieve this?
By giving priority to threads that operate on a volatile variable? If so, how: in thread scheduling, or by making the reader threads check a flag to see whether a writer thread is pending? I'm aware that "a write to a volatile field happens-before every subsequent read of that same field." Does it choose, among the threads, the one(s) that write to a volatile variable before giving CPU time to threads that only read it?
If this is managed at the thread-scheduling level (which I doubt), then running a thread that writes a volatile field has a bigger effect than it seems.
How exactly is Java managing visibility of volatile variables?
TIA.

This is a comment from the OpenJDK source code about volatile:
// ----------------------------------------------------------------------------
// Volatile variables demand their effects be made known to all CPU's
// in order. Store buffers on most chips allow reads & writes to
// reorder; the JMM's ReadAfterWrite.java test fails in -Xint mode
// without some kind of memory barrier (i.e., it's not sufficient that
// the interpreter does not reorder volatile references, the hardware
// also must not reorder them).
//
// According to the new Java Memory Model (JMM):
// (1) All volatiles are serialized wrt to each other. ALSO reads &
// writes act as aquire & release, so:
// (2) A read cannot let unrelated NON-volatile memory refs that
// happen after the read float up to before the read. It's OK for
// non-volatile memory refs that happen before the volatile read to
// float down below it.
// (3) Similar a volatile write cannot let unrelated NON-volatile
// memory refs that happen BEFORE the write float down to after the
// write. It's OK for non-volatile memory refs that happen after the
// volatile write to float up before it.
//
// We only put in barriers around volatile refs (they are expensive),
// not _between_ memory refs (that would require us to track the
// flavor of the previous memory refs). Requirements (2) and (3)
// require some barriers before volatile stores and after volatile
// loads.
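To see what rules (2) and (3) permit and forbid, here is a minimal sketch (my own illustration, not from the OpenJDK sources) of a plain field published through a volatile flag:

class AcquireRelease {
    int data;               // plain field
    volatile boolean flag;  // volatile field

    void writer() {
        data = 42;   // rule (3): this plain write may not float below...
        flag = true; // ...this volatile write (release semantics)
    }

    void reader() {
        if (flag) {       // rule (2): the plain read below may not float
            int r = data; // above this volatile read (acquire), so r == 42
        }
    }
}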
I hope it's helpful.

According to this:
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile
Java's new memory model does this by
1) prohibiting the compiler and runtime from allocating volatile variables in registers.
2) not allowing the compiler/optimizer to reorder other field accesses around volatile accesses. Effectively, this is like acquiring and releasing a lock.
3) forcing the compiler/runtime to flush a volatile variable from cache to main memory as soon as it is written.
4) marking the cache as invalidated before a volatile field is read.
From the article:
"Volatile fields are special fields which are used for communicating state between threads. Each read of a volatile will see the last write to that volatile by any thread; in effect, they are designated by the programmer as fields for which it is never acceptable to see a "stale" value as a result of caching or reordering. The compiler and runtime are prohibited from allocating them in registers. They must also ensure that after they are written, they are flushed out of the cache to main memory, so they can immediately become visible to other threads. Similarly, before a volatile field is read, the cache must be invalidated so that the value in main memory, not the local processor cache, is the one seen. There are also additional restrictions on reordering accesses to volatile variables. "
...
"Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. In effect, because the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not..."

Related

volatile barrier in Hibernate source code "syncs state with other threads". How?

I was digging inside the source code of hibernate-jpa today and stumbled upon the following code snippet (that you can also find here):
private static class PersistenceProviderResolverPerClassLoader implements PersistenceProviderResolver {
    //FIXME use a ConcurrentHashMap with weak entry
    private final WeakHashMap<ClassLoader, PersistenceProviderResolver> resolvers =
            new WeakHashMap<ClassLoader, PersistenceProviderResolver>();
    private volatile short barrier = 1;

    /**
     * {@inheritDoc}
     */
    public List<PersistenceProvider> getPersistenceProviders() {
        ClassLoader cl = getContextualClassLoader();
        if ( barrier == 1 ) {} //read barrier syncs state with other threads
        PersistenceProviderResolver currentResolver = resolvers.get( cl );
        if ( currentResolver == null ) {
            currentResolver = new CachingPersistenceProviderResolver( cl );
            resolvers.put( cl, currentResolver );
            barrier = 1;
        }
        return currentResolver.getPersistenceProviders();
    }
That weird statement if ( barrier == 1 ) {} //read barrier syncs state with other threads disturbed me. I took the time to dig into the volatile keyword's specification.
To put it simply, in my understanding, it ensures that any READ or WRITE operation on the corresponding variable is always performed directly in memory, at the place the value is usually stored. It specifically prevents accesses through caches or registers that hold a copy of the value and are not necessarily aware that the value has changed or is being modified by a concurrent thread on another core.
As a consequence it causes a drop in performance, because every access has to go all the way to memory instead of using the usual (pipelined?) shortcuts. But it also ensures that whenever a thread reads the variable, it is always up to date.
I provided those details to let you know what my understanding of the keyword is. But now when I re-read the code I am telling myself: "OK, so we are slowing down execution by ensuring that a value which is always 1 is always 1 (and setting it to 1). How does that help?"
Can anybody explain this?
Your understanding of volatile is wrong.
it ensures that any READ or WRITE operation on the corresponding
variable is always performed directly in memory, at the place
the value is usually stored. It specifically prevents accesses through
caches or registers that hold a copy of the value and are not
necessarily aware that the value has changed or is being modified by a
concurrent thread on another core.
You are talking about the implementation, while the implementation may differ from JVM to JVM.
volatile is much like a specification or a rule; it guarantees that
Write to a volatile variable establishes a happens-before relationship
with subsequent reads of that same variable. This means that changes
to a volatile variable are always visible to other threads. What's
more, it also means that when a thread reads a volatile variable, it
sees not just the latest change to the volatile, but also the side
effects of the code that led up to the change.
and
Using simple atomic variable access is more efficient than accessing
these variables through synchronized code, but requires more care by
the programmer to avoid memory consistency errors. Whether the extra
effort is worthwhile depends on the size and complexity of the
application.
In this case, volatile is not used to guarantee that barrier == 1:
if ( barrier == 1 ) {} //read
PersistenceProviderResolver currentResolver = resolvers.get( cl );
if ( currentResolver == null ) {
    currentResolver = new CachingPersistenceProviderResolver( cl );
    resolvers.put( cl, currentResolver );
    barrier = 1; //write
}
it is used to guarantee that the side effects between the read and the write are visible to other threads.
Without it, if you put something into resolvers in Thread1, Thread2 might not notice it.
With it, if Thread2 reads barrier after Thread1 writes it, Thread2 is guaranteed to see that put action.
And there are many other synchronization mechanisms, such as:
synchronized keyword
ReentrantLock
AtomicInteger
....
Usually, they can also establish this happens-before relationship between different threads.
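For instance, AtomicInteger gives the same publication guarantee as the barrier field, because its set() has volatile-write semantics and its get() has volatile-read semantics. A minimal sketch (hypothetical example, not from the Hibernate code):

import java.util.concurrent.atomic.AtomicInteger;

class AtomicPublish {
    static int data;                                       // plain field
    static final AtomicInteger flag = new AtomicInteger(0);

    static void writer() {
        data = 42;
        flag.set(1); // acts like a volatile write (release)
    }

    static void reader() {
        if (flag.get() == 1) { // acts like a volatile read (acquire)
            int r = data;      // guaranteed to be 42
        }
    }
}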
This is done to make updates to the resolvers map visible to other threads by establishing a happens-before relationship (https://www.logicbig.com/tutorials/core-java-tutorial/java-multi-threading/happens-before.html).
In a single thread the following instructions have a happens-before relation:
resolvers.put( cl, currentResolver );
barrier = 1;
But to make the change in resolvers visible to other threads, we need to read the value of the volatile variable barrier, because a write and a subsequent read of the same volatile variable establish a happens-before relation (which is also transitive). So basically this is the overall result:
Update resolvers
Write to the volatile barrier
Read from the volatile barrier, making the update from step 1 visible to any thread that reads the value of barrier
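To make those three steps concrete, here is a minimal sketch (class and method names are mine, not Hibernate's) of the same guard pattern. Note that, like the original, it publishes completed writes but does not make concurrent put() calls on the plain map safe:

import java.util.HashMap;
import java.util.Map;

class BarrierPublication {
    private final Map<String, String> map = new HashMap<String, String>();
    private volatile int barrier = 1;

    void publish(String key, String value) {
        map.put(key, value); // step 1: update the plain data structure
        barrier = 1;         // step 2: volatile write publishes the update
    }

    String read(String key) {
        if (barrier == 1) {} // step 3: volatile read syncs with the writer
        return map.get(key); // sees puts that preceded the volatile write
    }
}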
Volatile variables are a lightweight form of synchronization in Java.
Declaring a field volatile gives the following effects:
The compiler will not reorder the operations
The variable will not be cached in registers
Operations on 64-bit data types (long and double) will be executed atomically
It extends visibility guarantees to other variables written before the volatile write
Quote from Brian Goetz's Java Concurrency in Practice:
The visibility effects of volatile variables extend beyond the value
of the volatile variable itself. When thread A writes to a volatile
variable and subsequently thread B reads that same variable, the
values of all variables that were visible to A prior to writing to the
volatile variable become visible to B after reading the volatile
variable.
Okay, so what is the point of keeping the value 1, and why not declare resolvers as a volatile WeakHashMap?
This safe publication guarantee applies only to primitive fields and object references. For the purposes of this visibility guarantee, the actual member is the object reference; the objects referred to by volatile object references are beyond the scope of the safe publication guarantee. Consequently, declaring an object reference to be volatile is insufficient to guarantee that changes to the members of the referent are published to other threads. A thread may fail to observe a recent write from another thread to a member field of such an object referent.
Furthermore, when the referent is mutable and lacks thread safety, other threads might see a partially constructed object or an object in an inconsistent state.
The instance of the Map object is mutable because of its put() method.
Interleaved calls to get() and put() may result in the retrieval of internally inconsistent values from the Map object because put() modifies its state. Declaring the object reference volatile is insufficient to eliminate this data race.
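A short sketch of that distinction (hypothetical code, not from the Hibernate sources):

import java.util.HashMap;
import java.util.Map;

class VolatileReferenceGotcha {
    // Only the reference is volatile; the HashMap it points to is not thread safe.
    static volatile Map<String, String> map = new HashMap<String, String>();

    static void threadA() {
        map.put("k", "v"); // plain write to the referent: 'volatile' on the
                           // reference neither publishes this write nor makes
                           // it safe against concurrent mutation
    }

    static void threadB() {
        map.get("k"); // may see a stale or internally inconsistent entry
    }
}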
Since a volatile variable establishes a happens-before relationship, when one thread has an update, it can simply inform the others accessing barrier.
From a memory visibility perspective, writing a volatile
variable is like exiting a synchronized block and reading a volatile
variable is like entering a synchronized block.

What is the difference between getXXXVolatile and getXXX in Java Unsafe?

I am trying to understand the two methods here in java unsafe:
public native short getShortVolatile(Object var1, long var2);
vs
public native short getShort(Object var1, long var2);
What is the real difference here? What does volatile really do here? I found the API doc here: http://www.docjar.com/docs/api/sun/misc/Unsafe.html#getShortVolatile(Object,%20long)
But it does not really explain anything about the difference between the two functions.
My understanding is that, for volatile, it only matters when we do a write. To me, it would make sense to call putShortVolatile and then, for reading, simply call getShort(), since the volatile write already guarantees that the new value has been flushed to main memory.
Please kindly correct me if anything is wrong. Thanks!
Here is an article: http://mydailyjava.blogspot.it/2013/12/sunmiscunsafe.html
Unsafe supports all primitive values and can even write values without hitting thread-local caches by using the volatile forms of the methods
getXXX(Object target, long offset): Will read a value of type XXX from target's address at the specified offset.
getXXXVolatile(Object target, long offset): Will read a value of type XXX from target's address at the specified offset and not hit any thread local caches.
putXXX(Object target, long offset, XXX value): Will place value at target's address at the specified offset.
putXXXVolatile(Object target, long offset, XXX value): Will place value at target's address at the specified offset and not hit any thread local caches.
UPDATE:
You can find more information about memory management and volatile fields in this article: http://cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html (it also contains some examples of reordering).
In multiprocessor systems, processors generally have one or more layers of memory cache, which improves performance both by speeding access to data (because the data is closer to the processor) and reducing traffic on the shared memory bus (because many memory operations can be satisfied by local caches.) Memory caches can improve performance tremendously, but they present a host of new challenges. What, for example, happens when two processors examine the same memory location at the same time? Under what conditions will they see the same value?
Some processors exhibit a strong memory model, where all processors see exactly the same value for any given memory location at all times. Other processors exhibit a weaker memory model, where special instructions, called memory barriers, are required to flush or invalidate the local processor cache in order to see writes made by other processors or make writes by this processor visible to others.
The issue of when a write becomes visible to another thread is compounded by the compiler's reordering of code. If a compiler defers an operation, another thread will not see it until it is performed; this mirrors the effect of caching. Moreover, writes to memory can be moved earlier in a program; in this case, other threads might see a write before it actually "occurs" in the program.
Java includes several language constructs, including volatile, final, and synchronized, which are intended to help the programmer describe a program's concurrency requirements to the compiler. The Java Memory Model defines the behavior of volatile and synchronized, and, more importantly, ensures that a correctly synchronized Java program runs correctly on all processor architectures.
As you can see in the section What does volatile do?
Volatile fields are special fields which are used for communicating state between threads. Each read of a volatile will see the last write to that volatile by any thread; in effect, they are designated by the programmer as fields for which it is never acceptable to see a "stale" value as a result of caching or reordering. The compiler and runtime are prohibited from allocating them in registers. They must also ensure that after they are written, they are flushed out of the cache to main memory, so they can immediately become visible to other threads. Similarly, before a volatile field is read, the cache must be invalidated so that the value in main memory, not the local processor cache, is the one seen.
There are also additional restrictions on reordering accesses to volatile variables. Accesses to volatile variables cannot be reordered with each other, and it is now no longer so easy to reorder normal field accesses around them. Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. In effect, because the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not, anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.
So the difference is that setXXX() and getXXX() can be reordered, or can use cached values that have not yet been synchronized between threads, while setXXXVolatile() and getXXXVolatile() won't be reordered and will always use the latest value.
The thread-local cache is temporary storage used by the JVM to improve performance: data is written to and read from the cache before being flushed to memory.
In a single-threaded context you can use both the non-volatile and the volatile versions of those methods; there will be no difference. When you write something, it doesn't matter whether it is written immediately to memory or only to the thread-local cache: when you try to read it, you'll be on the same thread, so you'll get the latest value for sure (the thread-local cache contains the latest value).
In a multi-threaded context, instead, the cache can give you some trouble.
If you create an unsafe object and share it between two or more threads, each of those threads can hold a copy of it in its local cache (the threads could be running on different processors, each with its own cache).
If you use the setXXX() method on a thread, the new value could be written to the thread-local cache but not yet to memory. So it can happen that just one of the threads sees the new value, while memory and the other threads' local caches still contain the old value. This can lead to unexpected results. The setXXXVolatile() method writes the new value directly to memory, so the other threads will be able to access the new value (if they use the getXXXVolatile() methods).
If you use the getXXX() method, you'll get the locally cached value. So if another thread has changed the value in memory, the current thread's local cache could still contain the old value, and you'll get unexpected results. If you use the getXXXVolatile() method, you'll read directly from memory and get the latest value for sure.
Using the example from the previous link:
class DirectIntArray {
    // 'unsafe' is a sun.misc.Unsafe instance, obtained reflectively in the original article
    private final static long INT_SIZE_IN_BYTES = 4;
    private final long startIndex;

    public DirectIntArray(long size) {
        startIndex = unsafe.allocateMemory(size * INT_SIZE_IN_BYTES);
        unsafe.setMemory(startIndex, size * INT_SIZE_IN_BYTES, (byte) 0);
    }

    public void setValue(long index, int value) {
        unsafe.putInt(index(index), value);
    }

    public int getValue(long index) {
        return unsafe.getInt(index(index));
    }

    private long index(long offset) {
        return startIndex + offset * INT_SIZE_IN_BYTES;
    }

    public void destroy() {
        unsafe.freeMemory(startIndex);
    }
}
This class uses putInt and getInt to write values into an array allocated in native memory (outside the heap space).
As said before, those methods write the data to the thread-local cache, not immediately to memory. So when you use the setValue() method, the local cache is updated immediately, while the allocated memory is updated after a while (it depends on the JVM implementation).
In a single-threaded context this class will work without problems.
In a multi-threaded context it could fail.
DirectIntArray directIntArray = new DirectIntArray(maximum);
Runnable t1 = new MyThread(directIntArray);
Runnable t2 = new MyThread(directIntArray);
new Thread(t1).start();
new Thread(t2).start();
Where MyThread is:
public class MyThread implements Runnable {
    DirectIntArray directIntArray;

    public MyThread(DirectIntArray parameter) {
        directIntArray = parameter;
    }

    public void run() {
        call();
    }

    public void call() {
        synchronized (this) {
            // the other thread could have changed this value already; this assert
            // fails if the local thread cache was updated, passes otherwise
            assertEquals(0, directIntArray.getValue(0L));
            directIntArray.setValue(0L, 10);
            assertEquals(10, directIntArray.getValue(0L));
        }
    }
}
With putIntVolatile() and getIntVolatile(), one of the two threads will fail for sure (the second thread will read 10 instead of 0).
With putInt() and getInt(), both threads could finish successfully (because the local cache of each thread could still contain 0, if the writer's cache hasn't been flushed or the reader's cache hasn't been refreshed).
I think that getShortVolatile is reading a plain short from an Object, but treating it as volatile; it's like reading a plain variable and inserting the needed memory barriers (if any) yourself.
Much simplified (and to some degree wrong, but just to get the idea), with release/acquire semantics:
Unsafe.weakCompareAndSetIntAcquire // acquire
// update some int here
Unsafe.weakCompareAndSetIntRelease // release
As to why this is needed (the linked question is about getIntVolatile, but the same reasoning applies): probably to enforce non-reordering. Again, this is a bit beyond me, and Gil Tene explaining it is FAR more suited.
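For what it's worth, since JDK 9 the supported way to pick these access modes outside the JDK is java.lang.invoke.VarHandle. A minimal sketch (my own example) mapping the Unsafe methods discussed above onto VarHandle access modes:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Cell {
    int value;

    static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(Cell.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int getPlain()         { return (int) VALUE.get(this); }         // like getInt
    int getVolatileValue() { return (int) VALUE.getVolatile(this); } // like getIntVolatile
    void setRelease(int v) { VALUE.setRelease(this, v); }            // release-only store
    int getAcquire()       { return (int) VALUE.getAcquire(this); }  // acquire-only load
}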

Java - how does Lock guarantee happens-before relationship?

Let's consider the following standard synchronization in Java:
public class Job {
    private Lock lock = new ReentrantLock();

    public void work() {
        lock.lock();
        try {
            doLotsOfWork();
        } finally {
            lock.unlock();
        }
    }
}
I understand, based on the Javadoc, that this is equivalent to a synchronized block. I am struggling to see how this is actually enforced at the lower level.
Lock has a state field which is volatile; upon a call to lock() it does a volatile read, and upon release it performs a volatile write. How can a write to the state of one object ensure that none of the instructions of doLotsOfWork, which might touch lots of different objects, is executed out of order?
Or imagine that doLotsOfWork is actually substituted with 1000+ lines of code. Clearly the compiler cannot know in advance that there is a volatile somewhere inside the lock, so it needs to stop re-ordering the instructions. So, how is happens-before guaranteed for lock/unlock, even though it is built around the volatile state of a separate object?
Well, if I understood correctly, then your answer is here. volatile writes and reads introduce memory barriers: LoadLoad, LoadStore, etc., which forbid re-orderings. At the CPU level this is translated into actual memory barriers like mfence or lfence (the CPU also forces the non-reordering via some other mechanisms, so you might see something else in the machine code as well).
Here is a small example:
i = 42;
j = 53;
[StoreStore]
[LoadStore]
x = 1; // volatile store
The i and j assignments can be re-ordered between themselves, but they cannot be re-ordered with x = 1; in other words, i and j cannot go below x.
The same applies to volatile reads.
For your example, every operation inside doLotsOfWork can be re-ordered as the compiler pleases, but it cannot be re-ordered with the lock operations.
Also, when you say that the compiler cannot know that there is a volatile read/write, you are slightly wrong. It has to know, otherwise there would be no way to prevent those re-orderings.
Also, one last note: since JDK 8 you can enforce non-reordering via Unsafe, which provides fence methods besides volatile.
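Those JDK 8 additions are the fence methods on sun.misc.Unsafe: loadFence(), storeFence() and fullFence(). A hedged sketch of how a store fence slots into the publish pattern (illustration only, not a recommended replacement for volatile; obtaining Unsafe requires reflection):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

class FenceSketch {
    static final Unsafe UNSAFE;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    int data;
    volatile boolean ready; // kept volatile so a reader's loop isn't hoisted

    void writer() {
        data = 42;
        UNSAFE.storeFence(); // no store above may be reordered below this fence
        ready = true;
    }
}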
From Oracle's documentation:
A write to a volatile field happens-before every subsequent read of
that same field. Writes and reads of volatile fields have similar
memory consistency effects as entering and exiting monitors, but do
not entail mutual exclusion locking.
Java Concurrency in Practice states it even more clearly:
The visibility effects of volatile variables extend beyond the value
of the volatile variable itself. When a thread A writes to a volatile
variable and subsequently thread B reads that same variable, the
values of all variables that were visible to A prior to writing to the
volatile variable become visible to B after reading the volatile
variable.
Applied to ReentrantLock, it means that everything executed before lock.unlock() (doLotsOfWork() in your case) is guaranteed to happen before a subsequent call to lock.lock(). Instructions inside doLotsOfWork() can still be reordered among themselves. The only thing guaranteed here is that any thread which subsequently acquires the lock by calling lock.lock() will see all changes done in doLotsOfWork() before the call to lock.unlock().
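In code, the guarantee looks like this (a minimal sketch of my own, not from the question):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Lock lock = new ReentrantLock();
    private int count; // plain field, published through the lock

    void increment() {
        lock.lock();
        try {
            count++; // every write in here...
        } finally {
            lock.unlock(); // ...happens-before a later lock() by any thread
        }
    }

    int get() {
        lock.lock();
        try {
            return count; // sees all increments from earlier unlock()s
        } finally {
            lock.unlock();
        }
    }
}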

Why and how does volatile imply atomic reads/writes?

First off, I'm aware that volatile does not make multiple operations (as i++) atomic. This question is about a single read or write operation.
My initial understanding was that volatile only enforces a memory barrier (i.e. other threads will be able to see updated values).
Now I've noticed that JLS section 17.7 says that volatile additionally makes a single read or write atomic. For instance, given two threads, both writing a different value to a volatile long x, then x will finally represent exactly one of the values.
I'm curious how this is possible. On a 32-bit system, if two threads write to a 64-bit location in parallel and without "proper" synchronization (i.e., some kind of lock), it should be possible for the result to be a mix-up. For clarity, let's use an example in which thread 1 writes 0L while thread 2 writes -1L to the same 64-bit memory location.
T1 writes lower 32 bits
T2 writes lower 32 bits
T2 writes upper 32 bits
T1 writes upper 32 bits
The result could then be 0x00000000FFFFFFFF, which is undesirable. How does volatile prevent this scenario?
I've also read elsewhere that this does, typically, not degrade performance. How is it possible to synchronize writes with only a minor speed impact?
Your statement that volatile only enforces a memory barrier (in the sense of flushing the processor cache) is false. It also implies a happens-before relationship between read-write combinations of volatile values. For example:
class Foo {
    volatile boolean x;
    boolean y;

    void qux() {
        y = true;
        x = true; // volatile write, publishes the preceding write to y
    }

    void baz() {
        System.out.print(x); // volatile read
        System.out.print(" ");
        System.out.print(y);
    }
}
When you run both methods from two threads, the above code will print true true, false true or false false, but never true false: once the volatile read observes x == true, the earlier write to y is guaranteed to be visible as well. Without the volatile keyword you are not guaranteed this, because the JIT compiler might reorder the statements.
In the same way as the JIT compiler can assure this condition, it can guard 64-bit value reads and writes in the generated assembly. volatile values are treated specially by the JIT compiler to assure their atomicity. Some processor instruction sets support this directly via specific 64-bit instructions; otherwise the JIT compiler emulates it.
The JVM is more complex than you might expect, and it is often explained without full scope. Consider reading this excellent article, which covers all the details.
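A hedged demo of the tearing scenario from the question (my own code): with volatile in place it should spin forever; remove volatile and run it on a 32-bit JVM and it may occasionally print a torn value (most 64-bit JVMs happen to use atomic 64-bit stores even for plain longs, so you may never see a tear there):

public class TearingDemo {
    static volatile long value; // remove 'volatile' to allow tearing

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> { while (true) value = 0L; });
        Thread t2 = new Thread(() -> { while (true) value = -1L; });
        t1.setDaemon(true);
        t2.setDaemon(true);
        t1.start();
        t2.start();
        while (true) {
            long v = value;
            if (v != 0L && v != -1L) { // half of one write, half of the other
                System.out.println("Torn read: 0x" + Long.toHexString(v));
                return;
            }
        }
    }
}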
volatile assures that what a thread reads is the latest value at that point, but it doesn't synchronize two writes.
If a thread writes a normal variable, it may keep the value within the thread until certain events happen. If a thread writes a volatile variable, it changes the memory of the variable immediately.
On a 32 bit system, if two threads write to a 64 bit location in parallel and without "proper" synchronization (i.e. some kind of lock), it should be possible for the result to be a mixup
This is indeed what can happen if a variable isn't marked volatile. Now, what does the system do if the field is marked volatile? Here is a resource that explains this: http://gee.cs.oswego.edu/dl/jmm/cookbook.html
Nearly all processors support at least a coarse-grained barrier instruction, often just called a Fence, that guarantees that all loads and stores initiated before the fence will be strictly ordered before any load or store initiated after the fence [...] if available, you can implement volatile store as an atomic instruction (for example XCHG on x86) and omit the barrier. This may be more efficient if atomic instructions are cheaper than StoreLoad barriers
Essentially the processors provide facilities to implement the guarantee, and what facility is available depends on the processor.

Volatile and more threads

I'm trying to understand the volatile keyword and its proper using. Looking at the Brian Goetz's article Java theory and practice: Fixing the Java Memory Model, I'm stuck on this example:
Map configOptions;
char[] configText;
volatile boolean initialized = false;
// In Thread A
configOptions = new HashMap();
configText = readConfigFile(fileName);
processConfigOptions(configText, configOptions);
initialized = true;
// In Thread B
while (!initialized)
    sleep();
// use configOptions
The volatile variable above is used as a "guard" to indicate that a set of shared variables had been initialized.
I understand that since Java 1.5, volatile is strong enough to ensure that when thread B reads the volatile variable, it sees all variables that were visible to thread A at the time thread A wrote to the volatile variable.
But what if there were a thread C doing something like this:
// In Thread C
configOptions = new HashMap();
// put something to configOptions
My question: is volatile strong enough to ensure that when thread B reads the volatile variable, it sees all variables from all threads? Maybe some kind of flushing of all caches? If not, then such code with 3 threads is broken, right?
per the lang spec (http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.4.4):
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
and
A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field.
so the volatile variable itself is safe from stale-cache problems. Your question is: "what about all other variables?" Well, no: the volatile keyword only affects caching of the variable it is on; all other variables on those threads are unsynchronized.
In this answer I will try to explain what volatile variables in Java is.
So, where to start?
Read and write operations on volatile variables are guaranteed to be atomic, even for 64-bit variables. Note: i++; is not atomic, because technically it is three separate operations (read, modify, write).
Writing some value to a volatile variable happens-before that value can be read from it. You can find lots of questions on SO about what happens-before is. Important: in the JVM it is implemented with memory fences, a store fence on writing and a load fence on reading. From the practical side that means when you read some value from it, you're guaranteed to see all values written to non-volatile variables before the volatile write;
Values written to volatile variables become available to all CPUs and all threads at once, without being held back in CPU caches.
Now, regarding your question.
Is the volatile strong enough to ensure that when thread B reads the volatile variable, it sees all variables from all threads?
No. It is strong enough to ensure that when thread B reads some value from the volatile variable, it sees (will read) the values of variables written before the volatile write.
Maybe some kind of flushing all caches?
Actually yes: on the x86 architecture a volatile write empties the store order buffer and a volatile read empties the load order buffer. If you want more details on that, you may want to read the answer to this question: Java 8 Unsafe: xxxFence() instructions
If not, then such a code with 3 threads is broken, right?
This code works as intended (I guess), because thread B does a volatile read prior to reading configOptions, which guarantees its visibility.
Is the volatile strong enough to ensure that when thread B reads the
volatile variable, it sees all variables from all threads. Maybe some
kind of flushing all caches?
All variables from the volatile-writing-thread that are written prior to the volatile store will be visible.
So there is no 'flush all caches' magic.
If not, then such a code with 3 threads is broken, right?
It could very well be broken with two threads if you do not synchronize correctly. There is a reason the initialized flag is written to: that volatile write effectively publishes all the writes that occurred on that thread before it.
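For completeness, here is how thread C would have to cooperate, as a sketch reusing the question's names (the map content is my hypothetical addition): C's plain writes are only guaranteed visible to B because C also performs the volatile write afterwards, and B reads the volatile before touching the map.

import java.util.HashMap;
import java.util.Map;

class ThreeThreads {
    static Map configOptions;
    static volatile boolean initialized = false;

    // In Thread C
    static void threadC() {
        Map m = new HashMap();
        m.put("someKey", "someValue"); // hypothetical content
        configOptions = m;
        initialized = true; // volatile write publishes everything above
    }

    // In Thread B
    static void threadB() throws InterruptedException {
        while (!initialized)
            Thread.sleep(10);
        configOptions.get("someKey"); // guaranteed visible now
    }
}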
