I recently heard in a talk that a write to a volatile triggers a memory barrier for each variable that the thread has written to. Is that really correct? From the JLS, it seems that only the variable concerned gets flushed out, but not others. Does anybody know what is actually correct? Can someone point me to a concrete location in the JLS?
Yes, it will initiate a barrier. You can read more here. There are 4 types: LoadLoad, LoadStore, StoreStore, StoreLoad.
As far as your question
From the JLS, it seems that only the variable concerned gets flushed
out, but not others. Does anybody know what is actually correct?
All writes that occur before a volatile store are visible to any other thread, provided that the other thread loads this new store. However, writes that occur before a volatile load may or may not be seen by other threads if they do not load the new value.
For a practical example
volatile int a =0;
int b = 0;
Thread-1
b = 10;
a = 3;
Thread-2
if(a == 0){
// b can be 10 or 0
}
if(a == 3){
// b is guaranteed to be 10 (according to the JMM)
}
The reference to Volatile variables and other variables was correct. I did not realize that the transitivity of happens-before is something that must be implemented by the VM, not something that follows from the definition. I am still puzzled why something with such far-reaching consequences is not stated clearly but is actually a corollary of some definition. To wrap it up: Suppose you have 4 actions like this:
thread1 thread2
a1
a2
a3
a4
where a2 is a write to a volatile variable v and a3 is a read from the same volatile variable v.
It follows from the definition of happens-before (hb) that hb(a1,a2) and hb(a3,a4).
Also, for volatiles we have hb(a2,a3). It follows now from the required transitivity of hb that hb(a1,a3). So the write and subsequent read of the volatile variable v functions as a memory barrier.
Related
Java memory visibility documentation says that:
A write to a volatile field happens-before every subsequent read of that same field.
I'm confused about what subsequent means in the context of multithreading. Does this sentence imply some global clock for all processors and cores? So for example, I assign a value to a variable in cycle c1 in some thread, and then a second thread is able to see this value in the subsequent cycle c1 + 1?
It sounds to me like it's saying that it provides lockless acquire/release memory-ordering semantics between threads. See Jeff Preshing's article explaining the concept (mostly for C++, but the main point of the article is language neutral, about the concept of lock-free acquire/release synchronization.)
In fact Java volatile provides sequential consistency, not just acq/rel. There's no actual locking, though; see Jeff Preshing's article for an explanation of why the naming matches what you'd do with a lock.
If a reader sees the value you wrote, then it knows that everything in the producer thread before that write has also already happened.
This ordering guarantee is only useful in combination with other guarantees about ordering within a single thread.
e.g.
int data[100];
volatile bool data_ready = false;
Producer:
data[0..99] = stuff;
// release store keeps previous ops above this line
data_ready = true;
Consumer:
while(!data_ready){} // spin until we see the write
// acquire-load keeps later ops below this line
int tmp = data[99]; // gets the value from the producer
If data_ready was not volatile, reading it wouldn't establish a happens-before relationship between two threads.
You don't have to have a spinloop, you could be reading a sequence number, or an array index from a volatile int, and then reading data[i].
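A Java rendering of the data_ready hand-off above, as a sketch (the class and field names are my own): the volatile write of dataReady acts as the release, and the spinning read as the acquire.

```java
// Producer fills a plain array, then sets a volatile flag; the consumer
// spins on the flag and is then guaranteed to see the filled array.
public class Handoff {
    static final int[] data = new int[100];
    static volatile boolean dataReady = false;

    static int run() throws InterruptedException {
        Thread producer = new Thread(() -> {
            for (int i = 0; i < data.length; i++) data[i] = i * 2; // plain writes
            dataReady = true; // volatile write: "release" the data
        });
        producer.start();
        while (!dataReady) { }   // volatile read: "acquire"
        int last = data[99];     // guaranteed to see 198
        producer.join();
        return last;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 198
    }
}
```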
I don't know Java well. I think volatile actually gives you sequential-consistency, not just release/acquire. A sequential-release store isn't allowed to reorder with later loads, so on typical hardware it needs an expensive memory barrier to make sure the local core's store buffer is flushed before any later loads are allowed to execute.
Volatile Vs Atomic explains more about the ordering volatile gives you.
Java volatile is just an ordering keyword; it's not equivalent to C11 _Atomic or C++11 std::atomic<T>, which also give you atomic RMW operations. In Java, volatile_var++ is not an atomic increment; it is a separate load and store, like volatile_var = volatile_var + 1. In Java, you need a class like AtomicInteger to get an atomic RMW.
And note that C/C++ volatile doesn't imply atomicity or ordering at all; it only tells the compiler to assume that the value can be modified asynchronously. This is only a small part of what you need to write lockless for anything except the simplest cases.
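A small sketch of that difference (class name and iteration counts are mine): both a volatile counter and an AtomicInteger are incremented by several threads; only the AtomicInteger's result is deterministic.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates that AtomicInteger gives an atomic RMW, unlike volatile++.
public class CounterDemo {
    static volatile int plainVolatile = 0;      // ++ on this is NOT atomic
    static final AtomicInteger atomic = new AtomicInteger();

    static int run(int threads, int perThread) throws InterruptedException {
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    plainVolatile++;            // load + add + store: updates can be lost
                    atomic.incrementAndGet();   // single atomic RMW: never lost
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return atomic.get();                    // always threads * perThread
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(4, 10_000)); // prints 40000
    }
}
```

Note that no assertion is made about plainVolatile's final value: it may or may not equal 40000, depending on how the increments interleave.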
It means that once a certain thread writes to a volatile field, all other threads will observe (on the next read) that written value; this does not protect you against races, though.
Threads have their caches, and those caches will be invalidated and updated with that newly written value via cache coherency protocol.
EDIT
Subsequent means whenever it happens after the write itself. Since you don't know the exact cycle/timing when that will happen, you usually say that when some other thread observes the write, it will observe all the actions done before that write; thus a volatile establishes the happens-before guarantees.
Sort of like in an example:
// Actions done in Thread A
int a = 2;
volatile int b = 3;
// Actions done in Thread B
if(b == 3) { // observe the volatile write
// Thread B is guaranteed to see a = 2 here
}
You could also loop (spin wait) until you see 3, for example.
Peter's answer gives the rationale behind the design of the Java memory model.
In this answer I'm attempting to give an explanation using only the concepts defined in the JLS.
In Java every thread is composed of a set of actions.
Some of these actions have the potential to be observable by other threads (e.g. writing a shared variable); these
are called synchronization actions.
The order in which the actions of a thread are written in the source code is called the program order.
An order defines what is before and what is after (or better, not before).
Within a thread, each action has a happens-before relationship (denoted by <) with the next (in program order) action.
This relationship is important, yet hard to understand, because it's very fundamental: it guarantees that if A < B then
the "effects" of A are visible to B.
This is indeed what we expect when writing the code of a function.
Consider
Thread 1 Thread 2
A0 A'0
A1 A'1
A2 A'2
A3 A'3
Then by the program order we know A0 < A1 < A2 < A3 and that A'0 < A'1 < A'2 < A'3.
We don't know how to order all the actions.
It could be A0 < A'0 < A'1 < A'2 < A1 < A2 < A3 < A'3 or the sequence with the primes swapped.
However, every such sequence must order the actions of each thread according to that thread's program order.
The two program orders are not sufficient to order every action; they are partial orders, as opposed to the
total order we are looking for.
The total order that puts the actions in a row according to a measurable time (like a clock) at which they happened is called the execution order.
It is the order in which the actions actually happened (it is only required that the actions appear to have happened in
this order, but that's just an optimization detail).
Up until now, the actions are not ordered inter-thread (between two different threads).
The synchronization actions serve this purpose.
Each synchronization action synchronizes-with at least one other synchronization action (they usually come in pairs, like
a write and a read of a volatile variable, or a lock and an unlock of a mutex).
The synchronizes-with relationship is the happens-before between threads (the former implies the latter); it is exposed as
a different concept because 1) it is a slightly different thing and 2) happens-before within a thread is enforced naturally by the hardware, while synchronizes-with
may require software intervention.
happens-before is derived from the program order, synchronize-with from the synchronization order (denoted by <<).
The synchronization order is defined in terms of two properties: 1) it is a total order 2) it is consistent with each thread's
program order.
Let's add some synchronization action to our threads:
Thread 1 Thread 2
A0 A'0
S1 A'1
A1 S'1
A2 S'2
S2 A'3
The program orders are trivial.
What is the synchronization order?
We are looking for something that, by 1), includes all of S1, S2, S'1 and S'2 and that, by 2), must have S1 < S2 and S'1 < S'2.
Possible outcomes:
S1 < S2 < S'1 < S'2
S1 < S'1 < S'2 < S2
S'1 < S1 < S2 < S'2
All of these are synchronization orders; there is not one synchronization order but many, so the question above is wrong: it
should be "What are the synchronization orders?".
If S1 and S'1 are such that S1 << S'1, then we are restricting the possible outcomes to the ones where S1 < S'1, so the
outcome S'1 < S1 < S2 < S'2 above is now forbidden.
If S2 << S'1, then the only possible outcome is S1 < S2 < S'1 < S'2; when there is only a single outcome I believe we have
sequential consistency (the converse is not true).
Note that if A << B, this doesn't mean that there is a mechanism in the code to force an execution order where A < B.
Synchronization actions are affected by the synchronization order; they do not impose any materialization of it.
Some synchronization actions (e.g. locks) impose a particular execution order (and thereby a synchronization order), but some don't (e.g. reads/writes of volatiles).
It is the execution order that creates the synchronization order; this is completely orthogonal to the synchronizes-with relationship.
Long story short, the "subsequent" adjective refers to any synchronization order, that is, any valid (according to each thread's
program order) order that encompasses all the synchronization actions.
The JLS then continues defining when a data race happens (when two conflicting accesses are not ordered by happens-before)
and what it means to be happens-before consistent.
Those are out of scope.
I'm confused what does subsequent means in context of multithreading. Does this sentence implies some global clock for all processors and cores...?
Subsequent means (according to the dictionary) coming after in time. There certainly is a global clock across all CPUs in a computer (think X GHz), and the document is trying to say that if thread-1 did something at clock tick 1 and then thread-2 does something on another CPU at clock tick 2, its actions are considered subsequent.
A write to a volatile field happens-before every subsequent read of that same field.
The key phrase that could be added to this sentence to make it more clear is "in another thread". It might make more sense to understand it as:
A write to a volatile field happens-before every subsequent read of that same field in another thread.
What this is saying is that if a read of a volatile field happens in Thread-2 after (in time) the write in Thread-1, then Thread-2 will be guaranteed to see the updated value. Further up in the documentation you point to is this section (emphasis mine):
... The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation. The synchronized and volatile constructs, as well as the Thread.start() and Thread.join() methods, can form happens-before relationships. In particular.
Notice the highlighted phrase. The Java compiler is free to reorder instructions in any one thread's execution for optimization purposes as long as the reordering doesn't violate the definition of the language – this is called execution order and is critically different from program order.
Let's look at the following example with variables a and b that are non-volatile ints initialized to 0 with no synchronized clauses. What is shown is program order and the time in which the threads are encountering the lines of code.
Time Thread-1 Thread-2
1 a = 1;
2 b = 2;
3 x = a;
4 y = b;
5 c = a + b; z = x + y;
If Thread-1 adds a + b at Time 5, it is guaranteed to be 3. However, if Thread-2 adds x + y at Time 5, it might get 0, 1, 2, or 3, depending on race conditions. Why? Because the compiler might have reordered the instructions in Thread-1 to set a after b for efficiency reasons. Also, Thread-1 may not have appropriately published the values of a and b, so that Thread-2 might get out-of-date values. Even if Thread-1 gets context-switched out or crosses a write memory barrier and a and b are published, Thread-2 needs to cross a read barrier to update any cached values of a and b.
If a and b were marked as volatile then the write to a must happen-before (in terms of visibility guarantees) the subsequent read of a on line 3 and the write to b must happen-before the subsequent read of b on line 4. Both threads would get 3.
We use the volatile and synchronized keywords in Java to ensure happens-before guarantees. A write memory barrier is crossed when assigning a volatile or exiting a synchronized block, and a read barrier is crossed when reading a volatile or entering a synchronized block. The Java compiler cannot reorder write instructions past these memory barriers, so the order of updates is assured. These keywords control instruction reordering and ensure proper memory synchronization.
NOTE: volatile is unnecessary in a single-threaded application because program order assures the reads and writes will be consistent. A single-threaded application might see any value of (non-volatile) a and b at times 3 and 4 but it always sees 3 at Time 5 because of language guarantees. So although use of volatile changes the reordering behavior in a single-threaded application, it is only required when you share data between threads.
This is more a definition of what will not happen rather than what will happen.
Essentially it is saying that once a write to an atomic variable has happened there cannot be any other thread that, on reading the variable, will read a stale value.
Consider the following situation.
Thread A is continuously incrementing an atomic value a.
Thread B occasionally reads A.a and exposes that value as a
non-atomic b variable.
Thread C occasionally reads both A.a and B.b.
Given that a is atomic it is possible to reason that from the point of view of C, b may occasionally be less than a but will never be greater than a.
If a was not atomic no such guarantee could be given. Under certain caching situations it would be quite possible for C to see b progress beyond a at any time.
This is a simplistic demonstration of how the Java memory model allows you to reason about what can and cannot happen in a multi-threaded environment. In real life the potential race conditions between reading and writing to data structures can be much more complex but the reasoning process is the same.
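A sketch of this reasoning in Java (the structure and names are mine; the answer speaks abstractly): as long as C reads b before a, and a only ever grows, the snapshot of b can never exceed the snapshot of a.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the A/B/C scenario: A increments the atomic a, B copies it into
// the volatile b, and C checks that b never appears to run ahead of a.
public class MonotonicDemo {
    static final AtomicInteger a = new AtomicInteger();  // thread A increments
    static volatile int b = 0;                           // thread B's copy of a

    static boolean run() throws InterruptedException {
        Thread threadA = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) a.incrementAndGet();
        });
        Thread threadB = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) b = a.get(); // expose a as b
        });
        threadA.start(); threadB.start();
        boolean invariantHeld = true;                      // thread C's view
        for (int i = 0; i < 100_000; i++) {
            int bSnap = b;        // read b first...
            int aSnap = a.get();  // ...then a; a is monotonic, so bSnap <= aSnap
            if (bSnap > aSnap) invariantHeld = false;
        }
        threadA.join(); threadB.join();
        return invariantHeld;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints true
    }
}
```

The order of C's two reads matters: reading a first and b second would allow b to appear larger, because b could be updated in between.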
Let's consider the following piece of code in Java
int x = 0;
int who = 1;
Thread #1:
(1) x++;
(2) who = 2;
Thread #2
while(who == 1);
x++;
print x; ( the value should be equal to 2 but, perhaps, it is not* )
(I don't know Java's memory model; let's assume it is a strong memory model, I mean: (1) and (2) won't be swapped)
The Java memory model guarantees that access/store to 32-bit variables is atomic, so our program is safe in that respect. But, nevertheless, we should use the volatile attribute because of *: the value of x may be equal to 1 because x can be kept in a register when Thread#2 reads it. To resolve this we should make the x variable volatile. That is clear.
But, what about that situation:
int x = 0;
mutex m; ( just any mutex)
Thread #1:
mutex.lock()
x++;
mutex.unlock()
Thread #2
mutex.lock()
x++;
print x; // the value is always 2, why**?
mutex.unlock()
The value of x is always 2 even though we don't make it volatile. Do I correctly understand that locking/unlocking a mutex is connected with inserting memory barriers?
I'll try to tackle this. The Java memory model is kind of involved and hard to contain in a single StackOverflow post. Please refer to Brian Goetz's Java Concurrency in Practice for the full story.
The value of x is always 2 even though we don't make it volatile. Do I correctly understand that locking/unlocking a mutex is connected with inserting memory barriers?
First if you want to understand the Java memory model, it's always Chapter 17 of the spec you want to read through.
That spec says:
An unlock on a monitor happens-before every subsequent lock on that monitor.
So yes, there's a memory visibility event at the unlock of your monitor. (I assume by "mutex" you mean monitor. Most of the locks and other classes in the java.util.concurrent package also have happens-before semantics; check the documentation.)
Happens-before is what Java means when it guarantees not just that the events are ordered, but also that memory visibility is guaranteed.
We say that a read r of a variable v is allowed to observe a write w
to v if, in the happens-before partial order of the execution trace:
r is not ordered before w (i.e., it is not the case that
hb(r, w)), and
there is no intervening write w' to v (i.e. no write w' to v such
that hb(w, w') and hb(w', r)).
Informally, a read r is allowed to see the result of a write w if there
is no happens-before ordering to prevent that read.
This is all from 17.4.5. It's a little confusing to read through, but the info is all there if you do read through it.
Let's go over some things. The following statement is true: the Java memory model guarantees that access/store to 32-bit variables is atomic. However, it does not follow that the first pseudoprogram you listed is safe. Simply because two statements are ordered syntactically does not mean that the visibility of their updates is also so ordered as viewed by other threads. Thread #2 may see the update caused by who=2 before the increment of x is visible. Making x volatile would still not make the program correct. Instead, making the variable 'who' volatile would make the program correct. That is because volatile interacts with the Java memory model in specific ways.
I feel like there is some notion of 'writing back to main memory' at the core of a common-sense understanding of volatile which is incorrect. Volatile does not write back the value to main memory in Java. What reading from and writing to a volatile variable does is create what's called a happens-before relationship. When thread #1 writes to a volatile variable, you're creating a relationship that ensures that any other thread #2 viewing that volatile variable will also be able to 'view' all the actions thread #1 has taken before that. In your example that means making 'who' volatile. By writing the value 2 to 'who' you are creating a happens-before relationship so that when thread #2 views who=2 it will similarly see an updated version of x.
In your second example (assuming you meant to have the 'who' variable too) the mutex unlocking creates a happens-before relationship as I specified above. Since that means other threads viewing the unlock of the mutex (ie. they are able to lock it themselves) they will see the updated version of x.
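A Java sketch of the second example, using a built-in monitor as the "mutex" (names are mine; Thread.join is used here to force thread 2 to run after thread 1, which the original pseudocode implicitly assumed):

```java
// Thread 2 increments after thread 1 has finished, and the unlock in thread 1
// happens-before the lock in thread 2, so thread 2 always sees x == 1 first.
public class MonitorDemo {
    static int x = 0;                 // deliberately NOT volatile
    static final Object mutex = new Object();

    static int run() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (mutex) { x++; }   // unlock publishes x
        });
        t1.start();
        t1.join();                          // ensure t1's unlock precedes t2's lock
        Thread t2 = new Thread(() -> {
            synchronized (mutex) { x++; }   // lock makes t1's write visible
        });
        t2.start();
        t2.join();
        return x;                           // always 2
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 2
    }
}
```

Note that Thread.join itself also establishes a happens-before edge; the synchronized blocks show the monitor-based edge the answer describes.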
Note
By saying that a memory access can (or cannot) be reordered, I mean that it can be
reordered either by the compiler when emitting byte code, or by the JIT when emitting
machine code, or by the CPU when executing out of order (eventually requiring barriers to prevent this), with respect to any other memory access.
I often read that accesses to volatile variables cannot be reordered due to the happens-before relationship (HBR).
I found that an HBR exists between every two consecutive (in program order) actions of
a given thread, and yet they can be reordered.
Also, a volatile access has an HBR only with accesses of the same variable/field.
What I think makes volatile accesses non-reorderable is this:
A write to a volatile field (§8.3.1.4) happens-before every subsequent read [of any thread]
of that field.
If there are other threads, a reordering of the variables will become visible, as in this
simple example
volatile int a, b;
Thread 1 Thread 2
a = 1; while (b != 2);
b = 2; print(a); //a must be 1
So it is not the HBR itself that prevents the reordering but the fact that volatile extends this relationship to other threads; the presence of other threads is the element that prevents reordering.
If the compiler could prove that a reordering of a volatile variable would not change the
program semantics, it could reorder it even if there is an HBR.
If a volatile variable is never accessed by other threads, then its accesses
could be reordered
volatile int a, b, c;
Thread 1 Thread 2
a = 1; while (b != 2);
b = 2; print(a); //a must be 1
c = 3; //c never accessed by Thread 2
I think c=3 could very well be reordered before a=1; this quote from the specs
confirms this
It should be noted that the presence of a happens-before relationship between
two actions does not necessarily imply that they have to take place in that order
in an implementation. If the reordering produces results consistent with a legal
execution, it is not illegal.
So I made these simple java programs
public class vtest1 {
    public static volatile int DO_ACTION, CHOOSE_ACTION;

    public static void main(String[] args) {
        CHOOSE_ACTION = 34;
        DO_ACTION = 1;
    }
}
public class vtest2 {
    public static volatile int DO_ACTION, CHOOSE_ACTION;

    public static void main(String[] args) {
        (new Thread() {
            public void run() {
                while (DO_ACTION != 1);
                System.out.println(CHOOSE_ACTION);
            }
        }).start();
        CHOOSE_ACTION = 34;
        DO_ACTION = 1;
    }
}
In both cases both fields are marked as volatile and accessed with putstatic.
Since this is all the information the JIT has1, the machine code would be identical;
thus the vtest1 accesses will not be optimized2.
My question
Are volatile accesses really never reordered by the specification, or could they be3, but this is never done in practice?
If volatile accesses can never be reordered, what parts of the specs say so? and would this means that all volatile accesses are executed and seen in program order by the CPUs?
1Or can the JIT know that a field will never be accessed by another thread? If yes, how?
2Memory barriers will be present for example.
3For example if no other threads are involved.
What the JLS says (from JLS-8.3.1.4. volatile Fields) is, in part, that
The Java programming language provides a second mechanism, volatile fields, that is more convenient than locking for some purposes.
A field may be declared volatile, in which case the Java Memory Model ensures that all threads see a consistent value for the variable (§17.4).
Which means the accesses may be reordered, but the results of any reordering must eventually be consistent (when accessed by another thread) with the original order. A field in a single-threaded application wouldn't need locking (from volatile or synchronization).
The Java memory model provides sequential consistency (SC) for correctly synchronized programs. In simple terms, SC means that if every possible execution of a program can be explained by an execution in which all memory actions happen in some sequential order that is consistent with the program order (PO) of each thread, then the program is sequentially consistent (hence the name).
What this effectively means is that the JIT/CPU/memory subsystem can reorder volatile writes and reads as much as it wants, as long as there exists a sequential execution that could also explain the outcome of the actual execution. So the actual execution isn't that important.
If we look at the following example:
volatile int a, b, c;
Thread 1 Thread 2
a = 1; while (c != 1);
b = 1; print(b);
c = 1;
There is a happens-before relation between a=1 and b=1 (PO), a happens-before relation between b=1 and c=1 (PO), a happens-before relation between c=1 and the read that sees c == 1 (volatile variable rule), and a happens-before relation between that read and print(b) (PO).
Since the happens-before relation is transitive, there is a happens-before relation between a=1 and print(b). So in that sense, it can't be reordered. However, nobody can prove that a reordering happened, so it can still be reordered.
I'm going to be using notation from JLS §17.4.5.
In your second example, (if you'll excuse my loose notation) you have
Thread 1 ordering:
hb(a = 1, b = 2)
hb(b = 2, c = 3)
Volatile guarantees:
hb(b = 2, b != 2)
hb(a = 1, access a for print)
Thread 2 ordering:
hb(while(b != 2);, print(a))
and we have (emphasis mine)
More specifically, if two actions share a happens-before relationship,
they do not necessarily have to appear to have happened in that order
to any code with which they do not share a happens-before
relationship. Writes in one thread that are in a data race with reads
in another thread may, for example, appear to occur out of order to
those reads.
There is no happens-before relationship between c=3 and Thread 2. The implementation is free to reorder c=3 to its heart's content.
From 17.4. Memory Model of JLS
The memory model describes possible behaviors of a program. An implementation is free to produce any code it likes, as long as all resulting executions of a program produce a result that can be predicted by the memory model.
This provides a great deal of freedom for the implementor to perform a myriad of code transformations, including the reordering of actions and removal of unnecessary synchronization.
I have encountered the following claim: "Reading or writing to a volatile variable imposes a memory barrier in which the entire cache is flushed/invalidated."
Now consider the following execution scenario:
initial volatile boolean barrier;
initial int b = 0;
thread 1 b = 1; // write1
thread 1 barrier = true; // write2
thread 2 barrier = true; // write3
thread 2 print(b); // r1
Question: is thread 2 guaranteed to print 1?
Based on the claim, I would answer yes: thread 1 flushes its cache on write2 (so that b = 1 ends up in main memory), and thread 2 invalidates its cache on write3 (so that it will read b from main memory).
However, in the relevant JLS sections I am unable to find a guarantee for this behaviour, since write3 is a write, and not a read. Thus the following seemingly crucial clause does not apply:
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
Is there some other information I am missing, or am I perhaps misunderstanding something?
(Relevant questions:
Volatile variables and other variables
Is a write to a volatile a memory-barrier in Java)
I think you highlighted the wrong word in the phrase you quoted (I mean, the fact that it synchronizes with reads is by far not the main issue here): "A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread".
Note that it does not say anything at all about reads of other variables. Thread-1's version of b might still be sitting in the register for all you know.
Reasoning of volatile in terms of cache flushing/invalidation will lead to more confusion and it doesn't present the full picture.
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
A volatile write of a variable followed by a read of that variable in another thread establishes the happens-before guarantee;
i.e., changing the OP's code a little:
thread 1 b = 1; // write1
thread 1 barrier = true; // write2
thread 2 while(!barrier); // read barrier
thread 2 print(b); // r1
now it is guaranteed that b prints as 1.
Now, when thread 2 reads barrier as true, thread 2 is guaranteed to see everything which happened before the write to barrier in program order.
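A runnable version of this modified example (class name mine): thread 2 spins on the volatile barrier and is then guaranteed to see b == 1.

```java
// Thread 1 writes b, then the volatile flag; the spinning reader that
// observes the flag is guaranteed by happens-before to observe b == 1.
public class BarrierDemo {
    static int b = 0;                        // plain field
    static volatile boolean barrier = false;

    static int run() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            b = 1;              // write1
            barrier = true;     // write2 (volatile)
        });
        t1.start();
        while (!barrier) { }    // read barrier until the volatile write is seen
        int seen = b;           // r1: guaranteed to be 1
        t1.join();
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints 1
    }
}
```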
If you need to see volatile in terms of barrier then please refer
http://gee.cs.oswego.edu/dl/jmm/cookbook.html.
In brief, the volatile implementation involves two kinds of things.
First, the compiler is restricted in what it can optimize w.r.t. volatile reads and writes. Secondly, and more importantly, it is implemented
as volatile read -> LoadLoad|LoadStore barriers after the read,
volatile write -> StoreStore|LoadStore barriers
before the write and a StoreLoad barrier after the write.
Here LoadLoad means: process the invalidate queue
before reading the next value, which ensures reading the latest values.
StoreStore means: flush the store buffer before writing the next value.
http://www.puppetmastertrading.com/images/hwViewForSwHackers.pdf
After reading more blogs/articles etc, I am now really confused about the behavior of load/store before/after memory barrier.
Following are 2 quotes from Doug Lea in one of his clarification articles about the JMM, which are both very straightforward:
Anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.
Note that it is important for both threads to access the same volatile variable in order to properly set up the happens-before relationship. It is not the case that everything visible to thread A when it writes volatile field f becomes visible to thread B after it reads volatile field g.
But then when I looked into another blog about memory barrier, I got these:
A store barrier, “sfence” instruction on x86, forces all store instructions prior to the barrier to happen before the barrier and have the store buffers flushed to cache for the CPU on which it is issued.
A load barrier, “lfence” instruction on x86, forces all load instructions after the barrier to happen after the barrier and then wait on the load buffer to drain for that CPU.
To me, Doug Lea's clarification is more strict than the other one: basically, it means that if the load barrier and store barrier are on different monitors, data consistency will not be guaranteed. But the latter one means that even if the barriers are on different monitors, data consistency will be guaranteed. I am not sure if I am understanding these 2 correctly, and I am also not sure which of them is correct.
Considering the following codes:
public class MemoryBarrier {
    volatile int i = 1, j = 2;
    int x;

    public void write() {
        x = 14; //W01
        i = 3;  //W02
    }

    public void read1() {
        if (i == 3) {     //R11
            if (x == 14)  //R12
                System.out.println("Foo");
            else
                System.out.println("Bar");
        }
    }

    public void read2() {
        if (j == 2) {     //R21
            if (x == 14)  //R22
                System.out.println("Foo");
            else
                System.out.println("Bar");
        }
    }
}
Let's say we have 1 writer thread TW1 that first calls MemoryBarrier's write() method, then 2 reader threads TR1 and TR2 that call MemoryBarrier's read1() and read2() methods. Consider this program running on a CPU which does not preserve ordering (x86 DOES preserve ordering for such cases, so this doesn't apply there); according to the memory model, there will be a StoreStore barrier (let's say SB1) between W01/W02, as well as 2 LoadLoad barriers between R11/R12 and R21/R22 (let's say RB1 and RB2).
Since SB1 and RB1 are on the same monitor i, thread TR1 which calls read1 should always see 14 on x, and "Foo" is always printed.
SB1 and RB2 are on different monitors. If Doug Lea is correct, thread TR2 will not be guaranteed to see 14 on x, which means "Bar" may be printed occasionally. But if memory barriers work like Martin Thompson described in the blog, the store barrier will push all data to main memory and the load barrier will pull all data from main memory to cache/buffer, so TR2 will also be guaranteed to see 14 on x.
I am not sure which one is correct, or whether both of them are and what Martin Thompson described is just for the x86 architecture. The JMM does not guarantee that the change to x is visible to TR2, but the x86 implementation does.
Thanks~
Doug Lea is right. You can find the relevant part in section §17.4.4 of the Java Language Specification:
§17.4.4 Synchronization Order
[..] A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order). [..]
The memory model of the concrete machine doesn't matter, because the semantics of the Java Programming Language are defined in terms of an abstract machine -- independent of the concrete machine. It's the responsibility of the Java runtime environment to execute the code in such a way, that it complies with the guarantees given by the Java Language Specification.
Regarding the actual question:
If there is no further synchronization, the method read2 can print "Bar", because read2 can be executed before write.
If there is an additional synchronization with a CountDownLatch to make sure that read2 is executed after write, then method read2 will never print "Bar", because the synchronization with CountDownLatch removes the data race on x.
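A sketch of that second case (the structure and names are mine): a CountDownLatch forces read2 to run strictly after write, so the x == 14 check can no longer race.

```java
import java.util.concurrent.CountDownLatch;

// With the latch, countDown() in the writer happens-before await() returning
// in the reader, removing the data race on x: "Foo" is guaranteed.
public class LatchDemo {
    static volatile int i = 1, j = 2;
    static int x;

    static String run() throws InterruptedException {
        CountDownLatch written = new CountDownLatch(1);
        Thread writer = new Thread(() -> {
            x = 14;              // W01
            i = 3;               // W02
            written.countDown(); // publishes everything above
        });
        writer.start();
        written.await();         // read2 now runs strictly after write
        String result = (j == 2 && x == 14) ? "Foo" : "Bar"; // R21/R22
        writer.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints Foo
    }
}
```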
Independent volatile variables:
Does it make sense, that a write to a volatile variable does not synchronize-with a read of any other volatile variable?
Yes, it makes sense. If two threads need to interact with each other, they usually have to use the same volatile variable in order to exchange information. On the other hand, if a thread uses a volatile variable without a need for interacting with all other threads, we don't want to pay the cost for a memory barrier.
It is actually important in practice. Let's make an example. The following class uses a volatile member variable:
class Int {
    public volatile int value;
    public Int(int value) { this.value = value; }
}
Imagine this class is used only locally within a method. The JIT compiler can easily detect that the object is only used within this method (escape analysis).
public int deepThought() {
    return new Int(42).value;
}
With the above rule, the JIT compiler can remove all effects of the volatile reads and writes, because the volatile variable cannot be accessed from any other thread.
This optimization actually exists in the Java JIT compiler:
src/share/vm/opto/memnode.cpp
As far as I understood, the question is actually about volatile reads/writes and their happens-before guarantees. Speaking of that part, I have only one thing to add to nosid's answer:
Normal writes cannot be moved below a volatile write, and normal reads cannot be moved above a volatile read. That's why the read1() and read2() results will be as nosid wrote.
Speaking about barriers, the definition sounds fine to me, but the one thing that probably confused you is that these are the tools/mechanisms (call them whatever you like) used to implement the behavior described by the JMM in HotSpot. When using Java, you should rely on JMM guarantees, not implementation details.