I know that the volatile keyword refreshes all invisible data, i.e. if some thread reads a volatile variable, all potentially invisible variables/references (not only the variable being read) will be visible (i.e. have correct values) after that read. Right? But what about synchronized? Is it the same? If we read 3 variables in a synchronized block, for example, will all other variables be visible too?
What will happen if one thread changes the value of some variable (for example, sets the variable "age" from 2 to 33) from a non-synchronized block and then dies? The value could be written in the thread's stack, but the main thread may not see this change; the background thread dies, and the new value is gone and cannot be retrieved?
And a last question: if we have 2 background threads and we know that our main thread will be notified (in some way) just before each of them dies, and our main thread waits for both of them to finish their work before continuing, how can we make sure that all variable changes made by the background threads will be visible to the main thread? Can we just put a synchronized block after the background threads finish, or what? We don't want to access the variables changed by the background threads through synchronized blocks every time after those threads are dead (because it's overhead), but we need to see their correct values. And it seems unnatural to read some fake volatile variable or to use a fake synchronized block (if it refreshes all data) just to refresh all data.
I hope that my questions are explained well.
Thanks in advance.
Reading the value of a volatile variable creates a happens-before relationship between the write in one thread and the read in another.
See http://jeremymanson.blogspot.co.uk/2008/11/what-volatile-means-in-java.html:
The Java volatile modifier is an example of a special mechanism to
guarantee that communication happens between threads. When one thread
writes to a volatile variable, and another thread sees that write, the
first thread is telling the second about all of the contents of memory
up until it performed the write to that volatile variable.
Synchronized blocks create a happens-before relationship as well. See http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/package-summary.html
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor. And because the happens-before relation
is transitive, all actions of a thread prior to unlocking
happen-before all actions subsequent to any thread locking that
monitor.
Both have the same effect on visibility.
If a value is written without any kind of synchronization, then there is no guarantee that other threads will ever see it. If you want to share values across threads, you should add some kind of synchronization or locking, or use volatile.
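For example, here is a minimal sketch of publishing a value through a volatile flag (the class and field names are made up for illustration):

    class Publisher {
        private int payload;              // plain field, published via the volatile flag
        private volatile boolean ready;   // the volatile write/read creates the happens-before edge

        void writer() {                   // runs on the writing thread
            payload = 42;                 // happens-before the volatile write below
            ready = true;                 // volatile write
        }

        void reader() {                   // runs on the reading thread
            if (ready) {                  // volatile read
                System.out.println(payload);  // guaranteed to see 42, not a stale value
            }
        }
    }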
All your questions are answered in the java.util.concurrent package documentation.
But what about synchronized?
Here's what the documentation says:
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
The value could be written in the thread's stack, but the main thread may not see this change; the background thread dies and the new value is gone and cannot be retrieved?
If the value is written on the thread's stack, then we're talking about a local variable. Local variables are not accessible anywhere except in the method declaring them. So if the thread dies, the stack no longer exists and the local variable doesn't exist either. If you're talking about a field of an object, that's stored on the heap. If no other thread has a reference to this object, and the reference is unreachable from any static variable, then it will be garbage collected.
and our main thread waits for both of them to finish their work before continuing, how can we make sure that all variable changes (which are made by the background threads) will be visible to the main thread
The documentation says:
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
So, since the main thread waits for the background threads to die, it uses join(), and every action made by the background threads will be visible by the main thread after join() returns.
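As a hedged sketch of that pattern (the computations are invented for illustration):

    class JoinVisibility {
        static int resultA;   // plain fields, not volatile
        static int resultB;

        public static void main(String[] args) throws InterruptedException {
            Thread a = new Thread(() -> resultA = 1);
            Thread b = new Thread(() -> resultB = 2);
            a.start();
            b.start();
            a.join();   // everything thread a did happens-before join() returns
            b.join();   // same for thread b
            System.out.println(resultA + resultB);   // both writes are guaranteed to be visible
        }
    }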
Fully covering this topic at the level your question implies would require more than a StackOverflow.com answer, so I recommend looking for a good book on multithreaded programming.
volatile guarantees that read and write accesses to the qualified variable are totally ordered with respect to other accesses to the same volatile variable¹.
It does this by preventing the volatile read and write accesses from being reordered with previous or subsequent instructions and by enforcing that all the side effects that happened in the writing thread before the access are visible to a reading thread.
This means that volatile variables are read and written exactly as you see in your code, as if the instructions were executed one at a time, each one beginning only when all the side effects of the previous one are completed and visible to every other thread.
To better understand what this means and why it is necessary, look at this question of mine on the difference between Memory Barriers and lock prefixed instructions.
Note that volatile in Java is much, much stronger than volatile in C or C++. It guarantees more than the usual treatment of read/write accesses as side effects for optimization purposes. It doesn't simply mean that the variable is read from memory every time: a Java volatile is a memory barrier.
The synchronized block simply guarantees exclusive (i.e. one thread at a time) execution of a block of code.
It doesn't imply that all threads see memory accesses in the same order: a thread could first see the write to a protected shared data structure and only later the write to the lock!
¹ In some circumstances, where a full memory barrier is emitted, this may enforce that writes and reads to all volatile variables made by a thread T will be visible to other threads in T's program order. Note that this does not suffice for synchronization, as there is still no relationship between the inter-thread accesses.
No.
Shared variables are not on the stack of any thread; a copy of their value can be, but the variables themselves exist independently of any thread.
When a thread dies gracefully, the write has been done and can be retrieved (beware of memory ordering again).
If the thread dies forcefully, it could be interrupted anywhere. However the actual Java assignment is implemented, if the thread stops before the write to the shared variable (whether by direct assignment or by copying a local value from the stack), then the write never happened.
You don't need synchronized, just volatile, as the main thread only reads and the background threads only write (to different variables).
I have read the references below:
http://tutorials.jenkov.com/java-concurrency/volatile.html
https://www.geeksforgeeks.org/volatile-keyword-in-java/
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomic.html
But I am still not clear on what the expected behavior is in the case below:
I have a thread pool in which threads are reused.
I have a non-volatile variable which is accessed by different threads from that thread pool.
The threads are run sequentially. (Yes, they could all run in one thread. But just go with this case; don't ask why I don't use one thread.)
And here is the part I am not clear on: do I still need volatile to make sure the change made by the previous thread is visible to the next thread execution?
Does Java flush the thread-local cache to memory after each execution?
And does the thread reload the local cache before each execution?
Java Memory Model design can answer your question:
In the Java Memory Model a volatile field has a store barrier inserted after a write to it and a load barrier inserted before a read of it. Qualified final fields of a class have a store barrier inserted after their initialisation to ensure these fields are visible once the constructor completes when a reference to the object is available.
see https://dzone.com/articles/memory-barriersfences
In other words: when a thread reads from a volatile variable, the JMM forces all CPUs owning that memory area to write it back to main memory from their local caches. The CPU then loads the value into its local cache if necessary.
And here is the part I am not clear on: do I still need volatile to make sure the change made by the previous thread is visible to the next thread execution?
Yes and no. The volatile keyword only makes the value of a variable visible to other threads. But if you need only one thread reading and writing at a given moment, you need synchronization. If you only need to provide visibility, the volatile keyword is enough.
If you do not use the volatile keyword and share the value across threads, it might not be up to date or might even be corrupted. For the long type, Java may perform two separate 32-bit writes, and they are not atomic, so another thread could read the value in between those two writes.
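A small sketch of that long-tearing point (the class is invented for illustration):

    class WideField {
        // Without volatile, a JVM may legally split this 64-bit write into two 32-bit writes,
        // so a concurrent reader could observe half of one value and half of another.
        long unsafeValue;

        // volatile guarantees atomic reads and writes of the 64-bit value
        // (compound operations such as safeValue++ are still not atomic, though).
        volatile long safeValue;
    }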
It doesn't matter whether the threads run sequentially or in parallel; if you don't use the volatile keyword there is no visibility guarantee. That means there is no guarantee that the other thread will see the latest value, as the thread may read the value from a register.
Do I only need to mark a field volatile if multiple threads are reading it at the same time?
What about the scenario where Thread A changes the value of a field, and Thread B evaluates it after Thread A is guaranteed to have completed?
In my scenario, is there a happens-before relationship enforced (without the volatile keyword)?
You need the volatile keyword or some other synchronization mechanism to enforce the "happens before" relationship that guarantees visibility of the variable in threads other than the thread in which it was written. Without such synchronization, everything is treated as happening "at the same time", even if it doesn't by wall clock time.
In your particular example, one thing that may happen without synchronization is that the value written by thread A is never flushed from cache to main memory, and thread B executes on another processor and never sees the value written by thread A.
When you are dealing with threads, wall clock time means nothing. You must synchronize properly if you want data to pass between threads properly. There's no shortcut to proper synchronization that won't cause you headaches later on.
In the case of your original question, some ways proper synchronization can be achieved are by using the volatile keyword, by using synchronized blocks, or by having the thread that is reading the value of the variable join() the thread in which the variable is written.
Edit: In response to your comment, a Future has internal synchronization such that calling get() on a Future establishes a "happens before" relationship when the call returns, so that also achieves proper synchronization.
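A minimal sketch of the Future point (the task body and names are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    class FutureVisibility {
        static int sharedResult;   // plain field, not volatile

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<?> f = pool.submit(() -> { sharedResult = 42; });
            f.get();                             // actions of the task happen-before get() returns
            System.out.println(sharedResult);    // guaranteed to print 42
            pool.shutdown();
        }
    }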
No, you don't need volatile...
is there a happens-before relationship enforced (without the volatile keyword)?
...but your code needs to do something to establish "happens-before."
There will be a happens-before relationship if (and only if) your code does something that the "Java Language Specification" (JLS) says will establish "happens-before."
What about the scenario where Thread A changes the value of a field, and Thread B evaluates it after Thread A is guaranteed to have completed?
Depends on what you mean by "guaranteed." If "guaranteed" means, "established happens-before," then your code will work as expected.
One way you can guarantee it is for thread B to call threadA.join(). The JLS guarantees that if thread B calls threadA.join(), then everything thread A did must "happen before" the join() call returns.
You do not need any of the shared variables to be volatile if thread B only accesses them after joining thread A.
You can choose one of the available options to achieve the same purpose (a brief sketch follows the list below).
You can use volatile to force all threads to get the latest value of the variable from main memory.
You can use synchronization to guard critical data
You can use Lock API
You can use Atomic variables
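As an illustration of the last two options, here is a brief sketch (the class and field names are invented):

    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.locks.ReentrantLock;

    class CounterOptions {
        private final AtomicInteger atomicCount = new AtomicInteger();
        private final ReentrantLock lock = new ReentrantLock();
        private int guardedCount;

        int incrementAtomically() {
            return atomicCount.incrementAndGet();   // atomic and visible to all threads
        }

        int incrementWithLock() {
            lock.lock();
            try {
                return ++guardedCount;   // the lock acquire/release provides the happens-before edge
            } finally {
                lock.unlock();
            }
        }
    }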
Refer to this documentation page on high level concurrency constructs.
Have a look at related SE questions:
Avoid synchronized(this) in Java?
What is the difference between atomic / volatile / synchronized?
I've read this answer in the end of which the following's written:
Anything that you can do with volatile can be done with synchronized, but not vice versa.
It's not clear. JLS 8.3.1.4 defines volatile fields as follows:
A field may be declared volatile, in which case the Java Memory Model
ensures that all threads see a consistent value for the variable
(§17.4).
So, volatile fields are about memory visibility. Also, as far as I understood from the answer I cited, reads and writes of volatile fields are synchronized.
Synchronization, in turn, guarantees that only one thread has access to a synchronized block. As I understand it, that has nothing to do with memory visibility. What did I miss?
In fact synchronization is also related to memory visibility, as the JVM adds a memory barrier at the exit of a synchronized block. This ensures that the results of writes by the thread inside the synchronized block are guaranteed to be visible to reads by other threads once the first thread has exited the synchronized block.
Note: following @PaŭloEbermann's comment, if the other threads do not go through a read memory barrier (by entering a synchronized block, for example), their local caches will not be invalidated and they might therefore read a stale value.
The exit of a synchronized block is a happens-before in this doc: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html#MemoryVisibility
Look for these extracts:
The results of a write by one thread are guaranteed to be visible to a
read by another thread only if the write operation happens-before the
read operation.
and
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor. And because the happens-before relation
is transitive, all actions of a thread prior to unlocking
happen-before all actions subsequent to any thread locking that
monitor.
Synchronized and volatile are different, but both are usually used to solve the same common problem.
Synchronized makes sure that only one thread accesses the shared resource at a given point in time.
Those shared resources are, in turn, often declared volatile because, if a thread changes a shared resource's value, the change has to be seen by the other threads as well. Without volatile, the runtime may just optimize the code and read the value from a cache. What volatile does is ensure that whenever any thread accesses a volatile variable, it doesn't read the value from a cache but actually gets it from main memory.
I was going through the log4j code and this is what I found:
/**
* Config should be consistent across threads.
*/
protected volatile PrivateConfig config;
If multiple threads write to a shared volatile variable and they also need to use its previous value, that can create a race condition. At this point you need to use synchronization.
... if two threads are both reading and writing to a shared variable, then using the volatile keyword for that is not enough. You need to use a synchronized in that case to guarantee that the reading and writing of the variable is atomic. Reading or writing a volatile variable does not block threads reading or writing. For this to happen you must use the synchronized keyword around critical sections.
For a detailed tutorial about volatile, see 'volatile' is not always enough.
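A short sketch of that limitation (the counter class is invented for illustration):

    class VolatileNotEnough {
        private volatile int count;

        void unsafeIncrement() {
            count++;   // read-modify-write: two threads can interleave here and lose an update
        }

        synchronized void safeIncrement() {
            count++;   // mutual exclusion makes the compound operation atomic
        }
    }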
That's wrong. Synchronization does have to do with memory visibility. Every thread has its own cache. If you acquire a lock, the cache is refreshed. If you release a lock, the cache is flushed to main memory.
If you read a volatile field there is also a refresh; if you write a volatile field there is a flush.
I read this in an upvoted comment on StackOverflow:
But if you want to be safe, you can add a simple synchronized(this) {}
at the end of your @PostConstruct [method]
[note that variables were NOT volatile]
I was thinking that happens-before is enforced only if both the write and the read are executed in synchronized blocks, or at least the read is volatile.
Is the quoted sentence correct? Does an empty synchronized(this) {} block flush all variables changed in the current method to "generally visible" memory?
Please consider some scenarios:
What if the second thread never locks on this? (Suppose the second thread reads in other methods.) Remember that the question is about flushing changes to other threads, not about giving other threads a way (synchronized) to poll changes made by the original thread. Also, having no synchronization in the other methods is very likely in a Spring @PostConstruct context, as the original comment says.
Is memory visibility of the changes forced only in the second and subsequent calls by another thread? (Remember that this synchronized block is the last call in our method.) That would mark this way of synchronization as very bad practice (stale values on the first call).
Much of what's written about this on SO, including many of the answers/comments in this thread, are, sadly, wrong.
The key rule in the Java Memory Model that applies here is: an unlock operation on a given monitor happens-before a subsequent lock operation on that same monitor. If only one thread ever acquires the lock, it has no meaning. If the VM can prove that the lock object is thread-confined, it can elide any fences it might otherwise emit.
The quote you highlight assumes that releasing a lock acts as a full fence. And sometimes that might be true, but you can't count on it. So your skeptical questions are well-founded.
See Java Concurrency in Practice, Ch 16 for more on the Java Memory Model.
All writes that occur prior to a monitor exit are visible to all threads after a monitor enter.
A synchronized(this){} can be turned into bytecode like
monitorenter
monitorexit
So if you have a bunch of writes prior to the synchronized(this){} they would have occurred before the monitorexit.
This brings us to the next point of my first sentence.
visible to all threads after a monitor enter
So now, in order for a thread to ensure the writes occurred, it must perform the same synchronization, i.e. synchronized(this){}. This will issue at the very least a monitorenter and establish your happens-before ordering.
So to answer your question
Does an empty synchronized(this) {} block flush all variables changed in the current method to "generally visible" memory?
Yes, as long as you maintain the same synchronization when you want to read those non-volatile variables.
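A hedged sketch of that condition (the class is invented; both threads must use the same monitor, and the reader must lock after the writer's unlock):

    class PostConstructPublish {
        private int state;   // not volatile

        void init() {                 // runs on the initializing thread
            state = 42;
            synchronized (this) { }   // empty block: unlock of 'this' after the write
        }

        int read() {                  // runs on another thread
            synchronized (this) {     // lock of the same monitor
                return state;         // sees 42, provided init()'s unlock came first
            }
        }
    }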
To address your other questions
What if the second thread never locks on this? (Suppose the second thread reads in other methods.) Remember that the question is about flushing changes to other threads, not about giving other threads a way (synchronized) to poll changes made by the original thread. Also, having no synchronization in the other methods is very likely in a Spring @PostConstruct context
Well in this case using synchronized(this) without any other context is relatively useless. There is no happens-before relationship and it's in theory just as useful as not including it.
Is memory visibility of the changes forced only in the second and subsequent calls by another thread? (Remember that this synchronized block is the last call in our method.) That would mark this way of synchronization as very bad practice (stale values on the first call)
Memory visibility is forced by the first thread calling synchronized(this), in that it will write directly to memory. Now, this doesn't necessarily mean each thread needs to read directly from memory; they can still read from their own processor caches. Having a thread call synchronized(this) ensures it pulls the values of the fields from memory and retrieves the most up-to-date values.
Does memory visibility depend on which monitor is used? Lock B is acquired after lock A is released, is it enough for memory visibility?
for example, the following code:
int state; // shared

// thread A
synchronized (A) {
    state += 1;
}
Thread.sleep(10000000);

// thread B
Thread.sleep(1000);
synchronized (B) {
    state += 1;
}
The threads are started at the same time, and thread B's sleep time may be arbitrarily high, just to ensure that it executes after thread A has used the state variable. Thread A's sleep time is used to ensure that thread A does not finish before thread B uses the shared state variable.
UPDATE
From http://www.ibm.com/developerworks/library/j-jtp03304/
When a thread exits a synchronized block as part of releasing the associated monitor, the JMM requires that the local processor cache be flushed to main memory.
Similarly, as part of acquiring the monitor when entering a synchronized block, local caches are invalidated so that subsequent reads will go directly to main memory and not the local cache.
If this is true, then I see no reason for the state variable not to be visible to thread B.
Further, however, they say that the monitor should be the same, but that is not implied by the aforementioned statements.
This process guarantees that when a variable is written by one thread during a synchronized block protected by a given monitor and read by another thread during a synchronized block protected by the same monitor, the write to the variable will be visible by the reading thread.
It seems that the process of flushing local memory is not as straightforward as described in the first statement, and may not happen on every lock release?
Yes, it depends. You can read the JLS about this; the relevant section is "17.4.4. Synchronization Order":
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
You see, a concrete monitor object m is specified there. If monitors are different, then you are not getting synchronizes-with relationship, hence, you do not get happens-before relationship (from the 17.4.5):
If an action x synchronizes-with a following action y, then we also have hb(x, y).
So your updates may be performed out of order, with possibly missing updates.
Does memory visibility depend on which monitor is used? Yes.
Lock B is acquired after lock A is released, is it enough for memory visibility? No.
The two threads have to synchronize on the same monitor in order to see each other's writes. In your example, both threads could see state having the value 1, no matter what sleep intervals you insert. It of course depends on the implementation of the JVM you're using, and different JVMs could yield different results. Basically, you have unsynchronized access to a field, and that should always be avoided (because it's not deterministic what value state has).
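A minimal corrected sketch of the example, with both threads locking one shared monitor (the lock object name is illustrative):

    class SharedState {
        private static final Object LOCK = new Object();   // single monitor used by both threads
        static int state;

        static void increment() {      // called from thread A and thread B
            synchronized (LOCK) {      // the unlock here happens-before the next lock of LOCK
                state += 1;
            }
        }
    }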
Read more in the excellent chapter on the Memory Model in the Java Specification.