Does a non-volatile variable need synchronized? - java

Consider 2 threads and an array int[] values.
The first thread is performing:
synchronized (values) {
    values[i] = 58;
}
while the second thread is performing:
if (values[i] == 58) {
}
outside a synchronized block.
If the first thread performs values[i] = 58 first, is it guaranteed that, if the second thread executes slightly later, the if in the second thread reads 58 even though the second thread reads values[i] outside a synchronized block?

If the first thread performs values[i] = 58 first, is it guaranteed that, if the second thread executes slightly later, the if in the second thread reads 58 even though the second thread reads values[i] outside a synchronized block?
No
Synchronising this way does not stop other threads from performing operations on the array simultaneously. However, other threads will be prevented from grabbing a lock on the array.

The behaviour described above is not guaranteed. Such visibility is actually governed by the happens-before relationship:
The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.
Happens-before relationship (according to JLS) can be achieved as such:
Each action in a thread happens-before every action in that thread that comes later in the program's order.
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking.
A call to start on a thread happens-before any action in the started thread.
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
So, in your particular case, you actually need either synchronization using a shared monitor or an AtomicIntegerArray in order to make access to the array thread-safe; the volatile modifier won't help as is, because it only affects the variable pointing to the array, not the array's elements (more detailed explanation).
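For illustration, here is a minimal sketch of the AtomicIntegerArray approach (the class name and array size are made up); its per-element get and set have volatile read/write semantics, so the reader is guaranteed to observe 58 once the writer's set has completed:

import java.util.concurrent.atomic.AtomicIntegerArray;

class SharedValues {
    private final AtomicIntegerArray values = new AtomicIntegerArray(16);

    void writer(int i) {
        values.set(i, 58);           // volatile-style write of element i
    }

    boolean reader(int i) {
        return values.get(i) == 58;  // volatile-style read of element i
    }
}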

Related

Impact of wait() method on a thread in synchronization block in java

synchronized (lockObject) {
    // update some value of the common shared resource
    lockObject.wait();
}
Since the thread releases the lock when wait() is called, I want to know whether, upon releasing the lock, it also updates the value of the shared resource object in main memory, or whether it only updates the value after the execution of the synchronized block.
It is a fallacy to think that due to synchronization (e.g. synchronized or volatile) data needs to be written to main memory. CPU caches on modern CPUs are always coherent due to the cache coherence protocol.
An object.wait causes the thread to release the lock. And as soon as another thread sends a notify, the lock is reacquired. The object.wait has no semantics in the Java memory model; only acquire and release of the lock are relevant.
So in your particular case, if a thread does a wait, it triggers a lock release. If another thread reads that state after acquiring the same lock, then there is a happens-before edge between the thread that released the lock (due to the wait) and the thread that acquired the lock. Therefore the second thread is guaranteed to see the changes of the first.
In the JLS 17.4.5 it says
The wait methods of class Object (§17.2.1) have lock and unlock actions associated with them; their happens-before relationships are defined by these associated actions.
Also, from JLS 17.4.4:
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
If a thread waits it gives up the lock so that other threads can act. It only makes sense that any changes the thread made before waiting (which triggers an unlock action) should be visible to other threads acquiring the lock.
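As a rough sketch of that reasoning (lockObject, value and the method names are illustrative, and real code would call wait() in a loop to guard against spurious wakeups): the write to value happens before wait() releases the lock, so a thread that later acquires the same lock is guaranteed to see it.

class SharedResource {
    private final Object lockObject = new Object();
    private int value;                 // guarded by lockObject

    void updateAndWait() throws InterruptedException {
        synchronized (lockObject) {
            value = 42;                // update some value of the shared resource
            lockObject.wait();         // releases the lock: an unlock action
        }
    }

    void readAndNotify() {
        synchronized (lockObject) {    // lock action: happens-before edge with the unlock above
            int seen = value;          // guaranteed to see 42 if updateAndWait ran first
            lockObject.notify();       // lets the waiting thread reacquire the lock later
        }
    }
}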

Do we need to make a field 'volatile', if Thread1 enters sync block, updates it, is still inside the sync block, Thread2 outside of sync reads field?

Let's say we have 'class A' which has a data member 'var'.
class A
{
    int var;

    void method()
    {
        int local = var;    // read var outside the synchronized block
        synchronized (this)
        {
            var = 33;       // update var here
        }
    }
}
Let's say Thread 1 acquires the lock and enters the synchronized block. It updates the field value 'var'. Let's say the field value is cached in the processor's core by the Thread. It updates the value in the cache.
Now thread 2 starts running, enters method(), and reads field value 'var'.
Will Thread 2 surely get the updated value? Does synchronized make sure Thread 2 will get the updated value even when Thread 1 has not exited the synchronized block? In this case, do we need to make 'var' volatile?
PS - Everything is happening on the same object.
If you do not make 'var' volatile, then there is no happens-before edge between writing and reading 'var'. So you have a data race on your hands, and weird things can happen, like the compiler messing things up.
So in short: make it volatile (or make sure you read 'var' using the same lock).
Does synchronized make sure Thread 2 will get the updated value?
No. synchronized doesn't do anything for a thread if the thread does not enter a synchronized block. The way to think about synchronized is this: Whatever one thread does before it leaves a synchronized block is guaranteed to become visible to some other thread by the time the second thread subsequently* enters a block that is synchronized on the same lock object.
For a single int variable, you could make it volatile instead. volatile makes a guarantee similar to the guarantee of synchronized: Whatever one thread does before it updates a volatile variable is guaranteed to become visible to some other thread by the time the second thread subsequently* reads the same volatile variable.
* Edit: I added "subsequently" to make it clear that neither synchronized nor volatile is sufficient to ensure that the threads will access var in some particular order. If you wish to ensure that thread 2 will not try to read var until after thread 1 has assigned it, then you will have to use some other means† to coordinate their activity. Correct use of synchronized or volatile only can ensure that IF thread 1 updates var before thread 2 examines it, then thread 2 will see the update.
† There are many "other means." One example is a Semaphore. Thread 1 could release() a permit to the semaphore after it updates var, and thread 2 could acquire() the permit before it reads var. The acquire() call would cause thread 2 to wait if it arrived at the semaphore before thread 1 had done its job.

Preferring synchronized to volatile

I've read this answer, at the end of which the following is written:
Anything that you can do with volatile can be done with synchronized, but not vice versa.
It's not clear. JLS 8.3.1.4 defines volatile fields as follows:
A field may be declared volatile, in which case the Java Memory Model
ensures that all threads see a consistent value for the variable
(§17.4).
So, volatile fields are about memory visibility. Also, as far as I understood from the answer I cited, reads of and writes to volatile fields are synched.
Synchronization, in turn, guarantees that only one thread has access to a synched block. As I understand it, that has nothing to do with memory visibility. What did I miss?
In fact, synchronization is also related to memory visibility, as the JVM adds a memory barrier at the exit of the synchronized block. This ensures that the results of writes by the thread inside the synchronized block are guaranteed to be visible to reads by other threads once the first thread has exited the synchronized block.
Note: following #PaŭloEbermann's comment, if the other threads do not go through a read memory barrier (by entering a synchronized block, for example), their local caches will not be invalidated and they might therefore read an old value.
The exit of a synchronized block creates a happens-before relationship, as described in this doc: http://docs.oracle.com/javase/6/docs/api/java/util/concurrent/package-summary.html#MemoryVisibility
Look for these extracts:
The results of a write by one thread are guaranteed to be visible to a
read by another thread only if the write operation happens-before the
read operation.
and
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor. And because the happens-before relation
is transitive, all actions of a thread prior to unlocking
happen-before all actions subsequent to any thread locking that
monitor.
synchronized and volatile are different, but usually both of them are used to solve the same common problem.
synchronized makes sure that only one thread will access the shared resource at a given point in time.
Those shared resources are often also declared volatile because, if a thread has changed the shared resource's value, the change has to become visible to the other threads as well. Without volatile, the runtime may just optimize the code by reading the value from a cache. So what volatile does is: whenever any thread accesses the volatile variable, it won't use a stale cached value; it actually gets the value from main memory, and that value is used.
I was going through the log4j code and this is what I found.
/**
 * Config should be consistent across threads.
 */
protected volatile PrivateConfig config;
If multiple threads write to a shared volatile variable and they also need to use its previous value, that can create a race condition. At this point you need to use synchronization.
... if two threads are both reading and writing to a shared variable, then using the volatile keyword for that is not enough. You need to use a synchronized in that case to guarantee that the reading and writing of the variable is atomic. Reading or writing a volatile variable does not block threads reading or writing. For this to happen you must use the synchronized keyword around critical sections.
For a detailed tutorial about volatile, see 'volatile' is not always enough.
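To illustrate the read-modify-write point above, here is a minimal sketch (the class and field names are made up): counter++ is a read followed by a write, so two threads can both read the same old value and one increment is lost even though the field is volatile, whereas the synchronized version makes the read-modify-write atomic.

class Counter {
    private volatile int unsafeCount;  // visible, but increments can still be lost
    private int safeCount;             // guarded by 'this'

    void unsafeIncrement() {
        unsafeCount++;                 // NOT atomic: read, add, write back
    }

    synchronized void safeIncrement() {
        safeCount++;                   // atomic with respect to other synchronized methods
    }
}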
That's wrong. Synchronization does have to do with memory visibility. Every thread has its own cache. If you acquire a lock, the cache is refreshed. If you release a lock, the cache is flushed to main memory.
If you read a volatile field there is also a refresh; if you write a volatile field there is a flush.

Java volatile and synchronized

I know that the volatile keyword refreshes all the invisible data, i.e. if some thread reads a volatile variable, all potentially invisible variables/references (not only the variable that is being read) will be okay (visible, i.e. with correct values) after this read. Right? But what about synchronized? Is it the same? If in a synchronized block we read 3 variables, for example, will all the other variables be visible?
What will happen if one thread changes the value of some variable (for example sets the variable "age" from 2 to 33) from a non-synchronized block and after this the thread dies? The value could be written on the thread's stack, but maybe the main thread will not see this change; the background thread will die and the new value is gone and cannot be retrieved?
And the last question: if we have 2 background threads and we know that our main thread will be notified (in some way) just before each of them dies, and our main thread will wait for both of them to finish their work and will continue after that, how can we assure that all variable changes (which are made by the background threads) will be visible to the main thread? Can we just put a synchronized block after the background threads finish? We don't want to access the variables that are changed by the background threads inside synchronized blocks every time after these threads are dead (because of the overhead), but we need to have their right values. And it is unnatural to read some fake volatile variable or to use a fake synchronized block (if it refreshes all data) just to refresh all data.
I hope that my questions are explained well.
Thanks in advance.
Reading the value of a volatile variable creates a happens-before relationship between the write in one thread and the read in another.
See http://jeremymanson.blogspot.co.uk/2008/11/what-volatile-means-in-java.html:
The Java volatile modifier is an example of a special mechanism to
guarantee that communication happens between threads. When one thread
writes to a volatile variable, and another thread sees that write, the
first thread is telling the second about all of the contents of memory
up until it performed the write to that volatile variable.
Synchronized blocks create a happens-before relationship as well. See http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/package-summary.html
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor. And because the happens-before relation
is transitive, all actions of a thread prior to unlocking
happen-before all actions subsequent to any thread locking that
monitor.
This has the same effect on visibility.
If a value is written without any kind of synchronization then there is no guarantee that other threads will ever see this value. If you want to share values across threads, you should be adding some kind of synchronization or locking, or using volatile.
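As a tiny sketch of the volatile option (the field names are made up): without volatile on done the reader loop is not guaranteed ever to observe the update; with it, the volatile write happens-before the volatile read that sees it, so the earlier write to payload becomes visible as well.

class FlagExample {
    private volatile boolean done;
    private int payload;

    void writer() {
        payload = 42;                 // ordinary write...
        done = true;                  // ...published by the volatile write
    }

    void reader() {
        while (!done) { }             // spins until the volatile read sees true
        System.out.println(payload);  // guaranteed to print 42
    }
}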
All your questions are answered in the java.util.concurrent package documentation.
But what about synchronized?
Here's what the documentation says:
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
The value could be written on the thread's stack, but maybe the main thread will not see this change; the background thread will die and the new value is gone and cannot be retrieved?
If the value is written on the thread's stack, then we're talking about a local variable. Local variables are not accessible anywhere except in the method declaring them. So if the thread dies, of course the stack no longer exists and the local variable doesn't exist either. If you're talking about a field of an object, that is stored on the heap. If no other thread has a reference to this object, and the reference is unreachable from any static variable, then it will be garbage collected.
and our main thread will wait for both of them to finish their work and will continue after that, how can we assure that all variable changes (which are made by the background threads) will be visible to the main thread
The documentation says:
All actions in a thread happen-before any other thread successfully returns from a join on that thread.
So, since the main thread waits for the background threads to die, it uses join(), and every action made by the background threads will be visible to the main thread after join() returns.
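A short sketch of that join() rule (the class and field names are made up): everything the background threads wrote before terminating is visible to the main thread once the corresponding join() returns, with no volatile or synchronized needed.

class JoinVisibility {
    static int resultA;   // written only by workerA
    static int resultB;   // written only by workerB

    public static void main(String[] args) throws InterruptedException {
        Thread workerA = new Thread(() -> resultA = 1);
        Thread workerB = new Thread(() -> resultB = 2);
        workerA.start();
        workerB.start();
        workerA.join();   // all of workerA's actions happen-before this return
        workerB.join();   // likewise for workerB
        System.out.println(resultA + resultB);  // guaranteed to print 3
    }
}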
Fully covering this topic at the level your question seems to imply would require more than a StackOverflow.com answer, so I recommend looking for a good book on multithreaded programming.
volatile guarantees that read and write accesses to the qualified variable are totally ordered with respect to other accesses to the same volatile variable1.
It does this by preventing the volatile read and write accesses from being reordered with previous or future instructions and by enforcing that all the side effects that occurred before the access in the writing thread are visible to a reading thread.
This means that volatile variables are read and written just as you see in your code, as if the instructions were executed one at a time, beginning the next one only when all the side effects of the previous one are completed and visible to every other thread.
To better understand what this means and why it is necessary, look at this question of mine on the difference between Memory Barriers and lock-prefixed instructions.
Note that volatile in Java is much, much stronger than volatile in C or C++. It guarantees more than the usual treatment of read/write accesses as side effects with regard to optimization. This means it doesn't simply imply that the variable is read from memory every time; a Java volatile is a memory barrier.
The synchronized block simply guarantees exclusive (i.e. one thread at a time) execution of a block of code.
It doesn't imply that all threads see the memory accesses in the same order; a thread could first see the write to a protected shared data structure and only then the write to the lock!
1 In some circumstances, where a full memory barrier is emitted, this may enforce that writes and reads to all volatile variables made by a thread T will be visible to other threads in T's program order. Note that this does not suffice for synchronization, as there is still no ordering relationship between inter-thread accesses.
No.
Shared vars are not on the stack of any thread; a copy of their value can be, but the variables exist independently of any thread.
When a thread dies gracefully, the write is done and can be retrieved (beware of memory ordering again).
If the thread dies forcefully, it could be interrupted anywhere. However the actual Java assignment is implemented, if the thread stops before the write to the shared var (by direct assignment or by copying a local value from the stack), then the write never happened.
You don't need synchronized, just volatile, as the main thread only reads and the background threads only write (to different vars).

memory visibility on lock acquirement

Does memory visibility depend on which monitor is used? Lock B is acquired after lock A is released; is that enough for memory visibility?
for example following code:
int state; // shared
// thread A
synchronized (A) {
    state += 1;
}
Thread.sleep(10000000);
// thread B
Thread.sleep(1000);
synchronized (B) {
    state += 1;
}
The threads are started at the same time, and thread B's sleep time may be arbitrarily high, just to ensure that it executes after thread A has used the state variable. Thread A's sleep time is used to ensure that the thread does not finish before thread B uses the shared state variable.
UPDATE
From http://www.ibm.com/developerworks/library/j-jtp03304/
When a thread exits a synchronized block as part of releasing the associated monitor, the JMM requires that the local processor cache be flushed to main memory.
Similarly, as part of acquiring the monitor when entering a synchronized block, local caches are invalidated so that subsequent reads will go directly to main memory and not the local cache.
If this is true, then I see no reason for the state variable not to be visible to thread B.
Further, however, they do say that the monitor should be the same, but that is not implied by the aforementioned statements.
This process guarantees that when a variable is written by one thread during a synchronized block protected by a given monitor and read by another thread during a synchronized block protected by the same monitor, the write to the variable will be visible by the reading thread.
It seems that the process of flushing local memory is not as straightforward as described in the first statement and may not happen on every lock release?
Yes, it depends. You can read this doc about it. The relevant section is "17.4.4. Synchronization Order":
An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).
You see, a concrete monitor object m is specified there. If the monitors are different, then you do not get a synchronizes-with relationship and hence you do not get a happens-before relationship (from 17.4.5):
If an action x synchronizes-with a following action y, then we also have hb(x, y).
So your updates may be performed out of order, with possibly missing updates.
Does memory visibility depend on which monitor is used? Yes.
Lock B is acquired after lock A is released; is that enough for memory visibility? No.
The two threads have to synchronize on the same monitor in order to see each other's writes. In your example, both threads could see state having the value 1, no matter what sleep intervals you insert. It of course depends on the implementation of the JVM you're using, and different JVMs could yield different results. Basically, you have unsynchronized access to a field, and that should always be avoided (because it's not deterministic what value state has).
Read more in the excellent chapter on the Memory Model in the Java Specification.
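For completeness, a sketch of a corrected version under the question's setup (the lock name is made up): once both threads synchronize on the same monitor, the unlock in one thread happens-before the lock in the other, and neither increment is lost.

class SharedState {
    private final Object lock = new Object();
    private int state;

    void incrementFromThreadA() {
        synchronized (lock) {
            state += 1;
        }
    }

    void incrementFromThreadB() {
        synchronized (lock) {   // same monitor as thread A: happens-before is established
            state += 1;
        }
    }
}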
