I read some Java code and found these functions:
synchronized void setConnected(boolean connected){
this.connected = connected;
}
synchronized boolean isConnected(){
return connected;
}
I wonder if synchronization makes any sense here, or whether the author just didn't understand the need for the synchronized keyword.
I'd suppose that synchronized is useless here. Or am I mistaken?
The keyword synchronized is one way of ensuring thread-safety. Beware: there's (way) more to thread-safety than deadlocks, or missing updates because of two threads incrementing an int without synchronization.
Consider the following class:
class Connection {
private boolean connected;
synchronized void setConnected(boolean connected){
this.connected = connected;
}
synchronized boolean isConnected(){
return connected;
}
}
If multiple threads share an instance of Connection and one thread calls setConnected(true), then without synchronized it is possible that other threads keep seeing isConnected() == false. The synchronized keyword guarantees that all threads see the current value of the field.
In more technical terms, the synchronized keyword ensures a memory barrier (hint: google that).
In more detail: every write made before releasing a monitor (i.e. before leaving a synchronized block) is guaranteed to be seen by every read made after acquiring the same monitor (i.e. after entering a block synchronized on the same object). In Java there's something called happens-before (hint: google that), which is not as trivial as "I wrote the code in this order, so things get executed in this order". Using synchronized is a way to establish a happens-before relationship and guarantee that threads see the memory as you would expect them to.
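To make that concrete, here is a small sketch (invented names, not code from the question): all writes performed before the monitor is released in publish() are visible to a thread that later acquires the same monitor in poll(), including the write to message, not only the write to ready.

class StatusBoard {
    private String message = "";   // plain field: not volatile
    private boolean ready;         // plain field: not volatile

    synchronized void publish(String m) {
        message = m;               // both writes happen before the monitor is released...
        ready = true;
    }

    synchronized String poll() {
        // ...so a thread that acquires the same monitor afterwards
        // is guaranteed to see both of them
        return ready ? message : null;
    }
}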
Another way to achieve the same guarantees, in this case, would be to eliminate the synchronized keyword and mark the field volatile. The guarantees provided by volatile are as follows: all writes made by a thread before a volatile write are guaranteed to be visible to a thread after a subsequent volatile read of the same field.
As a final note, in this particular case it might be better to use a volatile field instead of synchronized accessors, because the two approaches provide the same guarantees and the volatile-field approach allows simultaneous accesses to the field from different threads (which might improve performance if the synchronized version has too much contention).
Synchronization is needed here to prevent memory consistency errors; see http://docs.oracle.com/javase/tutorial/essential/concurrency/memconsist.html. Though in this concrete case, volatile would be a much more efficient solution:
private volatile boolean connected;
void setConnected(boolean connected){
this.connected = connected;
}
boolean isConnected(){
return connected;
}
The author has probably designed the code with a multi-threaded approach in mind. Because the methods are synchronized, no more than one thread can execute the synchronized code at the same time on the same object instance.
In the book "Java Concurrency in Practice", under the section, 3.1.1 State data, there is a code
@NotThreadSafe
public class MutableInteger {
private int value;
public int get() { return value; }
public void set(int value) { this.value = value; }
}
which is not thread-safe, because:
if one thread calls set, other threads calling get may or may not see
that update.
whereas using the synchronized keyword on both the set and get methods makes it "correct". How?
@ThreadSafe
public class SynchronizedInteger {
#GuardedBy("this") private int value;
public synchronized int get() { return value; }
public synchronized void set(int value) { this.value = value; }
}
Here too, if value is 0 and Thread A calls set(2) while Thread B calls get(), B may get the value 0 and then A will set it to 2, which is what the previous code was already doing. So what benefit did we get from synchronizing the code?
Maybe I am missing something, but please guide me. Thank you.
The issue you fix this way is not the case where thread B's get executes immediately before thread A's set; that get will still return the "old" (well, technically correct at the time, but soon to be outdated) value.
The issue the synchronization fixes is that even if thread A wrote before thread B read, B could still read a stale value due to caching (most likely CPU caches, but this depends on the JVM implementation). A non-synchronized read of a non-volatile variable can use a cached value. In other words: synchronized creates a read barrier, which means "you have to re-read this value, even if you already have it in your CPU cache".
Note that for this specific case, simply marking value as volatile would have the same effect, but for more complex access patterns synchronized (or its equivalent in the newer APIs, Lock) is necessary.
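As an illustration (invented names, not from the answer above), here is an access pattern where volatile alone would not be enough: the check and the update form one read-modify-write that must happen under a single lock (or be replaced by an AtomicInteger).

class BoundedCounter {
    private int value;        // guarded by "this"
    private final int max;

    BoundedCounter(int max) {
        this.max = max;
    }

    synchronized boolean incrementIfBelowMax() {
        if (value < max) {    // check...
            value++;          // ...then act: both steps must happen under the same lock
            return true;
        }
        return false;
    }
}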
When you use a synchronized method you get exclusive access to the object whose state is at risk of a race condition. In this case you get exclusive access to value.
This works because a synchronized method acquires the object's intrinsic lock (its monitor).
Synchronized in Java
Java Doc - Here you can find a good example.
From Java Doc:
If count is an instance of SynchronizedCounter, then making these methods synchronized has two effects:
First,
it is not possible for two invocations of synchronized methods on the
same object to interleave.
When one thread is executing a synchronized method for an object, all
other threads that invoke synchronized methods for the same object
block (suspend execution) until the first thread is done with the
object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
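For reference, the SynchronizedCounter class that the linked tutorial talks about looks roughly like this (a sketch from memory, not a verbatim copy):

public class SynchronizedCounter {
    private int c = 0;

    public synchronized void increment() {
        c++;
    }

    public synchronized void decrement() {
        c--;
    }

    public synchronized int value() {
        return c;
    }
}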
It's all about the "Happens Before Relationship", as termed by the official Java documentation.
In your case of two synchronised getter and setter methods reading and writing the same instance variable, the value a thread observes depends on the sequence of operations, i.e., whether the getter or the setter was called first.
This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.
Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.
Synchronisation is one way to achieve this consistency. Another one, in your particular case, would be to make the variable volatile.
From the official Java docs:
Using volatile variables reduces the risk of memory consistency
errors, because any write to a volatile variable establishes a
happens-before relationship with subsequent reads of that same
variable. This means that changes to a volatile variable are always
visible to other threads.
I often use the following pattern to create a cancellable thread:
public class CounterLoop implements Runnable {
private volatile AtomicBoolean cancelPending = new AtomicBoolean(false);
@Override
public void run() {
while (!cancelPending.get()) {
//count
}
}
public void cancel() {
cancelPending.set(true);
}
}
But I'm not sure that cancelPending MUST be an AtomicBoolean. Can we just use a normal boolean in this case?
Using both volatile and AtomicBoolean is unnecessary. If you declare the cancelPending variable as final as follows:
private final AtomicBoolean cancelPending = new AtomicBoolean(false);
the JLS semantics for final fields mean that synchronization (or volatile) will not be needed. All threads will see the correct value for the cancelPending reference. JLS 17.5 states:
"An object is considered to be completely initialized when its constructor finishes. A thread that can only see a reference to an object after that object has been completely initialized is guaranteed to see the correctly initialized values for that object's final fields."
... but there are no such guarantees for normal fields; i.e. not final and not volatile.
You could also just declare cancelPending as a volatile boolean ... since you don't appear to be using the test-and-set capability of AtomicBoolean.
However, if you used a non-volatile boolean you would need to use synchronized to ensure that all threads see an up-to-date copy of the cancelPending flag.
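For illustration, a sketch of that volatile-boolean variant, based on the CounterLoop class from the question:

public class CounterLoop implements Runnable {
    private volatile boolean cancelPending = false;

    @Override
    public void run() {
        while (!cancelPending) {
            // count
        }
    }

    public void cancel() {
        cancelPending = true;   // visible to the running thread on its next check
    }
}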
You can use a volatile boolean instead with no issues.
Note that this only applies in cases much like this where the boolean is only being changed to a specific value (true in this case). If the boolean might be changed to either true or false at any time then you may need an AtomicBoolean to detect and act on race conditions.
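If you did need that behaviour, a sketch of a cancel() method for the question's CounterLoop showing how AtomicBoolean helps (cleanUpOnce() is a hypothetical method, not part of the question's code):

public void cancel() {
    // compareAndSet succeeds for exactly one caller, even if several
    // threads call cancel() at the same time
    if (cancelPending.compareAndSet(false, true)) {
        cleanUpOnce();   // hypothetical one-time cleanup hook
    }
}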
However - the pattern you describe has an innate smell. By looping on a boolean (volatile or not) you are likely to find yourself trying to insert some sort of sleep mechanism or having to interrupt your thread.
A much cleaner route is to split up the process into finer steps. I recently posted an answer here covering the options of pausing threads that may be of interest.
No, you cannot, because if you change the boolean value from one thread without proper synchronization, the change can be invisible to other threads. You can use a volatile boolean in your case to make every modification visible to all threads.
Yes you can. You can either use a non volatile AtomicBoolean (relying on its built in thread safety), or use any other volatile variable.
According to the Java Memory Model (JMM), both options result in a properly synchronized program, where the read and write of the cancelPending variable can't produce a data race.
Using a volatile boolean variable in this context is safe, though some may consider it bad practice. Consult this thread to see why.
Your solution of using an Atomic* variable seems the best option, even though the synchronization may introduce unnecessary overhead in comparison to a volatile variable.
You can also use a critical section:
private boolean cancelPending = false;   // guarded by lock
private final Object lock = new Object();

@Override
public void run() {
    while (true) {
        synchronized (lock) {
            if (cancelPending) {
                return;
            }
        }
        // count
    }
}
or a synchronized method.
synchronized public boolean shouldStop() {
return shouldStop;
}
synchronized public void setStop(boolean stop) {
shouldStop = stop;
}
For example I have a class with 2 counters (in multi-threaded environment):
public class MyClass {
private int counter1;
private int counter2;
public synchronized void increment1() {
counter1++;
}
public synchronized void increment2() {
counter2++;
}
}
There are 2 increment operations not related to each other, but I use the same object for the lock (this).
Is it true that if clients simultaneously call the increment1() and increment2() methods, the increment2() invocation will be blocked until increment1() releases the this monitor?
If it's true, does it mean that I need to provide different monitor locks for each operation (for performance reasons)?
Is it true that if clients simultaneously call the increment1() and increment2() methods, the increment2() invocation will be blocked until increment1() releases the this monitor?
If they're called on the same instance, then yes.
If it's true, does it mean that I need to provide different monitor locks for each operation (for performance reasons)?
Only you can know that. We don't know your performance requirements. Is this actually a problem in your real code? Are your real operations long-lasting? Do they occur very frequently? Have you performed any diagnostics to estimate the impact of this? Have you profiled your application to find out how much time is being spent waiting for the monitor at all, let alone when it's unnecessary?
I would actually suggest not synchronizing on this for entirely different reasons. It's already hard enough to reason about threading when you do control everything - but when you don't know everything which can acquire a monitor, you're on a hiding to nothing. When you synchronize on this, it means that any other code which has a reference to your object can also synchronize on the same monitor. For example, a client could use:
synchronized (myClass) {
// Do something entirely different
}
This can lead to deadlocks, performance issues, all kinds of things.
If you use a private final field in your class instead, with an object created just to be a monitor, then you know that the only code acquiring that monitor will be your code.
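For example, a minimal sketch of that private-monitor pattern applied to one of the counters (the field name is arbitrary):

private final Object lock = new Object();   // no code outside this class can synchronize on it

public void increment1() {
    synchronized (lock) {   // instead of a synchronized method, which locks on this
        counter1++;
    }
}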
1) yes it's true that increment1() blocks increment2() and vice versa because they both are implicitly synchronizing on this
2) if you need better performance, consider the lock-free java.util.concurrent.atomic.AtomicInteger class:
private AtomicInteger counter1 = new AtomicInteger();
private AtomicInteger counter2 = new AtomicInteger();
public void increment1() {
counter1.getAndIncrement();
}
public void increment2() {
counter2.getAndIncrement();
}
If you synchronize on the method, as you did here, you lock the whole object, so two threads accessing different variables of the same object would still block each other.
If you want to synchronize only one counter at a time, so that two threads won't block each other while accessing different variables, put the two increments in two synchronized blocks and use different variables as the "lock" of the two blocks, as sketched below.
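A sketch of that two-lock variant, reusing the MyClass from the question:

public class MyClass {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private int counter1;
    private int counter2;

    public void increment1() {
        synchronized (lock1) {   // blocks only other increment1() calls
            counter1++;
        }
    }

    public void increment2() {
        synchronized (lock2) {   // independent of increment1()
            counter2++;
        }
    }
}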
You are right, it will be a performance bottleneck if you use the same object. You can use a different lock for each counter, or use java.util.concurrent.atomic.AtomicInteger for a concurrent counter.
Like:
public class Counter {
private AtomicInteger count = new AtomicInteger(0);
public void incrementCount() {
count.incrementAndGet();
}
public int getCount() {
return count.get();
}
}
Yes the given code is identical to the following:
public void increment1() {
synchronized(this) {
counter1++;
}
}
public void increment2() {
synchronized(this) {
counter2++;
}
}
which means that only one method can be executed at the same time. You should either provide different locks (and locking on this is a bad idea to begin with), or some other solution. The second one is the one you actually want here: AtomicInteger
Yes, if multiple threads try to call methods on your object they will wait trying to get the lock (although the order in which they get the lock isn't guaranteed). As with everything, there is no reason to optimise until you know this is the bottleneck in your code.
If you need the performance benefits that come from being able to call both operations in parallel, then yes, you do need to provide different monitor objects for the different operations.
However, there is something to be said for premature optimization and that you should make sure that you need it before making your program more complex to accommodate it.
Sorry this is such a long question.
I've been doing lots of research lately into multi-threading as I slowly implement it into a personal project. However, probably due to an abundance of slightly incorrect examples, the use of synchronized blocks and volatility in certain situations is still a bit unclear to me.
My core question is this: Are changes to references and primitives automatically volatile (that is, performed on the main memory and not a cache) when a thread is inside a synchronized block, or does the read also have to be synchronized for it to work properly?
If so: what is the purpose of synchronizing a simple getter method? (see Example 1) Also, are ALL changes sent to main memory as long as the thread has synchronized on anything? E.g. if it is sent off to do loads of work all over the place inside a very high-level sync, will every single change then be made to main memory, and nothing ever to cache, until it's unlocked again?
If not: does the change have to be explicitly inside a synchronized block, or can Java actually pick up on, for example, uses of the Lock object? (see Example 3)
If either: does the synchronized object need to be related to the reference/primitive being changed in any way (e.g. the immediate object that contains it)? Can I write by syncing on one object and read with another if it's otherwise safe? (see Example 2)
(please note for the following examples that I know that synchronized methods and synchronized(this) are frowned upon and why, but discussion about that is beyond the scope of my question)
Example 1:
class Counter{
int count = 0;
public synchronized void increment(){
count++;
}
public int getCount(){
return count;
}
}
In this example, increment() needs to be synchronized since ++ is not an atomic operation. As such, two threads incrementing at the same time may result in an overall increase of only 1 to the count. Writes to the count field also need to be atomic (e.g. not a long or double), and for an int they are, so that's fine.
Does getCount() need to be synchronized here, and why exactly? The explanation I have heard the most is that I will have no guarantee whether the count returned will be the pre- or post-increment. However, this seems like the explanation for something slightly different that's found itself in the wrong place. I mean, if I were to synchronize getCount(), then I still see no such guarantee - it's now down to not knowing the locking order, instead of not knowing whether the actual read happens to be before/after the actual write.
Example 2:
Is the following example thread-safe, if you assume, through trickery not shown here, that none of these methods will ever be called at the same time? Will count increment in an expected way if it is incremented via a random one of these methods each time, and then be read properly, or does the lock have to be the same object? (BTW I fully realise how ridiculous this example is, but I'm more interested in theory than practice.)
class Counter{
private final Object lock1 = new Object();
private final Object lock2 = new Object();
private final Object lock3 = new Object();
int count = 0;
public void increment1(){
synchronized(lock1){
count++;
}
}
public void increment2(){
synchronized(lock2){
count++;
}
}
public int getCount(){
synchronized(lock3){
return count;
}
}
}
Example 3:
Is the happens-before relationship simply a Java concept, or is it an actual thing built into the JVM? Even though I can guarantee a conceptual happens-before relationship for this next example, is Java smart enough to pick it up if it's a built-in thing? I am assuming it is not, but is this example actually thread-safe? If it's thread-safe, what about if getCount() did no locking?
class Counter{
private final Lock lock = new ReentrantLock();
int count = 0;
public void increment(){
lock.lock();
count++;
lock.unlock();
}
public int getCount(){
lock.lock();
int count = this.count;
lock.unlock();
return count;
}
}
Yes, the read has to be synchronized as well. This page says:
The results of a write by one thread are guaranteed to be visible to a
read by another thread only if the write operation happens-before the
read operation.
[...]
An unlock (synchronized block or method exit) of a monitor
happens-before every subsequent lock (synchronized block or method
entry) of that same monitor
The same page says:
Actions prior to "releasing" synchronizer methods such as Lock.unlock,
Semaphore.release, and CountDownLatch.countDown happen-before actions
subsequent to a successful "acquiring" method such as Lock.lock
So locks offer the same visibility guarantees as synchronized blocks.
Whether you use synchronized blocks or locks, the visibility is only guaranteed if the reader thread uses the same monitor or lock as the writer thread.
Your Example 1 is incorrect: the getter must be synchronized as well if you want to see the latest value of the count.
Your example 2 is incorrect because it uses different locks to guard the same count.
Your example 3 is OK. If the getter did not lock, you could see an older value of the count. The happens-before is something that is guaranteed by the JVM. The JVM has to respect the rules specified, by flushing caches to the main memory for example.
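As a side note, the idiomatic way to use an explicit lock is to release it in a finally block so it is freed even if the guarded code throws; here is a sketch of Example 3 written that way, assuming java.util.concurrent.locks.ReentrantLock:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();   // always released, even on an exception
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}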
Try to view it in terms of two distinct, simple operations:
Locking (mutual exclusion),
Memory barrier (cache sync, instruction reordering barrier).
Entering a synchronized block entails both locking and a memory barrier; leaving the synchronized block entails unlocking plus a memory barrier; reading or writing a volatile field entails a memory barrier only. Thinking in these terms, I think you can clarify all the questions above for yourself.
As for Example 1, the reading thread will not have any kind of memory barrier. It's not just a question of seeing the value before or after the write; the reading thread might never observe any change to the variable after it has started.
Example 2. is the most interesting issue you raise. You are indeed given no guarantees by the JLS in this case. In practice you won't be given any ordering guarantees (it's as if the locking aspect wasn't there at all), but you'll still have the benefit of the memory barriers so you will observe changes, unlike the first example. Basically, this is exactly the same as removing synchronized and tagging the int as volatile (apart from the runtime costs of acquiring locks).
Regarding Example 3, by "just a Java thing" I feel you have generics with erasure in mind, something that only the static code checking is aware of. This is not like that -- both locks and memory barriers are pure runtime artifacts. In fact, the compiler can't reason about them at all.
I know the difference between synchronized method and synchronized block but I am not sure about the synchronized block part.
Assuming I have this code
class Test {
private int x=0;
private Object lockObject = new Object();
public void incBlock() {
synchronized(lockObject) {
x++;
}
System.out.println("x="+x);
}
public void incThis() { // same as synchronized method
synchronized(this) {
x++;
}
System.out.println("x="+x);
}
}
In this case what is the difference between using lockObject and using this as the lock? It seems to be the same to me..
When you decide to use synchronized block, how do you decide which object to be the lock?
Personally I almost never lock on "this". I usually lock on a privately held reference which I know that no other code is going to lock on. If you lock on "this" then any other code which knows about your object might choose to lock on it. While it's unlikely to happen, it certainly could do - and could cause deadlocks, or just excessive locking.
There's nothing particularly magical about what you lock on - you can think of it as a token, effectively. Anyone locking with the same token will be trying to acquire the same lock. Unless you want other code to be able to acquire the same lock, use a private variable. I'd also encourage you to make the variable final - I can't remember a situation where I've ever wanted to change a lock variable over the lifetime of an object.
I had this same question when I was reading Java Concurrency In Practice, and I thought I'd add some added perspective on the answers provided by Jon Skeet and spullara.
Here's some example code which will block even the "quick" setValue(int)/getValue() methods while the doStuff(ValueHolder) method executes.
public class ValueHolder {
private int value = 0;
public synchronized void setValue(int v) {
// Or could use a synchronized(this) block...
this.value = v;
}
public synchronized int getValue() {
return this.value;
}
}
public class MaliciousClass {
public void doStuff(ValueHolder holder) {
synchronized(holder) {
// Do something "expensive" so setter/getter calls are blocked
}
}
}
The downside of using this for synchronization is that other classes can synchronize on a reference to your object (not via this, of course). Malicious or unintentional use of the synchronized keyword while locking on your object's reference can cause your class to behave poorly under concurrent usage, as an external class can effectively block your this-synchronized methods and there is nothing you can do (in your class) to prohibit this at runtime. To avoid this potential pitfall, you would synchronize on a private final Object or use the Lock interface in java.util.concurrent.locks.
For this simple example, you could alternately use an AtomicInteger rather than synchronizing the setter/getter.
Item 67 of Effective Java Second Edition is Avoid excessive synchronization, thus I would synchronize on a private lock object.
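For completeness, a sketch of the AtomicInteger alternative mentioned above: the same ValueHolder API, but with no intrinsic lock held by the accessors, so a synchronized(holder) block elsewhere cannot stall them.

import java.util.concurrent.atomic.AtomicInteger;

public class ValueHolder {
    private final AtomicInteger value = new AtomicInteger(0);

    public void setValue(int v) {
        value.set(v);
    }

    public int getValue() {
        return value.get();
    }
}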
Every object in Java can act as a monitor. Choosing one depends on what granularity you want. Choosing this has the advantage and disadvantage that other classes could also synchronize on the same monitor. My advice, though, is to avoid using the synchronized keyword directly and instead use constructs from the java.util.concurrent library, which are higher level and have well-defined semantics. This book has a lot of great advice in it from very notable experts:
Java Concurrency in Practice
http://amzn.com/0321349601
In this case it does not matter which object you choose for the lock, but you must consistently use the same object for locking to achieve correct synchronization. The above code does not ensure proper synchronization of x, because one method uses this as the lock and the other uses lockObject.
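A sketch of the consistent version implied above, where both methods guard x with the same lockObject:

class Test {
    private int x = 0;
    private final Object lockObject = new Object();

    public void incBlock() {
        synchronized (lockObject) {
            x++;
        }
    }

    public void incThis() {
        synchronized (lockObject) {   // was synchronized(this); now uses the same lock as incBlock()
            x++;
        }
    }
}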