Java threading synchronized block behavior - synchronized vs synchronized()? [duplicate] - java

This question already has answers here:
Is there an advantage to use a Synchronized Method instead of a Synchronized Block?
(23 answers)
Closed 9 years ago.
I have a simple question, but I'm having trouble finding an answer to it.
The question is whether a synchronized method is equivalent to synchronized(this), i.e. whether they do the same locking.
I want to write thread-safe code with reduced locking (I don't want to always use synchronized methods; sometimes I want to synchronize only the critical sections).
Could you briefly explain whether this code is equivalent or not, and why (the examples are simplified to show the core problem)?
Examples
Is this mixed-locking code equivalent to the brute-force code below:
public class SynchroMixed {
int counter = 0;
synchronized void writer() {
// some not locked code
int newCounter = counter + 1;
// critical section
synchronized(this) {
counter = newCounter;
}
}
synchronized int reader() {
return counter;
}
}
Brute force code (each method is locked in its entirety, including the non-critical section):
public class SynchroSame {
int counter = 0;
synchronized void writer() {
int newCounter = counter + 1;
counter = newCounter;
}
synchronized int reader() {
return counter;
}
}
Or should I write this code instead (it is certainly valid, but more fine-grained and arguably less clear)?
public class SynchroMicro {
int counter = 0;
void writer() {
// some not locked code
int newCounter = counter + 1;
// critical section
synchronized(this) {
counter = newCounter;
}
}
int reader() {
synchronized (this) {
return counter;
}
}
}

A synchronized method and synchronized(this) mean exactly the same thing and use the same monitor behind the scenes. It's more a question of taste which notation to prefer.
Personally I prefer synchronized(this), because it explicitly specifies the scope of the lock, which can be smaller than the whole method.

All three examples are equivalent. Using synchronized on a method is the same as wrapping the entire body within synchronized(this) {}.
Then, by using synchronized(this) {} for some statements, the thread is only re-acquiring a lock it already owns: it's pointless here.

There is definitely no point in synchronized(this) within a synchronized method since entering the method is already implicitly synchronized(this).
That was just a syntax mistake on your part since you clearly intend to reduce the scope of the critical section, but the reduced scope introduces a data race into your code: you must both read and write the shared variable within the same synchronized block.
In addition, even if a method only reads the shared variable, it still must do that in a synchronized block; otherwise it may never observe any writes by other threads. This is the basic semantics of Java's Memory Model.
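For example, a corrected version of your mixed example (a sketch, with the class renamed to avoid confusion with the originals) keeps the read-modify-write of counter inside a single synchronized block:
public class SynchroFixed {
    int counter = 0;

    void writer() {
        // some not locked code

        // critical section: the read and the write of the shared
        // variable must happen under the same lock
        synchronized (this) {
            counter = counter + 1;
        }
    }

    int reader() {
        synchronized (this) {
            return counter;
        }
    }
}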
Now, if what you are showing is really representative of your full problem, then you shouldn't even be using synchronized, but a simple AtomicInteger, which will have the best concurrent performance.
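A minimal sketch of that approach (class and method names are mine, assuming the counter is the only shared state):
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger counter = new AtomicInteger();

    void writer() {
        // some not locked code
        counter.incrementAndGet(); // atomic read-modify-write, no explicit lock needed
    }

    int reader() {
        return counter.get(); // also gives the necessary visibility guarantees
    }
}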

Synchronized methods and synchronized blocks are equivalent from a functional point of view. They both do the same task, i.e. prevent concurrent access to a particular method or block of code within a method.
A synchronized() block is more flexible and handy when you have a long method and only part of it needs to be synchronized. You need not lock access to the entire method, and since synchronization has a performance cost, it is generally recommended to synchronize only the part of the code that needs it, not the entire method (unless required).

Related

Why this Thread code print wrong unpredicted result sometimes? [duplicate]

This question already has answers here:
Java MultiThreading skips loop and gives wrong result [duplicate]
(3 answers)
Closed 1 year ago.
I'm a Java beginner and this is my first time using threads.
class Counter2 {
private int value = 0;
public void increment() {
value++;
printCounter();
}
public void decrement() {
value--;
printCounter();
}
public void printCounter() {
System.out.println(value);
}
}
class MyThread3 extends Thread {
Counter2 sharedCounter;
public MyThread3(Counter2 c) {
this.sharedCounter = c;
}
public void run() {
int i = 0;
while (i <= 100) {
sharedCounter.increment();
sharedCounter.decrement();
try {
sleep((int) (Math.random() * 2));
} catch (InterruptedException e) {
}
// System.out.println(i);
i++;
}
}
}
public class MyTest {
public static void main(String[] args) {
Thread t1, t2;
Counter2 c = new Counter2();
t1 = new MyThread3(c);
t1.start();
t2 = new MyThread3(c);
t2.start();
}
}
This code has 2 threads and 1 Counter, which is shared between the threads. The threads just repeatedly add 1 and subtract 1 from the counter value. So, I would guess, the result should be 0, because the initial value was 0 and the number of increments and decrements is the same. But sometimes the last printed number is not 0, but -1 or -2 etc. Please explain why this happens.
The Answer by Ranwala is correct.
AtomicInteger
An alternative solution I prefer is the use of the Atomic… classes. Specifically here, AtomicInteger. This class is a thread-safe wrapper around an integer.
Change your member field from Counter2 sharedCounter; to AtomicInteger sharedCounter;. Then use the various methods on that class to increment, to decrement, and to interrogate for current value.
You can then discard your Counter2 class entirely.
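A sketch of what that change could look like (the thread class is kept, only the counter type changes; the printing stays inline):
import java.util.concurrent.atomic.AtomicInteger;

class MyThread3 extends Thread {
    AtomicInteger sharedCounter;

    public MyThread3(AtomicInteger c) {
        this.sharedCounter = c;
    }

    public void run() {
        for (int i = 0; i <= 100; i++) {
            System.out.println(sharedCounter.incrementAndGet()); // atomic +1
            System.out.println(sharedCounter.decrementAndGet()); // atomic -1
            try {
                sleep((int) (Math.random() * 2));
            } catch (InterruptedException e) {
            }
        }
    }
}
In main, Counter2 c = new Counter2(); then becomes AtomicInteger c = new AtomicInteger(); and the rest stays the same.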
Executors
Also, you should know that in modern Java, we rarely need to address the Thread class directly. Instead we use the executors framework added to Java 5.
Define your tasks as either a Runnable or Callable. No need to extend from Thread.
See tutorial by Oracle, and search existing posts here on Stack Overflow.
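As a hedged sketch (not your original code), the same two workers could be submitted to a thread pool as Runnables, reusing the AtomicInteger idea from above:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class MyTest {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i <= 100; i++) {
                System.out.println(counter.incrementAndGet());
                System.out.println(counter.decrementAndGet());
            }
        };

        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(task); // first worker
        pool.submit(task); // second worker
        pool.shutdown();   // accept no new tasks; submitted tasks still run to completion
    }
}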
There are two issues here. They are atomicity and visibility aspects of concurrency. Both increment and decrement are compound actions and need to be atomically performed in a multi-threaded environment. Apart from that you should not read a stale value whenever you read the counter. None of these are guaranteed by your current implementation.
Coming back to the solution, one naive way of achieving this is by using synchronized methods which uses a lock on the current instance to achieve the thread-safety. But that comes at a fairly high cost and incurs more lock overhead.
A much better approach would be to use CAS based non-blocking synchronization to achieve the task at hand. Here's how it looks in practice.
import java.util.concurrent.atomic.LongAdder;

class Counter2 {
    private final LongAdder value = new LongAdder();

    public void increment() {
        value.increment();
        printCounter();
    }

    public void decrement() {
        value.decrement();
        printCounter();
    }

    public void printCounter() {
        System.out.println(value.intValue());
    }
}
Since you are a beginner, I recommend reading the great book Java Concurrency in Practice (1st Edition), which explains all these basics in a very nice, graspable manner, written by some of the great authors of our era! If you have any questions about the contents of the book, you are welcome to post them here too. Read it from cover to cover at least twice!
Update
CAS, short for compare-and-swap, is a lock-free synchronization scheme built on low-level CPU instructions. Here it reads the value of the counter before the increment/decrement, and at the time of the update it checks whether the initial value is still there. If so, it updates the value successfully. Otherwise, chances are that another thread is concurrently updating the value of the counter, so the increment/decrement operation fails and is retried.
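A sketch of that retry loop written out explicitly with AtomicInteger.compareAndSet (the Atomic classes and LongAdder do something like this for you internally):
import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public void increment() {
        while (true) {
            int current = value.get();   // read the current value
            int next = current + 1;      // compute the new value
            if (value.compareAndSet(current, next)) {
                return;                  // nobody changed it in between: success
            }
            // another thread updated the counter concurrently: retry
        }
    }
}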

Class with all methods synchronised will behave as a synchronised block?

public class SynchronizedCounter {
private int c = 0;
public synchronized void increment() {
c++;
}
public synchronized void decrement() {
c--;
}
public synchronized int value() {
return c;
}
}
If there are two threads, each having the same instance of SynchronizedCounter, does this mean that if one thread is calling increment, the other cannot call decrement? Is the above code equivalent to a synchronized object? i.e.
public void run() {
    synchronized (objectReferenceSynchronisedCounter) {
        if (conditionToIncrement)
            objectReference....Counter.increment();
        else
            objectReference....Counter.decrement();
    }
}
There are 2 questions:
If there are two threads, each having the same instance of SynchronizedCounter, does this mean that if one thread is calling increment, the other can not call decrement.
That is correct. The call to decrement will be blocked while the other thread executes increment. And vice-versa.
Is the above code equivalent to a synchronised object? [code follows]
Your second example is slightly different because you include an if statement in the synchronized block. And generally speaking, if a synchronized block includes multiple calls, it is not equivalent to synchronizing each individual call.
There is no such thing as a synchronized object in Java. You synchronize methods or code blocks.
However, and maybe that is what you meant, in both your examples the lock is held on the same object, namely the instance of the object whose methods are called. So apart from the slightly different scope, the 2 examples synchronize in the same way on the same object.
The answer is no.
The synchronized scope is the modified method.
See http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/Semaphore.html
It is exactly a synchronized object. synchronized on the method locks this; once one of your methods is called, the others cannot execute until that method is exited.

Are java getters thread-safe?

Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Consider:
public class A
{
private int a = 5;
private static final Object lock = new Object();
public void incrementA()
{
synchronized(lock)
{
a += 1;
}
}
public int getA()
{
return a;
}
}
I've heard people argue that it's possible for getA() and incrementA() to be called at roughly the same time and have getA() return the wrong thing. However it seems like, in the case that they're called at the same time, even if the getter is synchronized you can get the wrong thing. In fact the "right thing" doesn't even seem defined if these are called concurrently. The big thing for me is that the state remains consistent.
I've also heard talk about JIT optimizations. Given an instance of the above class and the following code (which depends on a being set in another thread):
while(myA.getA() < 10)
{
//incrementA is not called here
}
it is apparently a legal JIT optimization to change this to:
int temp = myA.getA();
while(temp < 10)
{
//incrementA is not called here
}
which can obviously result in an infinite loop.
Why is this a legal optimization? Would this be illegal if a was volatile?
Update
I did a little bit of testing into this.
public class Test
{
private int a = 5;
private static final Object lock = new Object();
public void incrementA()
{
synchronized(lock)
{
a += 1;
}
}
public int getA()
{
return a;
}
public static void main(String[] args)
{
final Test myA = new Test();
Thread t = new Thread(new Runnable(){
public void run() {
while(true)
{
try {
Thread.sleep(100);
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
myA.incrementA();
}
}});
t.start();
while(myA.getA() < 15)
{
System.out.println(myA.getA());
}
}
}
Using several different sleep times, this worked even when a is not volatile. This of course isn't conclusive; the optimization may still be legal. Does anyone have some examples that could trigger such JIT behaviour?
Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Depends on the particulars. It is important to realize that synchronization does two important things. It is not just about atomicity but it is also required because of memory synchronization. If one thread updates the a field, then other threads may not see the update because of memory caching on the local processor. Making the int a field be volatile solves this problem. Making both the get and the set method be synchronized will as well but it is more expensive.
If you want to be able to change and read a from multiple threads, the best mechanism is to use an AtomicInteger.
private AtomicInteger a = new AtomicInteger(5);
public void setA(int a) {
// no need to synchronize because of the magic of the `AtomicInteger` code
this.a.set(a);
}
public int getA() {
// AtomicInteger also takes care of the memory synchronization
return a.get();
}
I've heard people argue that it's possible for getA() and setA() to be called at roughly the same time and have getA() return to wrong thing.
This is true but you can get the wrong value if getA() is called after setA() as well. A bad cache value can stick forever.
which can obviously result in an infinite loop. Why is this a legal optimization?
It is a legal optimization because threads running with their own memory cache asynchronously is one of the important reasons why you see performance improvements with them. If all memory accesses were synchronized with main memory then the per-CPU memory caches would not be used and threaded programs would run a lot slower.
Would this be illegal if a was volatile?
It is not legal if there is some way for a to be altered – by another thread possibly. If a was final then the JIT could make that optimization. If a was volatile or the get method marked as synchronized then it would certainly not be a legal optimization.
It's not thread safe because that getter does not ensure that a thread will see the latest value, as the value may be stale. Having the getter be synchronized ensures that any thread calling the getter will see the latest value instead of a possible stale one.
You basically have two options:
1) Make your int volatile
2) Use an atomic type like AtomicInteger
Using a normal int without synchronization is not thread-safe at all.
Your best solution is to use an AtomicInteger; it was basically designed for exactly this use case.
If this is more of a theoretical "could this be done question", I think something like the following would be safe (but still not perform as well as an AtomicInteger):
public class A
{
private volatile int a = 5;
private static final Object lock = new Object();
public void incrementA()
{
synchronized(lock)
{
final int tmp = a + 1;
a = tmp;
}
}
public int getA()
{
return a;
}
}
The short answer is your example will be thread-safe, if
the variable is declared as volatile, or
the getter is declared as synchronized.
The reason that your example class A is not thread-safe is that one can create a program using it that doesn't have a "well-formed execution" (see JLS 17.4.7).
For instance, consider
// in thread #1
int a1 = A.getA();
Thread.sleep(...);
int a2 = A.getA();
if (a1 == a2) {
    System.out.println("no increment");
}

// in thread #2
A.incrementA();
in the scenario that the increment happens during the sleep.
For this execution to be well-formed, there must be a "happens before" (HB) chain between the assignment to a in incrementA called by thread #2, and the subsequent read of a in getA called by thread #1.
If the two threads synchronize using the same lock object, then there is a HB between one thread releasing the lock and a second thread acquiring the lock. So we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #2 releases lock --HB-->
thread #1 acquires lock --HB-->
thread #1 reads a
If two threads share a volatile variable, there is a HB between any write and any subsequent read (without an intervening write). So we typically get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #1 reads a
Note that incrementA needs to be synchronized to avoid race conditions with other threads calling incrementA.
If neither of the above is true, we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a // No HB!!
thread #1 reads a
Since there is no HB between the write by thread #2 and the subsequent read by thread #1, the JLS does not guarantee that the latter will see the value written by the former.
Note that this is a simplified version of the rules. For the complete version, you need to read all of JLS Chapter 17.
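For completeness, here is a sketch of the synchronized-getter option mentioned above. Note that in the question's class A the increment synchronizes on the static lock field, so the getter must use that same lock object (a plain synchronized modifier on getA() would lock on this instead and establish no happens-before with incrementA):
public class A {
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            a += 1;
        }
    }

    public int getA() {
        synchronized (lock) { // same lock as incrementA, so the read sees the latest write
            return a;
        }
    }
}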

Synchronization of non-final field

A warning is showing every time I synchronize on a non-final class field. Here is the code:
public class X
{
private Object o;
public void setO(Object o)
{
this.o = o;
}
public void x()
{
synchronized (o) // synchronization on a non-final field
{
}
}
}
so I changed the coding in the following way:
public class X
{
private final Object o;
public X()
{
o = new Object();
}
public void x()
{
synchronized (o)
{
}
}
}
I am not sure the above code is the proper way to synchronize on a non-final class field. How can I synchronize a non final field?
First of all, I encourage you to really try hard to deal with concurrency issues on a higher level of abstraction, i.e. solving it using classes from java.util.concurrent such as ExecutorServices, Callables, Futures etc.
That being said, there's nothing wrong with synchronizing on a non-final field per se. You just need to keep in mind that if the object reference changes, the same section of code may be run in parallel. I.e., if one thread runs the code in the synchronized block and someone calls setO(...), another thread can run the same synchronized block on the same instance concurrently.
Synchronize on the object which you need exclusive access to (or, better yet, an object dedicated to guarding it).
It's really not a good idea - because your synchronized blocks are no longer really synchronized in a consistent way.
Assuming the synchronized blocks are meant to be ensuring that only one thread accesses some shared data at a time, consider:
Thread 1 enters the synchronized block. Yay - it has exclusive access to the shared data...
Thread 2 calls setO()
Thread 3 (or still 2...) enters the synchronized block. Eek! It thinks it has exclusive access to the shared data, but thread 1 is still furtling with it...
Why would you want this to happen? Maybe there are some very specialized situations where it makes sense... but you'd have to present me with a specific use case (along with ways of mitigating the sort of scenario I've given above) before I'd be happy with it.
I agree with one of John's comments: you must always use a final lock dummy while accessing a non-final variable to prevent inconsistencies in case the variable's reference changes. So in all cases, and as a first rule of thumb:
Rule#1: If a field is non-final, always use a (private) final lock dummy.
Reason #1: You hold the lock and change the variable's reference by yourself. Another thread waiting outside the synchronized lock will be able to enter the guarded block.
Reason #2: You hold the lock and another thread changes the variable's reference. The result is the same: Another thread can enter the guarded block.
But when using a final lock dummy, there is another problem: you might get stale data, because your non-final object will only be synchronized with RAM when calling synchronized(object). So, as a second rule of thumb:
Rule#2: When locking a non-final object you always need to do both: Using a final lock dummy and the lock of the non-final object for the sake of RAM synchronisation. (The only alternative will be declaring all fields of the object as volatile!)
These locks are also called "nested locks". Note that you must always acquire them in the same order, otherwise you will get a deadlock:
public class X {
    private final Object LOCK = new Object();
    private Object o;

    public void setO(Object o) {
        this.o = o;
    }

    public void x() {
        synchronized (LOCK) { synchronized (o) {
            // do something with o...
        } }
    }
}
As you can see I write the two locks directly on the same line, because they always belong together. Like this, you could even do 10 nesting locks:
synchronized (LOCK1) {
synchronized (LOCK2) {
synchronized (LOCK3) {
synchronized (LOCK4) {
//entering the locked space
}
}
}
}
Note that this code won't break if another thread just acquires an inner lock, such as synchronized (LOCK3), on its own. But it will break if another thread calls something like this:
synchronized (LOCK4) {
synchronized (LOCK1) { //dead lock!
synchronized (LOCK3) {
synchronized (LOCK2) {
//will never enter here...
}
}
}
}
There is only one workaround for such nested locks when handling non-final fields:
Rule #2 - Alternative: Declare all fields of the object as volatile. (I won't talk here about the disadvantages of doing this, e.g. preventing any storage in x-level caches even for reads, and so on.)
So aioobe is quite right: just use java.util.concurrent. Or begin to understand everything about synchronisation and do it yourself with nested locks. ;)
For more details why synchronisation on non-final fields breaks, have a look into my test case: https://stackoverflow.com/a/21460055/2012947
And for more details why you need synchronized at all due to RAM and caches have a look here: https://stackoverflow.com/a/21409975/2012947
I'm not really seeing the correct answer here, which is: it's perfectly alright to do it.
I'm not even sure why it's a warning, there is nothing wrong with it. The JVM makes sure that you get some valid object back (or null) when you read a value, and you can synchronize on any object.
If you plan on actually changing the lock while it's in use (as opposed to e.g. changing it from an init method, before you start using it), you have to make the variable that you plan to change volatile. Then all you need to do is to synchronize on both the old and the new object, and you can safely change the value
public volatile Object lock;
...
synchronized (lock) {
synchronized (newObject) {
lock = newObject;
}
}
There. It's not complicated; writing code with locks (mutexes) is actually quite easy. Writing code without them (lock-free code) is what's hard.
EDIT: So this solution (as suggested by Jon Skeet) might have an issue with the atomicity of the implementation of "synchronized(object){}" while the object reference is changing. I asked separately, and according to Mr. erickson it is not thread safe - see: Is entering synchronized block atomic?. So take it as an example of how NOT to do it - with links explaining why ;)
See the code for how it would work if synchronized() were atomic:
public class Main {
static class Config{
char a='0';
char b='0';
public void log(){
synchronized(this){
System.out.println(""+a+","+b);
}
}
}
static Config cfg = new Config();
static class Doer extends Thread {
char id;
Doer(char id) {
this.id = id;
}
public void mySleep(long ms){
try{Thread.sleep(ms);}catch(Exception ex){ex.printStackTrace();}
}
public void run() {
System.out.println("Doer "+id+" beg");
if(id == 'X'){
synchronized (cfg){
cfg.a=id;
mySleep(1000);
// do not forget to put synchronize(cfg) over setting new cfg - otherwise following will happend
// here it would be modifying different cfg (cos Y will change it).
// Another problem would be that new cfg would be in parallel modified by Z cos synchronized is applied on new object
cfg.b=id;
}
}
if(id == 'Y'){
mySleep(333);
synchronized(cfg) // comment this and you will see inconsistency in log - if you keep it I think all is ok
{
cfg = new Config(); // introduce new configuration
// be aware - don't expect here to be synchronized on new cfg!
// Z might already get a lock
}
}
if(id == 'Z'){
mySleep(666);
synchronized (cfg){
cfg.a=id;
mySleep(100);
cfg.b=id;
}
}
System.out.println("Doer "+id+" end");
cfg.log();
}
}
public static void main(String[] args) throws InterruptedException {
Doer X = new Doer('X');
Doer Y = new Doer('Y');
Doer Z = new Doer('Z');
X.start();
Y.start();
Z.start();
}
}
AtomicReference suits your requirement.
From java documentation about atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef =
new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
In the above example, you replace String with your own object type.
Related SE question:
When to use AtomicReference in Java?
If o never changes for the lifetime of an instance of X, the second version is better style irrespective of whether synchronization is involved.
Now, whether there's anything wrong with the first version is impossible to answer without knowing what else is going on in that class. I would tend to agree with the compiler that it does look error-prone (I won't repeat what the others have said).
Just adding my two cents: I had this warning when I used a component that is instantiated through a designer, so its field cannot really be final, because the constructor cannot take parameters. In other words, I had a quasi-final field without the final keyword.
I think that's why it is just warning: you are probably doing something wrong, but it might be right as well.

Is there an advantage to use a Synchronized Method instead of a Synchronized Block?

Can anyone tell me the advantage of a synchronized method over a synchronized block with an example? Thanks.
There is not a clear advantage of using synchronized method over the block.
Perhaps the only one (but I wouldn't call it an advantage) is that you don't need to include the object reference this.
Method:
public synchronized void method() { // blocks "this" from here....
...
...
...
} // to here
Block:
public void method() {
synchronized( this ) { // blocks "this" from here ....
....
....
....
} // to here...
}
See? No advantage at all.
Blocks do have advantages over methods though, mostly in flexibility, because you can use another object as the lock, whereas synchronizing the method locks on the object itself (this).
Compare:
// locks the whole object
...
private synchronized void someInputRelatedWork() {
...
}
private synchronized void someOutputRelatedWork() {
...
}
vs.
// Using specific locks
Object inputLock = new Object();
Object outputLock = new Object();
private void someInputRelatedWork() {
synchronized(inputLock) {
...
}
}
private void someOutputRelatedWork() {
synchronized(outputLock) {
...
}
}
Also if the method grows you can still keep the synchronized section separated:
private void method() {
... code here
... code here
... code here
synchronized( lock ) {
... very few lines of code here
}
... code here
... code here
... code here
... code here
}
The only real difference is that a synchronized block can choose which object it synchronizes on. A synchronized method can only use 'this' (or the corresponding Class instance for a synchronized class method). For example, these are semantically equivalent:
synchronized void foo() {
...
}
void foo() {
synchronized (this) {
...
}
}
The latter is more flexible since it can compete for the associated lock of any object, often a member variable. It's also more granular because you could have concurrent code executing before and after the block but still within the method. Of course, you could just as easily use a synchronized method by refactoring the concurrent code into separate non-synchronized methods. Use whichever makes the code more comprehensible.
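As a hedged sketch of that refactoring (names are mine): the non-synchronized work is pulled out into its own method, and only the shared-state update stays in a synchronized method:
public class Refactored {
    private int shared;

    public void foo() {
        int prepared = prepare(); // concurrent part, no lock held
        store(prepared);          // only this part holds the lock
    }

    private int prepare() {
        return 42; // placeholder for work that does not touch shared state
    }

    private synchronized void store(int value) {
        shared = value;
    }
}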
Synchronized Method
Pros:
Your IDE can indicate the synchronized methods.
The syntax is more compact.
Forces you to split the synchronized code into separate methods.
Cons:
Synchronizes on this and so makes it possible for outsiders to synchronize on it too.
It is harder to move code outside the synchronized block.
Synchronized block
Pros:
Allows using a private variable for the lock and so forcing the lock to stay inside the class.
Synchronized blocks can be found by searching references to the variable.
Cons:
The syntax is more complicated and so makes the code harder to read.
Personally I prefer using synchronized methods with classes focused only on the thing needing synchronization. Such a class should be as small as possible, so it is easy to review the synchronization. Others shouldn't need to care about synchronization.
The main difference is that if you use a synchronized block you may lock on an object other than this, which allows you to be much more flexible.
Assume you have a message queue and multiple message producers and consumers. We don't want producers to interfere with each other, but the consumers should be able to retrieve messages without having to wait for the producers.
So we just create an object
Object writeLock = new Object();
And from now on, every time a producer wants to add a new message, we just lock on that:
synchronized(writeLock){
// do something
}
So consumers may still read, and producers will be locked.
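One way to read that idea, as a hedged sketch (the underlying queue is a thread-safe ConcurrentLinkedQueue, so consumers can poll without taking writeLock; the lock only serializes producers):
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MessageQueue {
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();
    private final Object writeLock = new Object();

    public void produce(String message) {
        synchronized (writeLock) { // producers do not interfere with each other
            messages.add(message);
        }
    }

    public String consume() {
        return messages.poll();    // consumers never wait for the producer lock
    }
}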
Synchronized method
Synchronized methods have two effects.
First, when one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.
Synchronized Statement
Unlike synchronized methods, synchronized statements must specify the object that provides the intrinsic lock.
Intrinsic Locks and Synchronization
Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a "monitor.") Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility.
Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and then release the intrinsic lock when it's done with them. A thread is said to own the intrinsic lock between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire the lock.
package test;
public class SynchTest implements Runnable {
private int c = 0;
public static void main(String[] args) {
new SynchTest().test();
}
public void test() {
// Create the object with the run() method
Runnable runnable = new SynchTest();
Runnable runnable2 = new SynchTest();
// Create the thread supplying it with the runnable object
Thread thread = new Thread(runnable,"thread-1");
Thread thread2 = new Thread(runnable,"thread-2");
// Here the key point is passing same object, if you pass runnable2 for thread2,
// then its not applicable for synchronization test and that wont give expected
// output Synchronization method means "it is not possible for two invocations
// of synchronized methods on the same object to interleave"
// Start the thread
thread.start();
thread2.start();
}
public synchronized void increment() {
System.out.println("Begin thread " + Thread.currentThread().getName());
System.out.println(this.hashCode() + "Value of C = " + c);
// If we uncomment this for synchronized block, then the result would be different
// synchronized(this) {
for (int i = 0; i < 9999999; i++) {
c += i;
}
// }
System.out.println("End thread " + Thread.currentThread().getName());
}
// public synchronized void decrement() {
// System.out.println("Decrement " + Thread.currentThread().getName());
// }
public int value() {
return c;
}
@Override
public void run() {
this.increment();
}
}
Cross check different outputs with synchronized method, block and without synchronization.
Note: static synchronized methods and blocks work on the Class object.
public class MyClass {
    // locks MyClass.class
    public static synchronized void foo() {
        // do something
    }

    // the equivalent block form (given a different name so both compile)
    public static void fooBlock() {
        synchronized (MyClass.class) {
            // do something
        }
    }
}
When java compiler converts your source code to byte code, it handles synchronized methods and synchronized blocks very differently.
When the JVM executes a synchronized method, the executing thread identifies that the method's method_info structure has the ACC_SYNCHRONIZED flag set, then it automatically acquires the object's lock, calls the method, and releases the lock. If an exception occurs, the thread automatically releases the lock.
Synchronizing a method block, on the other hand, bypasses the JVM's built-in support for acquiring an object's lock and exception handling and requires that the functionality be explicitly written in byte code. If you read the byte code for a method with a synchronized block, you will see more than a dozen additional operations to manage this functionality.
This shows calls to generate both a synchronized method and a synchronized block:
public class SynchronizationExample {
private int i;
public synchronized int synchronizedMethodGet() {
return i;
}
public int synchronizedBlockGet() {
synchronized( this ) {
return i;
}
}
}
The synchronizedMethodGet() method generates the following byte code:
0: aload_0
1: getfield
2: nop
3: iconst_m1
4: ireturn
And here's the byte code from the synchronizedBlockGet() method:
0: aload_0
1: dup
2: astore_1
3: monitorenter
4: aload_0
5: getfield
6: nop
7: iconst_m1
8: aload_1
9: monitorexit
10: ireturn
11: astore_2
12: aload_1
13: monitorexit
14: aload_2
15: athrow
One significant difference between a synchronized method and a synchronized block is that a synchronized block generally reduces the scope of the lock. As the scope of the lock is inversely proportional to performance, it's always better to lock only the critical section of code. One of the best examples of using a synchronized block is double-checked locking in the Singleton pattern, where instead of locking the whole getInstance() method we only lock the critical section of code which is used to create the Singleton instance. This improves performance drastically because locking is only required once or twice.
While using synchronized methods, you will need to take extra care if you mix both static synchronized and non-static synchronized methods.
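A hedged sketch of why mixing them needs care: the static synchronized method locks on the Class object, while the instance synchronized method locks on this, so the two do not exclude each other even though both are marked synchronized:
public class Mixed {
    private static int staticCounter;
    private int instanceCounter;

    // locks on Mixed.class
    public static synchronized void incrementStatic() {
        staticCounter++;
    }

    // locks on this; can run at the same time as incrementStatic()
    public synchronized void incrementInstance() {
        instanceCounter++;
    }
}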
Most often I use this to synchronize access to a list or map but I don't want to block access to all methods of the object.
In the following code one thread modifying the list will not block waiting for a thread that is modifying the map. If the methods were synchronized on the object then each method would have to wait even though the modifications they are making would not conflict.
private List<Foo> myList = new ArrayList<Foo>();
private Map<String,Bar> myMap = new HashMap<String,Bar>();

public void put( String s, Bar b ) {
    synchronized( myMap ) {
        myMap.put( s, b );
        // then some thing that may take a while like a database access or RPC or notifying listeners
    }
}

public boolean hasKey( String s ) {
    synchronized( myMap ) {
        return myMap.containsKey( s );
    }
}

public void add( Foo f ) {
    synchronized( myList ) {
        myList.add( f );
        // then some thing that may take a while like a database access or RPC or notifying listeners
    }
}

public Foo getMedianFoo() {
    Foo med = null;
    synchronized( myList ) {
        Collections.sort(myList);
        med = myList.get(myList.size()/2);
    }
    return med;
}
With synchronized blocks, you can have multiple synchronizers, so that multiple simultaneous but non-conflicting things can go on at the same time.
Synchronized methods can be checked using reflection API. This can be useful for testing some contracts, such as all methods in model are synchronized.
The following snippet prints all the synchronized methods of Hashtable:
for (Method m : Hashtable.class.getMethods()) {
if (Modifier.isSynchronized(m.getModifiers())) {
System.out.println(m);
}
}
Important note on using the synchronized block: careful what you use as lock object!
The code snippet from user2277816 above illustrates this point in that a reference to a string literal is used as locking object.
Realize that string literals are automatically interned in Java and you should begin to see the problem: every piece of code that synchronizes on the literal "lock", shares the same lock! This can easily lead to deadlocks with completely unrelated pieces of code.
It is not just String objects that you need to be careful with. Boxed primitives are also a danger, since autoboxing and the valueOf methods can reuse the same objects, depending on the value.
For more information see:
https://www.securecoding.cert.org/confluence/display/java/LCK01-J.+Do+not+synchronize+on+objects+that+may+be+reused
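A hedged sketch of the problem: these two classes look unrelated, but because the literal "lock" is interned they share a single lock object (boxed primitives can behave the same way, since Integer.valueOf caches small values):
class ComponentA {
    private final String lock = "lock"; // interned: the very same object as ComponentB's lock

    void doWork() {
        synchronized (lock) {
            // ...
        }
    }
}

class ComponentB {
    private final String lock = "lock"; // same interned String instance

    void doWork() {
        synchronized (lock) {
            // ...
        }
    }
}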
Often using a lock at the method level is too coarse. Why lock up a piece of code that does not access any shared resources by locking an entire method? Since each object has a lock, you can create dummy objects to implement block-level synchronization.
The block level is more efficient because it does not lock the whole method.
Here some example
Method Level
class MethodLevel {
//shared among threads
SharedResource x, y ;
public synchronized void method1() {
//multiple threads can't access
}
public synchronized void method2() {
//multiple threads can't access
}
public void method3() {
//not synchronized
//multiple threads can access
}
}
Block Level
class BlockLevel {
//shared among threads
SharedResource x, y ;
//dummy objects for locking
Object xLock = new Object();
Object yLock = new Object();
public void method1() {
synchronized(xLock){
//access x here. thread safe
}
//do something here but don't use SharedResource x, y
// because will not be thread-safe
synchronized(xLock) {
synchronized(yLock) {
//access x,y here. thread safe
}
}
//do something here but don't use SharedResource x, y
//because will not be thread-safe
}//end of method1
}
[Edit]
For collections: Vector and Hashtable are synchronized, while ArrayList and HashMap are not, so for the latter you need to use the synchronized keyword or invoke the Collections synchronized wrapper methods:
Map myMap = Collections.synchronizedMap (myMap); // single lock for the entire map
List myList = Collections.synchronizedList (myList); // single lock for the entire list
The only difference: synchronized blocks allow granular locking, unlike synchronized methods.
Basically, synchronized blocks or methods are used to write thread-safe code by avoiding memory inconsistency errors.
This question is very old and many things have changed during the last 7 years.
New programming constructs have been introduced for thread safety.
You can achieve thread safety by using the advanced concurrency API instead of synchronized blocks. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
A better replacement for synchronized is ReentrantLock, which uses the Lock API
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
Example with locks:
import java.util.concurrent.locks.ReentrantLock;

class X {
private final ReentrantLock lock = new ReentrantLock();
// ...
public void m() {
lock.lock(); // block until condition holds
try {
// ... method body
} finally {
lock.unlock();
}
}
}
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Refer to this related question too:
Synchronization vs Lock
A synchronized method locks on the whole object (this), so all synchronized instance methods of that object exclude each other.
A synchronized block is used to lock on a specific object of your choosing.
In general these are mostly the same other than being explicit about the object's monitor that's being used vs the implicit this object. One downside of synchronized methods that I think is sometimes overlooked is that in using the "this" reference to synchronize on you are leaving open the possibility of external objects locking on the same object. That can be a very subtle bug if you run into it. Synchronizing on an internal explicit Object or other existing field can avoid this issue, completely encapsulating the synchronization.
As already said here, a synchronized block can use a user-defined variable as the lock object, whereas a synchronized function uses only this. And of course you can choose which areas of your function should be synchronized.
But everyone says there is no difference between a synchronized function and a block covering the whole function using this as the lock object. That is not quite true: the difference is in the byte code generated in the two situations. In the synchronized-block case a local variable must be allocated to hold the reference to this. As a result the function is slightly larger (not relevant if you have only a few such functions).
More detailed explanation of the difference you can find here:
http://www.artima.com/insidejvm/ed2/threadsynchP.html
In the case of synchronized methods, the lock is acquired on the object the method belongs to. But if you go with a synchronized block, you have the option to specify the object on which the lock will be acquired.
Example :
class Example {
    String test = "abc";

    // lock will be acquired on the String test object.
    public void testBlock() {
        synchronized (test) {
            // do something
        }
    }

    // lock will be acquired on the Example object
    public synchronized void testMethod() {
        // do some thing
    }
}
I know this is an old question, but with my quick read of the responses here, I didn't really see anyone mention that at times a synchronized method may be the wrong lock.
From Java Concurrency In Practice (pg. 72):
public class ListHelper<E> {
    public List<E> list = Collections.synchronizedList(new ArrayList<>());
    ...
    public synchronized boolean putIfAbsent(E x) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}
The above code has the appearance of being thread-safe. However, in reality it is not. In this case the lock is obtained on the instance of the class. However, it is possible for the list to be modified by another thread not using that method. The correct approach would be to use
public boolean putIfAbsent(E x) {
synchronized(list) {
boolean absent = !list.contains(x);
if(absent) {
list.add(x);
}
return absent;
}
}
The above code would block all threads trying to modify list from modifying the list until the synchronized block has completed.
As a practical matter, the advantage of synchronized methods over synchronized blocks is that they are more idiot-resistant; because you can't choose an arbitrary object to lock on, you can't misuse the synchronized method syntax to do stupid things like locking on a string literal or locking on the contents of a mutable field that gets changed out from under the threads.
On the other hand, with synchronized methods you can't protect the lock from getting acquired by any thread that can get a reference to the object.
So using synchronized as a modifier on methods is better at protecting your cow-orkers from hurting themselves, while using synchronized blocks in conjunction with private final lock objects is better at protecting your own code from the cow-orkers.
From a Java specification summary:
http://www.cs.cornell.edu/andru/javaspec/17.doc.html
The synchronized statement (§14.17) computes a reference to an object;
it then attempts to perform a lock action on that object and does not
proceed further until the lock action has successfully completed. ...
A synchronized method (§8.4.3.5) automatically performs a lock action
when it is invoked; its body is not executed until the lock action has
successfully completed. If the method is an instance method, it
locks the lock associated with the instance for which it was invoked
(that is, the object that will be known as this during execution of
the body of the method). If the method is static, it locks the
lock associated with the Class object that represents the class in
which the method is defined. ...
Based on these descriptions, I would say most previous answers are correct, and a synchronized method might be particularly useful for static methods, where you would otherwise have to figure out how to get the "Class object that represents the class in which the method was defined."
Edit: I originally thought these were quotes of the actual Java spec. Clarified that this page is just a summary/explanation of the spec
TLDR; Neither use the synchronized modifier nor the synchronized(this){...} expression but synchronized(myLock){...} where myLock is a final instance field holding a private object.
The differences between using the synchronized modifier on the method declaration and the synchronized(..){ } expression in the method body are these:
The synchronized modifier specified on the method's signature
is visible in the generated JavaDoc,
is programmatically determinable via reflection when testing a method's modifier for Modifier.SYNCHRONIZED,
requires less typing and indentation compared to synchronized(this) { .... },
(depending on your IDE) is visible in the class outline and code completion, and
uses the this object as the lock when declared on a non-static method, or the enclosing class when declared on a static method.
The synchronized(...){...} expression allows you
to only synchronize the execution of parts of a method's body,
to be used within a constructor or a (static) initialization block,
to choose the lock object which controls the synchronized access.
However, using the synchronized modifier, or synchronized(...) {...} with this as the lock object (as in synchronized(this) {...}), has the same disadvantage: both use the object's own instance as the lock to synchronize on. This is dangerous because not only the object itself but any other external object/code that holds a reference to that object can also use it as a synchronization lock, with potentially severe side effects (performance degradation and deadlocks).
Therefore best practice is to neither use the synchronized modifier nor the synchronized(...) expression in conjunction with this as lock object but a lock object private to this object. For example:
public class MyService {
    private final Object lock = new Object();

    public void doThis() {
        synchronized(lock) {
            // do code that requires synchronous execution
        }
    }

    public void doThat() {
        synchronized(lock) {
            // do code that requires synchronous execution
        }
    }
}
You can also use multiple lock objects but special care needs to be taken to ensure this does not result in deadlocks when used nested.
public class MyService {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void doThis() {
        synchronized(lock1) {
            synchronized(lock2) {
                // code here is guaranteed not to be executed at the same time
                // as the synchronized code in doThat() and doMore().
            }
        }
    }

    public void doThat() {
        synchronized(lock1) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doMore() may execute concurrently
        }
    }

    public void doMore() {
        synchronized(lock2) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doThat() may execute concurrently
        }
    }
}
I suppose this question is about the difference between Thread Safe Singleton and Lazy initialization with Double check locking. I always refer to this article when I need to implement some specific singleton.
Well, this is a Thread Safe Singleton:
// Java program to create Thread Safe
// Singleton class
public class GFG
{
// private instance, so that it can be
// accessed by only by getInstance() method
private static GFG instance;
private GFG()
{
// private constructor
}
//synchronized method to control simultaneous access
synchronized public static GFG getInstance()
{
if (instance == null)
{
// if instance is null, initialize
instance = new GFG();
}
return instance;
}
}
Pros:
Lazy initialization is possible.
It is thread safe.
Cons:
getInstance() method is synchronized so it causes slow performance as multiple threads can’t access it simultaneously.
This is a Lazy initialization with Double check locking:
// Java code to explain double check locking
public class GFG
{
// private instance, so that it can be
// accessed by only by getInstance() method
private static volatile GFG instance; // volatile is required for double-checked locking to be safe
private GFG()
{
// private constructor
}
public static GFG getInstance()
{
if (instance == null)
{
//synchronized block to remove overhead
synchronized (GFG.class)
{
if(instance==null)
{
// if instance is null, initialize
instance = new GFG();
}
}
}
return instance;
}
}
Pros:
Lazy initialization is possible.
It is also thread safe.
Performance reduced because of synchronized keyword is overcome.
Cons:
First time, it can affect performance.
As the cons of the double-checked locking method are bearable, it can be used for high-performance multi-threaded applications.
Please refer to this article for more details:
https://www.geeksforgeeks.org/java-singleton-design-pattern-practices-examples/
Synchronizing with threads.
1) NEVER use synchronized(this) inside the run() of a Thread subclass the way it is done here; it doesn't work. synchronized(this) locks on the current Thread object itself, and since each thread is a separate object, there is no shared lock and therefore NO coordination of synchronization.
2) Tests of the code show that in Java 1.6 on a Mac, synchronizing the update method does not work either, for the same reason: each thread calls the method on its own object, so there is no shared lock.
3) synchronized(lockObj), where lockObj is a common object shared by all threads synchronizing on it, will work.
4) ReentrantLock.lock() and .unlock() work. See the Java tutorials for this.
The following code shows these points. It also contains the thread-safe Vector which would be substituted for the ArrayList, to show that many threads adding to a Vector do not lose any information, while the same with an ArrayList can lose information.
0) Current code shows loss of information due to race conditions
A) Comment the current labeled A line, and uncomment the A line above it, then run, method loses data but it shouldn't.
B) Reverse step A, uncomment B and // end block }. Then run to see results no loss of data
C) Comment out B, uncomment C. Run, see synchronizing on (this) loses data, as expected.
Don't have time to complete all the variations, hope this helps.
If synchronizing on (this), or the method synchronization works, please state what version of Java and OS you tested. Thank you.
import java.util.*;
/** RaceCondition - Shows that when multiple threads compete for resources
thread one may grab the resource expecting to update a particular
area but is removed from the CPU before finishing. Thread one still
points to that resource. Then thread two grabs that resource and
completes the update. Then thread one gets to complete the update,
which over writes thread two's work.
DEMO: 1) Run as is - see missing counts from race condition, Run severa times, values change
2) Uncomment "synchronized(countLock){ }" - see counts work
Synchronized creates a lock on that block of code, no other threads can
execute code within a block that another thread has a lock.
3) Comment ArrayList, unComment Vector - See no loss in collection
Vectors work like ArrayList, but Vectors are "Thread Safe"
May use this code as long as attribution to the author remains intact.
/mf
*/
public class RaceCondition {
private ArrayList<Integer> raceList = new ArrayList<Integer>(); // simple add(#)
// private Vector<Integer> raceList = new Vector<Integer>(); // simple add(#)
private final Object countLock = new Object(); // Object used for locking the raceCount
private int raceCount = 0; // simple add 1 to this counter
private int MAX = 10000; // Do this 10,000 times
private int NUM_THREADS = 100; // Create 100 threads
public static void main(String [] args) {
new RaceCondition();
}
public RaceCondition() {
ArrayList<Thread> arT = new ArrayList<Thread>();
// Create thread objects, add them to an array list
for( int i=0; i<NUM_THREADS; i++){
Thread rt = new RaceThread( ); // i );
arT.add( rt );
}
// Start all object at once.
for( Thread rt : arT ){
rt.start();
}
// Wait for all threads to finish before we can print totals created by threads
for( int i=0; i<NUM_THREADS; i++){
try { arT.get(i).join(); }
catch( InterruptedException ie ) { System.out.println("Interrupted thread "+i); }
}
// All threads finished, print the summary information.
// (Try to print this information without the join loop above)
System.out.printf("\nRace condition, should have %,d. Really have %,d in array, and count of %,d.\n",
MAX*NUM_THREADS, raceList.size(), raceCount );
System.out.printf("Array lost %,d. Count lost %,d\n",
MAX*NUM_THREADS-raceList.size(), MAX*NUM_THREADS-raceCount );
} // end RaceCondition constructor
class RaceThread extends Thread {
public void run() {
for ( int i=0; i<MAX; i++){
try {
update( i );
} // These catches show when one thread steps on another's values
catch( ArrayIndexOutOfBoundsException ai ){ System.out.print("A"); }
catch( OutOfMemoryError oome ) { System.out.print("O"); }
}
}
// so we don't lose counts, need to synchronize on some object, not primitive
// Created "countLock" to show how this can work.
// Comment out the synchronized and ending {, see that we lose counts.
// public synchronized void update(int i){ // use A
public void update(int i){ // remove this when adding A
// synchronized(countLock){ // or B
// synchronized(this){ // or C
raceCount = raceCount + 1;
raceList.add( i ); // use Vector
// } // end block for B or C
} // end update
} // end RaceThread inner class
} // end RaceCondition outter class
