The code below is from ArrayBlockingQueue in Java 8.
The comment in it says: "Lock only for visibility, not mutual exclusion."
final Object[] items;
int putIndex;
int count;

public ArrayBlockingQueue(int capacity, boolean fair,
                          Collection<? extends E> c) {
    this(capacity, fair);

    final ReentrantLock lock = this.lock;
    lock.lock(); // Lock only for visibility, not mutual exclusion
    try {
        int i = 0;
        try {
            for (E e : c) {
                checkNotNull(e);
                items[i++] = e;
            }
        } catch (ArrayIndexOutOfBoundsException ex) {
            throw new IllegalArgumentException();
        }
        count = i;
        putIndex = (i == capacity) ? 0 : i;
    } finally {
        lock.unlock();
    }
}
I think the lock guarantees the visibility of count and putIndex.
But why not use volatile instead?
The lock guarantees the visibility of all the writes the constructor makes: to count, to putIndex, and to the elements of items that it changes.
It doesn't need to guarantee mutual exclusion: this is the constructor, and since the reference to this hasn't been given to other threads yet, there is nothing to exclude (although the lock would guarantee that as well if the reference had been published before that point).
The comment is merely saying that the purpose of the lock is the visibility effects.
As to why you can't use volatile:
The methods that retrieve values from the queue, like poll, take and peek do need to lock for mutual exclusion. Making a variable volatile is not necessary; it could have an adverse performance impact.
It would also be hard to get it right because of the ordering: a volatile write happens before (JLS terminology) a volatile read on the same variable. That means that the constructor would have to write to the volatile variable as its last action, while all code that needs to be correctly synchronized needs to read that volatile variable first before doing anything else.
Locks are much easier to reason about and to get the ordering of accesses right, and, in this case, they are needed anyway so that the multiple writes execute as one atomic action.
The lock guarantees all writes will be visible, including writes to the items array. Making the array volatile would not be enough to ensure writes to array elements are visible to all threads.
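To make that last point concrete, here is a minimal sketch (my own example, not JDK code) of why putting volatile on the array field would not be enough: volatile applies to the array reference, not to its slots. If you really need volatile-like semantics per element, java.util.concurrent.atomic.AtomicReferenceArray provides that; the lock-based approach ArrayBlockingQueue takes avoids the issue entirely.

class VolatileArrayPitfall {
    volatile Object[] items = new Object[16]; // only the *reference* is volatile

    void writer() {
        items[0] = "hello"; // plain write to a slot: creates no happens-before edge
    }

    Object reader() {
        return items[0];    // may legally observe a stale value (null here)
    }
}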
I understand (or at least I think I do;) ) the principle behind volatile keyword.
When looking into ConcurrentHashMap source, you can see that all nodes and values are declared volatile, which makes sense because the value can be written/read from more than one thread:
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    volatile V val;
    volatile Node<K,V> next;
    ...
}
However, looking into ArrayBlockingQueue source, it's a plain array that is being updated/read from multiple threads:
private void enqueue(E x) {
    // assert lock.getHoldCount() == 1;
    // assert items[putIndex] == null;
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    notEmpty.signal();
}
How is it guaranteed that the value inserted into items[putIndex] will be visible from another thread, given that the array elements are not volatile (I know that declaring the array itself volatile has no effect on the elements)?
Couldn't another thread hold a cached copy of the array?
Thanks
Notice that enqueue is private. Look for all calls to it (offer(E), offer(E, long, TimeUnit), put(E)). Notice that every one of those looks like:
public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // Do stuff.
        enqueue(e);
    } finally {
        lock.unlock();
    }
}
So you can conclude that every call to enqueue is protected by a lock.lock() ... lock.unlock() pair. You don't need volatile because lock.lock()/lock.unlock() also act as a memory barrier.
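To illustrate why that is sufficient, here is a hypothetical reader/writer pair (a sketch, not ArrayBlockingQueue code): because both sides use the same ReentrantLock, the writer's unlock happens-before the reader's subsequent lock, so the plain fields and array slots written under the lock are visible to the reader.

import java.util.concurrent.locks.ReentrantLock;

class LockVisibilitySketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Object[] items = new Object[16];
    private int count;

    void write(Object x) {
        lock.lock();
        try {
            items[count] = x; // plain writes...
            count++;          // ...published to later lock holders by unlock()
        } finally {
            lock.unlock();
        }
    }

    Object readLast() {
        lock.lock(); // acquiring the same lock makes the writes above visible
        try {
            return count == 0 ? null : items[count - 1];
        } finally {
            lock.unlock();
        }
    }
}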
According to my understanding, volatile is not needed because all BlockingQueue implementations already have a locking mechanism, unlike ConcurrentHashMap.
If you look at the public methods of the queue, you will find a ReentrantLock that guards concurrent access.
Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Consider:
public class A
{
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized(lock)
        {
            a += 1;
        }
    }

    public int getA()
    {
        return a;
    }
}
I've heard people argue that it's possible for getA() and incrementA() to be called at roughly the same time and have getA() return the wrong thing. However it seems like, in the case that they're called at the same time, even if the getter is synchronized you can get the wrong thing. In fact the "right thing" doesn't even seem defined if these are called concurrently. The big thing for me is that the state remains consistent.
I've also heard talk about JIT optimizations. Given an instance of the above class and the following code (which depends on a being set by another thread):
while(myA.getA() < 10)
{
    //incrementA is not called here
}
it is apparently a legal JIT optimization to change this to:
int temp = myA.getA();
while(temp < 10)
{
    //incrementA is not called here
}
which can obviously result in an infinite loop.
Why is this a legal optimization? Would this be illegal if a was volatile?
Update
I did a little bit of testing into this.
public class Test
{
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized(lock)
        {
            a += 1;
        }
    }

    public int getA()
    {
        return a;
    }

    public static void main(String[] args)
    {
        final Test myA = new Test();
        Thread t = new Thread(new Runnable(){
            public void run() {
                while(true)
                {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    myA.incrementA();
                }
            }});
        t.start();
        while(myA.getA() < 15)
        {
            System.out.println(myA.getA());
        }
    }
}
Using several different sleep times, this worked even when a is not volatile. This of course isn't conclusive; the optimization may still be legal. Does anyone have an example that could trigger such JIT behaviour?
Is it okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Depends on the particulars. It is important to realize that synchronization does two important things. It is not just about atomicity but it is also required because of memory synchronization. If one thread updates the a field, then other threads may not see the update because of memory caching on the local processor. Making the int a field be volatile solves this problem. Making both the get and the set method be synchronized will as well but it is more expensive.
If you want to be able to change and read a from multiple threads, the best mechanism is to use an AtomicInteger.
private AtomicInteger a = new AtomicInteger(5);

public void setA(int a) {
    // no need to synchronize because of the magic of the `AtomicInteger` code
    this.a.set(a);
}

public int getA() {
    // AtomicInteger also takes care of the memory synchronization
    return a.get();
}
I've heard people argue that it's possible for getA() and setA() to be called at roughly the same time and have getA() return the wrong thing.
This is true but you can get the wrong value if getA() is called after setA() as well. A bad cache value can stick forever.
which can obviously result in an infinite loop. Why is this a legal optimization?
It is a legal optimization because threads running asynchronously with their own memory cache is one of the important reasons why you see performance improvements with them. If all memory accesses were synchronized with main memory then the per-CPU memory caches would not be used and threaded programs would run a lot slower.
Would this be illegal if a was volatile?
It is not a legal optimization if there is some way for a to be altered, possibly by another thread. If a was final then the JIT could make that optimization. If a was volatile, or the get method was marked as synchronized, then it would certainly not be a legal optimization.
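As a rough sketch of that last point (class and method names are mine, not from the question): with a plain field the JIT may hoist the read out of the loop, whereas a volatile field forces every iteration to observe a fresh value. Note that volatile addresses only visibility here, not the atomicity of the increment.

class Worker {
    // private int a = 5;       // plain field: the loop below may be hoisted and never exit
    private volatile int a = 5; // volatile: each iteration must observe a fresh value

    void waitUntilTen() {
        while (a < 10) {
            // busy-wait; incrementA is not called here
        }
    }

    void incrementA() {
        a += 1;                 // still not atomic; volatile only fixes visibility
    }
}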
It's not thread safe because that getter does not ensure that a thread will see the latest value, as the value may be stale. Having the getter be synchronized ensures that any thread calling the getter will see the latest value instead of a possible stale one.
You basically have two options:
1) Make your int volatile
2) Use an atomic type like AtomicInteger
Using a normal int without synchronization is not thread safe at all.
Your best solution is to use an AtomicInteger, they were basically designed for exactly this use case.
If this is more of a theoretical "could this be done question", I think something like the following would be safe (but still not perform as well as an AtomicInteger):
public class A
{
    private volatile int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized(lock)
        {
            final int tmp = a + 1;
            a = tmp;
        }
    }

    public int getA()
    {
        return a;
    }
}
The short answer is your example will be thread-safe, if
the variable is declared as volatile, or
the getter is declared as synchronized.
The reason that your example class A is not thread-safe is that one can create a program using it that doesn't have a "well-formed execution" (see JLS 17.4.7).
For instance, consider
// in thread #1
int a1 = A.getA();
Thread.sleep(...);
int a2 = A.getA();
if (a1 == a2) {
    System.out.println("no increment");
}

// in thread #2
A.incrementA();
in the scenario that the increment happens during the sleep.
For this execution to be well-formed, there must be a "happens before" (HB) chain between the assignment to a in incrementA called by thread #2, and the subsequent read of a in getA called by thread #1.
If the two threads synchronize using the same lock object, then there is a HB between one thread releasing the lock and a second thread acquiring the lock. So we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #2 releases lock --HB-->
thread #1 acquires lock --HB-->
thread #1 reads a
If two threads share a volatile variable, there is a HB between any write and any subsequent read (without an intervening write). So we typically get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #1 reads a
Note that incrementA needs to be synchronized to avoid race conditions with other threads calling incrementA.
If neither of the above is true, we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a // No HB!!
thread #1 reads a
Since there is no HB between the write by thread #2 and the subsequent read by thread #1, the JLS does not guarantee that the latter will see the value written by the former.
Note that this is a simplified version of the rules. For the complete version, you need to read all of JLS Chapter 17.
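For reference, here is a minimal sketch of the questioner's class made well-formed via option 2 above, by synchronizing the getter on the same lock so the happens-before chain from the first diagram applies:

public class A {
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA() {
        synchronized (lock) {
            a += 1;
        }
    }

    public int getA() {
        synchronized (lock) { // same lock: the release in incrementA() happens-before this acquire
            return a;
        }
    }
}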
I'm learning Java multithreading and I have a problem: I can't understand semaphores. How can I execute threads in a particular order? For example, as in image 1: the 5th thread starts running only after the 1st and 2nd have finished executing.
(Images 1 and 2: diagrams of the desired thread execution order, uploaded for better understanding.)
Usually in Java you use mutexes (also called monitors), which prevent two or more threads from accessing the code region protected by that mutex at the same time.
That code region is defined using the synchronized statement:
synchronized (mutex) {
    // mutual exclusive code begin
    // ...
    // ...
    // mutual exclusive code end
}
where mutex is defined as, e.g.:
Object mutex = new Object();
To prevent a task from being started before other tasks have finished, you need more advanced techniques, such as the barriers defined in the java.util.concurrent package.
But first make yourself comfortable with the synchronized statement.
If you think that you will often use multithreading in Java, you might want to read
"Java Concurrency in Practice".
Synchronized is used so that each thread will enter that method or that portion of the code one at a time. If you want to control the order in which threads proceed, you can use a counting semaphore, for example:
public class CountingSemaphore {
    private int value = 0;
    private int waitCount = 0;
    private int notifyCount = 0;

    public CountingSemaphore(int initial) {
        if (initial > 0) {
            value = initial;
        }
    }

    public synchronized void waitForNotify() {
        if (value <= waitCount) {
            waitCount++;
            try {
                do {
                    wait();
                } while (notifyCount == 0);
            } catch (InterruptedException e) {
                notify();
            } finally {
                waitCount--;
            }
            notifyCount--;
        }
        value--;
    }

    public synchronized void notifyToWakeup() {
        value++;
        if (waitCount > notifyCount) {
            notifyCount++;
            notify();
        }
    }
}
This is an implementation of a counting semaphore. It maintains the counter variables value, waitCount and notifyCount, and it makes a calling thread wait while value is not greater than waitCount and no wake-up notification is pending.
You can use Java Counting Semaphore. Conceptually, a semaphore maintains a set of permits. Each acquire() blocks if necessary until a permit is available, and then takes it. Each release() adds a permit, potentially releasing a blocking acquirer. However, no actual permit objects are used; the Semaphore just keeps a count of the number available and acts accordingly.
Semaphores are often used to restrict the number of threads than can access some (physical or logical) resource. For example, here is a class that uses a semaphore to control access to a pool of items:
class Pool {
    private static final int MAX_AVAILABLE = 100;
    private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

    public Object getItem() throws InterruptedException {
        available.acquire();
        return getNextAvailableItem();
    }

    public void putItem(Object x) {
        if (markAsUnused(x))
            available.release();
    }

    // Not a particularly efficient data structure; just for demo
    protected Object[] items = ... whatever kinds of items being managed
    protected boolean[] used = new boolean[MAX_AVAILABLE];

    protected synchronized Object getNextAvailableItem() {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (!used[i]) {
                used[i] = true;
                return items[i];
            }
        }
        return null; // not reached
    }

    protected synchronized boolean markAsUnused(Object item) {
        for (int i = 0; i < MAX_AVAILABLE; ++i) {
            if (item == items[i]) {
                if (used[i]) {
                    used[i] = false;
                    return true;
                } else
                    return false;
            }
        }
        return false;
    }
}
Before obtaining an item each thread must acquire a permit from the semaphore, guaranteeing that an item is available for use. When the thread has finished with the item it is returned back to the pool and a permit is returned to the semaphore, allowing another thread to acquire that item. Note that no synchronization lock is held when acquire() is called as that would prevent an item from being returned to the pool. The semaphore encapsulates the synchronization needed to restrict access to the pool, separately from any synchronization needed to maintain the consistency of the pool itself.
A semaphore initialized to one, and which is used such that it only has at most one permit available, can serve as a mutual exclusion lock. This is more commonly known as a binary semaphore, because it only has two states: one permit available, or zero permits available. When used in this way, the binary semaphore has the property (unlike many Lock implementations), that the "lock" can be released by a thread other than the owner (as semaphores have no notion of ownership). This can be useful in some specialized contexts, such as deadlock recovery.
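Applying the same idea to the original ordering question, here is a sketch (names are made up) in which thread 5 must acquire two permits that only become available when threads 1 and 2 release them:

import java.util.concurrent.Semaphore;

public class OrderingWithSemaphore {
    public static void main(String[] args) {
        final Semaphore finished = new Semaphore(0);   // starts with zero permits

        Runnable earlyWork = () -> {
            // ... work of thread 1 or thread 2 ...
            finished.release();                        // one permit per finished thread
        };
        new Thread(earlyWork, "thread-1").start();
        new Thread(earlyWork, "thread-2").start();

        new Thread(() -> {
            try {
                finished.acquire(2);                   // wait until both permits exist
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            // ... work of thread 5 ...
        }, "thread-5").start();
    }
}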
Can anyone tell me the advantage of a synchronized method over a synchronized block with an example? Thanks.
There is not a clear advantage of using a synchronized method over a synchronized block.
Perhaps the only one (though I wouldn't call it an advantage) is that you don't need to write the object reference this explicitly.
Method:
public synchronized void method() { // blocks "this" from here....
    ...
    ...
    ...
} // to here
Block:
public void method() {
    synchronized( this ) { // blocks "this" from here ....
        ....
        ....
        ....
    } // to here...
}
See? No advantage at all.
Blocks do have advantages over methods though, mostly in flexibility, because you can use another object as the lock, whereas synchronizing the method locks on the entire object.
Compare:
// locks the whole object
...
private synchronized void someInputRelatedWork() {
    ...
}
private synchronized void someOutputRelatedWork() {
    ...
}
vs.
// Using specific locks
Object inputLock = new Object();
Object outputLock = new Object();

private void someInputRelatedWork() {
    synchronized(inputLock) {
        ...
    }
}
private void someOutputRelatedWork() {
    synchronized(outputLock) {
        ...
    }
}
Also if the method grows you can still keep the synchronized section separated:
private void method() {
    ... code here
    ... code here
    ... code here
    synchronized( lock ) {
        ... very few lines of code here
    }
    ... code here
    ... code here
    ... code here
    ... code here
}
The only real difference is that a synchronized block can choose which object it synchronizes on. A synchronized method can only use 'this' (or the corresponding Class instance for a synchronized class method). For example, these are semantically equivalent:
synchronized void foo() {
    ...
}

void foo() {
    synchronized (this) {
        ...
    }
}
The latter is more flexible since it can compete for the associated lock of any object, often a member variable. It's also more granular because you could have concurrent code executing before and after the block but still within the method. Of course, you could just as easily use a synchronized method by refactoring the concurrent code into separate non-synchronized methods. Use whichever makes the code more comprehensible.
Synchronized Method
Pros:
Your IDE can indicate the synchronized methods.
The syntax is more compact.
Forces you to split the synchronized blocks into separate methods.
Cons:
Synchronizes on this and so makes it possible for outsiders to synchronize on it too.
It is harder to move code outside the synchronized block.
Synchronized block
Pros:
Allows using a private variable for the lock, forcing the lock to stay inside the class.
Synchronized blocks can be found by searching for references to the variable.
Cons:
The syntax is more complicated and so makes the code harder to read.
Personally I prefer using synchronized methods in classes focused only on the thing needing synchronization. Such a class should be as small as possible, so it is easy to review the synchronization. Others shouldn't need to care about synchronization.
The main difference is that if you use a synchronized block you may lock on an object other than this, which allows you to be much more flexible.
Assume you have a message queue and multiple message producers and consumers. We don't want producers to interfere with each other, but the consumers should be able to retrieve messages without having to wait for the producers.
So we just create an object
Object writeLock = new Object();
And from now on every time a producers wants to add a new message we just lock on that:
synchronized(writeLock){
    // do something
}
So consumers may still read, and producers will be locked.
Synchronized method
Synchronized methods have two effects.
First, when one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.
Synchronized Statement
Unlike synchronized methods, synchronized statements must specify the object that provides the intrinsic lock. Most often this is used to synchronize access to a list or map without blocking access to all of the object's other methods.
Intrinsic Locks and Synchronization
Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a "monitor.") Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility.
Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and then release the intrinsic lock when it's done with them. A thread is said to own the intrinsic lock between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire the lock.
package test;

public class SynchTest implements Runnable {
    private int c = 0;

    public static void main(String[] args) {
        new SynchTest().test();
    }

    public void test() {
        // Create the objects with the run() method
        Runnable runnable = new SynchTest();
        Runnable runnable2 = new SynchTest();
        // Create the threads, supplying them with the runnable object
        Thread thread = new Thread(runnable, "thread-1");
        Thread thread2 = new Thread(runnable, "thread-2");
        // The key point is passing the same object; if you pass runnable2 for thread2,
        // it is not applicable for this synchronization test and won't give the expected
        // output. A synchronized method means "it is not possible for two invocations
        // of synchronized methods on the same object to interleave".
        // Start the threads
        thread.start();
        thread2.start();
    }

    public synchronized void increment() {
        System.out.println("Begin thread " + Thread.currentThread().getName());
        System.out.println(this.hashCode() + " Value of C = " + c);
        // If we uncomment this to use a synchronized block instead, the result would be different
        // synchronized(this) {
        for (int i = 0; i < 9999999; i++) {
            c += i;
        }
        // }
        System.out.println("End thread " + Thread.currentThread().getName());
    }

    // public synchronized void decrement() {
    //     System.out.println("Decrement " + Thread.currentThread().getName());
    // }

    public int value() {
        return c;
    }

    @Override
    public void run() {
        this.increment();
    }
}
Cross check different outputs with synchronized method, block and without synchronization.
Note: static synchronized methods and blocks work on the Class object.
public class MyClass {

    // locks MyClass.class
    public static synchronized void foo() {
        // do something
    }

    // similar (renamed so both versions can coexist in one class)
    public static void fooBlock() {
        synchronized (MyClass.class) {
            // do something
        }
    }
}
When the Java compiler converts your source code to byte code, it handles synchronized methods and synchronized blocks very differently.
When the JVM executes a synchronized method, the executing thread identifies that the method's method_info structure has the ACC_SYNCHRONIZED flag set, then it automatically acquires the object's lock, calls the method, and releases the lock. If an exception occurs, the thread automatically releases the lock.
Synchronizing a method block, on the other hand, bypasses the JVM's built-in support for acquiring an object's lock and exception handling and requires that the functionality be explicitly written in byte code. If you read the byte code for a method with a synchronized block, you will see more than a dozen additional operations to manage this functionality.
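If you want to verify this yourself, compile such a class and disassemble it with javap -c: the block version shows explicit monitorenter/monitorexit instructions plus an extra exception-handler path that releases the monitor, while for the synchronized method no locking instructions appear in the method body at all (the ACC_SYNCHRONIZED flag is shown by javap -v).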
This shows calls to generate both a synchronized method and a synchronized block:
public class SynchronizationExample {
    private int i;

    public synchronized int synchronizedMethodGet() {
        return i;
    }

    public int synchronizedBlockGet() {
        synchronized( this ) {
            return i;
        }
    }
}
The synchronizedMethodGet() method generates the following byte code:
0: aload_0
1: getfield
2: nop
3: iconst_m1
4: ireturn
And here's the byte code from the synchronizedBlockGet() method:
0: aload_0
1: dup
2: astore_1
3: monitorenter
4: aload_0
5: getfield
6: nop
7: iconst_m1
8: aload_1
9: monitorexit
10: ireturn
11: astore_2
12: aload_1
13: monitorexit
14: aload_2
15: athrow
One significant difference between a synchronized method and a synchronized block is that a block generally reduces the scope of the lock. Since the scope of a lock is inversely proportional to performance, it is always better to lock only the critical section of code. One of the best examples of using a synchronized block is double-checked locking in the Singleton pattern, where instead of locking the whole getInstance() method we lock only the critical section of code which creates the Singleton instance. This improves performance drastically because locking is required only once or twice.
While using synchronized methods, you will need to take extra care if you mix both static synchronized and non-static synchronized methods.
Most often I use this to synchronize access to a list or map but I don't want to block access to all methods of the object.
In the following code one thread modifying the list will not block waiting for a thread that is modifying the map. If the methods were synchronized on the object then each method would have to wait even though the modifications they are making would not conflict.
private List<Foo> myList = new ArrayList<Foo>();
private Map<String,Bar> myMap = new HashMap<String,Bar>();

public void put( String s, Bar b ) {
    synchronized( myMap ) {
        myMap.put( s, b );
        // then some thing that may take a while like a database access or RPC or notifying listeners
    }
}

public boolean hasKey( String s ) {
    synchronized( myMap ) {
        return myMap.containsKey( s );
    }
}

public void add( Foo f ) {
    synchronized( myList ) {
        myList.add( f );
        // then some thing that may take a while like a database access or RPC or notifying listeners
    }
}

public Foo getMedianFoo() {
    Foo med = null;
    synchronized( myList ) {
        Collections.sort(myList);
        med = myList.get(myList.size()/2);
    }
    return med;
}
With synchronized blocks, you can have multiple synchronizers, so that multiple simultaneous but non-conflicting things can go on at the same time.
Synchronized methods can be checked using reflection API. This can be useful for testing some contracts, such as all methods in model are synchronized.
The following snippet prints all the synchronized methods of Hashtable:
// requires java.lang.reflect.Method and java.lang.reflect.Modifier
for (Method m : Hashtable.class.getMethods()) {
    if (Modifier.isSynchronized(m.getModifiers())) {
        System.out.println(m);
    }
}
Important note on using the synchronized block: careful what you use as lock object!
The code snippet from user2277816 above illustrates this point in that a reference to a string literal is used as locking object.
Realize that string literals are automatically interned in Java and you should begin to see the problem: every piece of code that synchronizes on the literal "lock", shares the same lock! This can easily lead to deadlocks with completely unrelated pieces of code.
It is not just String objects that you need to be careful with. Boxed primitives are also a danger, since autoboxing and the valueOf methods can reuse the same objects, depending on the value.
For more information see:
https://www.securecoding.cert.org/confluence/display/java/LCK01-J.+Do+not+synchronize+on+objects+that+may+be+reused
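A small sketch of the hazard (my own example): the two string variables below refer to the same interned object, and small boxed integers are cached by Integer.valueOf, so seemingly independent locks can silently be the same object. A dedicated private lock object avoids this.

public class SharedLockPitfall {
    public static void main(String[] args) {
        String lockA = "lock";                 // interned literal
        String lockB = "lock";                 // same interned instance
        System.out.println(lockA == lockB);    // true

        Integer boxedA = Integer.valueOf(42);
        Integer boxedB = Integer.valueOf(42);
        System.out.println(boxedA == boxedB);  // true: small values are cached

        // Prefer a dedicated lock object instead:
        final Object lock = new Object();
        synchronized (lock) {
            // ...
        }
    }
}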
Often using a lock at the method level is too coarse. Why lock up a piece of code that does not access any shared resources by locking an entire method? Since each object has a lock, you can create dummy objects to implement block-level synchronization.
The block level is more efficient because it does not lock the whole method.
Here are some examples:
Method Level
class MethodLevel {
    // shared among threads
    SharedResource x, y;

    public synchronized void method1() {
        // multiple threads can't access this at the same time
    }

    public synchronized void method2() {
        // multiple threads can't access this at the same time
    }

    public void method3() {
        // not synchronized
        // multiple threads can access this at the same time
    }
}
Block Level
class BlockLevel {
    // shared among threads
    SharedResource x, y;

    // dummy objects for locking
    Object xLock = new Object();
    Object yLock = new Object();

    public void method1() {
        synchronized(xLock) {
            // access x here. thread safe
        }

        // do something here but don't use SharedResource x, y
        // because it will not be thread-safe

        synchronized(xLock) {
            synchronized(yLock) {
                // access x,y here. thread safe
            }
        }

        // do something here but don't use SharedResource x, y
        // because it will not be thread-safe
    } // end of method1
}
[Edit]
Collections like Vector and Hashtable are synchronized, whereas ArrayList and HashMap are not; for those you need to use the synchronized keyword yourself or invoke the Collections synchronized wrapper methods:
Map myMap = Collections.synchronizedMap (myMap); // single lock for the entire map
List myList = Collections.synchronizedList (myList); // single lock for the entire list
The only difference: synchronized blocks allow granular locking, unlike synchronized methods.
Basically, synchronized blocks or methods are used to write thread-safe code by avoiding memory inconsistency errors.
This question is very old and many things have been changed during last 7 years.
New programming constructs have been introduced for thread safety.
You can achieve thread safety by using the advanced concurrency API instead of synchronized blocks. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
A better replacement for synchronized is ReentrantLock, which uses the Lock API:
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
Example with locks:
class X {
    private final ReentrantLock lock = new ReentrantLock();
    // ...

    public void m() {
        lock.lock(); // block until condition holds
        try {
            // ... method body
        } finally {
            lock.unlock();
        }
    }
}
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Refer to this related question too:
Synchronization vs Lock
A synchronized method locks the whole object (its this monitor).
A synchronized block locks only the specific object you choose.
In general these are mostly the same other than being explicit about the object's monitor that's being used vs the implicit this object. One downside of synchronized methods that I think is sometimes overlooked is that in using the "this" reference to synchronize on you are leaving open the possibility of external objects locking on the same object. That can be a very subtle bug if you run into it. Synchronizing on an internal explicit Object or other existing field can avoid this issue, completely encapsulating the synchronization.
As already said here, a synchronized block can use a user-defined variable as the lock object, whereas a synchronized function uses only this. And of course you can restrict synchronization to exactly the part of your function that needs it.
But everyone says there is no difference between a synchronized function and a block covering the whole function with this as the lock object. That is not quite true: the difference is in the byte code generated in the two cases. In the synchronized-block case, a local variable holding the reference to this must be allocated, and as a result the method is slightly larger (not relevant unless you have only a small number of functions).
A more detailed explanation of the difference can be found here:
http://www.artima.com/insidejvm/ed2/threadsynchP.html
In the case of a synchronized method, the lock is acquired on the object the method belongs to (or on the Class object for static methods). But if you go with a synchronized block, you have the option to specify the object on which the lock will be acquired.
Example :
class Example {
    String test = "abc";

    // lock will be acquired on the String object referenced by test
    // (shown inside an instance initializer block so the example compiles)
    {
        synchronized (test) {
            // do something
        }
    }

    // lock will be acquired on the Example object (this)
    public synchronized void testMethod() {
        // do some thing
    }
}
I know this is an old question, but with my quick read of the responses here, I didn't really see anyone mention that at times a synchronized method may be the wrong lock.
From Java Concurrency In Practice (pg. 72):
public class ListHelper<E> {
    public List<E> list = Collections.synchronizedList(new ArrayList<>());
    ...

    public synchronized boolean putIfAbsent(E x) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}
The above code has the appearance of being thread-safe. However, in reality it is not. In this case the lock is obtained on the instance of the class. However, it is possible for the list to be modified by another thread not using that method. The correct approach would be to use
public boolean putIfAbsent(E x) {
    synchronized (list) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}
The above code blocks any other thread trying to modify the list until the synchronized block has completed.
As a practical matter, the advantage of synchronized methods over synchronized blocks is that they are more idiot-resistant; because you can't choose an arbitrary object to lock on, you can't misuse the synchronized method syntax to do stupid things like locking on a string literal or locking on the contents of a mutable field that gets changed out from under the threads.
On the other hand, with synchronized methods you can't protect the lock from getting acquired by any thread that can get a reference to the object.
So using synchronized as a modifier on methods is better at protecting your co-workers from hurting themselves, while using synchronized blocks in conjunction with private final lock objects is better at protecting your own code from your co-workers.
From a Java specification summary:
http://www.cs.cornell.edu/andru/javaspec/17.doc.html
The synchronized statement (§14.17) computes a reference to an object;
it then attempts to perform a lock action on that object and does not
proceed further until the lock action has successfully completed. ...
A synchronized method (§8.4.3.5) automatically performs a lock action
when it is invoked; its body is not executed until the lock action has
successfully completed. If the method is an instance method, it
locks the lock associated with the instance for which it was invoked
(that is, the object that will be known as this during execution of
the body of the method). If the method is static, it locks the
lock associated with the Class object that represents the class in
which the method is defined. ...
Based on these descriptions, I would say most previous answers are correct, and a synchronized method might be particularly useful for static methods, where you would otherwise have to figure out how to get the "Class object that represents the class in which the method was defined."
Edit: I originally thought these were quotes of the actual Java spec. Clarified that this page is just a summary/explanation of the spec
TLDR; Neither use the synchronized modifier nor the synchronized(this){...} expression but synchronized(myLock){...} where myLock is a final instance field holding a private object.
The differences between using the synchronized modifier on the method declaration and the synchronized(..){ } expression in the method body are these:
The synchronized modifier specified on the method's signature
is visible in the generated JavaDoc,
is programmatically determinable via reflection when testing a method's modifier for Modifier.SYNCHRONIZED,
requires less typing and indentation compared to synchronized(this) { .... }, and
(depending on your IDE) is visible in the class outline and code completion,
uses the this object as lock when declared on non-static method or the enclosing class when declared on a static method.
The synchronized(...){...} expression allows you
to only synchronize the execution of parts of a method's body,
to be used within a constructor or a (static) initialization block,
to choose the lock object which controls the synchronized access.
However, using the synchronized modifier, or synchronized(...) {...} with this as the lock object (as in synchronized(this) {...}), has the same disadvantage. Both use the object's own instance as the lock object to synchronize on. This is dangerous because not only the object itself but any other external object/code that holds a reference to that object can also use it as a synchronization lock, with potentially severe side effects (performance degradation and deadlocks).
Therefore best practice is to neither use the synchronized modifier nor the synchronized(...) expression in conjunction with this as lock object but a lock object private to this object. For example:
public class MyService {

    private final Object lock = new Object();

    public void doThis() {
        synchronized(lock) {
            // do code that requires synchronous execution
        }
    }

    public void doThat() {
        synchronized(lock) {
            // do code that requires synchronous execution
        }
    }
}
You can also use multiple lock objects but special care needs to be taken to ensure this does not result in deadlocks when used nested.
public class MyService {

    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void doThis() {
        synchronized(lock1) {
            synchronized(lock2) {
                // code here is guaranteed not to be executed at the same time
                // as the synchronized code in doThat() and doMore().
            }
        }
    }

    public void doThat() {
        synchronized(lock1) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doMore() may execute concurrently
        }
    }

    public void doMore() {
        synchronized(lock2) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doThat() may execute concurrently
        }
    }
}
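To make the deadlock warning concrete, here is a sketch (not part of the answer's MyService) showing the usual remedy: pick one global acquisition order and stick to it in every method that nests the locks.

public class LockOrdering {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void safeA() {
        synchronized (lock1) {
            synchronized (lock2) { /* ... */ }
        }
    }

    public void safeB() {
        synchronized (lock1) {          // same order as safeA: no deadlock possible between them
            synchronized (lock2) { /* ... */ }
        }
    }

    // public void unsafe() {
    //     synchronized (lock2) {       // reversed order: deadlock-prone when run alongside safeA
    //         synchronized (lock1) { /* ... */ }
    //     }
    // }
}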
I suppose this question is about the difference between Thread Safe Singleton and Lazy initialization with Double check locking. I always refer to this article when I need to implement some specific singleton.
Well, this is a Thread Safe Singleton:
// Java program to create a Thread Safe
// Singleton class
public class GFG
{
    // private instance, so that it can be
    // accessed only by the getInstance() method
    private static GFG instance;

    private GFG()
    {
        // private constructor
    }

    // synchronized method to control simultaneous access
    synchronized public static GFG getInstance()
    {
        if (instance == null)
        {
            // if instance is null, initialize
            instance = new GFG();
        }
        return instance;
    }
}
Pros:
Lazy initialization is possible.
It is thread safe.
Cons:
getInstance() method is synchronized so it causes slow performance as multiple threads can’t access it simultaneously.
This is a Lazy initialization with Double check locking:
// Java code to explain double checked locking
public class GFG
{
    // private instance, so that it can be
    // accessed only by the getInstance() method;
    // volatile is required for double-checked locking to publish the instance safely
    private static volatile GFG instance;

    private GFG()
    {
        // private constructor
    }

    public static GFG getInstance()
    {
        if (instance == null)
        {
            // synchronized block to remove overhead
            synchronized (GFG.class)
            {
                if (instance == null)
                {
                    // if instance is null, initialize
                    instance = new GFG();
                }
            }
        }
        return instance;
    }
}
Pros:
Lazy initialization is possible.
It is also thread safe.
The performance loss caused by the synchronized keyword is overcome.
Cons:
The first call can still affect performance.
Since the cons of the double-checked locking method are bearable, it can be
used for high-performance multi-threaded applications.
Please refer to this article for more details:
https://www.geeksforgeeks.org/java-singleton-design-pattern-practices-examples/
Synchronizing with threads.
1) Be careful with synchronized(this) inside a Thread subclass: in the code below, this refers to each individual RaceThread instance, so every thread locks on a different object and there is NO coordination of synchronization between them.
2) Tests of this code (Java 1.6 on a Mac) show that making update a synchronized method does not help either, for the same reason: the method locks each thread's own instance.
3) synchronized(lockObj), where lockObj is a common shared object that all threads synchronize on, does work.
4) ReentrantLock.lock() and .unlock() work. See the Java tutorials for this.
The following code shows these points. It also contains the thread-safe Vector which would be substituted for the ArrayList, to show that many threads adding to a Vector do not lose any information, while the same with an ArrayList can lose information.
0) Current code shows loss of information due to race conditions
A) Comment the currently labeled A line, and uncomment the A line above it, then run; the method still loses data, because each thread locks its own instance.
B) Reverse step A, uncomment B and // end block }. Then run to see results no loss of data
C) Comment out B, uncomment C. Run, see synchronizing on (this) loses data, as expected.
Don't have time to complete all the variations, hope this helps.
If synchronizing on (this), or the method synchronization works, please state what version of Java and OS you tested. Thank you.
import java.util.*;

/** RaceCondition - Shows that when multiple threads compete for resources,
    thread one may grab the resource expecting to update a particular
    area but is removed from the CPU before finishing. Thread one still
    points to that resource. Then thread two grabs that resource and
    completes the update. Then thread one gets to complete the update,
    which overwrites thread two's work.

    DEMO: 1) Run as is - see missing counts from the race condition. Run several times; the values change.
          2) Uncomment "synchronized(countLock){ }" - see the counts work.
             Synchronized creates a lock on that block of code; no other thread can
             execute code within a block that another thread has locked.
          3) Comment ArrayList, uncomment Vector - see no loss in the collection.
             Vectors work like ArrayList, but Vectors are "thread safe".

    May use this code as long as attribution to the author remains intact.
    /mf
*/
public class RaceCondition {
    private ArrayList<Integer> raceList = new ArrayList<Integer>(); // simple add(#)
    // private Vector<Integer> raceList = new Vector<Integer>();    // simple add(#)

    private String countLock = "lock"; // Object used for locking the raceCount
    private int raceCount = 0;         // simply add 1 to this counter
    private int MAX = 10000;           // Do this 10,000 times
    private int NUM_THREADS = 100;     // Create 100 threads

    public static void main(String[] args) {
        new RaceCondition();
    }

    public RaceCondition() {
        ArrayList<Thread> arT = new ArrayList<Thread>();

        // Create thread objects, add them to an array list
        for (int i = 0; i < NUM_THREADS; i++) {
            Thread rt = new RaceThread(); // i );
            arT.add(rt);
        }

        // Start all threads at once.
        for (Thread rt : arT) {
            rt.start();
        }

        // Wait for all threads to finish before we can print the totals created by the threads
        for (int i = 0; i < NUM_THREADS; i++) {
            try { arT.get(i).join(); }
            catch (InterruptedException ie) { System.out.println("Interrupted thread " + i); }
        }

        // All threads finished, print the summary information.
        // (Try to print this information without the join loop above)
        System.out.printf("\nRace condition, should have %,d. Really have %,d in array, and count of %,d.\n",
                MAX * NUM_THREADS, raceList.size(), raceCount);
        System.out.printf("Array lost %,d. Count lost %,d\n",
                MAX * NUM_THREADS - raceList.size(), MAX * NUM_THREADS - raceCount);
    } // end RaceCondition constructor

    class RaceThread extends Thread {
        public void run() {
            for (int i = 0; i < MAX; i++) {
                try {
                    update(i);
                } // These catches show when one thread steps on another's values
                catch (ArrayIndexOutOfBoundsException ai) { System.out.print("A"); }
                catch (OutOfMemoryError oome) { System.out.print("O"); }
            }
        }

        // So we don't lose counts, we need to synchronize on some shared object, not a primitive.
        // Created "countLock" to show how this can work.
        // Comment out the synchronized line and its closing }, and see that we lose counts.
        // public synchronized void update(int i) {   // use A
        public void update(int i) {                   // remove this when adding A
            // synchronized(countLock) {              // or B
            // synchronized(this) {                   // or C
            raceCount = raceCount + 1;
            raceList.add(i); // use Vector
            // } // end block for B or C
        } // end update
    } // end RaceThread inner class
} // end RaceCondition outer class