The java.util.concurrent API provides an interface called Lock, which basically serializes control over access to a critical resource. It provides methods such as lock() and unlock().
We can achieve similar things with the synchronized keyword together with the wait(), notify(), and notifyAll() methods.
I am wondering which one of these is better in practice and why?
If you're simply locking an object, I'd prefer to use synchronized.
Example:
lock.lock();
doSomethingNifty(); // Throws a NPE!
lock.unlock(); // Oh noes, we never release the lock!
You have to explicitly do try{} finally{} everywhere.
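For reference, this is the standard try/finally idiom the explicit API expects; a minimal sketch assuming a ReentrantLock field (the class and field names are illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    private final Lock lock = new ReentrantLock(); // illustrative lock instance
    private int count;

    void increment() {
        lock.lock();          // acquire before the try block
        try {
            count++;          // work done while holding the lock
        } finally {
            lock.unlock();    // always released, even if the body throws
        }
    }
}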
Whereas with synchronized, it's super clear and impossible to get wrong:
synchronized(myObject) {
doSomethingNifty();
}
That said, Locks may be more useful for more complicated things where you can't acquire and release in such a clean manner. I would honestly prefer to avoid using bare Locks in the first place, and just go with a more sophisticated concurrency control such as a CyclicBarrier or a LinkedBlockingQueue, if they meet your needs.
I've never had a reason to use wait() or notify() but there may be some good ones.
I am wondering which one of these is better in practice and why?
I've found that Lock and Condition (and other new concurrent classes) are just more tools for the toolbox. I could do most everything I needed with my old claw hammer (the synchronized keyword), but it was awkward to use in some situations. Several of those awkward situations became much simpler once I added more tools to my toolbox: a rubber mallet, a ball-peen hammer, a prybar, and some nail punches. However, my old claw hammer still sees its share of use.
I don't think one is really "better" than the other; rather, each is a better fit for different problems. In a nutshell, the simple model and scope-oriented nature of synchronized helps protect me from bugs in my code, but those same advantages are sometimes hindrances in more complex scenarios. It's these more complex scenarios that the concurrent package was created to help address. But using these higher-level constructs requires more explicit and careful management in the code.
===
I think the JavaDoc does a good job of describing the distinction between Lock and synchronized (the emphasis is mine):
Lock implementations provide more extensive locking operations than can be obtained using synchronized methods and statements. They allow more flexible structuring, may have quite different properties, and may support multiple associated Condition objects.
...
The use of synchronized methods or statements provides access to the implicit monitor lock associated with every object, but forces all lock acquisition and release to occur in a block-structured way: when multiple locks are acquired they must be released in the opposite order, and all locks must be released in the same lexical scope in which they were acquired.
While the scoping mechanism for synchronized methods and statements makes it much easier to program with monitor locks, and helps avoid many common programming errors involving locks, there are occasions where you need to work with locks in a more flexible way. For example, **some algorithms** for traversing concurrently accessed data structures require the use of "hand-over-hand" or "chain locking": you acquire the lock of node A, then node B, then release A and acquire C, then release B and acquire D and so on. Implementations of the Lock interface enable the use of such techniques by allowing a lock to be acquired and released in different scopes, and allowing multiple locks to be acquired and released in any order.
With this increased flexibility comes additional responsibility. The absence of block-structured locking removes the automatic release of locks that occurs with synchronized methods and statements. In most cases, the following idiom should be used:
...
When locking and unlocking occur in different scopes, care must be taken to ensure that all code that is executed while the lock is held is protected by try-finally or try-catch to ensure that the lock is released when necessary.
Lock implementations provide additional functionality over the use of synchronized methods and statements by providing a non-blocking attempt to acquire a lock (tryLock()), an attempt to acquire the lock that can be interrupted (lockInterruptibly()), and an attempt to acquire the lock that can timeout (tryLock(long, TimeUnit)).
...
You can achieve everything the utilities in java.util.concurrent do with the low-level primitives like synchronized, volatile, or wait/notify.
However, concurrency is tricky, and most people get at least some parts of it wrong, making their code either incorrect or inefficient (or both).
The concurrent API provides a higher-level approach, which is easier (and as such safer) to use. In a nutshell, you should not need to use synchronized, volatile, wait, or notify directly anymore.
The Lock interface itself is on the lower-level side of this toolbox; you may not even need to use it directly either (queues, semaphores, and the like cover most needs most of the time).
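As a hedged illustration of leaning on those higher-level utilities instead of bare locks, here is a minimal producer/consumer sketch built on LinkedBlockingQueue (the queue capacity and item type are made up for the example):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HandOff {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10); // illustrative capacity

        Thread producer = new Thread(() -> {
            try {
                queue.put("work-item");   // blocks if the queue is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        String item = queue.take();       // blocks until an item is available
        System.out.println("Consumed: " + item);
        producer.join();
    }
}

No explicit lock or wait/notify appears in the calling code; the queue handles the blocking and signalling internally.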
There are 4 main factors in why you would want to use synchronized or java.util.concurrent.Lock.
Note: synchronized locking is what I mean when I say intrinsic locking.
1. Performance. When Java 5 came out with ReentrantLock, it proved to have quite a noticeable throughput difference compared to intrinsic locking. If you're looking for a faster locking mechanism and are running 1.5, consider j.u.c.ReentrantLock. Java 6's intrinsic locking is now comparable.
2. Locking mechanisms. j.u.c.Lock has different mechanisms for locking: lock interruptibly (attempt to lock until the locking thread is interrupted), timed lock (attempt to lock for a certain amount of time and give up if you do not succeed), and tryLock (attempt to lock, and give up if some other thread is holding the lock). All of this comes in addition to the simple lock; intrinsic locking offers only simple locking. (A short sketch of these mechanisms follows this list.)
3. Style. If neither 1 nor 2 falls into the categories you are concerned with, most people, including myself, would find the intrinsic locking semantics easier to read and less verbose than j.u.c.Lock locking.
4. Multiple conditions. An object you lock on can only be notified and waited on for a single case. Lock's newCondition method allows a single Lock to have multiple reasons to await or signal. I have yet to actually need this functionality in practice, but it is a nice feature for those who need it.
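A minimal sketch of the timed and interruptible variants from point 2 (the class, method names, and timeout value are illustrative, not from the original answer):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class TimedLocking {
    private final Lock lock = new ReentrantLock(); // illustrative lock

    // Timed lock: give up if the lock cannot be acquired within the timeout.
    boolean updateWithTimeout() throws InterruptedException {
        if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            return false;          // could not get the lock in time
        }
        try {
            // ... critical section ...
            return true;
        } finally {
            lock.unlock();
        }
    }

    // Interruptible lock: the wait for the lock can be cancelled via Thread.interrupt().
    void updateInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}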
I would like to add a few more things on top of Bert F's answer.
Locks support various methods for finer-grained lock control, which are more expressive than implicit monitors (synchronized locks).
A Lock provides exclusive access to a shared resource: only one thread at a time can acquire the lock and all access to the shared resource requires that the lock be acquired first. However, some locks may allow concurrent access to a shared resource, such as the read lock of a ReadWriteLock.
Advantages of Lock over synchronized, from the documentation page:
The use of synchronized methods or statements provides access to the implicit monitor lock associated with every object, but forces all lock acquisition and release to occur in a block-structured way
Lock implementations provide additional functionality over the use of synchronized methods and statements by providing a non-blocking attempt to acquire a lock (tryLock()), an attempt to acquire the lock that can be interrupted (lockInterruptibly()), and an attempt to acquire the lock that can timeout (tryLock(long, TimeUnit)).
A Lock class can also provide behavior and semantics that is quite different from that of the implicit monitor lock, such as guaranteed ordering, non-reentrant usage, or deadlock detection
ReentrantLock: In simple terms, as per my understanding, a ReentrantLock lets the thread that already holds the lock enter another critical section guarded by the same lock. Since you already hold the lock for one critical section, you can enter another critical section on the same object using that same lock, without blocking on yourself.
ReentrantLock key features as per this article
Ability to lock interruptibly.
Ability to timeout while waiting for lock.
Power to create fair lock.
API to get the list of threads waiting for the lock.
Flexibility to try for lock without blocking.
You can use ReentrantReadWriteLock.ReadLock and ReentrantReadWriteLock.WriteLock to gain finer-grained control over locking for read and write operations.
Apart from these three reentrant locks, Java 8 provides one more lock:
StampedLock:
Java 8 ships with a new kind of lock called StampedLock, which also supports read and write locks just like in the example above. In contrast to ReadWriteLock, the locking methods of a StampedLock return a stamp represented by a long value.
You can use these stamps to either release a lock or to check if the lock is still valid. Additionally stamped locks support another lock mode called optimistic locking.
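A minimal sketch of the optimistic-read pattern (the Point class and its fields are illustrative, loosely following the pattern shown in the StampedLock documentation):

import java.util.concurrent.locks.StampedLock;

class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = sl.writeLock();           // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();   // optimistic, non-blocking read
        double curX = x, curY = y;
        if (!sl.validate(stamp)) {             // a write intervened; fall back to a real read lock
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}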
Have a look at this article on the usage of the different types of ReentrantLock and StampedLock locks.
One important difference is fairness; in other words, are requests handled FIFO, or can there be barging? Neither intrinsic locking nor a default ReentrantLock guarantees fair (FIFO) allocation of the lock. Using
synchronized(foo) {
}
or
lock.lock(); ... lock.unlock();
does not assure fairness.
If you have lots of contention for the lock you can easily encounter barging where newer requests get the lock and older requests get stuck. I've seen cases where 200 threads arrive in short order for a lock and the 2nd one to arrive got processed last. This is ok for some applications but for others it's deadly.
See Brian Goetz's "Java Concurrency In Practice" book, section 13.3 for a full discussion of this topic.
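If fairness matters, ReentrantLock can be constructed in fair mode. A minimal sketch (the class and method names are illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class FairLockExample {
    // Passing true requests a fair lock: under contention the longest-waiting
    // thread acquires next, at some cost in throughput.
    private final Lock fairLock = new ReentrantLock(true);

    void doWork() {
        fairLock.lock();
        try {
            // critical section
        } finally {
            fairLock.unlock();
        }
    }
}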
Major difference between Lock and synchronized:
with locks, you can acquire and release the locks in any order and in different scopes.
with synchronized, locking is block-structured: locks are released automatically, and nested locks can only be released in the reverse order in which they were acquired.
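As a sketch of what "any order" buys you, here is the hand-over-hand ("chain") locking pattern mentioned in the javadoc quote above; the node structure is made up for illustration, and exception handling is omitted for brevity (real code would have to release the held lock on failure):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Node {
    final Lock lock = new ReentrantLock();
    int value;
    Node next;
}

class ChainTraversal {
    // Acquire the next node's lock before releasing the current one, so no
    // other thread can slip in between. This cannot be expressed with
    // block-structured synchronized scopes.
    static int sum(Node head) {
        if (head == null) {
            return 0;
        }
        int total = 0;
        Node current = head;
        current.lock.lock();
        while (current != null) {
            total += current.value;
            Node next = current.next;
            if (next != null) {
                next.lock.lock();      // lock B while still holding A
            }
            current.lock.unlock();     // now release A
            current = next;
        }
        return total;
    }
}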
Brian Goetz's "Java Concurrency In Practice" book, section 13.3:
"...Like the default ReentrantLock, intrinsic locking offers no deterministic fairness guarantees, but the
statistical fairness guarantees of most locking implementations are good enough for almost all situations..."
Lock makes programmers' life easier. Here are a few situations that can be achieved easily with lock.
Lock in one method, and release the lock in another method.
If you have two threads working on two different pieces of code, but the first thread has a prerequisite on a certain piece of code in the second thread (while some other threads are also working on that same piece of code in the second thread simultaneously), a shared lock can solve this problem quite easily.
Implementing monitors. For example, a simple queue where the put and get methods are executed from many other threads. However, you do not want multiple put (or get) calls running simultaneously, nor a put and a get running simultaneously. A private lock makes it a lot easier to achieve this (a sketch follows below).
That said, everything the lock and condition constructs give you can also be built on the synchronized mechanism, so you can certainly achieve the same functionality with it. However, solving complex scenarios with synchronized may make your life difficult and can distract you from solving the actual problem.
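A minimal sketch of that private-lock monitor (the queue class and names are illustrative; a Condition is used so get blocks instead of spinning):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class SimpleQueue<T> {
    private final Lock lock = new ReentrantLock();        // private lock: callers cannot interfere
    private final Condition notEmpty = lock.newCondition();
    private final Deque<T> items = new ArrayDeque<>();

    void put(T item) {
        lock.lock();
        try {
            items.addLast(item);
            notEmpty.signal();            // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    T get() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {     // guard against spurious wakeups
                notEmpty.await();
            }
            return items.removeFirst();
        } finally {
            lock.unlock();
        }
    }
}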
A Lock and a synchronized block both serve the same purpose, but it depends on the usage. Consider the part below:
void randomFunction() {
    // ...
    synchronized (this) {
        // do some functionality
    }
    // ...
    synchronized (this) {
        // do some other functionality
    }
    // ...
} // end of randomFunction
In the above case, if a thread enters one synchronized block, the other block is also locked against other threads, because both blocks synchronize on the same object (this). If there are multiple such synchronized blocks on the same object, all of them are locked together. In such situations, separate java.util.concurrent.locks.Lock instances can be used to prevent this unwanted coupling between blocks.
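A hedged sketch of the same method split across two independent locks (the lock and method names are illustrative; the same decoupling could also be achieved with two private Object monitors and synchronized):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class RandomFunctions {
    // Two unrelated critical sections get their own locks, so a thread in one
    // section does not block threads that only need the other.
    private final Lock firstLock = new ReentrantLock();
    private final Lock secondLock = new ReentrantLock();

    void doFirstThing() {
        firstLock.lock();
        try {
            // do some functionality
        } finally {
            firstLock.unlock();
        }
    }

    void doSecondThing() {
        secondLock.lock();
        try {
            // do some other functionality
        } finally {
            secondLock.unlock();
        }
    }
}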
One of the first things we all learnt about concurrency in Java is that we use locks (synchronized keyword, Lock/ReadWriteLock interface) to protect against concurrent access. For example:
synchronized void eat(){
//some code
}
In theory, this method eat() can be executed by only a single thread at a time. Even though there are many threads waiting to execute it, only one will take the lock. But then comes parallelism, which made me think twice about what I just said.
I have a 4-core CPU. That means I can run 4 tasks in parallel. Yes, the lock can be taken by a single thread. But could it happen that 4 threads call the method eat() and take the lock at LITERALLY the same time, even though there is a lock that needs to be acquired to actually do anything?
Can something like that even happen in Java? I guess it can't but I had to ask this. And how is it even dealing with a case I just said?
...4 threads...take a lock at the LITERALLY same time...
Can't happen. Taking a lock ultimately comes down to an atomic read-modify-write operation (such as a compare-and-swap) on a shared memory location, and the hardware guarantees that such operations on the same location are serialized. It is physically impossible for more than one CPU to win the same atomic update at the same time.
If two CPUs attempt the update at literally the same time, the hardware guarantees that one of them will "win" the race and go first, while the other one is forced to wait its turn.
Short answer: the JVM keeps a list of threads that are trying to get the lock.
More details
https://wiki.openjdk.java.net/display/HotSpot/Synchronization
Also, it is worth mentioning that this list (part of the "inflated" lock) is only created when really required, i.e., when contention on the lock actually happens.
I was trying to dive into synchronization.
Here, it's mentioned that "Every object has an intrinsic lock associated with it."
I understand the Object class; as we know, it does not have any field like a lock (which, I guess, is why it's called an intrinsic lock). What exactly is this lock (is it a Lock.java class? Is it some sort of hidden field?), and how is it associated with an object (is there some mysterious implicit reference to the lock from the object, something happening natively)?
When several threads try to acquire the same lock, one or more threads will be suspended and resumed later. Where are these threads stored? What data structure keeps records of waiting threads?
What logic is used under the covers to pick a thread waiting to enter synchronized method, when there are many waiting ?
Any reference to what happens under the covers (step by step) from the "synchronized" keyword to intrinsic lock acquisition?
Is there an upper limit on the number of threads that are allowed to wait on synchronized?
1) Yes, the lock is essentially a hidden field on the object. You can't access it except through synchronization.
2) Waiting threads essentially just sleep until the lock is available. They aren't "stored" anyplace special, nor is there a visible queue of waiting threads. Details of implementation are hidden.
3) I don't think there's any promised order. If you explicitly need round-robin scheduling or prioritization or something of that sort, it's your responsibility to implement it (or use a class which implements it for you) on top of the synchronization lock mechanism.
4) This is likely to be handled as an OS semaphore, if you understand those. If you don't, defining them strikes me as too much detail to be addressed properly here... and you don't really need to understand this unless you're reimplementing it.
5) There is no explicit limit, as far as I know. (I haven't checked the official Java Specification, but I understand how this kind of thing gets implemented at the OS level.) Of course at some point you're going to run out of system resources, but I think you'll generally run out of other resources (like memory to run those threads in) first.
One additional note: It's also worth looking at the Atomic... classes. When these can be used, they will be somewhat more efficient in modern processors than traditional Java synchronization can be.
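For what it's worth, a minimal sketch of the Atomic... alternative for a simple counter (the class and field names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

class HitCounter {
    private final AtomicInteger hits = new AtomicInteger();

    void record() {
        hits.incrementAndGet();   // lock-free atomic update (CAS under the hood)
    }

    int current() {
        return hits.get();
    }
}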
I am trying to understand the usefulness of the fairness property in the Semaphore class.
Specifically, the Javadoc mentions that:
Generally, semaphores used to control resource access should be initialized as fair, to ensure that no thread is starved out from accessing a resource. When using semaphores for other kinds of synchronization control, the throughput advantages of non-fair ordering often outweigh fairness considerations.
Could someone provide an example where barging might be desired here? I cannot think past the resource-access use case. Also, why is the default the non-fair behavior?
Lastly, are there any performance implications in using the fairness behavior?
Java's built-in concurrency constructs (synchronized, wait(), notify(),...) do not specify which thread should be freed when a lock is released. It is up to the JVM implementation to decide which algorithm to use.
Fairness gives you more control: when the lock is released, the thread with the longest wait time is given the lock (FIFO processing). Without fairness (and with a very bad algorithm) you might have a situation where a thread is always waiting for the lock because there is a continuous stream of other threads.
If the Semaphore is set to be fair, there's a small overhead because it needs to maintain a queue of all the threads waiting for the lock. Unless you're writing a high-throughput/high-performance/many-core application, you probably won't see the difference, though!
Scenario where fairness is not needed
If you have N identical worker threads, it doesn't matter which one gets a task to execute
Scenario where fairness is needed
If you have N task queues, you don't want one queue to be waiting forever and never acquiring the lock.
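A minimal sketch of a fair semaphore guarding a small pool of resources (the permit count and names are illustrative):

import java.util.concurrent.Semaphore;

class ConnectionPoolGate {
    // 3 permits, fair = true: waiting threads acquire permits in FIFO order,
    // so no thread is starved while newer arrivals barge in.
    private final Semaphore permits = new Semaphore(3, true);

    void useConnection() throws InterruptedException {
        permits.acquire();
        try {
            // ... use one of the pooled resources ...
        } finally {
            permits.release();
        }
    }
}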