I want to start with IllegalMonitorStateException, which we get if the current thread is not the owner of the object's monitor. So if I do this, I will get the exception:
public class Testing {
    Object objLock = new Object();

    void dance() throws InterruptedException {
        synchronized (this) {
            objLock.wait(); // IllegalMonitorStateException: we hold this, not objLock's monitor
        }
    }
}
So I came to the conclusion that you must synchronize on the same object that you call wait/notify on. Does that mean I can only have one condition per lock?
But then there are the Condition class and the Lock interface. How do they manage to do the job?
public class Testing {
    Lock lock = new ReentrantLock();
    Condition condition = lock.newCondition();

    void dance() throws InterruptedException {
        lock.lock();
        condition.await();
        lock.unlock();
    }
}
Before I learn something wrong: does this mean that the Lock/Condition example allows us to have more conditions? And how come, when I just showed an example of IllegalMonitorStateException that seems to prevent exactly that?
Can someone please clear up my confusion? How did the Condition class 'trick' it? Or did it, if I said something wrong?
First of all, let's look at the official documentation of Condition:
Condition factors out the Object monitor methods (wait, notify and
notifyAll) into distinct objects to give the effect of having multiple
wait-sets per object, by combining them with the use of arbitrary Lock
implementations. Where a Lock replaces the use of synchronized methods
and statements, a Condition replaces the use of the Object monitor
methods.
And according to the official doc of Lock:
Lock implementations provide more extensive locking operations than
can be obtained using synchronized methods and statements. They allow
more flexible structuring, may have quite different properties, and
may support multiple associated Condition objects.
So, using this information I will answer your questions:
does this mean that Lock/Condition example allows us to have more conditions?
Yes, you can use more than one Condition per Lock and build your synchronization logic from a combination of conditions. See the example from the official doc.
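For reference, here is a condensed sketch along the lines of the bounded-buffer example in the Condition javadoc: one Lock, two Conditions (notFull, notEmpty) that can be awaited and signalled independently.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    final Lock lock = new ReentrantLock();
    final Condition notFull  = lock.newCondition();  // "there is room to put"
    final Condition notEmpty = lock.newCondition();  // "there is something to take"

    final Object[] items = new Object[100];
    int putIndex, takeIndex, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();                     // wait on the "not full" condition
            items[putIndex] = x;
            if (++putIndex == items.length) putIndex = 0;
            ++count;
            notEmpty.signal();                       // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();                    // wait on the "not empty" condition
            Object x = items[takeIndex];
            if (++takeIndex == items.length) takeIndex = 0;
            --count;
            notFull.signal();                        // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}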
The reason you did get IllegalMonitorStateException is that you attempted to wait on an object without holding its monitor (you should have passed objLock as the synchronized block's parameter). The reason you didn't get it with the second code example is that you do not perform an illegal wait on an object whose lock you don't hold: you lock by calling lock.lock(), and the condition you await was created from that same lock, so the await is legal. You unlock after some condition is satisfied; until then, no other thread can access those resources. Clearly, there is no magic or trick behind it.
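For completeness, the first example stops throwing IllegalMonitorStateException once you synchronize on the same object you wait on (a minimal sketch):
public class Testing {
    final Object objLock = new Object();

    void dance() throws InterruptedException {
        synchronized (objLock) {   // hold objLock's monitor...
            objLock.wait();        // ...so waiting on objLock is now legal
        }
    }
}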
P.S.: I recommend reading the documentation for Lock and Condition, as I find it really useful and informative for your question.
There is sample usage of lock downgrading in the documentation of ReentrantReadWriteLock (see this).
class CachedData {
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    Object data;
    volatile boolean cacheValid;

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release read lock before acquiring write lock
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Recheck state because another thread might have
                // acquired write lock and changed state before we did.
                if (!cacheValid) {
                    data = ...
                    cacheValid = true;
                }
                // Downgrade by acquiring read lock before releasing write lock
                rwl.readLock().lock(); // B
            } finally { // A
                rwl.writeLock().unlock(); // Unlock write, still hold read
            }
        }
        try {
            use(data);
        } finally { // C
            rwl.readLock().unlock();
        }
    }
}
If I change Object data to volatile Object data, do I still need to downgrade the write lock to a read lock?
update
What I mean is: if I add volatile to data, then before I release the write lock in the finally block at comment A, do I still need to acquire the read lock as the code at comments B and C does? Or can the code take advantage of volatile?
No, volatile is not needed whether you downgrade or not (the locking already guarantees thread-safe access to data). It also won't help with the atomicity, which is what the acquire-read-then-write-lock pattern does (and which was the point of the question).
You're talking about needing to downgrade like it's a bad thing. You can keep a write lock and not downgrade, and things will work just fine. You're just keeping an unnecessarily strong lock, when a read lock would suffice.
You don't need to downgrade to a read lock, but if you don't it'll make your code less efficient: if use(data) takes 2 seconds (a long time), then without lock downgrading you're blocking all other readers for 2 seconds every time you refresh the cache.
If you mean why do you even need the read lock once the cache refresh is done, it's because otherwise it would be possible for another thread to start a new cache refresh (as there wouldn't be any locks) while we're still working on use(data).
In the given example code it's not possible to determine whether it would actually matter since there's not enough information, but it would create a possible additional state for the method and that's not an advantage:
One or more threads are in use(data), having read locks
One thread is refreshing cache, having write lock
One thread is in use(data) without lock and one thread is refreshing cache with a write lock
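For comparison, here is a minimal sketch of the no-downgrade variant (assuming the same rwl, data, cacheValid, and use() as in the example above; recomputeData() is a hypothetical helper standing in for "data = ..."): the write lock is simply held through use(data), which is correct but blocks all readers for the entire duration of use(data).
void processCachedDataNoDowngrade() {
    rwl.readLock().lock();
    if (!cacheValid) {
        rwl.readLock().unlock();    // must still drop the read lock before taking the write lock
        rwl.writeLock().lock();
        try {
            if (!cacheValid) {
                data = recomputeData(); // hypothetical helper
                cacheValid = true;
            }
            use(data);              // still under the write lock: all readers are blocked here
        } finally {
            rwl.writeLock().unlock();
        }
        return;                     // no read lock is held on this path
    }
    try {
        use(data);
    } finally {
        rwl.readLock().unlock();
    }
}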
I have the following piece of code:
synchronized void myMethod() {
    String s = "aaa";
    try {
        s.wait();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
The code throws an exception ...
I have seen code using the wait method on threads, which is self-explanatory and logical.
Why would one use the wait method on an object like a String instead of using it on the main thread?
What is its use?
Are there any practical implementations like this?
Thanks in advance
Your sample code won't work because the method is synchronizing on the instance that myMethod is called on, while the wait is called on the string. It will cause an IllegalMonitorStateException. You have to call wait and notify on the same object that you're locking on. The threads that get notified are the ones waiting on the lock that notify is called on.
Locking on a string object is a bad idea, don't do it. You don't want to lock on things where you can't reason about who can acquire them because anybody could acquire them. Some other code elsewhere in the application could be locking on the same string value, and you'd have the potential for strange interactions, deadlocking because the other code was taking your lock, or have the other code notifying you. Do you want to have to think about how strings are pooled when debugging some multithreading behavior?
You can limit who can acquire your lock by defining your own lock and making it private, like this:
private final Object LOCK = new Object();
so only threads calling the methods of the object you're controlling access to can acquire the lock:
public void myMethod() {
synchronized(LOCK) {
...
}
}
That way you know exactly what can acquire the lock; it's not available to every thread in the application. The lock can be acquired by anything that can get a reference to that object, so keep the reference private.
The way your example uses wait without a loop with a condition variable is very suspect. A thread can exit from a call to wait without having been notified. Even if a thread is notified, that doesn't give it any special priority with the scheduler. Another thread can barge in and do something, possibly something affecting the state that the notification was alerting the waiting thread to, between the time the thread is notified and the time that the thread can reacquire the lock it gave up when it started waiting. For both reasons there needs to be a loop where the thread retests a condition when it wakes from waiting.
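A minimal sketch of that guarded-wait pattern, using a private lock and a condition flag (the class, field, and method names here are just for illustration):
class Worker {
    private final Object lock = new Object();
    private boolean ready = false;              // the condition being waited on

    void awaitReady() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {                    // loop: re-test the condition after every wake-up
                lock.wait();                    // releases lock while waiting, reacquires before returning
            }
            // ... safe to use the state guarded by 'ready' here ...
        }
    }

    void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();                   // waiters wake up and re-check the condition
        }
    }
}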
Also if by "codes using wait method on threads" you mean code where a Thread object is used as a lock, that's another thing to avoid doing, see the API documentation for Thread#join:
This implementation uses a loop of this.wait calls conditioned on this.isAlive. As a thread terminates the this.notifyAll method is invoked. It is recommended that applications not use wait, notify, or notifyAll on Thread instances.
You first need to be synchronized on the object before calling wait on it. That is where the exception is coming from.
void test() throws InterruptedException {
    String s = "AAA";
    synchronized (s) {
        s.wait();
    }
}
The same thing must be done when you call notify. In this case, though, it is a very bad idea, because if a thread enters this method it will never return. Admittedly, since it is a String literal, you may be able to get away with it by using the same literal in another method of the same class, but don't count on it.
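For illustration, the notifying side would have to do the same thing, relying on the literal being interned (which is exactly why this is fragile):
void wake() {
    String s = "AAA";       // same interned literal, hence the same object as in test()
    synchronized (s) {
        s.notify();
    }
}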
The wait() method is implemented in Object, and String extends Object, so it can be called.
Why would someone use it? Ask them; it's not really a programming question.
Something I can think of:
They could be using "lock1".wait() in one class and "lock1".notify() in another. It would behave like a global lock object, because literals are interned by the compiler and thus refer to the same object.
But it's VERY, VERY BAD PRACTICE.
This is an example of synchronization with no effect.
First of all, it is unlikely you will need to synchronize on a String; it is immutable, after all, so you don't need to coordinate any concurrent work on it.
Second, you are likely synchronizing on the wrong object anyway; no correctly written program would use a String as a synchronization lock.
Third and finally, s is a local variable. In fact, if you inline it, you get exactly the pattern that JCIP specifically tells you not to use:
synchronized (new Object()) {
    // ...
}
This is synchronization without effect, as it does not provide what the synchronized keyword is for: serialized access, i.e. lock-and-release semantics ensuring that only one thread executes the synchronized block at any given time.
Because of this, each thread will have its own lock - not good.
In Java's locking implementations, there is no way to atomically upgrade a lock from a read lock to a write lock. For example, the following code snippet fails (it blocks forever):
ReentrantReadWriteLock lock = new ...
lock.readLock().lock();
boolean mustWrite = false;
// do some stuff and determine you must instead write! :-O
if (mustWrite) {
    lock.writeLock().lock();   // blocks forever: this thread still holds the read lock
    writeSomeStuff();
    lock.writeLock().unlock();
}
lock.readLock().unlock();
The write lock acquisition has to wait until all read locks are released, so it knows it's not overwriting data that readers might still be reading. That's bad. So the workaround is to do something like this:
if (mustWrite) {
    lock.readLock().unlock(); // let go of read before writing
    lock.writeLock().lock();
    writeSomeStuff();
    lock.writeLock().unlock();
    lock.readLock().lock();   // get back to reading
}
But this isn't ideal: someone might come along and do something in between when you release the read lock and acquire the write lock. It's probably a good idea to double-check those conditions anyway, but still, it's ugly.
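For concreteness, the double-checked version of that workaround would look roughly like this (with stillMustWrite() standing in as a hypothetical placeholder for whatever re-check applies):
if (mustWrite) {
    lock.readLock().unlock();
    lock.writeLock().lock();
    try {
        if (stillMustWrite()) {   // re-check: the world may have changed while unlocked
            writeSomeStuff();
        }
    } finally {
        lock.writeLock().unlock();
        lock.readLock().lock();   // get back to reading
    }
}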
Now typically, when you acquire a lock you want to force your code to wait for the lock to actually be acquired before carrying on. You wouldn't want to just trust the locking mechanism to have given you the lock before you start messing with the data.
But why does it force you to halt execution from the moment you've signaled that you want the lock until the moment you're actually ready to wait? For example, why couldn't it allow something like this:
lock.writeLock().notifyIntentToLock();     // puts you in line to get the write lock
                                           // everyone else will block until you both
                                           // acquire and release the lock
lock.readLock().unlock();                  // proceed with the unlock so you don't deadlock
lock.writeLock().makeGoodOnIntentToLock(); // actually acquire that lock
So in a sense, the current lock functionality could be thought of as doing both at the same time, like:
public void lock() {
    this.notifyIntentToLock();
    this.makeGoodOnIntentToLock();
}
What design decisions would make them not allow some kind of delayed intent to lock? Is there a serious problem with a lock design like that that I'm simply not seeing?
All you have to do after the decision to take the exclusive lock is:
leave the read lock,
take the write lock and
check the condition again
based on the result, either proceed or bail out.
As for intents to take write locks: what happens when multiple concurrent intents exist? Possibly all of them have to re-check the initial conditions, as there is no way to ensure which ones would be the victims (granted the lock after the winner).
There is more to it: the implementation of the RW lock sucks to boot, as even the reads modify the lock's metadata, causing cache-coherency traffic; hence RW locks don't scale well.
I am using in my code at the moment a ReentrantReadWriteLock to synchronize access over a tree-like structure. This structure is large, and read by many threads at once with occasional modifications to small parts of it - so it seems to fit the read-write idiom well. I understand that with this particular class, one cannot elevate a read lock to a write lock, so as per the Javadocs one must release the read lock before obtaining the write lock. I've used this pattern successfully in non-reentrant contexts before.
What I'm finding, however, is that I cannot reliably acquire the write lock without blocking forever. Since the read lock is reentrant, and I am actually using it as such, the simple code
lock.readLock().unlock();
lock.writeLock().lock();
can block if I have acquired the read lock reentrantly. Each call to unlock just reduces the hold count, and the lock is only actually released when the hold count hits zero.
EDIT to clarify this, as I don't think I explained it too well initially - I am aware that there is no built-in lock escalation in this class, and that I have to simply release the read lock and obtain the write lock. My problem is/was that, regardless of what other threads are doing, calling readLock().unlock() may not actually release this thread's hold on the lock if it acquired it reentrantly, in which case the call to writeLock().lock() will block forever, as this thread still has a hold on the read lock and thus blocks itself.
For example, this code snippet will never reach the println statement, even when run singlethreaded with no other threads accessing the lock:
final ReadWriteLock lock = new ReentrantReadWriteLock();

lock.readLock().lock();
// In real code we would go call other methods that end up calling back and
// thus locking again
lock.readLock().lock();

// Now we do some stuff and realise we need to write so try to escalate the
// lock as per the Javadocs and the above description
lock.readLock().unlock(); // Does not actually release the lock
lock.writeLock().lock();  // Blocks as some thread (this one!) holds read lock

System.out.println("Will never get here");
So I ask, is there a nice idiom to handle this situation? Specifically, when a thread that holds a read lock (possibly reentrantly) discovers that it needs to do some writing, and thus wants to "suspend" its own read lock in order to pick up the write lock (blocking as required on other threads to release their holds on the read lock), and then "pick up" its hold on the read lock in the same state afterwards?
Since this ReadWriteLock implementation was specifically designed to be reentrant, surely there is some sensible way to elevate a read lock to a write lock when the locks may be acquired reentrantly? This is the critical part that means the naive approach does not work.
This is an old question, but here's both a solution to the problem, and some background information.
As others have pointed out, a classic readers-writer lock (like the JDK ReentrantReadWriteLock) inherently does not support upgrading a read lock to a write lock, because doing so is susceptible to deadlock.
If you need to safely acquire a write lock without first releasing a read lock, there is, however, a better alternative: take a look at a read-write-update lock instead.
I've written a ReentrantReadWrite_Update_Lock, and released it as open source under an Apache 2.0 license here. I also posted details of the approach to the JSR166 concurrency-interest mailing list, and the approach survived some back and forth scrutiny by members on that list.
The approach is pretty simple, and as I mentioned on concurrency-interest, the idea is not entirely new: it was discussed on the Linux kernel mailing list at least as far back as the year 2000. The .NET platform's ReaderWriterLockSlim also supports lock upgrade. So effectively this concept simply had not been implemented in Java (AFAICT) until now.
The idea is to provide an update lock in addition to the read lock and the write lock. An update lock is an intermediate type of lock between a read lock and a write lock. Like the write lock, only one thread can acquire an update lock at a time. But like a read lock, it allows read access to the thread which holds it, and concurrently to other threads which hold regular read locks. The key feature is that the update lock can be upgraded from its read-only status, to a write lock, and this is not susceptible to deadlock because only one thread can hold an update lock and be in a position to upgrade at a time.
This supports lock upgrade, and furthermore it is more efficient than a conventional readers-writer lock in applications with read-before-write access patterns, because it blocks reading threads for shorter periods of time.
Example usage is provided on the site. The library has 100% test coverage and is in Maven central.
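A rough usage sketch of the idea (hedged: the class, package, and method names below are my reading of the concurrent-locks API and should be checked against the project's documentation; recompute() and use() are hypothetical helpers):
import com.googlecode.concurentlocks.ReadWriteUpdateLock;          // assumed package name; verify against the version you use
import com.googlecode.concurentlocks.ReentrantReadWriteUpdateLock;

class Cache {
    final ReadWriteUpdateLock lock = new ReentrantReadWriteUpdateLock();
    Object data;
    volatile boolean cacheValid;

    void processCachedData() {
        lock.updateLock().lock();              // read-only access, concurrent with plain readers
        try {
            if (!cacheValid) {
                lock.writeLock().lock();       // upgrade: deadlock-free, since only one thread can hold the update lock
                try {
                    data = recompute();
                    cacheValid = true;
                } finally {
                    lock.writeLock().unlock(); // back to holding only the update lock
                }
            }
            use(data);
        } finally {
            lock.updateLock().unlock();
        }
    }

    Object recompute()  { return new Object(); } // hypothetical stand-in for the real cache refresh
    void use(Object d)  { /* ... */ }            // hypothetical stand-in for the real read work
}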
I have made a little progress on this. By declaring the lock variable explicitly as a ReentrantReadWriteLock instead of simply a ReadWriteLock (less than ideal, but probably a necessary evil in this case) I can call the getReadHoldCount() method. This lets me obtain the number of holds for the current thread, and thus I can release the read lock that many times (and reacquire it the same number of times afterwards). So this works, as shown by a quick-and-dirty test:
final int holdCount = lock.getReadHoldCount();
for (int i = 0; i < holdCount; i++) {
    lock.readLock().unlock();
}
lock.writeLock().lock();
try {
    // Perform modifications
} finally {
    // Downgrade by reacquiring read lock before releasing write lock
    for (int i = 0; i < holdCount; i++) {
        lock.readLock().lock();
    }
    lock.writeLock().unlock();
}
Still, is this going to be the best I can do? It doesn't feel very elegant, and I'm still hoping that there's a way to handle this in a less "manual" fashion.
What you want to do ought to be possible. The problem is that Java does not provide an implementation that can upgrade read locks to write locks. Specifically, the javadoc for ReentrantReadWriteLock says it does not allow an upgrade from a read lock to a write lock.
In any case, Jakob Jenkov describes how to implement it. See http://tutorials.jenkov.com/java-concurrency/read-write-locks.html#upgrade for details.
Why Upgrading Read to Write Locks Is Needed
An upgrade from a read lock to a write lock is valid (despite the assertions to the contrary in other answers). A deadlock can occur, so part of the implementation is code to recognize deadlocks and break them by throwing an exception in one of the threads. That means that as part of your transaction, you must handle the DeadlockException, e.g., by doing the work over again. A typical pattern is:
boolean repeat;
do {
    repeat = false;
    try {
        readSomeStuff();
        writeSomeStuff();
        maybeReadSomeMoreStuff();
    } catch (DeadlockException e) {
        repeat = true;
    }
} while (repeat);
Without this ability, the only way to implement a serializable transaction that reads a bunch of data consistently and then writes something based on what was read is to anticipate that writing will be necessary before you begin, and therefore obtain WRITE locks on all data that are read before writing what needs to be written. This is a KLUDGE that Oracle uses (SELECT FOR UPDATE ...). Furthermore, it actually reduces concurrency because nobody else can read or write any of the data while the transaction is running!
In particular, releasing the read lock before obtaining the write lock will produce inconsistent results. Consider:
int x = someMethod();
y.writeLock().lock();
y.setValue(x);
y.writeLock().unlock();
You have to know whether someMethod(), or any method it calls, creates a reentrant read lock on y! Suppose you know it does. Then if you release the read lock first:
int x = someMethod();
y.readLock().unlock();
// problem here!
y.writeLock().lock();
y.setValue(x);
y.writeLock().unlock();
another thread may change y after you release its read lock, and before you obtain the write lock on it. So y's value will not be equal to x.
Test Code: Upgrading a read lock to a write lock blocks:
import java.util.*;
import java.util.concurrent.locks.*;

public class UpgradeTest {
    public static void main(String[] args) {
        System.out.println("read to write test");

        ReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();  // get our own read lock
        lock.writeLock().lock(); // upgrade to write lock

        System.out.println("passed");
    }
}
Output using Java 1.6:
read to write test
<blocks indefinitely>
What you are trying to do is simply not possible this way.
You cannot have a read/write lock that you can upgrade from read to write without problems. Example:
void test() {
    lock.readLock().lock();
    ...
    if ( ... ) {
        lock.writeLock().lock();
        ...
        lock.writeLock().unlock();
    }
    lock.readLock().unlock();
}
Now suppose two threads enter that function. (You are assuming concurrency, right? Otherwise you would not care about locks in the first place....)
Assume both threads start at the same time and run equally fast. That means both acquire a read lock, which is perfectly legal. However, both then eventually try to acquire the write lock, which NEITHER of them will ever get: the respective other thread holds a read lock!
Locks that allow upgrading of read locks to write locks are prone to deadlocks by definition. Sorry, but you need to modify your approach.
Java 8 now has a java.util.concurrent.locks.StampedLock
with a tryConvertToWriteLock(long) API
More info at http://www.javaspecialists.eu/archive/Issue215.html
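A minimal sketch of the conversion idiom (modelled on the StampedLock javadoc; it falls back to a plain write lock if the conversion fails):
import java.util.concurrent.locks.StampedLock;

class Counter {
    private final StampedLock sl = new StampedLock();
    private int value;

    void setToOneIfZero() {
        long stamp = sl.readLock();
        try {
            while (value == 0) {
                long ws = sl.tryConvertToWriteLock(stamp);
                if (ws != 0L) {            // conversion succeeded: we now hold the write lock
                    stamp = ws;
                    value = 1;
                    break;
                } else {                   // conversion failed: take the write lock the slow way
                    sl.unlockRead(stamp);
                    stamp = sl.writeLock();
                    // the loop re-checks the condition under the write lock
                }
            }
        } finally {
            sl.unlock(stamp);              // releases whichever mode the stamp currently represents
        }
    }
}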
What you're looking for is a lock upgrade, and that is not possible (at least not atomically) using the standard java.util.concurrent ReentrantReadWriteLock. Your best shot is to unlock and re-lock, and then check that no one made modifications in between.
What you're attempting to do, forcing all read locks out of the way, is not a very good idea. Read locks are there for a reason: they tell you that you shouldn't write. :)
EDIT:
As Ran Biron pointed out, if your problem is starvation (read locks are being set and released all the time, never dropping to zero) you could try using fair queueing. But your question didn't sound like this was your problem?
EDIT 2:
I now see your problem: you've actually acquired multiple read locks on the stack, and you'd like to convert them to a write lock (an upgrade). This is in fact impossible with the JDK implementation, as it doesn't keep track of the owners of the read lock. There could be others holding read locks that you wouldn't see, and it has no idea how many of the read locks belong to your thread, not to mention your current call stack (i.e. your loop is killing all read locks, not just your own, so your write lock won't wait for any concurrent readers to finish, and you'll end up with a mess on your hands).
I've actually had a similar problem, and I ended up writing my own lock that keeps track of who's got which read locks and upgrades those to write locks. Although it was also a copy-on-write kind of read/write lock (allowing one writer alongside the readers), so it was a little different still.
What about something like this?
class CachedData
{
    Object data;
    volatile boolean cacheValid;

    private class MyRWLock
    {
        private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        public synchronized void getReadLock()         { rwl.readLock().lock(); }
        public synchronized void upgradeToWriteLock()  { rwl.readLock().unlock(); rwl.writeLock().lock(); }
        public synchronized void downgradeToReadLock() { rwl.writeLock().unlock(); rwl.readLock().lock(); }
        public synchronized void dropReadLock()        { rwl.readLock().unlock(); }
    }

    private MyRWLock myRWLock = new MyRWLock();

    void processCachedData()
    {
        myRWLock.getReadLock();
        try
        {
            if (!cacheValid)
            {
                myRWLock.upgradeToWriteLock();
                try
                {
                    // Recheck state because another thread might have acquired write lock and changed state before we did.
                    if (!cacheValid)
                    {
                        data = ...
                        cacheValid = true;
                    }
                }
                finally
                {
                    myRWLock.downgradeToReadLock();
                }
            }
            use(data);
        }
        finally
        {
            myRWLock.dropReadLock();
        }
    }
}
to OP:
Just unlock as many times as you have entered the lock, simple as that:
boolean needWrite = false;

readLock.lock();
try {
    needWrite = checkState();
} finally {
    readLock.unlock();
}

// the state is free to change right here, but not likely;
// see who has handled it under the write lock, if need be
if (needWrite) {
    writeLock.lock();
    try {
        if (checkState()) { // check again under the exclusive write lock
            // modify state
        }
    } finally {
        writeLock.unlock();
    }
}
Inside the write lock, as any self-respecting concurrent program should, check the needed state again.
The hold count shouldn't be used for anything beyond debugging/monitoring/fast-fail detection.
I suppose the reentrant locking is motivated by a recursive traversal of the tree:
public void doSomething(Node node) {
    // Acquire reentrant lock
    ... // Do something, possibly acquire write lock
    for (Node child : node.childs) {
        doSomething(child);
    }
    // Release reentrant lock
}
Can't you refactor your code to move the lock handling outside of the recursion?
public void doSomething(Node node) {
    // Acquire NON-reentrant read lock
    recurseDoSomething(node);
    // Release NON-reentrant read lock
}

private void recurseDoSomething(Node node) {
    ... // Do something, possibly acquire write lock
    for (Node child : node.childs) {
        recurseDoSomething(child);
    }
}
So, are we expecting Java to increment the read-lock hold count only if this thread has not yet contributed to the readHoldCount? That would mean that, instead of just maintaining a ThreadLocal readHoldCount of type int, it would have to maintain a ThreadLocal Set of Integer (holding the hashCode of the current thread). If that is the case, I would suggest (at least for now) not making multiple read-lock calls within the same class, but instead using a flag to check whether the read lock has already been obtained by the current object:
private volatile boolean alreadyLockedForReading = false;

public void lockForReading(Lock readLock) {
    if (!alreadyLockedForReading) {
        readLock.lock();
        alreadyLockedForReading = true;
    }
}
Found in the documentation for ReentrantReadWriteLock. It clearly says that reader threads will never succeed when trying to acquire the write lock. What you are trying to achieve is simply not supported. You must release the read lock before acquiring the write lock. A downgrade is still possible.
Reentrancy
This lock allows both readers and writers to reacquire read or write
locks in the style of a ReentrantLock. Non-reentrant readers
are not allowed until all write locks held by the writing thread have
been released.
Additionally, a writer can acquire the read lock, but not vice-versa.
Among other applications, reentrancy can be useful when write locks
are held during calls or callbacks to methods that perform reads under
read locks. If a reader tries to acquire the write lock it will never
succeed.
Sample usage from the above source:
class CachedData {
    Object data;
    volatile boolean cacheValid;
    ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release read lock before acquiring write lock
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            // Recheck state because another thread might have acquired
            // write lock and changed state before we did.
            if (!cacheValid) {
                data = ...
                cacheValid = true;
            }
            // Downgrade by acquiring read lock before releasing write lock
            rwl.readLock().lock();
            rwl.writeLock().unlock(); // Unlock write, still hold read
        }
        use(data);
        rwl.readLock().unlock();
    }
}
Use the "fair" flag on the ReentrantReadWriteLock. "fair" means that lock requests are served on first come, first served. You could experience performance depredation since when you'll issue a "write" request, all of the subsequent "read" requests will be locked, even if they could have been served while the pre-existing read locks are still locked.