What happens to the ReadWriteLock if the current thread crashes - Java

As the title says, I'm curious what happens to the ReadWriteLock when the current thread crashes.
readlock.lock();
try {
    ...
} finally {
    readlock.unlock();
}
We can certainly unlock in the finally block to handle any abrupt termination. But what if the readlock.lock() statement itself crashes: is the lock automatically released?
Thanks,

If readlock.lock() throws an exception, the lock operation has failed: no lock was acquired, so no unlock is required.
If a run-time exception is thrown during readlock.lock() and the lock was nevertheless acquired, the lock's author has implemented it incorrectly. You may file a bug report with the author :)

If you are referring to the java.util.concurrent.locks.ReentrantReadWriteLock class, then a crash of the thread just after readlock.lock() (before readlock.unlock() has been called) will not release the read lock.
It is different with the write lock, though. A write lock defines an owner and can only be released by the thread that acquired it. In contrast, the read lock has no concept of ownership, and there is no requirement that the thread releasing a read lock is the same as the one that acquired it.
I would suggest calling readlock.lock() immediately before a try-finally statement whose finally block calls readlock.unlock(), exactly as in your snippet.
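To make the "crash does not release the lock" point concrete, here is a small self-contained demo (the class name CrashDemo and the simulated exception are mine, for illustration only). A thread acquires the read lock and then dies from an uncaught exception without unlocking; afterwards the write lock can no longer be acquired:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CrashDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        // This thread acquires the read lock and then "crashes"
        // (dies from an uncaught exception) without calling unlock().
        Thread t = new Thread(() -> {
            rwl.readLock().lock();
            throw new RuntimeException("simulated crash");
        });
        t.setUncaughtExceptionHandler((th, ex) -> { /* swallow for the demo */ });
        t.start();
        t.join();

        // The read lock is still held even though the thread is dead,
        // so the write lock cannot be acquired.
        System.out.println("write lock available: " + rwl.writeLock().tryLock());
    }
}
```

Running this prints `write lock available: false`: the JVM does not release java.util.concurrent locks when a thread dies, which is exactly why the unlock-in-finally idiom matters.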

It depends on the implementation behind your 'readlock' variable and on how exactly your thread crashed. As far as I can see, 'readlock' could be of any type, and there are multiple ways a thread can crash.
For your information, every Java object has a 'monitor'. If what you need is to synchronize access to a shared variable from multiple threads, I suggest you spend some time with this chapter of the Java tutorial: http://docs.oracle.com/javase/tutorial/essential/concurrency/index.html

Related

What does "non-block-structured locking" mean?

I am reading Java Concurrency in Practice. In 13.1 Lock and ReentrantLock, it says:
Why create a new locking mechanism that is so similar to intrinsic locking? Intrinsic locking works fine in most situations but has some functional limitations— it is not possible to interrupt a thread waiting to acquire a lock, or to attempt to acquire a lock without being willing to wait for it forever. Intrinsic locks also must be released in the same block of code in which they are acquired; this simplifies coding and interacts nicely with exception handling, but makes non-block-structured locking disciplines impossible.
What does "non-block-structured locking" mean? I think it means that you can lock in one method, unlock in another method, like Lock, but intrinsic locks must be released in the same block of code in which they are acquired. Am I right?
But the Chinese version of that book translates "block" to "阻塞" (which means "blocking"). Is it an error?
Block structured locking means that the pattern for acquiring and releasing locks mirrors the lexical code structure. Specifically, a section of code that acquires a lock is also responsible for releasing it; e.g.
Lock lock = ...
lock.lock();
try {
    // do stuff
} finally {
    lock.unlock();
}
The alternative ("non-block-structured locking") simply means that the above constraint does not apply. You can do things like acquiring a lock in one method and releasing it in a different one, or (hypothetically) even passing the lock to another thread to release1. The problem is that this kind of code is a lot harder to get correct, and a lot harder to analyze, than examples like the one above.
1 - Beware. If you pass acquired locks between threads, you are liable to run into memory anomalies or worse. Indeed, I'm not even sure that it is legal in Java.
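A minimal sketch of what non-block-structured locking looks like in practice (the class and method names are hypothetical): the lock is acquired in one method and released in another, something a synchronized block cannot express.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class NonBlockStructured {
    private final Lock lock = new ReentrantLock();

    void begin() {
        lock.lock();      // acquired here ...
    }

    void end() {
        lock.unlock();    // ... but released in a different method
    }

    public static void main(String[] args) {
        NonBlockStructured s = new NonBlockStructured();
        s.begin();
        System.out.println("locked across method boundaries");
        s.end();
        System.out.println("released");
    }
}
```

This compiles and runs fine with ReentrantLock, but note how the caller must now remember to pair begin() with end() manually; this is the extra analysis burden the answer describes.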
The "block structure" referred to in the quoted text is clearly talking about the lexical structure of the code, as per the Wikipedia article on the topic Block (programming). If the Chinese version of Java Concurrency in Practice uses characters that mean "blocking" in the sense of Blocking (programming), that is a mistranslation.

Is lock downgrading necessary when using ReentrantReadWriteLock?

There is a sample usage of lock downgrading in the documentation of ReentrantReadWriteLock (see this).
class CachedData {
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    Object data;
    volatile boolean cacheValid;

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Must release read lock before acquiring write lock
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Recheck state because another thread might have
                // acquired write lock and changed state before we did.
                if (!cacheValid) {
                    data = ...
                    cacheValid = true;
                }
                // Downgrade by acquiring read lock before releasing write lock
                rwl.readLock().lock(); // B
            } finally { // A
                rwl.writeLock().unlock(); // Unlock write, still hold read
            }
        }
        try {
            use(data);
        } finally { // C
            rwl.readLock().unlock();
        }
    }
}
If I change Object data to volatile Object data, do I still need to downgrade the write lock to a read lock?
Update:
What I mean is: if I add volatile to data, do I still need to acquire the read lock before releasing the write lock in the finally block at comment A, as the code at comments B and C does? Or can the code take advantage of volatile instead?
No, volatile is not needed whether you downgrade or not (the locking already guarantees thread-safe access to data). It also won't help with the atomicity, which is what the acquire-read-then-write-lock pattern does (and which was the point of the question).
You're talking about needing to downgrade like it's a bad thing. You can keep a write lock and not downgrade, and things will work just fine. You're just keeping an unnecessarily strong lock, when a read lock would suffice.
You don't need to downgrade to a read lock, but if you don't it'll make your code less efficient: if use(data) takes 2 seconds (a long time), then without lock downgrading you're blocking all other readers for 2 seconds every time you refresh the cache.
If you mean why do you even need the read lock once the cache refresh is done, it's because otherwise it would be possible for another thread to start a new cache refresh (as there wouldn't be any locks) while we're still working on use(data).
In the given example code it's not possible to determine whether it would actually matter, since there's not enough information, but skipping the downgrade would create an additional possible state for the method, and that's not an advantage:
One or more threads are in use(data), holding read locks
One thread is refreshing the cache, holding the write lock
One thread is in use(data) without any lock while another thread is refreshing the cache with the write lock
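As a minimal standalone sketch of the downgrade itself (class name DowngradeDemo is mine), this shows that a thread holding the write lock may acquire the read lock and then release the write lock, keeping only the read lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        rwl.writeLock().lock();
        rwl.readLock().lock();      // downgrade: allowed while holding the write lock
        rwl.writeLock().unlock();   // now only the read lock is held
        System.out.println("read holds: " + rwl.getReadHoldCount());
        System.out.println("write held: " + rwl.isWriteLockedByCurrentThread());
        rwl.readLock().unlock();
    }
}
```

Note the reverse (upgrading a read lock to a write lock) is not possible with ReentrantReadWriteLock, which is why the sample releases the read lock before acquiring the write lock.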

Need of an object in synchronised block

As per my understanding, if we add the synchronized keyword in our code, the whole block of code inside it will be locked for other threads. In that case, why do we need to specify a particular object with the synchronized keyword?
For example, synchronized(lockObject). What is the use of lockObject here?
Let's say you have 2 resources you want to synchronize; the bathroom and the fridge.
You want people to be able to grab a snack from the fridge even if someone is using the bathroom, don't you?
So you use different locks on the fridge and on the bathroom.
In programming terms, that means that each independent resource can have its own lockObject.
Note that a resource can have several methods that access them - all the accessors of the same resource should use the same lock! After all, if you have two doors into the bathroom it wouldn't do much good if you only locked one of them.
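The analogy above can be sketched directly in code (class and field names are mine): one lock object per independent resource, so that holding one does not block access to the other.

```java
public class TwoResources {
    // One lock object per independent resource, so that using the
    // bathroom does not stop anyone from opening the fridge.
    private final Object bathroomLock = new Object();
    private final Object fridgeLock = new Object();

    void useBathroom() {
        synchronized (bathroomLock) {
            System.out.println("using bathroom");
        }
    }

    void openFridge() {
        synchronized (fridgeLock) {
            System.out.println("grabbing a snack");
        }
    }

    public static void main(String[] args) {
        TwoResources home = new TwoResources();
        home.useBathroom();
        home.openFridge();
    }
}
```

If both methods synchronized on the same object (or on this), a thread in useBathroom() would block every caller of openFridge() as well, which is exactly the over-locking the analogy warns about.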
if we add synchronized keyword in our code, whole block of code inside it will be locked for the other threads.
Incorrect. synchronized involves a mechanism entirely separate from your block of code: acquiring and releasing a mutual exclusion lock. Java has the concept of synchronized blocks as a convenience to ensure proper release of a lock after it's acquired.
So, what actually happens is that your thread acquires the monitor associated with the instance given in the parentheses, then proceeds to execute the block of code, then releases the monitor. Meanwhile no other thread can acquire that particular monitor, but it can very well acquire any other object's monitor. If you don't take care to always have the same object involved in the synchronized block, you will get no mutual exclusion.

How is reentrancy implemented in Java?

What actually happens when the same thread tries to acquire a lock that it already owns?
I guess your question is about the semantics of the synchronized block/modifier. Refer to the Java Language Specification. If your question is about a specific implementation's way of doing it, then you need to specify the exact implementation you have in mind. But this being a well-understood technique, I don't see a reason for that.
Quoting from http://download.java.net/jdk7/archive/b123/docs/api/java/util/concurrent/locks/ReentrantLock.html
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
A ReentrantLock is owned by the thread last successfully locking, but not yet unlocking it. A thread invoking lock will return, successfully acquiring the lock, when the lock is not owned by another thread. The method will return immediately if the current thread already owns the lock. This can be checked using methods isHeldByCurrentThread(), and getHoldCount().
I agree that GrepCode explains it very well.
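The quoted behavior is easy to observe directly (the class name is mine): the second lock() call returns immediately, the hold count goes to 2, and the lock must be unlocked once per acquisition.

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();    // first acquisition
        lock.lock();    // same thread: returns immediately, hold count increments
        System.out.println("hold count: " + lock.getHoldCount());
        lock.unlock();  // must unlock once per lock() call
        lock.unlock();
        System.out.println("held by current thread: " + lock.isHeldByCurrentThread());
    }
}
```

The same counting scheme is what makes nested synchronized blocks on the same monitor work: the lock is only truly released when the count returns to zero.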

Java lock and happens-before relation

I'm not sure if I'm interpreting the javadoc right. When using a ReentrantLock, after calling the lock method and successfully gaining the lock, can you just access any object without any synchronized blocks, with the happens-before relationship magically enforced?
I don't see any connection between the ReentrantLock and the objects I'm working on, which is why it is hard to believe I can work on them safely. Is this really the case, or am I reading the javadoc wrong?
If thread A has modified some object inside a code block CB1 guarded by the lock and then releases the lock, and thread B enters in a code block guarded by the same lock, then thread B will see the modifications done by thread A in the code block CB1.
If two threads read and write the same shared state, then every read and write to this state should be guarded by the same lock.
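A sketch of that rule (class and field names are mine): two threads increment a plain int field, and because every read and write is guarded by the same ReentrantLock, each unlock happens-before the next lock and no update is lost.

```java
import java.util.concurrent.locks.ReentrantLock;

public class GuardedCounter {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;   // shared state, guarded only by the lock

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();    // one thread's unlock happens-before
                try {           // another thread's subsequent lock
                    counter++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("counter = " + counter);   // always 20000
    }
}
```

Without the lock (or with two different locks), the increments would race and the final value would typically come out below 20000.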
It's ... a (mutex) lock:
void myMethod()
{
    myLock.lock(); // block until condition holds
    try
    {
        // Do stuff that only one thread at a time should do
    }
    finally
    {
        myLock.unlock();
    }
}
Only one thread can hold the lock at a time, so anything between the lock() and unlock() calls is guaranteed to only be executed by one thread at a time.
The relevant Oracle tutorial can be found here.
There's no magic in it. You're safe if, and only if, all threads accessing an object use the same lock - be it a ReentrantLock or any other mutex, such as a synchronized block.
The existence of ReentrantLock is justified by the fact that it provides more flexibility than synchronized: you can, for example, merely try to acquire the lock, which is not possible with synchronized.
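That "just try" flexibility looks like this in practice (class name mine): tryLock() returns false immediately instead of blocking when another thread holds the lock.

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();   // main thread holds the lock for the whole demo

        Thread t = new Thread(() -> {
            // tryLock returns immediately instead of blocking forever,
            // which a synchronized block cannot do.
            if (!lock.tryLock()) {
                System.out.println("lock busy, doing something else");
            }
        });
        t.start();
        t.join();
        lock.unlock();
    }
}
```

ReentrantLock also offers lockInterruptibly() and a timed tryLock(long, TimeUnit), covering the other limitation the book quote mentions: a thread blocked on an intrinsic lock cannot be interrupted.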
