Situation 1:
// snippet 1
a = 1;
if (a != 0) {
    print(a);
}

// snippet 2
a = 0;
Let's say thread T1 is executing snippet 1: it checks the if condition (which is true) and enters the body of the if; then T1 is interrupted and T2 executes snippet 2. Now T1 resumes and executes print(a). This is wrong, because even though a is now 0, T1 goes ahead and prints despite the if condition no longer holding.
The solution I read for this problem is to acquire a lock around snippet 1; then, until snippet 1 is finished, T1 won't be interrupted.
Good enough!
But I have a doubt regarding the following:
Situation 2:
// snippet 3
acquire_lock1;
{
    acquire_lock2;
}

// snippet 4
acquire_lock2;
{
    acquire_lock1;
}
Let's say thread T3 is executing snippet 3 and T4 is executing snippet 4. Say T3 acquired lock1 and was interrupted; T4 acquired lock2 and went on to acquire lock1, but it is now held by T3. This is a classic example of deadlock.
I am having trouble understanding this: if acquiring a lock guarantees atomicity of execution, and once a lock is acquired on the critical section that thread cannot be interrupted, then how was T3 interrupted after acquiring lock1, and how did T4 go on with its execution?
According to me, once lock1 is acquired by T3, it should execute without getting interrupted: acquire lock2, release lock2, release lock1, and complete this entire sequence without interruption. But that way a deadlock could never occur.
Can anyone explain what's wrong with this thought process, and when exactly a deadlock can occur?
Thanks in advance!
As @Kayaman already said in their comment: "Acquiring a lock doesn't mean that the thread can't be interrupted".
The lock just protects ("guards") a code section from being executed by two threads at the same time (or, more precisely, from another thread entering that section before the thread inside it has left). Technically, an acquired lock only guarantees that no other thread can acquire the same lock at the same time. This means that such another thread waits in the acquire call until the lock is released (or a timeout expires, or hell freezes over… for the following discussion, these details are not required).
So when Thread1 acquires Lock1 at the beginning of Section1 and then dies without releasing the lock, no other thread may ever enter Section1. And when Thread1 is interrupted while holding the lock, it has to regain control so it can release the lock, before any other thread may enter Section1.
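Applied to your Situation 1: it is not enough to lock only snippet 1; both snippets must take the same lock, so that the check and the print form one atomic step relative to the write. A minimal sketch (class, field and lock names invented for illustration):

class SituationOne {
    private final Object lockA = new Object();
    private int a = 1;

    // snippet 1: check-then-act, now atomic with respect to snippet 2
    void snippet1() {
        synchronized (lockA) {
            if (a != 0) {
                System.out.println(a);   // a cannot become 0 between the check and the print
            }
        }
    }

    // snippet 2: must take the same lock, otherwise the guarantee is void
    void snippet2() {
        synchronized (lockA) {
            a = 0;
        }
    }
}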
Your second scenario is now prone to a deadlock:
Thread1 acquires Lock1 when entering Section1_1, guarded by this lock
Thread2 acquires Lock2, guarding Section2_1
Thread1 advances to the beginning of Section1_2, which is guarded by Lock2
The attempt by Thread1 to acquire Lock2 fails, because it is already held by Thread2
Thread1 now waits for Lock2 to be released
Meanwhile Thread2 has advanced to the beginning of Section2_2, which is guarded by Lock1
As Lock1 is already held by Thread1, the attempt by Thread2 to acquire that lock fails
Now Thread2 waits for Lock1 to be released
As both threads are now mutually waiting for the other one to release its lock, your program is trapped in a deadlock
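A minimal, hypothetical sketch of that sequence with two ReentrantLocks (thread, lock and class names invented; the sleeps only make the unlucky interleaving likely, they are not required for the deadlock):

import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    private static final ReentrantLock lock1 = new ReentrantLock();
    private static final ReentrantLock lock2 = new ReentrantLock();

    public static void main(String[] args) {
        Thread t3 = new Thread(() -> {
            lock1.lock();                              // Section1_1
            try {
                pause();                               // let t4 grab lock2
                lock2.lock();                          // Section1_2: blocks forever
                try { /* work needing both locks */ } finally { lock2.unlock(); }
            } finally {
                lock1.unlock();
            }
        });
        Thread t4 = new Thread(() -> {
            lock2.lock();                              // Section2_1
            try {
                pause();                               // let t3 grab lock1
                lock1.lock();                          // Section2_2: blocks forever
                try { /* work needing both locks */ } finally { lock1.unlock(); }
            } finally {
                lock2.unlock();
            }
        });
        t3.start();
        t4.start();
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}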
An easy way to prevent that is to ensure that locks are always taken in the same order (also known as a lock-ordering or 'ABC' rule): when threads need the locks Lock_One, Lock_Two and Lock_Three, they all have to acquire them in one agreed sequence, for example first Lock_One, then Lock_Three, finally Lock_Two. Even though that particular sequence looks illogical (because of the chosen names…), the fixed order is what guarantees that you do not get a deadlock: a thread can acquire Lock_Three only if it has already acquired Lock_One, or it does not need Lock_One at all; the same applies to Lock_Two.
If for some reason it is required to acquire Lock_Two before Lock_Three, rename the locks so that the ABC rule reads naturally again, and adjust your code afterwards!
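Applied to the hypothetical sketch above, the fix is to make both threads acquire lock1 first and lock2 second; with a single agreed order the circular wait can never form:

import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockingDemo {
    private static final ReentrantLock lock1 = new ReentrantLock();
    private static final ReentrantLock lock2 = new ReentrantLock();

    // Every thread uses the same acquisition order: lock1, then lock2.
    static void doGuardedWork() {
        lock1.lock();
        try {
            lock2.lock();
            try {
                // critical work needing both locks
            } finally {
                lock2.unlock();
            }
        } finally {
            lock1.unlock();
        }
    }
}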
So, a reentrant lock increments its count by one if the current thread acquires the lock again. What I am unable to understand is why and how that helps or benefits us.
The reason a reentrant lock does this is so that the thread that has already acquired the lock is not blocked when it acquires it again.
For example: let's say Thread A acquires your reentrant lock, Lock A. Then Thread B tries to acquire Lock A, which results in Thread B being blocked (more about the thread states can be found here). Now Thread A tries to acquire Lock A (again).
Because the reentrant lock now increments its count, Thread A is not blocked. Thread A still owns the lock and can continue (the lock stores information about the acquisition depth). When it (sooner or later) releases the lock, the count is decremented again to check whether Thread A still needs the lock. Only when the count reaches 0, meaning Thread A has released the lock once for every time it acquired it, can Thread B take over the lock.
Without reentrancy you would now have a deadlock. Why? Because Thread A holds the lock and would be waiting to acquire it again.
Concurrency can be really complicated; reentrancy helps (just a bit) to reduce this complexity.
This helps in the unusual situation where you want to call another method that also requires the lock.
import java.util.concurrent.locks.ReentrantLock;

public class Example {

    private final ReentrantLock lock = new ReentrantLock();

    public void doSomething() {
        lock.lock();
        try {
            // Something.
        } finally {
            lock.unlock();
        }
    }

    public void somethingElse() {
        lock.lock();
        try {
            // Something else.
            // We can now call another locking method without risking my lock being released.
            doSomething();
        } finally {
            lock.unlock();
        }
    }
}
Here, any caller can invoke doSomething and it will acquire the lock, do its thing and then release the lock when unlock is called.
However, when somethingElse is called and it in turn calls doSomething, the nested call merely increases the lock count. When doSomething unlocks, it does not release the lock; it only counts the lock count down, leaving the final unlock in somethingElse to actually release the lock.
The point of incrementing a count for the lock is to keep track of how many times the thread acquired the lock, so that the lock won't actually be released until the thread indicates readiness to release the lock the same number of times.
The assumption is that commands for locking will be matched with commands for releasing the lock.
Say my code enters Code Section A, which requires the lock.
Then without exiting Code Section A, it enters Code Section B which also requires the same lock. As you note we have the lock, so we needn't block.
But we'll leave Section B, and Section B is written to release the lock when we exit it. (Otherwise code that reaches Section B without already having the lock would never release the lock.)
We're still in Section A, though, so we don't want to really give up the lock yet, or another thread could take it while we're in Section A.
So when we entered Section B we incremented the lock count, which means when we exit Section B we can reduce the count by one and, seeing that it's not back to 0, not release it.
Then when Section A again releases the lock, the count falls back to 0 and that is when we really let the lock go.
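ReentrantLock exposes this count directly through getHoldCount(), so the behaviour described above is easy to observe; a small self-contained sketch:

import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();                                  // "Section A" acquires the lock
        System.out.println(lock.getHoldCount());      // 1
        lock.lock();                                  // "Section B" re-acquires it
        System.out.println(lock.getHoldCount());      // 2
        lock.unlock();                                // leaving Section B: count drops, lock still held
        System.out.println(lock.getHoldCount());      // 1
        lock.unlock();                                // leaving Section A: now the lock is really released
        System.out.println(lock.getHoldCount());      // 0
    }
}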
Say I have three threads, thread 1, thread 2, and thread 3 all sharing the same lock. Thread 2 acquires the lock, does some work and then blocks via a call to the await method. Thread 1 then acquires the lock, does some work, and during the middle of it, thread 3 tries to acquire the lock but is blocked since thread 1 is holding it. Thread 1 finishes working and, before terminating, signals thread 2 that it can reacquire the lock. So what happens then? Will thread 2 or thread 3 acquire the lock next?
Thank you so much for your time and help in advance.
If no priority is given, whoever comes first will acquire the lock.
While mutual exclusion provides the safety property, it does not by itself ensure the liveness property. There can be cases where one thread keeps winning the race for the lock, resulting in starvation (other threads wait forever because someone keeps occupying the lock).
Googling the highlighted keywords will help you understand more. I found these slides really comprehensive: http://www.cs.cornell.edu/Courses/cs414/2004su/slides/05_schedule.pdf
If you're using a ReentrantLock (or any of its subclasses), you can pass a "fairness" flag to the constructor. If set to true, this ensures that control of the lock passes to the longest-waiting thread in the lock's queue. Note that Thread 2 only re-enters that queue at the moment it is signalled, so at that point Thread 3 has typically been queued for the lock longer.
Lock lock = new ReentrantLock(true);
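A small, hypothetical sketch of the await/signal hand-off with a fair lock (class, field and method names invented); note that even after being signalled, the waiting thread must reacquire the lock before await() returns:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SignalDemo {
    private final Lock lock = new ReentrantLock(true);     // fair lock
    private final Condition workDone = lock.newCondition();
    private boolean done = false;

    // "Thread 2": waits until the work is done (await releases the lock while waiting).
    void awaitWork() throws InterruptedException {
        lock.lock();
        try {
            while (!done) {
                workDone.await();
            }
        } finally {
            lock.unlock();
        }
    }

    // "Thread 1": does the work, then signals and releases the lock.
    void finishWork() {
        lock.lock();
        try {
            done = true;
            workDone.signal();   // the waiter must still reacquire the lock before await() returns
        } finally {
            lock.unlock();
        }
    }
}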
Hi, my question is: how does synchronization work?
In simple words, we know that if a thread enters a synchronized block by acquiring a lock on some reference, then no other thread can acquire that lock until the first one exits the synchronized block.
But my question is: if a thread has acquired the lock on a reference and is executing methodA(), which contains a synchronized block, can another thread acquire a lock on the same reference and execute methodB(), which also contains a synchronized block?
Synchronization is for mutual exclusion
Whenever you synchronize on an object, a lock is obtained on the monitor of that object.
(Diagram of a monitor's entry set, owner and wait set omitted here; image source: Thread synchronization.)
As the diagram shows, whenever a thread acquires the lock on a monitor it becomes the owner thread, and no other thread can obtain the lock on the same monitor unless the owner thread enters the wait state or releases the lock.
That said, another point to keep in mind is that the locks used in synchronized blocks are reentrant, which means that if Thread 1 is the owner of the lock and the same thread tries to acquire the lock it already owns, Java will allow that.
OK, then I have an issue with:
synchronized (b) {
    try {
        b.wait();
    } catch (InterruptedException e) {}
}
Now, as I have acquired a lock on the b object, it means no other thread can acquire a lock on the b object.
On calling wait(), the owner thread releases the lock and goes into the wait set, as shown in the diagram. After that, some other thread from the entry set can get the lock.
I believe that the second thread will have to wait until the first thread has released the lock.
Have a look at the link Java synchronized references for more information.
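A minimal, hypothetical sketch of that hand-off (names invented): because b.wait() releases b's monitor, a second thread can enter its own synchronized (b) block and notify the waiter:

public class WaitNotifyDemo {
    private static final Object b = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (b) {
                try {
                    System.out.println("waiter: holding b, now waiting (releases b's monitor)");
                    b.wait();
                    System.out.println("waiter: woken up and reacquired b's monitor");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        waiter.start();
        Thread.sleep(100);                 // crude way to let the waiter reach wait() first
        synchronized (b) {                 // possible only because wait() released the monitor
            System.out.println("main: entered synchronized(b), notifying");
            b.notify();
        }
    }
}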
Synchronization is very simple
A thread can hold locks on any number of objects at the same time.
Only one thread can lock a given object at a given time.
When attempting to lock an object that is already locked by another thread, a thread has to wait until the object is released.
A thread that calls wait() on an object releases that object's lock (only that one, not every lock it holds) until it exits the wait state, at which point it attempts to reacquire the lock it released.
In java, a lock is acquired using a synchronized block.
To answer your question: two threads can never hold a lock on the same object at the same time.
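To make that concrete for the methodA/methodB question, here is a hypothetical sketch (class, field and method names invented): while one thread is inside methodA's synchronized block on ref, a second thread calling methodB must wait at its synchronized (ref) line, because both blocks lock the same object.

public class SyncDemo {
    private final Object ref = new Object();

    void methodA() {
        synchronized (ref) {
            // While a thread is in here, no other thread can enter
            // any block synchronized on the same 'ref'.
            doWork("methodA");
        }
    }

    void methodB() {
        synchronized (ref) {
            // A second thread calling methodB blocks here until
            // the thread inside methodA releases ref's monitor.
            doWork("methodB");
        }
    }

    private void doWork(String who) {
        System.out.println(who + " running in " + Thread.currentThread().getName());
    }
}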
I have taken the following two points from this API documentation, and I would like to know the difference between them:
Waiting threads are signalled in FIFO order.
The ordering of lock reacquisition for threads returning from waiting methods is the same as for threads initially acquiring the lock, which is in the default case not specified, but for fair locks favors those threads that have been waiting the longest.
This is about the Condition class, which is usually obtained from ReentrantLock's newCondition() method, and the bit I quoted explains the difference between the methods of Condition and the regular monitor methods of the Object class.
"Waiting threads are signalled in FIFO order." I think that, regardless of whether a lock is created fair or not, the fact that the waiting threads are signalled in FIFO order is totally irrelevant, isn't it? Because it is how the lock was constructed, fair or not, that decides how the threads are queued.
Just asking for a confirmation.
Thanks in advance.
Please see below answers to your questions:
1. Waiting threads are signalled in FIFO order.
When we invoke the await() method of Condition, the thread goes into the waiting state; the above statement refers to how these waiting threads are signalled. So if thread T1 went into the waiting state before T2, T1 will be signalled before T2.
2. The ordering of lock reacquisition for threads returning from waiting methods is the same as for threads initially acquiring the lock, which is in the default case not specified, but for fair locks favors those threads that have been waiting the longest.
In continuation of the above statement: when a waiting thread is signalled, it then tries to reacquire the lock. Although the above statement says T1 will be signalled before T2, when it comes to reacquiring the lock, the order of reacquisition follows the rules of the Lock itself. So it depends on how the Lock object was created. While creating the Lock you might have specified a fairness parameter:
ReentrantLock(boolean fair)
If yes, then that parameter is used; if not, the default behaviour of the lock applies. You can read more about ReentrantLock at this link.
There could be more explanations of these statements; I have just tried to detail my understanding here. Hope this was able to clarify.
Cheers !!
As long as a lock is created either fair or not, the fact that the waiting threads are signaled in a FIFO order is totally irrelevant, isn't it? Because anyhow it's whether they have been constructed, fair or not, which decides how they are queued.
I think it is relevant.
Consider a scenario where T1 and T2 are waiting on a condition C (with T1 having waited longer than T2), T3 is running inside the monitor, and T4 is waiting for its initial lock acquisition. T3 signals C and leaves the monitor, releasing the lock. Let's suppose no spurious wakeups occur.
If the lock is fair, T4 will definitely acquire the lock before T1, but the fact that waiting threads are signaled in FIFO order will guarantee you that T1 will acquire the lock before T2.
Also, if the lock is not fair, we can't say which thread will acquire the lock first between T1 and T4, but again the fact that waiting threads are signaled in FIFO order guarantees that T1 will acquire the lock before T2, provided no other signals occur until T1 acquires the lock (for example in case T1 is responsible for the next signaling).
I think the source code can give us more clues about how it works. ReentrantLock.newCondition() returns a ConditionObject defined in AbstractQueuedSynchronizer. Here is the source code link: AQS source code.
1. Waiting threads are signalled in FIFO order.
There are two queues in AbstractQueuedSynchronizer.
One is for threads waiting for the lock (call it the lock waiting queue); you will see the two volatile variables head and tail in AbstractQueuedSynchronizer's class definition, and the fairness parameter affects this queue's behaviour. When you create a fair ReentrantLock and call acquire, AQS calls FairSync's tryAcquire to check whether the current thread is the first thread waiting in the lock waiting queue; see hasQueuedPredecessors.
The other queue is the signal queue in the definition of ConditionObject; there you will see the two variables firstWaiter and lastWaiter. When await is called, a node is added to the tail of this queue, and when signal is called, a node is dequeued from the head and added to the lock waiting queue to reacquire the lock. Being added to the lock waiting queue does not mean the thread is woken up; rather, Lock.unlock() is called after signal, and that wakes up the waiters; see unparkSuccessor.
2. The ordering of lock reacquisition for threads returning from waiting methods is the same as for threads initially acquiring the lock, which is in the default case not specified, but for fair locks favors those threads that have been waiting the longest.
Waking up from the await method does not mean the thread holds the lock; it calls acquireQueued to reacquire the lock and may be parked again.
In my understanding, the order of initially acquiring the lock is the same as the order of calling await, and therefore the same as the order of calling acquireQueued. What confused me was "but for fair locks favors those threads that have been waiting the longest": when a thread wakes up from await, in my opinion it will be the first thread in the lock waiting queue, and when it calls acquireQueued and checks p == head && tryAcquire(arg), whether the lock is fair or not has no effect.
Hope this helps, and let me know if I am wrong.
In my program I am using a Condition object created from a
private static final Lock lock = new ReentrantLock();
like so:
private static final Condition operationFinished = MyClass.lock.newCondition();
Occasionally (as always happens with concurrency problems) I encounter the following behavior:
Thread1 acquires the lock
Thread1 calls operationFinished.awaitNanos(); this should suspend Thread1 and release the lock.
Thread2 tries to acquire the same lock, but debugging output shows that Thread1 is still holding the lock!
According to documentation this behavior is impossible, because upon awaitNanos() Thread1 first releases the lock and then suspends.
If it didn't release the lock, then it would not suspend, therefore Thread2 could never even get a possibility to try to get hold on the lock.
Has anybody experienced something similar? This error happens about once in 100 runs, but it still indicates that I am either not using the concurrency utilities properly, or that there is some kind of bug in the java.util.concurrent.* package (which I doubt).
UPDATE:
In response to Peter's answer:
I observe the following behavior: apparently the two threads deadlock each other. I can see that Thread2 blocks (waiting for the lock) and at the same time awaitNanos() in Thread1 never times out.
Are you sure that the wait time hasn't finished? If you wait for a short period of time (a few hundred nanoseconds, for example), the wait time could expire before Thread2 can fully start, in which case Thread1 might be reactivated first.
Depending on how you are viewing this information: I have seen many examples where multiple threads that wait() on an object are all still reported as holding the same lock. It may be that the stack trace or monitoring output is misleading.
Say you have Thread1 holding the lock but sitting in awaitNanos(), and Thread2 trying to obtain the lock; sometimes Thread3 is holding the lock as well...
I would do a jstack -l {pid} to check all the threads which might be holding the lock.
If the threads deadlock, awaitNanos() won't return (and neither will wait()), as it must reacquire the lock before doing so (unless it is interrupted).
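For reference, a minimal sketch of the usual timed-wait pattern (class and field names made up, not the poster's actual code); note that even after the timeout expires, awaitNanos() can only return once the thread has reacquired the lock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedWaitDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition operationFinished = lock.newCondition();
    private boolean finished = false;

    // Waits up to one second for the operation to finish.
    boolean awaitOperation() throws InterruptedException {
        lock.lock();
        try {
            long nanos = TimeUnit.SECONDS.toNanos(1);
            while (!finished) {
                if (nanos <= 0) {
                    return false;                                  // timed out
                }
                nanos = operationFinished.awaitNanos(nanos);       // returns the remaining wait time
            }
            return true;
        } finally {
            lock.unlock();
        }
    }

    void markFinished() {
        lock.lock();
        try {
            finished = true;
            operationFinished.signalAll();
        } finally {
            lock.unlock();
        }
    }
}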