ReentrantLock API doc says:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread.
Note however, that fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock.
I am not able to understand the second point:
If one thread obtains the lock multiple times in succession, then as per point 1 the other threads will have waited longer, which means they should get the lock next time. So how does this not affect (the fairness of) thread scheduling? To me, a fair lock looks like nothing but longest-waiting-thread-first scheduling.
I think they're just trying to separate the fairness logic from the scheduling logic. Threads may be concurrent, but that doesn't mean they try to acquire locks simultaneously. Thread priority requests are only 'hints' to the OS and are never guaranteed in the way you might expect.
So, just because you have threads A and B, which may request a lock and may even have identical behavior, one thread may execute, acquire the lock, release it, and re-acquire it before the other thread even requests it:
A: Request Lock -> Release Lock -> Request Lock Again (Succeeds)
B: Request Lock (Denied)...
----------------------- Time --------------------------------->
Thread scheduling logic is decoupled from the Lock logic.
There are other scheduling issues too, the burden of which often falls on the software designer; see Starvation and Livelock.
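To make the timeline concrete, here is a minimal, hedged sketch (the class and thread names are mine, purely illustrative): if the scheduler happens not to run B between A's unlock() and its next lock(), A re-acquires a free lock immediately, and the fair policy has nothing to arbitrate because B was never actually waiting.

import java.util.concurrent.locks.ReentrantLock;

public class FairnessVsScheduling {
    // Fair lock: among threads *already waiting*, the longest waiter wins.
    static final ReentrantLock lock = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable worker = () -> {
            for (int i = 0; i < 3; i++) {
                lock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " holds the lock");
                } finally {
                    lock.unlock();
                }
                // If the other thread has not called lock() yet, this thread may
                // re-acquire immediately -- fairness only orders actual waiters.
            }
        };
        new Thread(worker, "A").start();
        new Thread(worker, "B").start();
    }
}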
Related
The way the Lock interface works with ReentrantLock(true) is that it uses a BlockingQueue to store the threads that want to acquire the lock. In that way, the thread that came first gets out first (FIFO). That much is clear to me.
But where do 'unfair locks', i.e. ReentrantLock(false), go? What is their internal implementation? How does the OS decide which thread to pick next? And, most importantly, are these threads also stored in a queue somewhere? (They must be somewhere.)
The class ReentrantLock does not use a BlockingQueue. It uses a non-public subclass of AbstractQueuedSynchronizer behind the scenes.
The AbstractQueuedSynchronizer class, as its documentation states, maintains “a first-in-first-out (FIFO) wait queue”. This data structure is the same for fair and unfair locks. The unfairness doesn’t imply that the lock would change the order of enqueued waiting threads, as there would be no advantage in doing that.
The key difference is that an unfair lock allows a lock attempt to succeed immediately when the lock just has been released, even when there are other threads waiting for the lock for a longer time. In that scenario, the queue is not even involved for the overtaking thread. This is more efficient than adding the current thread to the queue and putting it into the wait state while removing the longest waiting thread from the queue and changing its state to “runnable”.
When the lock is not available by the time a thread tries to acquire it, the thread will be added to the queue, and at this point there is no difference between fair and unfair locks for it (except that other threads may overtake it without getting enqueued). Since the order has not been specified for an unfair lock, it could use a LIFO data structure behind the scenes, but it’s obviously simpler to have just one implementation for both.
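As a rough, hedged sketch of that fast path (the real logic lives in the non-public NonfairSync subclass of AbstractQueuedSynchronizer; the class, field, and method names below are invented for illustration):

import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of a non-fair acquisition attempt; NOT the real JDK code.
class NonfairFastPathSketch {
    private final AtomicInteger state = new AtomicInteger(0); // 0 = free, 1 = held
    private volatile Thread owner;

    boolean tryBarge() {
        // Barge: a single atomic compare-and-set on the lock state, ignoring
        // any threads already parked in the FIFO wait queue.
        if (state.compareAndSet(0, 1)) {
            owner = Thread.currentThread();
            return true;  // acquired without ever touching the queue
        }
        return false;     // slow path: enqueue and park, same as in fair mode
    }
}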
For synchronized, on the other hand, which does not support fair acquisition, there are some JVM implementations using a LIFO structure. This may change from one version to another (or even with the same, as a side effect of some JVM options or environmental aspects).
Another interesting point in this regard, is that the parameterless tryLock() of the ReentrantLock implementation will be unfair, even when the lock is otherwise in fair mode. This demonstrates that being unfair is not a property of the waiting queue here, but the treatment of the arriving thread that makes a new lock attempt.
Even when this lock has been set to use a fair ordering policy, a call to tryLock() will immediately acquire the lock if it is available, whether or not other threads are currently waiting for the lock. This "barging" behavior can be useful in certain circumstances, even though it breaks fairness.
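A small, timing-dependent demonstration of that barging (the helper thread, the polling loop, and the printed result are mine; whether tryLock() wins the race is not guaranteed):

import java.util.concurrent.locks.ReentrantLock;

public class TryLockBarging {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock fair = new ReentrantLock(true);  // fair ordering policy

        fair.lock();                                   // hold the lock
        Thread waiter = new Thread(() -> { fair.lock(); fair.unlock(); });
        waiter.start();
        while (!fair.hasQueuedThread(waiter)) Thread.sleep(1); // wait until enqueued

        fair.unlock();
        // The untimed tryLock() may now succeed even though "waiter" has been
        // queued longer -- it barges past the fair queue. (Racy: if the waiter
        // is unparked and runs first, this prints false instead.)
        boolean barged = fair.tryLock();
        System.out.println("barged past the queue: " + barged);
        if (barged) fair.unlock(); // release so the waiter (and the JVM) can finish
        waiter.join();
    }
}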
I am trying to understand the usefulness of fairness property in Semaphore class.
Specifically, the Javadoc mentions that:
Generally, semaphores used to control resource access should be initialized as fair, to ensure that no thread is starved out from accessing a resource. When using semaphores for other kinds of synchronization control, the throughput advantages of non-fair ordering often outweigh fairness considerations.
Could someone provide an example where barging might be desired here? I cannot think past the resource-access use case. Also, why is the default the non-fair behavior?
Lastly, are there any performance implications in using the fairness behavior?
Java's built-in concurrency constructs (synchronized, wait(), notify(),...) do not specify which thread should be freed when a lock is released. It is up to the JVM implementation to decide which algorithm to use.
Fairness gives you more control: when the lock is released, the thread with the longest wait time is given the lock (FIFO processing). Without fairness (and with a very bad algorithm) you might have a situation where a thread is always waiting for the lock because there is a continuous stream of other threads.
If the Semaphore is set to be fair, there's a small overhead because it needs to maintain a queue of all the threads waiting for the lock. Unless you're writing a high-throughput/high-performance/many-core application, you probably won't see the difference, though!
Scenario where fairness is not needed
If you have N identical worker threads, it doesn't matter which one gets a task to execute.
Scenario where fairness is needed
If you have N task queues, you don't want one queue to wait forever and never acquire the lock (see the sketch below).
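For the task-queue scenario, fairness is a one-argument change when constructing the Semaphore. A minimal sketch (the permit count, thread count, and sleep are arbitrary illustration):

import java.util.concurrent.Semaphore;

public class FairSemaphoreDemo {
    // true = fair: waiting threads acquire permits in FIFO order, so no
    // queue-draining thread is starved by a continuous stream of rivals.
    static final Semaphore permits = new Semaphore(2, true);

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    permits.acquire();          // blocks in FIFO order (fair mode)
                    try {
                        System.out.println("queue-" + id + " got a permit");
                        Thread.sleep(100);      // simulate draining the queue
                    } finally {
                        permits.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}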
I scanned all the Java documentation on the synchronized statement looking for an answer to this question, with no luck.
Say I have thread1, thread2, thread3 trying to run the following code all at the same time.
synchronized (lockObj) {
    doSomething();
}
Assume thread1 gets to doSomething() first, and thread2 and then thread3 block and wait on the synchronized statement.
Question
When thread1 releases the lock, which of the threads will be released first?
What is the general order rule that applies when releasing a lock?
1. Either thread2 or thread3. There is no guarantee:
Likewise, no assumptions should be made about the order in which threads are granted ownership of a monitor or the order in which threads wake in response to the notify or notifyAll method
http://docs.oracle.com/javase/1.5.0/docs/guide/vm/thread-priorities.html#general
2. Java monitors (synchronized/wait/notify/notifyAll) are non-fair. The synchronization primitives from Java 1.5 usually have parameters to enforce fairness. Be advised that the fair versions have a considerable performance penalty; usually the non-fair version should be used: statistically, every thread will be given a chance to run, even if the order is not strictly enforced.
Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting, but have smaller variances in times to obtain locks and guarantee lack of starvation. Note however, that fairness of locks does not guarantee fairness of thread scheduling. Thus, one of many threads using a fair lock may obtain it multiple times in succession while other active threads are not progressing and not currently holding the lock. Also note that the untimed tryLock method does not honor the fairness setting. It will succeed if the lock is available even if other threads are waiting.
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReentrantLock.html#ReentrantLock%28boolean%29
When two threads try to acquire the lock on the same object, what is taken into account to decide which thread the lock should be handed over to?
According to the Java documentation for notify():
Wakes up a single thread that is waiting on this object's monitor. If any threads are waiting on this object, one of them is chosen to be awakened. The choice is arbitrary and occurs at the discretion of the implementation. A thread waits on an object's monitor by calling one of the wait methods.
So if you use synchronized(obj){} you basically have no control over which thread will obtain the lock on obj, and you cannot make any assumptions. It depends on the scheduler.
If you want fairness (that is, the next thread obtaining the lock is the first in the queue), have a look at ReentrantLock: it has a boolean flag to specify that you want to enforce fairness.
According to Java Oracle Docs:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order.
If you allow fairness, then FIFO (first-in-first-out) order is used; otherwise it seems random (from my observations).
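A quick way to observe that FIFO handoff (the sleeps are a crude, illustrative way to pin down arrival order; with plain synchronized there would be no such ordering guarantee):

import java.util.concurrent.locks.ReentrantLock;

public class FairHandoffDemo {
    static final ReentrantLock lock = new ReentrantLock(true); // fair: FIFO among waiters

    public static void main(String[] args) throws InterruptedException {
        lock.lock();                       // hold the lock so the workers queue up
        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                lock.lock();               // enqueued behind earlier arrivals
                try {
                    System.out.println("worker-" + id + " acquired the lock");
                } finally {
                    lock.unlock();
                }
            }).start();
            Thread.sleep(50);              // crude way to fix the arrival order
        }
        lock.unlock();                     // workers should now print 0, 1, 2
    }
}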
What happens to a thread that fails to acquire a lock (non-spin)? It goes to the BLOCKED state. How does it get executed again?
Lock lck = new ReentrantLock();
lck.lock();
try
{
    // critical section (empty in the original example)
}
finally
{
    lck.unlock();
}
The scheduler (or the underlying Lock implementation) is responsible for getting it running again. If the lock action was translated into a mutex call all the way into the kernel, the scheduler will not reschedule the thread until the mutex becomes available; then the OS scheduler will re-awaken the thread. Reading the Wikipedia page on Context Switch and the links from there might provide more insight into the detailed mechanisms involved.
You can also look directly at the code for ReentrantLock, though that will eventually boil your question down to some combination of primitives including AbstractQueuedSynchronizer, various atomic operations, and maybe LockSupport.park() and unpark(). You might augment your question or ask a new one if you're specifically interested in kernel-level blocking/context switches or in how various Java primitives (e.g., j.u.c.Lock or primitive object monitors) are implemented on top of the kernel.
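To get a feel for the park/unpark mechanism mentioned above, here is a minimal, self-contained sketch (the sleep is only a crude way to let the thread park first):

import java.util.concurrent.locks.LockSupport;

public class ParkUnparkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            System.out.println("parking...");
            LockSupport.park();           // descheduled until someone unparks us
            System.out.println("unparked, running again");
        });
        sleeper.start();

        Thread.sleep(100);                // give the sleeper time to park
        System.out.println("state: " + sleeper.getState()); // typically WAITING
        LockSupport.unpark(sleeper);      // the scheduler makes it runnable again
        sleeper.join();
    }
}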
In practice, because this is costly, this may be optimized away by the JVM or lock implementation. For instance, the thread may actually spin for a bit to see if the lock is freed before actually blocking.
Note that a Java thread may report the state BLOCKED even if the underlying OS thread is not blocked, specifically in the adaptive spinning cases described in the performance whitepaper below.
There are some great resources out there to learn about concurrency control in Java. Leading the pantheon is Java Concurrency in Practice. Some interesting discussion of synchronization performance in HotSpot 6.0 in the Java SE 6 Performance Whitepaper and some related slides.
Lock acquisition never fails. Think of it as having not yet succeeded.
Sure, there are some cases where it will never succeed, but there's no transition event where the thread is notified of a failure… it just keeps waiting.
The thread holding the lock unlocks the lock and then the (or "a") blocked thread is woken up. If the thread holding the lock never releases the lock (perhaps because it's blocked on another resource) then you get deadlock. A non-spinning lock will typically use the wait()/notify() primitives or something similar such that the thread is notified when the lock again becomes available.
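As a hedged sketch of that idea, a toy non-spinning lock can be built directly on wait()/notify() (illustration only: no reentrancy, no fairness, minimal error handling):

// A minimal blocking lock in the style the answer describes; not production code.
class SimpleBlockingLock {
    private boolean held = false;

    public synchronized void lock() throws InterruptedException {
        while (held) {
            wait();            // deschedule until an unlock() notifies us
        }
        held = true;
    }

    public synchronized void unlock() {
        held = false;
        notify();              // wake one waiter; it re-checks `held` in its loop
    }
}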