I have a question about ReadWriteLock good practice. I've only ever used synchronized blocks before, so please bear with me.
Is the code below a correct way in which to use a ReadWriteLock? That is,
Obtain the lock in the private method.
If a condition is met, return from the private method having not released the lock. Release the lock in the public method.
Alternatively:
Obtain the lock in the private method.
If the condition is not met, release the lock immediately in the private method.
Many thanks
private List<Integer> list = new ArrayList<Integer>();
private ReadWriteLock listLock = new ReentrantReadWriteLock();
public int methodA(int y) {
...........
long ago = methodB(y);
list.remove(y);
listLock.writeLock().unlock();
}
private long methodB(int x) {
listLock.writeLock().lock();
if(list.contains(x)) {
long value = // do calculations on x
return value;
}
else {
listLock.writeLock().unlock();
// return something else unconnected with list
}
}
Normally when using locks you would do something similar to this.
Lock lock = ...; // Create type of lock
lock.lock();
try {
// Do synchronized stuff
}
finally {
lock.unlock();
}
This ensures that the lock is always unlocked at the end of the block, no matter whether an exception is thrown. Since you are using a reentrant lock, you can place this pattern in both methods and it will work correctly, not releasing the lock until the last finally block executes.
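Applied to the methods from the question, the pattern looks like this. This is only a sketch: the calculation bodies are placeholders, and the add helper exists purely so the example is self-contained.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockDemo {
    private final List<Integer> list = new ArrayList<>();
    private final ReadWriteLock listLock = new ReentrantReadWriteLock();

    public long methodA(int y) {
        listLock.writeLock().lock();       // outer acquisition
        try {
            long result = methodB(y);
            list.remove(Integer.valueOf(y));
            return result;
        } finally {
            listLock.writeLock().unlock(); // always released, even on exception
        }
    }

    private long methodB(int x) {
        listLock.writeLock().lock();       // reentrant: the same thread may lock again
        try {
            if (list.contains(x)) {
                return x * 2L;             // placeholder calculation
            }
            return -1L;                    // placeholder "something else"
        } finally {
            listLock.writeLock().unlock(); // releases only the inner hold
        }
    }

    public void add(int x) {               // helper so the sketch can be exercised
        listLock.writeLock().lock();
        try {
            list.add(x);
        } finally {
            listLock.writeLock().unlock();
        }
    }
}
```

Because the lock is reentrant, the nested acquisition in methodB succeeds immediately when called from methodA, and the lock is fully released only when methodA's finally block runs.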
Edit: the Javadoc for the Lock interface reiterates what I posted.
Related
If a class has a field of type int (not AtomicInteger and without the volatile keyword) and all access to this field happens under read/write locks - will this field be thread-safe in this case? Or could a thread at some moment see a stale value of this field from a cache instead of the real one?
public static class Example {
private int isSafe;
private final ReadWriteLock lock;
public Example(int i) {
isSafe = i;
lock = new ReentrantReadWriteLock();
}
public int getIsSafe() {
final Lock lock = this.lock.readLock();
lock.lock();
try {
return isSafe;
} finally {
lock.unlock();
}
}
public void someMethod1() {
final Lock lock = this.lock.writeLock();
lock.lock();
try {
isSafe++;
} finally {
lock.unlock();
}
}
}
Yes, this approach is thread-safe. As long as no thread holds or has requested the write lock, multiple threads can hold the read lock simultaneously. That means multiple threads can read the data at the same moment, as long as no thread is writing or updating it.
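That read-sharing can be demonstrated directly. In the sketch below (class and method names are mine), each reader thread counts down a latch while holding the read lock and then waits for the other reader; the latch can only reach zero if both threads are inside the read-locked section at the same time.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadSharingDemo {
    // Returns true once both reader threads held the read lock simultaneously.
    public static boolean demo() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        CountDownLatch bothInside = new CountDownLatch(2);
        Runnable reader = () -> {
            lock.readLock().lock();
            try {
                bothInside.countDown();      // announce "I'm inside the read lock"
                bothInside.await();          // wait until the other reader is inside too
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.readLock().unlock();
            }
        };
        Thread t1 = new Thread(reader);
        Thread t2 = new Thread(reader);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return bothInside.getCount() == 0;   // both readers overlapped under the read lock
    }
}
```

If the read lock were exclusive, the two threads would deadlock on the latch; with ReentrantReadWriteLock the method completes, confirming that readers share the lock.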
Answer from #pveentjer, from the comments under the question:
It is important to understand that caches on modern CPUs are always
coherent due to a cache coherence protocol like MESI. Another
important thing to understand is that correctly synchronized programs
exhibit sequentially consistent behavior, and for sequential consistency
the real-time order isn't relevant. So reads and writes can be skewed
as long as nobody can observe a violation of the program order.
I am trying to understand what the use of condition.await() is if I am already calling lock.lock().
If I understand locks correctly, once I call lock.lock() the thread will not proceed any further if some other thread holds the lock.
So, in this case if pushToStack() has acquired a lock by doing lock.lock() then what is the use of checking for stackEmptyCondition.await() in the popFromStack() method? Because anyway, the code will stop at the lock.lock() line in the popFromStack() method. What am I missing/wrong?
public class ReentrantLockWithCondition {
Stack<String> stack = new Stack<>();
int CAPACITY = 5;
ReentrantLock lock = new ReentrantLock();
Condition stackEmptyCondition = lock.newCondition();
Condition stackFullCondition = lock.newCondition();
public void pushToStack(String item){
try {
lock.lock();
while(stack.size() == CAPACITY) {
stackFullCondition.await();
}
stack.push(item);
stackEmptyCondition.signalAll();
} finally {
lock.unlock();
}
}
public String popFromStack() {
try {
lock.lock(); // we are blocked here to acquire a lock
while(stack.size() == 0) {
stackEmptyCondition.await(); // then why do we need to check this again?
}
return stack.pop();
} finally {
stackFullCondition.signalAll();
lock.unlock();
}
}
}
The point is the Condition, not the Lock.
It is often the case that a program needs to wait until either "something happens" or "something is in a particular state". The Condition represents what you're waiting for.
In order to program such a thing safely, some sort of locking is needed. If you're waiting for something to be in a particular state, you really want it to remain in that state while you do whatever you had in mind when you decided to wait for it. That's where the Lock comes in.
In your example, you want to wait until the stack is not full, and when you discover that the stack is not full, you want it to stay not-full (that is, prevent some other thread from pushing on to the stack) while you push something on that stack.
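The detail that resolves the confusion is that await() atomically releases the lock while waiting and re-acquires it before returning. In the sketch below (class and variable names are mine), the main thread can only acquire the lock and signal because the waiting thread's await() released it; otherwise the second lock() would block forever.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class AwaitReleasesLock {
    // Returns true once the waiting thread has been signalled and has finished.
    public static boolean demo() {
        ReentrantLock lock = new ReentrantLock();
        Condition ready = lock.newCondition();
        boolean[] flag = {false};            // the state the condition stands for

        Thread waiter = new Thread(() -> {
            lock.lock();
            try {
                while (!flag[0]) {
                    ready.await();           // atomically releases the lock while waiting
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        waiter.start();

        try {
            Thread.sleep(100);               // crude: give the waiter time to reach await()
            lock.lock();                     // succeeds because await() released the lock
            try {
                flag[0] = true;
                ready.signalAll();
            } finally {
                lock.unlock();
            }
            waiter.join(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        return !waiter.isAlive();            // the waiter woke up and completed
    }
}
```

This is also why the wait happens in a while loop: after await() returns, the thread holds the lock again but must re-check the condition, since another thread may have changed the state in between, and spurious wakeups are permitted.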
I understand (or at least I think I do;) ) the principle behind volatile keyword.
When looking into ConcurrentHashMap source, you can see that all nodes and values are declared volatile, which makes sense because the value can be written/read from more than one thread:
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
volatile V val;
volatile Node<K,V> next;
...
}
However, looking into ArrayBlockingQueue source, it's a plain array that is being updated/read from multiple threads:
private void enqueue(E x) {
// assert lock.getHoldCount() == 1;
// assert items[putIndex] == null;
final Object[] items = this.items;
items[putIndex] = x;
if (++putIndex == items.length)
putIndex = 0;
count++;
notEmpty.signal();
}
How is it guaranteed that the value inserted into items[putIndex] will be visible from another thread, given that the array elements are not volatile (I know that declaring the array itself volatile doesn't have any effect on the elements)?
Couldn't another thread hold a cached copy of the array?
Thanks
Notice that enqueue is private. Look for all calls to it (offer(E), offer(E, long, TimeUnit), put(E)). Notice that every one of those looks like:
public void put(E e) throws InterruptedException {
checkNotNull(e);
final ReentrantLock lock = this.lock;
lock.lockInterruptibly();
try {
// Do stuff.
enqueue(e);
} finally {
lock.unlock();
}
}
So you can conclude that every call to enqueue is protected by a lock.lock() ... lock.unlock() pair. You don't need volatile, because lock.lock()/unlock() also act as a memory barrier.
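The same guarantee can be shown with a plain field guarded by a single ReentrantLock (class and field names are mine). The unlock() in the writer happens-before the subsequent lock() in the reader, so the write to the plain field is visible without volatile:

```java
import java.util.concurrent.locks.ReentrantLock;

public class GuardedField {
    private final ReentrantLock lock = new ReentrantLock();
    private int value;                // plain field: no volatile needed

    public void set(int v) {
        lock.lock();                  // the unlock() below publishes the write...
        try {
            value = v;
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();                  // ...and this lock() makes it visible here
        try {
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```

ArrayBlockingQueue relies on exactly this pattern: every public method that reads or writes the items array first acquires the same ReentrantLock, so all accesses are ordered by the lock's happens-before edges.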
According to my understanding, volatile is not needed because all BlockingQueue implementations already have a locking mechanism, unlike ConcurrentHashMap.
If you look at the public methods of the queue, you will find a ReentrantLock that guards against concurrent access.
This must be really obvious, but I can't spot the answer. I need to put a lock around a variable to ensure that a couple of race-hazard conditions are avoided. From what I can see, a pretty simple solution exists using Lock, according to the android docs:
Lock l = ...;
l.lock();
try {
// access the resource protected by this lock
}
finally {
l.unlock();
}
So far, so good. However, I can't make the first line work. It would seem that something like:
Lock l = new Lock();
Might be correct, but eclipse reports, "Cannot instantiate the type Lock" - and no more.
Any suggestions?
If you're very keen on using a Lock, you need to choose a Lock implementation as you cannot instantiate interfaces.
As per the docs
You have 3 choices:
ReentrantLock
Condition: this isn't a Lock itself but rather a helper class, since Conditions are bound to Locks.
ReadWriteLock
You're probably looking for ReentrantLock, possibly with some Conditions.
This means that instead of Lock l = new Lock(); you would do:
ReentrantLock lock = new ReentrantLock();
However, if all you need to lock is a small part, a synchronized block/method is cleaner (as suggested by #Leonidos & #assylias).
If you have a method that sets the value, you can do:
public synchronized void setValue(int newValue)
{
value = newValue;
}
or if this is a part of a larger method:
public void doInfinite ()
{
//code
synchronized (this)
{
value = aValue;
}
}
Just because Lock is an interface and can't be instantiated. Use its subclasses.
Hello, I just had a phone interview and was not able to answer this question; I would like to know the answer. I believe it's advisable to reach out for answers you don't know. Please help me understand the concept.
His question was:
"The synchronized block only allows one thread a time into the mutual exclusive section.
When a thread exits the synchronized block, the synchronized block does not specify
which of the waiting threads will be allowed next into the mutual exclusive section.
Using synchronized and methods available in Object, can you implement first-come,
first-serve mutual exclusive section? One that guarantees that threads are let into
the mutual exclusive section in the order of arrival? "
public class Test {
public static final Object obj = new Object();
public void doSomething() {
synchronized (obj) {
// mutual exclusive section
}
}
}
Here's a simple example:
public class FairLock {
private int _nextNumber;
private int _curNumber;
public synchronized void lock() throws InterruptedException {
int myNumber = _nextNumber++;
while(myNumber != _curNumber) {
wait();
}
}
public synchronized void unlock() {
_curNumber++;
notifyAll();
}
}
you would use it like:
public class Example {
private final FairLock _lock = new FairLock();
public void doSomething() {
_lock.lock();
try {
// do something mutually exclusive here ...
} finally {
_lock.unlock();
}
}
}
(Note: this does not handle the situation where a caller to lock() receives an InterruptedException!)
What they were asking for is a fair mutex.
Create a FIFO queue of lock objects that threads waiting for the lock push onto and then wait on (all of this, except the waiting itself, inside a synchronized block on a separate lock).
Then, when the lock is released, an object is popped off the queue and the thread waiting on it is woken (also synchronized on the same lock used for adding the objects).
You can use ReentrantLock with fairness parameter set to true. Then the next thread served will be the thread waiting for the longest time i.e. the one that arrived first.
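For reference, fair mode is selected by passing true to the ReentrantLock constructor; the wrapper class below is just a sketch to show the shape of the API.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {
    // true selects fair mode: the longest-waiting thread acquires the lock next
    private final ReentrantLock lock = new ReentrantLock(true);

    public boolean isFair() {
        return lock.isFair();        // reports whether fair mode was requested
    }

    public void doSomething() {
        lock.lock();
        try {
            // mutually exclusive section, granted roughly in FIFO order of lock() calls
        } finally {
            lock.unlock();
        }
    }
}
```

Note that fairness has a throughput cost, and the Javadoc points out that fairness governs lock acquisition order, not thread scheduling, so it approximates rather than strictly guarantees FIFO arrival order.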
Here is my attempt. The idea is to give a ticket number to each thread; threads enter based on the order of their ticket numbers. I am not familiar with Java, so please read my comments:
public class Test {
public static final Object obj = new Object();
private int count = 0; // next ticket number to hand out
private volatile int next = 0; // ticket currently being served (volatile so waiters see updates)
public void doSomething() {
int myNumber; // my ticket number
// the critical section here is small: just pick your ticket number. Guarantees FIFO
synchronized (obj) { myNumber = count++; }
// busy waiting
while (next != myNumber);
// mutual exclusion: only the thread whose ticket is being served gets here
next++; // only one thread at a time modifies this field
}
}
The disadvantage of this answer is the busy waiting which will consume CPU time.
Using only Object's methods and synchronized is, in my view, a little difficult. Maybe by assigning each thread a priority you can guarantee ordered access to the critical section.