How can I wait for a lock before checking it?
Basically, I want to cache a list in a private variable. I only populate that list every once in a while; the other 99.999999% of the time I want to read it, so I don't want to lock on every read.
public class SomeServlet extends CustomServlet {

    private static Object locker;
    private static List<String> someList;

    // moderately heavy populate, not called very often
    private void populateList() {
        // lock
        someList.clear();
        someList.addAll(getTheListStuff());
        // unlock
    }

    public void doGetLikeMethod(HttpServletRequest req, HttpServletResponse res) {
        // looking for some sort of method to check for the lock
        // and wait for it, preferably with a timeout
        if (!locker.isLocked(1000) && someList.isEmpty()) {
            populateList();
        }
        // the lock is held far less than 0.01% of the time this is checked
    }

    public void updateSomeList() {
        populateList(); // populate the list for some other reason
    }
}
This is in a servlet and is not using a public framework. Our lead is very protective of adding any extra libraries, so I'd like to avoid that if at all possible. We have all the Apache and java.util stuff. I'm not sure if I should use some sort of synchronized block, a ReadWriteLock, a ReentrantReadWriteLock, or a Lock.
I think I explained this well enough. Let me know if I need to clarify anything. I may be approaching this entirely wrong.
Use java.util.concurrent.locks.ReentrantReadWriteLock. Multiple threads can hold the read lock at a time, as long as no write is going on, so it satisfies your efficiency desires. Only a single thread can hold the write lock at a time, and only when no threads hold the read lock, so that ensures consistency between writes and reads. You probably want to set fairness on, so that write threads will eventually be able to do their writes even when there is constant contention for reads.
from http://tutorials.jenkov.com/
The rules by which a thread is allowed to lock the ReadWriteLock,
either for reading or writing the guarded resource, are as follows:

Read Lock: If no threads have locked the ReadWriteLock for writing,
and no thread has requested a write lock (but not yet obtained it).
Thus, multiple threads can lock the lock for reading.

Write Lock: If no threads are reading or writing. Thus, only one
thread at a time can lock the lock for writing.
ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

readWriteLock.readLock().lock();
// multiple readers can enter this section
// if not locked for writing, and no writers are waiting
// to lock for writing.
readWriteLock.readLock().unlock();

readWriteLock.writeLock().lock();
// only one writer can enter this section,
// and only if no threads are currently reading.
readWriteLock.writeLock().unlock();
So I think it's what you need
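For the servlet in the question, a minimal sketch could look like the following; CustomServlet, getTheListStuff() and doGetLikeMethod() are taken from the question, and the try/finally placement is the important part:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SomeServlet extends CustomServlet {

    private static final ReadWriteLock lock = new ReentrantReadWriteLock(true); // true = fair
    private static final List<String> someList = new ArrayList<>();

    // moderately heavy populate, not called very often
    private void populateList() {
        lock.writeLock().lock();           // waits until all readers are done
        try {
            someList.clear();
            someList.addAll(getTheListStuff());
        } finally {
            lock.writeLock().unlock();     // always release, even on exception
        }
    }

    public void doGetLikeMethod(HttpServletRequest req, HttpServletResponse res) {
        boolean needsPopulating;
        lock.readLock().lock();            // any number of readers may hold this at once
        try {
            needsPopulating = someList.isEmpty();
        } finally {
            lock.readLock().unlock();
        }
        if (needsPopulating) {
            populateList();                // takes the write lock internally
        }

        lock.readLock().lock();
        try {
            // ... read from someList ...
        } finally {
            lock.readLock().unlock();
        }
    }

    public void updateSomeList() {
        populateList();                    // populate the list for some other reason
    }
}

Note that ReentrantReadWriteLock cannot upgrade a read lock to a write lock, which is why populateList() is only called after the read lock has been released.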
In a case where you write rarely and read often, you can use a copy-on-write approach.
I have rewritten the code with the solution I mentioned.
public class SomeServlet extends CustomServlet {

    private volatile List<String> someList;

    // moderately heavy populate, not called very often
    private void populateList() {
        someList = getTheListStuff();
    }

    public void doGetLikeMethod(HttpServletRequest req, HttpServletResponse res) {
        if (someList == null) {
            // If populating is expensive and you do not want to risk doing it twice,
            // keep the synchronized block and the second null check (double-checked locking).
            // If populating is cheap, you can drop the synchronization and the nested if.
            synchronized (this) {
                if (someList == null) {
                    populateList();
                }
            }
        }
    }

    public void updateSomeList() {
        populateList(); // populate the list for some other reason
    }
}
I have three different threads which create three different objects to read/manipulate some data which is common to all the threads. Now, I need to ensure that we are giving access to only one thread at a time.
The example goes something like this.
public interface CommonData {
    public void addData();            // adds data to the cache
    public String getDataAccessKey(); // key that will be common across different threads for each data type
}

/*
 * Singleton class
 */
public class CommonDataCache {
    private final Map dataMap = new HashMap(); // this takes keys and values as custom objects
}
The implementation class of the interface would look like this
class CommonDataImpl implements CommonData {

    private String key;

    public CommonDataImpl(String key) {
        this.key = key;
    }

    public void addData() {
        // access the singleton cache class and add
    }

    public String getDataAccessKey() {
        return key;
    }
}
Each thread will be invoked as follows:
CommonData data = new CommonDataImpl("Key1");
new Thread(() -> data.addData()).start();
CommonData data1 = new CommonDataImpl("Key1");
new Thread(() -> data1.addData()).start();
CommonData data2 = new CommonDataImpl("Key1");
new Thread(() -> data2.addData()).start();
Now, I need to synchronize those threads if and only if the keys of the data objects (passed on to the threads) are the same.
My thought process so far:
I tried to have a class that provides the lock on the fly for a given key which looks something like this.
/*
* Singleton class
*/
public class DataAccessKeyToLockProvider {

    private final Map<String, ReentrantLock> accessKeyToLockHolder = new ConcurrentHashMap<>();

    private DataAccessKeyToLockProvider() {
    }

    public ReentrantLock getLock(String key) {
        // returns the existing lock for this key, or creates one on first use
        return accessKeyToLockHolder.computeIfAbsent(key, k -> new ReentrantLock());
    }

    public void removeLock(String key) {
        accessKeyToLockHolder.remove(key);
    }
}
So each thread would call this class, get the lock, use it, and remove it once the processing is done. But this can result in a case where the second thread gets the lock object that was inserted by the first thread and waits for the first thread to release it. Once the first thread removes the lock, the third thread would get a different lock altogether, so the second and third threads are no longer in sync.
Something like this:
new Thread(() -> {
    ReentrantLock lock = DataAccessKeyToLockProvider.get(data.getDataAccessKey());
    lock.lock();
    data.addData();
    lock.unlock();
    DataAccessKeyToLockProvider.remove(data.getDataAccessKey());
}).start();
Please let me know if you need any additional details to help me resolve my problem.
P.S.: Removing the key from the lock provider is kind of mandatory, as I will be dealing with millions of keys (not necessarily strings), so I don't want the lock provider to eat up my memory.
Inspired by the solution provided by @rzwitserloot, I have tried to put together some generic code that waits for the other thread to complete its processing before giving access to the next thread.
public class GenericKeyToLockProvider<K> {

    private volatile Map<K, ReentrantLock> keyToLockHolder = new ConcurrentHashMap<>();

    public synchronized ReentrantLock getLock(K key) {
        ReentrantLock existingLock = keyToLockHolder.get(key);
        try {
            if (existingLock != null && existingLock.isLocked()) {
                existingLock.lock(); // waits for the thread that acquired the lock previously to release it
            }
            return keyToLockHolder.put(key, new ReentrantLock()); // override with the new lock
        } finally {
            if (existingLock != null) {
                existingLock.unlock();
            }
        }
    }
}
But it looks like the entry made by the last thread would never be removed. Any way to solve this?
First, a clarification: you either use ReentrantLock, OR you use synchronized; you don't synchronize on a ReentrantLock instance (you can synchronize on any object you want). If you go the lock route, call the lock method on your lock object and use a try/finally guard to ensure you always call unlock later (and don't use synchronized at all).
synchronized is a low-level API. Lock and all the other classes in the java.util.concurrent package are higher level and offer far more abstractions. It's generally a good idea to peruse the javadoc of the classes in the j.u.c package from time to time; there is very useful stuff in there.
The key issue is to remove all references to a lock object (thus ensuring it can be garbage collected), but not until you are certain there are zero threads still waiting on it. Your current approach has no way of knowing how many threads are waiting, and that needs to be fixed. Once you return a Lock instance, it's 'out of your hands' and it is not possible to track whether the caller is ever going to call lock on it. Thus, you can't do that. Instead, do the locking as part of the job: the getLock method should actually perform the locking as part of the operation. That way, YOU get to control the process flow. However, let's first take a step back:
You say you'll have millions of keys. Okay; but it is somewhat unlikely you'll have millions of threads. After all, a thread requires a stack, and even using the -Xss parameter to reduce the stack size to the minimum of 128k or so, a million threads implies you're using up 128GB of RAM just for stacks; seems unlikely.
So, whilst you might have millions of keys, the number of 'locked' keys is MUCH smaller. Let's focus on those.
You could make a ConcurrentHashMap which maps your string keys to lock objects. Then:
To acquire a lock:
Create a new lock object (literally: Object locker = new Object(); - we are going to be using synchronized) and add it to the map using putIfAbsent. putIfAbsent returns null when your key/value pair was added, and the existing value when another thread got there first, so a null return means you created the entry and hold the lock: go, run the code. Once you're done, acquire the monitor of your object, notify everybody waiting, remove the entry, and release:
public void doWithLocking(String key, Runnable op) {
    Object locker = new Object();
    Object o = concurrentMap.putIfAbsent(key, locker);
    if (o == null) { // null means our locker went into the map: we hold the lock
        op.run();
        synchronized (locker) {
            locker.notifyAll();          // wake up everybody waiting.
            concurrentMap.remove(key);   // this has to be inside!
        }
    } else {
        ...
    }
}
To wait until the lock is available, first acquire a lock on the locker object, THEN check if the concurrentMap still contains it. If not, you're now free to retry this operation. If it's still in, then we now wait for a notification. In any case we always just retry from scratch. Thus:
public void performWithLocking(String key, Runnable op) throws InterruptedException {
    while (true) {
        Object locker = new Object();
        Object o = concurrentMap.putIfAbsent(key, locker);
        if (o == null) { // we added the entry, so we hold the lock
            try {
                op.run();
            } finally {
                // We want to clean up even if the operation throws!
                synchronized (locker) {
                    locker.notifyAll();          // wake up everybody waiting.
                    concurrentMap.remove(key);   // this has to be inside!
                }
            }
            return;
        } else {
            synchronized (o) {
                if (concurrentMap.containsKey(key)) o.wait();
            }
        }
    }
}
Instead of this setup where you pass the operation to execute along with the lock key, you could have tandem 'lock' and 'unlock' methods but now you run the risk of writing code that forgets to call unlock. Hence why I wouldn't advise it!
You can call this with, for example:
keyedLockSupportThingie.doWithLocking("mykey", () -> {
System.out.println("Hello, from safety!");
});
I have a requirement which is as below:
1) A class which has all static methods and a static list. This list stores some objects on which I perform some operation.
2) Now this operation is called from multiple threads.
3) This operation call is not sharing any other common data, so the method is not synchronized.
4) Now, whenever this list gets updated with new objects, I have to stop this operation call.
class StaticClass {

    static List<SomeObjects> list = new ArrayList<>();

    static void performOperation() {
        // call the operation on all objects in the list
    }

    static void updateList() {
        // update the list: add, update or remove objects
    }
}
Possible solutions:
1) Make performOperation() and updateList() synchronized. But performOperation() is called at a very high frequency, while updateList() is called rarely.
2) Use read/write locks: a read lock in performOperation() and a write lock in updateList(). A sample is shown below:
class StaticClass {

    static List<SomeObjects> list = new ArrayList<>();
    static final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    static void performOperation() {
        readWriteLock.readLock().lock();
        // call the operation on all objects in the list
        readWriteLock.readLock().unlock();
    }

    static void updateList() {
        readWriteLock.writeLock().lock();
        // update the list: add, update or remove objects
        readWriteLock.writeLock().unlock();
    }
}
So which solution is better? Is this a correct usage of read/write locks? The reason I am confused about approach 2 is that there is no data in performOperation() that specifically needs read access or write access; I just cannot call this method while the list is being updated. So I am not sure whether this is an appropriate usage of read/write locks or not.
ReadWriteLock is more efficient when a lot of reading occurs, as synchronized will block everything. That said, ReadWriteLock is more error prone: in your example, for instance, if an exception is thrown between lock() and unlock(), the lock is never released and eventually every caller blocks forever. It is also convenient to store the read and write lock views in fields rather than fetching them on every call. So your example should look more like:
class StaticClass {

    static List<SomeObjects> list = new ArrayList<>();
    static final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    static final Lock readLock = readWriteLock.readLock();
    static final Lock writeLock = readWriteLock.writeLock();

    static void performOperation() {
        readLock.lock();
        try {
            // call the operation on all objects in the list
        } finally {
            // This ensures the read lock is unlocked even when an exception occurs
            readLock.unlock();
        }
    }

    static void updateList() {
        writeLock.lock();
        try {
            // update the list: add, update or remove objects
        } finally {
            // This ensures the write lock is unlocked even when an exception occurs
            writeLock.unlock();
        }
    }
}
Please note that I also added the try/finally here to avoid possible issues with exceptions. As you can see, this is quite a bit more work than the simple synchronized approach.
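For comparison, a minimal sketch of the synchronized variant (solution 1 from the question, with SomeObjects kept as the question's placeholder type):

import java.util.ArrayList;
import java.util.List;

class StaticClass {

    static List<SomeObjects> list = new ArrayList<>();

    // The class object is the implicit lock for both static synchronized methods,
    // so a read and an update can never overlap.
    static synchronized void performOperation() {
        // call the operation on all objects in the list
    }

    static synchronized void updateList() {
        // update the list: add, update or remove objects
    }
}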
Also, there is a possible alternative: CopyOnWriteArrayList. It is thread safe, so you don't have to use locks or the synchronized keyword, but it will hurt performance when there are a lot of writes to it.
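A minimal sketch of that alternative, keeping the question's placeholder names (the updateList parameter is introduced here just for illustration):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class StaticClass {

    // Every mutation copies the backing array, so readers never block
    // and always iterate over a consistent snapshot.
    static final List<SomeObjects> list = new CopyOnWriteArrayList<>();

    static void performOperation() {
        for (SomeObjects o : list) {
            // call the operation on each object; iteration uses a snapshot, no locking needed
        }
    }

    static void updateList(List<SomeObjects> newContents) {
        // Each call copies the array, which is acceptable when updates are rare.
        list.clear();
        list.addAll(newContents);
    }
}

Note that clear() followed by addAll() is not atomic; a reader in between may see an empty list. If the swap must be atomic, replacing a volatile List reference (as in the copy-on-write answer to the first question above) is the better fit.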
I need to create a class that has a shared-between-threads object (let's call it SharedObject). The special thing about SharedObject is that it holds a String that will be returned in a multithreaded environment, and sometimes the entire SharedObject will be replaced by changing the field reference to a newly created object.
I do not want to make the read and write both synchronized on the same monitor, because the write scenario happens rarely while the read scenario is quite common. Therefore I did the following:
public class ObjectHolder {

    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        if (!isObjectStillValid()) {
            obtainNewSharedObject();
        }
        return sharedObject.getCommonString();
    }

    public synchronized void obtainNewSharedObject() {
        /* This is in case multiple threads wait on this lock;
           after the first one obtains a new object the others can just
           use it and should not obtain a new one */
        if (!isObjectStillValid()) {
            sharedObject = new SharedObject(/*some parameters from somewhere*/);
        }
    }
}
From what I have read in the documentation and on Stack Overflow, the synchronized keyword ensures that only one thread can access the synchronized block on the same object instance (therefore write races and multiple unnecessary writes are a non-issue), while the volatile keyword on the field reference ensures the reference value is written directly to main memory (not cached locally).
Are there any other pitfalls I am missing?
I want to be sure that when sharedObject is written to within the synchronized block, the new value of sharedObject is visible to any other thread at the latest when the lock for obtainNewSharedObject() is released. Should this not be guaranteed, I could encounter unnecessary writes and the replacement of correct values, which would be a big problem in this case.
I know to be absolutely safe I could just make getSharedObjectString() synchronized by itself however as stated previously I do not want to block reading if not needed.
This way reading is non-blocking, when a write scenario occurs it is blocking.
I should probably mention that isObjectStillValid() is thread independent (it is based entirely on SharedObject and the system clock) and is therefore a valid thread-free check to use for write scenarios.
Edit: Please note I could not find a similar topic on stackoverflow, but it may exist. Sorry if that is the case.
Edit2: Thank you for all the comments. (Edit because apparently I cannot upvote yet; I can, but it does not show.) While my solution is functional as long as isObjectStillValid() is thread-safe, it can suffer from decreased performance due to multiple accesses to the volatile field. I will most likely improve it using the upgraded double-checked locking solution. I will also analyse all the other possibilities mentioned here in depth.
Why don't you use AtomicReference? It uses optimistic locking, meaning that no actual thread locking is involved; internally it uses compare-and-swap. If you look at the implementation, it uses volatile, and I would trust Doug Lea to implement it correctly :)
Apart from this, there are many more ways to synchronize between a lot of readers and some writers, for example ReadWriteLock.
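A minimal sketch of the AtomicReference approach applied to the ObjectHolder from the question; getCommonString() and the SharedObject constructor are taken from the question, while isStillValid() is a stand-in introduced here for the validity check:

import java.util.concurrent.atomic.AtomicReference;

public class ObjectHolder {

    private final AtomicReference<SharedObject> ref = new AtomicReference<>();

    public String getSharedObjectString() {
        SharedObject current = ref.get();             // a plain volatile read, no locking
        if (current == null || !isStillValid(current)) {
            SharedObject fresh = new SharedObject(/*some parameters from somewhere*/);
            // Only one thread wins the swap; the losers discard their copy and re-read.
            ref.compareAndSet(current, fresh);
            current = ref.get();
        }
        return current.getCommonString();
    }

    private boolean isStillValid(SharedObject candidate) {
        // validity check based on SharedObject state and the system clock, as in the question
        return true; // placeholder
    }
}

One trade-off: with compareAndSet, several threads may build a SharedObject at the same time and all but one copy is thrown away, so if construction is expensive the double-checked locking shown in the other answer may be the better choice.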
This looks like a classic double-checked locking pattern. While your implementation is logically correct - thanks to the use of volatile on sharedObject - it might not be the most performant.
The recommended pattern for Java 1.5 on is shown on the Wikipedia page linked.
// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }

    // other functions and members...
}
Note the use of a localRef for accessing the helper field. This limits access to the volatile field in the simple case to a single read instead of potentially twice; once for the check and once for the return. See the Wikipedia page again, just after the recommended pattern sample.
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 25 percent.[7]
Depending on how isObjectStillValid() accesses sharedObject, you might benefit from a similar pattern.
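For instance, a sketch of what that could look like for the ObjectHolder in the question; note the assumption (not in the original code) that the validity check can be done on a candidate object passed as a parameter:

public class ObjectHolder {

    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        SharedObject localRef = sharedObject;          // single volatile read in the common case
        if (localRef == null || !isObjectStillValid(localRef)) {
            synchronized (this) {
                localRef = sharedObject;               // re-read under the lock
                if (localRef == null || !isObjectStillValid(localRef)) {
                    localRef = new SharedObject(/*some parameters from somewhere*/);
                    sharedObject = localRef;
                }
            }
        }
        return localRef.getCommonString();
    }

    private boolean isObjectStillValid(SharedObject candidate) {
        // validity check based on the candidate object and the system clock, as in the question
        return true; // placeholder
    }
}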
This sounds like the use of a ReadWriteLock would be appropriate.
The basic idea is that there can be multiple readers simultaneously, or one writer exclusively. Here you can find an example of how to use it in a List implementation.
Copied here in case the site goes down:
import java.util.*;
import java.util.concurrent.locks.*;

/**
 * ReadWriteList.java
 * This class demonstrates how to use ReadWriteLock to add concurrency
 * features to a non-threadsafe collection.
 * @author www.codejava.net
 */
public class ReadWriteList<E> {

    private List<E> list = new ArrayList<>();
    private ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public ReadWriteList(E... initialElements) {
        list.addAll(Arrays.asList(initialElements));
    }

    public void add(E element) {
        Lock writeLock = rwLock.writeLock();
        writeLock.lock();
        try {
            list.add(element);
        } finally {
            writeLock.unlock();
        }
    }

    public E get(int index) {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.get(index);
        } finally {
            readLock.unlock();
        }
    }

    public int size() {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.size();
        } finally {
            readLock.unlock();
        }
    }
}
How can I synchronize 2 threads to handle data in a list?
thread A is adding / changing items in a list (writing to the list)
thread B is displaying the items (only reading the list)
I would like to "notify" thread B when it can display the list. While thread B is reading the list, it must not be changed by thread A. When thread B is done reading, thread A can start changing the list again.
My guesses go to
synchronized(obj)
list.wait() + list.notify()
The threads aren't invoking each other; they run concurrently all the time.
You could put all changes in Runnables and put them in a queue that Thread A executes in order. After each job, A must generate a snapshot of the modified list and submit it to Thread B. You could use Executors for that.
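A sketch of that idea, assuming single-threaded executors for both sides (ListPipeline, submitChange and displayList are names introduced here for illustration):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class ListPipeline {

    // Thread A: all modifications run on this single thread, in submission order.
    private final ExecutorService writer = Executors.newSingleThreadExecutor();
    // Thread B: displays the snapshots it is handed.
    private final ExecutorService reader = Executors.newSingleThreadExecutor();

    private final List<String> list = new ArrayList<>(); // only ever touched by the writer thread

    public void submitChange(Consumer<List<String>> change) {
        writer.execute(() -> {
            change.accept(list);                            // mutate the list on the writer thread
            List<String> snapshot = new ArrayList<>(list);  // snapshot taken after the job
            reader.execute(() -> displayList(snapshot));    // hand it to "thread B"
        });
    }

    private void displayList(List<String> snapshot) {
        System.out.println(snapshot); // stand-in for whatever thread B does with the data
    }
}

A caller would then submit work such as pipeline.submitChange(l -> l.add("new item")); thread B only ever sees its own private snapshots, so no further locking is needed.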
General concept (as I see it in your case) would be as follows.
1) Create an instance of List that you're planning to work with.
2) Write 2 classes corresponding to your thread A and thread B that both implement Runnable and take List as their constructor parameter.
3) Synchronize these 2 classes on list instance:
// method in the class that adds
public void add() {
    synchronized (list) {
        // perform addition ...
        list.notify();
    }
}

// method in the class that reads
public void read() throws InterruptedException {
    synchronized (list) {
        while (list.isEmpty())
            list.wait();
        // process data ...
    }
}
4) Create 2 threads with arguments corresponding to instances of these 2 classes and start them.
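For example (Writer and Reader here are hypothetical names for the two Runnable classes described above):

List<String> list = new ArrayList<>();

// Both Runnables share the same list instance and synchronize on it.
Thread threadA = new Thread(new Writer(list));
Thread threadB = new Thread(new Reader(list));

threadB.start();
threadA.start();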
Reader and writer locks are your friends here.
•thread A is adding / changing items in a list (writing to the list)
... so it can use the write lock ...
•thread B is displaying the items (only reading the list)
... so it can use the read lock.
Let's assume that you're using something straightforward for your wait/notify (for example, the built-in Object methods) to block the read-and-display thread. At that point, your code looks something like this:
/** This is the read/write lock that both threads can see */
private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

/** This method is called by thread A (the writer / modifier) */
public void add() {
    try {
        // Only one writer at a time allowed
        lock.writeLock().lock();
        // Insert code here: Add to the list
    } finally {
        // Unlock in the finally block to ensure that lock is released
        lock.writeLock().unlock();
    }
    // Notify anyone who's waiting for data
    // (wait/notify must be called while holding the monitor of the object)
    synchronized (list) {
        list.notify();
    }
}

/** This method is called by thread B (the reader / displayer) */
public void read() throws InterruptedException {
    try {
        // As many readers as you like at a time
        lock.readLock().lock();
        // Insert code here: read from the list
    } finally {
        // Unlock in the finally block to ensure that lock is released
        lock.readLock().unlock();
    }
    // Wait for new data
    synchronized (list) {
        list.wait();
    }
}
To make things even more convenient, you can get rid of the notify/wait messaging by using a blocking data structure: e.g., one of the BlockingQueues. In that case, you don't write any notification at all. The reader blocks waiting for new data. When the writer adds data to the queue, the reader unblocks, drains the new data to process, does its thing and then blocks again.
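A sketch of the BlockingQueue variant, with the element type and the display step as placeholders:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueExample {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    /** Thread A: add items; no explicit notification needed. */
    public void produce(String item) throws InterruptedException {
        queue.put(item);
    }

    /** Thread B: block until data arrives, then drain and display it. */
    public void consumeLoop() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        while (true) {
            batch.add(queue.take());   // blocks while the queue is empty
            queue.drainTo(batch);      // grab whatever else has arrived
            System.out.println(batch); // "display the items"
            batch.clear();
        }
    }
}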
I tried concurrency packages suggested here or here and it works well. The threads lock each other out:
final Lock lock = new ReentrantLock(true);

// thread A
lock.lock();
// write to list
lock.unlock();

// thread B
lock.lock();
// read from list
lock.unlock();
I'm not sure whether they execute precisely one after another, and I didn't get the notify feature, but that doesn't hurt my application.
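If the notify feature is wanted later, a Condition created from the same lock provides it. A rough sketch; listChanged and hasNewData are names introduced here for illustration, and hasNewData would be a field shared by both threads:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final Lock lock = new ReentrantLock(true);
final Condition listChanged = lock.newCondition();
boolean hasNewData = false; // shared by both threads, only read/written while holding the lock

// thread A
lock.lock();
try {
    // write to list
    hasNewData = true;
    listChanged.signal();    // the Lock equivalent of notify()
} finally {
    lock.unlock();
}

// thread B
lock.lock();
try {
    while (!hasNewData) {
        listChanged.await(); // the Lock equivalent of wait(); releases the lock while waiting
    }
    // read from list
    hasNewData = false;
} finally {
    lock.unlock();
}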
If I have a class like:
class MultiThreadEg {

    private Member member;

    public Integer aMethod() {
        ..............
        ..............
    }

    public String aThread() {
        ...............
        member.memberMethod(.....);
        Payment py = member.payment();
        py.processPayment();
        ...........................
    }
}
Suppose that aThread() runs as a new thread; will accessing the shared member object from many threads at the same time cause any issues (with the following access rules)?
Rule 1 : ONLY reading, no writing to the object(member).
Rule 2 : For all the objects that need some manipulation(writing/modification), a copy of the original object will be created.
For example, in the payment() method I do this:
public class Member {

    private Payment memPay;

    public Payment payment() {
        Payment py = new Payment(this.memPay); // the class's copy constructor will be called
        return py;
    }
}
My concern is that, even though I create object copies for "writing" (as in the payment() method), accessing the member object from many threads at the same time will cause some discrepancies.
What are the facts? Is this implementation reliable in every case (0 or more concurrent accesses)? Please advise. Thanks.
You could simply use a ReentrantReadWriteLock. That way, you could have multiple threads reading at the same time, without issue, but only one would be allowed to modify data. And Java handles the concurrency for you.
ReadWriteLock rwl = new ReentrantReadWriteLock();
Lock readLock = rwl.readLock();
Lock writeLock = rwl.writeLock();

public void read() {
    readLock.lock();
    try {
        // Read as much as you want.
    } finally {
        readLock.unlock();
    }
}

public void writeSomething() {
    writeLock.lock();
    try {
        // Modify anything you want
    } finally {
        writeLock.unlock();
    }
}
Notice that you should lock() before the try block begins, to guarantee the lock has been obtained before even starting. And, putting the unlock() in the finally clause guarantees that, no matter what happens within the try (early return, an exception is thrown, etc), the lock will be released.
In case the update to memPay depends on the memPay contents (like memPay.amount += 100), you should block access for other threads while you are updating. This looks like:
mutual exclusion block start
get copy
update copy
publish copy
mutual exclusion block end
Otherwise there could be lost updates when two threads begin updating the memPay object simultaneously.
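A sketch of that scheme inside the Member class from the question; payLock and addToAmount are names introduced here for illustration, and the amount field comes from the memPay.amount += 100 example above:

public class Member {

    private Payment memPay;
    private final Object payLock = new Object(); // guards reads and updates of memPay

    public Payment payment() {
        synchronized (payLock) {
            return new Payment(this.memPay); // hand out a copy, as before
        }
    }

    public void addToAmount(int delta) {
        synchronized (payLock) {                     // mutual exclusion block start
            Payment copy = new Payment(this.memPay); // get copy
            copy.amount += delta;                    // update copy
            this.memPay = copy;                      // publish copy
        }                                            // mutual exclusion block end
    }
}

Synchronizing the read in payment() on the same lock keeps the newly published reference visible to every thread; alternatively, memPay could be declared volatile.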