Is there a non-reentrant ReadWriteLock I can use? - java

I need a ReadWriteLock that is NOT reentrant, because the lock may be released by a different thread than the one that acquired it. (I realized this when I started to get IllegalMonitorStateException intermittently.)
I'm not sure if non-reentrant is the right term. A ReentrantLock allows the thread that currently holds the lock to acquire it again. I do NOT want this behaviour, therefore I'm calling it "non-reentrant".
The context is that I have a socket server using a thread pool. There is NOT a thread per connection. Requests may get handled by different threads. A client connection may need to lock in one request and unlock in another request. Since the requests may be handled by different threads, I need to be able to lock and unlock in different threads.
Assume for the sake of this question that I need to stay with this configuration and that I do really need to lock and unlock in different requests and therefore possibly different threads.
It's a ReadWriteLock because I need to allow multiple "readers" OR an exclusive "writer".
It looks like this could be written using AbstractQueuedSynchronizer but I'm afraid if I write it myself I'll make some subtle mistake. I can find various examples of using AbstractQueuedSynchronizer but not a ReadWriteLock.
I could take the OpenJDK ReentrantReadWriteLock source and try to remove the reentrant part but again I'm afraid I wouldn't get it quite right.
I've looked in Guava and Apache Commons but didn't find anything suitable. Apache Commons has RWLockManager which might do what I need but I'm not sure and it seems more complex than I need.

A Semaphore allows different threads to perform the acquire and release of permits; unlike a Lock, a Semaphore has no notion of an owning thread, so a permit acquired in one request can be released in another. An exclusive write is equivalent to holding all of the permits: the writing thread waits until every permit has been released, and while it holds them no other thread can acquire any.
final int PERMITS = Integer.MAX_VALUE;
Semaphore semaphore = new Semaphore(PERMITS);

// read
semaphore.acquire(1);
try { ... }
finally {
    semaphore.release(1);
}

// write
semaphore.acquire(PERMITS);
try { ... }
finally {
    semaphore.release(PERMITS);
}
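A minimal sketch wrapping this idea in a read-write facade (the class and method names are illustrative, not an existing API). Because a Semaphore has no owner thread, the release may legitimately happen on a different thread than the acquire, and making the semaphore fair keeps a waiting writer from being starved by a steady stream of new readers:

import java.util.concurrent.Semaphore;

public class SemaphoreReadWriteLock {
    private static final int PERMITS = Integer.MAX_VALUE;
    // fair = true so a waiting writer is not starved by newly arriving readers
    private final Semaphore semaphore = new Semaphore(PERMITS, true);

    public void lockRead() throws InterruptedException {
        semaphore.acquire(1);
    }

    public void unlockRead() {
        semaphore.release(1);
    }

    public void lockWrite() throws InterruptedException {
        semaphore.acquire(PERMITS);
    }

    public void unlockWrite() {
        semaphore.release(PERMITS);
    }
}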

I know you've already accepted another answer. But I still think that you are going to create quite a nightmare for yourself. Eventually, a client is going to fail to come back and release those permits and you'll begin to wonder why the "writer" never writes.
If I were doing it, I would do it like this:
Client issues a request to start a transaction
The initial request creates a task (Runnable/Callable) and places it in an Executor for execution
The initial request also registers that task in a Map by transaction id
Client issues the second request to close the transaction
The close request finds the task by transaction id in a map
The close request calls a method on the task to indicate that it should close (probably a signal on a Condition or if data needs to be passed, placing an object in a BlockingQueue)
Now, the transaction task would have code like this:
public void run() {
    readWriteLock.readLock().lock();
    try {
        // do stuff for initializing this transaction
        if (condition.await(someDurationAsLong, someTimeUnit)) {
            // do the rest of the transaction stuff
        } else {
            // do some other stuff to back out the transaction
        }
    } finally {
        readWriteLock.readLock().unlock();
    }
}
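For completeness, here is a sketch of the surrounding plumbing described in the steps above. It is a simplified variant (class and method names are illustrative, not an existing API) that uses a plain Lock/Condition pair inside the task; the close request looks the task up by transaction id and signals it, possibly from a different pool thread:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class TransactionTask implements Runnable {
    private final Lock lock = new ReentrantLock();
    private final Condition closeRequested = lock.newCondition();
    private boolean closeSignalled = false;

    // called from whichever pool thread handles the "close" request
    void signalClose() {
        lock.lock();
        try {
            closeSignalled = true;
            closeRequested.signal();
        } finally {
            lock.unlock();
        }
    }

    @Override
    public void run() {
        lock.lock();
        try {
            // do stuff for initializing this transaction
            long nanos = TimeUnit.SECONDS.toNanos(30);
            while (!closeSignalled && nanos > 0) {
                nanos = closeRequested.awaitNanos(nanos);
            }
            if (closeSignalled) {
                // do the rest of the transaction stuff
            } else {
                // timed out: back out the transaction
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            lock.unlock();
        }
    }
}

class TransactionManager {
    private final ExecutorService executor = Executors.newCachedThreadPool();
    private final ConcurrentMap<String, TransactionTask> openTransactions = new ConcurrentHashMap<>();

    // request 1: start a transaction
    public void start(String txId) {
        TransactionTask task = new TransactionTask();
        openTransactions.put(txId, task);
        executor.submit(task);
    }

    // request 2, possibly handled by a different thread: close it
    public void close(String txId) {
        TransactionTask task = openTransactions.remove(txId);
        if (task != null) {
            task.signalClose();
        }
    }
}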

Not entirely sure what you need, especially why it should be a read-write lock, but if you have tasks that need to be handled by many threads, and you don't want them to be processed/accessed concurrently, I'd actually use a ConcurrentMap (e.g. a ConcurrentHashMap).
You can remove the task from the map or substitute it with a special "lock object" to indicate it's locked. You could return the task with an updated state to the map to let another thread take over, or alternatively you can pass the task directly to the next thread and let it return the task to the map instead.
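A rough sketch of that hand-off idea, assuming a hypothetical Task type that carries whatever per-connection state is needed (names are illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class TaskRegistry {
    private final ConcurrentMap<String, Task> tasks = new ConcurrentHashMap<>();

    // "lock": atomically take the task out of the map; null means another thread holds it
    Task claim(String id) {
        return tasks.remove(id);
    }

    // "unlock": put the (possibly updated) task back so the next thread can claim it
    void release(String id, Task updated) {
        tasks.put(id, updated);
    }

    static class Task {
        // per-connection / per-task state goes here
    }
}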

They seem to have dropped the ball on this one by deprecating com.sun.corba.se.impl.orbutil.concurrent.Mutex.
I mean, who in their right mind thinks that we won't need non-reentrant locks? Here we are, wasting our time arguing over the definition of reentrant (which can slightly change in meaning per framework, by the way). Yes, I want to tryLock on the same thread; is that such a bad thing? It won't deadlock, because I'll take the else branch out of it. A non-reentrant lock whose tryLock fails even on the thread that already holds it can be very useful to prevent errors in GUI apps where the user presses the same button rapidly and repeatedly. Been there, done that; Qt was right... again.
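As a sketch of that GUI use case, a Semaphore with a single permit behaves like a non-reentrant tryLock: a repeated click is simply rejected while the first one is still being handled (the handler class and method names are illustrative):

import java.util.concurrent.Semaphore;

class SaveButtonHandler {
    private final Semaphore busy = new Semaphore(1);

    public void onClick() {
        if (!busy.tryAcquire()) {
            return; // a previous click is still being processed; ignore this one
        }
        try {
            // do the work triggered by the button
        } finally {
            busy.release();
        }
    }
}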

Related

Java - JdbcLockRegistry distributed lock in combination with ReentrantReadWriteLock

I want to handle concurrent threads so that, out of multiple threads having the same referenceId, only one thread writes into the DB and the rest read that data from the DB using the refId.
Code -
Lock lock = jdbcLockRegistry.obtain("myLockKey");
if (lock.tryLock()) {
    try {
        writeLock.lock();
        // do an API call, get data
        // write this data into db
    } finally {
        writeLock.unlock();
    }
} else {
    try {
        readLock.lock();
        // do a read operation
    } finally {
        readLock.unlock();
    }
}
So basically the code says: if a thread gets the distributed lock, it takes the write lock and starts writing. Until the write lock is released, reads wait.
Problem occurs when I run the application with multiple instances.
On multiple instances, distributed lock works as expected and doesn't let other threads take lock on the key which is already taken.
With ReentrantReadWriteLock, the thread which has the distributed lock is supposed to take the write lock, and it does. But on another instance (say inst2), only the distributed lock is in effect, since the first instance (inst1) is still holding the distributed lock for that key. There is no ReentrantReadWriteLock write lock held on inst2: because its threads cannot obtain the distributed lock, they go straight to the read branch and read null data, as inst1 is still finishing writing that data.
I can only hope I made the problem clear.
To work around this, I introduced a delay before the reads: a Thread.sleep(2000) of 2 seconds, to give the write thread enough time to write into the DB.
Problem 1 - Even with the delay, the write thread is not always able to finish writing before the reads start on another instance. It only works partially.
Problem 2 - I have a time constraint and cannot afford to introduce delays.
If anyone can suggest a different logic to achieve my goal or suggestion to improve the existing one, it would be of great help.
Note - Please do not suggest using Redis, PCC or any caching, I do not have the option to use these resources.
Thanks
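(One direction to avoid the fixed delay, sketched under the assumption that the read side can detect missing data; Data and repository.findByReferenceId are hypothetical placeholders, not an existing API: instead of sleeping a fixed 2 seconds, poll until the row written by the other instance becomes visible or a deadline passes.)

Data data = null;
long deadline = System.currentTimeMillis() + 2000; // same 2-second budget as the fixed sleep
while (data == null && System.currentTimeMillis() < deadline) {
    data = repository.findByReferenceId(referenceId); // null until the writer has committed
    if (data == null) {
        Thread.sleep(50); // short back-off between attempts
    }
}
if (data == null) {
    // still not written: fall back (e.g. make the API call here) or fail the request
}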

If ReentrantLock is locked, wait but don't lock the lock

I have a ReentrantLock in my code and want to use it to clear an array once per second; I don't want other threads to change the array while it is being cleared, but if I am not currently clearing the array, other threads should not have to wait. Like this:
public void addToArray(Object a) {
    lock.waitforunlock(); // not a real method, just to clarify my intentions
    array.add(a);
}
To better clarify my intentions I will explain the process: the Netty event loop will call my network handler, and that network handler will then call the addToArray method shown above. Once per second my main thread (which will never be a Netty thread) will clear the array; during this time every Netty thread will have to wait until the clearing is finished. Note: the addToArray method is itself thread-safe and I don't want to synchronize it, because then the whole point of an event loop would be lost.
There is no API method that does exactly what you are asking.
The most efficient way to do it is like this:
lock.lock();
try {
    // nothing to do while holding the lock
} finally {
    lock.unlock();
}
In other words, grab the lock momentarily then release it.
But here's the problem.
In general, the instant you release the lock, some other thread might immediately grab it. So your array.add() call may happen simultaneously with some other thread doing things to array. Even if your use-case means that another thread grabbing the lock is highly unlikely, it can still happen; e.g. if your server is under severe load and the current thread gets preempted immediately after releasing the lock.
Presumably you are performing memory writes in array.add(). Unless they are performed with appropriate synchronization, those updates may not be visible to other threads. (You say "addToArray method is threadproof", but without a clear, detailed explanation of what you mean by that, I would be uncomfortable with saying this code is thread safe.)
If what you are trying to do here is to array.add() after something else has happened, then testing the lock / waiting for it to be released doesn't tell you if the event actually happened. All it tells you is that it wasn't happening at the instant that the test succeeded.
In short, I doubt that waiting for a lock to be released before doing an update is actually a correct solution ... no matter how you implement the waiting.
Another way to look at this.
If array.add() is completely threadsafe, and will work correctly irrespective of some other thread holding the lock, why do you need to test the lock? Just call the method.
If you are actually trying to have the array.add() call happen after some event that coincides with the lock being released, use a cyclic barrier or similar.
Note: I read and tried to understand your explanation, but I got lost with what you are saying. Due to "language issues" I think.
As I understand it, you have two or more separate threads mutating a list: the main thread occasionally clearing the list, and the netty thread adding to the list. You want to make sure they don't both attempt to modify the list at the same time.
The simplest solution to this is to use a thread safe list, and make sure the main thread uses the List.clear() method to clear the list. That way, the clear() call will be atomic - once started it will finish before any other accesses to the list - so you won't have to worry about adding to the list "in the middle of" the clear() call.
In a comment to another answer, you mention that you are using a CopyOnWriteArrayList, which is thread safe. Thus, you can just call add() in the code that adds to the list without worrying about synchronization; the add() call will automatically wait if the list is being cleared, and proceed otherwise. You can also remove the use of the ReentrantLock from your main thread unless there are other reasons, besides protecting this list, to use the lock.
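A minimal sketch of that arrangement (field and method names are illustrative): the Netty handlers add to the thread-safe list, and the main thread clears it once per second with no explicit lock. If the cleared elements also need to be processed rather than just discarded, a ConcurrentLinkedQueue drained with poll() avoids losing elements added between a snapshot and the clear().

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class NetworkBuffer {
    private final List<Object> buffer = new CopyOnWriteArrayList<>();

    // called from Netty event-loop threads
    public void addToArray(Object a) {
        buffer.add(a);
    }

    // called once per second from the main thread
    public void clearBuffer() {
        buffer.clear(); // atomic with respect to concurrent add() calls
    }
}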

How does Java determine which Thread should proceed when using synchronized?

private static void WaitInQueue(Customer c)
{
    synchronized (mutex) {
        // Do some operation here.
    }
}
I need to make threads wait before proceeding (only one at a time). However, it appears that synchronized is not using FIFO to determine which thread should proceed next (it seems more like LIFO). Why is this?
How can I ensure that the first thread to wait at the synchronized block will be the first one to acquire the lock next?
A synchronized block makes no guarantees about fairness - any of the waiting threads may in theory be chosen to execute. If you really want a fair (FIFO) lock, switch to the newer locking mechanisms introduced in Java 5+.
See, for example, the documentation for ReentrantLock.
Here's how you'd use a fair lock:
private final ReentrantLock lock = new ReentrantLock(true); // fair lock
// ...
public void m() {
    lock.lock(); // block until condition holds
    try {
        // ... method body
    } finally {
        lock.unlock();
    }
}
Note, however, that fair locking degrades overall throughput, so it is not recommended unless you actually need the ordering guarantee.
Quoting from the documentation:
The constructor for this class accepts an optional fairness parameter.
When set true, under contention, locks favor granting access to the
longest-waiting thread. Otherwise this lock does not guarantee any
particular access order. Programs using fair locks accessed by many
threads may display lower overall throughput (i.e., are slower; often
much slower) than those using the default setting.
You can use the Semaphore class, with a fairness setting set to true and a count of 1. This guarantees FIFO order for threads, and is almost identical to having a synchronized block.
ReentrantLock also provides a fairness setting.
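A sketch of the fair-Semaphore approach mentioned above (the class name is illustrative; Customer is the type from the question): with fairness enabled, waiting threads acquire the single permit in FIFO order.

import java.util.concurrent.Semaphore;

class FifoGate {
    private final Semaphore mutex = new Semaphore(1, true); // one permit, fair = FIFO ordering

    public void waitInQueue(Customer c) throws InterruptedException {
        mutex.acquire();
        try {
            // do some operation here, one thread at a time, in arrival order
        } finally {
            mutex.release();
        }
    }
}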
To answer "Why is this?": There's rarely any reason to use a specific order on a lock. The threads hit the lock in random order and may as well leave it the same way. The important thing from the JVM's viewpoint is to keep the cores busy working on your program's code. Generally, if you care about what order your threads run in you need something a lot fancier than a lock or semaphore.
The only good exception I can think of is if your lock always has waiting threads, creating the real possibility that a thread that hits it might wait for many seconds, continually getting bumped to the back of the "queue", while an irate user fumes. Then, FIFO makes a lot of sense. But even here you might want to spend some time trying to speed up the synchronized block (or avoiding it completely) so most threads that hit it don't get blocked.
In summary, think long and hard about your design if you find yourself worrying about the order your threads run in.
You should use Thread.join to wait before proceeding.
Just go through the following link:
http://msdn.microsoft.com/en-us/library/dsw9f9ts(v=vs.90).aspx

Writing a one-at-a-time lock in Java

I'm trying to implement a Java lock-type-thing which does the following:
By default, threads do not pass the lock. (Opposite from normal locks, where locks can be acquired as long as they are not held.)
If only one thread is waiting for the lock, execution in that thread stops
If more than one thread is waiting for the lock, the thread that has been waiting the longest is allowed to continue execution.
I'm working on implementing this on top of AbstractQueuedSynchronizer. The transition to allow the oldest thread to go through looks like this:
// inner class inside Lock
private static class Sync extends AbstractQueuedSynchronizer {
    public Sync() {
        setState(-1);
    }

    public boolean tryAcquire(int ignore) {
        if (getState() == 1) return false;
        Thread first = getFirstQueuedThread();
        if (first != null &&
            first != Thread.currentThread()) {
            setState(0);
            return false;
        }
        return compareAndSetState(0, 1);
    }
}
The problem that I'm seeing is that when I call setState(0) but return false, the Sync object never has the first thread tryAcquire again. Do I need to use SharedMode? Is there a better solution to this problem?
This is part of an implementation of what I call a "Valve" which I want to use for long-polling AJAX responses. I've got the part where a thread waits for the valve to become "pressurized" (there's data to send to the client), but getting the oldest thread to release seems hard unless I don't use AbstractQueuedSynchronizer, and I really don't want to write a ground-up lock implementation.
Have a look at the ReentrantLock class (http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/locks/ReentrantLock.html).
You could keep this lock object as a private variable in your class and use it to do whatever you need to do. I'm not quite sure how you could implement this without more knowledge of your code, but this Lock object has all of the methods you require to provide the behavior you mentioned in your post.
To keep track of how long a thread has been waiting, you might have to hack something together to keep track of it. I don't think the Thread class provides that kind of functionality.
Have you looked at this link?
A Fair Lock
Below is shown the previous Lock class turned into a fair lock called FairLock. You will notice that the implementation has changed a bit with respect to synchronization and wait() / notify() compared to the Lock class shown earlier.
Exactly how I arrived at this design beginning from the previous Lock class is a longer story involving several incremental design steps, each fixing the problem of the previous step: Nested Monitor Lockout, Slipped Conditions, and Missed Signals. That discussion is left out of this text to keep the text short, but each of the steps are discussed in the appropriate texts on the topic ( see the links above). What is important is, that every thread calling lock() is now queued, and only the first thread in the queue is allowed to lock the FairLock instance, if it is unlocked. All other threads are parked waiting until they reach the top of the queue.
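Since the linked code isn't reproduced here, below is a condensed sketch of that queued wait()/notify() design, written from the description above, so treat it as illustrative rather than the article's exact code. Each waiting thread parks on its own queue node, and unlock() wakes only the node at the head of the queue, which gives FIFO (fair) ordering.

import java.util.ArrayList;
import java.util.List;

public class FairLock {

    // Per-thread wait node; the notified flag guards against missed signals.
    private static final class QueueNode {
        private boolean notified = false;

        synchronized void doWait() throws InterruptedException {
            while (!notified) {
                wait();
            }
            notified = false;
        }

        synchronized void doNotify() {
            notified = true;
            notify();
        }
    }

    private boolean locked = false;
    private Thread owner = null;
    private final List<QueueNode> waiters = new ArrayList<>();

    public void lock() throws InterruptedException {
        QueueNode node = new QueueNode();
        synchronized (this) {
            waiters.add(node);
        }
        while (true) {
            synchronized (this) {
                // only the head of the queue may take the lock, and only when it is free
                if (!locked && waiters.get(0) == node) {
                    locked = true;
                    owner = Thread.currentThread();
                    waiters.remove(node);
                    return;
                }
            }
            try {
                node.doWait(); // park until unlock() notifies this node
            } catch (InterruptedException e) {
                synchronized (this) {
                    waiters.remove(node);
                    // if the lock is free, pass any pending wake-up on to the new head
                    if (!locked && !waiters.isEmpty()) {
                        waiters.get(0).doNotify();
                    }
                }
                throw e;
            }
        }
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread()) {
            throw new IllegalMonitorStateException("calling thread does not hold this lock");
        }
        locked = false;
        owner = null;
        if (!waiters.isEmpty()) {
            waiters.get(0).doNotify(); // wake only the longest-waiting thread
        }
    }
}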

design of a Producer/Consumer app

I have a producer app that generates an index (stores it in some in-memory tree data structure). And a consumer app will use the index to search for partial matches.
I don't want the consumer UI to have to block (e.g. via some progress bar) while the producer is indexing the data. Basically if the user wishes to use the partial index, it will just do so. In this case, the producer will potentially have to stop indexing for a while until the user goes away to another screen.
Roughly, I know I will need the wait/notify protocol to achieve this. My question: is it possible to interrupt the producer thread using wait/notify while it is doing its business? What java.util.concurrent primitives do I need to achieve this?
The way you've described this, there's no reason that you need wait/notify. Simply synchronize access to your data structure, to ensure that it is in a consistent state when accessed.
Edit: by "synchronize access", I do not mean synchronize the entire data structure (which would end up blocking either producer or consumer). Instead, synchronize only those bits that are being updated, and only at the time that you update them. You'll find that most of the producer's work can take place in an unsynchronized manner: for example, if you're building a tree, you can identify the node where the insert needs to happen, synchronize on that node, do the insert, then continue on.
In your producer thread, you are likely to have some kind of main loop. This is probably the best place to interrupt your producer. Instead of using wait() and notify(), I suggest you use the Java synchronization objects introduced in Java 5.
You could potentially do something like this:
class Indexer {
    Lock lock = new ReentrantLock();

    public void index() {
        while (somecondition) {
            this.lock.lock();
            try {
                // perform one indexing step
            } finally {
                lock.unlock();
            }
        }
    }

    public Item lookup() {
        this.lock.lock();
        try {
            // perform your lookup
        } finally {
            lock.unlock();
        }
    }
}
You need to make sure that each time the indexer releases the lock, your index is in a consistent, legal state. In this scenario, when the indexer releases the lock, it leaves a chance for a new or waiting lookup() operation to take the lock, complete and release the lock, at which point your indexer can proceed to its next step. If no lookup() is currently waiting, then your indexer just reacquires the lock itself and goes on with its next operation.
If you think you might have more that one thread trying to do the lookup at the same time, you might want to have a look at the ReadWriteLock interface and ReentrantReadWriteLock implementation.
Of course this solution is the simple way to do it. It will block either one of the threads that doesn't have the lock. You may want to check if you can just synchronize on your data structure directly, but that might prove tricky since building indexes tends to use some sort of balanced tree or B-Tree or whatnot where node insertion is far from being trivial.
I suggest you first try that simple approach, then see if the way it behaves suits you. If it doesn't, you may either try breaking up the indexing steps into smaller steps, or try synchronizing on only parts of your data structure.
Don't worry too much about the performance of locking: in Java, uncontended locking (when only one thread is trying to take the lock) is cheap. As long as most of your locking is uncontended, locking performance is nothing to be concerned about.
The producer application can have two indices: published and in-work. The producer will work only with the in-work index, the consumer only with the published one. Once the producer is done with indexing, it can swap the in-work index in as the published one (usually just swapping one pointer). The producer may also publish a copy of the partial index if that adds value. This way you avoid long-term locks -- useful when the index is accessed by lots of consumers.
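A minimal sketch of that published/in-work scheme (names are illustrative): the producer builds the new index privately and publishes it with a single atomic reference swap, so consumers never see a half-built index and no long-lived lock is needed.

import java.util.concurrent.atomic.AtomicReference;

class IndexHolder<I> {
    private final AtomicReference<I> published = new AtomicReference<>();

    // consumer side: always sees the last fully built index (may be null before the first publish)
    public I current() {
        return published.get();
    }

    // producer side: called when the in-work index (or a usable partial copy) is complete
    public void publish(I newIndex) {
        published.set(newIndex);
    }
}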
No, that's not possible.
The only way of notifying a thread without any explicit code in the thread itself is to use Thread.interrupt(), which will cause an exception in the thread. interrupt() is usually not very reliable though, because throwing an exception at some random point in the code is a nightmare to get right in all code paths. Besides that, a single try{}catch(Throwable){} somewhere in the thread (including any libraries that you use) could be enough to swallow the signal.
In most cases, the only correct solution is use a shared flag or a queue that the consumer can use to pass messages to the producer. If you worry about the producer being unresponsive or freezing, run it in a separate thread and require it to send heartbeat messages every n seconds. If it does not send a heartbeat, kill it. (Note that determining whether a producer is actually freezing, and not just waiting for an external event, is often very hard to get right as well).
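A sketch of the shared-flag/queue approach (the Control enum, class, and loop structure are illustrative): the producer polls a control queue at the top of its indexing loop, so it pauses or stops at a well-defined point instead of being interrupted at a random one.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class ControllableProducer implements Runnable {
    enum Control { PAUSE, RESUME, STOP }

    private final BlockingQueue<Control> control = new LinkedBlockingQueue<>();

    // called by the consumer/UI thread
    public void send(Control c) {
        control.offer(c);
    }

    @Override
    public void run() {
        boolean paused = false;
        while (true) {
            try {
                // non-blocking check while running; block briefly while paused
                Control c = paused ? control.poll(100, TimeUnit.MILLISECONDS) : control.poll();
                if (c == Control.STOP) {
                    return;
                } else if (c == Control.PAUSE) {
                    paused = true;
                } else if (c == Control.RESUME) {
                    paused = false;
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            if (!paused) {
                // perform one indexing step here
            }
        }
    }
}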
