Locking multiple objects in Java - java

I have a situation where, out of multiple instances of a class, two undergo an operation that changes the state of both. Since this is a multithreaded application, I want to ensure that while that particular piece of code is executing, any other thread trying to access either of those two instances has to wait.
Using synchronized or a Lock we can only acquire a lock on a single instance, and nesting synchronized blocks on the two objects isn't a great idea either:
synchronized (obj1) {
    synchronized (obj2) {
    }
}
Another potential problem is that even when the inner object obj2 is free, a thread may keep waiting simply because the outer object is locked.
What could be the best possible solution to this problem?

Nested locks introduce a risk of deadlock when two different threads acquire the locks in different orders.
There is no universal approach to eliminating such deadlocks. Possible solutions include:
Maintain a total order over the lock objects, and always acquire the locks in ascending (or always in descending) order; see the sketch after this list.
This works well when both objects to be locked are known up front, without having to lock one of them first.
Use the tryLock mechanism for the inner lock.
tryLock returns false immediately if the lock is already held by another thread, so the program has to handle that case somehow (typically by releasing the outer lock and retrying).
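A minimal sketch of the lock-ordering approach, assuming hypothetical Resource objects that carry a stable, unique id to order by (any total order works, e.g. an account number):

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical Resource with a stable, unique id used only to order lock acquisition.
class Resource {
    final long id;
    final ReentrantLock lock = new ReentrantLock();
    Resource(long id) { this.id = id; }
}

class PairedOperation {
    // Always lock the resource with the smaller id first, so two threads
    // locking the same pair can never acquire the locks in opposite orders.
    static void withBoth(Resource a, Resource b, Runnable action) {
        Resource first  = a.id <= b.id ? a : b;
        Resource second = a.id <= b.id ? b : a;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                action.run();
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }
}

For the tryLock variant, the inner lock() call would be replaced by tryLock(); when it returns false, the outer lock is released and the whole acquisition is retried (or the operation is reported as busy).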


How do conventional locks protect from parallel access in Java?

One of the first things we all learnt about concurrency in Java is that we use locks (synchronized keyword, Lock/ReadWriteLock interface) to protect against concurrent access. For example:
synchronized void eat() {
    // some code
}
In theory, this method eat() can be executed by only one thread at a time. Even though there are many threads waiting to execute it, only one will take the lock. But then comes parallelism, which made me think twice about what I just said.
I have a 4-core CPU. That means I can do 4 tasks in parallel. Yes, the lock can only be taken by a single thread. But could it happen that 4 threads call method eat() and take the lock at the LITERALLY same time, even though there is a lock that needs to be acquired to actually do anything?
Can something like that even happen in Java? I guess it can't, but I had to ask. And how does it even deal with the case I just described?
...4 threads...take a lock at the LITERALLY same time...
Can't happen. Any "synchronized" operation (e.g., "take a lock") must operate on the system's main memory, and in any conventional computer system, there is only one memory bus. It is physically impossible for more than one CPU to access the main memory at the same time.
If two CPUs decide to access the memory at literally the same time, the hardware guarantees that one of them will "win" the race and go first, while the other one is forced to wait its turn.
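For instance, a small demo sketch: four threads hammer a synchronized increment, and because only one thread can hold the monitor at a time, the final count is always exact.

public class LockDemo {
    private long count = 0;

    // Only one thread at a time can execute this; the others queue up on the monitor.
    synchronized void increment() { count++; }

    public static void main(String[] args) throws InterruptedException {
        LockDemo demo = new LockDemo();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1_000_000; j++) demo.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(demo.count); // always 4000000, never less
    }
}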
Short answer - JVM keeps a list of threads that are trying to get the lock.
More details
https://wiki.openjdk.java.net/display/HotSpot/Synchronization
Also, it's worth mentioning that this list (the "inflated lock") is only created when really required, i.e. when contention on the lock actually happens.

How to update 2 objects atomically in a lock-free manner?

I'm working on a class that implements hash-code based atomic locking on multiple objects. The main purpose is to unpark a waiting thread as soon as all required locks become available, thus reducing overall waiting time.
Its lockAndGet(List objects) method returns a single "composite lock" covering all the listed objects. After the caller finishes its job, it calls unlock().
The main components are: 1) a common lock table, an int[] array recording which locks are currently held; 2) a deque of waiter threads.
The algorithm is quite simple:
When a thread calls lockAndGet(), it is marked for parking, then a new waiter object in the LOCKED state containing this thread is created and added to the tail of the queue, after which the makeRound() method is invoked;
makeRound() traverses the deque starting from the head, trying to find waiters that can acquire their locks. When such a waiter is found, the lock table is updated, the waiter's state is changed to UNLOCKED, it is removed from the deque, and its thread is unparked. After the traversal, if the current thread is marked for parking, it is parked;
When unlock() is called on some composite lock, the lock table is updated and makeRound() is invoked.
Here, to avoid race conditions, the update of the lock table must be performed atomically with the update of the waiter's state. Currently this is achieved by synchronizing on a common exclusive Lock, and it works well, but I'd like to implement a similar mechanism in a lock-free manner using CAS operations, making the waiter queue lock-free. That is pretty simple for step 1, but problematic for step 2.
Since DCAS isn't supported by Java, does anyone know how the update of a queue node (changing the waiter's state and marking it for deletion) can be performed atomically with the update of another object (the lock table), using only CAS? I'm not asking for code, just some hints or approaches that could help.
You can try to implement a multi-word CAS on top of the available single-word CAS; however, this depends on how much of a performance hit you are willing to take to achieve it.
You can look at Harris's paper on multi-word CAS: http://www.cl.cam.ac.uk/research/srg/netos/papers/2002-casn.pdf
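If the multi-word state you need to swap can be bundled into a single immutable snapshot object, then one CAS on an AtomicReference already gives the all-or-nothing effect. This is not the MCAS from Harris's paper, just a rough sketch of that simpler copy-on-write approach; the LockTableState fields here are hypothetical stand-ins for your lock table and a waiter-queue version stamp:

import java.util.concurrent.atomic.AtomicReference;

// Immutable snapshot bundling the two "words" that must change together.
final class LockTableState {
    final int[] lockTable;   // which locks are currently held
    final long queueVersion; // stamp bumped whenever the waiter queue changes

    LockTableState(int[] lockTable, long queueVersion) {
        this.lockTable = lockTable;
        this.queueVersion = queueVersion;
    }
}

class LockTable {
    private final AtomicReference<LockTableState> state =
            new AtomicReference<>(new LockTableState(new int[64], 0L));

    // Mark `slot` as held and bump the queue version in one atomic step.
    boolean tryAcquire(int slot) {
        while (true) {
            LockTableState current = state.get();
            if (current.lockTable[slot] != 0) {
                return false;                        // already held
            }
            int[] copy = current.lockTable.clone();  // copy-on-write
            copy[slot] = 1;
            LockTableState next = new LockTableState(copy, current.queueVersion + 1);
            if (state.compareAndSet(current, next)) {
                return true;                         // both "words" swapped atomically
            }
            // lost the race against another CAS: retry on the fresh snapshot
        }
    }
}

The obvious cost is the copy on every update, which may or may not be acceptable compared to the exclusive lock you already have.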

Making sure a thread's "updates" are readable to other threads in Java

I have one main thread that starts up other threads. Those other threads ask for jobs to do, and the main thread makes jobs available for them to see and do.
The job to be done is to set indexes in a huge boolean array to true. They are false by default, and the other threads can only set them to true, never false. Different jobs may involve setting the same indexes to true.
The main thread finds new jobs depending on two things.
The values in the huge boolean array.
Which jobs have already been done.
How do I make sure the main thread reads fresh values from the huge boolean array?
I can't have the array updated through a synchronized method, because that's pretty much all the other threads do, and I would end up with essentially sequential performance.
Let's say the other threads update the huge boolean array by setting many of its indexes to true through a non-synchronized method. How can I make sure the main thread reads those updates rather than stale, thread-locally cached values? Is there any way to make the writers "push" the updates? I'm guessing the main thread should just use a synchronized method to "get" them?
For the really complete answer to your question, you should open up a copy of the Java Language Spec, and search for "happens before".
When the JLS says that A "happens before" B, it means that in a valid implementation of the Java language, A is required to actually happen before B. The spec says things like:
If some thread updates a field and then releases a lock (e.g., leaves a synchronized block), the update "happens before" the lock is released.
If some thread releases a lock and some other thread subsequently acquires the same lock, the release "happens before" the acquisition.
If some thread acquires a lock and then reads a field, the acquisition "happens before" the read.
Since "happens before" is a transitive relationship, you can infer that if thread A updates some variables inside a synchronized block and then thread B examines the variables in a block that is synchronized on the same object, then thread B will see what thread A wrote.
Besides entering and leaving synchronized blocks, there are lots of other events (constructing objects, wait()ing/notify()ing objects, start()ing and join()ing threads, reading and writing volatile variables) that allow you to establish "happens before" relationships between threads.
It's not a quick solution to your problem, but the chapter is worth reading.
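For example, here is a minimal sketch of one such "happens before" edge, using a volatile flag to publish a plain field (an illustration of the rule, not code from the question):

class Publisher {
    private int data;               // plain field
    private volatile boolean ready; // publication flag

    void publish() {
        data = 42;    // (1) ordinary write
        ready = true; // (2) volatile write: (1) "happens before" (2)
    }

    int consume() {
        while (!ready) {            // (3) volatile read that eventually sees true
            Thread.onSpinWait();
        }
        return data;                // (4) guaranteed to observe 42, never a stale 0
    }
}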
...the main thread will make jobs available for the other threads to see and do...
I can't have the update of the array be through a synchronized method, because that's pretty much all the other threads do, and ...
Sounds like you're saying that each worker thread can only do a trivial amount of work before it must wait for further instructions from the main() thread. If that's true, then most of the workers are going to be waiting most of the time. You'd probably get better performance if you just do all of the work in a single thread.
Assuming that your goal is to make the best use of the available cycles on a multi-processor machine, you will need to partition the work in some way that lets each worker thread go off and do a significant chunk of it before needing to synchronize with any other thread.
I'd use another design pattern. For instance, you could add the indexes of the boolean values to a Set as they're turned on, and synchronize access to that Set. Then you can use wait/notify to wake up the reading thread.
First of all, don't use boolean arrays in general; use BitSets. See this: boolean[] vs. BitSet: Which is more efficient?
In this case you need an atomic BitSet, so you can't use java.util.BitSet, but here is one: AtomicBitSet implementation for java
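For a feel of how such a structure works, here is a minimal sketch (a simplified illustration, not the linked implementation) backed by an AtomicLongArray; only set-to-true and get are supported, which matches your use case:

import java.util.concurrent.atomic.AtomicLongArray;

class AtomicBitSetSketch {
    private final AtomicLongArray words;

    AtomicBitSetSketch(int nBits) {
        this.words = new AtomicLongArray((nBits + 63) >>> 6); // 64 bits per long
    }

    // Lock-free: the CAS loop only retries if another thread changed the same word.
    void set(int index) {
        int word = index >>> 6;
        long mask = 1L << index; // long shift uses the low 6 bits of index
        long old;
        do {
            old = words.get(word);
            if ((old & mask) != 0) return;                  // bit already set
        } while (!words.compareAndSet(word, old, old | mask));
    }

    // A volatile-style read: always sees bits published by completed set() calls.
    boolean get(int index) {
        return (words.get(index >>> 6) & (1L << index)) != 0;
    }
}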
You could instead model this as message passing rather than mutating shared state. In your description the workers never read the boolean array and only write the completion status. Have you considered using a pending job queue that workers consume from and a completion queue that the master reads? The job status fields can be efficiently maintained by the master thread without any shared state concerns. Depending on your needs, you can use either blocking or non-blocking queues.
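A rough sketch of that design, with Job and Completion as hypothetical placeholder types; the workers and the master communicate only through the queues, so no shared array needs explicit synchronization:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MessagePassingSketch {
    static final class Job        { final int index; Job(int index) { this.index = index; } }
    static final class Completion { final int index; Completion(int index) { this.index = index; } }

    final BlockingQueue<Job> pending = new LinkedBlockingQueue<>();          // master -> workers
    final BlockingQueue<Completion> completed = new LinkedBlockingQueue<>(); // workers -> master

    Runnable worker() {
        return () -> {
            try {
                while (true) {
                    Job job = pending.take();                 // blocks until work arrives
                    // ... do the actual work for job.index ...
                    completed.put(new Completion(job.index)); // report back to the master
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();           // preserve the interrupt and exit
            }
        };
    }
}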

private lock object and intrinsic lock

When should a private lock object be preferred over the intrinsic lock (this) for synchronizing a block?
Please describe the upsides of both.
Private lock object:
Object lock = new Object();
synchronized (lock) {
}
Intrinsic lock (this):
synchronized (this) {
}
Using explicit lock objects can allow different methods to synchronize on different locks and avoid unnecessary contention. It also makes the lock more explicit, and can make it easier to search the code for blocks that use the lock.
You probably don't want to do either, however! Find the appropriate class in java.util.concurrent and use that instead. :)
A private lock can be useful if you are doing some kind of lock sharding, i.e., you need to only lock certain parts of your object while others can still be accessed by a different client.
One simple parallel to understand this concept is a table lock in a database: if you are modifying one table, you acquire the lock on that single table, not the whole database, so the rest of the tables can be modified by other clients. If you need to implement a similar logic but in a POJO you would use as many private locks as necessary.
One downside of this approach is that your class gets cluttered with a lot of lock objects. This might be an indication that you need to refactor it into a more granular set of classes with a simpler locking strategy, but it all depends on your design and implementation.
These are both using intrinsic locks. Your first example is using the intrinsic lock of lock, while the second is using the intrinsic lock of this. The question is whether or not this is really what you want to lock on, which it often isn't.
Consider the case when you use synchronized(this) inside one of your methods. You have 2 objects of this class, and both objects reference some shared resource. If you lock on this, you will not have mutual exclusion on that resource. You need to lock on some object that every piece of code that can access the resource also has access to.
Lock on this ONLY if the important resource is part of the class itself. Even then, in some cases a separate lock object is better. Also, if there are several different resources in your class that need to be mutually exclusive individually rather than as a whole, then you need several lock objects.
The key is to really just know how synchronized works, and be mindful of what your code is actually doing.
Actually, using either won't make a functional difference; it is more about choice and style. API writers will either lock on the object itself (via synchronized(this) or synchronized instance methods) or use an internal monitor, depending on how the resource is shared: you might not want API users to touch your internal lock, or you might want to let them share the object's intrinsic lock.
Either way, neither choice is wrong; it is more about the intent behind the lock.
Read Java Concurrency in Practice; it will make you much better at concurrency and clarify many of these concepts, which are sometimes more about the choices you make than about correctness.
Each object has only one intrinsic lock.
With the synchronized keyword: if you call two synchronized methods on the same object from two different threads, then even though one thread could in principle run method one while the other runs method two, that will not happen, because both methods share the same intrinsic lock (which belongs to the object). Consequently, one thread will have to wait for the other to finish before it can acquire the intrinsic lock and run the other method.
But if you use multiple locks, you still ensure that only one thread at a time can run method one and only one thread at a time can run method two, while allowing method one and method two to be run by one thread each at the same time, reducing the overall time required.
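For example, a small sketch of the several-locks idea (the fields here are made up purely for illustration): two independent resources get their own private locks, so a thread appending to the log never blocks a thread updating the counter, yet each resource on its own is still mutually exclusive.

class SplitLocks {
    private final Object counterLock = new Object();
    private final Object logLock = new Object();

    private long counter;
    private final StringBuilder log = new StringBuilder();

    void increment() {
        synchronized (counterLock) { // only counter users contend here
            counter++;
        }
    }

    void append(String line) {
        synchronized (logLock) {     // log writers never block counter updates
            log.append(line).append('\n');
        }
    }
}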

Some questions on Java multithreading

I have a set of questions regarding Java multithreading issues. Please provide me with as much help as you can.
0) Assume we have 2 banking accounts and we need to transfer money between them in a thread-safe way.
i.e.
accountA.money += transferSum;
accountB.money -= transferSum;
Two requirements exist:
no one should be able to see the intermediate results of the operation (i.e. one account's sum has increased, but the other's has not yet decreased)
read access should not be blocked during the operation (i.e. the old values of the account sums should be shown while the operation is in progress)
Can you suggest some ideas on this?
1) Assume 2 threads modify some class field via a synchronized method or using an explicit lock. Regardless of that synchronization, there is no guarantee that this field will be visible to threads that read it via a NOT synchronized method. - is that correct?
2) How long can a thread that is awoken by the notify method wait for a lock? Assume we have code like this:
synchronized (lock) {
    lock.notifyAll();
    // do some very-very long activity
    lock.wait(); // or the end of the synchronized block
}
Can we state that at least one thread will succeed and grab the lock? Can a signal be lost due to some timeout?
3) A quotation from Java Concurrency Book:
"Single-threaded executors also provide sufficient internal synchronization to guarantee that any memory writes made by tasks are visible to subsequent tasks; this means that objects can be safely confined to the "task thread" even though that thread may be replaced with another from time to time."
Does this mean that the only thread-safety issue that remains for code executed in a single-threaded executor is data races, and that we can drop the volatile variables and ignore all visibility issues? It looks like a universal way to solve a great part of concurrency issues.
4) All standard getters and setters are atomic. They need not be synchronized if the field is marked as volatile. - is that correct?
5) The initialization of static fields and static blocks is performed by one thread and thus need not be synchronized. - is that correct?
6) Why does a thread need to notify others if it releases the lock via the wait() method, but does not need to do so if it releases the lock by exiting the synchronized block?
0: You can't.
Assuring an atomic update is easy: you synchronize on whatever object holds the bank accounts. But then you either block all readers (because they synchronize as well), or you can't guarantee what the reader will see.
BUT, in a large-scale system such as a banking system, locking on frequently-accessed objects is a bad idea, as it introduces waits into the system. In the specific case of changing two values, this might not be an issue: it will happen so fast that most accesses will be uncontended.
There are certainly ways to avoid such race conditions. Databases do a pretty good job for bank accounts (although ultimately they rely on contended access at the end of a transaction).
1) To the best of my knowledge, there are no guarantees other than those established by synchronized or volatile. If one thread makes a synchronized access and one thread does not, the unsynchronized access does not have a memory barrier. (if I'm wrong, I'm sure that I'll be corrected or at least downvoted)
2) To quote that JavaDoc: "The awakened threads will not be able to proceed until the current thread relinquishes the lock on this object." If you decide to throw a sleep into that synchronized block, you'll be unhappy.
3) I'd have to read that quote several times to be sure, but I believe that "single-threaded executor" is the key phrase. If the executor is running only a single thread, then there is a strict happens-before relationship for all operations on that thread. It does not mean that other threads, running in other executors, can ignore synchronization.
4) No. long and double are not atomic (see the JVM spec). Use an AtomicXXX object if you want unsynchronized access to member variables.
5) No. I couldn't find an exact reference in the JVM spec, but section 2.17.5 implies that multiple threads may initialize classes.
6) Because all threads wait until one thread does a notify. If you're in a synchronized block, and leave it with a wait and no notify, every thread will be waiting for a notification that will never happen.
0) This is a difficult problem, because you don't want intermediate results to be visible or to lock readers during the operation. To be honest, I'm not sure it's possible at all; to ensure no thread sees intermediate results you need to block readers while doing both writes.
If you don't want intermediate results visible, then you have to lock both bank accounts before doing your writes. The best way to do this is to make sure you acquire and release the locks in the same order each time (otherwise you get a deadlock), e.g. get the lock on the lower account number first and then on the greater one; see the sketch after this answer.
1) Correct, all access must be via a lock/synchronized or use volatile.
2) Forever
3) Using a single-threaded executor means that as long as all access is done by tasks run by that executor, you don't need to worry about thread safety/visibility.
4) Not sure what you mean by standard getters and setters, but writes to most variable types (except double and long) are atomic and so don't need synchronization, just volatile for visibility. Try using the Atomic variants instead.
5) No, it is possible for two threads to try to initialize some static code, making naive implementations of Singleton unsafe.
6) Synchronization and wait/notify are two different but related mechanisms. Without wait/notify you'd have to spin (i.e. keep taking the lock and polling an object) to get updates.
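A minimal sketch of the lock ordering described in point 0 (the Account class and its fields are hypothetical):

class Account {
    final long number; // used only to establish a global lock order
    long money;
    Account(long number, long money) { this.number = number; this.money = money; }
}

class Transfers {
    static void transfer(Account from, Account to, long sum) {
        // Lock the account with the lower number first, so two concurrent
        // transfers between the same pair of accounts can never deadlock.
        Account first  = from.number < to.number ? from : to;
        Account second = from.number < to.number ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.money -= sum;
                to.money += sum;
            }
        }
    }
}

Note that readers would still need to take the same locks to be guaranteed not to see an intermediate state, which is exactly the trade-off discussed above.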
5) The initialization of static fields and static blocks is performed by one thread and thus need not be synchronized. - is that correct?
The VM executes static initialization in a synchronized(clazz) block.
static class Foo {
    static {
        assert Thread.holdsLock(Foo.class); // true
        synchronized (Foo.class) { // redundant, already under the lock
            // ...
        }
    }
}
0) The only way I can see to do this is to store accountA and accountB in an object held in an AtomicReference. You then make a copy of the object, modify it, and update the reference only if it is still the same as the original reference.
AtomicReference<Accounts> accountRef = new AtomicReference<>(initialAccounts);
Accounts origRef;
Accounts newRef;
do {
    origRef = accountRef.get();
    newRef = new Accounts(origRef); // make a deep copy of origRef (copy constructor assumed)
    newRef.accountA.money += transferSum;
    newRef.accountB.money -= transferSum;
} while (!accountRef.compareAndSet(origRef, newRef)); // retry if another thread changed it in the meantime
