How does Java determine which Thread should proceed when using synchronized? - java

// mutex declaration assumed; the original post does not show it
private static final Object mutex = new Object();

private static void WaitInQueue(Customer c)
{
    synchronized (mutex) {
        // Do some operation here.
    }
}
I need to make threads wait before proceeding (only one at a time), but it appears that synchronized is not using FIFO to decide which thread should proceed next (it seems more like LIFO). Why is this?
How can I ensure that the first thread to wait at the synchronized block will be the first one to acquire the lock next?

A synchronized block makes no guarantees about fairness: any of the waiting threads may, in principle, be the next to execute. If you really want a fair (FIFO) lock, switch to the newer locking mechanisms introduced in Java 5.
See, for example, the documentation for ReentrantLock.
Here is how you would use a fair lock:
import java.util.concurrent.locks.ReentrantLock;

private final ReentrantLock lock = new ReentrantLock(true); // fair lock
// ...
public void m() {
    lock.lock(); // block until the lock is acquired
    try {
        // ... method body
    } finally {
        lock.unlock();
    }
}
Note, however, that a fair lock generally results in degraded overall throughput, so it is not recommended unless you actually need the ordering guarantee.
Quoting from the documentation:
The constructor for this class accepts an optional fairness parameter. When set true, under contention, locks favor granting access to the longest-waiting thread. Otherwise this lock does not guarantee any particular access order. Programs using fair locks accessed by many threads may display lower overall throughput (i.e., are slower; often much slower) than those using the default setting.

You can use the Semaphore class, with a fairness setting set to true and a count of 1. This guarantees FIFO order for threads, and is almost identical to having a synchronized block.
ReentrantLock also provides a fairness setting.
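For illustration, here is a minimal sketch of the fair-Semaphore variant applied to the method from the question (the Customer placeholder and the class name are only there to make the snippet self-contained):

import java.util.concurrent.Semaphore;

public class FairQueueExample {
    // Placeholder for the Customer type from the question.
    static class Customer { }

    // One permit plus the fairness flag: waiting threads acquire the permit in FIFO order.
    private static final Semaphore mutex = new Semaphore(1, true);

    private static void waitInQueue(Customer c) throws InterruptedException {
        mutex.acquire(); // blocks; the longest-waiting thread is granted the permit first
        try {
            // Do some operation here.
        } finally {
            mutex.release(); // always release, even if the operation throws
        }
    }
}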

To answer "Why is this?": There's rarely any reason to use a specific order on a lock. The threads hit the lock in random order and may as well leave it the same way. The important thing from the JVM's viewpoint is to keep the cores busy working on your program's code. Generally, if you care about what order your threads run in you need something a lot fancier than a lock or semaphore.
The only good exception I can think of is if your lock always has waiting threads, creating the real possibility that a thread that hits it might wait for many seconds, continually getting bumped to the back of the "queue", while an irate user fumes. Then, FIFO makes a lot of sense. But even here you might want to spend some time trying to speed up the synchronized block (or avoiding it completely) so most threads that hit it don't get blocked.
In summary, think long and hard about your design if you find yourself worrying about the order your threads run in.

You could use Thread.join to make a thread wait for another thread to finish before proceeding.
Just go through the following link (note that it is a .NET documentation page, but the Join concept is the same):
http://msdn.microsoft.com/en-us/library/dsw9f9ts(v=vs.90).aspx
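For completeness, a minimal Java sketch of Thread.join (names are illustrative). Note that join() only blocks until the target thread terminates; it does not impose any FIFO ordering on lock acquisition:

public class JoinExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Do some operation here.
        });
        worker.start();
        worker.join(); // block the current thread until 'worker' has finished
        // Anything here runs only after the worker thread has terminated.
    }
}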

Related

Is there a non-reentrant ReadWriteLock I can use?

I need a ReadWriteLock that is NOT reentrant, because the lock may be released by a different thread than the one that acquired it. (I realized this when I started to get IllegalMonitorStateException intermittently.)
I'm not sure if non-reentrant is the right term. A ReentrantLock allows the thread that currently holds the lock to acquire it again. I do NOT want this behaviour, therefore I'm calling it "non-reentrant".
The context is that I have a socket server using a thread pool. There is NOT a thread per connection. Requests may get handled by different threads. A client connection may need to lock in one request and unlock in another request. Since the requests may be handled by different threads, I need to be able to lock and unlock in different threads.
Assume for the sake of this question that I need to stay with this configuration and that I do really need to lock and unlock in different requests and therefore possibly different threads.
It's a ReadWriteLock because I need to allow multiple "readers" OR an exclusive "writer".
It looks like this could be written using AbstractQueuedSynchronizer but I'm afraid if I write it myself I'll make some subtle mistake. I can find various examples of using AbstractQueuedSynchronizer but not a ReadWriteLock.
I could take the OpenJDK ReentrantReadWriteLock source and try to remove the reentrant part but again I'm afraid I wouldn't get it quite right.
I've looked in Guava and Apache Commons but didn't find anything suitable. Apache Commons has RWLockManager which might do what I need but I'm not sure and it seems more complex than I need.
A Semaphore allows different threads to perform the acquire and release of permits. An exclusive write is equivalent to having all of the permits, as the thread waits until all have been released and no additional permits can be acquired by other threads.
import java.util.concurrent.Semaphore;

final int PERMITS = Integer.MAX_VALUE;
Semaphore semaphore = new Semaphore(PERMITS);

// read
semaphore.acquire(1);
try { ... }
finally {
    semaphore.release(1);
}

// write
semaphore.acquire(PERMITS);
try { ... }
finally {
    semaphore.release(PERMITS);
}
I know you've already accepted another answer. But I still think that you are going to create quite a nightmare for yourself. Eventually, a client is going to fail to come back and release those permits and you'll begin to wonder why the "writer" never writes.
If I were doing it, I would do it like this:
Client issues a request to start a transaction
The initial request creates a task (Runnable/Callable) and places it in an Executor for execution
The initial request also registers that task in a Map by transaction id
Client issues the second request to close the transaction
The close request finds the task by transaction id in a map
The close request calls a method on the task to indicate that it should close (probably a signal on a Condition or if data needs to be passed, placing an object in a BlockingQueue)
Now, the transaction task would have code like this:
public void run() {
    readWriteLock.readLock().lock();
    try {
        // do stuff for initializing this transaction
        if (condition.await(someDurationAsLong, someTimeUnit)) {
            // do the rest of the transaction stuff
        } else {
            // do some other stuff to back out the transaction
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // back out and preserve the interrupt status
    } finally {
        readWriteLock.readLock().unlock();
    }
}
I'm not entirely sure what you need, especially why it should be a read-write lock, but if you have tasks that need to be handled by many threads and you don't want them to be processed/accessed concurrently, I would actually use a ConcurrentMap (or similar).
You can remove the task from the map, or substitute it with a special "lock object" to indicate that it is locked. You could return the task with an updated state to the map to let another thread take over, or alternatively you can pass the task directly to the next thread and let it return the task to the map instead.
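As a rough sketch of that idea (all names here are hypothetical, not from the question): threads "lock" a task by atomically removing it from a ConcurrentMap and "unlock" it by putting it back, which works even if a different pool thread performs the release:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TaskCheckout {
    // Hypothetical task type standing in for whatever the pool threads process.
    static class Task { /* state of one client transaction */ }

    private final ConcurrentMap<String, Task> tasks = new ConcurrentHashMap<>();

    // A thread "locks" a task by removing it; remove() is atomic, so only one
    // thread can obtain the task for a given id at a time.
    Task checkout(String taskId) {
        return tasks.remove(taskId); // null means another thread currently has it
    }

    // Any thread (not necessarily the one that checked it out) returns the task,
    // possibly with updated state, so that another request can take over.
    void checkin(String taskId, Task task) {
        tasks.put(taskId, task);
    }
}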
They seem to have dropped the ball on this one by deprecating com.sun.corba.se.impl.orbutil.concurrent.Mutex.
I mean, who in their right mind thinks we won't need non-reentrant locks? Here we are, wasting our time arguing over the definition of reentrant (which can subtly change in meaning per framework, by the way). Yes, I want to tryLock on the same thread; is that such a bad thing? It won't deadlock, because I will just take the else branch out of it. A non-reentrant lock that locks on the same thread can be very useful for preventing errors in GUI apps where the user presses the same button rapidly and repeatedly. Been there, done that; Qt was right... again.
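For the rapid-repeated-click case specifically, a non-reentrant tryLock can be approximated with a simple CAS flag; this is only a sketch of that idea, not a replacement for the deprecated com.sun Mutex:

import java.util.concurrent.atomic.AtomicBoolean;

public class SingleFireHandler {
    // Behaves like a non-reentrant tryLock: the CAS succeeds for exactly one
    // caller (even the same thread fails on a second attempt) until released.
    private final AtomicBoolean busy = new AtomicBoolean(false);

    public void onButtonClicked() {
        if (!busy.compareAndSet(false, true)) {
            return; // a previous click is still being handled; ignore this one
        }
        try {
            // handle the click
        } finally {
            busy.set(false); // may be called from any thread, not just the acquirer
        }
    }
}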

Implementing a Mutex in Java

I have a multi-threaded application (a web app in Tomcat to be exact). In it there is a class that almost every thread will have its own instance of. In that class there is a section of code in one method that only ONE thread (user) can execute at a time. My research has led me to believe that what I need here is a mutex (which is a semaphore with a count of 1, it would seem).
So, after a bit more research, I think what I should do is the following. Of importance is to note that my lock Object is static.
Am I doing it correctly?
public class MyClass {

    private static Object lock = new Object();

    public void myMethod() {
        // Stuff that multiple threads can execute simultaneously.
        synchronized (MyClass.lock) {
            // Stuff that only one thread may execute at a time.
        }
    }
}
In your code, myMethod may be executed in any thread, but only in one at a time. That means that there can never be two threads executing this method at the same time. I think that's what you want - so: Yes.
Typically, the multithreading problem comes from mutability - where two or more threads are accessing the same data structure and one or more of them modifies it.
The first instinct is to control the access order using locking, as you've suggested; however, you can quickly run into lock contention, where your application loses a lot of processing time to context switching as your threads are parked on lock monitors.
You can get rid of most of the problem by moving to immutable data structures, so you return a new object from the setters rather than modifying the existing one, as well as utilising concurrent collections such as ConcurrentHashMap / CopyOnWriteArrayList.
Concurrent programming is something you'll need to get your head around, especially as throughput comes from parallelisation in today's modern computing world.
This will allow one thread at a time through the block. Other threads will wait, but there is no queue as such; there is no guarantee that threads will get the lock in a fair order. In fact, with biased locking, it is unlikely to be fair. ;)
Your lock should be final; if there is any reason it can't be, that is probably a bug. BTW: you might be able to use synchronized(MyClass.class) instead.
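Putting those two suggestions together, a minimal sketch of the adjusted class might look like this (whether you lock on a private final Object or on MyClass.class is a matter of taste):

public class MyClass {

    // final: the lock reference can never be reassigned
    private static final Object lock = new Object();

    public void myMethod() {
        // Stuff that multiple threads can execute simultaneously.
        synchronized (lock) { // or: synchronized (MyClass.class)
            // Stuff that only one thread may execute at a time.
        }
    }
}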

Some questions on Java multithreading

I have a set of questions regarding Java multithreading issues. Please provide me with as much help as you can.
0) Assume we have 2 banking accounts and we need to transfer money between them in a thread-safe way.
i.e.
accountA.money += transferSum;
accountB.money -= transferSum;
Two requirements exist:
no one should be able to see the intermediate results of the operation (i.e. one account's sum is increased, but the other's is not yet decreased)
reading access should not be blocked during the operation (i.e. the old values of the account sums should be shown while the operation goes on)
Can you suggest some ideas on this?
1) Assume 2 threads modify some class field via a synchronized method or an explicit lock. Regardless of the synchronization, there is no guarantee that this field will be visible to threads that read it via a NOT-synchronized method. - Is this correct?
2) How long can a thread that is awoken by the notify method wait for a lock? Assume we have code like this:
synchronized(lock) {
    lock.notifyAll();
    // do some very-very long activity
    lock.wait(); // or the end of the synchronized block
}
Can we state that at least one thread will succeed and grab the lock? Can a signal be lost due to some timeout?
3) A quotation from Java Concurrency Book:
"Single-threaded executors also provide sufficient internal synchronization to guarantee that any memory writes made by tasks are visible to subsequent tasks; this means that objects can be safely confined to the "task thread" even though that thread may be replaced with another from time to time."
Does this mean that the only thread-safety issue that remains for code executed in a single-threaded executor is a data race, and that we can abandon volatile variables and overlook all visibility issues? It looks like a universal way to solve a great part of concurrency issues.
4) All standard getters and setters are atomic. They need not be synchronized if the field is marked as volatile. - Is this correct?
5) The initialization of static fields and static blocks is accomplished by one thread and thus need not be synchronized. - Is this correct?
6) Why does a thread need to notify others if it leaves the lock with the wait() method, but does not need to do this if it leaves the lock by exiting the synchronized block?
0: You can't.
Assuring an atomic update is easy: you synchronize on whatever object holds the bank accounts. But then you either block all readers (because they synchronize as well), or you can't guarantee what the reader will see.
BUT, in a large-scale system such as a banking system, locking on frequently-accessed objects is a bad idea, as it introduces waits into the system. In the specific case of changing two values, this might not be an issue: it will happen so fast that most accesses will be uncontended.
There are certainly ways to avoid such race conditions. Databases do a pretty good job for bank accounts (although ultimately they rely on contended access at the end of a transaction).
1) To the best of my knowledge, there are no guarantees other than those established by synchronized or volatile. If one thread makes a synchronized access and one thread does not, the unsynchronized access does not have a memory barrier. (if I'm wrong, I'm sure that I'll be corrected or at least downvoted)
2) To quote that JavaDoc: "The awakened threads will not be able to proceed until the current thread relinquishes the lock on this object." If you decide to throw a sleep into that synchronized block, you'll be unhappy.
3) I'd have to read that quote several times to be sure, but I believe that "single-threaded executor" is the key phrase. If the executor is running only a single thread, then there is a strict happens-before relationship for all operations on that thread. It does not mean that other threads, running in other executors, can ignore synchronization.
4) No. long and double are not atomic (see the JVM spec). Use an AtomicXXX object if you want unsynchronized access to member variables.
5) No. I couldn't find an exact reference in the JVM spec, but section 2.17.5 implies that multiple threads may initialize classes.
6) Because all threads wait until one thread does a notify. If you're in a synchronized block, and leave it with a wait and no notify, every thread will be waiting for a notification that will never happen.
0) This is a difficult problem because you don't want intermediate results to be visible or to lock readers during the operation. To be honest, I'm not sure it's possible at all; in order to ensure no thread sees intermediate results you need to block readers while doing both writes.
If you don't want intermediate results visible then you have to lock both bank accounts before doing your writing. The best way to do this is to make sure you acquire and release the locks in the same order each time (otherwise you get a deadlock), e.g. take the lock on the lower account number first and then the greater (see the sketch after this list).
1) Correct, all access must be via a lock/synchronized block or use volatile.
2) Forever.
3) Using a single-threaded executor means that, as long as all access is done by tasks run by that executor, you don't need to worry about thread safety/visibility.
4) Not sure what you mean by standard getters and setters, but writes to most variable types (except double and long) are atomic and so don't need sync, just volatile for visibility. Try using the Atomic variants instead.
5) No, it is possible for two threads to try to initialize some static code, making naive implementations of Singleton unsafe.
6) Sync and wait/notify are two different but related mechanisms. Without wait/notify you'd have to spin-lock (i.e. keep taking a lock and polling) on an object to get updates.
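Here is the lock-ordering sketch referred to in point 0 above. It only addresses the deadlock concern when locking both accounts; it does not satisfy the requirement that readers see the old values without blocking. The Account class and field names are illustrative:

public class TransferExample {
    static class Account {
        final long number;
        long money;
        Account(long number) { this.number = number; }
    }

    // Always take the lock on the lower account number first so that two
    // transfers running in opposite directions cannot deadlock each other.
    static void transfer(Account from, Account to, long transferSum) {
        Account first  = from.number < to.number ? from : to;
        Account second = (first == from) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.money -= transferSum;
                to.money   += transferSum;
            }
        }
    }
}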
5) The initialization of static fields and static blocks is accomplished by one thread and thus need not be synchronized. - Is this correct?
The VM executes static initialization while holding a lock on the class, effectively inside a synchronized(clazz) block:
static class Foo {
    static {
        assert Thread.holdsLock(Foo.class); // true
        synchronized (Foo.class) { // redundant, already under the lock
            // ....
        }
    }
}
0) The only way I can see to do this is to store accountA and accountB in an object held in an AtomicReference. You then make a copy of the object, modify it, and update the reference if it is still the same as the original reference.
AtomicReference<Accounts> accountRef;
Accounts origRef;
Accounts newRef;
do {
    origRef = accountRef.get();
    // make a deep copy of origRef into newRef
    newRef.accountA.money += transferSum;
    newRef.accountB.money -= transferSum;
} while (!accountRef.compareAndSet(origRef, newRef)); // retry if another thread updated the reference first

Java Thread Synchronization, best concurrent utility, read operation

I have a java threads related question.
To take a very simple example, let's say I have 2 threads.
Thread A running StockReader Class instance
Thread B running StockAvgDataCollector Class instance
In Thread B, StockAvgDataCollector collects some market Data continuously, does some heavy averaging/manipulation and updates a member variable spAvgData
In Thread A StockReader has access to StockAvgDataCollector instance and its member spAvgData using getspAvgData() method.
So Thread A does READ operation only and Thread B does READ/WRITE operations.
Questions
Now, do I need synchronization, atomic functionality, locking, or any concurrency-related stuff in this scenario? It doesn't matter if Thread A reads an older value.
Since Thread A is only going READ and not update anything and only Thread B does any WRITE operations, will there be any deadlock scenarios?
I've pasted a paragraph below from the following link. From that paragraph, it seems like I do need to worry about some sort of locking/synchronizing.
http://java.sun.com/developer/technicalArticles/J2SE/concurrency/
Reader/Writer Locks
When using a thread to read data from an object, you do not necessarily need to prevent another thread from reading data at the same time. So long as the threads are only reading and not changing data, there is no reason why they cannot read in parallel. The J2SE 5.0 java.util.concurrent.locks package provides classes that implement this type of locking. The ReadWriteLock interface maintains a pair of associated locks, one for read-only and one for writing. The readLock() may be held simultaneously by multiple reader threads, so long as there are no writers. The writeLock() is exclusive. While in theory, it is clear that the use of reader/writer locks to increase concurrency leads to performance improvements over using a mutual exclusion lock. However, this performance improvement will only be fully realized on a multi-processor and the frequency that the data is read compared to being modified as well as the duration of the read and write operations.
Which concurrent utility would be less expensive and suitable in my example?
java.util.concurrent.atomic ?
java.util.concurrent.locks ?
java.util.concurrent.ConcurrentLinkedQueue ? - In this case StockAvgDataCollector will add and StockReader will remove. No getspAvgData() method will be exposed.
Thanks
Amit
Well, the whole ReadWriteLock thing really makes sense when you have many readers and at least one writer; that way you guarantee liveness (you won't be blocking any reader threads if no other thread is writing). However, you have only two threads.
If you don't mind the reading thread seeing an old (but not corrupted) value of spAvgData, then I would go for an AtomicDouble (available in Guava; the JDK itself has no AtomicDouble) or an AtomicReference, depending on spAvgData's datatype.
So the code would look like this
public class A extends Thread {
    // spAvgData
    private final AtomicDouble spAvgData = new AtomicDouble(someDefaultValue);

    public void run() {
        while (compute) {
            // do intensive work
            // ...
            // done with work, update spAvgData
            spAvgData.set(resultOfComputation);
        }
    }

    public double getSpAvgData() {
        return spAvgData.get();
    }
}

// --------------

public class B {
    public void someMethod() {
        A a = new A();
        // after A is created, spAvgData contains a valid value (at least the default)
        a.start();
        while (read) {
            // loll around
            a.getSpAvgData();
        }
    }
}
Yes, synchronization is important, and you need to consider two things: visibility of the spAvgData variable and atomicity of its update. To guarantee that the value written by thread B is visible to thread A, the variable can be declared volatile or held in an AtomicReference. You also need to ensure that the update is atomic in case more invariants are involved or the update is a compound action, using synchronization and locking. If only thread B updates that variable, then you don't need synchronization, and visibility alone should be enough for thread A to read the most up-to-date value of the variable.
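Since only thread B writes the value, a volatile field is enough for the visibility part; here is a sketch under that assumption (collectAndAverage() is a hypothetical stand-in for the heavy computation):

public class StockAvgDataCollector implements Runnable {
    // volatile guarantees that the reader sees the latest value written here;
    // it is sufficient because only this one thread ever writes spAvgData.
    private volatile double spAvgData;

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            double result = collectAndAverage(); // heavy averaging work (placeholder)
            spAvgData = result;                  // single, atomic, visible write
        }
    }

    public double getSpAvgData() {
        return spAvgData; // always a complete, non-corrupted value
    }

    private double collectAndAverage() {
        return 0.0; // placeholder for the real market-data computation
    }
}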
If you don't mind that Thread A can read complete nonsense (including partially updated data) then no, you don't need any synchronisation. However, I suspect that you should mind.
If you just use a single mutex, or ReentrantReadWriteLock and don't suspend or sleep without timeout while holding locks then there will be no deadlock. If you do perform unsafe thread operations, or try to roll your own synchronisation solution, then you will need to worry about it.
If you use a blocking queue then you will also need a constantly-running ingestion loop in StockReader. ReadWriteLock is still of benefit on a single core processor - the issues are the same whether the threads are physically running at the same time, or just interleaved by context switches.
If you don't use at least some form of synchronisation (e.g. a volatile) then your reader may never see any change at all.
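If you go the queue route mentioned above (option 3 in the question), a sketch might look like the following; the class and method names are made up, and StockReader would need its own loop that keeps draining the queue:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueBasedHandoff {
    private final BlockingQueue<Double> averages = new LinkedBlockingQueue<>();

    // Called by the collector thread after each computation.
    public void publish(double avg) {
        averages.offer(avg);
    }

    // The reader thread runs this loop, blocking until a new value arrives.
    public void readLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            double avg = averages.take();
            // use avg ...
        }
    }
}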

Mixing synchronized() with ReentrantLock.lock()

In Java, do ReentrantLock.lock() and ReentrantLock.unlock() use the same locking mechanism as synchronized()?
My guess is "No," but I'm hoping to be wrong.
Example:
Imagine that Thread 1 and Thread 2 both have access to:
ReentrantLock lock = new ReentrantLock();
Thread 1 runs:
synchronized (lock) {
// blah
}
Thread 2 runs:
lock.lock();
try {
// blah
}
finally {
lock.unlock();
}
Assume Thread 1 reaches its part first, then Thread 2 before Thread 1 is finished: will Thread 2 wait for Thread 1 to leave the synchronized() block, or will it go ahead and run?
No, Thread 2 can lock() even when Thread 1 is synchronized on the same lock. This is what the documentation has to say:
Note that Lock instances are just normal objects and can themselves be used as the target in a synchronized statement. Acquiring the monitor lock of a Lock instance has no specified relationship with invoking any of the lock() methods of that instance. It is recommended that to avoid confusion you never use Lock instances in this way, except within their own implementation.
The two mechanisms are different. Implementation/performance wise:
the synchronized mechanism uses a locking mechanism that is "built into" the JVM; the underlying mechanism is subject to the particular JVM implementation, but typically uses a combination of a raw compare-and-set operation (CAS) instruction for cases where the lock isn't contended plus underlying locking mechanisms provided by the OS;
the lock classes such as ReentrantLock are basically coded in pure Java (via a library introduced in Java 5 which exposes CAS instructions and thread descheduling to Java) and so are somewhat more standardised across OSes and more controllable (see below).
Under some circumstances, the explicit locks can perform better. If you look at this comparison of locking mechanisms I performed under Java 5, you'll see that in that particular test (multiple threads accessing an array), explicit lock classes configured in "unfair" mode (the yellow and cyan triangles) allow more throughput than plain synchronized (the purple arrows).
(I should also say that the performance of synchronized has been improved in more recent versions of Hotspot; there may not be much in it on the latest versions or indeed under other circumstances-- this is obviously one test in one environment.)
Functionality-wise:
the synchronized mechanism provides minimal functionality (you can lock and unlock, locking is an all-or-nothing operation, you're more subject to the algorithm the OS writers decided on), though with the advantage of built-in syntax and some monitoring built into the JVM;
the explicit lock classes provide more control: notably, you can specify a "fair" lock, lock with a timeout, or override the lock's behaviour if you need to alter it...
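As an example of that extra control, here is a small sketch combining a fair lock with a timed tryLock (the timeout value is arbitrary):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockWithTimeoutExample {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock

    public boolean doWorkIfAvailable() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely.
        if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            return false; // could not get the lock in time, give up
        }
        try {
            // critical section
            return true;
        } finally {
            lock.unlock();
        }
    }
}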
Why did you make the balance static in Account class?
Remove static and it should work.
Also, I have a question about your thread usage. In your TestMain you create new threads and assign them runnables like WithdrawRequests and DepositRequests. But then you create new threads again inside the constructors of those runnables. This will cause the run method to be executed twice!

Categories

Resources