Multithreading and a shared object - Java

If I have a class like:
class MultiThreadEg {
    private Member member;

    public Integer aMethod() {
        ..............
        ..............
    }

    public String aThread() {
        ...............
        member.memberMethod(.....);
        Payment py = member.payment();
        py.processPayment();
        ...........................
    }
}
Suppose aThread() runs on a new thread. Will accessing the shared member object from many threads at the same time cause any issues, given the following access rules?
Rule 1: ONLY reading, no writing to the object (member).
Rule 2: For every object that needs some manipulation (writing/modification), a copy of the original object will be created.
For example, in the payment() method I do this:
public class Member {
    private Payment memPay;

    public Payment payment() {
        Payment py = new Payment(this.memPay); // Payment's copy constructor will be called.
        return py;
    }
}
My concern is that, even though I create object copies for "writing" (as in the payment() method), accessing the member object from many threads at the same time will cause some discrepancies.
What is actually the case? Is this implementation reliable in every situation (zero or more concurrent accesses)? Please advise. Thanks.

You could simply use a ReentrantReadWriteLock. That way, you could have multiple threads reading at the same time, without issue, but only one would be allowed to modify data. And Java handles the concurrency for you.
ReadWriteLock rwl = new ReentrantReadWriteLock();
Lock readLock = rwl.readLock();
Lock writeLock = rwl.writeLock();

public void read() {
    readLock.lock();
    try {
        // Read as much as you want.
    } finally {
        readLock.unlock();
    }
}

public void writeSomething() {
    writeLock.lock();
    try {
        // Modify anything you want.
    } finally {
        writeLock.unlock();
    }
}
Notice that you should lock() before the try block begins, to guarantee the lock has been obtained before even starting. And, putting the unlock() in the finally clause guarantees that, no matter what happens within the try (early return, an exception is thrown, etc), the lock will be released.
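Applied to the question's Member class, that might look roughly like this (a sketch only; the updatePayment() writer method is a made-up name, while the Payment copy constructor is the one from the question):
public class Member {
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();
    private Payment memPay;

    // Many threads may read (copy) concurrently.
    public Payment payment() {
        rwl.readLock().lock();
        try {
            return new Payment(this.memPay); // copy constructor, as in the question
        } finally {
            rwl.readLock().unlock();
        }
    }

    // Only one thread at a time may replace memPay.
    public void updatePayment(Payment newPay) {
        rwl.writeLock().lock();
        try {
            this.memPay = newPay;
        } finally {
            rwl.writeLock().unlock();
        }
    }
}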

If an update to memPay depends on memPay's current contents (like memPay.amount += 100), you should block other threads' access while you are updating. The pattern looks like this:
mutual exclusion block start
get copy
update copy
publish copy
mutual exclusion block end
Otherwise there could be lost updates when two threads simultaneously begin updating the memPay object.
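For concreteness, here is a minimal Java sketch of that copy-update-publish pattern applied to the question's Member class. It assumes the Payment copy constructor the question describes; the addToAmount() method, the getAmount()/setAmount() accessors and the dedicated payLock object are made-up names for illustration.
public class Member {
    private final Object payLock = new Object(); // dedicated lock guarding memPay
    private Payment memPay;

    // Copy-update-publish under mutual exclusion, so concurrent updates are not lost.
    public void addToAmount(long delta) {
        synchronized (payLock) {                      // mutual exclusion block start
            Payment copy = new Payment(this.memPay);  // get copy (copy constructor)
            copy.setAmount(copy.getAmount() + delta); // update copy
            this.memPay = copy;                       // publish copy
        }                                             // mutual exclusion block end
    }

    // Readers still get a defensive copy, as in the question.
    public Payment payment() {
        synchronized (payLock) {
            return new Payment(this.memPay);
        }
    }
}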


Multithreading: synchronize by locking

Concurrent collection:
ConcurrentMap<LocalDate, A> ex = new ConcurrentHashMap<>();

class A {
    AtomicLong C;
    AtomicLong D;
}

How can I synchronize by locking "C" and "D"? That is, I need to change "C" and "D" at the same time, with a guarantee that while I am changing one of them, the other cannot be changed by external actions.
Thank you.
What you're solving for
You are describing that you want to:
allow a caller to modify something on an object
prevent any other callers (other threads) from modifying things at the same time
Solution description
This solution uses synchronized, though there are a number of other mechanisms available in Java that would support this (several of which are covered in
the Lock Objects section of the Java Tutorials).
The way "synchronized" works is that you designate some code using the "synchronized" keyword, along with an object to synchronize on.
When your code runs, the JVM will guarantee that, for all code which is synchronized – on the same object – only one thread
can proceed at a time.
You can make a synchronized code block, like below. Note: this defines an Object named "lock", but it's just a name chosen for clarity when
reading the code – you could name it anything you like.
Object lock = new Object();

synchronized (lock) {
    ... // all things here run only when "lock" is available
}
You can also designate an entire method as being synchronized, like this:
public synchronized void print() {
    System.out.println("hello");
}
This second example behaves like the first - it also locks on an object – but it's not clear at a glance what the object is; that is, how does
the JVM know which object to synchronize on? This approach works if the method itself is called on an object instance, and in that case the lock becomes this. I'll show an example below.
There's good info in the Java Tutorials about synchronized methods.
Solution #1: synchronized block using Object lock
Here are a few notes about a class AllowOneEditAtATime:
it has two private members: one and two
because they're private, they cannot be changed directly – so it would not be allowed to do something like this:
AllowOneEditAtATime a = new AllowOneEditAtATime();
a.one = new AtomicLong(1); // cannot change "one" directly because it is private
defines private Object lock – this is meant to act as the thing that two different synchronized code blocks will lock on. It's totally fine to have different blocks of code each synchronize on the same object. This is the main technique you're after.
uses a synchronized block inside setOne(), synchronizing on "lock"
uses another synchronized block inside the other method – setTwo() – also synchronizing on "lock"
because both setOne() and setTwo() are synchronized on the same object, only one of them will be allowed to run at a time
class AllowOneEditAtATime1 {
    private final Object lock = new Object();
    private AtomicLong one;
    private AtomicLong two;

    public void setOne(AtomicLong newOne) {
        synchronized (lock) {
            one = newOne;
        }
    }

    public void setTwo(AtomicLong newTwo) {
        synchronized (lock) {
            two = newTwo;
        }
    }
}
Solution #2: synchronized block, using this
Solution #1 works fine, but it isn't necessary (in this case) to create an entire object just for locking. Instead, you can rely on the fact that
this code runs only after someone called new AllowOneEditAtATime(), which means there's always an object instance, which means inside the code
you can use this. The this keyword refers to the object instance itself, the actual instance of AllowOneEditAtATime.
So here's a variation using this (no more Object lock):
class AllowOneEditAtATime2 {
    private AtomicLong one;
    private AtomicLong two;

    public void setOne(AtomicLong newOne) {
        synchronized (this) {
            one = newOne;
        }
    }

    public void setTwo(AtomicLong newTwo) {
        synchronized (this) {
            two = newTwo;
        }
    }
}
Solution #3: synchronized methods, implicit lock
Solution #2 works fine, but since we're using this as the lock, and since the code paths fit with doing this, we can use
synchronized methods
instead of synchronized code blocks.
That means we can replace this:
public void setOne(AtomicLong newOne) {
    synchronized (this) {
        one = newOne;
    }
}
with this:
public synchronized void setOne(AtomicLong newOne) {
    one = newOne;
}
Under the covers, the entire setOne() method is synchronized on this automatically, so it isn't necessary to
include synchronized (this) { .. }
at all. In Solution #2, both methods were doing that, so both can be replaced.
By synchronizing both methods, they will both be synchronized on the object instance (this), which is similar to Solution #2, but with less code.
class AllowOneEditAtATime3 {
    private AtomicLong one;
    private AtomicLong two;

    public synchronized void setOne(AtomicLong newOne) {
        one = newOne;
    }

    public synchronized void setTwo(AtomicLong newTwo) {
        two = newTwo;
    }
}
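For completeness, a small usage sketch (hypothetical driver code, not from the original question) showing two threads calling the synchronized setters concurrently; only one setter body runs at a time because both lock on the same instance:
AllowOneEditAtATime3 shared = new AllowOneEditAtATime3();

Thread t1 = new Thread(() -> shared.setOne(new AtomicLong(1)));
Thread t2 = new Thread(() -> shared.setTwo(new AtomicLong(2)));

t1.start();
t2.start();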
Any of the above would work, as would other synchronization mechanisms. As with all things, there are multiple ways you could solve the problem.
For additional reading,
the Concurrency lesson (in Java Tutorials) has good info
and might be worth your time.

How to synchronize multiple threads from accessing some common data

I have three different threads which create three different objects to read/manipulate some data which is common to all the threads. Now, I need to ensure that we are giving access to only one thread at a time.
The example goes something like this.
public interface CommonData {
    public void addData(); // adds data to the cache
    public String getDataAccessKey(); // key that will be common across different threads for each data type
}

/*
 * Singleton class
 */
public class CommonDataCache {
    private final Map dataMap = new HashMap(); // this takes keys and values as custom objects
}
The implementation class of the interface would look like this
class CommonDataImpl implements CommonData {
    private String key;

    public CommonDataImpl(String key) {
        this.key = key;
    }

    public void addData() {
        // access the singleton cache class and add
    }

    public String getDataAccessKey() {
        return key;
    }
}
Each thread will be invoked as follows:
CommonData data = new CommonDataImpl("Key1");
new Thread(() -> data.addData()).start();
CommonData data1 = new CommonDataImpl("Key1");
new Thread(() -> data1.addData()).start();
CommonData data2 = new CommonDataImpl("Key1");
new Thread(() -> data2.addData()).start();
Now, I need to synchronize those threads if and only if the keys of the data objects (passed on to the threads) are the same.
My thought process so far:
I tried to have a class that provides the lock on the fly for a given key which looks something like this.
/*
 * Singleton class
 */
public class DataAccessKeyToLockProvider {
    private volatile Map<String, ReentrantLock> accessKeyToLockHolder = new ConcurrentHashMap<>();

    private DataAccessKeyToLockProvider() {
    }

    public ReentrantLock getLock(String key) {
        // computeIfAbsent (rather than putIfAbsent) so the lock that ends up in the map
        // is always the one returned, even on the first call for a key
        return accessKeyToLockHolder.computeIfAbsent(key, k -> new ReentrantLock());
    }

    public void removeLock(String key) {
        ReentrantLock removedLock = accessKeyToLockHolder.remove(key);
    }
}
So each thread would call this class, get the lock, use it and remove it once the processing is done. But this can result in a case where the second thread gets the lock object that was inserted by the first thread and waits for the first thread to release the lock. Once the first thread removes the lock, the third thread gets a different lock altogether, so the 2nd thread and the 3rd thread are not in sync anymore.
Something like this:
new Thread(() -> {
    ReentrantLock lock = DataAccessKeyToLockProvider.get(data.getDataAccessKey());
    lock.lock();
    data.addData();
    lock.unlock();
    DataAccessKeyToLockProvider.remove(data.getDataAccessKey());
}).start();
Please let me know if you need any additional details to help me resolve my problem
P.S.: Removing the key from the lock provider is kind of mandatory, as I will be dealing with millions of keys (not necessarily strings), so I don't want the lock provider to eat up my memory.
Inspired by the solution provided by @rzwitserloot, I have tried to put together some generic code that waits for the other thread to complete its processing before giving access to the next thread.
public class GenericKeyToLockProvider<K> {
    private volatile Map<K, ReentrantLock> keyToLockHolder = new ConcurrentHashMap<>();

    public synchronized ReentrantLock getLock(K key) {
        ReentrantLock existingLock = keyToLockHolder.get(key);
        try {
            if (existingLock != null && existingLock.isLocked()) {
                existingLock.lock(); // Waits for the thread that acquired the lock previously to release it
            }
            return keyToLockHolder.put(key, new ReentrantLock()); // Override with the new lock
        } finally {
            if (existingLock != null) {
                existingLock.unlock();
            }
        }
    }
}
But it looks like the entry made by the last thread would never be removed. Any way to solve this?
First, a clarification: You either use ReentrantLock, OR you use synchronized. You don't synchronize on a ReentrantLock instance (you synchronize on any object you want) – or, if you want to go the lock route, you can call the lock() method on your lock object, using a try/finally guard to always ensure you call unlock() later (and don't use synchronized at all).
synchronized is low-level API. Lock, and all the other classes in the java.util.concurrent package are higher level and offer far more abstractions. It's generally a good idea to just peruse the javadoc of all the classes in the j.u.c package from time to time, very useful stuff in there.
The key issue is to remove all references to a lock object (thus ensuring it can be garbage collected), but not until you are certain there are zero active threads locking on it. Your current approach does not know how many threads are waiting. That needs to be fixed. Once you return an instance of a Lock object, it's 'out of your hands' and it is not possible to track whether the caller is ever going to call lock on it. Thus, you can't do that. Instead, call lock as part of the job; the getLock method should actually do the locking as part of the operation. That way, YOU get to control the process flow. However, let's first take a step back:
You say you'll have millions of keys. Okay; but it is somewhat unlikely you'll have millions of threads. After all, a thread requires a stack, and even using the -Xss parameter to reduce the stack size to the minimum of 128k or so, a million threads implies you're using up 128GB of RAM just for stacks; seems unlikely.
So, whilst you might have millions of keys, the number of 'locked' keys is MUCH smaller. Let's focus on those.
You could make a ConcurrentHashMap which maps your string keys to lock objects. Then:
To acquire a lock:
Create a new lock object (literally: Object o = new Object(); - we are going to be using synchronized) and add it to the map using computeIfAbsent. If you managed to create the key/value pair (compare the returned object using == to the one you made; if they are the same, you were the one to add it), you got it, go, run the code. Once you're done, acquire the sync lock on your object, send a notification, remove the entry, and release:
public void doWithLocking(String key, Runnable op) {
    Object locker = new Object();
    Object o = concurrentMap.computeIfAbsent(key, k -> locker);
    if (o == locker) {
        op.run();
        synchronized (locker) {
            locker.notifyAll(); // wake up everybody waiting.
            concurrentMap.remove(key); // this has to be inside!
        }
    } else {
        ...
    }
}
To wait until the lock is available, first acquire a lock on the locker object, THEN check if the concurrentMap still contains it. If not, you're now free to retry this operation. If it's still in, then we now wait for a notification. In any case we always just retry from scratch. Thus:
public void performWithLocking(String key, Runnable op) throws InterruptedException {
    while (true) {
        Object locker = new Object();
        Object o = concurrentMap.computeIfAbsent(key, k -> locker);
        if (o == locker) {
            try {
                op.run();
            } finally {
                // We want to notify waiters and clean up even if the operation throws!
                synchronized (locker) {
                    locker.notifyAll(); // wake up everybody waiting.
                    concurrentMap.remove(key); // this has to be inside!
                }
            }
            return;
        } else {
            synchronized (o) {
                if (concurrentMap.containsKey(key)) o.wait();
            }
        }
    }
}
Instead of this setup where you pass the operation to execute along with the lock key, you could have tandem 'lock' and 'unlock' methods but now you run the risk of writing code that forgets to call unlock. Hence why I wouldn't advise it!
You can call this with, for example:
keyedLockSupportThingie.doWithLocking("mykey", () -> {
System.out.println("Hello, from safety!");
});

Java concurrency: volatile for reading, synchronized for writing

I need to create a class that has a shared-between-threads Object (let's call it SharedObject). The special thing about SharedObject is that it holds a String that will be returned in a multithreaded environment, and sometimes the entire SharedObject will be written to by changing the field reference to a newly created object.
I do not want to make the read and write both synchronized on the same monitor, because the write scenario happens rarely while the read scenario is quite common. Therefore I did the following:
public class ObjectHolder {
    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        if (!isObjectStillValid()) {
            obtainNewSharedObject();
        }
        return sharedObject.getCommonString();
    }

    public synchronized void obtainNewSharedObject() {
        /* This is in case multiple threads wait on this lock:
           after the first one obtains a new object, the others can just
           use it and should not obtain a new one */
        if (!isObjectStillValid()) {
            sharedObject = new SharedObject(/*some parameters from somewhere*/);
        }
    }
}
From what I have read in the documentation and on Stack Overflow, the synchronized keyword will assure that only one thread can access the synchronized block on the same object instance (therefore write races/multiple unnecessary writes are a non-issue), while the volatile keyword on the field reference will assure the reference value is written directly to main memory (not cached locally).
Are there any other pitfalls I am missing?
I want to be sure that within synchronized block when sharedObject is written to, the new value of sharedObject is present for any other thread at latest when lock for obtainNewSharedObject() is released. Should this not be guaranteed, I could encounter scenarios of unnecessary writes and replacing correct values which are a big problem for this case.
I know to be absolutely safe I could just make getSharedObjectString() synchronized by itself however as stated previously I do not want to block reading if not needed.
This way reading is non-blocking, when a write scenario occurs it is blocking.
I should probably mention that the method isObjectStillValid() is thread-independent (entirely SharedObject and system clock based) and is therefore a valid thread-free check to be used for write scenarios.
Edit: Please note I could not find a similar topic on stackoverflow, but it may exist. Sorry if that is the case.
Edit2: Thank you for all the comments. Edit because apparently I cannot upvote yet (I can, but it does not show). While my solution is functional as long as isObjectStillValid is thread-safe, it can suffer from decreased performance due to multiple accesses to volatile field. I will improve it most likely using the upgraded double-checked locking solution. I will also in-depth analyse all the other possibilities mentioned here.
Why don't you use AtomicReference? It uses optimistic locking, meaning that no actual thread locking is involved. Internally it uses compare-and-swap. If you look at the implementation, it uses volatile, and I would trust Doug Lea to implement it correctly :)
Apart from this, there are many more ways to synchronize between a lot of readers and a few writers - for example, ReadWriteLock.
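As a rough sketch of how that could look for the question's ObjectHolder (the constructor and the parameterised isObjectStillValid(candidate) check are adaptations made for illustration; SharedObject and getCommonString() come from the question):
public class ObjectHolder {
    private final AtomicReference<SharedObject> ref;

    public ObjectHolder(SharedObject initial) {
        this.ref = new AtomicReference<>(initial);
    }

    public String getSharedObjectString() {
        SharedObject current = ref.get();
        if (!isObjectStillValid(current)) { // the question's clock-based validity check, given the candidate
            SharedObject fresh = new SharedObject(/* some parameters from somewhere */);
            // Only one racing thread publishes its replacement; the losers simply
            // re-read whatever the winner installed (at the cost of one wasted construction).
            ref.compareAndSet(current, fresh);
            current = ref.get();
        }
        return current.getCommonString();
    }
}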
This looks like a classic double-checked locking pattern. While your implementation is logically correct - thanks to the use of volatile on sharedObject - it might not be the most performant.
The recommended pattern for Java 1.5 on is shown on the Wikipedia page linked.
// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }

    // other functions and members...
}
Note the use of a localRef for accessing the helper field. This limits access to the volatile field in the simple case to a single read instead of potentially twice; once for the check and once for the return. See the Wikipedia page again, just after the recommended pattern sample.
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 25 percent.[7]
Depending on how isObjectStillValid() accesses sharedObject, you might benefit from a similar pattern.
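For example, a sketch adapting that local-reference idea to the question's ObjectHolder might look like this (assuming isObjectStillValid() can be given the candidate object to check; the other names come from the question):
public class ObjectHolder {
    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        SharedObject localRef = sharedObject;          // single volatile read in the common case
        if (!isObjectStillValid(localRef)) {
            synchronized (this) {
                localRef = sharedObject;               // re-check inside the lock
                if (!isObjectStillValid(localRef)) {
                    localRef = new SharedObject(/* some parameters from somewhere */);
                    sharedObject = localRef;           // publish via the volatile write
                }
            }
        }
        return localRef.getCommonString();
    }
}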
This sounds like a use case where a ReadWriteLock would be appropriate.
The basic idea is that there can be multiple readers simultaneously, or one writer exclusively. Below you can find an example of how to use it in a List implementation.
Copied here in case the site goes down:
import java.util.*;
import java.util.concurrent.locks.*;
/**
* ReadWriteList.java
* This class demonstrates how to use ReadWriteLock to add concurrency
* features to a non-threadsafe collection
* @author www.codejava.net
*/
public class ReadWriteList<E> {
private List<E> list = new ArrayList<>();
private ReadWriteLock rwLock = new ReentrantReadWriteLock();
public ReadWriteList(E... initialElements) {
list.addAll(Arrays.asList(initialElements));
}
public void add(E element) {
Lock writeLock = rwLock.writeLock();
writeLock.lock();
try {
list.add(element);
} finally {
writeLock.unlock();
}
}
public E get(int index) {
Lock readLock = rwLock.readLock();
readLock.lock();
try {
return list.get(index);
} finally {
readLock.unlock();
}
}
public int size() {
Lock readLock = rwLock.readLock();
readLock.lock();
try {
return list.size();
} finally {
readLock.unlock();
}
}
}

Synchronization of non-final field

A warning shows up every time I synchronize on a non-final class field. Here is the code:
public class X
{
    private Object o;

    public void setO(Object o)
    {
        this.o = o;
    }

    public void x()
    {
        synchronized (o) // synchronization on a non-final field
        {
        }
    }
}
so I changed the coding in the following way:
public class X
{
    private final Object o;

    public X()
    {
        o = new Object();
    }

    public void x()
    {
        synchronized (o)
        {
        }
    }
}
I am not sure the above code is the proper way to synchronize on a non-final class field. How can I synchronize on a non-final field?
First of all, I encourage you to really try hard to deal with concurrency issues on a higher level of abstraction, i.e. solving it using classes from java.util.concurrent such as ExecutorServices, Callables, Futures etc.
That being said, there's nothing wrong with synchronizing on a non-final field per se. You just need to keep in mind that if the object reference changes, the same section of code may be run in parallel. I.e., if one thread runs the code in the synchronized block and someone calls setO(...), another thread can run the same synchronized block on the same instance concurrently.
Synchronize on the object which you need exclusive access to (or, better yet, an object dedicated to guarding it).
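A minimal sketch of that last suggestion - a dedicated, final guard object protecting every access to the non-final field (the guard name is made up; the rest follows the question's class):
public class X
{
    private final Object guard = new Object(); // dedicated lock, never reassigned
    private Object o;

    public void setO(Object o)
    {
        synchronized (guard)
        {
            this.o = o;
        }
    }

    public void x()
    {
        synchronized (guard)
        {
            // work with o here; every access goes through the same, stable lock
        }
    }
}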
It's really not a good idea - because your synchronized blocks are no longer really synchronized in a consistent way.
Assuming the synchronized blocks are meant to be ensuring that only one thread accesses some shared data at a time, consider:
Thread 1 enters the synchronized block. Yay - it has exclusive access to the shared data...
Thread 2 calls setO()
Thread 3 (or still 2...) enters the synchronized block. Eek! It thinks it has exclusive access to the shared data, but thread 1 is still furtling with it...
Why would you want this to happen? Maybe there are some very specialized situations where it makes sense... but you'd have to present me with a specific use case (along with ways of mitigating the sort of scenario I've given above) before I'd be happy with it.
I agree with one of Jon's comments: you must always use a final lock dummy while accessing a non-final variable, to prevent inconsistencies in case the variable's reference changes. So in any case, and as a first rule of thumb:
Rule#1: If a field is non-final, always use a (private) final lock dummy.
Reason #1: You hold the lock and change the variable's reference by yourself. Another thread waiting outside the synchronized lock will be able to enter the guarded block.
Reason #2: You hold the lock and another thread changes the variable's reference. The result is the same: Another thread can enter the guarded block.
But when using a final lock dummy, there is another problem: You might get wrong data, because your non-final object will only be synchronized with RAM when calling synchronized(object). So, as a second rule of thumb:
Rule#2: When locking a non-final object you always need to do both: Using a final lock dummy and the lock of the non-final object for the sake of RAM synchronisation. (The only alternative will be declaring all fields of the object as volatile!)
These locks are also called "nested locks". Note that you must always acquire them in the same order, otherwise you will get a deadlock:
public class X {
    private final Object LOCK = new Object();
    private Object o;

    public void setO(Object o) {
        this.o = o;
    }

    public void x() {
        synchronized (LOCK) {
            synchronized (o) {
                // do something with o...
            }
        }
    }
}
As you can see, I write the two locks directly after one another, because they always belong together. Like this, you could even do 10 nested locks:
synchronized (LOCK1) {
    synchronized (LOCK2) {
        synchronized (LOCK3) {
            synchronized (LOCK4) {
                // entering the locked space
            }
        }
    }
}
Note that this code won't break if another thread just acquires an inner lock like synchronized (LOCK3) on its own. But it will break if another thread calls something like this:
synchronized (LOCK4) {
    synchronized (LOCK1) { // deadlock!
        synchronized (LOCK3) {
            synchronized (LOCK2) {
                // will never enter here...
            }
        }
    }
}
There is only one workaround for such nested locks when handling non-final fields:
Rule #2 - Alternative: Declare all fields of the object as volatile. (I won't talk here about the disadvantages of doing this, e.g. preventing any storage in x-level caches even for reads, and so on.)
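A rough illustration of that alternative (a sketch only, assuming the guarded object's mutable state is just these fields; the Data class and its field names are made up):
class Data {
    volatile int amount;      // every mutable field declared volatile
    volatile String owner;
}

class X {
    private final Object LOCK = new Object(); // final lock dummy, as in Rule #1
    private Data data;

    public void update(int newAmount, String newOwner) {
        synchronized (LOCK) {          // no nested lock on data needed now
            data.amount = newAmount;   // volatile writes are visible to other threads
            data.owner = newOwner;
        }
    }
}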
So therefore aioobe is quite right: Just use java.util.concurrent. Or begin to understand everything about synchronisation and do it by yourself with nested locks. ;)
For more details why synchronisation on non-final fields breaks, have a look into my test case: https://stackoverflow.com/a/21460055/2012947
And for more details why you need synchronized at all due to RAM and caches have a look here: https://stackoverflow.com/a/21409975/2012947
I'm not really seeing the correct answer here, which is: it's perfectly alright to do it.
I'm not even sure why it's a warning, there is nothing wrong with it. The JVM makes sure that you get some valid object back (or null) when you read a value, and you can synchronize on any object.
If you plan on actually changing the lock while it's in use (as opposed to e.g. changing it from an init method, before you start using it), you have to make the variable that you plan to change volatile. Then all you need to do is to synchronize on both the old and the new object, and you can safely change the value
public volatile Object lock;
...
synchronized (lock) {
synchronized (newObject) {
lock = newObject;
}
}
There. It's not complicated; writing code with locks (mutexes) is actually quite easy. Writing code without them (lock-free code) is what's hard.
EDIT: So this solution (as suggested by Jon Skeet) might have an issue with the atomicity of the implementation of "synchronized(object){}" while the object reference is changing. I asked separately, and according to Mr. erickson it is not thread safe - see: Is entering synchronized block atomic?. So take it as an example of how NOT to do it - with links explaining why ;)
See the code below for how it would work if synchronized() were atomic:
public class Main {
static class Config{
char a='0';
char b='0';
public void log(){
synchronized(this){
System.out.println(""+a+","+b);
}
}
}
static Config cfg = new Config();
static class Doer extends Thread {
char id;
Doer(char id) {
this.id = id;
}
public void mySleep(long ms){
try{Thread.sleep(ms);}catch(Exception ex){ex.printStackTrace();}
}
public void run() {
System.out.println("Doer "+id+" beg");
if(id == 'X'){
synchronized (cfg){
cfg.a=id;
mySleep(1000);
// do not forget to put synchronized(cfg) around setting the new cfg - otherwise the following will happen:
// here it would be modifying different cfg (cos Y will change it).
// Another problem would be that new cfg would be in parallel modified by Z cos synchronized is applied on new object
cfg.b=id;
}
}
if(id == 'Y'){
mySleep(333);
synchronized(cfg) // comment this and you will see inconsistency in log - if you keep it I think all is ok
{
cfg = new Config(); // introduce new configuration
// be aware - don't expect here to be synchronized on new cfg!
// Z might already get a lock
}
}
if(id == 'Z'){
mySleep(666);
synchronized (cfg){
cfg.a=id;
mySleep(100);
cfg.b=id;
}
}
System.out.println("Doer "+id+" end");
cfg.log();
}
}
public static void main(String[] args) throws InterruptedException {
Doer X = new Doer('X');
Doer Y = new Doer('Y');
Doer Z = new Doer('Z');
X.start();
Y.start();
Z.start();
}
}
AtomicReference suits your requirement.
From java documentation about atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef =
new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
In the above example, you would replace String with your own object type.
Related SE question:
When to use AtomicReference in Java?
If o never changes for the lifetime of an instance of X, the second version is better style irrespective of whether synchronization is involved.
Now, whether there's anything wrong with the first version is impossible to answer without knowing what else is going on in that class. I would tend to agree with the compiler that it does look error-prone (I won't repeat what the others have said).
Just adding my two cents: I had this warning when I used a component that is instantiated through a designer, so its field cannot really be final, because the constructor cannot take parameters. In other words, I had a quasi-final field without the final keyword.
I think that's why it is just a warning: you are probably doing something wrong, but it might be right as well.

Java - threads + action

I'm new to Java, so I have a simple question that I don't know where to start from -
I need to write a function that accepts an Action, in a multi-threaded program, where only the first thread that enters the function performs the action, all the other threads wait for it to finish, and then they return from the function without doing anything.
As I said - I don't know where to begin because,
first - there isn't a static variable inside a function (static as in C/C++), so how do I make it so that only the first thread starts the action, and the others do nothing?
second - for the threads to wait, should I use
public synchronized void lala(Action doThis)
{....}
or should I write something like this inside the function:
synchronized (this)
{
...
notify();
}
Thanks !
If you want all threads arriving at a method to wait for the first, then they must synchronize on a common object. It could be the same instance (this) on which the methods are invoked, or it could be any other object (an explicit lock object).
If you want to ensure that the first thread is the only one that will perform the action, then you must store this fact somewhere, for all other threads to read, for they will execute the same instructions.
Going by the previous two points, one could lock on this 'fact' variable to achieve the desired outcome
// Synchronize on this flag; it also stores the fact that the action has run.
// It is static so that separate Runnable instances share it and cannot reset it.
// Don't use a Boolean wrapper here: changing its value would mean replacing the
// object, so threads would no longer be locking on the same thing.
static final AtomicBoolean flag = new AtomicBoolean(false);

public void lala(Action doThis)
{
    synchronized (flag) // other threads arriving here are forced to wait
    {
        if (!flag.get()) // this condition is true only for the first thread
        {
            doX();
            flag.set(true); // set the flag so that other threads will not invoke doX()
        }
    }
    ...
    doCommonWork();
    ...
}
If you're doing threading in any recent version of Java, you really should be using the java.util.concurrent package instead of using Threads directly.
Here's one way you could do it:
private final ExecutorService executor = Executors.newCachedThreadPool();
private final Map<Runnable, Future<?>> submitted = new HashMap<Runnable, Future<?>>();

public void executeOnlyOnce(Runnable action) throws InterruptedException, ExecutionException {
    Future<?> future = null;
    // NOTE: I was tempted to use a ConcurrentHashMap here, but we don't want to
    // get into a possible race with two threads both seeing that a value hasn't
    // been computed yet and both starting a computation, so the synchronized
    // block ensures that no other thread can be submitting the runnable to the
    // executor while we are checking the map. If, on the other hand, it's not
    // a problem for two threads to both create the same value (that is, this
    // behavior is only intended for caching performance, not for correctness),
    // then it should be safe to use a ConcurrentHashMap and use its
    // putIfAbsent() method instead.
    synchronized (submitted) {
        future = submitted.get(action);
        if (future == null) {
            future = executor.submit(action);
            submitted.put(action, future);
        }
    }
    future.get(); // ignore return value because the runnable returns void
}
Note that this assumes that your Action class (I'm assuming you don't mean javax.swing.Action, right?) implements Runnable and also has a reasonable implementation of equals() and hashCode(). Otherwise, you may need to use a different Map implementation (for example, IdentityHashMap).
Also, this assumes that you may have multiple different actions that you want to execute only once. If that's not the case, then you can drop the Map entirely and do something like this:
private final ExecutorService executor = Executors.newCachedThreadPool();
private final Object lock = new Object();
private volatile Runnable action;
private volatile Future<?> future = null;

public void executeOnlyOnce(Runnable action) throws InterruptedException, ExecutionException {
    synchronized (lock) {
        if (this.action == null) {
            this.action = action;
            this.future = executor.submit(action);
        } else if (!this.action.equals(action)) {
            throw new IllegalArgumentException("Unexpected action");
        }
    }
    future.get();
}
public synchronized void foo()
{
    ...
}

is equivalent to

public void foo()
{
    synchronized (this)
    {
        ...
    }
}
so either of the two options should work. I personally like the synchronized method option.
Synchronizing the whole method can sometimes be overkill if there is only a certain part of the code that deals with shared data (for example, a common variable that each thread is updating).
The best approach for performance is to use the synchronized keyword only around the shared data. If you synchronize the whole method when it is not entirely necessary, then a lot of threads will be waiting even though they could still do work within their own local scope.
When a thread enters the synchronized block it acquires the lock (if you use the this object, it locks on the object itself), and the others will wait until the lock-acquiring thread has exited. You actually don't need a notify() call in this situation, as the threads will release the lock when they exit the synchronized block.
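To illustrate, here is a small hedged sketch (the counter and the expensiveLocalComputation() helper are made-up names): the second method holds the lock only around the shared update, so the thread-local work can run in parallel.
private final Object lock = new Object();
private int sharedCounter = 0;

// Coarse-grained: the whole method is serialized, including the thread-local work.
public synchronized void coarse() {
    int local = expensiveLocalComputation(); // does not touch shared data
    sharedCounter += local;
}

// Fine-grained: only the shared update is serialized.
public void fineGrained() {
    int local = expensiveLocalComputation(); // can run in parallel across threads
    synchronized (lock) {
        sharedCounter += local;
    }
}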
