public class MyClass {
    private List<Integer> resources = new ArrayList<>();

    public synchronized Integer getAndRemoveResourceOrWait(Integer requestedResource) throws InterruptedException {
        while (resources.stream().noneMatch(r -> r >= requestedResource)) {
            wait();
        }
        Integer found = resources.stream()
                .filter(r -> r >= requestedResource)
                .findFirst()
                .get();
        resources.remove(found);
        return found;
    }

    public void addResource(Integer resource) {
        resources.add(resource);
        notifyAll();
    }
}
Thread "A" episodically invokes addResource with random value.
A few another threads actively invokes getAndRemoveResourceOrWait.
What I need to do to let method getAndRemoveResourceOrWait work concurrently?
For example, thread "X" invokes getAndRemoveResourceOrWait with variable 128 which does not exists in resources collection. So, it become waiting for it. While it is waiting, thread "Y" invokes getAndRemoveResourceOrWait with variable 64 and it exists in resources collection. Thread "Y" should not wait for thread "X" to complete.
What do I need to do to make getAndRemoveResourceOrWait work concurrently?
It simply needs to run on a different thread to the one that calls addResource(resource).
Note that getAndRemoveResourceOrWait is a blocking (synchronous) operation in the sense that the calling thread is blocked until it gets an answer. However, one thread calling getAndRemoveResourceOrWait does not block another thread calling it. The key is that the wait() call releases the mutex, and then reacquires it when the monitor is notified. What will happen here is that a notifyAll will cause all waiting threads to wake up, one at a time.
However, there is a bug in your addResource method: it needs to be declared as synchronized. If you don't call notifyAll() while the current thread holds the mutex on this, you will get an IllegalMonitorStateException. (Making it synchronized is also necessary to ensure that the updates to the shared resources object are visible ... in both directions.)
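In other words, the minimal fix is simply to add the missing synchronized keyword:
public synchronized void addResource(Integer resource) {
    resources.add(resource);
    notifyAll();
}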
Also, this implementation is not going to scale well:
Each waiting thread will scan the entire resource list on every update; i.e. on every call to addResource.
When a waiting thread finds a resource, it will scan the list twice more to remove it.
All of this is done while holding the mutex on the shared MyClass instance ... which blocks addResource as well.
UPDATE - Assuming that the resource values are unique, a better solution would be to replace the ArrayList with a TreeSet. This should work:
public class MyClass {
    private TreeSet<Integer> resources = new TreeSet<>();

    public synchronized Integer getAndRemoveResourceOrWait(
            Integer resource) throws InterruptedException {
        while (true) {
            Integer found = resources.tailSet(resource, true).pollFirst();
            if (found != null) {
                return found;
            }
            wait();
        }
    }

    public synchronized void addResource(Integer resource) {
        resources.add(resource);
        notifyAll();
    }
}
(I also tried ConcurrentSkipListSet but I couldn't figure out a way to avoid using a mutex while adding and removing. If you were trying to remove an equal resource, it could be done ...)
I have three different threads which create three different objects to read/manipulate some data that is common to all the threads. Now, I need to ensure that access is given to only one thread at a time.
The example goes something like this.
public interface CommonData {
    public void addData(); // adds data to the cache
    public String getDataAccessKey(); // key that will be common across different threads for each data type
}
/*
 * Singleton class
 */
public class CommonDataCache {
    private final Map dataMap = new HashMap(); // this takes keys and values as custom objects
}
The implementation class of the interface would look like this
class CommonDataImpl implements CommonData {
    private String key;

    public CommonDataImpl(String key) {
        this.key = key;
    }

    public void addData() {
        // access the singleton cache class and add
    }

    public String getDataAccessKey() {
        return key;
    }
}
Each thread will be invoked as follows:
CommonData data = new CommonDataImpl("Key1");
new Thread(() -> data.addData()).start();
CommonData data1 = new CommonDataImpl("Key1");
new Thread(() -> data1.addData()).start();
CommonData data2 = new CommonDataImpl("Key1");
new Thread(() -> data2.addData()).start();
Now, I need to synchronize those threads if and only if the keys of the data objects (passed on to the threads) are the same.
My thought process so far:
I tried to have a class that provides a lock on the fly for a given key, which looks something like this:
/*
 * Singleton class
 */
public class DataAccessKeyToLockProvider {
    private volatile Map<String, ReentrantLock> accessKeyToLockHolder = new ConcurrentHashMap<>();

    private DataAccessKeyToLockProvider() {
    }

    public ReentrantLock getLock(String key) {
        return accessKeyToLockHolder.computeIfAbsent(key, k -> new ReentrantLock());
    }

    public void removeLock(BSSKey key) {
        ReentrantLock removedLock = accessKeyToLockHolder.remove(key);
    }
}
So each thread would call this class, get the lock, use it, and remove it once the processing is done. But this can result in a case where a second thread gets the lock object that was inserted by the first thread and waits for the first thread to release it. Once the first thread removes the lock, a third thread would then get a different lock altogether, so the 2nd thread and the 3rd thread are no longer synchronized with each other.
Something like this:
new Thread(() -> {
    ReentrantLock lock = DataAccessKeyToLockProvider.getLock(data.getDataAccessKey());
    lock.lock();
    data.addData();
    lock.unlock();
    DataAccessKeyToLockProvider.removeLock(data.getDataAccessKey());
}).start();
Please let me know if you need any additional details to help me resolve my problem.
P.S: Removing the key from the lock provider is more or less mandatory, as I will be dealing with millions of keys (not necessarily strings), so I don't want the lock provider to eat up my memory.
Inspired by the solution provided by @rzwitserloot, I have tried to put together some generic code that waits for the other thread to complete its processing before giving access to the next thread.
public class GenericKeyToLockProvider<K> {
    private volatile Map<K, ReentrantLock> keyToLockHolder = new ConcurrentHashMap<>();

    public synchronized ReentrantLock getLock(K key) {
        ReentrantLock existingLock = keyToLockHolder.get(key);
        try {
            if (existingLock != null && existingLock.isLocked()) {
                existingLock.lock(); // Waits for the thread that acquired the lock previously to release it
            }
            return keyToLockHolder.put(key, new ReentrantLock()); // Override with the new lock
        } finally {
            if (existingLock != null) {
                existingLock.unlock();
            }
        }
    }
}
But it looks like the entry made by the last thread would never be removed. Any way to solve this?
First, a clarification: you either use ReentrantLock, OR you use synchronized. You don't synchronize on a ReentrantLock instance (you can synchronize on any object you want) – or, if you want to go the lock route, you call the lock method on your lock object, using a try/finally guard to ensure you always call unlock later (and don't use synchronized at all).
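For reference, the lock route mentioned above follows this pattern (a minimal sketch; the class and field names here are made up for illustration):
import java.util.concurrent.locks.ReentrantLock;

public class Guarded {
    private final ReentrantLock lock = new ReentrantLock();

    public void doGuardedWork() {
        lock.lock();
        try {
            // ... work on the shared state here ...
        } finally {
            lock.unlock(); // always released, even if the guarded code throws
        }
    }
}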
synchronized is a low-level API. Lock and the other classes in the java.util.concurrent package are higher level and offer far more abstractions. It's generally a good idea to peruse the javadoc of the classes in the j.u.c package from time to time; there is very useful stuff in there.
The key issue is to remove all references to a lock object (thus ensuring it can be garbage collected), but not until you are certain there are zero active threads locking on it. Your current approach does not know how many threads are waiting; that needs to be fixed. Once you return an instance of a Lock object, it is 'out of your hands' and there is no way to track whether the caller is ever going to call lock on it. Thus, you can't do that. Instead, call lock as part of the job; the getLock method should actually do the locking as part of the operation. That way, YOU get to control the process flow. However, let's first take a step back:
You say you'll have millions of keys. Okay; but it is somewhat unlikely you'll have millions of threads. After all, a thread requires a stack, and even using the -Xss parameter to reduce the stack size to the minimum of 128k or so, a million threads implies you're using up 128GB of RAM just for stacks; seems unlikely.
So, whilst you might have millions of keys, the number of 'locked' keys is MUCH smaller. Let's focus on those.
You could make a ConcurrentHashMap which maps your string keys to lock objects. Then:
To acquire a lock:
Create a new lock object (literally: Object locker = new Object(); - we are going to be using synchronized) and add it to the map using putIfAbsent. If you managed to create the key/value pair (putIfAbsent returns null when there was no previous mapping, i.e. when you were the one to add it), you got it: go run the code. Once you're done, acquire the sync lock on your object, send a notification, release, and remove:
public void doWithLocking(String key, Runnable op) {
    Object locker = new Object();
    Object o = concurrentMap.putIfAbsent(key, locker);
    if (o == null) { // null means we were the one who added the entry
        op.run();
        synchronized (locker) {
            locker.notifyAll(); // wake up everybody waiting.
            concurrentMap.remove(key); // this has to be inside!
        }
    } else {
        ...
    }
}
To wait until the lock is available, first acquire a lock on the locker object, THEN check whether the concurrentMap still contains it. If not, you're now free to retry the operation. If it's still in, then wait for a notification. In either case, just retry from scratch. Thus:
public void performWithLocking(String key, Runnable op) throws InterruptedException {
    while (true) {
        Object locker = new Object();
        Object o = concurrentMap.putIfAbsent(key, locker);
        if (o == null) { // we added the entry, so we hold the 'lock' for this key
            try {
                op.run();
            } finally {
                // We want to notify even if the operation throws!
                synchronized (locker) {
                    locker.notifyAll(); // wake up everybody waiting.
                    concurrentMap.remove(key); // this has to be inside!
                }
            }
            return;
        } else {
            synchronized (o) {
                if (concurrentMap.containsKey(key)) o.wait();
            }
        }
    }
}
Instead of this setup where you pass the operation to execute along with the lock key, you could have tandem 'lock' and 'unlock' methods, but then you run the risk of writing code that forgets to call unlock. Hence why I wouldn't advise it!
You can call this with, for example:
keyedLockSupportThingie.doWithLocking("mykey", () -> {
    System.out.println("Hello, from safety!");
});
Attached is the code.
What does synchronized(m) mean, and why should we use it?
What's the difference between synchronized(this) and synchronized(m)?
class Waiter implements Runnable {
    Message m;

    public Waiter(Message m) {
        this.m = m;
    }

    @Override
    public void run() {
        String name = Thread.currentThread().getName();
        synchronized (m) {
            try {
                System.out.println("Waiting to get notified at time " + System.currentTimeMillis());
                m.wait();
            } catch (InterruptedException ex) {
                ex.printStackTrace();
            }
            System.out.println("Waiter thread notified at time " + System.currentTimeMillis());
            System.out.println("Message processed ");
        }
    }
}
The difference between synchronized(this) and synchronized(m) is that by synchronizing on this, you synchronize on the entire instance. So, as you would expect, nobody else can synchronize on this while you hold the lock.
public synchronized void foo() {
    // Handle shared resource
}
is equivalent to
public void foo() {
    synchronized (this) {
        // Handle shared resource
    }
}
By using objects, such as m, you get a more fine grained control over what you want to synchronize and when. But remember that if someone uses foo(), as shown above, it will not stop access to methods that are not synchronized on this:
public void anotherLock() {
    synchronized (m) {
        // Should handle another shared resource
        // otherwise you might get unexpected results
    }
}
While a thread is using foo(), another thread can access anotherLock().
The Java keyword synchronized is used to synchronize different threads via one instance, which acts as a mutually exclusive semaphore. Hence, the argument passed to synchronized is the instance that can be owned by one thread exclusively. It is up to you, the programmer, which instance you synchronize your threads on.
But it is a good idea to use the resource that is subject to race conditions, or the instance owning that resource. The later you enter a synchronized block and the earlier you leave it, the better your application will scale.
synchronized is used for thread safety. In your case it is used to implement the observer pattern: you want to wait for something to happen on the Message object and only then process it, so some other thread will notify on the Message object m that you are waiting on (m.wait()).
When you wait on some object you need to hold the lock on that object, which is why the wait() call must always be inside a block synchronized on the object you are waiting on. That is why you are using synchronized(m).
You cannot replace it with synchronized(this), because you are calling wait() on object m, so the synchronization must be on m.
Somewhere in your application you must be calling m.notify() or m.notifyAll(), which will resume your wait() on m.
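For illustration, the notifying side might look something like this (a sketch only; the Notifier class name and print statements are made up, and Message is assumed to be the same object the Waiter holds):
class Notifier implements Runnable {
    private final Message m;

    public Notifier(Message m) {
        this.m = m;
    }

    @Override
    public void run() {
        // Must synchronize on the SAME object the waiter waits on.
        synchronized (m) {
            System.out.println("Notifying waiters at time " + System.currentTimeMillis());
            m.notifyAll(); // wakes up every thread currently waiting on m
        }
    }
}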
I've been working on a project where I need a synchronized queue, because my program is multi-threaded and multiple threads may access this queue.
I used an ArrayList to do that, but I seem to have some issues with it and the threads got deadlocked. I don't know if the queue is the reason, but I just wanted to check:
public class URLQueue {
    private ArrayList<URL> urls;

    public URLQueue() {
        urls = new ArrayList<URL>();
    }

    public synchronized URL remove() throws InterruptedException {
        while (urls.isEmpty())
            wait();
        URL r = urls.remove(0);
        notifyAll();
        return r;
    }

    public synchronized void add(URL newURL) throws InterruptedException {
        urls.add(newURL);
        notifyAll();
    }

    public int getSize() {
        return urls.size();
    }
}
EDIT:
Even when using LinkedBlockingQueue I get stuck in the same loop as before. I think this is caused by a thread that is waiting for the queue to be filled, but it never is because the other functionality has finished running... any ideas?
It is better to use LinkedBlockingQueue here, as it is designed for exactly this purpose: it waits until an element is available when trying to remove one.
LinkedBlockingQueue
It provides a take() method which
Retrieves and removes the head of this queue, waiting if necessary until an element becomes available
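For example, a minimal sketch of the same queue backed by a LinkedBlockingQueue might look like this (the class and method names simply mirror the question's code):
import java.net.URL;
import java.util.concurrent.LinkedBlockingQueue;

public class URLQueue {
    private final LinkedBlockingQueue<URL> urls = new LinkedBlockingQueue<>();

    public URL remove() throws InterruptedException {
        return urls.take(); // blocks until an element becomes available
    }

    public void add(URL newURL) {
        urls.add(newURL); // the queue is unbounded, so add() never blocks
    }

    public int getSize() {
        return urls.size();
    }
}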
In your code, notifyAll() doesn't throw InterruptedException, so you should remove the throws clause from add().
The remove() method doesn't need to notifyAll(), as its action shouldn't wake other threads.
The getSize() method should be synchronized.
Otherwise there is no chance for your code to deadlock as you need two locks to create a deadlock.
I have some code like this:
public int handle_refresh(Data mmsg) throws Exception {
    int ret = 0;
    String custId = mmsg.getCustomerId();
    CustomerThread t = custMap.get(mmsg.getCustomerId());
    if (t == null || !t.isAlive()) {
        t = (CustomerThread) context.getBean("custT");
        t.initThread(mmsg.getCustomerId(), mmsg.getCustomerId(), mmsg.getMessageBody());
        custSMap.put(mmsg.getCustomerId(), t);
        t.createBufferThread();
        t.start();
        t.initStreaming();
    }
    synchronized (t) {
        if (null != t) {
            ret = t.addSymbols(mmsg);
        }
    }
    return ret;
}
Here CustomerThread is looked up in custMap:
Map<String, CustomerThread> custMap = new HashMap<>();
If the thread is not in custMap (or is not alive):
1) read the Spring application context and get a thread: t = (CustomerThread) context.getBean("custT");
2) in the initThread method, set the name of the thread uniquely for each customer: t.initThread(mmsg.getCustomerId(), mmsg.getCustomerId(), mmsg.getMessageBody());
3) put the newly created thread into the map: custSMap.put(mmsg.getCustomerId(), t);
4) in createBufferThread, data is set into the cache: t.createBufferThread();
5) start the new thread, which then gets data from the db: t.start();
6) set up the db connections.
If the thread is already in custMap:
1) synchronized (t).
2) call the t.addSymbols() method.
My questions are:
1) Does the first if block execute only the first time, and once the thread has been created is synchronized (t) always executed instead?
I mean, are all of the above steps 1 to 6 in the if block executed only once?
2) What does synchronized (t) do?
It looks to me that synchronized (t) is protecting the addSymbols method to make it thread-safe. Calls to this method are adding symbols, I assume, to some data structure within the t thread. It may be that other methods in that thread are also synchronized, which would mean they lock on the Thread instance. That's what synchronized (t) is doing here.
But this is an extremely ugly way of adding thread-safety. The addSymbols(...) method should itself lock on a lock object or, if necessary, be a synchronized method. A class should be responsible for its own locking and not require the caller to do something.
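For example, something along these lines (a sketch only; the internal lock field and the body of addSymbols are made up for illustration):
public class CustomerThread extends Thread {
    private final Object symbolLock = new Object();

    public int addSymbols(Data mmsg) {
        synchronized (symbolLock) {
            // ... add the symbols from mmsg to the internal data structure ...
            return 0; // placeholder return value for the sketch
        }
    }
}
The caller would then just call t.addSymbols(mmsg) without wrapping it in its own synchronized block.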
Couple other comments about your code:
t = (CustomerThread) context.getBean("custT");
The above code seems to be getting a thread instance from Spring. This is typically a singleton unless the "custT" bean is some sort of thread factory. If it is not a factory (or prototype-scoped) bean, then you are going to get the same thread object for every call to your handle_refresh method and keep reinitializing it. This is most likely not what you want.
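If a fresh CustomerThread per call is what's intended, the bean would typically need prototype scope; for example, something like this (illustrative only, assuming Java-based Spring configuration; the ThreadConfig class name is made up):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
public class ThreadConfig {

    @Bean
    @Scope("prototype") // a new CustomerThread is returned for every getBean("custT") call
    public CustomerThread custT() {
        return new CustomerThread();
    }
}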
synchronized (t) {
    if (null != t) {
        ret = t.addSymbols(mmsg);
    }
}
If t was null then the synchronized line would throw an NPE, so you don't need the null check inside the synchronized block.
CustomerThread t = custMap.get(mmsg.getCustomerId());
If the handle_refresh(...) method is called from multiple threads then you need to make sure that the custMap is properly synchronized as well.
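One simple option (a sketch; it assumes the map is only accessed as shown in the question) is to make the map itself a concurrent one:
private final Map<String, CustomerThread> custMap = new ConcurrentHashMap<>();
Note that even with a concurrent map, the check-then-act sequence in handle_refresh (get, test, then put) would still need its own synchronization to stop two threads creating a thread for the same customer at the same time.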
The if block should only execute once per customer ID. Notice this line of code in the if block:
custSMap.put(mmsg.getCustomerId(), t);
It populates the map, so the next time a search is done with that customerId, it will be found and the synchronized block will be executed.
The synchronized block calls a method on t while holding the mutex lock on t.
I'm new to Java so I have a simple question that I don't know where to start from -
I need to write a function that accepts an Action in a multi-threaded program, where only the first thread that enters the function performs the action, and all the other threads wait for it to finish and then return from the function without doing anything.
As I said - I don't know where to begin because,
first - there is no static local variable in a Java method (static as in C/C++), so how do I make it so that only the first thread starts the action, and the others do nothing?
second - for the threads to wait, should I use
public synchronized void lala(Action doThis)
{....}
or should I write something like this inside the function:
synchronized (this)
{
...
notify();
}
Thanks !
If you want all threads arriving at a method to wait for the first, then they must synchronize on a common object. It could be the same instance (this) on which the methods are invoked, or it could be any other object (an explicit lock object).
If you want to ensure that the first thread is the only one that will perform the action, then you must store this fact somewhere, for all other threads to read, for they will execute the same instructions.
Going by the previous two points, one could lock on this 'fact' variable to achieve the desired outcome:
// Synchronize on this flag, and also use it to store the fact that the action has run.
// It is static so that creating this in multiple Runnable instances does not reset the fact.
// Don't use the Boolean wrapper here: autoboxed Boolean values are shared instances,
// so the lock would not be exclusive to this code.
static final AtomicBoolean flag = new AtomicBoolean(false);

public void lala(Action doThis)
{
    synchronized (flag) // synchronize on the flag so that other threads arriving here are forced to wait
    {
        if (!flag.get()) // this condition is true only for the first thread
        {
            doX();
            flag.set(true); // set the flag so that other threads will not invoke doX
        }
    }
    ...
    doCommonWork();
    ...
}
If you're doing threading in any recent version of Java, you really should be using the java.util.concurrent package instead of using Threads directly.
Here's one way you could do it:
private final ExecutorService executor = Executors.newCachedThreadPool();
private final Map<Runnable, Future<?>> submitted
        = new HashMap<Runnable, Future<?>>();

public void executeOnlyOnce(Runnable action) throws InterruptedException, ExecutionException {
    Future<?> future = null;
    // NOTE: I was tempted to use a ConcurrentHashMap here, but we don't want to
    // get into a possible race with two threads both seeing that a value hasn't
    // been computed yet and both starting a computation, so the synchronized
    // block ensures that no other thread can be submitting the runnable to the
    // executor while we are checking the map. If, on the other hand, it's not
    // a problem for two threads to both create the same value (that is, this
    // behavior is only intended for caching performance, not for correctness),
    // then it should be safe to use a ConcurrentHashMap and use its
    // putIfAbsent() method instead.
    synchronized (submitted) {
        future = submitted.get(action);
        if (future == null) {
            future = executor.submit(action);
            submitted.put(action, future);
        }
    }
    future.get(); // ignore the return value because the runnable returns void
}
Note that this assumes that your Action class (I'm assuming you don't mean javax.swing.Action, right?) implements Runnable and also has a reasonable implementation of equals() and hashCode(). Otherwise, you may need to use a different Map implementation (for example, IdentityHashMap).
Also, this assumes that you may have multiple different actions that you want to execute only once. If that's not the case, then you can drop the Map entirely and do something like this:
private final ExecutorService executor = Executors.newCachedThreadPool();
private final Object lock = new Object();
private volatile Runnable action;
private volatile Future<?> future = null;

public void executeOnlyOnce(Runnable action) throws InterruptedException, ExecutionException {
    synchronized (lock) {
        if (this.action == null) {
            this.action = action;
            this.future = executor.submit(action);
        } else if (!this.action.equals(action)) {
            throw new IllegalArgumentException("Unexpected action");
        }
    }
    future.get();
}
public synchronized void foo()
{
    ...
}
is equivalent to
public void foo()
{
    synchronized (this)
    {
        ...
    }
}
so either of the two options should work. I personally like the synchronized method option.
Synchronizing the whole method can sometimes be overkill if only a certain part of the code deals with shared data (for example, a common variable that each thread updates).
The best approach for performance is to use the synchronized keyword only around the shared data. If you synchronize the whole method when it is not strictly necessary, then many threads will be waiting when they could still be doing work within their own local scope.
When a thread enters the synchronized block it acquires the lock (if you use the this object it locks on the object itself); the others wait until the lock-acquiring thread has exited. You don't actually need a notify statement in this situation, as the threads release the lock when they exit the synchronized block.
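To illustrate that last point, here is a small sketch (the class, field, and method names are made up for the example):
public class Counter {
    private final Object lock = new Object();
    private int shared = 0;

    public void doWork() {
        int local = expensiveLocalComputation(); // no lock needed for thread-local work

        synchronized (lock) { // hold the lock only while touching the shared variable
            shared += local;
        }
    }

    private int expensiveLocalComputation() {
        return 42; // placeholder
    }
}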