I have this scenario:
private static final Map<String, Boolean> availables = new HashMap<String, Boolean>();
A single thread keeps getting an available task and distributing it among the workers (let's call it the distributor):
public static String getAnAvailable() {
synchronized (availables){
for(Map.Entry<String, Boolean> entry : availables.entrySet()){
if(entry.getValue() == true){
entry.setValue(false);
return entry.getKey();
}
}
}
return null;
}
A single thread runs this code (let's call it the updater):
while(true){
...
synchronized (availables) {
for(String deleteIt : delete){
if(availables.containsKey(deleteIt)){
availables.remove(deleteIt);
}
}
for(String name : names){
if(!availables.containsKey(name)){
availables.put(name, true);
}
}
}
...
}
I tried to use ConcurrentHashMap instead of manual synchronization, but it can throw an exception in the getAnAvailable method. The reason is obvious: if the updater thread changes the size of the map, then within getAnAvailable we get an iteration exception, so there is no point in using it.
Now my problem is that, for some reason, the updater starves indefinitely on exactly the third run...
Basically it gets stuck when it tries to acquire the lock.
For the life of me I have no idea why that happens. Can anyone see something that I don't see here?
We're trying to work out the best way to use Hazelcast's IMap without using pessimistic locking.
EntryProcessor seems like the correct choice; however, we need to apply two different types of operations: 'create' when containsKey is false, and 'update' when containsKey is true.
How can I utilise EntryProcessor to support these logic checks?
If two threads hit the containsKey() at the same time and it returns false to both of them, I don't want both of them to create the key. I'd want the second thread to apply an update instead.
This is what we have so far:
public void put(String key, Object value) {
IMap<String, Object> map = getMap();
if (!map.containsKey(key)) {
// create key here
} else {
// update existing value here
// ...
map.executeOnKey(key, new MyEntryProcessor({my_new_value}));
}
}
private static class MyEntryProcessor implements
EntryProcessor<String, Object>, EntryBackupProcessor<String, Object>, Serializable {
private static final long serialVersionUID = // blah blah
private static final ThreadLocal<Object> entryToSet = new ThreadLocal<>();
MyEntryProcessor(Object entryToSet) {
MyEntryProcessor.entryToSet.set(entryToSet);
}
@Override
public Object process(Map.Entry<String, Object> entry) {
entry.setValue(entryToSet.get());
return entry.getValue();
}
@Override
public EntryBackupProcessor<String, Object> getBackupProcessor() {
return MyEntryProcessor.this;
}
@Override
public void processBackup(Map.Entry<String, Object> entry) {
entry.setValue(entryToSet.get());
}
}
You can see that two threads can enter the put method and call containsKey at the same time. The second will overwrite the outcome of the first.
EntryProcessor by definition is processing logic that gets executed on the entry itself, eliminating the need to serialize/deserialize the value. Internally, EPs are executed by partition threads, where one partition thread takes care of multiple partitions. When an EP comes to Hazelcast, it is picked up by the owner thread of the partition where the key belongs. Once the processing is completed, the partition thread is ready to accept and execute other tasks (which may well be the same EP for the same key, submitted by another thread). Therefore, although it may seem so, EPs should not be used as an alternative to pessimistic locking.
If you are insistent and really keen on using an EP for this, then you could try putting a null check inside the process method. Something like this:
public Object process(Map.Entry<String, Object> entry) {
if(null == entry.getValue()) {
entry.setValue("value123");
}
return entry.getValue();
}
This way two things will happen:
1. The other thread will wait for the partition thread to be available again
2. Since the value already exists, you won't overwrite anything
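For reference, here is a minimal self-contained sketch of that create-or-update idea, assuming a Hazelcast 3.x-style EntryProcessor/EntryBackupProcessor API as in the question's code; the class name and the 'created'/'updated' return values are illustrative. The new value travels as an ordinary serializable field rather than the ThreadLocal shown above, so it is serialized together with the processor and is available on the member that owns the key:
import java.io.Serializable;
import java.util.Map;
import com.hazelcast.map.EntryBackupProcessor;
import com.hazelcast.map.EntryProcessor;
public class UpsertEntryProcessor implements EntryProcessor<String, Object>, EntryBackupProcessor<String, Object>, Serializable {
    private static final long serialVersionUID = 1L;
    private final Object newValue; // shipped to the member together with the processor
    public UpsertEntryProcessor(Object newValue) {
        this.newValue = newValue;
    }
    @Override
    public Object process(Map.Entry<String, Object> entry) {
        if (entry.getValue() == null) {
            // 'create' path: the key had no value yet
            entry.setValue(newValue);
            return "created";
        }
        // 'update' path: any merge/update logic would go here
        entry.setValue(newValue);
        return "updated";
    }
    @Override
    public EntryBackupProcessor<String, Object> getBackupProcessor() {
        return this;
    }
    @Override
    public void processBackup(Map.Entry<String, Object> entry) {
        entry.setValue(newValue);
    }
}
The call site then becomes a single map.executeOnKey(key, new UpsertEntryProcessor(value)); with no containsKey check outside the processor, since the partition thread executes the processor for a given key one at a time.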
I have a multithreaded application in which n threads write to a ConcurrentHashMap. Another n threads read from that map and copy its value to a copy list.
After that, the original list is removed from the map.
For some reason I always get a ConcurrentModificationException.
I even tried to create my own lock mechanism with a volatile boolean, but it won't work. When using Google Guava with Lists.newLinkedList() I get a ConcurrentModificationException. When using the standard way, new LinkedList(list), I get an ArrayIndexOutOfBoundsException.
Here is a compilable code example:
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
public class VolatileTest {
public static Map<String, List<String>> logMessages = new ConcurrentHashMap<String, List<String>>();
public static AtomicBoolean lock = new AtomicBoolean(false);
public static void main(String[] args) {
new Thread() {
public void run() {
while (true) {
try {
if (!VolatileTest.lock.get()) {
VolatileTest.lock.set(true);
List<String> list = VolatileTest.logMessages.get("test");
if (list != null) {
List<String> copyList = Collections.synchronizedList(list);
for (String string : copyList) {
System.out.println(string);
}
VolatileTest.logMessages.remove("test");
}
VolatileTest.lock.set(false);
}
} catch (ConcurrentModificationException ex) {
ex.printStackTrace();
System.exit(1);
}
}
};
}.start();
new Thread() {
@Override
public void run() {
while (true) {
if (!VolatileTest.lock.get()) {
VolatileTest.lock.set(true);
List<String> list = VolatileTest.logMessages.get("test");
if (list == null) {
list = Collections.synchronizedList(new LinkedList<String>());
}
list.add("TestError");
VolatileTest.logMessages.put("test", list);
VolatileTest.lock.set(false);
}
}
}
}.start();
}
}
You get a ConcurrentModificationException because your locking is broken and the reader thread iterates over the same list that the writer thread is writing to at the same time.
Your code looks like an attempt at lock-free coding. If so, you must use a CAS operation like this:
while (!VolatileTest.lock.compareAndSet(false, true)) { } // or while (VolatileTest.lock.getAndSet(true)) {} - try to get the lock
try {
// code to execute under lock
} finally {
VolatileTest.lock.set(false); // unlock
}
Your
if (!VolatileTest.lock.get()) {
VolatileTest.lock.set(true);
...
}
is not atomic. Alternatively, you can use a synchronized block or any other standard locking mechanism (a ReadWriteLock, for instance).
Also, if you guard both the reads and the writes of the list with a single lock, you don't need a synchronized list; moreover, you don't even need a ConcurrentHashMap.
So:
use one global lock and a plain HashMap/ArrayList, OR
remove your global lock, use ConcurrentHashMap and a plain ArrayList with synchronized on each particular instance of the list, OR
use a queue (some BlockingQueue or ConcurrentLinkedQueue) instead of all of your current machinery (see the sketch after this list), OR
use something like the Disruptor (http://lmax-exchange.github.io/disruptor/) for inter-thread communication, with many options. Also, here is a good example of how to build lock-free queues: http://psy-lob-saw.blogspot.ru/2013/03/single-producerconsumer-lock-free-queue.html
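As a rough illustration of the queue option (class and variable names are illustrative, not from the question), a BlockingQueue replaces both the map and the AtomicBoolean flag; the queue handles all the synchronization and the reader simply blocks until a message arrives:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
public class QueueExample {
    // The queue replaces both logMessages and the AtomicBoolean "lock".
    private static final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
    public static void main(String[] args) {
        // Producer: puts messages on the queue.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        queue.put("TestError");
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }).start();
        // Consumer: blocks until a message is available, then prints it.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        System.out.println(queue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        }).start();
    }
}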
ConcurrentHashMap is fail-safe, meaning you will not encounter a ConcurrentModificationException from the map itself. It's the List<String> within the map that is the problem: one of your threads iterates over it while the other thread is modifying it.
I would suggest that you don't try locking the whole map operation, but instead make access to the list itself thread safe, perhaps by using a Vector or Collections.synchronizedList.
Also note that your entry condition, if (!VolatileTest.lock.get()), is the same for both threads, so both can pass it at the same time (the boolean initially holds false) and both may try to work on the same list simultaneously.
As already mentioned, the locking pattern is not valid. It is better to use synchronized. The code below works for me:
final Object obj = new Object();
and then
synchronized (obj) { ... } instead of if (!VolatileTest.lock.get()) { ... }
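For illustration, here is a minimal sketch of the question's two threads rewritten around such a shared monitor (the class name is illustrative); with one lock guarding both critical sections, neither the AtomicBoolean nor the synchronizedList wrapper is needed, and a plain HashMap would also do instead of the ConcurrentHashMap:
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
public class SynchronizedVersion {
    static final Map<String, List<String>> logMessages = new ConcurrentHashMap<String, List<String>>();
    static final Object obj = new Object();
    public static void main(String[] args) {
        // Reader: the whole check-then-act sequence is atomic under the monitor.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    synchronized (obj) {
                        List<String> list = logMessages.get("test");
                        if (list != null) {
                            for (String s : list) {
                                System.out.println(s);
                            }
                            logMessages.remove("test");
                        }
                    }
                }
            }
        }).start();
        // Writer: guarded by the same monitor, so the list is never mutated mid-iteration.
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    synchronized (obj) {
                        List<String> list = logMessages.get("test");
                        if (list == null) {
                            list = new LinkedList<String>();
                        }
                        list.add("TestError");
                        logMessages.put("test", list);
                    }
                }
            }
        }).start();
    }
}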
I want to reproduce a scenario in which two threads access a shared HashMap. While one thread is copying the contents of the shared map into localMap using the putAll() operation, the second thread changes the shared map, and a ConcurrentModificationException should be thrown.
I have tried, but I am not able to reproduce the exception while the putAll operation is running. Each time, either putAll completes before the other thread makes its modification, or putAll is called after the other thread's modification.
Can anyone please suggest how I can generate this scenario in Java?
Thanks.
Spin up both threads, have them running continuously.
Have one thread constantly doing putAll, the other constantly doing the modification.
import java.util.HashMap;
import java.util.Map;
public class Example {
private final HashMap<String, String> map = new HashMap<String, String>();
public static void main(String[] args) {
new Example();
}
public Example() {
Thread thread1 = new Thread(new Runnable() {
@Override
public void run() {
int counter = 0;
while (true) {
Map<String, String> tmp = new HashMap<String, String>();
tmp.put("example" + counter, "example");
counter++;
map.putAll(tmp);
}
}
});
Thread thread2 = new Thread(new Runnable() {
@Override
public void run() {
while (true) {
map.values().remove("example");
}
}
});
thread1.start();
thread2.start();
}
}
Unfortunately I cannot copy/paste the running code directly from my current workstation, so I retyped it here; there might be a typing error.
As you can see, the first thread continuously adds values while the second thread iterates over the values in the map. When the iterator is constructed it records the map's modification count. However, because thread1 is continuously adding items, that count no longer matches what the iterator sees when it actually executes the remove code. This causes the ConcurrentModificationException.
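The same fail-fast check can also be triggered without any threads; this small sketch (purely illustrative, not the two-thread reproduction asked for) structurally modifies the map while iterating over it:
import java.util.HashMap;
import java.util.Map;
public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<String, String>();
        map.put("a", "1");
        map.put("b", "2");
        for (String key : map.keySet()) {
            // Structural modification while the iterator is live: the next call to
            // Iterator.next() sees the changed modification count and throws
            // ConcurrentModificationException.
            map.put("c" + key, "3");
        }
    }
}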
If you just need a ConcurrentModificationException to be thrown, you could implement your own Map implementation (a HackedMap) that removes items from the HashMap while the HashMap is copying values from your HackedMap.
Here is a little piece of code:
public Map<String,Object> findTruckParts(Map<String,Object> output){
Map<String,Object>findPartsMap = null;
NewFooInstance newFooInstance = new NewFooInstance();
findPartsMap = PartBuilder.buildPartsOutputMap(output, outputMap);
newFooInstance.buildItem(findPartsMap);
return findPartsMap;
}
outputMap is a new HashMap and output is a HashMap with some spare-parts info.
buildItem calls a few other private methods, passing around the findPartsMap.
public class NewFooInstance {
public void buildItem(Map<String, Object> partsMap) {
checkPartsValidity(partsMap, fetchMapOfValidParts());
}
private void checkPartsValidity(Map<String, Object> partsMap, Map<String, Object> validParts) {
// partsMap = update partsMap with missing items fetched from the map of valid parts
}
}
Is the above thread safe? Since all the maps are local to the respective methods, my assumption is that it is.
Edit: I have modified the method a little bit; it takes in a map and returns another one. So my question is: is the map that is being returned thread safe? It is local to the method, so I think it is (no other thread entering will be able to change its value in case this map loses its monitor). However, since this map is being modified in other classes and other methods, does the method-localness of this map carry over across different classes/methods and ensure thread safety?
The answer is "no", because HashMap itself is not threadsafe.
Consider using a threadsafe Map implementation such as ConcurrentHashMap.
The problem is here:
public Map<String,Object> findTruckParts(Map<String,Object> output)
Even though things look thread-safe in the methods and sub-methods with respect to the resulting map, you still have a thread-safety issue with the source map (i.e. 'output'). As you extract data from it to put into the new resulting map, if it is altered by another thread at the same time, you will get a ConcurrentModificationException.
Below is some code to illustrate the issue:
import java.util.HashMap;
import java.util.Map;
public class Test {
public static void main(String[] args) throws Exception {
final Map<String, Object> test = new HashMap<String, Object>();
new Thread(new Runnable() {
public void run() {
System.out.println("Thread 1: started");
findTruckParts(test);
System.out.println("Thread 1: done");
}
public Map<String,Object> findTruckParts(Map<String,Object> output) {
Map<String, Object> result = new HashMap<String, Object>();
for(int i=0; i<100000000; i++) {
for(String key : output.keySet()) {
result.put("x", output.get(key));
}
}
return result;
}
}).start();
new Thread(new Runnable() {
public void run() {
System.out.println("Thread 2: started");
for(int i=0; i<100000; i++) {
test.put("y", "y"+i);
test.remove("y");
}
System.out.println("Thread 2: done");
}
}).start();
}
}
And the output is invariably:
Thread 1: started Thread 2: started Exception in thread "Thread-1"
java.util.ConcurrentModificationException at
java.util.HashMap$HashIterator.nextEntry(HashMap.java:793) at
java.util.HashMap$KeyIterator.next(HashMap.java:828) at
Test$1.findTruckParts(Test.java:19) at Test$1.run(Test.java:12) at
java.lang.Thread.run(Thread.java:680) Thread 2: done
So even though the findTruckParts() method creates its own map to return, if it has to look into the source map while some other thread is modifying its keys/values, there will be a problem. If the other threads are only reading, it shouldn't blow up, but I'm not sure you want to call that thread safety in this case because it's still precarious.
One way to help with thread safety is to change the first line of the main method to:
final ConcurrentHashMap<String, Object> test = new ConcurrentHashMap<String, Object>(new HashMap<String, Object>());
But you can see how the safety requirement is pushed onto the caller, which isn't great. So to help with that, you could also alter the signature of the method:
public Map<String,Object> findTruckParts(ConcurrentHashMap<String,Object> output);
And now there is thread safety.
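For illustration, a caller under the changed signature might look like this (the part names are made up for the example); other threads may mutate output while findTruckParts iterates over it, because ConcurrentHashMap's iterators are weakly consistent and never throw ConcurrentModificationException:
ConcurrentHashMap<String, Object> output = new ConcurrentHashMap<String, Object>();
output.put("wheel", new Object());
output.put("axle", new Object());
Map<String, Object> parts = findTruckParts(output);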
Therefore, as I stated at the beginning, the problem is here:
public Map<String,Object> findTruckParts(Map<String,Object> output)
I hope this helps.
I'm having a bit of trouble concerning concurrency and maps in Java.
Basically I have multiple threads using (reading and modifying) their own maps, however each of these maps is a part of a larger map which is being read and modified by a further thread:
My main method creates all threads, the threads create their respective maps which are then put into the "main" map:
Map<String, MyObject> mainMap = new HashMap<String, Integer>();
FirstThread t1 = new FirstThread();
mainMap.putAll(t1.getMap());
t1.start();
SecondThread t2 = new SecondThread();
mainMap.putAll(t2.getMap());
t2.start();
ThirdThread t3 = new ThirdThread(mainMap);
t3.start();
The problem I'm facing now is that the third (main) thread sees arbitrary values in the map, depending on when one or both of the other threads update "their" items.
I must however guarantee that the third thread can iterate over - and use the values of - the map without having to fear that a part of what is being read is "old":
FirstThread (analogue to SecondThread):
for (MyObject o : map.values()) {
o.setNewValue(getNewValue());
}
ThirdThread:
for (MyObject o : map.values()) {
doSomethingWith(o.getNewValue());
}
Any ideas? I've considered using a globally accessible (static final Object through a static class) lock which will be synchronized in each thread when the map must be modified.
Or are there specific Map implementations that assess this particular problem which I could use?
Thanks in advance!
Edit:
As suggested by @Pyranja, it would be possible to synchronize the getNewValue() method. However, I forgot to mention that I am in fact trying to do something along the lines of transactions, where t1 and t2 modify multiple values before/after t3 works with said values. t3 is implemented in such a way that doSomethingWith() will not actually do anything with the value if it hasn't changed.
To synchronize at a higher level than the individual value objects, you need locks to handle the synchronization between the various threads. One way to do this, without changing your code too much, is a ReadWriteLock. Thread 1 and Thread 2 are writers, Thread 3 is a reader.
You can either do this with two locks, or one. I've sketched out below doing it with one lock, two writer threads, and one reader thread, without worrying about what happens with an exception during a data update (i.e., transaction rollback...).
All that said, this sounds like a classic producer-consumer scenario. You should consider using something like a BlockingQueue for communication between threads, as is outlined in this question.
There are other things you may want to consider changing as well, like using Runnable instead of extending Thread.
private static final class Value {
public void update() {
}
}
private static final class Key {
}
private final class MyReaderThread extends Thread {
private final Map<Key, Value> allValues;
public MyReaderThread(Map<Key, Value> allValues) {
this.allValues = allValues;
}
@Override
public void run() {
while (!isInterrupted()) {
readData();
}
}
private void readData() {
readLock.lock();
try {
for (Value value : allValues.values()) {
// Do something
}
}
finally {
readLock.unlock();
}
}
}
private final class WriterThread extends Thread {
private final Map<Key, Value> data = new HashMap<Key, Value>();
@Override
public void run() {
while (!isInterrupted()) {
writeData();
}
}
private void writeData() {
writeLock.lock();
try {
for (Value value : data.values()) {
value.update();
}
}
finally {
writeLock.unlock();
}
}
}
private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
private final ReadLock readLock;
private final WriteLock writeLock;
public Thing() {
readLock = lock.readLock();
writeLock = lock.writeLock();
}
public void doStuff() {
WriterThread thread1 = new WriterThread();
WriterThread thread2 = new WriterThread();
Map<Key, Value> allValues = new HashMap<Key, Value>();
allValues.putAll(thread1.data);
allValues.putAll(thread2.data);
MyReaderThread thread3 = new MyReaderThread(allValues);
thread1.start();
thread2.start();
thread3.start();
}
ConcurrentHashMap, from java.util.concurrent, is a thread-safe implementation of Map which provides a much higher degree of concurrency than synchronizedMap. Many reads can almost always be performed in parallel, simultaneous reads and writes can usually be done in parallel, and multiple simultaneous writes can often be done in parallel. (The class ConcurrentReaderHashMap offers similar parallelism for multiple read operations, but allows only one active write operation.) ConcurrentHashMap is designed to optimize retrieval operations.
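Applied to the question's setup, that suggestion would look roughly like this (reusing FirstThread, SecondThread, ThirdThread and MyObject from the question); note that it only removes the iteration failure, while the staleness concern is addressed by the following answer:
Map<String, MyObject> mainMap = new ConcurrentHashMap<String, MyObject>();
FirstThread t1 = new FirstThread();
mainMap.putAll(t1.getMap());
t1.start();
SecondThread t2 = new SecondThread();
mainMap.putAll(t2.getMap());
t2.start();
// ThirdThread iterates mainMap.values(); ConcurrentHashMap's weakly consistent
// iterators never throw ConcurrentModificationException.
ThirdThread t3 = new ThirdThread(mainMap);
t3.start();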
Your example code may be misleading. In your first example you create a HashMap<String,Integer> but the second part iterates the map values which in this case are MyObject. The key to synchronization is to understand where and which mutable state is shared.
An Integer is immutable. It can be shared freely (but the reference to an Integer is mutable - it must be safely published and/or synchronized). Your code example, however, suggests that the maps are populated with mutable MyObject instances.
Given that the map entries (key -> MyObject references) are not changed by any thread, and all maps are created and safely published before any thread starts, it would in my opinion be sufficient to synchronize the modification of MyObject. E.g.:
public class MyObject {
private Object value;
synchronized Object getNewValue() {
return value;
}
synchronized void setNewValue(final Object newValue) {
this.value = newValue;
}
}
If my assumptions are not correct, clarify your question / code example and also consider @jacobm's comment and @Alex's answer.