I'm using a DelayQueue. I need it so that elements can only be taken from the queue once their delay has passed. I also want to enforce a capacity, much like a BlockingQueue. I can't seem to find a Collections implementation of this. Does one exist? If not, what's the best way of implementing it? A basic approach would be to do something like this:
public void addSomethingToQueue(Object somethingToAdd){
    int capacity = 4;
    while(queue.size() >= capacity){
        try{
            wait();
        }catch(InterruptedException e){
            e.printStackTrace();
        }
    }
    queue.add(somethingToAdd);
}
This would mean calling notify / notifyAll each time something was removed. It's quite a small class so that's doable. It doesn't sound great though. And I'm not sure if the wait / notify may cause further problems?
Would it be better to sub-class DelayQueue and mess around with its methods? It feels a bit dodgy...
Why not compose a BlockingQueue and a DelayQueue? For example:
class MyDelayBlockingQueue<T extends Delayed> {
    private final DelayQueue<T> delayQ = ...
    private final BlockingQueue<T> blockingQ = ...

    public synchronized void offer(T obj) throws InterruptedException {
        blockingQ.put(obj); // put (unlike offer) blocks while the queue is full
        delayQ.offer(obj);
    }

    public synchronized T poll() {
        T obj = delayQ.poll(); // this will handle the delay
        if (obj != null) {
            blockingQ.poll();
        }
        return obj;
    }

    // ...
}
EDIT
The code above will deadlock. If the queue is full, put will block inside the synchronized offer while holding the queue's intrinsic lock, and all future calls to poll will then block trying to acquire that same lock - a deadlock. Try something like this instead:
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.Semaphore;

public class DelayBlockingQueue<E extends Delayed>
{
    private final DelayQueue<E> delayQ = new DelayQueue<E>();
    private final Semaphore available;

    public DelayBlockingQueue(int capacity)
    {
        available = new Semaphore(capacity, true); // fair, so waiting offers proceed in order
    }

    public void offer(E e) throws InterruptedException
    {
        available.acquire(); // blocks while the queue is at capacity
        delayQ.offer(e);
    }

    public E poll()
    {
        E e = delayQ.poll();
        if (e != null)
        {
            available.release(); // free a slot only when an element actually left
        }
        return e;
    }
}
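For reference, DelayQueue only accepts elements that implement Delayed, which is easy to get wrong. Here is a minimal sketch of such an element; the DelayedTask name and its fields are illustrative, not part of the original answer:

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

class DelayedTask implements Delayed {
    private final long expiresAtNanos;
    private final String payload;

    DelayedTask(String payload, long delayMillis) {
        this.payload = payload;
        this.expiresAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(delayMillis);
    }

    @Override
    public long getDelay(TimeUnit unit) {
        // remaining delay; DelayQueue hands out the element once this reaches zero
        return unit.convert(expiresAtNanos - System.nanoTime(), TimeUnit.NANOSECONDS);
    }

    @Override
    public int compareTo(Delayed other) {
        // order elements by remaining delay so the queue head expires first
        return Long.compare(getDelay(TimeUnit.NANOSECONDS), other.getDelay(TimeUnit.NANOSECONDS));
    }
}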
You may use an LRU cache:
http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
Example implementation from Apache Commons:
http://commons.apache.org/collections/api/org/apache/commons/collections/LRUMap.html
So you don't have to write it yourself ;-)
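If you'd rather avoid the Commons dependency, roughly the same behaviour can be sketched with a plain LinkedHashMap in access order; the class name and eviction policy below are illustrative:

import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // true = access order, i.e. LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry once over capacity
    }
}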
Related
Within ConcurrentHashMap.compute() I increment and decrement a long value located in shared memory. Reads and increments/decrements are performed only within the compute method, on the same key.
So access to the long value is synchronised by locking on the ConcurrentHashMap segment, and the increment/decrement is therefore atomic. My question is: does this synchronisation on the map guarantee visibility for the long value? Can I rely on the map's internal synchronisation, or should I make my long value volatile?
I know that when you explicitly synchronise on a lock, visibility is guaranteed. But I do not have a perfect understanding of ConcurrentHashMap's internals. Perhaps I can trust it today, but tomorrow its internals may change: exclusive access would be preserved, but visibility would disappear... which is an argument for making my long value volatile.
Below is a simplified example. According to the test there is no race condition today, but can I trust this code long-term without making the long value volatile?
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class LongHolder {
    private final ConcurrentMap<Object, Object> syncMap = new ConcurrentHashMap<>();
    private long value = 0;

    public void increment() {
        syncMap.compute("1", (k, v) -> {
            if (++value == 2000000) {
                System.out.println("Expected final state. If this gets printed, this simple test did not detect a visibility problem");
            }
            return null;
        });
    }
}
class IncrementRunnable implements Runnable {
    private final LongHolder longHolder;

    IncrementRunnable(LongHolder longHolder) {
        this.longHolder = longHolder;
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000000; i++) {
            longHolder.increment();
        }
    }
}
public class ConcurrentMapExample {
    public static void main(String[] args) throws InterruptedException {
        LongHolder longHolder = new LongHolder();
        Thread t1 = new Thread(new IncrementRunnable(longHolder));
        Thread t2 = new Thread(new IncrementRunnable(longHolder));
        t1.start();
        t2.start();
    }
}
UPD: adding another example, which is closer to the code I am working on. I would like to remove map entries when no one is using the object any more. Please note that reading and writing of the long value happen only inside the remapping function of ConcurrentHashMap.compute:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ObjectProvider {
    private final ConcurrentMap<Long, CountingObject> map = new ConcurrentHashMap<>();

    public CountingObject takeObjectForId(Long id) {
        return map.compute(id, (k, v) -> {
            CountingObject returnLock = v == null ? new CountingObject() : v;
            returnLock.incrementUsages();
            return returnLock;
        });
    }

    public void releaseObjectForId(Long id, CountingObject o) {
        map.compute(id, (k, v) -> o.decrementUsages() == 0 ? null : o);
    }
}
class CountingObject {
    private int usages;

    public void incrementUsages() {
        ++usages;
    }

    public int decrementUsages() {
        return --usages;
    }
}
UPD2: I admit that I failed to provide the simplest code examples previously; here is the real code:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class LockerUtility<T> {
    private final ConcurrentMap<T, CountingLock> locks = new ConcurrentHashMap<>();

    public void executeLocked(T entityId, Runnable synchronizedCode) {
        CountingLock lock = synchronizedTakeEntityLock(entityId);
        try {
            lock.lock();
            try {
                synchronizedCode.run();
            } finally {
                lock.unlock();
            }
        } finally {
            synchronizedReturnEntityLock(entityId, lock);
        }
    }

    private CountingLock synchronizedTakeEntityLock(T id) {
        return locks.compute(id, (k, l) -> {
            CountingLock returnLock = l == null ? new CountingLock() : l;
            returnLock.takeForUsage();
            return returnLock;
        });
    }

    private void synchronizedReturnEntityLock(T lockId, CountingLock lock) {
        locks.compute(lockId, (i, v) -> lock.returnBack() == 0 ? null : lock);
    }

    private static class CountingLock extends ReentrantLock {
        private volatile long usages = 0;

        public void takeForUsage() {
            usages++;
        }

        public long returnBack() {
            return --usages;
        }
    }
}
No, this approach will not work, not even with volatile. You would have to use AtomicLong, LongAdder, or the like to make this properly thread-safe. ConcurrentHashMap doesn't even use segmented locks these days.
Also, your test does not prove anything. Concurrency issues by definition don't happen every time. Not even every millionth time.
You must use a proper concurrent Long accumulator like AtomicLong or LongAdder.
Do not get fooled by the line in the documentation of compute:
The entire method invocation is performed atomically
This does not hold for side effects, like the value++ you have there; the guarantee only covers the internal data of ConcurrentHashMap.
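For illustration, here is a sketch of the question's LongHolder rewritten around LongAdder; the current() accessor is my addition:

import java.util.concurrent.atomic.LongAdder;

class LongHolder {
    private final LongAdder value = new LongAdder();

    public void increment() {
        value.increment(); // thread-safe on its own, no map or lock needed
    }

    public long current() {
        return value.sum(); // a snapshot; may lag concurrent increments
    }
}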
The first thing you are missing is that the locking implementation in CHM has changed a lot (as the other answer has noted). But even if it had not, your understanding of
I know that when you explicitly synchronize on a lock, visibility is guaranteed
is flawed. The JLS says this is guaranteed only when both the reader and the writer use the same lock, which in your case obviously does not happen; as such, no guarantees are in place. In general, the happens-before guarantees you would need here only work in pairs: both the reader and the writer must use the same lock.
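As a contrast, here is a trivial sketch where the pairing requirement is satisfied, because the reader and the writer synchronize on the same monitor:

class Counter {
    private long value; // guarded by "this"

    synchronized void increment() {
        value++;
    }

    synchronized long get() {
        return value; // same lock as increment(), so the latest write is visible
    }
}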
I've written a Java class and someone has reviewed the code and insisted that there could be a race condition in method calculate. Here's a simplified version of the class code:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class MyClass {
    private List<Integer> list;
    private final ReadWriteLock lock;

    public MyClass() {
        list = new ArrayList<>();
        lock = new ReentrantReadWriteLock();
    }

    public void add(Integer integer) {
        lock.writeLock().lock();
        try {
            list.add(integer);
        } finally {
            lock.writeLock().unlock();
        }
    }

    public void deleteAll() {
        lock.writeLock().lock();
        try {
            list.clear();
        } finally {
            lock.writeLock().unlock();
        }
    }

    public Integer calculate() {
        List<Integer> newList = new ArrayList<>();
        Integer result = 0;
        lock.readLock().lock();
        try {
            list.forEach(integer -> {
                // calculation logic that reads values from 'list' and adds only a subset of elements from 'list' in 'newList'
            });
        } finally {
            lock.readLock().unlock();
        }
        setList(newList);
        return result;
    }

    private void setList(List<Integer> newList) {
        lock.writeLock().lock();
        try {
            list = newList;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
Now my question is:
Can a race condition really happen in this method, and if so how can I solve it (either using locks or using any other method to make the class thread safe)?
Any advice would be appreciated.
There is a time gap between the creation of newList and the call to setList(newList). We may assume this gap is arbitrarily long, and anything can happen while it lasts: for example, another thread adds an element that must be retained, but that element is lost when setList(newList) replaces the list containing it.
In fact, calculate is a modifying operation and should do all of its work under the write lock.
To clarify the above ... the statement
List<Integer> newList = new ArrayList<>();
... instantiates a data structure (the list) that is subsequently used within the block of code intended to be protected by lock.readLock().lock(), but is not itself contained within it. Therefore it is not protected.
To remedy the problem, the declaration of newList should not include the initialization. Nothing that affects the presumed value of this variable should exist outside of the lock-protected block.
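Putting the two answers together, here is a sketch of calculate doing all of its work, including the swap, under the write lock; the filtering logic remains elided, as in the question:

public Integer calculate() {
    lock.writeLock().lock(); // write lock, because we both read 'list' and replace it
    try {
        List<Integer> newList = new ArrayList<>();
        Integer result = 0;
        for (Integer integer : list) {
            // calculation logic that reads values from 'list' and adds
            // only a subset of elements from 'list' to 'newList'
        }
        list = newList; // the swap is now atomic with the read phase
        return result;
    } finally {
        lock.writeLock().unlock();
    }
}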
I've written multithreaded code for the producer-consumer problem, in which a synchronized block inside the run method of both the consumer and producer threads takes a lock on the shared list (I assume).
So the question is: will there be locking on the list? Each thread has its own synchronized block, but they share the same list instance.
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.Semaphore;

public class Main {
    static boolean finishFlag = false;
    final int queueSize = 20;
    List<Integer> queue = new LinkedList<>();
    Semaphore semaphoreForList = new Semaphore(queueSize);

    public Main(int producerCount, int consumerCount) {
        while (producerCount != 0) {
            new MyProducer(queue, semaphoreForList, queueSize).start(); // starts a producer
            producerCount--;
        }
        while (consumerCount != 0) {
            new MyConsumer(queue, semaphoreForList, queueSize).start(); // starts a consumer
            consumerCount--;
        }
    }

    public static void main(String args[]) {
        /*
         * input from the command line: 1st arg is the number of producers, 2nd the number of consumers
         */
        try {
            Main newMain = new Main(Integer.parseInt(args[0]), Integer.parseInt(args[1]));
            try {
                Thread.sleep(30000);
            } catch (InterruptedException e) {
            }
            System.out.println("exit");
            finishFlag = true;
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage());
        }
    }
}
import java.util.List;
import java.util.concurrent.Semaphore;

class MyProducer extends Thread {
    private List<Integer> queue;
    Semaphore semaphoreForList;
    int queueSize;

    public MyProducer(List<Integer> queue, Semaphore semaphoreForList, int queueSize) {
        this.queue = queue;
        this.semaphoreForList = semaphoreForList;
        this.queueSize = queueSize;
    }

    public void run() {
        while (!Main.finishFlag) {
            try {
                Thread.sleep((int) (Math.random() * 1000));
            } catch (InterruptedException e) {
            }
            try {
                if (semaphoreForList.availablePermits() == 0) { // check whether any space is left in the queue
                    System.out.println("no more spaces left");
                } else {
                    synchronized (queue) {
                        semaphoreForList.acquire(); // acquire a permit before putting an int on the queue
                        int rand = (int) (Math.random() * 10 + 1);
                        queue.add(rand);
                        System.out.println(rand + " was put on queue and now length is " + (queueSize - semaphoreForList.availablePermits()));
                    }
                }
            } catch (InterruptedException m) {
                System.out.println(m);
            }
        }
    }
}
import java.util.List;
import java.util.concurrent.Semaphore;

public class MyConsumer extends Thread {
    private List<Integer> queue; // queue shared by consumer and producer
    Semaphore semaphoreForList;
    int queueSize;

    public MyConsumer(List<Integer> queue, Semaphore semaphoreForList, int queueSize) {
        this.queue = queue;
        this.semaphoreForList = semaphoreForList;
        this.queueSize = queueSize;
    }

    public void run() {
        while (!Main.finishFlag) { // runs until finishFlag is set to true by main
            try {
                Thread.sleep((int) (Math.random() * 1000)); // sleeps for a random amount of time
            } catch (InterruptedException e) {
            }
            if ((20 - semaphoreForList.availablePermits()) == 0) { // check whether any int can be pulled from the queue
                System.out.println("no int on queue");
            } else {
                synchronized (queue) {
                    int input = queue.remove(0); // release a position in the queue by pulling the int out and computing its factorial
                    semaphoreForList.release();
                    int copyOfInput = input;
                    int fact = 1;
                    while (copyOfInput != 0) {
                        fact = fact * copyOfInput;
                        copyOfInput--;
                    }
                    System.out.println(input + " was pulled out from queue and the computed factorial is " + fact +
                            " the remaining length of queue is " + (queueSize - semaphoreForList.availablePermits()));
                }
            }
        }
    }
}
I would rather recommend using the java.lang.Object methods wait() and notify() to build a consumer-producer algorithm. With this approach the queue won't be hit by endlessly repeated and unnecessary synchronized checks, which I think makes for a more performant and "event driven" solution.
This link might be helpful -
https://www.geeksforgeeks.org/producer-consumer-solution-using-threads-java/
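Here is a minimal sketch of that idea, using a simple bounded buffer rather than the poster's exact classes:

import java.util.LinkedList;
import java.util.List;

class BoundedBuffer {
    private final List<Integer> queue = new LinkedList<>();
    private final int capacity;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(int value) throws InterruptedException {
        while (queue.size() == capacity) {
            wait(); // releases the monitor until a consumer makes room
        }
        queue.add(value);
        notifyAll(); // wake any consumers waiting for an element
    }

    public synchronized int take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait(); // releases the monitor until a producer adds an element
        }
        int value = queue.remove(0);
        notifyAll(); // wake any producers waiting for space
        return value;
    }
}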
Yes, the mutex/monitor is associated with the Java object instance, which in this case is the shared list. That means all threads lock the same mutex (the one associated with queue) and are synchronized through it.
So the good part: your program is actually thread-safe.
However, the additional semaphore doesn't make much sense, in several ways:
The checks (e.g. for availablePermits) happen outside of the lock, and are therefore only a best guess about the state of your queue. It could be different shortly afterwards.
Trying to acquire a semaphore inside a lock that can only be released inside the same lock looks like a guaranteed recipe for deadlock.
As AnDus has mentioned, this could probably be solved better using the wait and notify methods, which act as a condition variable. Most likely you even need two conditions: one to unblock producers and one to unblock consumers.
In general, if this is not a coding exercise, use a class that already implements your desired functionality. In this case, java.util.concurrent.BlockingQueue seems like what you want.
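For instance, here is a sketch of the same producer/consumer pair on top of ArrayBlockingQueue; all locking and blocking is handled by the queue, and the class and variable names are mine:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueExample {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(20);

        Runnable producer = () -> {
            try {
                while (true) {
                    queue.put((int) (Math.random() * 10 + 1)); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        Runnable consumer = () -> {
            try {
                while (true) {
                    System.out.println("took " + queue.take()); // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        new Thread(producer).start();
        new Thread(consumer).start();
    }
}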
We need to send messages with highest priority first so we use a PriorityQueue for our purpose.
PriorityQueue<MessageData> queue = new PriorityQueue<MessageData>();
However, we also want our queue to behave like a sorted set. Therefore, we adapt the PriorityQueue to ignore insertions that repeat existing members.
import java.util.Comparator;
import java.util.PriorityQueue;

public class PrioritySet<E> extends PriorityQueue<E> {
    private static final long serialVersionUID = 34658778L;

    public PrioritySet() {
        super();
    }

    public PrioritySet(int initialCapacity, Comparator<? super E> comparator) {
        super(initialCapacity, comparator);
    }

    @Override
    public boolean offer(E e) {
        boolean isAdded = false;
        if (!super.contains(e)) {
            isAdded = super.offer(e);
        }
        return isAdded;
    }
}
Now, our app-specific implementation of the data structure:
import java.util.Comparator;

public class MessagePrioritySet extends PrioritySet<MessageData> {
    private static final long serialVersionUID = 34658779L;
    private int minPriorityNumber;

    public MessagePrioritySet() {
        super();
    }

    public MessagePrioritySet(int initialCapacity, Comparator<MessageData> comparator) {
        super(initialCapacity, comparator);
    }

    public synchronized int getMinPriorityNumber() {
        return minPriorityNumber;
    }

    public synchronized void setMinPriorityNumber(int minPriorityNumber) {
        this.minPriorityNumber = minPriorityNumber;
    }

    @Override
    public synchronized boolean offer(MessageData notification) {
        boolean isAdded = super.offer(notification);
        if (notification.getPriority() < minPriorityNumber)
            minPriorityNumber = notification.getPriority();
        return isAdded;
    }

    public synchronized void reportSent(MessageData notification) {
        MessageData nextMessageData = peek();
        if (nextMessageData == null)
            minPriorityNumber = 0;
        else if (nextMessageData.getPriority() > notification.getPriority())
            minPriorityNumber = nextMessageData.getPriority();
    }
}
Here, we want the data structure to be aware of the minimum priority value of the messages, so we declare an instance variable for it. The priority of each incoming message is checked, and if it is lower than the stored value, the stored value is updated. Users of the class are required to report any sent messages. If no other member of the data structure has a priority as low as the one being removed, the next element's priority becomes the stored priority.
Two threads share the implemented queue. One thread fetches data from the database and inserts it into the queue. The other reads the queue and sends the highest-priority message, i.e. the one with the lowest priority number. Because the queue resets the minimum priority value to 0 when it empties, and the database thread reads only rows with a priority value lower than or equal to the stored minimum whenever that minimum is not zero, we can be fairly sure that, while the current messages in the queue are being sent, only new messages more important than those already queued will be added.
We think the operations in the while loops in the threads should be atomic, and we would be grateful to anyone who could tell us how to make them atomic.
private void startMptSender() {
    sleepInterval = 1000;
    final MessagePrioritySet messagePrioritySet = new MessagePrioritySet();

    Runnable mptReader = new Runnable() {
        @Override
        public void run() {
            while (true) {
                List<MessageData> messageDataList;
                if (messagePrioritySet.getMinPriorityNumber() == 0)
                    messageDataList = messageDao.readSMSMpt();
                else
                    messageDataList = messageDao.readSMSMpt(messagePrioritySet.getMinPriorityNumber());
                for (MessageData messageData : messageDataList) {
                    messagePrioritySet.offer(messageData);
                }
                try {
                    Thread.sleep(sleepInterval);
                } catch (InterruptedException ie) {
                }
            }
        }
    };
    executor.execute(mptReader);

    Runnable mptPusher = new Runnable() {
        @Override
        public void run() {
            while (status) {
                if (messagePrioritySet.size() > 0) {
                    while (messagePrioritySet.size() != 0) {
                        MessageData noti = messagePrioritySet.remove();
                        mptSender.sendSms(noti);
                        messageDao.markNotificationAsRead(noti.getSyskey());
                        messagePrioritySet.reportSent(noti);
                        try {
                            Thread.sleep(sleepInterval);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                } else {
                    try {
                        Thread.sleep(sleepInterval);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
    };
    executor.execute(mptPusher);
}
I assume that by atomic you mean: you want each thread to do all of its work for one iteration without being interrupted by the other thread.
In other words: you have (probably multiple) operations, and while thread A is doing its operations, thread B shouldn't be doing anything, because you want to make sure that B only sees the "complete set" of updates made by A.
Sure, if the operation were just a write to a single int, for example, you could use AtomicInteger. But when you are talking about several operations... you need something else.
A "brute force" solution would be to add some sort of locking. Meaning: your threads share some LOCK object, and whenever one thread enters a "critical section"... it needs to acquire that LOCK first (and of course release it directly afterwards). But this needs very careful design, as you want to make sure that thread A isn't "starving" B by holding the lock for too long.
Looking at your code again, more closely... maybe you could try making your minPriority an AtomicInteger; the question is how that would relate to the other thread that is working with the "size" of your queue.
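To make the locking idea concrete, here is a sketch of the pusher's inner steps guarded by one shared lock; the names mirror the question's code, and the slow sendSms call is deliberately kept outside the critical section so the reader thread isn't starved:

MessageData noti = null;
synchronized (messagePrioritySet) { // the reader must synchronize on the same object around offer()
    if (messagePrioritySet.size() > 0) {
        noti = messagePrioritySet.remove();
    }
}
if (noti != null) {
    mptSender.sendSms(noti); // slow I/O outside the lock
    messageDao.markNotificationAsRead(noti.getSyskey());
    synchronized (messagePrioritySet) {
        messagePrioritySet.reportSent(noti);
    }
}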
I am looking at some code that is causing an issue (Deadlock) in Java 6 and above, but not in Java 1.5.
BMP Bean:
private MyClass m_c;

public String ejbCreate(String id) throws CreateException, MyException
{
    try
    {
        m_c = Singleton.getInstance().getObj(id);
    }
    catch (MyException e)
    {
        synchronized (Singleton.getInstance())
        {
            // check again
            if (!Singleton.getInstance().hasObj(id)) {
                m_c = new MyClass(id);
                Singleton.getInstance().addObj(id, m_c);
            }
            else {
                m_c = Singleton.getInstance().getObj(id);
            }
        }
    }
    return id; // ejbCreate returns the primary key in BMP
}
Singleton:
private Map objCache = new HashMap();

private static Singleton INSTANCE = new Singleton();

public static Singleton getInstance() {
    return INSTANCE;
}

public void addObj(String id, MyClass o)
{
    if (this.objCache.containsKey(id)) {
        this.objCache.remove(id);
    }
    this.objCache.put(id, o);
}

public MyClass getObj(String id) throws MyException
{
    MyClass o = (MyClass) this.objCache.get(id);
    if (o == null) {
        throw new MyException("Obj " + id + " not found in cache");
    }
    return o;
}

public boolean hasObj(String id)
{
    return this.objCache.containsKey(id);
}
The empirical evidence so far shows that putting synchronization around the whole try/catch resolves the deadlock when using Java 6.
Clearly there can be one or more threads calling
Singleton.getInstance().getObj(id)
without obtaining the lock whilst another thread holds the lock and is executing the code in the synchronized block, but even after considering the memory synchronization detailed in JSR-133, it doesn't look like there should be any issue in this scenario.
I am aware that I haven't explained what the issue is beyond saying it is a deadlock, and that it is not ideal to paint only part of the picture, but to paint the whole picture would take a very big canvas.
I have looked at the Java 6 release notes and the only area that sounds relevant is around uncontended synchronization, but I do not know if that is significant in this case.
Thank you for any help.
I suspect you are not getting a deadlock (two threads holding two locks obtained in a different order), but rather going into an infinite loop. This can happen with HashMap if you access it in a manner which is not thread safe: the linked list used to handle collisions can end up pointing back on itself, and a reader then runs forever. This has always been an issue, though some subtle difference in Java 6 could expose the problem where a different version might not.
I suggest you fix this class so it uses a thread-safe collection, and do not retry on Exception, because there is no guarantee the exception will be thrown.
There is a lot you could do to improve this class, but what you really need is ConcurrentMap.computeIfAbsent, added in Java 8.
Note: there is no reason to
check whether a key exists before attempting to remove it.
remove a key just before attempting to put it.
throw an Exception instead of returning null.
return null when you can pass in a factory (as per computeIfAbsent).
use a factory when the type is known in advance.
I suggest you
use a ConcurrentMap for thread-safe concurrent access.
use an enum for a Singleton.
Both of these were added in Java 5.0.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public enum MyClassCache {
    INSTANCE;

    private final Map<String, MyClass> cache = new ConcurrentHashMap<>();

    public boolean hasId(String id) {
        return cache.containsKey(id);
    }

    public MyClass get(String id) throws IllegalStateException {
        MyClass ret = cache.get(id);
        if (ret == null) throw new IllegalStateException(id);
        return ret;
    }

    public MyClass getOrCreate(String id) throws IllegalStateException {
        MyClass ret = cache.get(id);
        if (ret == null) {
            synchronized (cache) {
                ret = cache.get(id);
                if (ret == null) {
                    cache.put(id, ret = new MyClass(id));
                }
            }
        }
        return ret;
    }
}
In Java 8 you can use computeIfAbsent:

public MyClass getOrCreate(String id) {
    return cache.computeIfAbsent(id, MyClass::new);
}
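Usage is then a single call (the id value here is illustrative):

MyClass m = MyClassCache.INSTANCE.getOrCreate("42"); // created on first use, reused afterwards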
Am I right that the core of this question is the difference between:
public void ejbCreate1(String id) throws Exception {
    try {
        m_c = Singleton.getInstance().getObj(id);
    } catch (Exception e) {
        synchronized (Singleton.getInstance()) {
            // check again
            if (!Singleton.getInstance().hasObj(id)) {
                m_c = new MyClass(id);
                Singleton.getInstance().addObj(id, m_c);
            } else {
                m_c = Singleton.getInstance().getObj(id);
            }
        }
    }
}
and
public void ejbCreate2(String id) throws Exception {
    synchronized (Singleton.getInstance()) {
        try {
            m_c = Singleton.getInstance().getObj(id);
        } catch (Exception e) {
            // check again
            if (!Singleton.getInstance().hasObj(id)) {
                m_c = new MyClass(id);
                Singleton.getInstance().addObj(id, m_c);
            } else {
                m_c = Singleton.getInstance().getObj(id);
            }
        }
    }
}
in Java 6, which can cause the first to hang and the second to work fine.
Clearly the primary difference is that getObj might be called by two different threads at the same time, and may even be called while another thread is creating the new object.
From Is it safe to get values from a java.util.HashMap from multiple threads (no modification)? it is likely that you are not in that situation. The conclusion is that one thread is reading from the Map (perhaps o = (MyClass) this.objCache.get(id);) while another is writing to the map by calling addObj. This is clearly a recipe for the read to crash and burn.
See Is a HashMap thread-safe for different keys? for details about the potential sinkholes.