I have a very simple class:
public class IdProvider {
private Map<String,AtomicLong> idMap;
public IdProvider(){
idMap = new HashMap<>();
}
public long getAvailableId(String conversation){
AtomicLong id = idMap.get(conversation);
if(id == null){
id = new AtomicLong(0);
idMap.put(conversation,id);
}
return id.getAndIncrement();
}
}
Different methods may asynchronously pass the same conversation identifier to getAvailableId() and expect to be returned a unique id.
Is this thread safe? Am I guaranteed that no two callers will receive the same id, or do I need to opt for something else?
There are multiple ways to make this thread safe, but below is the simplest one, I think. First, you need to safely publish the initial Map. Then you need to make each access of that map thread safe.
public class IdProvider {
private final Map<String,AtomicLong> idMap;
public IdProvider(){
idMap = new HashMap<>();
}
public synchronized long getAvailableId(String conversation){
AtomicLong id = idMap.get(conversation);
if(id == null){
id = new AtomicLong(0);
idMap.put(conversation,id);
}
return id.getAndIncrement();
}
}
The final keyword is one way to provide "safe publication". (That's an actual term in Java, look it up.)
And without being tricky, just synchronizing the whole method is the easiest way to provide both synchronization and atomicity. You shouldn't try to do more unless you can profile this code and determine that it is in fact a performance bottleneck. Keep It Simple.
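If profiling ever does show the synchronized method to be a bottleneck, one common alternative is to let a concurrent map do the locking for you. The following is only a sketch of that idea, not the code above; ConcurrentHashMap.computeIfAbsent installs the counter atomically, so every caller ends up sharing the same AtomicLong:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class IdProvider {
    // ConcurrentHashMap takes care of safe publication and per-bin locking internally.
    private final Map<String, AtomicLong> idMap = new ConcurrentHashMap<>();

    public long getAvailableId(String conversation) {
        // The counter is created at most once per conversation, so ids never repeat.
        return idMap.computeIfAbsent(conversation, k -> new AtomicLong(0)).getAndIncrement();
    }
}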
This isn't thread safe.
public long getAvailableId(String conversation){
AtomicLong id = idMap.get(conversation);
// Thread could be paused here, causing bad interleavings
// If now a similar call to "getAvailableId" is done you will have two times the same id
if(id == null){
id = new AtomicLong(0);
idMap.put(conversation,id);
}
return id.getAndIncrement();
}
Make the method synchronized to avoid possible bad interleavings and data races.
If you need multiple providers working on the same set of ids at the same time, you can share a single lock object between them:
public class IdProvider {
private static Map<String,Long> idMap;
static
{
idMap = new HashMap<>();
}
private final Object lock;
public IdProvider(Object l){
lock=l;
}
public long getAvailableId(String conversation){
// do other work
synchronized(lock)
{
Long id = idMap.get(conversation);
if(id == null){
id = 0L;
idMap.put(conversation,id);
}
// Map.put returns the previous value, i.e. the id reserved for this call
return idMap.put(conversation,id+1);
}
}
}
Object lock=new Object();
... in a thread:
IdProvider provider=new IdProvider(lock); // providing from a thread
... in another thread:
IdProvider provider2=new IdProvider(lock); // providing from another
Within ConcurrentHashMap.compute() I increment and decrement a long value located in shared memory. Reading and incrementing/decrementing is only performed within the compute method, on the same key.
So access to the long value is synchronised by locking on a ConcurrentHashMap segment, and thus increment/decrement is atomic. My question is: does this synchronisation on the map guarantee visibility for the long value? Can I rely on the map's internal synchronisation, or should I make my long value volatile?
I know that when you explicitly synchronise on a lock, visibility is guaranteed. But I do not have a perfect understanding of ConcurrentHashMap internals. Maybe I can trust it today, but tomorrow ConcurrentHashMap's internals may change: exclusive access will be preserved, but visibility will disappear... which is an argument for making my long value volatile.
Below I will post a simplified example. According to the test there is no race condition today, but can I trust this code long-term without making the long value volatile?
class LongHolder {
private final ConcurrentMap<Object, Object> syncMap = new ConcurrentHashMap<>();
private long value = 0;
public void increment() {
syncMap.compute("1", (k, v) -> {
if (++value == 2000000) {
System.out.println("Expected final state. If this gets printed, this simple test did not detect visibility problem");
}
return null;
});
}
}
class IncrementRunnable implements Runnable {
private final LongHolder longHolder;
IncrementRunnable(LongHolder longHolder) {
this.longHolder = longHolder;
}
@Override
public void run() {
for (int i = 0; i < 1000000; i++) {
longHolder.increment();
}
}
}
public class ConcurrentMapExample {
public static void main(String[] args) throws InterruptedException {
LongHolder longholder = new LongHolder();
Thread t1 = new Thread(new IncrementRunnable(longholder));
Thread t2 = new Thread(new IncrementRunnable(longholder));
t1.start();
t2.start();
}
}
UPD: adding another example which is closer to the code I am working on. I would like to remove map entries when no one else is using the object. Please note that reading and writing of the long value happens only inside the remapping function of ConcurrentHashMap.compute:
public class ObjectProvider {
private final ConcurrentMap<Long, CountingObject> map = new ConcurrentHashMap<>();
public CountingObject takeObjectForId(Long id) {
return map.compute(id, (k, v) -> {
CountingObject returnLock;
returnLock = v == null ? new CountingObject() : v;
returnLock.incrementUsages();
return returnLock;
});
}
public void releaseObjectForId(Long id, CountingObject o) {
map.compute(id, (k, v) -> o.decrementUsages() == 0 ? null : o);
}
}
class CountingObject {
private int usages;
public void incrementUsages() {
++usages;
}
public int decrementUsages() {
return --usages;
}
}
UPD2: I admit that I failed to provide the simplest code examples previously; here is the real code:
public class LockerUtility<T> {
private final ConcurrentMap<T, CountingLock> locks = new ConcurrentHashMap<>();
public void executeLocked(T entityId, Runnable synchronizedCode) {
CountingLock lock = synchronizedTakeEntityLock(entityId);
try {
lock.lock();
try {
synchronizedCode.run();
} finally {
lock.unlock();
}
} finally {
synchronizedReturnEntityLock(entityId, lock);
}
}
private CountingLock synchronizedTakeEntityLock(T id) {
return locks.compute(id, (k, l) -> {
CountingLock returnLock;
returnLock = l == null ? new CountingLock() : l;
returnLock.takeForUsage();
return returnLock;
});
}
private void synchronizedReturnEntityLock(T lockId, CountingLock lock) {
locks.compute(lockId, (i, v) -> lock.returnBack() == 0 ? null : lock);
}
private static class CountingLock extends ReentrantLock {
private volatile long usages = 0;
public void takeForUsage() {
usages++;
}
public long returnBack() {
return --usages;
}
}
}
No, this approach will not work, not even with volatile. You would have to use AtomicLong, LongAdder, or the like, to make this properly thread-safe. ConcurrentHashMap doesn't even use segmented locks these days.
Also, your test does not prove anything. Concurrency issues by definition don't happen every time. Not even every millionth time.
You must use a proper concurrent Long accumulator like AtomicLong or LongAdder.
Do not get fooled by the line in the documentation of compute:
The entire method invocation is performed atomically
This does not extend to side effects like your value++; the atomicity guarantee only covers ConcurrentHashMap's own internal data.
The first thing you miss is that locking in CHM has changed a lot between implementations (as the other answer noted). But even if it had not, your understanding of
I know that when you explicitly synchronize on a lock, visibility is guaranteed
is flawed. The JLS guarantees this only when both the reader and the writer use the same lock, which in your case obviously does not happen; as such, no guarantees are in place. In general, happens-before guarantees (which you would require here) only work in pairs, between a reader and a writer.
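For the counter itself, here is a minimal sketch of the accumulator approach recommended above, assuming all you need is the running total rather than the map:
import java.util.concurrent.atomic.LongAdder;

class LongHolder {
    // LongAdder is thread-safe on its own, so correctness no longer depends
    // on whatever locking ConcurrentHashMap happens to do internally.
    private final LongAdder value = new LongAdder();

    public void increment() {
        value.increment();
    }

    public long current() {
        return value.sum(); // a moment-in-time snapshot of the total
    }
}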
I have the following set of classes (along with a failing unit test):
Sprocket:
public class Sprocket {
private int serialNumber;
public Sprocket(int serialNumber) {
this.serialNumber = serialNumber;
}
@Override
public String toString() {
return "sprocket number " + serialNumber;
}
}
SlowSprocketFactory:
public class SlowSprocketFactory {
private final AtomicInteger maxSerialNumber = new AtomicInteger();
public Sprocket createSprocket() {
// clang, click, whistle, pop and other expensive onomatopoeic operations
int serialNumber = maxSerialNumber.incrementAndGet();
return new Sprocket(serialNumber);
}
public int getMaxSerialNumber() {
return maxSerialNumber.get();
}
}
SprocketCache:
public class SprocketCache {
private SlowSprocketFactory sprocketFactory;
private Sprocket sprocket;
public SprocketCache(SlowSprocketFactory sprocketFactory) {
this.sprocketFactory = sprocketFactory;
}
public Sprocket get(Object key) {
if (sprocket == null) {
sprocket = sprocketFactory.createSprocket();
}
return sprocket;
}
}
TestSprocketCache unit test:
public class TestSprocketCache {
private SlowSprocketFactory sprocketFactory = new SlowSprocketFactory();
@Test
public void testCacheReturnsASprocket() {
SprocketCache cache = new SprocketCache(sprocketFactory);
Sprocket sprocket = cache.get("key");
assertNotNull(sprocket);
}
@Test
public void testCacheReturnsSameObjectForSameKey() {
SprocketCache cache = new SprocketCache(sprocketFactory);
Sprocket sprocket1 = cache.get("key");
Sprocket sprocket2 = cache.get("key");
assertEquals("cache should return the same object for the same key", sprocket1, sprocket2);
assertEquals("factory's create method should be called once only", 1, sprocketFactory.getMaxSerialNumber());
}
}
The TestSprocketCache unit test always shows a green bar, even if I change the code as follows:
Sprocket sprocket1 = cache.get("key");
Sprocket sprocket2 = cache.get("pizza");
I'm guessing that I have to use HashMap.containsKey(key) inside the SprocketCache.get() method, but I can't seem to figure out the logic.
The problem you're having here is that your get(Object) implementation only allows one instance to be created:
public Sprocket get(Object key) {
// Creates object if it doesn't exist yet
if (sprocket == null) {
sprocket = sprocketFactory.createSprocket();
}
return sprocket;
}
This is the typical lazy-initialization singleton pattern. Once an instance has been assigned to sprocket, any further call to get skips the instantiation completely. Note that you don't use the key parameter at all, so it has no effect on the result.
Using a Map would indeed be one way to achieve your objective:
public class SprocketCache {
private SlowSprocketFactory sprocketFactory;
private Map<Object, Sprocket> instances = new HashMap<Object, Sprocket>();
public SprocketCache(SlowSprocketFactory sprocketFactory) {
this.sprocketFactory = sprocketFactory;
}
public Sprocket get(Object key) {
if (!instances.containsKey(key)) {
instances.put(key, sprocketFactory.createSprocket());
}
return instances.get(key);
}
}
Well, your current cache implementation does not use the key at all, so no wonder it always returns the same value it cached on the first call.
If you want to store different values for keys, and assuming you want it to be thread safe, you might end up doing something like this:
public class SprocketCache {
private SlowSprocketFactory sprocketFactory;
private ConcurrentHashMap<Object, Sprocket> cache = new ConcurrentHashMap<>();
public SprocketCache(SlowSprocketFactory sprocketFactory) {
this.sprocketFactory = sprocketFactory;
}
public Sprocket get(Object key) {
if (!cache.containsKey(key)) {
// we only want to acquire the lock for the cache-seeding operation rather than for every get
synchronized (key){
// double-checked locking: make sure no other thread populated the cache while we were waiting for the monitor
if (!cache.containsKey(key)){
cache.putIfAbsent(key, sprocketFactory.createSprocket());
}
}
}
return cache.get(key);
}
}
A couple of important side notes:
you'll need ConcurrentHashMap to get the happens-before guarantees, so other threads immediately see that the cache has been filled;
creation of a new cache value has to be synchronized so that concurrent threads don't each generate their own value, overwriting one another in a race condition;
synchronization is quite expensive, so we only want to engage it when needed; because of that same race condition, several threads may reach the synchronized block one after another. That is why the second check is required INSIDE the synchronized block, to make sure another thread hasn't already filled in that value.
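For what it's worth, the whole check-then-populate dance can also be delegated to the map itself. The following is only a sketch of that variant, not the code above: ConcurrentHashMap.computeIfAbsent performs the get-or-create step atomically, so the explicit synchronized block and the double check disappear.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SprocketCache {
    private final SlowSprocketFactory sprocketFactory;
    private final Map<Object, Sprocket> cache = new ConcurrentHashMap<>();

    public SprocketCache(SlowSprocketFactory sprocketFactory) {
        this.sprocketFactory = sprocketFactory;
    }

    public Sprocket get(Object key) {
        // computeIfAbsent invokes the factory at most once per key, even under contention.
        return cache.computeIfAbsent(key, k -> sprocketFactory.createSprocket());
    }
}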
Inspired by a comment on a given answer, I tried to create a thread-safe implementation of the multiton pattern which relies on unique keys and locks on them (I got the idea from JB Nizet's answer to this question).
Question
Is the implementation I provided viable?
I'm not interested in whether Multiton (or Singleton) are good patterns in general; that would just result in a discussion. I just want a clean and working implementation.
Contras:
You have to know how many instances you want to create at compile time.
Pros
No lock on the whole class or the whole map; concurrent calls to getInstance are possible.
Instances are retrieved via a key object, not just an unbounded int or String, so you can be sure to get a non-null instance after the method call.
Thread-safe (at least that's my impression).
public class Multiton
{
private static final Map<Enum<?>, Multiton> instances = new HashMap<Enum<?>, Multiton>();
private Multiton() {System.out.println("Created instance."); }
/* Can be called concurrently, since it only synchronizes on id */
public static <KEY extends Enum<?> & MultitionKey> Multiton getInstance(KEY id)
{
synchronized (id)
{
if (instances.get(id) == null)
instances.put(id, new Multiton());
}
System.out.println("Retrieved instance.");
return instances.get(id);
}
public interface MultitionKey { /* */ }
public static void main(String[] args) throws InterruptedException
{
//getInstance(Keys.KEY_1);
getInstance(OtherKeys.KEY_A);
Runnable r = new Runnable() {
@Override
public void run() { getInstance(Keys.KEY_1); }
};
int size = 100;
List<Thread> threads = new ArrayList<Thread>();
for (int i = 0; i < size; i++)
threads.add(new Thread(r));
for (Thread t : threads)
t.start();
for (Thread t : threads)
t.join();
}
enum Keys implements MultitionKey
{
KEY_1;
/* define more keys */
}
enum OtherKeys implements MultitionKey
{
KEY_A;
/* define more keys */
}
}
I tried to prevent resizing of the map and misuse of the enums I synchronize on.
It's more of a proof of concept, before I can get it over with! :)
public class Multiton
{
private static final Map<MultitionKey, Multiton> instances = new HashMap<MultitionKey, Multiton>((int) (Key.values().length/0.75f) + 1);
private static final Map<Key, MultitionKey> keyMap;
static
{
Map<Key, MultitionKey> map = new HashMap<Key, MultitionKey>();
map.put(Key.KEY_1, Keys.KEY_1);
map.put(Key.KEY_2, OtherKeys.KEY_A);
keyMap = Collections.unmodifiableMap(map);
}
public enum Key {
KEY_1, KEY_2;
}
private Multiton() {System.out.println("Created instance."); }
/* Can be called concurrently, since it only synchronizes on KEY */
public static <KEY extends Enum<?> & MultitionKey> Multiton getInstance(Key id)
{
@SuppressWarnings ("unchecked")
KEY key = (KEY) keyMap.get(id);
synchronized (keyMap.get(id))
{
if (instances.get(key) == null)
instances.put(key, new Multiton());
}
System.out.println("Retrieved instance.");
return instances.get(key);
}
private interface MultitionKey { /* */ }
private enum Keys implements MultitionKey
{
KEY_1;
/* define more keys */
}
private enum OtherKeys implements MultitionKey
{
KEY_A;
/* define more keys */
}
}
It is absolutely not thread-safe. Here is a simple example of the many, many things that could go wrong.
Thread A is trying to put at key id1. Thread B is resizing the buckets table due to a put at id2. Because these have different synchronization monitors, they're off to the races in parallel.
Thread A                                      Thread B
--------                                      --------
b = key.hash % map.buckets.size
                                              copy map.buckets reference to local var
                                              set map.buckets = new Bucket[newSize]
                                              insert keys from old buckets into new buckets
insert into map.buckets[b]
In this example, let's say Thread A saw the map.buckets = new Bucket[newSize] modification. It's not guaranteed to (since there's no happens-before edge), but it may. In that case, it'll be inserting the (key, value) pair into the wrong bucket. Nobody will ever find it.
As a slight variant, if Thread A copied the map.buckets reference to a local var and did all its work on that, then it'd be inserting into the right bucket, but the wrong buckets table; it wouldn't be inserting into the new one that Thread B is about to install as the table for everyone to see. If the next operation on key 1 happens to see the new table (again, not guaranteed to but it may), then it won't see Thread A's actions because they were done on a long-forgotten buckets array.
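If you want to see this kind of corruption first-hand, a crude stress test along the following lines will usually (though not always; concurrency bugs are probabilistic) lose entries when run against a plain HashMap. This is a made-up demo, not code from the question:
import java.util.HashMap;
import java.util.Map;

public class HashMapRaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<Integer, Integer> map = new HashMap<>();
        // Two writers on disjoint key ranges; with a correct map the final size is 200000.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) map.put(i, i); });
        Thread t2 = new Thread(() -> { for (int i = 100_000; i < 200_000; i++) map.put(i, i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("Expected 200000, got " + map.size());
    }
}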
I'd say not viable.
Synchronizing on the id parameter is fraught with danger - what if other code uses the same enum constants for its own synchronization? And of course HashMap is not concurrent, as the comments have pointed out.
To demonstrate - try this:
Runnable r = new Runnable() {
@Override
public void run() {
// Added to demonstrate the problem.
synchronized(Keys.KEY_1) {
getInstance(Keys.KEY_1);
}
}
};
Here's an implementation that uses atomics instead of synchronization and therefore should be more efficient. It is much more complicated than yours, but handling all of the edge cases in a Multiton IS complicated.
public class Multiton {
// The static instances.
private static final AtomicReferenceArray<Multiton> instances = new AtomicReferenceArray<>(1000);
// Ready for use - set to false while initialising.
private final AtomicBoolean ready = new AtomicBoolean();
// Everyone who is waiting for me to initialise.
private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();
// For logging (and a bit of linguistic fun).
private final int forInstance;
// We need a simple constructor.
private Multiton(int forInstance) {
this.forInstance = forInstance;
log(forInstance, "New");
}
// The expensive initialiser.
public void init() throws InterruptedException {
log(forInstance, "Init");
// ... presumably heavy stuff.
Thread.sleep(1000);
// We are now ready.
ready();
}
private void ready() {
log(forInstance, "Ready");
// I am now ready.
ready.getAndSet(true);
// Unpark everyone waiting for me.
for (Thread t : waiters) {
LockSupport.unpark(t);
}
}
// Get the instance for that one.
public static Multiton getInstance(int which) throws InterruptedException {
// One there already?
Multiton it = instances.get(which);
if (it == null) {
// Lazy make.
Multiton newIt = new Multiton(which);
// Successful put?
if (instances.compareAndSet(which, null, newIt)) {
// Yes!
it = newIt;
// Initialise it.
it.init();
} else {
// One appeared as if by magic (another thread got there first).
it = instances.get(which);
// Wait for it to finish initialisation.
// Put me in its queue of waiters.
it.waiters.add(Thread.currentThread());
log(which, "Parking");
while (!it.ready.get()) {
// Park me.
LockSupport.park();
}
// I'm not waiting any more.
it.waiters.remove(Thread.currentThread());
log(which, "Unparked");
}
}
return it;
}
// Some simple logging.
static void log(int which, String s) {
log(new Date(), "Thread " + Thread.currentThread().getId() + " for Multiton " + which + " " + s);
}
static final DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
// synchronized so I don't need to make the DateFormat ThreadLocal.
static synchronized void log(Date d, String s) {
System.out.println(dateFormat.format(d) + " " + s);
}
// The tester class.
static class MultitonTester implements Runnable {
int which;
private MultitonTester(int which) {
this.which = which;
}
@Override
public void run() {
try {
Multiton.log(which, "Waiting");
Multiton m = Multiton.getInstance(which);
Multiton.log(which, "Got");
} catch (InterruptedException ex) {
Multiton.log(which, "Interrupted");
}
}
}
public static void main(String[] args) throws InterruptedException {
int testers = 50;
int multitons = 50;
// Do a number of them. Makes n testers for each Multiton.
for (int i = 1; i < testers * multitons; i++) {
// Which one to create.
int which = i / testers;
//System.out.println("Requesting Multiton " + i);
new Thread(new MultitonTester(which+1)).start();
}
}
}
I'm not a Java programmer, but: HashMap is not safe for concurrent access. Might I recommend ConcurrentHashMap?
private static final ConcurrentHashMap<Object, Multiton> instances = new ConcurrentHashMap<Object, Multiton>();
public static <TYPE extends Object, KEY extends Enum<Keys> & MultitionKey<TYPE>> Multiton getInstance(KEY id)
{
Multiton result;
synchronized (id)
{
result = instances.get(id);
if(result == null)
{
result = new Multiton();
instances.put(id, result);
}
}
System.out.println("Retrieved instance.");
return result;
}
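If you would rather not synchronize on id at all, a putIfAbsent-based variant is sketched below (signature simplified, and assumed to live inside the Multiton class so the private constructor is reachable). It may construct a throwaway Multiton when two threads race, which is acceptable as long as the constructor has no side effects:
public static Multiton getInstance(Object id) {
    Multiton result = instances.get(id);
    if (result == null) {
        Multiton created = new Multiton();
        // putIfAbsent returns the existing value if another thread won the race, or null if ours was installed.
        Multiton previous = instances.putIfAbsent(id, created);
        result = (previous != null) ? previous : created;
    }
    return result;
}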
I need a way to allow only one thread to modify data related to a service ticket. More than one thread may be attempting to modify the ticket data at the same time.
Below is a simplified version of my approach. Is there a better way to do this? Maybe with java.util.concurrent packages?
public class SomeClass1
{
static final HashMap<Integer, Object> ticketLockMap = new HashMap<Integer, Object>();
public void process(int ticketNumber)
{
synchronized (getTicketLock(ticketNumber))
{
// only one thread may modify ticket data here
// ... ticket modifications here...
}
}
protected static Object getTicketLock(int ticketNumber)
{
Object ticketLock;
// allow only one thread to use map
synchronized (ticketLockMap)
{
ticketLock = ticketLockMap.get(ticketNumber);
if (ticketLock == null)
{
// first time ticket is locked
ticketLock = new Object();
ticketLockMap.put(ticketNumber, ticketLock);
}
}
return ticketLock;
}
}
Additionally, if I don't want the HashMap filling up with unused locks, I would need a more complex approach like the following:
public class SomeClass2
{
static final HashMap<Integer, Lock> ticketLockMap = new HashMap<Integer, Lock>();
public void process(int ticketNumber)
{
synchronized (getTicketLock(ticketNumber))
{
// only one thread may modify ticket data here
// ... ticket modifications here...
// after all modifications, release lock
releaseTicketLock(ticketNumber);
}
}
protected static Lock getTicketLock(int ticketNumber)
{
Lock ticketLock;
// allow only one thread to use map
synchronized (ticketLockMap)
{
ticketLock = ticketLockMap.get(ticketNumber);
if (ticketLock == null)
{
// first time ticket is locked
ticketLock = new Lock();
ticketLockMap.put(ticketNumber, ticketLock);
}
}
return ticketLock;
}
protected static void releaseTicketLock(int ticketNumber)
{
// allow only one thread to use map
synchronized (ticketLockMap)
{
Lock ticketLock = ticketLockMap.get(ticketNumber);
if (ticketLock != null && --ticketLock.inUseCount == 0)
{
// lock no longer in use
ticketLockMap.remove(ticketNumber);
}
}
}
}
class Lock
{
// constructor/getters/setters omitted for brevity
int inUseCount = 1;
}
You might be looking for the Lock interface. The second case could be solved by a ReentrantLock, which counts the number of times it has been locked.
Locks have a .lock() method which waits for the lock to be acquired, and an .unlock() method which should be called like this:
Lock l = ...;
l.lock();
try {
// access the resource protected by this lock
} finally {
l.unlock();
}
This could then be combined with a HashMap<Integer, Lock>. You could omit the synchronized calls and cut down on lines of code.
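A minimal sketch of that combination, assuming a ConcurrentHashMap so the map accesses themselves need no extra synchronization (the class and field names are made up, and cleanup of unused locks is left out for brevity):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

public class TicketProcessor {
    private static final ConcurrentMap<Integer, ReentrantLock> ticketLocks = new ConcurrentHashMap<>();

    public void process(int ticketNumber) {
        // One lock per ticket number, created atomically on first use.
        ReentrantLock lock = ticketLocks.computeIfAbsent(ticketNumber, n -> new ReentrantLock());
        lock.lock();
        try {
            // only one thread may modify this ticket's data here
        } finally {
            lock.unlock();
        }
    }
}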
I posted a somewhat similar question before and got clarification for my doubts, but I still need something more. The HashMap is initialized with the enum constants as keys and a thread pool instance as each value. I am confused about how to initialize the HashMap so it can be used by another process. To make it clear:
My program MyThreadpoolExcecutorPgm.java initializes a HashMap.
My program AdditionHandler.java requests a thread pool from the HashMap by passing a ThreadpoolName (enum). I am getting the "No thread available from HashMap" message. Please do help me.
Below given is my code:
public class MyThreadpoolExcecutorPgm {
enum ThreadpoolName {
DR, BR, SV, MISCELLENEOUS;
}
private static String threadName;
private static HashMap<ThreadpoolName, ThreadPoolExecutor>
threadpoolExecutorHash;
public MyThreadpoolExcecutorPgm(String p_threadName) {
threadName = p_threadName;
}
public static void fillthreadpoolExecutorHash() {
int poolsize = 3;
int maxpoolsize = 3;
long keepAliveTime = 10;
ThreadPoolExecutor tp = null;
threadpoolExecutorHash = new HashMap<ThreadpoolName, ThreadPoolExecutor>();
for (ThreadpoolName poolName : ThreadpoolName.) // failing to implement
{
tp = new ThreadPoolExecutor(poolsize, maxpoolsize, keepAliveTime,
TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(5));
threadpoolExecutorHash.put(poolName, tp);
}
}
public static ThreadPoolExecutor getThreadpoolExcecutor(
ThreadpoolName poolName) {
ThreadPoolExecutor thread = null;
if (threadpoolExecutorHash != null && poolName != null) {
thread = threadpoolExecutorHash.get(poolName);
} else {
System.out.println("No thread available from HashMap");
}
return thread;
}
}
AdditionHandler.java
public class AdditionHandler{
public void handle() {
AddProcess setObj = new AddProcess(5, 20);
ThreadPoolExecutor tpe = null;
ThreadpoolName poolName =ThreadpoolName.DR; //i am using my enum
tpe = MyThreadpoolExcecutorPgm.getThreadpoolExcecutor(poolName);
tpe.execute(setObj);
}
public static void main(String[] args) {
AdditionHandler obj = new AdditionHandler();
obj.handle();
}
}
I suspect you're just looking for the static values() method which is added to every enum:
for (ThreadpoolName poolName : ThreadpoolName.values())
Alternatively, you can use EnumSet.allOf():
for (ThreadpoolName poolName : EnumSet.allOf(ThreadpoolName.class))
(As Bozho says, EnumMap is a good alternative here. You still need to loop through the enum values.)
First, you'd better use EnumMap. Then make sure you have filled the map before you invoke the method.
You can iterate through enum values by one of (in descending order of preference)
for(Enum value : Enum.values())
for(Enum value : EnumSet.allOf(Enum.class))
for(Enum value : Enum.class.getEnumConstants())
But you should also be using an EnumMap.
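Putting those two suggestions together, here is a sketch of the initialization using EnumMap and values(); the enum and method names mirror the question, and the pool parameters are simply the question's values:
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolRegistry {
    enum ThreadpoolName { DR, BR, SV, MISCELLENEOUS }

    private static final Map<ThreadpoolName, ThreadPoolExecutor> threadpoolExecutorHash =
            new EnumMap<>(ThreadpoolName.class);

    static {
        // values() returns every constant of the enum, so each pool name gets its own executor.
        for (ThreadpoolName poolName : ThreadpoolName.values()) {
            threadpoolExecutorHash.put(poolName,
                    new ThreadPoolExecutor(3, 3, 10, TimeUnit.SECONDS, new ArrayBlockingQueue<>(5)));
        }
    }

    public static ThreadPoolExecutor getThreadpoolExcecutor(ThreadpoolName poolName) {
        return threadpoolExecutorHash.get(poolName);
    }
}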