Java - Compare and Swap and synchronized Block

public class SimulatedCAS {
    private int value;

    public synchronized int get() { return value; }

    public synchronized int compareAndSwap(int expectedValue, int newValue) {
        int oldValue = value;
        if (oldValue == expectedValue)
            value = newValue;
        return oldValue;
    }
}
public class CasCounter {
    private SimulatedCAS value;

    public int getValue() {
        return value.get();
    }

    public int increment() {
        int v = value.get();
        while (v != value.compareAndSwap(v, v + 1)) {
            v = value.get();
        }
        return v + 1;
    }
}
I referred to the book "Java Concurrency in Practice", where a counter must be incremented by multiple threads. I tried using the compare-and-swap method, but in the end it makes use of the synchronized keyword, which might again result in blocking and waiting of threads. Using a synchronized block gives me the same performance. Can anybody state what the difference is between using compare-and-swap and a synchronized block? Or is there any other way to implement compare-and-swap without using a synchronized block?

I need to increment a counter from multiple threads.
The AtomicInteger class is good for that.
You can create it with final AtomicInteger i = new AtomicInteger(initial_value); then you can call i.set(new_value) to set its value, i.get() to get its value, and, most importantly for your application, i.incrementAndGet() to atomically increment the value.
If N different threads all call i.incrementAndGet() at "the same time," then
Each thread is guaranteed to see a different return value, and
The final value after they're all done is guaranteed to increase by exactly N.
The AtomicInteger class has quite a few other methods as well. Most of them make useful guarantees about what happens when multiple threads access the variable.
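For example, a minimal sketch of such a counter (the class name is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger i = new AtomicInteger(0); // initial value 0

    public int getValue() {
        return i.get(); // volatile read: always sees the latest value
    }

    public int increment() {
        return i.incrementAndGet(); // atomic, lock-free increment
    }
}

If ten threads each call increment() a thousand times, the final value is guaranteed to be exactly 10000, with no synchronized block anywhere.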

Real compare-and-swap does optimistic locking: it attempts the change and retries if some other thread has modified the variable in the meantime. So if the variable is modified rarely, CAS performs better than synchronized.
But if the variable is modified often, synchronized performs better, because it doesn't allow anything else to touch the variable while it is being changed, and so there is no need for the expensive retry.
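To answer the last part of the question: the JDK exposes the hardware CAS instruction through the atomic classes, so the retry loop can be written without any synchronized block. A minimal sketch using AtomicInteger.compareAndSet (not the book's code; the class name is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        int v;
        do {
            v = value.get();                       // read the current value
            // retry only if another thread changed it in between
        } while (!value.compareAndSet(v, v + 1));
        return v + 1;
    }
}

Under low contention the compareAndSet almost always succeeds on the first attempt; under heavy contention the loop spins, which is exactly the trade-off described above.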

Related

Is the return statement atomic?

I pasted some code about Java concurrency:
public class ValueLatch<T> {
    @GuardedBy("this") private T value = null;
    private final CountDownLatch done = new CountDownLatch(1);

    public boolean isSet() {
        return (done.getCount() == 0);
    }

    public synchronized void setValue(T newValue) {
        if (!isSet()) {
            value = newValue;
            done.countDown();
        }
    }

    public T getValue() throws InterruptedException {
        done.await();
        synchronized (this) {
            return value;
        }
    }
}
Why does return value; need to be synchronized???
Is the return statement not atomic??
The return does not need to be synchronized. Since CountDownLatch.countDown() is not called until after the value is set for the last time, CountDownLatch.await() ensures that value is stable before it is read and returned.
The developer who wrote this was probably not quite sure of what he was doing (concurrency is difficult and dangerous) or, more likely, his use of the GuardedBy annotation on value caused his build system to emit a warning on the return, and some other developer synchronized it unnecessarily just to make the warning go away.
I say 'some other developer', because this class otherwise seems to be specifically designed to allow getValue() to proceed without locking once the value has been set.
The return statement needs to perform a read operation over value.
The read operation is atomic for most primitives, but you're dealing with a generic, meaning you won't know value's type.
For that reason, the return should be synchronized.
return value does not need to be synchronized:
Reads of references are atomic according to the JLS: "Writes to and reads of references are always atomic, ..."
Each thread reading value is guaranteed to see its latest value, because according to the Java Memory Model value = newValue happens-before done.countDown(), which happens-before done.await(), which happens-before return value. By transitivity, value = newValue thus happens-before return value.
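A small sketch of that chain in action, assuming the ValueLatch class above is on the classpath (the thread setup is illustrative):

public class ValueLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        ValueLatch<String> latch = new ValueLatch<String>();

        // value = "result" happens-before countDown() inside setValue()
        new Thread(() -> latch.setValue("result")).start();

        // await() inside getValue() happens-before the read of value,
        // so "result" is guaranteed to be visible here
        System.out.println(latch.getValue());
    }
}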

Volatile and ArrayBlockingQueue and perhaps other concurrent objects

I understand (or at least I think I do ;)) the principle behind the volatile keyword.
When looking into the ConcurrentHashMap source, you can see that all nodes and values are declared volatile, which makes sense because the values can be written and read by more than one thread:
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    volatile V val;
    volatile Node<K,V> next;
    ...
}
However, looking into the ArrayBlockingQueue source, it's a plain array that is being updated and read from multiple threads:
private void enqueue(E x) {
    // assert lock.getHoldCount() == 1;
    // assert items[putIndex] == null;
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    notEmpty.signal();
}
How is it guaranteed that the value inserted into items[putIndex] will be visible from another thread, given that the elements inside the array are not volatile (I know that declaring the array itself volatile doesn't have any effect on the elements themselves)?
Couldn't another thread hold a cached copy of the array?
Thanks
Notice that enqueue is private. Look for all calls to it (offer(E), offer(E, long, TimeUnit), put(E)). Notice that every one of those looks like:
public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    try {
        // Do stuff.
        enqueue(e);
    } finally {
        lock.unlock();
    }
}
So you can conclude that every call to enqueue is protected by a lock.lock() ... lock.unlock() pair, and you don't need volatile because lock.lock()/lock.unlock() also act as a memory barrier.
According to my understanding, volatile is not needed because all BlockingQueue implementations already have a locking mechanism, unlike ConcurrentHashMap.
If you look at the public methods of the queue, you will find a ReentrantLock that guards against concurrent access.
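As an illustration of that point, a minimal sketch of the same idiom: a plain, non-volatile field is safe as long as every read and write goes through the same lock (the class and names are hypothetical, not JDK code):

import java.util.concurrent.locks.ReentrantLock;

public class LockGuardedBox<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private E item; // plain field: no volatile needed

    public void set(E x) {
        lock.lock();
        try {
            item = x;      // the write happens-before the unlock below...
        } finally {
            lock.unlock();
        }
    }

    public E get() {
        lock.lock();       // ...and this lock acquisition makes it visible
        try {
            return item;
        } finally {
            lock.unlock();
        }
    }
}

The acquire/release pair gives the same happens-before guarantee that volatile would, which is why ArrayBlockingQueue can get away with a plain array.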

Atomic operation on read/write variable in java

I have a Java class as below:

public class Example implements Runnable {
    private int num;
    ...
    // Getter
    public int getNum() {
        return this.num;
    }
    // Setter
    public void addToNum(int amount) {
        if (amount > 0) {
            this.num += amount;
        }
    }
    ...
}
This class can be instantiated by multiple threads. Each instance has its own num; that is, I do not want the num variable to be shared between all of them.
On each instance, multiple threads can operate concurrently to read and write the num variable. So what is the best option to protect read/write operations on the num variable so that they are atomic?
I know that in C# it can be done using lock(object), as in the link below, but in Java I have no idea (I am new to it):
Atomic operations on C#
You can synchronize the methods, but you might find using AtomicInteger a faster option.
private final AtomicInteger num = new AtomicInteger();
...
// Getter
public int getNum() {
    return this.num.get();
}
// Setter
public void addToNum(int amount) {
    if (amount > 0) {
        this.num.getAndAdd(amount);
    }
}
Both of these methods are lock-less and avoid exposing a lock which could be used in an unintended way.
In Java 8, getAndAdd uses a single machine code instruction for the addition via the Unsafe class. From AtomicInteger:
private volatile int value;

public final int get() {
    return value;
}

public final int getAndAdd(int delta) {
    return unsafe.getAndAddInt(this, valueOffset, delta);
}
Alternatively, you can simply make the method synchronized:

public synchronized void addToNum(int amount) {
    if (amount > 0) {
        this.num += amount;
    }
}

Here you'll find documentation for it:
http://www.programcreek.com/2014/02/how-to-make-a-method-thread-safe-in-java/
You can use synchronized; read about it. You can synchronize methods.
In Java, be careful with volatile variables: they can be used safely only when a single thread is writing and the other threads are only reading.
"Where one thread (T1) modifies the counter, and another thread (T2) reads the counter (but never modifies it), declaring the counter variable volatile is enough to guarantee visibility for T2 of writes to the counter variable.
If, however, both T1 and T2 were incrementing the counter variable, then declaring the counter variable volatile would not have been enough. More on that later."
Link: http://tutorials.jenkov.com/java-concurrency/volatile.html
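A minimal sketch of the single-writer case the quote describes, where volatile alone is enough (the class name is illustrative):

public class StopFlag implements Runnable {
    // One writer thread, any number of reader threads: volatile is sufficient.
    private volatile boolean running = true;

    public void stop() {        // called from the single writer thread
        running = false;        // this write is immediately visible to all readers
    }

    @Override
    public void run() {
        while (running) {       // every iteration reads the latest value
            // do work
        }
    }
}

As soon as two threads both read-modify-write the field (as with a counter), volatile is no longer enough and you need synchronized or an atomic class, as discussed above.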

LRU Cache Implementation in Java

I have seen the following code, and I think there is a useless while loop in the implementation of the addElement method. It should never happen that there are more elements than size + 1, since there is already a write lock.
So why does the addElement method keep removing elements until this condition is true:
while (concurrentLinkedQueue.size() >= maxSize)
Any pointers around this would be great.
Here is the implementation:
public class LRUCache<K,V> {
    private ConcurrentLinkedQueue<K> concurrentLinkedQueue = new ConcurrentLinkedQueue<K>();
    private ConcurrentHashMap<K,V> concurrentHashMap = new ConcurrentHashMap<K, V>();
    private ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private Lock readLock = readWriteLock.readLock();
    private Lock writeLock = readWriteLock.writeLock();
    int maxSize = 0;

    public LRUCache(final int MAX_SIZE) {
        this.maxSize = MAX_SIZE;
    }

    public V getElement(K key) {
        readLock.lock();
        try {
            V v = null;
            if (concurrentHashMap.containsKey(key)) {
                concurrentLinkedQueue.remove(key);
                v = concurrentHashMap.get(key);
                concurrentLinkedQueue.add(key);
            }
            return v;
        } finally {
            readLock.unlock();
        }
    }

    public V removeElement(K key) {
        writeLock.lock();
        try {
            V v = null;
            if (concurrentHashMap.containsKey(key)) {
                v = concurrentHashMap.remove(key);
                concurrentLinkedQueue.remove(key);
            }
            return v;
        } finally {
            writeLock.unlock();
        }
    }

    public V addElement(K key, V value) {
        writeLock.lock();
        try {
            if (concurrentHashMap.containsKey(key)) {
                concurrentLinkedQueue.remove(key);
            }
            while (concurrentLinkedQueue.size() >= maxSize) {
                K queueKey = concurrentLinkedQueue.poll();
                concurrentHashMap.remove(queueKey);
            }
            concurrentLinkedQueue.add(key);
            concurrentHashMap.put(key, value);
            return value;
        } finally {
            writeLock.unlock();
        }
    }
}
The point here is, I guess, that you need to check whether the LRU cache is at its maximum size. The check here is NOT (map.size() > maxSize), it is >=. Now, you could probably replace that with if (map.size() == maxSize) { ... }, which, under ideal conditions, should do exactly the same thing.
But under not-so-ideal conditions, if for whatever reason somebody put an EXTRA entry in the map without checking, then with the if/== version the map would NEVER go down in size again, because the condition would never be true.
So, why while and >= instead of if and ==? Same amount of code, plus more robustness against "unexpected" conditions.
An easy implementation of an LRU cache does the following; a while loop is only needed when the maximum size is adjusted, not for the primitive operations:
During put, remove the superfluous element.
During get, move the element to the top.
The primitive operations are then one-shot. You can use either ordinary synchronized or a read-write lock around this data structure.
When using read-write locks, the fairness of who comes first is an issue of the read-write lock used rather than of the LRU cache itself.
Here is a sample implementation.
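The linked sample is not reproduced here, but as an illustration of the scheme just described (eviction during put, reordering during get, one plain lock around the structure), here is a minimal sketch built on LinkedHashMap's access order:

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> {
    private final Map<K, V> map;

    public SimpleLruCache(final int maxSize) {
        // accessOrder = true: every get() moves the entry to the top
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize; // put() evicts the superfluous element
            }
        };
    }

    public synchronized V get(K key) { return map.get(key); }

    public synchronized V put(K key, V value) { return map.put(key, value); }
}

Both primitive operations are one-shot: LinkedHashMap reorders on get and evicts at most one entry on put, so no while loop is needed.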
It's not wrong, just a safety net in case of accidental modification. You could check for equality with concurrentLinkedQueue.size() == maxSize in a conditional statement instead.

Does the actual lock matter when deciding to use volatile?

Say I have the following code:
private Integer number;
private final Object numberLock = new Object();

public int get() {
    synchronized (number or numberLock) {
        return Integer.valueOf(number);
    }
}
My question is: in the cases below, do the following versions of the add method need number to be declared volatile?
public void add(int num) {
    synchronized (number) {
        number = number + num;
    }
}

public void add(int num) {
    synchronized (numberLock) {
        number = number + num;
    }
}
I understand that these are both atomic operations, but my question is: is the value of number guaranteed to be pushed out to global memory and visible to all threads without using volatile?
is the value of number guaranteed to be pushed out to global memory and visible to all threads without using volatile?
Yes. Synchronization offers visibility as well; in fact, synchronization offers both visibility and atomicity, while volatile offers only visibility.
You haven't synchronized get, so your code is not thread-safe:

public int get() {
    return Integer.valueOf(number);
}
Apart from that, synchronization will guarantee visibility as Eugene already noted.
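Putting the two answers together, a sketch of the class with both get and add synchronized on the same dedicated lock object (the names follow the question; this is an illustration, not the only correct form):

public class SynchronizedNumber {
    private int number;                             // plain field: the shared lock provides visibility
    private final Object numberLock = new Object(); // one lock object for reads and writes

    public int get() {
        synchronized (numberLock) {
            return number;
        }
    }

    public void add(int num) {
        synchronized (numberLock) {
            number = number + num;
        }
    }
}

Because every access goes through the same numberLock monitor, no volatile is needed on number.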
