If a class has a field of type int (not AtomicInteger, and without the volatile keyword) and all access to this field happens under read/write locks - will this field be thread-safe in this case? Or could some thread, at some moment, see a stale value of this field (something from a cache) instead of the real one?
public static class Example {
    private int isSafe;
    private final ReadWriteLock lock;

    public Example(int i) {
        isSafe = i;
        lock = new ReentrantReadWriteLock();
    }

    public int getIsSafe() {
        final Lock lock = this.lock.readLock();
        lock.lock();
        try {
            return isSafe;
        } finally {
            lock.unlock();
        }
    }

    public void someMethod1() {
        final Lock lock = this.lock.writeLock();
        lock.lock();
        try {
            isSafe++;
        } finally {
            lock.unlock();
        }
    }
}
Yes, this approach is thread-safe. As long as no thread holds or has requested the write lock, multiple threads can acquire the read lock. That means multiple threads can read the data at the same moment, provided no thread is writing or updating it.
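As an illustrative check of the shared-read claim (the helper names and structure below are my own, not from the question):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedReadDemo {
    // Returns true if a second thread can acquire the read lock
    // while the calling thread already holds it.
    public static boolean secondReaderCanEnter() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        AtomicBoolean entered = new AtomicBoolean(false);
        rw.readLock().lock();
        try {
            Thread reader = new Thread(() -> {
                if (rw.readLock().tryLock()) { // succeeds: read locks are shared
                    entered.set(true);
                    rw.readLock().unlock();
                }
            });
            reader.start();
            reader.join();
        } finally {
            rw.readLock().unlock();
        }
        return entered.get();
    }

    // Returns true if a writer's tryLock fails while a read lock is held elsewhere.
    public static boolean writerBlockedWhileReading() throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        AtomicBoolean blocked = new AtomicBoolean(false);
        rw.readLock().lock();
        try {
            Thread writer = new Thread(() -> blocked.set(!rw.writeLock().tryLock()));
            writer.start();
            writer.join();
        } finally {
            rw.readLock().unlock();
        }
        return blocked.get();
    }
}
```

Readers proceed together; a writer is shut out until every read lock is released.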
An answer from @pveentjer in the comments under the question:
It is important to understand that caches on modern CPUs are always
coherent, due to a cache coherence protocol such as MESI. Another
important thing to understand is that correctly synchronized programs
exhibit sequentially consistent behavior, and for sequential consistency
the real-time order isn't relevant. So reads and writes can be skewed
as long as nobody can observe a violation of the program order.
Related
I just want to know how the two code snippets below, which provide the same functionality, differ.
Code 1:
class ReadWriteCounter {
    ReadWriteLock lock = new ReentrantReadWriteLock();
    private Integer count = 0;

    public Integer incrementAndGetCount() {
        lock.writeLock().lock();
        try {
            count = count + 1;
            return count;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public Integer getCount() {
        lock.readLock().lock();
        try {
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }
}
Code 2:
class ReadWriteCounter {
    private Integer count = 0;

    public Integer getCount() {
        synchronized (count) {
            return count;
        }
    }

    public void setCount(Integer i) {
        synchronized (count) {
            count = i;
        }
    }
}
The purpose is to ensure that when count is modified no other thread accesses it for reading, and that while it is being read no other thread accesses it for writing. Which is the optimal solution, and why? Also, I will be using this in a class with field variables that need to be edited. Please offer your suggestions.
ReentrantReadWriteLock is the best way to implement what you have in mind.
synchronized would allow only one thread into the block even when two or more threads merely attempt to read count.
With a read lock, all of them can read the value of count concurrently.
Both of your solutions work; however, there is a bug in the way you implement locking.
First the difference in the two approaches:
The ReentrantReadWriteLock is mainly used in situations wherein you have many more reads than writes typically in ratios of 10 reads : 1 write. This allows the reads to happen concurrently without blocking each other however when a write starts all reads will be blocked. So performance is the primary reason.
Bug in your approach :
The object on which you lock should be final. In setCount() you are effectively swapping the lock object out, so two threads may end up locking on different objects, which can cause a dirty read at that moment.
Also, never expose the object that you lock on. The lock object should be private and final. The reason is that if you expose it, a caller may use the returned object itself for locking, in which case you will run into contention issues with components outside this class.
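A sketch of Code 2 with both fixes applied (locking on a private final object, plus the return type missing from getCount in the original):

```java
class ReadWriteCounter {
    // Private and final: the monitor can never be swapped out or
    // reached by callers outside this class.
    private final Object lock = new Object();
    private Integer count = 0;

    public Integer getCount() {
        synchronized (lock) {
            return count;
        }
    }

    public void setCount(Integer i) {
        synchronized (lock) {
            count = i; // reassigning count is now harmless: we never lock on it
        }
    }
}
```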
I have a question on ReadwriteLocks good practice. I've only ever used synchronized blocks before, so please bear with me.
Is the code below a correct way in which to use a ReadWriteLock? That is,
Obtain the lock in the private method.
If a condition is met, return from the private method without releasing the lock, and release the lock in the public method instead.
Alternatively:
Obtain the lock in the private method.
If the condition is not met, release the lock immediately in the private method.
Many thanks
private List<Integer> list = new ArrayList<Integer>();
private ReadWriteLock listLock = new ReentrantReadWriteLock();

public int methodA(int y) {
    ...........
    long ago = methodB(y);
    list.remove(y);
    listLock.writeLock().unlock();
}

private long methodB(int x) {
    listLock.writeLock().lock();
    if (list.contains(x)) {
        long value = // do calculations on x
        return value;
    } else {
        listLock.writeLock().unlock();
        // return something else unconnected with list
    }
}
Normally when using locks you would do something similar to this.
Lock lock = ...; // Create type of lock
lock.lock();
try {
// Do synchronized stuff
}
finally {
lock.unlock();
}
This ensures that the lock is always released at the end of the block, no matter whether an exception is thrown. Since you are using a reentrant lock, you can place this pattern in both methods and it will work correctly, not releasing the lock until the outermost finally block executes.
Edit: the Javadoc for the Lock interface reiterates what I posted.
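Applied to the code in the question, a hedged sketch might look like this (the elided calculations are replaced with placeholders of my own; every lock()/unlock() pair stays inside one method):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ListHolder {
    private final List<Integer> list = new ArrayList<>();
    private final ReentrantReadWriteLock listLock = new ReentrantReadWriteLock();

    public long methodA(int y) {
        listLock.writeLock().lock(); // reentrant: methodB may lock again
        try {
            long ago = methodB(y);
            list.remove(Integer.valueOf(y)); // remove by value, not by index
            return ago;
        } finally {
            listLock.writeLock().unlock(); // always released, even on exception
        }
    }

    private long methodB(int x) {
        listLock.writeLock().lock();
        try {
            if (list.contains(x)) {
                return x * 2L; // placeholder for the real calculation on x
            }
            return -1L; // placeholder for "something else unconnected with list"
        } finally {
            listLock.writeLock().unlock();
        }
    }
}
```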
I would like to know if there is an existing alternative or how to implement the semantics of java.util.concurrent.locks.Lock#tryLock() before Java 5. That is the possibility to back off immediately if the lock is already held by another thread.
If you need a Lock supporting a tryLock operation, you can't use the intrinsic locking facility of Java. You have to implement your own Lock class which maintains the required state, i.e. an owner Thread and a counter, and it might use intrinsic locking for its implementation of the thread-safe updates and blocking (there are not many alternatives in older Java versions).
A very simple implementation might look like this:
public final class Lock {
    private Thread owner;
    private int nestCount;

    public synchronized void lock() throws InterruptedException {
        for (;;) {
            if (tryLock()) return;
            wait();
        }
    }

    public synchronized boolean tryLock() {
        Thread me = Thread.currentThread();
        if (owner != me) {
            if (nestCount != 0) return false;
            owner = me;
        }
        nestCount++;
        return true;
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (--nestCount == 0) {
            owner = null;
            notify();
        }
    }
}
Note that the intrinsic lock of the Lock instance, enforced by the synchronized methods, is held for a very short time only. The threads will either return immediately or go into the wait state, which implies releasing the lock as well. Hence tryLock will exhibit the desired behavior, though the Java 5+ equivalent will likely be more efficient. (The Java 5+ implementation of synchronized is more efficient as well…)
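For comparison, the same back-off pattern with the Java 5+ java.util.concurrent.locks.ReentrantLock (the method name attemptWork is my own):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Returns true if the critical section was entered,
    // false if we backed off because another thread holds the lock.
    public static boolean attemptWork() {
        if (!lock.tryLock()) {
            return false; // back off immediately instead of blocking
        }
        try {
            // critical section
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```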
Is is okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Consider:
public class A
{
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized (lock)
        {
            a += 1;
        }
    }

    public int getA()
    {
        return a;
    }
}
I've heard people argue that it's possible for getA() and incrementA() to be called at roughly the same time and have getA() return the wrong thing. However it seems like, in the case that they're called at the same time, even if the getter is synchronized you can get the wrong thing. In fact the "right thing" doesn't even seem defined if these are called concurrently. The big thing for me is that the state remains consistent.
I've also heard talk about JIT optimizations. Given an instance of the above class and the following code(the code would be depending on a to be set in another thread):
while(myA.getA() < 10)
{
//incrementA is not called here
}
it is apparently a legal JIT optimization to change this to:
int temp = myA.getA();
while(temp < 10)
{
//incrementA is not called here
}
which can obviously result in an infinite loop.
Why is this a legal optimization? Would this be illegal if a was volatile?
Update
I did a little bit of testing into this.
public class Test
{
    private int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized (lock)
        {
            a += 1;
        }
    }

    public int getA()
    {
        return a;
    }

    public static void main(String[] args)
    {
        final Test myA = new Test();
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true)
                {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    myA.incrementA();
                }
            }
        });
        t.start();
        while (myA.getA() < 15)
        {
            System.out.println(myA.getA());
        }
    }
}
Using several different sleep times, this worked even when a is not volatile. This of course isn't conclusive, it still may be legal. Does anyone have some examples that could trigger such JIT behaviour?
Is is okay to synchronize all methods which mutate the state of an object, but not synchronize anything which is atomic? In this case, just returning a field?
Depends on the particulars. It is important to realize that synchronization does two important things: it is not just about atomicity, it is also required for memory visibility. If one thread updates the a field, other threads may not see the update because of memory caching on the local processor. Making the int field a volatile solves this problem. Making both the get and the set method synchronized will as well, but it is more expensive.
If you want to be able to change and read a from multiple threads, the best mechanism is to use an AtomicInteger.
private AtomicInteger a = new AtomicInteger(5);

public void setA(int a) {
    // no need to synchronize because of the magic of the `AtomicInteger` code
    this.a.set(a);
}

public int getA() {
    // AtomicInteger also takes care of the memory synchronization
    return a.get();
}
I've heard people argue that it's possible for getA() and setA() to be called at roughly the same time and have getA() return the wrong thing.
This is true, but you can also get the wrong value if getA() is called after setA(). A stale cached value can stick around forever.
which can obviously result in an infinite loop. Why is this a legal optimization?
It is a legal optimization because threads running with their own memory cache asynchronously is one of the important reasons why you see performance improvements with them. If all memory accesses were synchronized with main memory, then the per-CPU memory caches would not be used and threaded programs would run a lot slower.
Would this be illegal if a was volatile?
It is not legal if there is some way for a to be altered – by another thread possibly. If a was final then the JIT could make that optimization. If a was volatile or the get method marked as synchronized then it would certainly not be a legal optimization.
It's not thread-safe because the getter does not ensure that a thread will see the latest value; the value may be stale. Making the getter synchronized ensures that any thread calling it sees the latest value instead of a possibly stale one.
You basically have two options:
1) Make your int volatile
2) Use an atomic type like AtomicInteger
Using a normal int without synchronization is not thread-safe at all.
Your best solution is to use an AtomicInteger; it was basically designed for exactly this use case.
If this is more of a theoretical "could this be done" question, I think something like the following would be safe (but still not perform as well as an AtomicInteger):
public class A
{
    private volatile int a = 5;
    private static final Object lock = new Object();

    public void incrementA()
    {
        synchronized (lock)
        {
            final int tmp = a + 1;
            a = tmp;
        }
    }

    public int getA()
    {
        return a;
    }
}
The short answer is that your example will be thread-safe if
the variable is declared as volatile, or
the getter is declared as synchronized.
The reason that your example class A is not thread-safe is that one can create a program using it that doesn't have a "well-formed execution" (see JLS 17.4.7).
For instance, consider
// in thread #1
int a1 = A.getA();
Thread.sleep(...);
int a2 = A.getA();
if (a1 == a2) {
    System.out.println("no increment");
}

// in thread #2
A.incrementA();
in the scenario that the increment happens during the sleep.
For this execution to be well-formed, there must be a "happens before" (HB) chain between the assignment to a in incrementA called by thread #2, and the subsequent read of a in getA called by thread #1.
If the two threads synchronize using the same lock object, then there is a HB between one thread releasing the lock and a second thread acquiring the lock. So we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #2 releases lock --HB-->
thread #1 acquires lock --HB-->
thread #1 reads a
If two threads share a volatile variable a, there is a HB between any write and any subsequent read (without an intervening write). So we typically get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a --HB-->
thread #1 reads a
Note that incrementA needs to be synchronized to avoid race conditions with other threads calling incrementA.
If neither of the above is true, we get this:
thread #2 acquires lock --HB-->
thread #2 reads a --HB-->
thread #2 writes a // No HB!!
thread #1 reads a
Since there is no HB between the write by thread #2 and the subsequent read by thread #1, the JLS does not guarantee that the latter will see the value written by the former.
Note that this is a simplified version of the rules. For the complete version, you need to read all of JLS Chapter 17.
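The volatile case in the chains above can be sketched as a tiny publish/consume pair (the names publish and await are illustrative):

```java
public class HappensBefore {
    static int payload;            // plain, non-volatile field
    static volatile boolean ready; // the volatile write/read pair supplies the HB edge

    static void publish(int value) {
        payload = value; // plain write ...
        ready = true;    // ... ordered before this volatile write
    }

    static int await() {
        while (!ready) { /* spin until the volatile read observes true */ }
        // The volatile read that saw true happens-after the volatile write,
        // so the plain write to payload is guaranteed to be visible here.
        return payload;
    }
}
```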
Hello, I just had a phone interview and was not able to answer this question; I would like to know the answer, since I believe it's advisable to seek out answers you don't know. Please help me understand the concept.
His question was:
"The synchronized block only allows one thread at a time into the mutually exclusive section.
When a thread exits the synchronized block, the synchronized block does not specify
which of the waiting threads will be allowed next into the mutually exclusive section.
Using synchronized and the methods available in Object, can you implement a first-come,
first-served mutually exclusive section? One that guarantees that threads are let into
the mutually exclusive section in the order of arrival?"
public class Test {
    public static final Object obj = new Object();

    public void doSomething() {
        synchronized (obj) {
            // mutual exclusive section
        }
    }
}
Here's a simple example:
public class FairLock {
    private int _nextNumber;
    private int _curNumber;

    public synchronized void lock() throws InterruptedException {
        int myNumber = _nextNumber++;
        while (myNumber != _curNumber) {
            wait();
        }
    }

    public synchronized void unlock() {
        _curNumber++;
        notifyAll();
    }
}
you would use it like:
public class Example {
    private final FairLock _lock = new FairLock();

    public void doSomething() {
        _lock.lock();
        try {
            // do something mutually exclusive here ...
        } finally {
            _lock.unlock();
        }
    }
}
(note, this does not handle the situation where a caller to lock() receives an interrupted exception!)
What they were asking for is a fair mutex.
Create a FIFO queue of lock objects that waiting threads push onto and then wait on (all of this, except the waiting itself, done in a synchronized block on a separate lock).
Then, when the lock is released, an object is popped off the queue and the thread waiting on it is woken (also synchronized on the same lock used for adding the objects).
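One way to realize that design, simplified so that the FIFO queue holds the waiting threads themselves and unlock() uses notifyAll() instead of one notify per queued lock object, might look like this (a sketch, not production code):

```java
import java.util.ArrayDeque;

public class QueueFairLock {
    private final ArrayDeque<Thread> waiters = new ArrayDeque<>();
    private Thread owner;

    public synchronized void lock() throws InterruptedException {
        Thread me = Thread.currentThread();
        waiters.add(me); // join the FIFO queue
        try {
            // proceed only when the lock is free and we are at the head of the queue
            while (owner != null || waiters.peek() != me) {
                wait();
            }
        } catch (InterruptedException e) {
            waiters.remove(me); // give up our place on interrupt
            notifyAll();
            throw e;
        }
        waiters.remove(); // we are the head
        owner = me;
    }

    public synchronized void unlock() {
        owner = null;
        notifyAll(); // wake everyone; only the new head of the queue proceeds
    }
}
```

Waking every waiter and letting only the head proceed is wasteful compared to one lock object per waiter, but it keeps the sketch short.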
You can use ReentrantLock with fairness parameter set to true. Then the next thread served will be the thread waiting for the longest time i.e. the one that arrived first.
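A minimal sketch of that suggestion:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairExample {
    // Passing true selects fair mode: under contention the lock
    // favors the longest-waiting thread.
    private static final ReentrantLock lock = new ReentrantLock(true);

    public static boolean isFair() {
        return lock.isFair();
    }

    public static void doSomething() {
        lock.lock();
        try {
            // mutually exclusive section, entered in (approximate) arrival order
        } finally {
            lock.unlock();
        }
    }
}
```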
Here is my attempt. The idea is to give a ticket number to each thread; threads enter based on the order of their ticket numbers. I am not very familiar with Java, so please read my comments:
public class Test {
    public static final Object obj = new Object();
    private int count = 0;          // next ticket to hand out (Java has no unsigned int)
    private volatile int next = 0;  // ticket currently served; volatile so waiting threads see updates

    public void doSomething() {
        int myNumber; // my ticket number
        // the critical section is small: just pick your ticket number (guarantees FIFO)
        synchronized (obj) { myNumber = count++; }
        // busy waiting
        while (next != myNumber);
        // mutual exclusion
        next++; // only one thread (the one being served) modifies this variable
    }
}
The disadvantage of this answer is the busy waiting, which will consume CPU time.
Using only Object's methods and synchronized is, in my view, a little difficult. Maybe by giving each thread a priority you can guarantee ordered access to the critical section.