Should locks in multi threading always remain immutable? - java

I am trying the below code snippet, which gives the expected output. But I am curious to know how many lock objects I am creating: 1 or 2?
package com.practice;

class A {
    String lock = "";

    A(String lock) {
        this.lock = lock;
    }

    A() { }

    public void printMe(int x) {
        int counter = x;
        synchronized (lock) {
            System.out.println(++counter);
        }
    }
}

public class MultiThreadingPractice {
    public static void main(String[] args) {
        A a = new A("1");
        Thread t1 = new Thread(() -> {
            a.printMe(1);
        });
        a.lock = new String();
        Thread t2 = new Thread(() -> {
            a.printMe(1);
        });
        t1.start();
        t2.start();
    }
}

Should locks in multi threading always remain immutable? Yes.
You're using a String (any Object would do) as a lock. When you assign a new value (new String) to the lock, that means you have more than one lock instance around. It's ok as long as all threads are synchronizing on the same lock instance, but there is nothing overt in your class A code to ensure that is the case.
In your actual use in the example, you're safe. Since no thread gets started until after you've finished setting the lock to the third and last instance, nothing will be trying to sync on the lock until it's stable. (3 instances: the first is the initialization to the empty String; the second is the supplied constructor argument "1"; and the third is the explicit assignment, to a different empty String.) So, though this code "works", it only works by what I refer to as "coincidence", i.e., it's not thread-safe by design.
But let's assume a case where you start each thread immediately after you construct it. This means you'd be reassigning the lock member after t1 was running but before t2 was even created.
After a while both threads will be synchronizing on the new lock instance, but for a period around the point at which you switched the lock, thread t1 could be, and probably is, inside the synchronized(lock) { ... } block using the old lock instance. And around that time, thread t2 could execute and attempt to synchronize on the new lock instance.
In short, you've created a timing window (race hazard) in the mechanism that you intend to use to eliminate timing windows.
You could arrange a further level of synchronization that allows you to replace the lock, but I am unable to imagine any straightforward situation where that would be necessary, useful, or sensible. Much better to allocate one lock before any contention can occur and then stick to that one lock.
P.S. "How many locks am I creating?" 3. Though the first two are never used.

curious to know how many lock objects I am creating 1 or 2?
This line in your program refers to a single String instance, created once when the class is loaded (string literals are interned), so any A instance starts out with a reference to that same String.
String lock="";
This line creates a second String instance, because new String(...) always creates a new object, regardless of whether or not some other String with the same value has been interned.
a.lock = new String();
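The instance counting can be checked directly with reference comparisons (InternDemo is an illustrative name):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "";           // literal: resolved to the interned instance in the string pool
        String b = "";           // same pooled instance as a
        String c = new String(); // new String() always allocates a fresh object

        System.out.println(a == b);      // true: one pooled instance
        System.out.println(a == c);      // false: distinct identity
        System.out.println(a.equals(c)); // true: equal value nonetheless
    }
}
```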

Should locks in multi threading always remain immutable? No.
The typical way to synchronize is to use synchronized methods, not separate locks. This approach is less error-prone and is therefore preferred. It is equivalent to using the current object as the lock:
synchronized (this) {
    ...
}
and since this object is usually not immutable, we can conclude that using mutable objects as locks is a common practice.
What you are doing in your code when you change the reference to the lock object (yes, you are changing the reference, not the lock object itself) is a very bad approach. It invites programming errors.
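The equivalence mentioned above can be sketched as follows (Counter is a made-up class): a synchronized instance method and a synchronized (this) block acquire the same monitor, the instance itself, so the two increment methods below exclude each other.

```java
// Counter is an illustrative class, not from the original post.
class Counter {
    private int c = 0;

    // Declaring the method synchronized locks on `this`...
    public synchronized void incSyncMethod() {
        c++;
    }

    // ...which is exactly the monitor this block takes.
    public void incSyncBlock() {
        synchronized (this) {
            c++;
        }
    }

    public synchronized int value() {
        return c;
    }
}
```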

Related

What object is the lock on when a String parameter is used for locking?

I came across code like this in one of the repos. I checked it out and it works. (Only one thread enters the synchronized block.)
public Void hello(String s) {
    synchronized (s) {
        i++;
        System.out.println(i);
    }
    return null;
}
My question is: is the lock obtained on the String class itself? If yes, wouldn't it mean that if code like this exists elsewhere in the code base, it will all wait to obtain a lock on the same object, thus adding unnecessary delays?
The intrinsic lock on the String object is the lock that gets acquired. But whether the locking works depends on whether the string is always the same instance or not. The string pool and interning will affect this.
That it is difficult to figure out if the same instance will be used is only one reason not to do this.
If two classes in the application use the same string instance then one of them can acquire the lock and shut the other out. So you can have conceptually unrelated objects affecting each other, contending for the same lock.
Also people are going to be confused into thinking they can use the string value to mean something and have code change the value of s inside the synchronized block or method. That will break all the locking. Locks are on the objects, not on the variables. Changing values means the thread currently holding the lock now has the old object but threads trying to get in are trying to get the lock on the new object, a thread can acquire the new object and start executing the synchronized code before the previous thread is done.
It might work by accident sometimes but it is a terrible idea. Use a dedicated object as a lock instead:
private final Object lock = new Object();
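The cross-class contention described above can be illustrated with a hypothetical sketch (CacheRefresher and ReportWriter are invented names): because string literals are interned, two conceptually unrelated classes that lock on the same literal contend for one and the same monitor.

```java
// Hypothetical illustration: both classes lock on the literal "LOCK".
// Literals are interned, so there is a single String instance and hence
// a single monitor shared by these otherwise unrelated classes.
class CacheRefresher {
    void refresh() {
        synchronized ("LOCK") {
            // ... long-running work here would shut ReportWriter.write() out
        }
    }
}

class ReportWriter {
    void write() {
        synchronized ("LOCK") {
            // ... blocked whenever CacheRefresher.refresh() holds the monitor
        }
    }
}

public class SharedLiteralDemo {
    public static void main(String[] args) {
        // Both literals resolve to the same interned instance.
        System.out.println("LOCK" == "LOCK"); // true
    }
}
```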

Synchronization on object, java [duplicate]

This question already has answers here:
Avoid synchronized(this) in Java?
(23 answers)
Closed 7 years ago.
I am reading the Oracle tutorials about multithreaded programming in Java. I don't understand why I should create a new object to synchronize some part of the code. What purpose does the creation of a new dummy object serve?
I do understand that creating these two objects will prevent the compiler from reordering the code segment guarded by the construct synchronized(lock1){}.
However, I would like to know: can I use any other objects (besides MsLunch) in the construct synchronized(lock1){}?
What is the motivation behind introducing such a construct, synchronized(lock1){}?
Here is the piece of code, I am concerned with:
public class MsLunch {

    private long c1 = 0;
    private long c2 = 0;

    // what is the purpose of these two objects? how do they serve as locks?
    private Object lock1 = new Object();
    private Object lock2 = new Object();

    public void inc1() {
        synchronized(lock1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized(lock2) {
            c2++;
        }
    }
}
First some basics:
The synchronization object is used as a key to open a door to a restricted area (the synchronization block).
Once a thread has entered this restricted area, it holds the monitor (lock), so no other thread can enter. When a thread exits the restricted area, it releases the monitor, and another thread can take it.
This means that each synchronized block that uses the same synchronization object will prevent other threads from entering until the monitor is available (unlocked).
For what reason does the creation of new dummy object serve?
The reason is that you should use objects that are not accessible to any objects other than the ones that use them. That's why the synchronization objects are private. If they were accessible to others, other code might use them in its own synchronized blocks somewhere in the application. If the synchronization object is e.g. public, then every other object can use it. This might lead to unexpected usage and might result in deadlocks.
So you should make sure you control who gets a reference to the synchronization object.
The lock is accessed by the threads to check if the code is currently "in use" or if they can execute it after locking the object themselves.
You may think this could be automatic, but it has a use a compiler couldn't infer: you can use the same lock to synchronize multiple code blocks, restricting access to all of them at the same time.
In my opinion, the dummy Object just represents a point of synchronization.
To synchronize different threads, you must be able to declare a point of synchronization and (if needed) give access to this point from different parts of the code. For example:
private Object lock = new Object();

public void inc1() {
    synchronized(lock) {
        c1++;
    }
}

public void inc2() {
    synchronized(lock) {
        c2++;
    }
}
This example shows the usage of the same point of synchronization from different parts of the code.
And because Java is an object-oriented language, i.e., the unit of the language is the object, we have exactly such a construct.
By the way, you can use any object as a point of synchronization. You can even do this:
public void inc1() {
    synchronized(this) {
        c1++;
    }
}
what is the purpose of these two objects? how do they serve as locks?
As shown in your code, inc1() and inc2() can each be executed by only one thread at a time. This means that c1 and c2 can each be incremented by just one thread at a time, but they can be incremented simultaneously by two different threads (because the two methods synchronize on different objects).
If both blocks were synchronized on the same object, as in synchronized(lock1), c1 and c2 could not be incremented simultaneously.

How is this lock useful anyway?

I am relatively new to concurrency. I am practicing concurrency from the Oracle website. I am stuck on the following example:
public class MsLunch {

    private long c1 = 0;
    private long c2 = 0;

    private Object lock1 = new Object();
    private Object lock2 = new Object();

    public void inc1() {
        synchronized(lock1) {
            c1++;
        }
    }

    public void inc2() {
        synchronized(lock2) {
            c2++;
        }
    }
}
I want to know how this type of lock is useful compared to a synchronized(this) lock, and in which situations this type of lock should be preferred.
Thanks
It's useful when you have a single object that needs a finer resolution of exclusion than the object itself. In other words, an object with multiple resources that need to be protected but not necessarily from each other.
Being brought up in the pthreads world, this is easy to understand, since the mutex and its protected resource tended to always be decoupled - there was never this handy shorthand found in Java for using an object as the lock.
As an example, say each object has an array of a thousand integers and you want to lock a group of a hundred at a time (0xx, 1xx, and so on) for maximum concurrency.
In that case, you create ten objects, one per group of a hundred, and you can lock individual parts of the array. That way, if you have a thread fiddling about with the 0xx and 4xx blocks, it won't stop another thread from coming in and doing something with the 7xx block.
Now that's a pretty contrived example but the concept does occasionally show up in reality.
In the specific example you've given, only one thread at a time can come in and increment c1, but a lock on lock1 still permits another thread to come in and change c2.
With an object-wide (this) lock, concurrency would be reduced because c1 and c2 couldn't be updated concurrently, despite the fact that there is no conflict.
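The group-of-a-hundred idea above can be sketched roughly like this (StripedArray and its stripe size are illustrative choices, not from the original answer):

```java
// A minimal lock-striping sketch: a 1000-element array guarded in groups
// of 100, each group by its own dedicated lock object.
class StripedArray {
    private static final int STRIPE_SIZE = 100;

    private final int[] data = new int[1000];
    private final Object[] locks = new Object[data.length / STRIPE_SIZE]; // 10 locks

    StripedArray() {
        for (int i = 0; i < locks.length; i++) {
            locks[i] = new Object();
        }
    }

    public void increment(int index) {
        // Only the stripe containing `index` is locked; a thread working on
        // the 0xx block does not stop another thread working on the 7xx block.
        synchronized (locks[index / STRIPE_SIZE]) {
            data[index]++;
        }
    }

    public int get(int index) {
        synchronized (locks[index / STRIPE_SIZE]) {
            return data[index];
        }
    }
}
```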
An object lock simply means you are guarding the critical section explicitly with a separate object; synchronized (this) means you are guarding the critical section with the current object.
Having two separate locks for inc1 and inc2 is useful in situations where you want to make sure that no two concurrent threads can both call the same method at the same time, but where you do want to allow two concurrent threads to call different methods on the same object.
In your example, if thread 1 calls inc1, thread 2 is blocked from calling inc1 until thread 1 is done. However, thread 2 is free to call inc2.

Guarding the initialization of a non-volatile field with a lock?

For educational purposes I'm writing a simple version of AtomicLong, where an internal variable is guarded by ReentrantReadWriteLock.
Here is a simplified example:
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PlainSimpleAtomicLong {

    private long value;
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public PlainSimpleAtomicLong(long initialValue) {
        this.value = initialValue;
    }

    public long get() {
        long result;
        rwLock.readLock().lock();
        try {
            result = value;
        } finally {
            rwLock.readLock().unlock();
        }
        return result;
    }

    // incrementAndGet, decrementAndGet, etc. are guarded by rwLock.writeLock()
}
My question: since "value" is non-volatile, is it possible for other threads to observe incorrect initial value via PlainSimpleAtomicLong.get()?
E.g. thread T1 creates L = new PlainSimpleAtomicLong(42) and shares reference with a thread T2. Is T2 guaranteed to observe L.get() as 42?
If not, would wrapping this.value = initialValue; into a write lock/unlock make a difference?
Chapter 17 of the JLS reasons about concurrent code in terms of happens-before relationships. In your example, if you take two random threads, then there is no happens-before relationship between this.value = initialValue; and result = value;.
So if you have something like:
T1.start();                            // 1
T2.start();                            // 2
...                                    // 3
T1: L = new PlainSimpleAtomicLong(42); // 4
T2: long value = L.get();              // 5
The only happens-before (hb) relationships you have (apart from program order in each thread) are: 1 & 2 hb 3, 4, 5.
But 4 and 5 are not ordered. If however T1 called L.get() before T2 called L.get() (from a wall clock perspective) then you would have a hb relationship between unlock() in T1 and lock() in T2.
As already commented, I don't think your proposed code could break on any combination of JVM/hardware but it could break on a theoretical implementation of the JMM.
As for your suggestion to wrap the constructor body in a lock/unlock, I don't think it would be enough because, in theory at least, T1 could release a valid (non-null) reference to L before running the body of the constructor. The risk would then be that T2 could acquire the lock before T1 has acquired it in the constructor. Then again, this is an interleaving that is probably impossible on current JVMs/hardware.
So to conclude, if you want theoretical thread safety, I don't think you can do without a volatile long value, which is how AtomicLong is implemented. volatile would guarantee that the field is initialised before the object is published. Note finally that the issues I mention here are not due to your object being unsafe (see @BrettOkken's answer) but are based on a scenario where the object is not safely published across threads.
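That conclusion can be sketched as follows (VolatileBackedAtomicLong is a hypothetical name, and this is not how AtomicLong is really implemented, since the real class uses compare-and-swap rather than a lock): the volatile field makes the constructor's write visible to any thread that later reads it, while compound updates still need mutual exclusion.

```java
public class VolatileBackedAtomicLong {

    // volatile: the write in the constructor is visible to any thread that
    // reads the field afterwards, so get() needs no lock at all.
    private volatile long value;
    private final Object writeLock = new Object();

    public VolatileBackedAtomicLong(long initialValue) {
        this.value = initialValue;
    }

    public long get() {
        return value; // plain volatile read
    }

    public long incrementAndGet() {
        synchronized (writeLock) { // read-modify-write still needs mutual exclusion
            return ++value;
        }
    }
}
```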
Assuming that you do not allow a reference to the instance to escape your constructor (your example looks fine), then a second thread can never see the object with any value of "value" other than what it was constructed with because all accesses are protected by a monitor (the read write lock) which was final in the constructor.
https://www.ibm.com/developerworks/library/j-jtp0618/
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/Lock.html
I think that for the initial value, both threads would see the same value (since they can get hold of the object only after the constructor has finished).
But:
If you change the value in one thread, the other thread may not see the new value if you don't use volatile.
If you want to implement set, wrapping it with lock/unlock will not solve the problem by itself; a lock is the right tool when you need an atomic operation (like increment).
It also doesn't mean the code would work the way you want, since you don't control context switches. For example, if two threads call set with values 4 and 8, you don't know when a context switch occurs, so you don't know which thread will acquire the lock first.

What is the memory visibility of variables accessed in static singletons in Java?

I've seen this type of code a lot in projects, where the application wants a global data holder, so they use a static singleton that any thread can access.
public class GlobalData {
    // Data-related code. This could be anything; I've used a simple String.
    //
    private String someData;

    public String getData() { return someData; }
    public void setData(String data) { someData = data; }

    // Singleton code
    //
    private static GlobalData INSTANCE;

    private GlobalData() {}

    public static synchronized GlobalData getInstance() {
        if (INSTANCE == null) INSTANCE = new GlobalData();
        return INSTANCE;
    }
}
I hope it's easy to see what's going on. One can call GlobalData.getInstance().getData() at any time on any thread. If two threads call setData() with different values, even if you can't guarantee which one "wins", I'm not worried about that.
But thread-safety isn't my concern here. What I'm worried about is memory visibility. Whenever there's a memory barrier in Java, the cached memory is synched between the corresponding threads. A memory barrier happens when passing through synchronizations, accessing volatile variables, etc.
Imagine the following scenario happening in chronological order:
// Thread 1
GlobalData d = GlobalData.getInstance();
d.setData("one");
// Thread 2
GlobalData d = GlobalData.getInstance();
d.setData("two");
// Thread 1
String value = d.getData();
Isn't it possible that the last value of value in thread 1 can still be "one"? The reason being, thread 2 never called any synchronized methods after calling d.setData("two") so there was never a memory barrier? Note that the memory-barrier in this case happens every time getInstance() is called because it's synchronized.
You are absolutely correct.
There is no guarantee that writes in one thread will be visible in another.
To provide this guarantee you would need to use the volatile keyword:
private volatile String someData;
Incidentally, you can leverage the Java classloader to provide thread-safe lazy init of your singleton as documented here. This avoids the synchronized keyword and therefore saves you some locking.
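The classloader trick mentioned above is the initialization-on-demand holder idiom; a minimal sketch (GlobalDataHolder is a hypothetical name):

```java
public class GlobalDataHolder {

    private GlobalDataHolder() {}

    // The nested class is not initialized until first use, and the JVM
    // guarantees class initialization is thread-safe, so INSTANCE is created
    // lazily, exactly once, with no explicit locking.
    private static class Holder {
        static final GlobalDataHolder INSTANCE = new GlobalDataHolder();
    }

    public static GlobalDataHolder getInstance() {
        return Holder.INSTANCE;
    }
}
```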
It is worth noting that the currently accepted best practice is to use an enum for storing singleton data in Java.
Correct, it is possible that thread 1 still sees the value as being "one" since no memory synchronization event occurred and there is no happens before relationship between thread 1 and thread 2 (see section 17.4.5 of the JLS).
If someData was volatile then thread 1 would see the value as "two" (assuming thread 2 completed before thread 1 fetched the value).
Lastly, and off topic, the implementation of the singleton is slightly less than ideal since it synchronizes on every access. It is generally better to use an enum to implement a singleton or at the very least, assign the instance in a static initializer so no call to the constructor is required in the getInstance method.
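The enum approach both answers recommend could look like this for the question's GlobalData (a sketch, keeping someData volatile so that later writes stay visible across threads; the JVM guarantees safe publication of the single INSTANCE):

```java
// Enum-based singleton: no lazy-init code, no synchronization on access.
public enum GlobalData {
    INSTANCE;

    private volatile String someData;

    public String getData() { return someData; }
    public void setData(String data) { someData = data; }
}
```

A caller writes GlobalData.INSTANCE.setData("two"), and any thread that subsequently reads getData() sees that value.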
Isn't it possible that the last value of value in thread 1 can still be "one"?
Yes it is. The java memory model is based on happens before (hb) relationships. In your case, you only have getInstance exit happens-before subsequent getInstance entry, due to the synchronized keyword.
So if we take your example (assuming the thread interleaving is in that order):
// Thread 1
GlobalData d = GlobalData.getInstance(); //S1
d.setData("one");
// Thread 2
GlobalData d = GlobalData.getInstance(); //S2
d.setData("two");
// Thread 1
String value = d.getData();
You have S1 hb S2. If you called d.getData() from Thread2 after S2, you would see "one". But the last read of d is not guaranteed to see "two".
