Concurrent collection:
ConcurrentMap<LocalDate, A> ex = new ConcurrentHashMap<>();
class A {
    AtomicLong C;
    AtomicLong D;
}
How can I synchronize by locking "C" and "D"? That is, I need to change "C" and "D" at the same time, with a guarantee that while I change one of them, the other does not change from external actions.
Thank you.
What you're solving for
You are describing that you want to:
allow a caller to modify something on an object
prevent any other callers (other threads) from modifying things at the same time
Solution description
This solution uses synchronized, though there are a number of other mechanisms available in Java that would support this (several of which are covered in
the Lock Objects section of the Java Tutorials).
The way "synchronized" works is that you designate some code using the "synchronized" keyword, along with an object to synchronize on.
When your code runs, the JVM will guarantee that, for all code which is synchronized – on the same object – only one thread
can proceed at a time.
You can make a synchronized code block, like below. Note: this defines an Object named "lock", but it's just a name chosen for clarity when
reading the code – you could name it anything you like.
Object lock = new Object();
synchronized (lock) {
... // all things here run only when "lock" is available
}
You can also designate an entire method as being synchronized, like this:
public synchronized void print() {
    System.out.println("hello");
}
This second example behaves like the first - it also locks on an object - but it's not clear at a glance what that object is; that is, how does
the JVM know which object to synchronize on? This approach works when the method is called on an object instance, and in that case the lock is this. I'll show an example below.
There's good info in the Java Tutorials about synchronized methods.
Solution #1: synchronized block using Object lock
Here are a few notes about a class AllowOneEditAtATime:
it has two private members: one and two
because they're private, they cannot be changed directly – so it would not be allowed to do something like this:
AllowOneEditAtATime a = new AllowOneEditAtATime();
a.one = new AtomicLong(1); // cannot change "one" directly because it is private
defines a private final Object lock (initialized, since synchronizing on null would throw a NullPointerException) – this is meant to act as the thing that two different synchronized code blocks will lock on. It's totally fine to have different blocks of code each synchronize on the same object. This is the main technique you're after.
uses a synchronized block inside setOne(), synchronizing on "lock"
uses another synchronized block inside the other method – setTwo() – also synchronizing on "lock"
because both setOne() and setTwo() are synchronized on the same object, only one of them will be allowed to run at a time
class AllowOneEditAtATime1 {
    private final Object lock = new Object();
    private AtomicLong one;
    private AtomicLong two;

    public void setOne(AtomicLong newOne) {
        synchronized (lock) {
            one = newOne;
        }
    }

    public void setTwo(AtomicLong newTwo) {
        synchronized (lock) {
            two = newTwo;
        }
    }
}
Solution #2: synchronized block, using this
Solution #1 works fine, but it isn't necessary (in this case) to create an entire object just for locking. Instead, you can rely on the fact that
this code runs only after someone called new AllowOneEditAtATime(), which means there's always an object instance, which means inside the code
you can use this. The this keyword refers to the object instance itself, the actual instance of AllowOneEditAtATime.
So here's a variation using this (no more Object lock):
class AllowOneEditAtATime2 {
    private AtomicLong one;
    private AtomicLong two;

    public void setOne(AtomicLong newOne) {
        synchronized (this) {
            one = newOne;
        }
    }

    public void setTwo(AtomicLong newTwo) {
        synchronized (this) {
            two = newTwo;
        }
    }
}
Solution #3: synchronized methods, implicit lock
Solution #2 works fine, but since we're using this as the lock, and since the code paths fit with doing this, we can use
synchronized methods
instead of synchronized code blocks.
That means we can replace this:
public void setOne(AtomicLong newOne) {
    synchronized (this) {
        one = newOne;
    }
}
with this:
public synchronized void setOne(AtomicLong newOne) {
    one = newOne;
}
Under the covers, the entire setOne() method is synchronized on this automatically, so it isn't necessary to
include synchronized (this) { .. }
at all. In Solution #2, both methods were doing that, so both can be replaced.
By synchronizing both methods, they will both be synchronized on the object instance (this), which is similar to Solution #2, but with less code.
class AllowOneEditAtATime3 {
    private AtomicLong one;
    private AtomicLong two;

    public synchronized void setOne(AtomicLong newOne) {
        one = newOne;
    }

    public synchronized void setTwo(AtomicLong newTwo) {
        two = newTwo;
    }
}
Any of the above would work, as would other synchronization mechanisms. As with all things, there are multiple ways you could solve the problem.
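To tie this back to the original ConcurrentMap example, here is a minimal usage sketch (the Demo class name and the computeIfAbsent call are my own illustration, assuming the classes live in the same package): each map value is an AllowOneEditAtATime3, and because Solution #3 synchronizes on this, a caller can hold that same lock around both setters to update the two fields as one atomic step.
import java.time.LocalDate;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

public class Demo {
    public static void main(String[] args) {
        ConcurrentMap<LocalDate, AllowOneEditAtATime3> ex = new ConcurrentHashMap<>();

        // One shared entry per date; computeIfAbsent is atomic on a ConcurrentHashMap.
        AllowOneEditAtATime3 entry =
                ex.computeIfAbsent(LocalDate.now(), d -> new AllowOneEditAtATime3());

        // Update both fields as one atomic step: hold the entry's own monitor
        // (the same lock used by the synchronized setters) around both calls.
        synchronized (entry) {
            entry.setOne(new AtomicLong(1));
            entry.setTwo(new AtomicLong(2));
        }
    }
}
This works because intrinsic locks are reentrant: the synchronized setters can be entered while the caller already holds the entry's monitor.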
For additional reading,
the Concurrency lesson (in Java Tutorials) has good info
and might be worth your time.
The Java memory model guarantees a happens-before relationship between an object's construction and finalizer:
There is a happens-before edge from the end of a constructor of an
object to the start of a finalizer (§12.6) for that object.
As well as the constructor and the initialization of final fields:
An object is considered to be completely initialized when its
constructor finishes. A thread that can only see a reference to an
object after that object has been completely initialized is guaranteed
to see the correctly initialized values for that object's final
fields.
There's also a guarantee about volatile fields, since there's a happens-before relationship between writes to and subsequent reads of such fields:
A write to a volatile field (§8.3.1.4) happens-before every subsequent
read of that field.
But what about regular, good old non-volatile fields? I've seen a lot of multi-threaded code that doesn't bother creating any sort of memory barrier after object construction with non-volatile fields. But I've never seen or heard of any issues because of it and I wasn't able to recreate such partial construction myself.
Do modern JVMs just put memory barriers after construction? Avoid reordering around construction? Or was I just lucky? If it's the latter, is it possible to write code that reproduces partial construction at will?
Edit:
To clarify, I'm talking about the following situation. Say we have a class:
public class Foo {
    public int bar = 0;

    public Foo() {
        this.bar = 5;
    }
    ...
}
And some Thread T1 instantiates a new Foo instance:
Foo myFoo = new Foo();
Then passes the instance to some other thread, which we'll call T2:
Thread t = new Thread(() -> {
if (myFoo.bar == 5){
....
}
});
t.start();
T1 performed two writes that are interesting to us:
T1 wrote the value 5 to bar of the newly instantiated myFoo
T1 wrote the reference to the newly created object to the myFoo variable
For T1, we get a guarantee that write #1 happened-before write #2:
Each action in a thread happens-before every action in that thread
that comes later in the program's order.
But as far as T2 is concerned, the Java memory model offers no such guarantee. Nothing prevents it from seeing the writes in the opposite order. So it could see a fully built Foo object, but with the bar field equal to 0.
Edit2:
I took a second look at the example above a few months after writing it. That code is actually guaranteed to work correctly, since T2 was started after T1's writes. That makes it an incorrect example for the question I wanted to ask. The fix is to assume that T2 is already running while T1 performs the write. Say T2 is reading myFoo in a loop, like so:
Foo myFoo = null; // (for this to actually compile, myFoo would need to be a field rather than a local captured by the lambda)

Thread t2 = new Thread(() -> {
    for (;;) {
        if (myFoo != null && myFoo.bar == 5) {
            ...
        }
        ...
    }
});
t2.start();

myFoo = new Foo(); // The creation of Foo happens after t2 is already running
Taking your example as the question itself - the answer is yes, that is entirely possible. The initialized fields are guaranteed to be visible only to the constructing thread, as you quoted; making them visible to other threads requires safe publication (but I bet you already knew about this).
The fact that you are not seeing this via experimentation is that, AFAIK, x86 (being a strong memory model) does not re-order stores anyway, so unless the JIT re-ordered the stores that T1 did, you can't observe it. But that is playing with fire, literally; see this question and the follow-up (it's close to the same) from someone who (not sure if true) lost 12 million worth of equipment over it.
The JLS guarantees only a few ways to achieve this visibility. And note it's not the other way around: the JLS will not tell you when this breaks, it tells you when it works.
1) Final field semantics. Notice how the example shows that each field has to be final - even if, under the current implementation, a single one would suffice - and that two memory barriers are inserted (when finals are used) after the constructor: LoadStore and StoreStore. (A short sketch of this option follows below.)
2) volatile fields (and, implicitly, the AtomicXXX classes); I think this one needs no explanation, and it seems you already quoted it.
3) Static initializers - well, this one should be fairly obvious IMO.
4) Some locking involved - this should be obvious too; it follows from the happens-before rule.
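As a minimal sketch of option 1 (final field semantics): here the question's Foo is rewritten so that any thread that obtains a reference to the object is guaranteed to see bar == 5, with no volatile and no locking, provided the constructor does not leak "this". SafeFoo is an illustrative name, not from the question.
public class SafeFoo {
    // final field: the JMM guarantees its correctly initialized value is
    // visible to any thread that sees a reference to this object after
    // construction completes (as long as "this" does not escape the constructor)
    private final int bar;

    public SafeFoo() {
        this.bar = 5;
    }

    public int getBar() {
        return bar;
    }
}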
But anecdotal evidence suggests that it doesn't happen in practice
To see this issue you have to avoid using any memory barriers; e.g. using a thread-safe collection of any kind, or even a System.out.println, can prevent the problem from occurring.
I have seen a problem with this before, though a simple test I just wrote for Java 8 update 161 on x64 didn't show it.
It seems there is no synchronization during object construction.
The JLS doesn't require it, nor was I able to produce any sign of it in code. It is, however, possible to demonstrate the opposite.
Running the following code:
public class Main {
    public static void main(String[] args) throws Exception {
        new Thread(() -> {
            while (true) {
                new Demo(1, 2);
            }
        }).start();
    }
}

class Demo {
    int d1, d2;

    Demo(int d1, int d2) {
        this.d1 = d1;
        new Thread(() -> System.out.println(Demo.this.d1 + " " + Demo.this.d2)).start();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        this.d2 = d2;
    }
}
The output would continuously show 1 0, proving that the created thread was able to access data of a partially created object.
However, if we synchronized this:
Demo(int d1, int d2) {
    synchronized (Demo.class) {
        this.d1 = d1;
        new Thread(() -> {
            synchronized (Demo.class) {
                System.out.println(Demo.this.d1 + " " + Demo.this.d2);
            }
        }).start();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        this.d2 = d2;
    }
}
The output is 1 2, showing that the newly created thread does in fact wait for the lock, as opposed to the unsynchronized example.
Related: Why can't constructors be synchronized?
I've got a few questions about threads in Java. Here is the code:
TestingThread class:
public class TestingThread implements Runnable {
    Thread t;
    volatile boolean pause = true;
    String msg;

    public TestingThread() {
        t = new Thread(this, "Testing thread");
    }

    public void run() {
        while (pause) {
            // wait
        }
        System.out.println(msg);
    }

    public boolean isPause() {
        return pause;
    }

    public void initMsg() {
        msg = "Thread death";
    }

    public void setPause(boolean pause) {
        this.pause = pause;
    }

    public void start() {
        t.start();
    }
}
And main thread class:
public class Main {
    public static void main(String[] args) {
        TestingThread testingThread = new TestingThread();
        testingThread.start();
        testingThread.initMsg();
        testingThread.setPause(false);
    }
}
Question list:
Should t be volatile?
Should msg be volatile?
Should setPause() be synchronized?
Is this a good example of good thread structure?
You have hit quite a subtlety with your question number 2.
In your very specific case, you:
first write msg from the main thread;
then write the volatile pause from the main thread;
then read the volatile pause from the child thread;
then read msg from the child thread.
Therefore you have transitively established a happens-before relationship between the write and the read of msg, so msg itself does not have to be volatile.
In real-life code, however, you should avoid depending on such subtle behavior: better overapply volatile and sleep calmly.
Here are some relevant quotes from the Java Language Specification:
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
If an action x synchronizes-with a following action y, then we also have hb(x, y).
Note that, in my list of actions,
1 comes before 2 in program order;
same for 3 and 4;
2 synchronizes-with 3.
As for your other questions,
ad 1: t doesn't have to be volatile because it's written to prior to thread creation and never mutated later. Starting a thread induces a happens-before on its own;
ad 3: setPause does not have to be synchronized because all it does is set the volatile var.
> Should msg be volatile?
Yes. Does it have to be in this example? No. But I urge you to use it anyway, as the code's correctness becomes much clearer ;) Please note that I am assuming we are discussing Java 5 or later; before then, volatile was broken anyway.
The tricky part to understand is why this example can get away without msg being declared as volatile.
Consider this part of main(), in order:
testingThread.start(); // starts the other thread
testingThread.initMsg(); // the other thread may or may not be in the
// while loop by now msg may or may not be
// visible to the testingThread yet
// the important thing to note is that either way
// testingThread cannot leave its while loop yet
testingThread.setPause(false); // after this volatile, all data earlier
// will be visible to other threads.
// Thus even though msg was not declared
// volatile, it will piggyback on pause's
// use of volatile; as described [here](http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#volatile)
// and now the testingThread can leave
// its while loop
So if we now consider the testingThread
while (pause) { // pause is volatile, so it will see the change as soon
// as it is made
//wait
}
System.out.println(msg); // this line cannot be reached until the value
// of pause has been set to false by the main
// method. Which under the post Java5
// semantics will guarantee that msg will
// have been updated too.
> Should t be volatile?
It does not matter, but I would suggest making it private final.
> Should setPause() be synchronized?
Before Java 5, yes. Since Java 5, reading a volatile has the same memory barrier as entering a synchronized block, and writing to a volatile has the same memory barrier as leaving one. Thus, unless you need the scoping of a synchronized block (which in this case you do not), you are fine with volatile.
The changes to volatile in Java 5 are documented by the author of the change here.
1&2
Volatile can be treated somewhat like "synchronization on a variable": the mechanism is different, but the result is similar in that reads are guaranteed to be consistent.
3.
I feel it does not need to be, since this.pause = pause is a single atomic write to a volatile field.
4.
It is a bad idea to do any kind of while (true) { /* do nothing */ } loop, which results in busy waiting; putting a Thread.sleep inside may help just a little bit. Please refer to http://en.wikipedia.org/wiki/Busy_waiting
A more appropriate way to do something like "wait until being awakened" is to use the monitor object (every Object in Java is a monitor object), or to use a Condition object along with a Lock; a sketch follows below. You may want to refer to http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html
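For completeness, here is a hedged sketch of the wait/notify variant of the question's pause flag (class and method names are illustrative, not from the question): the worker blocks on the monitor instead of spinning, and the writer sets msg, clears the flag and wakes it under the same lock.
public class PausableTask implements Runnable {
    private final Object monitor = new Object();
    private boolean pause = true; // guarded by monitor
    private String msg;           // guarded by monitor

    @Override
    public void run() {
        synchronized (monitor) {
            while (pause) {
                try {
                    monitor.wait(); // releases the monitor while waiting
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
            System.out.println(msg);
        }
    }

    public void unpause(String msg) {
        synchronized (monitor) {
            this.msg = msg;
            this.pause = false;
            monitor.notifyAll(); // wakes the waiting thread
        }
    }
}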
Also, I don't think it is a good idea to have a Thread field inside your custom Runnable. Please refer to Seelenvirtuose's comment.
A warning shows up every time I synchronize on a non-final class field. Here is the code:
public class X
{
    private Object o;

    public void setO(Object o)
    {
        this.o = o;
    }

    public void x()
    {
        synchronized (o) // synchronization on a non-final field
        {
        }
    }
}
So I changed the code in the following way:
public class X
{
    private final Object o;

    public X()
    {
        o = new Object();
    }

    public void x()
    {
        synchronized (o)
        {
        }
    }
}
I am not sure the above code is the proper way to synchronize on a non-final class field. How can I synchronize on a non-final field?
First of all, I encourage you to really try hard to deal with concurrency issues on a higher level of abstraction, i.e. solving it using classes from java.util.concurrent such as ExecutorServices, Callables, Futures etc.
That being said, there's nothing wrong with synchronizing on a non-final field per se. You just need to keep in mind that if the object reference changes, the same section of code may be run in parallel. I.e., if one thread runs the code in the synchronized block and someone calls setO(...), another thread can run the same synchronized block on the same instance concurrently.
Synchronize on the object which you need exclusive access to (or, better yet, an object dedicated to guarding it).
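A minimal sketch of that last suggestion, applied to the question's class X (the field name guard is my own): the mutable field may be reassigned freely, but every access goes through one final lock object, so the synchronized sections stay consistent.
public class X {
    private final Object guard = new Object(); // dedicated lock, never reassigned
    private Object o;

    public void setO(Object o) {
        synchronized (guard) {
            this.o = o;
        }
    }

    public void x() {
        synchronized (guard) {
            // work with o here; only one thread at a time,
            // no matter how often setO() swaps the reference
        }
    }
}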
It's really not a good idea - because your synchronized blocks are no longer really synchronized in a consistent way.
Assuming the synchronized blocks are meant to be ensuring that only one thread accesses some shared data at a time, consider:
Thread 1 enters the synchronized block. Yay - it has exclusive access to the shared data...
Thread 2 calls setO()
Thread 3 (or still 2...) enters the synchronized block. Eek! It thinks it has exclusive access to the shared data, but thread 1 is still furtling with it...
Why would you want this to happen? Maybe there are some very specialized situations where it makes sense... but you'd have to present me with a specific use case (along with ways of mitigating the sort of scenario I've given above) before I'd be happy with it.
I agree with one of John's comments: you must always use a final lock dummy while accessing a non-final variable, to prevent inconsistencies in case the variable's reference changes. So in any case, and as a first rule of thumb:
Rule#1: If a field is non-final, always use a (private) final lock dummy.
Reason #1: You hold the lock and change the variable's reference yourself. Another thread waiting outside the synchronized block will then be able to enter the guarded block.
Reason #2: You hold the lock and another thread changes the variable's reference. The result is the same: another thread can enter the guarded block.
But when using a final lock dummy, there is another problem: you might get stale data, because your non-final object is only synchronized with main memory when you synchronize on that object itself. So, as a second rule of thumb:
Rule#2: When locking a non-final object you always need to do both: use a final lock dummy and the lock of the non-final object, for the sake of memory synchronisation. (The only alternative is declaring all fields of the object volatile!)
These locks are also called "nested locks". Note that you must always acquire them in the same order, otherwise you will get a deadlock:
public class X {
    private final Object LOCK = new Object();
    private Object o;

    public void setO(Object o) {
        this.o = o;
    }

    public void x() {
        synchronized (LOCK) {
            synchronized (o) {
                // do something with o...
            }
        }
    }
}
As you can see, I write the two locks directly after one another, because they always belong together. Like this, you could even nest 10 locks:
synchronized (LOCK1) {
    synchronized (LOCK2) {
        synchronized (LOCK3) {
            synchronized (LOCK4) {
                // entering the locked space
            }
        }
    }
}
Note that this code won't break if another thread just acquires an inner lock like synchronized (LOCK3). But it will break if another thread calls something like this:
synchronized (LOCK4) {
    synchronized (LOCK1) { // deadlock!
        synchronized (LOCK3) {
            synchronized (LOCK2) {
                // will never enter here...
            }
        }
    }
}
There is only one workaround for such nested locks when handling non-final fields:
Rule #2 - Alternative: Declare all fields of the object as volatile. (I won't talk here about the disadvantages of doing this, e.g. preventing any storage in x-level caches even for reads, and so on.)
So therefore aioobe is quite right: Just use java.util.concurrent. Or begin to understand everything about synchronisation and do it by yourself with nested locks. ;)
For more details why synchronisation on non-final fields breaks, have a look into my test case: https://stackoverflow.com/a/21460055/2012947
And for more details why you need synchronized at all due to RAM and caches have a look here: https://stackoverflow.com/a/21409975/2012947
I'm not really seeing the correct answer here, which is: it's perfectly alright to do it.
I'm not even sure why it's a warning, there is nothing wrong with it. The JVM makes sure that you get some valid object back (or null) when you read a value, and you can synchronize on any object.
If you plan on actually changing the lock while it's in use (as opposed to, e.g., changing it from an init method before you start using it), you have to make the variable that you plan to change volatile. Then all you need to do is synchronize on both the old and the new object, and you can safely change the value:
public volatile Object lock;
...

synchronized (lock) {
    synchronized (newObject) {
        lock = newObject;
    }
}
There. It's not complicated; writing code with locks (mutexes) is actually quite easy. Writing code without them (lock-free code) is what's hard.
EDIT: So this solution (as suggested by Jon Skeet) might have an issue with the atomicity of the implementation of synchronized(object){} while the object reference is changing. I asked separately, and according to Mr. erickson it is not thread safe - see: Is entering synchronized block atomic?. So take it as an example of how NOT to do it - with links explaining why ;)
See the code below for how it would work if synchronized() were atomic:
public class Main {
    static class Config {
        char a = '0';
        char b = '0';

        public void log() {
            synchronized (this) {
                System.out.println("" + a + "," + b);
            }
        }
    }

    static Config cfg = new Config();

    static class Doer extends Thread {
        char id;

        Doer(char id) {
            this.id = id;
        }

        public void mySleep(long ms) {
            try { Thread.sleep(ms); } catch (Exception ex) { ex.printStackTrace(); }
        }

        public void run() {
            System.out.println("Doer " + id + " beg");
            if (id == 'X') {
                synchronized (cfg) {
                    cfg.a = id;
                    mySleep(1000);
                    // do not forget to put synchronized(cfg) around setting the new cfg - otherwise the following will happen:
                    // here it would be modifying a different cfg (because Y will change it).
                    // Another problem would be that the new cfg would be modified in parallel by Z, because synchronized is applied on the new object
                    cfg.b = id;
                }
            }
            if (id == 'Y') {
                mySleep(333);
                synchronized (cfg) // comment this out and you will see inconsistency in the log - if you keep it, I think all is ok
                {
                    cfg = new Config(); // introduce new configuration
                    // be aware - don't expect to be synchronized on the new cfg here!
                    // Z might already hold a lock on it
                }
            }
            if (id == 'Z') {
                mySleep(666);
                synchronized (cfg) {
                    cfg.a = id;
                    mySleep(100);
                    cfg.b = id;
                }
            }
            System.out.println("Doer " + id + " end");
            cfg.log();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Doer X = new Doer('X');
        Doer Y = new Doer('Y');
        Doer Z = new Doer('Z');
        X.start();
        Y.start();
        Z.start();
    }
}
AtomicReference suits your requirement.
From the Java documentation about the atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef =
new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
In the above example, replace String with your own object type.
Related SE question:
When to use AtomicReference in Java?
If o never changes for the lifetime of an instance of X, the second version is better style irrespective of whether synchronization is involved.
Now, whether there's anything wrong with the first version is impossible to answer without knowing what else is going on in that class. I would tend to agree with the compiler that it does look error-prone (I won't repeat what the others have said).
Just adding my two cents: I had this warning when I used a component that is instantiated through a designer, so its field cannot really be final, because the constructor cannot take parameters. In other words, I had a quasi-final field without the final keyword.
I think that's why it is just a warning: you are probably doing something wrong, but it might be right as well.
Can anyone tell me the advantage of a synchronized method over a synchronized block with an example? Thanks.
There is not a clear advantage of using a synchronized method over a synchronized block.
Perhaps the only one (but I wouldn't call it an advantage) is that you don't need to include the object reference this.
Method:
public synchronized void method() { // blocks "this" from here....
...
...
...
} // to here
Block:
public void method() {
synchronized( this ) { // blocks "this" from here ....
....
....
....
} // to here...
}
See? No advantage at all.
Blocks do have advantages over methods though, mostly in flexibility, because you can use another object as the lock, whereas synchronizing the method locks on the entire object.
Compare:
// locks the whole object
...
private synchronized void someInputRelatedWork() {
...
}
private synchronized void someOutputRelatedWork() {
...
}
vs.
// Using specific locks
Object inputLock = new Object();
Object outputLock = new Object();
private void someInputRelatedWork() {
synchronized(inputLock) {
...
}
}
private void someOutputRelatedWork() {
synchronized(outputLock) {
...
}
}
Also if the method grows you can still keep the synchronized section separated:
private void method() {
... code here
... code here
... code here
synchronized( lock ) {
... very few lines of code here
}
... code here
... code here
... code here
... code here
}
The only real difference is that a synchronized block can choose which object it synchronizes on. A synchronized method can only use 'this' (or the corresponding Class instance for a synchronized class method). For example, these are semantically equivalent:
synchronized void foo() {
...
}
void foo() {
synchronized (this) {
...
}
}
The latter is more flexible since it can compete for the associated lock of any object, often a member variable. It's also more granular because you could have concurrent code executing before and after the block but still within the method. Of course, you could just as easily use a synchronized method by refactoring the concurrent code into separate non-synchronized methods. Use whichever makes the code more comprehensible.
Synchronized Method
Pros:
Your IDE can indicate the synchronized methods.
The syntax is more compact.
Forces you to split the synchronized blocks into separate methods.
Cons:
Synchronizes on this and so makes it possible for outsiders to synchronize on it too.
It is harder to move code outside the synchronized block.
Synchronized block
Pros:
Allows using a private variable for the lock and so forcing the lock to stay inside the class.
Synchronized blocks can be found by searching references to the variable.
Cons:
The syntax is more complicated and so makes the code harder to read.
Personally I prefer using synchronized methods with classes focused only to the thing needing synchronization. Such class should be as small as possible and so it should be easy to review the synchronization. Others shouldn't need to care about synchronization.
The main difference is that if you use a synchronized block you may lock on an object other than this, which allows you to be much more flexible.
Assume you have a message queue and multiple message producers and consumers. We don't want producers to interfere with each other, but the consumers should be able to retrieve messages without having to wait for the producers.
So we just create an object
Object writeLock = new Object();
And from now on, every time a producer wants to add a new message, we just lock on that:
synchronized(writeLock){
// do something
}
So consumers may still read, and producers will be locked.
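Here is a small sketch of that idea (the ConcurrentLinkedQueue and the class/method names are my assumptions, not part of the answer): producers serialize among themselves on writeLock, for example to keep a multi-step enqueue atomic, while consumers poll a thread-safe queue without taking any lock at all.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MessageBoard {
    private final Queue<String> messages = new ConcurrentLinkedQueue<>();
    private final Object writeLock = new Object();
    private long sequence = 0; // only touched by producers, under writeLock

    public void publish(String body) {
        synchronized (writeLock) {      // producers block each other here...
            messages.add(sequence++ + ": " + body);
        }
    }

    public String tryConsume() {        // ...but consumers never block here
        return messages.poll();         // null if nothing is queued
    }
}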
Synchronized method
Synchronized methods have two effects.
First, when one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.
Synchronized Statement
Unlike synchronized methods, synchronized statements must specify the object that provides the intrinsic lock. Most often I use this to synchronize access to a list or map when I don't want to block access to all methods of the object.
Q: Intrinsic Locks and Synchronization
Synchronization is built around an internal entity known as the intrinsic lock or monitor lock. (The API specification often refers to this entity simply as a "monitor.") Intrinsic locks play a role in both aspects of synchronization: enforcing exclusive access to an object's state and establishing happens-before relationships that are essential to visibility.
Every object has an intrinsic lock associated with it. By convention, a thread that needs exclusive and consistent access to an object's fields has to acquire the object's intrinsic lock before accessing them, and then release the intrinsic lock when it's done with them. A thread is said to own the intrinsic lock between the time it has acquired the lock and released the lock. As long as a thread owns an intrinsic lock, no other thread can acquire the same lock. The other thread will block when it attempts to acquire the lock.
package test;

public class SynchTest implements Runnable {
    private int c = 0;

    public static void main(String[] args) {
        new SynchTest().test();
    }

    public void test() {
        // Create the objects with the run() method
        Runnable runnable = new SynchTest();
        Runnable runnable2 = new SynchTest();
        // Create the threads, supplying them with the runnable object
        Thread thread = new Thread(runnable, "thread-1");
        Thread thread2 = new Thread(runnable, "thread-2");
        // Here the key point is passing the same object; if you pass runnable2 for thread2,
        // then it's not applicable for this synchronization test and won't give the expected
        // output. A synchronized method means "it is not possible for two invocations
        // of synchronized methods on the same object to interleave".

        // Start the threads
        thread.start();
        thread2.start();
    }

    public synchronized void increment() {
        System.out.println("Begin thread " + Thread.currentThread().getName());
        System.out.println(this.hashCode() + " Value of C = " + c);
        // If we uncomment this to use a synchronized block instead, the result would be different
        // synchronized(this) {
        for (int i = 0; i < 9999999; i++) {
            c += i;
        }
        // }
        System.out.println("End thread " + Thread.currentThread().getName());
    }

    // public synchronized void decrement() {
    //     System.out.println("Decrement " + Thread.currentThread().getName());
    // }

    public int value() {
        return c;
    }

    @Override
    public void run() {
        this.increment();
    }
}
Cross check different outputs with synchronized method, block and without synchronization.
Note: static synchronized methods and blocks work on the Class object.
public class MyClass {
    // locks MyClass.class
    public static synchronized void foo() {
        // do something
    }

    // the equivalent explicit form (renamed here so both versions can coexist in one class)
    public static void fooBlock() {
        synchronized (MyClass.class) {
            // do something
        }
    }
}
When the Java compiler converts your source code to byte code, it handles synchronized methods and synchronized blocks very differently.
When the JVM executes a synchronized method, the executing thread identifies that the method's method_info structure has the ACC_SYNCHRONIZED flag set, then it automatically acquires the object's lock, calls the method, and releases the lock. If an exception occurs, the thread automatically releases the lock.
Synchronizing a block within a method, on the other hand, bypasses the JVM's built-in support for acquiring an object's lock and exception handling, and requires that the functionality be explicitly written in byte code. If you read the byte code for a method with a synchronized block, you will see more than a dozen additional operations to manage this functionality.
The following class shows what is generated for both a synchronized method and a synchronized block:
public class SynchronizationExample {
    private int i;

    public synchronized int synchronizedMethodGet() {
        return i;
    }

    public int synchronizedBlockGet() {
        synchronized (this) {
            return i;
        }
    }
}
The synchronizedMethodGet() method generates the following byte code:
0: aload_0
1: getfield
2: nop
3: iconst_m1
4: ireturn
And here's the byte code from the synchronizedBlockGet() method:
0: aload_0
1: dup
2: astore_1
3: monitorenter
4: aload_0
5: getfield
6: nop
7: iconst_m1
8: aload_1
9: monitorexit
10: ireturn
11: astore_2
12: aload_1
13: monitorexit
14: aload_2
15: athrow
One significant difference between a synchronized method and a block is that a synchronized block generally reduces the scope of the lock. As the scope of the lock is inversely proportional to performance, it's always better to lock only the critical section of code. One of the best examples of using a synchronized block is double-checked locking in the Singleton pattern, where instead of locking the whole getInstance() method we only lock the critical section of code that creates the Singleton instance. This improves performance drastically because locking is only required once or twice.
While using synchronized methods, you will need to take extra care if you mix static synchronized and non-static synchronized methods, as sketched below.
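A short sketch of that pitfall (Counter and the field names are illustrative): a static synchronized method locks the Class object, while an instance synchronized method locks this, so the two do not exclude each other.
public class Counter {
    private static int total; // guarded by Counter.class
    private int mine;         // guarded by the instance (this)

    public static synchronized void incrementTotal() {
        total++;              // lock held: Counter.class
    }

    public synchronized void incrementMine() {
        mine++;               // lock held: this - may run concurrently with incrementTotal()
    }
}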
Most often I use this to synchronize access to a list or map but I don't want to block access to all methods of the object.
In the following code one thread modifying the list will not block waiting for a thread that is modifying the map. If the methods were synchronized on the object then each method would have to wait even though the modifications they are making would not conflict.
private List<Foo> myList = new ArrayList<Foo>();
private Map<String,Bar> myMap = new HashMap<String,Bar>();

public void put( String s, Bar b ) {
    synchronized( myMap ) {
        myMap.put( s, b );
        // then something that may take a while, like a database access or RPC or notifying listeners
    }
}

public boolean hasKey( String s ) {
    synchronized( myMap ) {
        return myMap.containsKey( s );
    }
}

public void add( Foo f ) {
    synchronized( myList ) {
        myList.add( f );
        // then something that may take a while, like a database access or RPC or notifying listeners
    }
}

public Foo getMedianFoo() {
    Foo med = null;
    synchronized( myList ) {
        Collections.sort(myList);
        med = myList.get(myList.size()/2);
    }
    return med;
}
With synchronized blocks, you can have multiple synchronizers, so that multiple simultaneous but non-conflicting things can go on at the same time.
Synchronized methods can be checked using reflection API. This can be useful for testing some contracts, such as all methods in model are synchronized.
The following snippet prints all the synchronized methods of Hashtable:
// requires: import java.lang.reflect.Method; import java.lang.reflect.Modifier; import java.util.Hashtable;
for (Method m : Hashtable.class.getMethods()) {
    if (Modifier.isSynchronized(m.getModifiers())) {
        System.out.println(m);
    }
}
Important note on using the synchronized block: be careful what you use as the lock object!
The code snippet from user2277816 elsewhere in this thread illustrates this point, in that a reference to a string literal is used as the locking object.
Realize that string literals are automatically interned in Java, and you should begin to see the problem: every piece of code that synchronizes on the literal "lock" shares the same lock! This can easily lead to deadlocks with completely unrelated pieces of code.
It is not just String objects that you need to be careful with. Boxed primitives are also a danger, since autoboxing and the valueOf methods can reuse the same objects, depending on the value.
For more information see:
https://www.securecoding.cert.org/confluence/display/java/LCK01-J.+Do+not+synchronize+on+objects+that+may+be+reused
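To illustrate the point, here is a deliberately broken sketch (class names are mine): two unrelated components that each lock on the literal "lock" actually share a single interned String, and therefore a single monitor; a dedicated lock object avoids this.
class ComponentA {
    private final String lock = "lock";        // interned - same object as in ComponentB!
    void work() { synchronized (lock) { /* ... */ } }
}

class ComponentB {
    private final String lock = "lock";        // shares ComponentA's monitor by accident
    void work() { synchronized (lock) { /* ... */ } }
}

class ComponentC {
    private final Object lock = new Object();  // safe: a private, unshared lock
    void work() { synchronized (lock) { /* ... */ } }
}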
Often, using a lock at the method level is too coarse. Why lock up a piece of code that does not access any shared resources by locking up an entire method? Since each object has a lock, you can create dummy objects to implement block-level synchronization.
The block level is more efficient because it does not lock the whole method.
Here some example
Method Level
class MethodLevel {
    // shared among threads
    SharedResource x, y;

    public synchronized void method1() {
        // multiple threads can't access this at the same time
    }

    public synchronized void method2() {
        // multiple threads can't access this at the same time
    }

    public void method3() {
        // not synchronized
        // multiple threads can access this concurrently
    }
}
Block Level
class BlockLevel {
    // shared among threads
    SharedResource x, y;

    // dummy objects for locking
    Object xLock = new Object();
    Object yLock = new Object();

    public void method1() {
        synchronized (xLock) {
            // access x here - thread safe
        }

        // do something here, but don't use SharedResource x, y
        // because it will not be thread-safe

        synchronized (xLock) {
            synchronized (yLock) {
                // access x, y here - thread safe
            }
        }

        // do something here, but don't use SharedResource x, y
        // because it will not be thread-safe
    } // end of method1
}
[Edit]
Collections like Vector and Hashtable are synchronized, whereas ArrayList and HashMap are not; for those you need to use the synchronized keyword yourself or invoke one of the Collections synchronized wrapper methods:
Map myMap = Collections.synchronizedMap(new HashMap()); // single lock for the entire map
List myList = Collections.synchronizedList(new ArrayList()); // single lock for the entire list
The only difference: synchronized blocks allow granular locking, unlike synchronized methods.
Basically, synchronized blocks or methods are used to write thread-safe code by avoiding memory inconsistency errors.
This question is very old, and many things have changed during the last 7 years.
New programming constructs have been introduced for thread safety.
You can achieve thread safety by using the advanced concurrency API instead of synchronized blocks. This documentation page provides good programming constructs to achieve thread safety.
Lock Objects support locking idioms that simplify many concurrent applications.
Executors define a high-level API for launching and managing threads. Executor implementations provided by java.util.concurrent provide thread pool management suitable for large-scale applications.
Concurrent Collections make it easier to manage large collections of data, and can greatly reduce the need for synchronization.
Atomic Variables have features that minimize synchronization and help avoid memory consistency errors.
ThreadLocalRandom (in JDK 7) provides efficient generation of pseudorandom numbers from multiple threads.
A better replacement for synchronized is ReentrantLock, which uses the Lock API:
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities.
Example with locks:
class X {
    private final ReentrantLock lock = new ReentrantLock();
    // ...

    public void m() {
        lock.lock(); // block until condition holds
        try {
            // ... method body
        } finally {
            lock.unlock();
        }
    }
}
Refer to java.util.concurrent and java.util.concurrent.atomic packages too for other programming constructs.
Refer to this related question too:
Synchronization vs Lock
A synchronized method locks the whole object (it uses this as the lock).
A synchronized block locks only the specific object you name.
In general these are mostly the same other than being explicit about the object's monitor that's being used vs the implicit this object. One downside of synchronized methods that I think is sometimes overlooked is that in using the "this" reference to synchronize on you are leaving open the possibility of external objects locking on the same object. That can be a very subtle bug if you run into it. Synchronizing on an internal explicit Object or other existing field can avoid this issue, completely encapsulating the synchronization.
As already said here, a synchronized block can use a user-defined variable as the lock object, whereas a synchronized method uses only this. And of course you can control exactly which parts of your method are synchronized.
But everyone says there is no difference between a synchronized method and a block that covers the whole method using this as the lock object. That is not quite true: the difference is in the byte code generated in the two situations. In the synchronized-block case, a local variable must be allocated to hold the reference to this, and as a result the method is slightly larger (not relevant if you have only a small number of methods).
More detailed explanation of the difference you can find here:
http://www.artima.com/insidejvm/ed2/threadsynchP.html
In the case of synchronized methods, the lock will be acquired on an object implicitly (the instance, or the Class object for static methods). But if you go with a synchronized block, you have the option to specify the object on which the lock will be acquired.
Example:
class Example {
    String test = "abc";

    // lock will be acquired on the String object referenced by test
    // (the block is wrapped in a method here so that the example compiles)
    public void testBlock() {
        synchronized (test) {
            // do something
        }
    }

    // lock will be acquired on the Example object (this)
    public synchronized void testMethod() {
        // do some thing
    }
}
I know this is an old question, but with my quick read of the responses here, I didn't really see anyone mention that at times a synchronized method may be the wrong lock.
From Java Concurrency In Practice (pg. 72):
public class ListHelper<E> {
    public List<E> list = Collections.synchronizedList(new ArrayList<>());
    ...

    public synchronized boolean putIfAbsent(E x) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}
The above code has the appearance of being thread-safe. However, in reality it is not. In this case the lock is obtained on the instance of the helper class, yet the list can still be modified by another thread that does not use that method. The correct approach would be to use
public boolean putIfAbsent(E x) {
    synchronized (list) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);
        }
        return absent;
    }
}
The above code blocks all threads trying to modify list (via the synchronized wrapper) until the synchronized block has completed.
As a practical matter, the advantage of synchronized methods over synchronized blocks is that they are more idiot-resistant; because you can't choose an arbitrary object to lock on, you can't misuse the synchronized method syntax to do stupid things like locking on a string literal or locking on the contents of a mutable field that gets changed out from under the threads.
On the other hand, with synchronized methods you can't protect the lock from getting acquired by any thread that can get a reference to the object.
So using synchronized as a modifier on methods is better at protecting your cow-orkers from hurting themselves, while using synchronized blocks in conjunction with private final lock objects is better at protecting your own code from the cow-orkers.
From a Java specification summary:
http://www.cs.cornell.edu/andru/javaspec/17.doc.html
The synchronized statement (§14.17) computes a reference to an object;
it then attempts to perform a lock action on that object and does not
proceed further until the lock action has successfully completed. ...
A synchronized method (§8.4.3.5) automatically performs a lock action
when it is invoked; its body is not executed until the lock action has
successfully completed. If the method is an instance method, it
locks the lock associated with the instance for which it was invoked
(that is, the object that will be known as this during execution of
the body of the method). If the method is static, it locks the
lock associated with the Class object that represents the class in
which the method is defined. ...
Based on these descriptions, I would say most previous answers are correct, and a synchronized method might be particularly useful for static methods, where you would otherwise have to figure out how to get the "Class object that represents the class in which the method was defined."
Edit: I originally thought these were quotes of the actual Java spec. Clarified that this page is just a summary/explanation of the spec
TLDR; Neither use the synchronized modifier nor the synchronized(this){...} expression; instead use synchronized(myLock){...}, where myLock is a final instance field holding a private object.
The differences between using the synchronized modifier on the method declaration and the synchronized(..){ } expression in the method body are these:
The synchronized modifier specified on the method's signature
is visible in the generated JavaDoc,
is programmatically determinable via reflection when testing a method's modifier for Modifier.SYNCHRONIZED,
requires less typing and indention compared to synchronized(this) { .... }, and
(depending on your IDE) is visible in the class outline and code completion,
uses the this object as the lock when declared on a non-static method, or the enclosing class's Class object when declared on a static method.
The synchronized(...){...} expression allows you
to only synchronize the execution of parts of a method's body,
to be used within a constructor or a (static) initialization block,
to choose the lock object which controls the synchronized access.
However, using the synchronized modifier, or synchronized(...) {...} with this as the lock object (as in synchronized(this) {...}), has the same disadvantage: both use the object's own instance as the lock to synchronize on. This is dangerous because not only the object itself but any other external object/code that holds a reference to it can also use it as a synchronization lock, with potentially severe side effects (performance degradation and deadlocks).
Therefore best practice is to use neither the synchronized modifier nor the synchronized(...) expression in conjunction with this as the lock object, but rather a lock object private to this object. For example:
public class MyService {
    private final Object lock = new Object();

    public void doThis() {
        synchronized (lock) {
            // do code that requires synchronous execution
        }
    }

    public void doThat() {
        synchronized (lock) {
            // do code that requires synchronous execution
        }
    }
}
You can also use multiple lock objects but special care needs to be taken to ensure this does not result in deadlocks when used nested.
public class MyService {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();

    public void doThis() {
        synchronized (lock1) {
            synchronized (lock2) {
                // code here is guaranteed not to be executed at the same time
                // as the synchronized code in doThat() and doMore().
            }
        }
    }

    public void doThat() {
        synchronized (lock1) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doMore() may execute concurrently.
        }
    }

    public void doMore() {
        synchronized (lock2) {
            // code here is guaranteed not to be executed at the same time
            // as the synchronized code in doThis().
            // doThat() may execute concurrently.
        }
    }
}
I suppose this question is about the difference between a thread-safe Singleton and lazy initialization with double-checked locking. I always refer to this article when I need to implement some specific singleton.
Well, this is a thread-safe Singleton:
// Java program to create a thread-safe
// Singleton class
public class GFG
{
    // private instance, so that it can be
    // accessed only by the getInstance() method
    private static GFG instance;

    private GFG()
    {
        // private constructor
    }

    // synchronized method to control simultaneous access
    synchronized public static GFG getInstance()
    {
        if (instance == null)
        {
            // if instance is null, initialize
            instance = new GFG();
        }
        return instance;
    }
}
Pros:
Lazy initialization is possible.
It is thread safe.
Cons:
getInstance() method is synchronized so it causes slow performance as multiple threads can’t access it simultaneously.
This is a Lazy initialization with Double check locking:
// Java code to explain double-checked locking
public class GFG
{
    // private instance, so that it can be
    // accessed only by the getInstance() method;
    // note: volatile is required for double-checked locking to be correct under the JMM
    private static volatile GFG instance;

    private GFG()
    {
        // private constructor
    }

    public static GFG getInstance()
    {
        if (instance == null)
        {
            // synchronized block to remove overhead
            synchronized (GFG.class)
            {
                if (instance == null)
                {
                    // if instance is null, initialize
                    instance = new GFG();
                }
            }
        }
        return instance;
    }
}
Pros:
Lazy initialization is possible.
It is also thread safe.
Performance reduced because of synchronized keyword is overcome.
Cons:
First time, it can affect performance.
As the cons of the double-checked locking method are bearable, it can be
used for high-performance multi-threaded applications.
Please refer to this article for more details:
https://www.geeksforgeeks.org/java-singleton-design-pattern-practices-examples/
Synchronizing with threads.
1) NEVER use synchronized(this) inside a thread class the way it is done here - it doesn't work. Synchronizing on (this) uses the current thread object (each RaceThread instance below) as the lock. Since each thread is a different object from the other threads, there is NO shared lock and therefore no coordination.
2) Tests of the code show that in Java 1.6 on a Mac, synchronizing the update() method does not work either, for the same reason: the method locks each RaceThread instance separately.
3) synchronized(lockObj), where lockObj is a common object shared by all the threads synchronizing on it, will work.
4) ReentrantLock.lock() and .unlock() work. See the Java tutorials for this.
The following code shows these points. It also contains the thread-safe Vector, which can be substituted for the ArrayList to show that many threads adding to a Vector do not lose any information, while the same code with an ArrayList can lose information.
0) The current code shows loss of information due to race conditions.
A) Comment out the currently labeled A line and uncomment the A line above it, then run; the method version still loses data, though it shouldn't.
B) Reverse step A, uncomment B and the // end block }. Then run to see that no data is lost.
C) Comment out B, uncomment C. Run and see that synchronizing on (this) loses data, as expected.
I don't have time to complete all the variations; hope this helps.
If synchronizing on (this), or the method synchronization, works for you, please state what version of Java and OS you tested. Thank you.
import java.util.*;

/** RaceCondition - Shows that when multiple threads compete for resources,
    thread one may grab the resource expecting to update a particular
    area but is removed from the CPU before finishing. Thread one still
    points to that resource. Then thread two grabs that resource and
    completes the update. Then thread one gets to complete the update,
    which overwrites thread two's work.
    DEMO: 1) Run as is - see missing counts from the race condition. Run several times, values change.
          2) Uncomment "synchronized(countLock){ }" - see counts work.
             Synchronized creates a lock on that block of code; no other threads can
             execute code within a block that another thread has a lock on.
          3) Comment ArrayList, uncomment Vector - see no loss in the collection.
             Vectors work like ArrayList, but Vectors are "Thread Safe".
    May use this code as long as attribution to the author remains intact.
    /mf
*/
public class RaceCondition {
    private ArrayList<Integer> raceList = new ArrayList<Integer>(); // simple add(#)
    // private Vector<Integer> raceList = new Vector<Integer>();    // simple add(#)

    private String countLock = "lock"; // Object used for locking the raceCount
                                       // (note: a String literal is interned, so any unrelated code
                                       // locking on "lock" would share this lock - see the warning
                                       // about string-literal locks earlier in this thread)
    private int raceCount = 0;         // simply add 1 to this counter
    private int MAX = 10000;           // Do this 10,000 times
    private int NUM_THREADS = 100;     // Create 100 threads

    public static void main(String[] args) {
        new RaceCondition();
    }

    public RaceCondition() {
        ArrayList<Thread> arT = new ArrayList<Thread>();

        // Create thread objects, add them to an array list
        for (int i = 0; i < NUM_THREADS; i++) {
            Thread rt = new RaceThread(); // i );
            arT.add(rt);
        }

        // Start all threads at once.
        for (Thread rt : arT) {
            rt.start();
        }

        // Wait for all threads to finish before we can print totals created by threads
        for (int i = 0; i < NUM_THREADS; i++) {
            try { arT.get(i).join(); }
            catch (InterruptedException ie) { System.out.println("Interrupted thread " + i); }
        }

        // All threads finished, print the summary information.
        // (Try to print this information without the join loop above)
        System.out.printf("\nRace condition, should have %,d. Really have %,d in array, and count of %,d.\n",
                MAX * NUM_THREADS, raceList.size(), raceCount);
        System.out.printf("Array lost %,d. Count lost %,d\n",
                MAX * NUM_THREADS - raceList.size(), MAX * NUM_THREADS - raceCount);
    } // end RaceCondition constructor

    class RaceThread extends Thread {
        public void run() {
            for (int i = 0; i < MAX; i++) {
                try {
                    update(i);
                } // These catches show when one thread steps on another's values
                catch (ArrayIndexOutOfBoundsException ai) { System.out.print("A"); }
                catch (OutOfMemoryError oome) { System.out.print("O"); }
            }
        }

        // So we don't lose counts, we need to synchronize on some object, not a primitive.
        // Created "countLock" to show how this can work.
        // Comment out the synchronized and ending {, see that we lose counts.
        // public synchronized void update(int i){ // use A
        public void update(int i) {                // remove this when adding A
            // synchronized(countLock){            // or B
            // synchronized(this){                 // or C
            raceCount = raceCount + 1;
            raceList.add(i); // use Vector
            // }                                   // end block for B or C
        } // end update
    } // end RaceThread inner class
} // end RaceCondition outer class