I'd like to better understand the mechanics of what actually happens when a thread enters a synchronized (this) block versus a synchronized (someObjectReference) block.
synchronized (this) {
// Statement 1
// Statement 2
}
synchronized (someObjectReference) {
// Statement 1
// Statement 2
}
As I understand it (am I missing something? am I wrong?):
In both cases, only one thread can access the synchronized block at a time.
When we're synchronizing on someObjectReference:
Only one thread at a time may access/modify it in this block.
Only one thread at a time may enter this block.
What other mechanics are there, please?
synchronized (someObjectReference) {
// Statement 1 dealing with someObjectReference
// Statement 2 not dealing with someObjectReference
}
In the example above, does it make any sense to put statements that don't deal with the mutex inside the synchronized block?
There's only a difference when you mix the two together.
The single, basic rule of synchronized(foo) is that only one thread can be in a synchronized(foo) block for the same foo at any given time. That's it. (The only caveat maybe worth mentioning is that a thread can be inside several nested synchronized(foo) blocks for the same foo.)
If some code is inside a synchronized(foo) block, and some code is inside a synchronized(bar) block, then those pieces of code can run simultaneously -- but you can't have two threads running code in synchronized(foo) blocks simultaneously.
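Here is a minimal sketch of that rule, with made-up class and lock names (LockRuleDemo, lockA, lockB) purely for illustration:
public class LockRuleDemo {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    public void first() {
        synchronized (lockA) {
            // only one thread at a time can be in ANY block synchronized on lockA
            synchronized (lockA) {
                // the same thread may re-enter a lock it already holds (reentrancy)
            }
        }
    }

    public void second() {
        synchronized (lockB) {
            // can run at the same time as first(), because lockB is a different object
        }
    }
}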
In both cases, only one thread can access the synchronized block at a time.
Not really. For example, when synchronizing on this, two threads can enter the same block at the same time if they are working with two different instances of the class. But yes, for one instance there will be only one thread in the block at a time, and that thread also excludes others from any other block synchronized on this for that instance.
"Synchronized" means that only one thread can be inside any synchronized block on the same instance. So if you have two synchronized blocks in two different source files, but both synchronized on the same instance, then while one thread is inside one of those blocks, another thread cannot enter either of them.
About "what to do within a synchronized block": do only things dealing with the synchronized object. Any other instruction that doesn't need synchronization will hold the lock for nothing and potentially create a bottleneck.
Synchronized basically means the program requests to take a lock on the specified object. If a thread is not able to enter a synchronized block, that means some other thread has already taken the lock on the specified object. The block of code marks a region that a thread can enter only once the lock has been successfully acquired.
In both cases, only one thread can access the synchronized block at a time.
-- It depends on the availability of the object being locked.
One important thing to note about synchronizing on this is that the lock is exposed to callers. Let's say you have a class A and it synchronizes on this. Any code that uses A has a reference to the very object A is using to lock on. This means it is possible for a user of A to create a deadlock if they also lock on the A instance.
public class A implements Runnable {
    public void run() {
        synchronized (this) {
            // something that blocks waiting for a signal
        }
        // other code
    }
}

public class DeadLock {
    public void deadLock() throws InterruptedException {
        A a = new A();
        Thread t = new Thread(a);
        t.start();
        Thread.sleep(5000); // make sure the other thread starts and enters its synchronized block
        synchronized (a) {
            // THIS CODE BLOCK NEVER EXECUTES
            // signal a
        }
    }
}
Whereas, if you always synchronize on a private member variable, you know that you are the only one using that reference as a lock.
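A sketch of that private-lock variant of the A class above (the SafeA name is made up):
public class SafeA implements Runnable {
    // callers never see this lock, so they cannot accidentally deadlock with us
    private final Object lock = new Object();

    @Override
    public void run() {
        synchronized (lock) {
            // something that blocks waiting for a signal
        }
        // other code
    }
}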
Could someone explain the difference between synchronized(this) and synchronized (c)?
I have referred to various answers (1, 2, 3, 4, 5) but am not able to understand them and am getting confused.
public class Check {
    Consumer c = new Consumer(null);

    public void getSynch() {
        synchronized (this) {
            // doing some task
            System.out.println("Start");
        }
        synchronized (c) {
            // doing some task
            System.out.println("End");
        }
    }
}
I am able to understand the synchronized concept but not the monitor object. Please explain it in a simple way.
synchronized always works on a monitor object (sometimes also called a lock). Think of it as a token: when a thread enters a synchronized block it grabs the token, and other threads need to wait for the token to be returned. If you have multiple different monitor objects you basically have different tokens, and thus synchronized blocks that operate on different monitors can run in parallel.
There are many possibilities with this and many possible use cases. One could be that multiple instances of a class could run in parallel but need access to a shared and non-threadsafe resource. You then could use that resource or any other object that represents the "token" for that resource as the monitor object.
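For example, a hedged sketch (the SharedResource and Worker names are made up): several Worker instances can run in parallel while serializing access to one shared, non-thread-safe resource by using that resource as the monitor.
class SharedResource {
    void use() {
        // some non-thread-safe work
    }
}

public class Worker implements Runnable {
    private final SharedResource resource; // the same instance is handed to every Worker

    public Worker(SharedResource resource) {
        this.resource = resource;
    }

    @Override
    public void run() {
        synchronized (resource) { // Workers sharing this resource take turns here
            resource.use();
        }
    }
}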
However, note that there's potential for deadlocks, e.g. in the following situation:
Block 1:
synchronized(A) {
//do something with A
synchronized(B) {
//do something with B
}
}
Block 2:
synchronized(B) {
//do something with B
synchronized(A) {
//do something with A
}
}
Here both outer synchronized blocks could be entered in parallel because the two monitors A and B are initially available, but then each thread needs to grab the other monitor, and because both are now locked, both threads have to wait forever - a classic deadlock.
Also have a look at the dining philosophers problem which handles that topic as well (here the forks could be considered the monitor objects).
Edit:
In your case (the code you've posted), multiple threads could try to call getSynch() on the same instance of Check. The first block synchronizes on the instance itself thus preventing multiple threads from entering that block if called on the same instance.
The second block synchronizes on c which is a different object and could change over time. Assume the first block (synchronized(this) { ... }) changes c to reference another instance of Consumer. In that case you could have multiple threads run that block in parallel, e.g. if one entered the synchronized(c) block before the other thread reassigns c.
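One common way around that, sketched here against the Check class from the question (the consumerLock field is my own addition), is to synchronize on a dedicated final field that can never be reassigned:
public class Check {
    private final Object consumerLock = new Object(); // final: this monitor can never be swapped out
    private Consumer c = new Consumer(null);

    public void getSynch() {
        synchronized (this) {
            System.out.println("Start");
        }
        synchronized (consumerLock) {
            // c can be read or reassigned here without the lock itself changing
            System.out.println("End");
        }
    }
}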
Consider this code:
public synchronized void onSignalsTimeout(List<SignalSpec> specs) {
if (specs != null && specs.size() > 0) {
for (SignalSpec spec : specs) {
ParsedCANSignal timeoutedSignal = new ParsedCANSignal();
SignalsProvider.getInstance().setSignal(spec.name, spec.parent.parent.channel, timeoutedSignal);
}
}
}
I've got a simple question:
When Thread 1 calls the onSignalsTimeout method, can Thread 2 access objects that are accessed in that method?
I can't find anywhere whether synchronized locks only access to this method, or access to all objects used in this method.
First of all, forget about synchronized methods. A so-called synchronized method...
synchronized AnyType foobar(...) {
doSomething();
}
Is nothing but a shortcut way of writing this:
AnyType foobar(...) {
synchronized(this) {
doSomething();
}
}
There is nothing special about the method in either case. What is special is the synchronized block, and what a synchronized block does is very simple. When the JVM executes this:
synchronized(foo) {
doSomething();
}
It first evaluates the expression foo. The result must be an object reference. Then it locks the object, performs the body of the synchronized block, and then it unlocks the object.
But what does locked mean? It may mean less than you think. It does not prevent other threads from using the object. It doesn't prevent them from accessing the object's fields or from updating its fields. The only thing that locking an object prevents is other threads locking the same object at the same time.
If thread A tries to enter synchronized(foo) {...} while thread B already has foo locked (either in the same synchronized block, or in a different one), then thread A will be forced to wait until thread B releases the lock.
You use synchronized blocks to protect data.
Suppose your program has some collection of objects that can be in different states. Suppose that some states make sense, but there are other states that don't make sense—invalid states.
Suppose that it is not possible for a thread to change the data from one valid state to another valid state without temporarily creating an invalid state.
If you put the code that changes the state in a synchronized(foo) block, and you put every block of code that can see the state into a synchronized block that locks the same object, foo, then you will prevent other threads from seeing the temporary invalid state.
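A small sketch of that idea, assuming a made-up invariant that low <= high must always hold:
public class Range {
    private final Object lock = new Object();
    private int low = 0;
    private int high = 10;

    public void shift(int delta) {
        synchronized (lock) {
            // between these two writes the state can temporarily violate low <= high,
            // but no thread synchronizing on the same lock can observe it
            low += delta;
            high += delta;
        }
    }

    public boolean contains(int value) {
        synchronized (lock) {
            return low <= value && value <= high;
        }
    }
}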
Yes, other threads can access the objects used in the method; the synchronized keyword guarantees that no more than one thread at a time can execute the code of the method.
From https://docs.oracle.com/javase/tutorial/essential/concurrency/syncmeth.html:
First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads. Note that constructors cannot be synchronized — using the synchronized keyword with a constructor is a syntax error. Synchronizing constructors doesn't make sense, because only the thread that creates an object should have access to it while it is being constructed.
In this context, synchronized means that while one thread is executing this method, no other thread can execute it, or any other synchronized instance method, on the same instance of your class.
I need to clear a few basic concepts so as to check whether my understanding is correct or not.
1) Once a thread enters any synchronized method on an instance, no other thread can enter any other synchronized method on the same instance.
2) However, non-synchronized methods on that instance will continue to be callable (by other threads). If yes, can I then say that the whole object doesn't get locked, but only the critical methods which are synchronized?
3) Will I get the same behaviour as above for a synchronized statement too:
synchronized (this) {
    // statements to be synchronised
}
or does the whole object get locked here, with non-synchronized methods on that instance not callable?
I think I am pretty confused with locking scope.
A lock only prevents other threads from acquiring that same lock -- the lock itself has no idea what it protects -- it's just something that only one thread can have. We say the whole object is locked because any other thread that attempts to lock the same object will be unable to acquire that lock until it's released. Otherwise, your understanding is correct.
Your statements are correct. Only synchronized code becomes protected. The object is simply used as the synchronizing lock, and it is not itself "locked".
And yes, you get the same behavior for synchronized blocks on the same object. In fact, you can synchronize blocks in the code of one object using another object as the lock.
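For instance, as a made-up sketch, one object's code can synchronize on a completely different object that is handed to it:
public class Auditor {
    private final Object journalLock; // some other object, supplied from outside

    public Auditor(Object journalLock) {
        this.journalLock = journalLock;
    }

    public void record(String entry) {
        // this object's code synchronizes on a lock that is not the Auditor itself
        synchronized (journalLock) {
            System.out.println(entry);
        }
    }
}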
The code,
public synchronized void abc() {
    ...
}
is conceptually the same as:
public void abc() {
    synchronized (this) {
        ...
    }
}
In either of these cases the non-synchronized methods can be called when the monitor (in this case the object on which the method abc is called) is locked by a Thread executing a synchronized block or method.
Your understanding is correct. All three statements you made, are valid.
Please note one thing though: locks are not on methods (synchronized or non-synchronized). The lock is always on the object, and once one thread has acquired the lock on an object, other threads have to wait until the lock is released and becomes available to acquire.
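A small runnable sketch of that point (the class and method names are made up): while one thread holds the instance's lock inside a synchronized method, another thread can still call a plain method on the same instance.
public class LockScopeDemo {
    public synchronized void slowSynchronized() throws InterruptedException {
        Thread.sleep(1000); // holds the lock on "this" for a second
    }

    public void notSynchronized() {
        // never tries to acquire the lock, so it is never blocked by slowSynchronized()
        System.out.println("still callable");
    }

    public static void main(String[] args) throws InterruptedException {
        LockScopeDemo demo = new LockScopeDemo();
        new Thread(() -> {
            try {
                demo.slowSynchronized();
            } catch (InterruptedException ignored) {
            }
        }).start();
        Thread.sleep(100);      // give the other thread time to grab the lock
        demo.notSynchronized(); // prints immediately, without waiting
    }
}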
Why do you have to specify which object has locked a synchronized block of code?
You don't have to specify which object has locked a synchronized method as it is always locked by 'this' (I believe).
I have two questions:
Why can't you lock a non-static method with an object other than 'this'?
Why do you have to specify the object that locks the synchronized code?
I have read chapter nine of SCJP for Java 6, but I am still not clear on this.
I realize it is probably a basic question, but I am new to Threading.
Why can't you lock a non-static method with an object other than 'this'?
You can:
private final Object lock = new Object();

public void foo() {
    synchronized (lock) {
        ...
    }
}
Why do you have to specify the object that locks the synchronized code?
Because that's how the language designers chose to design the language. synchronized, when used on instance methods, implicitly uses this as the lock. synchronized, when used on a block, must explicitly specify the lock.
You can.
The code
synchronized void foo() {
    // some stuff
}
is logically equivalent to
void foo() {
    synchronized (this) {
        // some stuff
    }
}
I said "logically" because these two examples generate different bytecode.
If the method foo() is static, synchronization is done on the class object.
However, you may wish to create several synchronized blocks that are synchronized on different objects within one class, or even within one method. In this case you can use synchronized (lock) where lock is not this:
void foo() {
    synchronized (one) { /* ... */ }
    // ..........
    synchronized (two) { /* ... */ }
}
You can specify any object you want to hold the lock for the synchronized code block. Actually, you shouldn't use synchronized (this) at all, or at least be careful about it (see Avoid synchronized(this) in Java?).
EVERY object has a lock on which you can synchronize:
final Object lock = new Object();
synchronized (lock) { ... }
If you want to synchronize the whole method, you cannot say which object to lock on, so it is always the "this" object.
synchronized void foo() { ... }
The first way of locking is better, by the way.
It is not recommended to lock every whole method on this, as it reduces concurrency in most cases. Instead, it is recommended to narrow the locking so that only the specific part of the code that needs to be protected is kept in a synchronized block, and to guard unrelated state with separate locks (lock splitting).
It is a practice that is explained well in Java Concurrency in Practice. But note this book is useful only when you have some basic experience with threading.
Some nuggets to keep in mind:
Do not overuse synchronization
Use method-level synchronization only when the whole method needs to be protected.
Use different locks to protect two unrelated entities, which will increase the chances of concurrency; otherwise, threads reading or writing two unrelated entities will block on the same lock.
private int counter1, counter2;
private final Object lockForCounter1 = new Object();
private final Object lockForCounter2 = new Object();

public void incrementCounter1() {
    synchronized (lockForCounter1) {
        counter1++;
    }
}

public void incrementCounter2() {
    synchronized (lockForCounter2) {
        counter2++;
    }
}
I think your two questions are actually the same. Let's say you want to synchronize a method m of object a between multiple threads. The threads need a common base or channel to coordinate on, and that is exactly what the object's lock offers. So when you want a method of an object to be synchronized, there is no need for another lock object, because the object you are accessing already has a lock of its own; that's how and why the language was designed.
Synchronizing a block is not quite the same thing: the threads can coordinate on something other than the object itself. For instance, you can use one and the same object as the lock for a synchronized block, so that all instances of the class are synchronized at that block, as sketched below.
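A hedged sketch of that last point (the Logger class and SHARED_LOCK name are made up): a static lock makes every instance of the class synchronize on the same monitor.
public class Logger {
    // one lock for the whole class, not one per instance
    private static final Object SHARED_LOCK = new Object();

    public void log(String message) {
        synchronized (SHARED_LOCK) {
            // only one thread across ALL Logger instances can be here at a time
            System.out.println(message);
        }
    }
}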
From the concurrency tutorial, the Synchronized methods part:
To make a method synchronized, simply add the synchronized keyword to its declaration:
public class SynchronizedCounter {
private int c = 0;
public synchronized void increment() {
c++;
}
public synchronized void decrement() {
c--;
}
public synchronized int value() {
return c;
}
}
If count is an instance of SynchronizedCounter, then making these methods synchronized has two effects:
First, it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object.
Second, when a synchronized method exits, it automatically establishes a happens-before relationship with any subsequent invocation of a synchronized method for the same object. This guarantees that changes to the state of the object are visible to all threads.
Simply put, that's how synchronization in Java works.
You don't have to specify which object has locked a synchronized method as it is always locked by 'this' (I believe).
True for instance methods; for static methods it locks on the class object.
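A sketch of what that means (the Counter class here is made up): a static synchronized method behaves like a block synchronized on the Class object.
public class Counter {
    private static int count;

    public static synchronized void increment() {
        count++;
    }

    // behaves like increment(): both lock on Counter.class
    public static void incrementExplicit() {
        synchronized (Counter.class) {
            count++;
        }
    }
}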
Why can't you lock a non-static method with an object other than 'this'?
Note that:
public synchronized void increment() {
c++;
}
is equivalent in behavior to:
public void increment() {
synchronized(this) {
c++;
}
}
In this code snippet you can replace this with any object you wish.
Why do you have to specify the object that locks the synchronized code?
Some block of code, let's call it critical, is meant to run only sequentially, but in an uncontrolled concurrent environment it could end up running in parallel. That's why synchronization mechanisms exist.
So we can divide the code like this:
code that is safe to run in parallel (the default situation)
code that has to be synchronized (and must be marked somehow)
That marking is done via the synchronized keyword and the corresponding lock object.
If you have two different critical code blocks that must not run together, they'll both have the synchronized keyword, and let's say they use the same lock object.
While the first block is executing, the lock object becomes "locked". If during that time the second block needs to get executed, the first command of that code block is:
synchronized(lock) {
but that lock object is in a locked state because the first block is executing. The execution of the second block halts on that statement until the first block finishes and unlocks the lock object. Then the second block may proceed (and lock the lock object again).
That mechanism is called mutual exclusion, and the lock is a general concept not tied to the Java programming language.
The details of the "lock object locking" process can be found here.
You can lock with any instance of an object, but programmers usually use this or a dedicated locker object.
Because it restricts access to that part of the code by other threads (units of code processing), you make sure that the whole of that part runs consistently and nothing (i.e. a variable) will be altered by another thread.
Consider:
a = 1;
a++;
Thread one reaches the second line and you expect a == 2, but another thread executes the first line at the same time, so the statements interleave: the first thread can end up seeing a == 1 while the second ends up with a == 2. Now:
synchronized (whatever)
{
a = 1;
a++;
}
Now the second thread is blocked from entering the code block (the synchronized body) until the first one leaves it (releases the lock).
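Put into a slightly fuller sketch (the class and field names are made up), the synchronized block guarantees that threads using the same lock cannot interleave between the two statements:
public class AtomicPairDemo {
    private final Object lock = new Object();
    private int a;

    public void resetAndIncrement() {
        synchronized (lock) {
            a = 1;
            a++; // no thread using the same lock can interleave between these two statements
        }
    }

    public int get() {
        synchronized (lock) {
            return a;
        }
    }
}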
Can anyone tell me if I'm right or not? I have two threads which will run in parallel.
class MyThread extends Thread {
    MyThread() {
    }

    void method1() {
    }

    void method2() {
    }

    void method3() {
    }

    // approach(1):
    run() {
        method1();
        method2();
        method3();
    }

    // approach(2):
    run() {
        // the code of method1 is here (no method calling)
        // the code of method2 is here (no method calling)
        // the code of method3 is here (no method calling)
    }
}

class Test {
    public static void main(String[] args) {
        MyThread t1 = new MyThread();
        t1.start();
        MyThread t2 = new MyThread();
        t2.start();
    }
}
method1, method2 and method3 don't access global shared data, but their code writes to local variables within each method, thus I guess I cannot allow overlapping execution within those method sections.
Thereby:
in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right?
in approach(2): No need to synchronize the code sections, right?
If I'm right in both approaches, using approach(2) will give better performance, right?
Short answer: you don't need the synchronization. Both approaches are equivalent from a thread safety perspective.
Longer answer:
It may be worthwhile taking a step back and remembering what the synchronized block does. It does essentially two things:
makes sure that if thread A is inside a block that's synchronized on object M, no other thread can enter a block that's synchronized on the same object M until thread A is done with its block of code
makes sure that if thread A has done work within a block that's synchronized on object M, and then finishes that block, and then thread B enters a block that's also synchronized on M, then thread B will see everything that thread A had done within its synchronized block. This is called establishing a happens-before relationship.
Note that a synchronized method is just shorthand for wrapping the method's code in synchronized (this) { ... }.
In addition to those two things, the Java Memory Model (JMM) guarantees that within one thread, things will happen as if they had not been reordered. (They may actually be reordered for various reasons, including efficiency -- but not in a way that your program can notice within a single thread. For instance, if you do "x = 1; y = 2" the compiler is free to switch that such that y = 2 happens before x = 1, since a single thread can't actually notice the difference. If multiple threads are accessing x and y, then it's very possible, without proper synchronization, for another thread to see y = 2 before it sees x = 1.)
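A classic illustration of that last point, as a hedged sketch with made-up names; without any shared synchronization this is a data race, so the reader is allowed to observe y == 2 while still seeing the old value of x.
public class ReorderingDemo {
    private int x;
    private int y;

    void writer() {
        x = 1;
        y = 2;
    }

    void reader() {
        // with no common lock (and no volatile) there is a data race:
        // this thread may legally see y == 2 while still seeing x == 0
        if (y == 2) {
            System.out.println("x = " + x); // may print 0
        }
    }
}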
So, getting back to your original question, there are a couple interesting notes.
First, since a synchronized method is shorthand for putting the whole method inside a "synchronized (this) { ... }" block, t1's methods and t2's methods will not be synchronized against the same reference, and thus will not be synchronized relative to each other. t1's methods will only be synchronized against the t1 object, and t2's will only be synchronized against t2. In other words, it would be perfectly fine for t1.method1() and t2.method1() to run at the same time. So, of those two things the synchronized keyword provides, the first one (the exclusivity of entering the block) isn't relevant. Things could go something like:
t1 wants to enter method1. It needs to acquire the t1 monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the t2 monitor, which is not contended -- so it acquires it and enters the block
t1 finishes method1 and releases its hold on the t1 monitor
t2 finishes method1 and releases its hold on the t2 monitor
As for the second thing synchronization does (establishing happens-before), making method1() and method2() synchronized will basically be ensuring that t1.method1() happens-before t1.method2(). But since both of those happen on the same thread anyway (the t1 thread), the JMM already guarantees that this will happen.
So it actually gets even a bit uglier. If t1 and t2 did share state -- that is, synchronization would be necessary -- then making the methods synchronized would not be enough. Remember, a synchronized method means synchronized (this) { ... }, so t1's methods would be synchronized against t1, and t2's would be against t2. You actually wouldn't be establishing any happens-before relationship between t1's methods and t2's.
Instead, you'd have to ensure that the methods are synchronized on the same reference. There are various ways to do this, but basically, it has to be a reference to an object that the two threads both know about.
Assume t1 and t2 both know about the same reference, LOCK. Both have methods like:
method1() {
synchronized(LOCK) {
// do whatever
}
}
Now things could go something like this:
t1 wants to enter method1. It needs to acquire the LOCK monitor, which is not contended -- so it acquires it and enters the block
t2 wants to enter method1. It needs to acquire the LOCK monitor, which is already held by t1 -- so t2 is put on hold.
t1 finishes method1 and releases its hold on the LOCK monitor
t2 is now able to acquire the LOCK monitor, so it does, and starts on the meat of method1
t2 finishes method1 and releases its hold on the LOCK monitor
You are saying your methods don't access global shared data and write only local variables, so there is no need to synchronize them, because each thread will have its own copies of the local variables. They will not overlap.
This kind of problem arises with static/class variables. If multiple threads try to change the value of a static variable at the same time, that is where the problem comes in and where we need to synchronize.
If the methods you're calling don't write to global shared data, you don't have to synchronize them.
In a multithreaded program, each thread has its own call stack. The local variables of each method will be separate in each thread, and will not overwrite one another.
Therefore, approach 1 works fine, does not require synchronization overhead, and is much better programming practice because it avoids duplicated code.
Thread-wise you're OK. Local variables within methods are not shared between threads, as each thread has its own stack.
You won't see any speed difference between the two approaches; approach(1) is just a better organisation of the code (shorter methods are easier to understand).
If each method is independent of the others, you may want to consider whether they belong in the same class. If you want a performance gain, create three different classes and execute multiple threads for each method (the gain depends on the number of available cores, the CPU/IO ratio, etc.).
Thereby: in approach(1): I need to make the methods (method1, method2 and method3) synchronized, right? in approach(2): No need to synchronize the code sections, right?
Inlining the code versus invoking separate methods doesn't determine whether a method should be synchronized or not. I'd recommend you read this and then ask for more clarification.
If I'm right in both approaches, using approach(2) will give better performance, right?
At the cost of collapsing your methods into a single god method? Sure, but you would be looking at a "very" minuscule improvement compared to the lost code readability, which is definitely not recommended.
method1, 2 and 3 won't be executed concurrently, so if the variables that they read/write are not shared outside the class with other threads while they're running, then there is no synchronization required and no need to inline.
If they modify data that other threads will read at the same time that they're running then you need to guard access to that data.
If they read data that other threads will write at the same time that they're running then you need to guard access to that data.
If other threads are expected to read data modified by method1, 2, or 3, then you need to make the run method synchronized (or wrap them in a synchronized block) to set up a gate, so that the JVM sets up a memory barrier and ensures that other threads can see the data after method1, 2 and 3 are done.