Do I have to use synchronized on main thread methods? - java

To be more specific, my question is: are methods that run only on the main thread already synchronized?
For example:
#MainThread
class MyClass {
    private Object o = null;

    #MainThread
    MyClass() {
    }

    #MainThread
    public Object getObjectFromMainThread() {
        return this.o.getObj2();
    }

    #MainThread
    public void setObjectFromMainThread(Object obj) {
        obj.set(1);
        this.o = obj;
    }

    #AnyThread
    public synchronized Object getObjectFromAnyThread() {
        return this.o;
    }

    #AnyThread
    public synchronized void setObjectFromAnyThread(Object obj) {
        this.o = obj;
    }
}
The methods getObjectFromMainThread and setObjectFromMainThread are called only from the main thread and are not synchronized. Do they need to be synchronized as well, or is that unnecessary?

The answer to your immediate question is yes, you will have to synchronize the getObjectFromMainThread and setObjectFromMainThread methods in your example. The answer to why there's this need is a mighty deep rabbit hole.
The general problem with multithreading is what happens when multiple threads access shared, mutable state. In this case, the shared, mutable state is this.o. It doesn't matter whether any of the threads involved is the main thread, it's a general problem that arises when more than one thread is in play.
The problem we're dealing with comes down to "what happens when a thread is reading the state at the same time that one or more threads are writing it?", with all its variations. This problem fans out into really intricate subproblems like each processor core having its own copy of the object in its own processor cache.
The only way of handling this is to make explicit what will happen. The synchronized mechanism is one such way. Synchronization involves a lock; when you use a synchronized instance method, that lock is this:
public synchronized void foo() {
    // this code uses the same lock...
}

public void bar() {
    synchronized (this) {
        // ...as this code
    }
}
Of all the program code that synchronizes on the same lock, only one thread at a time can be executing it. That means that if (and only if) all code that interacts with this.o synchronizes on the this lock, the problems described earlier are avoided.
In your example, the presence of setObjectFromAnyThread() means that you must also synchronize setObjectFromMainThread(), otherwise the state in this.o is accessed sometimes-synchronized and sometimes-unsynchronized, which is a broken program.
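As a minimal sketch (keeping the field and method names from the question, with the method bodies simplified so they compile), every method that touches this.o would lock on the same object:

class MyClass {
    private Object o = null;

    // All four accessors synchronize on "this", so reads and writes of o
    // never interleave and their effects are visible across threads.
    public synchronized Object getObjectFromMainThread() {
        return this.o;
    }

    public synchronized void setObjectFromMainThread(Object obj) {
        this.o = obj;
    }

    public synchronized Object getObjectFromAnyThread() {
        return this.o;
    }

    public synchronized void setObjectFromAnyThread(Object obj) {
        this.o = obj;
    }
}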
Synchronization comes at a cost: because you're locking bits of code so they can be run by only one thread at a time (and other threads are made to wait), you remove some or all of the speed-up you gained from using multithreading in the first place. In some cases, you're better off forgetting multithreading exists and writing a simpler single-threaded program.
Within a multi-threaded program, it's useful to limit the amount of shared, mutable state to a minimum. Any state that's not accessed by more than one thread at a time doesn't need synchronization, and is going to be easier to reason about.
The #MainThread annotation, at least as it exists in Android (where it is written @MainThread), indicates that the method is intended to be called on the main thread only. It doesn't do anything by itself; it's just there as a signal to the programmer(s) and the tooling. There is no technical protection mechanism at run time; it all comes down to your self-discipline and some compile-time/lint support. The advantage of this lack of protection is that there's no runtime overhead.
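For illustration (assuming the androidx.annotation library), the Android form looks like this; it is purely a hint to lint tools and human readers:

import androidx.annotation.MainThread;

public class MyClass {
    @MainThread   // lint can warn if this is called off the main thread; nothing is enforced at run time
    public void setObjectFromMainThread(Object obj) {
        // ...
    }
}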
Multi-threaded programming is complicated and easy to get wrong. The only way to get it right is to truly understand it. There's a book called Java Concurrency In Practice that's a really good explanation of both the general principles and problems of concurrency and the specifics in Java.

Related

What is the importance of synchronized method in stopping one Thread from another?

From the book "Effective Java" I have the following famous code for stopping one thread from another:
import java.util.concurrent.TimeUnit;

public class StopThread {
    private static boolean stopRequested;

    private static synchronized void requestStop() {
        stopRequested = true;
    }

    private static synchronized boolean stopRequested() {
        return stopRequested;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested()) {
                    i++;
                }
            }
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        requestStop();
    }
}
A line is written there: "synchronization has no effect unless both read and write operations are synchronized." But it is clear that if we don't use the synchronized keyword on the requestStop method, the code still works fine, i.e., it terminates after roughly 1 second, which is what we want. One thing more: if we leave both methods unsynchronized, we will (most probably) end up in an infinite loop because of code optimization. So my questions are:
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method? Here, even if we don't synchronize it, the program runs as desired, i.e., it terminates in roughly 1 second.
2. Does the synchronized keyword force the VM to stop optimizing every time?
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method? Here, even if we don't synchronize it, the program runs as desired, i.e., it terminates in roughly 1 second.
Things can go wrong if the JVM decides to optimize the code inside the run method of your backgroundThread. Without synchronization, the read performed by stopRequested() can be hoisted out of the loop, after which the loop may never call stopRequested() again. On many JVM implementations this particular example may still happen to run as you want without making stopRequested synchronized, but there is no guarantee of that. Another point to note is that if you do not make stopRequested synchronized, a change made to the stopRequested boolean may not be seen immediately by other, unsynchronized threads. Only with synchronization are other threads guaranteed to detect the change promptly, because entering a synchronized method invalidates the thread's cached values and reloads the data from memory. This prompt visibility of memory changes is important in a highly concurrent system.
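A sketch of what the loop can effectively become after such hoisting (an illustration of the optimization, not actual JIT output):

// Original loop:
int i = 0;
while (!stopRequested()) {
    i++;
}

// What the JIT may effectively turn it into when stopRequested() is not
// synchronized: the read is hoisted out of the loop, so the flag is never
// re-checked.
int i = 0;
if (!stopRequested()) {
    while (true) {
        i++;
    }
}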
2) Does the synchronized keyword force the VM to stop optimizing every time?
The synchronized keyword does not force the VM to stop optimizing, but it does make the VM adhere to the guarantees listed below. The VM can still optimize, but it has to preserve the following.
Synchronization effectively does the following things:
It guarantees a happens-before relationship: if one action happens-before another, then the first is visible to and ordered before the second.
It guarantees memory visibility: all modifications done within the block, which may have been cached, are flushed before the synchronized block is exited, so any other thread that synchronizes on the same lock sees the updates immediately. This matters in highly concurrent systems.
Changes by a thread to a variable are not necessarily seen right away by other threads. Using synchronized here makes sure that the update by one thread is visible to the other thread.
1) The change would possibly not become visible to the other thread. In the absence of synchronization or volatile or atomic fields there's no assurance when the other thread will see the change.
2) The synchronized keyword places limits on instruction reordering and on what the VM can optimize.
Testing this on your machine will not necessarily display the same results as using a server with more processors. Different platforms may do more optimizing. So just because it works on your machine doesn't necessarily mean it's ok.
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method?
Assume one thread is updating the field stopRequested via requestStop(). If stopRequested() were not synchronized, another thread could call stopRequested() before the first thread's update became visible, and so would not get the updated value.
2. Does the synchronized keyword force the VM to stop optimizing every time?
Not always; escape analysis, enabled by default since JDK 6u23, also plays a part in this.
Synchronization creates a memory barrier that ensures a happens-before relationship, i.e., any code executed after a synchronized block is guaranteed to see the updated values (changes made earlier are reflected).
Statements can still be executed out of order within a synchronized block to improve efficiency, provided the happens-before guarantee holds. On the other hand, a synchronized block can be removed entirely by the JVM if it determines that the block can only ever be accessed by a single thread.
Just make stopRequested volatile. Then neither method has to be synchronized: the volatile write in requestStop() and the volatile read in stopRequested() already give you the necessary visibility guarantee.
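For reference, a minimal sketch of that volatile variant (no synchronized needed on either method):

import java.util.concurrent.TimeUnit;

public class StopThread {
    // volatile guarantees that the write in main() is visible to the read in the loop.
    private static volatile boolean stopRequested;

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested) {
                    i++;
                }
            }
        });
        backgroundThread.start();
        TimeUnit.SECONDS.sleep(1);
        stopRequested = true;
    }
}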

Performance issue: use Singleton object in multi thread environment

I have a class "A" with method "calculate()". Class A is of type singleton(Scope=Singleton).
public class A {
    public void calculate() {
        // perform some calculation and update DB
    }
}
Now, I have a program that creates 20 threads. All threads need to access the method calculate().
I have a multicore system, so I want the threads to run in parallel.
In the above scenario, will I get a performance benefit? Can all the threads execute calculate() at the same time?
Or, since class A is a singleton, will the threads be blocked waiting?
I have found similar questions on the web/Stack Overflow, but I cannot get a clear answer.
Would you please help me?
Statements like "singletons need synchronization" or "singletons don't need synchronization" are overly simplistic, I'm afraid. No conclusions can be drawn only from the fact that you're dealing with the singleton pattern.
What really matters for purposes of multithreading is what is shared. If there are data that are shared by all threads performing the calculation, then you will probably need to synchronize access to those data. If there are critical sections of code that cannot run simultaneously in different threads, then you will need to synchronize those.
The good news is that often times it will not be necessary to synchronize everything in the entire calculation. You might gain significant performance improvements from your multi-core system despite needing to synchronize part of the operation.
The bad news is that these things are very complex. Sorry. One possible reference:
http://www.amazon.com/Java-Concurrency-Practice-Brian-Goetz/dp/0321349601/ref=sr_1_1?ie=UTF8&qid=1370838949&sr=8-1&keywords=java+concurrency+in+practice
That's the fundamental concept of a singleton: only one instance of the class is present in the system (JVM). Now, it depends on the implementation of calculate(). Is it a stateless utility method? If yes, you probably don't want to make it synchronized, and multiple threads will be able to execute it at the same time. If calculate() is NOT stateless, i.e. it uses instance variables (and those instance variables are used by multiple threads), then be careful: you have to make calculate() thread-safe. You have to synchronize the method, or at least use a synchronized block inside it. But once you do so, only one thread at a time will be able to execute the synchronized method (or the synchronized block inside the method).
public void calculate() {
    // Some code goes here which does not require thread safety.
    synchronized (someObj) {
        // Some code goes here which requires thread safety.
    }
    // Some code goes here which does not require thread safety.
}
If you want to use parallel processing (if that's the primary goal), then singleton is not the design pattern that you should use.
I have found similar questions on the web/Stack Overflow, but I cannot get a clear answer.
There is a good reason for that!!
It is not possible to say whether a method on a singleton does, or does not, need to be synchronized by virtue of being singleton.
Synchronization and the need for synchronization is all about state that may be shared by different threads.
If different threads share state (even serially), then synchronization is required.
If not then no synchronization is required.
The only clues that you have provided us that would help us give you a yes / no answer are this enigmatic comment:
// perform some calculation and update DB
... and the fact that the calculate() method takes no arguments.
If we infer that the calculate() method gets its input from the state of the singleton itself, then at least that part of the method (or of the methods it calls) must synchronize while retrieving that state. However, that doesn't mean the entire method call must be synchronized. The proportion of its time that the calculate() method needs to hold a lock on the shared data will determine how much parallelism you can actually get ...
The updating of the database will also require some kind of synchronization. However, this should be taken care of by the JDBC connection object and the objects you get from it ... provided that you obey the rules and don't try to share a connection between multiple threads. (The database update will also present a concurrency bottleneck ... assuming that the updates apply to the same database table or tables.)
It depends on how you implement the singleton. If you use the synchronized keyword, the threads will wait; otherwise they will not.
Use Singleton with eager initialization.
Something like this:
public final class Universe {

    public static Universe getInstance() {
        return fINSTANCE;
    }

    // PRIVATE //

    /**
     * Single instance created upon class loading.
     */
    private static final Universe fINSTANCE = new Universe();

    /**
     * Private constructor prevents construction outside this class.
     */
    private Universe() {
        //..elided
    }
}
The above will perform very well in a multithreaded environment, or else you can go for the enum implementation of a singleton.
Check this link for various singleton implementation: http://javarevisited.blogspot.in/2012/07/why-enum-singleton-are-better-in-java.html
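For completeness, a minimal sketch of the enum-based singleton mentioned above, using the class name A from the question; the JVM guarantees a single, safely published instance:

public enum A {
    INSTANCE;

    public void calculate() {
        // perform some calculation and update DB
    }
}

// Usage from any thread: A.INSTANCE.calculate();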
Multiple threads can invoke calculate() at the same time.
Those invocations won't be queued (executed serially) within that JVM unless you perform some type of concurrency control (making the method synchronized is one option).
The fact that your object is a singleton may or may not affect performance, depending on how that object's attributes (if any) are used within calculate().
Also bear in mind that since you are "updating DB", table or row level locks may also limit concurrency.
If you are worried about performance, the best bet is to test it.
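A rough sketch of such a test, assuming the class A from the question (the driver class name and pool size here are made up for the example): submit calculate() from 20 threads and time how long the batch takes with and without synchronization.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CalculateDriver {
    public static void main(String[] args) throws InterruptedException {
        final A singleton = new A();                 // the shared singleton instance
        ExecutorService pool = Executors.newFixedThreadPool(20);
        long start = System.nanoTime();
        for (int i = 0; i < 20; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    singleton.calculate();
                }
            });
        }
        pool.shutdown();                             // no new tasks; wait for the 20 to finish
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("Elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
    }
}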

Why could this code fail?

While reviewing this question I noticed this code:
class MyThread extends Thread {
    private boolean stop = false;

    public void run() {
        while (!stop) {
            doSomeWork();
        }
    }

    public void setStop() {
        this.stop = true;
    }
}
However, I don't understand why this would fail. Do other threads not get access to the "actual" stop variable?
The JIT compiler can re-order reads and writes in an application so long as
the actions are sequentially consistent and
the altered actions do not violate intra-thread semantics.
That is just a fancy way of saying that all actions must appear to happen the same way as if they were executed by only a single thread. So the JIT can recompile your code to look like this:
class MyThread extends Thread {
    private boolean stop = false;

    public void run() {
        // the read of stop has been hoisted out of the loop
        if (!stop) {
            while (true) {
                doSomeWork();
            }
        }
    }
}
This is a legal optimization called hoisting. It still acts the same as if serial but offers surprising results when using multiple threads.
By declaring the field volatile you are telling Java not to perform such reorderings on accesses to that field, and you also get the memory-consistency guarantees mentioned by Nathan Hughes.
The instance variable stop needs to be volatile, otherwise there's no guarantee the other threads will see changes to it. There are a lot of conflicting interests at work: threads want a consistent view of the program state, CPUs want to be able to cache data, the JVM wants to be able to reorder instructions. Making the instance variable volatile means that it can't be cached and that happens-before relationships are established that limit instruction reordering.
See this other answer (+1) for a good example of what reordering may happen without marking the variable volatile.
(By the way using interruption for thread cancellation is preferable to using an instance variable.)
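A small sketch of the interruption-based cancellation mentioned in that last remark, as an alternative to a home-grown stop flag; doSomeWork() is the questioner's placeholder:

class MyThread extends Thread {
    public void run() {
        // The thread's interrupt status acts as the stop flag; no extra field needed.
        while (!Thread.currentThread().isInterrupted()) {
            doSomeWork();
        }
    }

    private void doSomeWork() {
        // ... blocking calls inside should catch InterruptedException and either
        //     re-interrupt the thread or return ...
    }
}

// Elsewhere, to stop it: myThread.interrupt();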
The variable stop must be declared volatile.
Although I prefer using interrupt to stop a thread.
Other threads are not guaranteed to see updated values of stop - you need to establish a "happens before" relationship. The simplest way would be to make stop volatile.
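Concretely, a sketch of that simplest fix applied to the class from the question (one keyword added):

class MyThread extends Thread {
    // volatile: a write by one thread happens-before subsequent reads by others,
    // so the loop is guaranteed to see the update made in setStop().
    private volatile boolean stop = false;

    public void run() {
        while (!stop) {
            doSomeWork();
        }
    }

    public void setStop() {
        this.stop = true;
    }
}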

Implementing a Mutex in Java

I have a multi-threaded application (a web app in Tomcat to be exact). In it there is a class that almost every thread will have its own instance of. In that class there is a section of code in one method that only ONE thread (user) can execute at a time. My research has led me to believe that what I need here is a mutex (which is a semaphore with a count of 1, it would seem).
So, after a bit more research, I think what I should do is the following. It's important to note that my lock object is static.
Am I doing it correctly?
public class MyClass {

    private static Object lock = new Object();

    public void myMethod() {
        // Stuff that multiple threads can execute simultaneously.
        synchronized (MyClass.lock) {
            // Stuff that only one thread may execute at a time.
        }
    }
}
In your code, the synchronized block in myMethod may be entered from any thread, but only by one thread at a time. That means that there can never be two threads executing that block at the same time. I think that's what you want, so: yes.
Typically, the multithreading problem comes from mutability - where two or more threads are accessing the same data structure and one or more of them modifies it.
The first instinct is to control the access order using locking, as you've suggested - however you can quickly run into lock contention, where your application loses a lot of processing time to context switching as your threads are parked on lock monitors.
You can get rid of most of the problem by moving to immutable data structures - so you return a new object from the setters rather than modifying the existing one - as well as by utilising concurrent collections, such as ConcurrentHashMap / CopyOnWriteArrayList.
Concurrent programming is something you'll need to get your head around, especially as throughput comes from parallelisation in today's computing world.
This will allow one thread at a time through the block. Other threads will wait, but there is no queue as such; there is no guarantee that threads will get the lock in a fair order. In fact, with biased locking, it's unlikely to be fair. ;)
Your lock should be final. If there is any reason it can't be, that's probably a bug. BTW: you might be able to use synchronized(MyClass.class) instead.
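If fair ordering matters to you, a binary semaphore (the "semaphore with a count of 1" from your question) is one alternative; a sketch keeping the structure of your method, where the second constructor argument requests FIFO ordering of waiting threads:

import java.util.concurrent.Semaphore;

public class MyClass {
    // One permit = a mutex; "true" asks for fair (FIFO) handout of the permit.
    private static final Semaphore MUTEX = new Semaphore(1, true);

    public void myMethod() throws InterruptedException {
        // Stuff that multiple threads can execute simultaneously.
        MUTEX.acquire();
        try {
            // Stuff that only one thread may execute at a time.
        } finally {
            MUTEX.release();
        }
    }
}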

How many threads can simultaneously invoke an unsynchronized method of an object?

So let's say I have a class X with a method m. Method m is NOT synchronized and it doesn't need to be since it doesn't really change the state of the object x of type X.
In some threads I call the method like this: x.m(). All these threads use the same object x.
By how many threads can this method (method m) be called on object x simultaneously?
Can the fact that the method is called by, let's say, 100 threads be a bottleneck for my application?
Thanks.
Others have answered your direct question.
I'd like to clear up something that could be a misconception on your part ... and if it is, it is a dangerous one.
Method m is NOT synchronized and it doesn't need to be since it doesn't really change the state of the object x of type X.
That is not a sufficient condition. Methods that don't change state typically need to be synchronized too.
Suppose that you have a class Test with a simple getter and setter:
public class Test {
    private int foo;

    public int getFoo() {
        return foo;
    }

    public synchronized void setFoo(int foo) {
        this.foo = foo;
    }
}
Is the getter thread-safe?
According to your rule, yes.
In reality, no.
Why? Because unless the threads that call getFoo and setFoo synchronize properly, a call to getFoo() after a call to setFoo(...) may see a stale value for foo.
This is one of those nasty cases where you will get away with it nearly all of the time. But very occasionally, the timing of the two calls will be such that the bug bites you. This kind of bug is likely to slip through the cracks of your testing, and be very difficult to reproduce when it occurs in production.
The only case where it is absolutely safe to access an object's state from multiple threads without synchronizing is when the state is declared as final, AND the constructor doesn't publish the object (doesn't leak the this reference before construction completes).
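A minimal sketch of one fix for the Test class above: synchronize the getter as well, so reads and writes use the same lock (declaring foo volatile would also give the visibility guarantee).

public class Test {
    private int foo;

    // Reader and writer now use the same lock, so a get after a set
    // is guaranteed to see the new value.
    public synchronized int getFoo() {
        return foo;
    }

    public synchronized void setFoo(int foo) {
        this.foo = foo;
    }
}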
If you have more threads in the runnable state than you have physical cores, you'll end up wasting time by context switching... but that's about it. The fact that those threads are executing the same method is irrelevant if there's no coordination between them.
Remember the difference between threads and instances: one is executing, the other is data. If the data is not under some locking mechanism or some resource constraint, then access is limited only by the number of threads the underlying infrastructure can run. That is a system (JVM implementation + OS + machine) limitation.
Yep, an unsynchronized method doesn't "care" how many threads are invoking it. It's a purely passive entity and nothing special occurs when a new thread enters it.
Perhaps one thing that confuses some people is the "auto" storage used by a method. This storage is allocated on the thread's stack, and does not require the active participation of the method. The method's code is simply given a pointer to the storage.
(Many, many moons ago, it wasn't thus. Either the "auto" storage was allocated from heap when the method was called, or the method maintained a list of "auto" storage areas. But that paradigm disappeared maybe 40 years ago, and I doubt that there is any system in existence that still uses it. And I'm certain that no JVM uses the scheme.)
You'd have a bottleneck if one thread acquired a resource that others needed and held onto it for a long-running operation. If that isn't the situation for your method, I don't see how you'll experience a bottleneck.
Is this a theoretical question, or are you observing behavior in a real application that's running more slowly than you think it should?
The best answer of all is to get some data and see. Run a test and monitor it. Be a scientist.
