I have been experimenting with the Java thread visibility problem, using the popular example of sending a stop signal to a thread by means of a shared, non-volatile boolean variable (which the target thread does not seem to see), as below:
import java.util.concurrent.TimeUnit;

public class ThreadVisibilityTest {

    // Shared variable to send a signal to the thread
    static boolean stopped = false;

    public static void main(String[] args) throws Exception {
        Thread targetThread = new Thread(new Runnable() {
            public void run() {
                while (!stopped) {}
                System.out.println("Target thread gets signal and stops...");
            }
        });
        targetThread.start();

        TimeUnit.SECONDS.sleep(5);
        stopped = true;
        System.out.println("Main thread has sent stop signal to the thread...");
    }
}
The main thread sends the stop signal to the target thread after 5 seconds by setting stopped to true, but the target thread never sees it and therefore does not stop.
Declaring the stopped variable as volatile obviously solves the problem.
But then I realized that if I keep stopped non-volatile and instead access it in a synchronized context in the target thread, the target thread does see the final value and stops. So the visibility problem seems to be solved just as it is with volatile:
Thread targetThread = new Thread(new Runnable() {
    public void run() {
        while (true) {
            synchronized (this) {
                if (stopped) break;
            }
        }
        System.out.println("Target thread gets signal and stops...");
    }
});
It also seems not to matter which object monitor is used for the synchronization:
synchronized (Thread.class) {
    if (stopped) break;
}
Is this something that happens by chance, or am I missing something?
Or can we say that accessing a shared variable under mutual exclusion forces the target thread to refresh its cached value, just like accessing a volatile variable?
If the latter is true, which approach do you suggest for overcoming the visibility issue: the volatile keyword, or access under mutual exclusion?
Thanks in advance.
Is this something that happens by chance, or am I missing something?
You missed the chapter in the Java Language Specification (JLS) that talks about the Java Memory Model. Either that, or you missed working through the concurrency chapter in the Java tutorial: https://docs.oracle.com/javase/tutorial/essential/concurrency/
Either way, you would have learned that if thread A exits a synchronized block, and thread B subsequently enters a block that is synchronized on the same object, then everything thread A wrote before releasing the lock is guaranteed to be visible to thread B after it acquires that lock.
I think mutual exclusion also provides memory visibility, as stated in Java Concurrency in Practice (by Brian Goetz), section 3.1.3, "Locking and Visibility".
Note that you are reading the variable in a synchronized context, but you are not writing it in one. That can cause problems.
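A minimal sketch of the fix the answers describe, assuming a single shared lock object (the LOCK field below is my own addition for illustration): the main thread writes the flag while holding the monitor, and the target thread reads it while holding the same monitor, so the happens-before rule of the JMM applies to the write:

import java.util.concurrent.TimeUnit;

public class SynchronizedStopTest {

    private static final Object LOCK = new Object(); // single shared monitor
    private static boolean stopped = false;          // always accessed while holding LOCK

    public static void main(String[] args) throws Exception {
        Thread targetThread = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    synchronized (LOCK) {            // read under the same monitor...
                        if (stopped) break;
                    }
                }
                System.out.println("Target thread gets signal and stops...");
            }
        });
        targetThread.start();

        TimeUnit.SECONDS.sleep(5);
        synchronized (LOCK) {                        // ...as the write
            stopped = true;
        }
        System.out.println("Main thread has sent stop signal to the thread...");
    }
}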
As other answers have already pointed out, memory visibility is established when using synchronization.
However, it would be preferable to use a volatile shared variable (unless synchronization is needed for reasons other than visibility), for greater concurrency. Overuse of synchronization forces threads to constantly wait for each other when it would be safe, and faster, to work concurrently.
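As a sketch of that recommendation (the same example as above, only the flag declaration changes and all locking disappears):

import java.util.concurrent.TimeUnit;

public class VolatileStopTest {

    // volatile guarantees the write in main() becomes visible to the reading thread
    private static volatile boolean stopped = false;

    public static void main(String[] args) throws Exception {
        Thread targetThread = new Thread(new Runnable() {
            public void run() {
                while (!stopped) {}                  // plain read, no lock needed
                System.out.println("Target thread gets signal and stops...");
            }
        });
        targetThread.start();

        TimeUnit.SECONDS.sleep(5);
        stopped = true;                              // plain write, visible to the loop
        System.out.println("Main thread has sent stop signal to the thread...");
    }
}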
Synchronized blocks and methods behave very much like volatile variables as far as visibility is concerned. When a thread exits a synchronized block, it releases the monitor, which has the effect of flushing its cached writes to main memory, so that writes made by that thread become visible to other threads.
Before entering a synchronized block, a thread acquires the monitor, which has the effect of invalidating its local processor cache and forcing it to re-read variables from main memory. It therefore sees all changes made before the previous release of the same monitor.
In the example above, however, the write was made outside any synchronized context. In that scenario the JMM gives no guarantee about when the flush will occur, and consequently no guarantee about when the latest value becomes visible to other threads.
For this reason you can assume the code "works" only because the flush happens to occur in time and the synchronized block forces the thread to re-read from memory on each loop iteration. Moreover, the JMM's happens-before guarantee requires that both threads use the same monitor lock, so anything beyond that is just chance.
Source: https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html#synchronization
Say one singleton instance is accessed by two threads, and both threads call the same method, doTask().
public class MySingleton {

    Object lock = new Object();

    // I omit the constructor here.

    public void doTask() {
        // first non-synchronized code
        synchronized (lock) {
            // some heavy task
        }
        // final non-synchronized code
    }
}
If thread B is doing the heavy task when thread A calls doTask(), I know thread A will run the //first non-synchronized code, and will then notice that the lock is held by thread B, so it cannot run the synchronized //some heavy task. But would thread A skip the synchronized heavy task and continue on to //final non-synchronized code, or will thread A wait for the lock without executing the //final non-synchronized code?
(I know I can try it out, but currently I don't have a proper development environment...)
The synchronized block in Java forces threads to wait until they can acquire the object's lock.
Thread A will wait until B is done, then acquire the lock on lock, run the code inside the block, and continue out the other end.
It is important to note that when B finishes executing the contents of some heavy task, it releases the lock on lock and runs the final non-synchronized code at the "same time" that A runs the synchronized block.
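A small demonstration of that behaviour (the class, the sleep times and the log lines are mine, not the OP's): thread B grabs the lock and holds it for two seconds, and the output shows thread A printing its "entered" line only after B has left the block; it never skips it:

public class BlockingDemo {

    private static final Object lock = new Object();

    private static void doTask(String name) throws InterruptedException {
        System.out.println(name + ": first non-synchronized code");
        synchronized (lock) {
            System.out.println(name + ": entered synchronized block");
            Thread.sleep(2000);                      // stand-in for "some heavy task"
            System.out.println(name + ": leaving synchronized block");
        }
        System.out.println(name + ": final non-synchronized code");
    }

    public static void main(String[] args) throws Exception {
        Runnable task = () -> {
            try {
                doTask(Thread.currentThread().getName());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread b = new Thread(task, "B");
        Thread a = new Thread(task, "A");
        b.start();
        Thread.sleep(100);                           // give B a head start so it takes the lock first
        a.start();
    }
}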
While one thread holds the monitor of the synchronized object, the remaining threads keep trying to acquire it; this is sometimes described as polling, because the waiting threads effectively keep checking the status of the monitor lock. The moment the lock is released it can be acquired by any of them; which one actually gets it is decided by the scheduler.
Thread A will always wait indefinitely until Thread B releases the lock.
In the extreme case, if the lock is never released, Thread A will be stuck forever.
Sometimes this is good enough, but often you will need better control over things; this is when classes like ReentrantLock come in handy.
It can do everything synchronized offers, but it can also do things like checking whether the lock is already owned by the current thread, attempting to acquire the lock without waiting (failing instantly if the lock is already taken by another thread), or limiting the wait to a certain amount of time.
Please also note that while these constructs can be used to enforce mutual exclusion, that isn't their only function: they also play an important role in visibility.
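A brief sketch of the capabilities the answer mentions, using ReentrantLock.tryLock with a timeout (the surrounding class and method are my own illustration):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    private final ReentrantLock lock = new ReentrantLock();

    public void doTask() throws InterruptedException {
        // try to acquire the lock, but give up after one second instead of
        // blocking indefinitely the way a synchronized block would
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                // some heavy task
            } finally {
                lock.unlock();                       // always release in finally
            }
        } else {
            // the lock was busy: skip the heavy task, log, or retry later
        }
    }
}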
I have a class with static variables, and multiple threads will have instances of this class.
The static variable I'm concerned with is a Thread that pops a message from a queue and sends it in an email, until the queue is empty. Every time a message is added to the queue, I check to see whether the Thread is alive; if not, I restart it:
if (mailThread == null)
{
    mailThread = new Thread(mailSender);
    mailThread.start();
}
else if (!mailThread.isAlive())
{
    mailThread = new Thread(mailSender);
    mailThread.start();
}
In another question, it was said that static variables should be used within a synchronized block.
My question is, would it be safe to just use a ReentrantLock for these if checks? Or do I need to use synchronized? Or both?
You can use either ReentrantLock or a synchronized block. Both are equally safe. Although there is a difference in performance in certain situations. Check out these benchmarks: Benchmark 1 Benchmark 2.
According to the docs:
A reentrant mutual exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed using synchronized methods and statements, but with extended capabilities. A ReentrantLock is owned by the thread last successfully locking, but not yet unlocking it. A thread invoking lock will return, successfully acquiring the lock, when the lock is not owned by another thread. The method will return immediately if the current thread already owns the lock. This can be checked using methods isHeldByCurrentThread(), and getHoldCount().
So a ReentrantLock must be safe enough.
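Whichever primitive you choose, the important part is that the check and the restart run as one atomic unit; here is a sketch with synchronized (mailThread and mailSender are the names from the question, the class and the lock object are mine):

public class MailDispatcher {

    private static final Object mailLock = new Object();
    private static Thread mailThread;                // the shared static thread
    private static Runnable mailSender = new Runnable() {
        public void run() { /* pop messages from the queue and email them */ }
    };

    // called every time a message is added to the queue
    static void ensureMailThreadRunning() {
        synchronized (mailLock) {
            // the check and the restart must happen under one lock; otherwise two
            // callers could both see a dead thread and both start a replacement
            if (mailThread == null || !mailThread.isAlive()) {
                mailThread = new Thread(mailSender);
                mailThread.start();
            }
        }
    }
}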
From the book "Effective Java" I have the following famous code for stopping one thread from another:
import java.util.concurrent.TimeUnit;

public class StopThread {

    private static boolean stopRequested;

    private static synchronized void requestStop() {
        stopRequested = true;
    }

    private static synchronized boolean stopRequested() {
        return stopRequested;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread backgroundThread = new Thread(new Runnable() {
            public void run() {
                int i = 0;
                while (!stopRequested()) {
                    i++;
                }
            }
        });
        backgroundThread.start();

        TimeUnit.SECONDS.sleep(1);
        requestStop();
    }
}
The book states that "synchronization has no effect unless both read and write operations are synchronized." But it appears that if we don't use the synchronized keyword on requestStop, the code still works fine, i.e. it terminates after roughly 1 second, as desired. On the other hand, if we synchronize neither method, we will (most probably) end up in an infinite loop because of code optimization. So my questions are:
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method? Here, if we don't synchronize it, the program still runs as desired and terminates in about 1 second.
2. Does the synchronized keyword force the VM to stop optimizing each time?
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method? Here, if we don't synchronize it, the program still runs as desired and terminates in about 1 second.
Things can go wrong if the JVM decides to optimize the code inside the run method of your backgroundThread. The read in stopRequested() can be reordered, or hoisted out of the loop, as an optimization, in which case the loop may never observe the updated value. These days most JVM implementations happen to behave well here, so without making stopRequested() synchronized your code will often still run fine. The point to note, however, is that without synchronization a change to the stopRequested boolean may not be seen immediately by other, unsynchronized threads. Only with synchronization are other threads guaranteed to detect the change promptly, because entering a synchronized method invalidates the cached value and re-reads the data from memory. This prompt detection of memory changes is important in a highly concurrent system.
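To make the hoisting point concrete, this is roughly the transformation the JIT is allowed to apply when the loop reads a plain, unsynchronized field directly (an illustration of the optimized form, not code anyone would write; Effective Java describes this same rewrite):

// what the programmer wrote, reading a plain non-volatile field:
while (!stopRequested) {
    i++;
}

// what the JIT may legally turn it into: the read is hoisted out of the
// loop, so a later write to stopRequested is never observed and the
// loop spins forever
if (!stopRequested) {
    while (true) {
        i++;
    }
}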
2. Does the synchronized keyword force the VM to stop optimizing each time?
The synchronized keyword does not force the VM to stop optimizing, but it does make the VM adhere to the rules listed below. The VM can still optimize, but it has to respect these guarantees.
Synchronization effectively does the following:
It guarantees a happens-before relationship: if one action happens-before another, then the first is visible to, and ordered before, the second.
It guarantees memory visibility: all modifications made within the block, which may have been cached, are flushed when the synchronized block is exited, so any other thread synchronizing on the same monitor sees the updates immediately. This matters in highly concurrent systems.
Changes by a thread to a variable are not necessarily seen right away by other threads. Using synchronized here makes sure that the update by one thread is visible to the other thread.
1) The change would possibly not become visible to the other thread. In the absence of synchronization, volatile, or atomic fields, there is no assurance of when the other thread will see the change.
2) The synchronized keyword helps the VM decide on limits for instruction reordering and for what it can optimize.
Testing this on your machine will not necessarily show the same results as running on a server with more processors. Different platforms may optimize more aggressively, so just because it works on your machine doesn't mean it's OK.
1. How, and in what scenario, can things go wrong if we don't synchronize the stopRequested method?
Assume one thread is writing (updating) the field stopRequested. If stopRequested() were not synchronized, another thread could read stopRequested by calling stopRequested() before the first thread's update in requestStop() becomes visible, and would therefore not get the updated value.
2. Does the synchronized keyword force the VM to stop optimizing each time?
Not always; escape analysis, implemented since JDK 6u23, also plays a part in this.
Synchronization creates a memory barrier which ensures a happens-before relationship, i.e. any code executed after a synchronized block is sure to see the updated values (changes made earlier are reflected).
Statements can still be executed out of order within a synchronized block to improve efficiency, provided the happens-before guarantee still holds. On the other hand, a synchronized block can be removed entirely by the JVM if it determines that the block can only ever be accessed by a single thread.
Just make stopRequested volatile. Then neither requestStop() nor stopRequested() has to be synchronized; the volatile field already provides the required visibility.
Referring to this topic(How to pause Thread execution), Peter Knego said:
Loop must be inside synchronized block.
But I don't see the point of the synchronization if there is only one instance.
In another case, if the thread class has multiple instances and they are working with different variables, does the loop need to be synchronized?
Actually, I wrote a few programs using threads (with multiple instances) without considering synchronization, and they work fine.
You must synchronize any access to shared state. If all of your instances access only local storage, then they are thread safe. If your methods are thread safe, they do not require synchronization. If you have a static (i.e. global) resource and modify it from multiple threads, that is likely not thread safe (excluding atomic operations, of course).
The answer says
Use synchronized, wait() and notify() for that.
Create an atomic flag (e.g. boolean field) in the thread to be stopped. Stoppable thread monitors this flag in the loop. Loop must be inside synchronized block.
When you need to stop the thread (button click) you set this flag.
Thread sees the flag is set and calls wait() on a common object (possibly itself).
When you want to restart the thread, reset the flag and call commonObject.notify().
You cannot call wait() or notify() on an object unless you hold the lock on its monitor, and putting the call inside a synchronized block is the way to do that.
This is because wait and notify are part of a condition variable, and using them without synchronizing on the same object leads, in the general case, to race conditions.
The normal way of using wait is:
synchronized (this) {
    while (someCondition())
        wait();    // the while loop is needed to combat spurious wakeups
}
and you wake it up with
synchronized (this) {
    adjustCondition();
    notify();
}
If you didn't synchronize on the condition as well, you could get into a race. For example:
You have just tested someCondition() and got true, so you need to wait; but before you get the chance, another thread executes the adjustCondition(); notify(); block.
The first thread will still enter wait() (because the condition has already been checked) and, having missed the notification, may wait forever, which is effectively a deadlock.
The monitor needs to be synchronized on in your case, for both the flag check and the actual wait() call, so that a notification cannot slip in between the two. I recommend using a dedicated wait object for this, so you don't accidentally synchronize on something else.
final static Object threadPauseMonitor = new Object();
// ...
synchronized (threadPauseMonitor) {
    while (shouldPause.get()) {
        threadPauseMonitor.wait();
    }
}
Where shouldPause is an AtomicBoolean. Note that the while loop guards against the spurious wakeups that can occasionally occur, and that holding the monitor across both the check and the wait prevents a missed notification.
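For completeness, the resuming thread has to clear the flag and notify while holding the same monitor, otherwise the notification could again slip past the waiter (same hypothetical shouldPause / threadPauseMonitor names as above):

// resume: clear the flag and wake the paused thread under the same monitor
synchronized (threadPauseMonitor) {
    shouldPause.set(false);
    threadPauseMonitor.notifyAll();
}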
I want to write code with two different threads. The first one does something; the second one waits for a specific time. The thread that finishes first should interrupt the other one.
My problem is that the thread I initialized first cannot access/interrupt the second one; it always gives a "symbol not found" error. If I swap the positions of the threads in the code, it is the same, only the other way around.
Is there a possibility to make both threads "global" and accessible to each other? Please give code examples showing where to put public static void main, run(), etc., if possible, so I just need to add the code itself.
Thanks
Code examples:
public class FTPUpload extends Thread {

    public static void main(String[] args) {
        // some code

        final Thread thread1 = new Thread() {
            public void run() {
                // code of thread1
            }
        };

        final Thread thread2 = new Thread() {
            public void run() {
                // code of thread2
            }
        };

        thread1.start();
        thread2.start();
    }
}
As your question is (currently?) a bit vague, my answer may not be that helpful. But...
Try declaring the Thread objects first and using them later, so that each is known to the other.
You can create a static boolean variable which both threads can access; once one of them finishes, it sets the variable to true. Each thread then has to check this variable during its work, for example at several places, or on each iteration if it has a loop.
Alternatively, the thread that finishes first can write a dummy file somewhere, and both threads keep checking whether the file exists. The main idea is to have a shared resource both can access.
Read this question, it is very informative: Are static variables shared between threads?
My idea may not measure up to some of the other answers, but the one with the file should actually work.
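A sketch of the shared-flag idea (an AtomicBoolean is used so the update is guaranteed to be visible to the other thread; the class, field and timing values are mine):

import java.util.concurrent.atomic.AtomicBoolean;

public class FirstFinishedWins {

    // shared flag both threads can see; set by whichever thread finishes first
    static final AtomicBoolean done = new AtomicBoolean(false);

    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 1_000_000 && !done.get(); i++) {
                // a chunk of the real work; keep checking the flag between chunks
            }
            done.set(true);                          // tell the timer thread to stop
        });
        Thread timer = new Thread(() -> {
            long deadline = System.currentTimeMillis() + 5_000;
            while (System.currentTimeMillis() < deadline && !done.get()) {
                // waiting for the timeout, but bail out early if the worker finished
            }
            done.set(true);                          // tell the worker to stop
        });
        worker.start();
        timer.start();
    }
}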
A typical solution for communicating between 2 threads is to use condition variables. Thread1 could block on the condition variable, then when Thread2 has done what it needs to do and wants to tell Thread1 to go, it signals Thread1 via the condition variable, thus releasing its block. Both threads must be initialized with the same condition variable. Here is an example.
If you want both threads to wait until the other is initialized, this can be performed using a barrier sync (called a CyclicBarrier in Java). If Thread1 hits the barrier sync first it will block, until the other thread hits the barrier sync. Once both have hit the barrier sync, then they will continue processing. Here is an example.
Both condition variables and barrier syncs are thread safe, so you don't have to worry about whether you need to synchronize them yourself.
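A short sketch of the CyclicBarrier variant (two parties, both call await(); the class and names are mine):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierDemo {

    public static void main(String[] args) {
        // both threads must reach the barrier before either continues
        final CyclicBarrier barrier = new CyclicBarrier(2);

        Runnable party = () -> {
            try {
                // ... per-thread initialization ...
                barrier.await();                     // blocks until the other thread arrives
                // ... both threads continue from here ...
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt();
            }
        };
        new Thread(party).start();
        new Thread(party).start();
    }
}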
The general principle is to create a lock and a condition outside both threads. The first thread acquires the lock and signals the condition when done. The second thread acquires the lock and awaits the condition (with a timeout if needed). I am also concerned that you are relying on Thread.interrupt(), which is a bad plan.
final Lock lock = new ReentrantLock();
final Condition done = lock.newCondition();
...
// in thread 1 when finished
lock.lock();
try {
    done.signalAll();
} finally {
    lock.unlock();
}
...
// in thread 2 for waiting
lock.lock();
try {
    done.await(30, TimeUnit.SECONDS); // wait for the signal, or give up after 30s
} finally {
    lock.unlock();
}
Using the lock ensures that both threads see a consistent view of the shared objects, whereas Thread.interrupt() does not guarantee that you have crossed such a memory boundary.
A refinement is to use a CountDownLatch
final CountDownLatch latch = new CountDownLatch(1);
...
// in thread 1
latch.countDown();
...
// in thread 2
latch.await(30, TimeUnit.SECONDS);
This abstracts away the lock.
Others have effectively suggested a spin lock that scans for a file on the file system. Such an approach can lead to thread starvation, or at best slower performance than a lock- or latch-based solution. That said, for inter-process communication (as opposed to inter-thread within one JVM), a file-based approach is OK.
I recommend the book "Java Concurrency in Practice". If you think you know threading, go to a bookshop, open the book, and try to predict what the program on page 33 will do. After reading that page you will end up buying the book.