At work today, I came across the volatile keyword in Java. Not being very familiar with it, I found this explanation.
Given the detail in which that article explains the keyword in question, do you ever use it or could you ever see a case in which you could use this keyword in the correct manner?
volatile has semantics for memory visibility. Basically, the value of a volatile field becomes visible to all readers (other threads in particular) after a write operation completes on it. Without volatile, readers could see some non-updated value.
To answer your question: Yes, I use a volatile variable to control whether some code continues a loop. The loop tests the volatile value and continues if it is true. The condition can be set to false by calling a "stop" method. The loop sees false and terminates when it tests the value after the stop method completes execution.
The book "Java Concurrency in Practice," which I highly recommend, gives a good explanation of volatile. This book is written by the same person who wrote the IBM article that is referenced in the question (in fact, he cites his book at the bottom of that article). My use of volatile is what his article calls the "pattern 1 status flag."
If you want to learn more about how volatile works under the hood, read up on the Java memory model. If you want to go beyond that level, check out a good computer architecture book like Hennessy & Patterson and read about cache coherence and cache consistency.
“… the volatile modifier guarantees that any thread that reads a field will see the most recently written value.” - Josh Bloch
If you are thinking about using volatile, read up on the package java.util.concurrent which deals with atomic behaviour.
The Wikipedia post on a Singleton Pattern shows volatile in use.
Volatile (vɒlətʌɪl): easily evaporated at normal temperatures.
Important point about volatile:
Synchronization in Java is possible by using the Java keywords synchronized and volatile, and by using locks.
In Java, we cannot have a synchronized variable. Using the synchronized keyword with a variable is illegal and will result in a compilation error. Instead of a synchronized variable, you can use a volatile variable, which instructs JVM threads to read the value of the volatile variable from main memory and not cache it locally (see the sketch below).
If a variable is not shared between multiple threads, there is no need to use the volatile keyword.
source
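To make the compile-time rule above concrete, here is a minimal sketch; the class and field names are made up for illustration:
public class Flags {
    // private synchronized boolean ready;  // does not compile: synchronized is not a valid field modifier
    private volatile boolean ready;          // legal: reads and writes go straight to main memory

    // synchronized is applied to methods or blocks, not to variables
    public synchronized void toggle() {
        ready = !ready;
    }

    public boolean isReady() {
        return ready;
    }
}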
Example usage of volatile:
public class Singleton {
    private static volatile Singleton _instance; // volatile variable

    public static Singleton getInstance() {
        if (_instance == null) {
            synchronized (Singleton.class) {
                if (_instance == null)
                    _instance = new Singleton();
            }
        }
        return _instance;
    }
}
We create the instance lazily, at the time the first request comes in.
If we do not make the _instance variable volatile, the thread that is creating the Singleton instance has no guaranteed way of communicating the write to other threads. So if Thread A creates the Singleton instance and is switched out just after the assignment, other threads may not see _instance as non-null and will believe it is still null.
Why does this happen? Because the reader threads are not doing any locking, and until the writer thread comes out of the synchronized block, the memory is not synchronized and the value of _instance is not guaranteed to be updated in main memory. With the volatile keyword in Java, this is handled by Java itself, and such updates are visible to all reader threads.
Conclusion: the volatile keyword is also used to communicate memory contents between threads.
Example usage without volatile:
public class Singleton {
    private static Singleton _instance; // without volatile

    public static Singleton getInstance() {
        if (_instance == null) {
            synchronized (Singleton.class) {
                if (_instance == null)
                    _instance = new Singleton();
            }
        }
        return _instance;
    }
}
The code above is not thread-safe. Although it checks the value of _instance once again within the synchronized block (for performance reasons), the JIT compiler can rearrange the bytecode in such a way that the reference to the instance is set before the constructor has finished executing. This means the method getInstance() can return an object that has not been initialized completely. To make the code thread-safe, the keyword volatile can be used (since Java 5) for the _instance variable. The write to a volatile variable becomes visible to other threads only once the constructor of the object has finished executing completely.
Source
volatile usage in Java:
The fail-fast iterators are typically implemented using a volatile counter on the list object.
When the list is updated, the counter is incremented.
When an Iterator is created, the current value of the counter is embedded in the Iterator object.
When an Iterator operation is performed, the method compares the two counter values and throws a ConcurrentModificationException if they are different (see the sketch below).
The implementation of fail-safe iterators is typically light-weight. They typically rely on properties of the specific list implementation's data structures. There is no general pattern.
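Here is a simplified sketch of the fail-fast scheme described above; SimpleList and its iterator are hypothetical and much smaller than any real java.util implementation:
import java.util.ConcurrentModificationException;
import java.util.Iterator;

public class SimpleList {
    private final Object[] elements = new Object[16]; // no growth logic; just a sketch
    private int size = 0;
    private volatile int modCount = 0;                // bumped on every structural change

    public void add(Object e) {
        elements[size++] = e;
        modCount++;                                   // writers increment the counter
    }

    public Iterator<Object> iterator() {
        return new Iterator<Object>() {
            private final int expectedModCount = modCount; // snapshot taken when the iterator is created
            private int cursor = 0;

            @Override
            public boolean hasNext() {
                return cursor < size;
            }

            @Override
            public Object next() {
                if (modCount != expectedModCount) {   // list changed since the iterator was created
                    throw new ConcurrentModificationException();
                }
                return elements[cursor++];
            }
        };
    }
}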
volatile is very useful to stop threads.
Not that you should be writing your own threads; Java 1.6 has a lot of nice thread pools. But if you are sure you need a thread, you'll need to know how to stop it.
The pattern I use for threads is:
public class Foo extends Thread {
    private volatile boolean close = false;

    public void run() {
        while (!close) {
            // do work
        }
    }

    public void close() {
        close = true;
        // interrupt here if needed
    }
}
In the above code segment, the thread reading close in the while loop is different from the one that calls close(). Without volatile, the thread running the loop may never see the change to close.
Notice how there's no need for synchronization.
A variable declared with the volatile keyword has two main qualities that make it special.
1. A volatile variable is never cached thread-locally (in a CPU register or CPU cache); every access goes to main memory.
2. If a write operation is in progress on a volatile variable and a read is requested, it is guaranteed that the write will finish before the read is served.
The two qualities above imply that all threads reading a volatile variable will definitely read the latest value, because no stale cached value can pollute it, and a read request is granted only after the completion of the current write.
On the other hand, if we look further at point 2, we can see that the volatile keyword is an ideal way to maintain a shared variable that has any number of reader threads and only one writer thread. Once we add the volatile keyword, we are done; there is no other thread-safety overhead.
Conversely, we cannot rely on the volatile keyword alone for a shared variable that has more than one writer thread accessing it.
One common example for using volatile is to use a volatile boolean variable as a flag to terminate a thread. If you've started a thread, and you want to be able to safely interrupt it from a different thread, you can have the thread periodically check a flag. To stop it, set the flag to true. By making the flag volatile, you can ensure that the thread that is checking it will see it has been set the next time it checks it without having to even use a synchronized block.
No one has mentioned the treatment of reads and writes of the long and double types. Reads and writes are atomic for reference variables and for most primitive variables, except for long and double, which must be declared volatile for their reads and writes to be atomic.
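A small illustration of the long/double point; without volatile, a 64-bit write may be split into two 32-bit halves on some 32-bit JVMs, so a reader could observe a torn value:
public class Ticker {
    // volatile makes reads and writes of this 64-bit value atomic (and visible),
    // even on 32-bit JVMs where a plain long write might otherwise be torn
    private volatile long lastUpdatedMillis;

    public void touch() {
        lastUpdatedMillis = System.currentTimeMillis();  // a single atomic write
    }

    public long lastUpdated() {
        return lastUpdatedMillis;                        // a single atomic read
    }
}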
Yes, volatile must be used whenever you want a mutable variable to be accessed by multiple threads. It is not a very common use case, because typically you need to perform more than a single atomic operation (e.g. check the variable's state before modifying it), in which case you would use a synchronized block instead.
Volatile
volatile -> synchronized [About]
volatile tells the programmer that the value will always be up to date. The problem is that the value can be stored in different kinds of hardware memory, for example CPU registers, CPU caches, or RAM. CPU registers and CPU caches belong to a single CPU and cannot share data, unlike RAM, which comes to the rescue in a multithreaded environment.
The volatile keyword says that a variable will be read from and written to RAM directly, which has some computational cost.
Java 5 extended volatile by supporting happens-before [About]:
A write to a volatile field happens-before every subsequent read of that field.
That is, the read comes after the write.
The volatile keyword does not cure a race condition [About]; to solve that, use the synchronized keyword [About].
As a result, it is safe only when one thread writes and the others just read the volatile value.
In my opinion, two important scenarios other than stopping a thread in which the volatile keyword is used are:
Double-checked locking, used often in the Singleton design pattern. Here the singleton instance needs to be declared volatile.
Spurious wakeups. A thread may sometimes wake up from a wait() call even though no notify has been issued. This behaviour is called a spurious wakeup. It can be countered by using a condition flag (a boolean): put the wait() call in a while loop that keeps waiting while the condition is not yet met. If the thread wakes up for any reason other than notify/notifyAll, it sees that the condition is still unmet and calls wait() again; the thread that calls notify sets the flag first. In this case the boolean flag is declared volatile (see the sketch below).
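A minimal sketch of the guarded-wait idiom from point 2; the class and flag names are invented. The flag is declared volatile as suggested above, even though here it is only touched while holding the monitor:
public class Gate {
    private volatile boolean ready = false;   // condition flag

    public synchronized void await() throws InterruptedException {
        while (!ready) {                      // loop guards against spurious wakeups
            wait();                           // releases the monitor and waits for notify/notifyAll
        }
    }

    public synchronized void open() {
        ready = true;                         // set the condition before notifying
        notifyAll();                          // wake up all waiting threads
    }
}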
Assume that a thread modifies the value of a shared variable and you did not use the volatile modifier for that variable. When other threads want to read this variable's value, they may not see the updated value, because they read the variable from the CPU's cache instead of from main memory. This problem is also known as the visibility problem.
By declaring the shared variable volatile, all writes to it are written back to main memory immediately, and all reads of it are read directly from main memory.
public class SharedObject {
    public volatile int sharedVariable = 0;
}
With non-volatile variables there are no guarantees about when the Java Virtual Machine (JVM) reads data from main memory into CPU caches, or writes data from CPU caches to main memory. This can cause several problems, which are explained below.
Example:
Imagine a situation in which two or more threads have access to a shared object which contains a counter variable declared like this:
public class SharedObject {
    public int counter = 0;
}
Imagine too, that only Thread 1 increments the counter variable, but both Thread 1 and Thread 2 may read the counter variable from time to time.
If the counter variable is not declared volatile, there is no guarantee about when its value is written from the CPU cache back to main memory. This means that the counter value in the CPU cache may not be the same as the value in main memory.
The problem of threads not seeing the latest value of a variable, because it has not yet been written back to main memory by another thread, is called a "visibility" problem: the updates of one thread are not visible to other threads.
You'll need to use the volatile keyword, or synchronized, or any other concurrency control tools and techniques at your disposal, if you are developing a multithreaded application. An example of such an application is a desktop app.
If you are developing an application that will be deployed to an application server (Tomcat, JBoss AS, Glassfish, etc.), you don't have to handle concurrency control yourself, as it is already addressed by the application server. In fact, if I remember correctly, the Java EE standard prohibits any concurrency control in servlets and EJBs, since it is part of the 'infrastructure' layer that you are supposed to be freed from handling. You only do concurrency control in such an app if you're implementing singleton objects, and even that is already addressed if you wire your components using frameworks like Spring.
So, in most cases of Java development where the application is a web application using an IoC framework like Spring or EJB, you wouldn't need to use volatile.
volatile only guarantees that all threads see the same current value of the variable at the same time; for example, every thread sees the same face of a counter. It is not a replacement for synchronized, the atomic classes, or other mechanisms; it only makes reads and writes of that one variable consistent, so please do not compare it with other Java keywords. Individual reads and writes of a volatile variable are atomic (they fail or succeed at once), but compound operations such as incrementing are not, as the example below shows.
package io.netty.example.telnet;

import java.util.ArrayList;
import java.util.List;

public class Main {

    public static volatile int a = 0;

    public static void main(String args[]) throws InterruptedException {
        List<Thread> list = new ArrayList<Thread>();
        for (int i = 0; i < 11; i++) {
            list.add(new Pojo());
        }
        for (Thread thread : list) {
            thread.start();
        }
        Thread.sleep(20000);
        System.out.println(a);
    }
}

class Pojo extends Thread {
    int a = 10001;

    public void run() {
        while (a-- > 0) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Main.a++;
            System.out.println("a = " + Main.a);
        }
    }
}
Whether you declare the variable volatile or not, the results will differ between runs. But if you use AtomicInteger as below, the results will always be the same. The same is true with synchronized as well.
package io.netty.example.telnet;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {

    public static volatile AtomicInteger a = new AtomicInteger(0);

    public static void main(String args[]) throws InterruptedException {
        List<Thread> list = new ArrayList<Thread>();
        for (int i = 0; i < 11; i++) {
            list.add(new Pojo());
        }
        for (Thread thread : list) {
            thread.start();
        }
        Thread.sleep(20000);
        System.out.println(a.get());
    }
}

class Pojo extends Thread {
    int a = 10001;

    public void run() {
        while (a-- > 0) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Main.a.incrementAndGet();
            System.out.println("a = " + Main.a);
        }
    }
}
While I see many good theoretical explanations in the answers here, I am adding a practical example with an explanation:
1.
CODE RUN WITHOUT VOLATILE USE
public class VisibilityDemonstration {

    private static int sCount = 0;

    public static void main(String[] args) {
        new Consumer().start();
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            return;
        }
        new Producer().start();
    }

    static class Consumer extends Thread {
        @Override
        public void run() {
            int localValue = -1;
            while (true) {
                if (localValue != sCount) {
                    System.out.println("Consumer: detected count change " + sCount);
                    localValue = sCount;
                }
                if (sCount >= 5) {
                    break;
                }
            }
            System.out.println("Consumer: terminating");
        }
    }

    static class Producer extends Thread {
        @Override
        public void run() {
            while (sCount < 5) {
                int localValue = sCount;
                localValue++;
                System.out.println("Producer: incrementing count to " + localValue);
                sCount = localValue;
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    return;
                }
            }
            System.out.println("Producer: terminating");
        }
    }
}
In the above code, there are two threads: Producer and Consumer.
The producer thread iterates over its loop 5 times, with a sleep of 1000 milliseconds (1 second) between iterations. In every iteration, the producer increases the value of the sCount variable by 1, so over all its iterations the producer changes sCount from 0 to 5.
The consumer thread loops continuously and prints whenever the value of sCount changes, until the value reaches 5, at which point it ends.
Both threads are started at almost the same time, so both the producer and the consumer should print the value of sCount 5 times.
OUTPUT
Consumer: detected count change 0
Producer: incrementing count to 1
Producer: incrementing count to 2
Producer: incrementing count to 3
Producer: incrementing count to 4
Producer: incrementing count to 5
Producer: terminating
ANALYSIS
In the above program, when the producer thread updates the value of sCount, it does update the variable in main memory (the memory from which every thread initially reads the value). But the consumer thread reads sCount from main memory only the first time, and then caches the value in its own memory. So even if the original sCount in main memory has been updated by the producer thread, the consumer keeps reading its cached, stale value. This is called the VISIBILITY PROBLEM.
2.
CODE RUN WITH VOLATILE USE
In the above code, replace the line where sCount is declared with the following:
private volatile static int sCount = 0;
OUTPUT
Consumer: detected count change 0
Producer: incrementing count to 1
Consumer: detected count change 1
Producer: incrementing count to 2
Consumer: detected count change 2
Producer: incrementing count to 3
Consumer: detected count change 3
Producer: incrementing count to 4
Consumer: detected count change 4
Producer: incrementing count to 5
Consumer: detected count change 5
Consumer: terminating
Producer: terminating
ANALYSIS
When we declare a variable volatile, all reads of and writes to that variable go directly to main memory; its value is never cached.
Because the value of sCount is never cached by any thread, the consumer always reads the current value of sCount from main memory (where it is being updated by the producer thread). So in this case the output is correct: both threads print the different values of sCount 5 times.
In this way, the volatile keyword solves the VISIBILITY PROBLEM.
Yes, I use it quite a lot - it can be very useful for multi-threaded code. The article you pointed to is a good one. Though there are two important things to bear in mind:
1. You should only use volatile if you completely understand what it does and how it differs from synchronized. In many situations volatile appears, on the surface, to be a simpler, more performant alternative to synchronized, when often a better understanding of volatile would make clear that synchronized is the only option that would work.
2. volatile doesn't actually work in a lot of older JVMs, although synchronized does. I remember seeing a document that referenced the various levels of support in different JVMs, but unfortunately I can't find it now. Definitely look into it if you're using Java pre-1.5 or if you don't have control over the JVMs that your program will be running on.
Absolutely, yes. (And not just in Java, but also in C#.) There are times when you need to get or set a value that is guaranteed to be an atomic operation on your given platform, an int or boolean, for example, but do not require the overhead of thread locking. The volatile keyword allows you to ensure that when you read the value that you get the current value and not a cached value that was just made obsolete by a write on another thread.
Every thread accessing a volatile field will read its current value before continuing, instead of (potentially) using a cached value.
Only member variables can be volatile or transient.
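A quick illustration of that rule; the names are arbitrary:
public class Modifiers {
    private volatile boolean flag;        // OK: instance field
    private static volatile int hits;     // OK: static field
    private transient String scratch;     // OK: transient is also a field-only modifier

    public void work() {
        // volatile int local = 0;        // does not compile: local variables cannot be volatile
        int local = flag ? 1 : 0;         // locals are confined to one thread anyway
        hits += local;                    // note: this compound update is still not atomic
    }
}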
There are two different uses of the volatile keyword:
1. It prevents the JVM from reading values from a register (think of it as a cache) and forces the value to be read from memory.
2. It reduces the risk of memory consistency errors.
1. Prevents the JVM from reading values from a register and forces the value to be read from memory.
A busy flag is used to prevent a thread from continuing while the device is busy and the flag is not protected by a lock:
while (busy) {
    /* do something else */
}
The testing thread will continue when another thread turns off the busy flag:
busy = false;
However, since busy is accessed frequently in the testing thread, the JVM may optimize the test by placing the value of busy in a register and then testing the contents of that register without re-reading busy from memory before every test. The testing thread would never see busy change, while the other thread only changes the value of busy in memory, resulting in an infinite loop. Declaring the busy flag as volatile forces its value to be read from memory before each test.
2. Reduces the risk of memory consistency errors.
Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a "happens-before" relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads.
Reading and writing without memory consistency errors in this way is called an atomic action.
An atomic action is one that effectively happens all at once. An atomic action cannot stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete.
Below are the actions that are atomic:
Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double).
Reads and writes are atomic for all variables declared volatile (including long and double variables).
Cheers!
volatile does the following:
1> Reads and writes of volatile variables by different threads always go to main memory, not to the thread's own cache or a CPU register, so each thread always deals with the latest value.
2> When two different threads work with the same instance or static variables on the heap, one may see the other's actions out of order. See Jeremy Manson's blog on this. volatile helps here.
The following fully working code shows how a number of threads can execute in a predefined order and print output without using the synchronized keyword.
thread 0 prints 0
thread 1 prints 1
thread 2 prints 2
thread 3 prints 3
thread 0 prints 0
thread 1 prints 1
thread 2 prints 2
thread 3 prints 3
thread 0 prints 0
thread 1 prints 1
thread 2 prints 2
thread 3 prints 3
To achieve this, we can use the following complete, runnable code.
public class Solution {

    static volatile int counter = 0;
    static int print = 0;

    public static void main(String[] args) {
        Thread[] ths = new Thread[4];
        for (int i = 0; i < ths.length; i++) {
            ths[i] = new Thread(new MyRunnable(i, ths.length));
            ths[i].start();
        }
    }

    static class MyRunnable implements Runnable {
        final int thID;
        final int total;

        public MyRunnable(int id, int total) {
            thID = id;
            this.total = total;
        }

        @Override
        public void run() {
            while (true) {
                if (thID == counter) {
                    System.out.println("thread " + thID + " prints " + print);
                    print++;
                    if (print == total)
                        print = 0;
                    counter++;
                    if (counter == total)
                        counter = 0;
                } else {
                    try {
                        Thread.sleep(30);
                    } catch (InterruptedException e) {
                        // log it
                    }
                }
            }
        }
    }
}
The README at the following GitHub link gives a proper explanation.
https://github.com/sankar4git/volatile_thread_ordering
From the Oracle documentation page, the need for a volatile variable arises to fix memory consistency issues:
Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable.
This means that changes to a volatile variable are always visible to other threads. It also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
As explained in Peter Parker's answer, in the absence of the volatile modifier each thread may work with its own copy of the variable. Making the variable volatile fixes the memory consistency issue.
Have a look at the Jenkov tutorial page for a better understanding.
Have a look at this related SE question for some more details on volatile and its use cases:
Difference between volatile and synchronized in Java
One practical use case:
You have many threads that need to print the current time in a particular format, for example java.text.SimpleDateFormat("HH-mm-ss"). You can have one class that formats the current time and updates a volatile variable every second. All the other threads simply read this volatile variable to print the current time in their log files (see the sketch below).
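A rough sketch of that use case, assuming one publisher thread and any number of reader threads; the class name and details are made up. (SimpleDateFormat itself is not thread-safe, which is exactly why only the single publisher thread touches it.)
import java.text.SimpleDateFormat;
import java.util.Date;

public class ClockPublisher {
    // readers only ever read this reference; the publisher thread replaces it every second
    private volatile String currentTime = "";

    public void start() {
        Thread publisher = new Thread(() -> {
            SimpleDateFormat format = new SimpleDateFormat("HH-mm-ss"); // confined to this thread
            while (!Thread.currentThread().isInterrupted()) {
                currentTime = format.format(new Date());                // volatile write, visible to all readers
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        publisher.setDaemon(true);
        publisher.start();
    }

    public String now() {
        return currentTime;                                             // volatile read, used by logging threads
    }
}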
Volatile variables provide lightweight synchronization. When visibility of the latest data among all threads is the requirement and atomicity can be sacrificed, volatile variables should be preferred. Reads of a volatile variable always return the most recent write done by any thread, since the value is never cached where other processors cannot see it. volatile is lock-free. I use volatile when the scenario meets the criteria mentioned above.
A volatile variable is basically flushed to shared main memory as soon as it is updated, so that the change is reflected to all worker threads immediately.
If you have a multithreaded system and multiple threads work on some shared data, each thread will load the data into its own cache. If we do not lock the resource, a change made in one thread is not guaranteed to be visible to another thread.
With a locking mechanism, we serialize read/write access to the data. If one thread modifies the data, it is stored back to main memory instead of only in that thread's cache, and when other threads need it they read it from main memory. This increases latency dramatically.
To reduce the latency, we declare variables volatile: whenever the value is modified on any processor, the other threads are forced to re-read the fresh value. There is still some delay, but it is cheaper than a full locking mechanism.
Below is a very simple piece of code demonstrating the need for volatile on a variable that is used to control a thread's execution from another thread (this is one scenario where volatile is required).
// Code to prove importance of 'volatile' when state of one thread is being mutated from another thread.
// Try running this class with and without 'volatile' for 'state' property of Task class.
public class VolatileTest {
    public static void main(String[] a) throws Exception {
        Task task = new Task();
        new Thread(task).start();
        Thread.sleep(500);
        long stoppedOn = System.nanoTime();
        task.stop(); // -----> do this to stop the thread
        System.out.println("Stopping on: " + stoppedOn);
    }
}

class Task implements Runnable {

    // Try running with and without 'volatile' here
    private volatile boolean state = true;
    private int i = 0;

    public void stop() {
        state = false;
    }

    @Override
    public void run() {
        while (state) {
            i++;
        }
        System.out.println(i + "> Stopped on: " + System.nanoTime());
    }
}
When volatile is not used: you'll never see the 'Stopped on: xxx' message, even after 'Stopping on: xxx', and the program keeps running.
Stopping on: 1895303906650500
When volatile is used: you'll see 'Stopped on: xxx' immediately.
Stopping on: 1895285647980000
324565439> Stopped on: 1895285648087300
Demo: https://repl.it/repls/SilverAgonizingObjectcode
The volatile keyword, when used with a variable, makes sure that threads reading this variable all see the same, current value. But if you have multiple threads both reading and writing a variable, making it volatile is not enough, and data will be corrupted. Imagine that several threads have read the same value and each one has made some change (say, incremented a counter); when they write back to memory, data integrity is violated. That is why it is necessary to synchronize access to the variable (different ways are possible, as in the sketch below).
If the changes are made by one thread and the others only need to read the value, volatile will be suitable.
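To make the multi-writer case concrete, here is a minimal sketch of one of those 'different ways': guarding the read-modify-write with synchronized (an AtomicInteger, as shown in earlier answers, is the other common choice):
public class SafeCounter {
    private int count = 0;   // no volatile needed: synchronized provides both atomicity and visibility

    public synchronized void increment() {
        count++;             // the read-modify-write happens under the lock, so no lost updates
    }

    public synchronized int get() {
        return count;
    }
}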
I know volatile allows for visibility, AtomicInteger allows for atomicity.
So if I use a volatile AtomicInteger, does it mean I don't have to use any more synchronization mechanisms?
Eg.
class A {
    private volatile AtomicInteger count;

    void someMethod() {
        // do something
        if (count.get() < 10) {
            count.incrementAndGet();
        }
    }
}
Is this threadsafe?
I believe that Atomic* actually gives both atomicity and volatility. So when you call (say) AtomicInteger.get(), you're guaranteed to get the latest value. This is documented in the java.util.concurrent.atomic package documentation:
The memory effects for accesses and updates of atomics generally follow the rules for volatiles, as stated in section 17.4 of The Java™ Language Specification.
get has the memory effects of reading a volatile variable.
set has the memory effects of writing (assigning) a volatile variable.
lazySet has the memory effects of writing (assigning) a volatile variable except that it permits reorderings with subsequent (but not previous) memory actions that do not themselves impose reordering constraints with ordinary non-volatile writes. Among other usage contexts, lazySet may apply when nulling out, for the sake of garbage collection, a reference that is never accessed again.
weakCompareAndSet atomically reads and conditionally writes a variable but does not create any happens-before orderings, so provides no guarantees with respect to previous or subsequent reads and writes of any variables other than the target of the weakCompareAndSet.
compareAndSet and all other read-and-update operations such as getAndIncrement have the memory effects of both reading and writing volatile variables.
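To make the quoted memory-effect rules a bit more concrete, here is a small sketch; the class name and the numbers are mine, not from the documentation or the question.
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);

        // get() behaves like a volatile read: we see the latest published value.
        int current = value.get();

        // compareAndSet atomically writes 6 only if the value is still 5, and has
        // the memory effects of both a volatile read and a volatile write.
        boolean updated = value.compareAndSet(current, current + 1);
        System.out.println("updated=" + updated + ", value=" + value.get()); // updated=true, value=6

        // A second CAS with the now-stale expected value fails and changes nothing.
        boolean updatedAgain = value.compareAndSet(current, current + 1);
        System.out.println("updatedAgain=" + updatedAgain + ", value=" + value.get()); // false, 6
    }
}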
Now if you have
volatile AtomicInteger count;
the volatile part means that each thread will use the latest AtomicInteger reference, and the fact that it's an AtomicInteger means that you'll also see the latest value for that object.
It's not common (IME) to need this - because normally you wouldn't reassign count to refer to a different object. Instead, you'd have:
private final AtomicInteger count = new AtomicInteger();
At that point, the fact that it's a final variable means that all threads will be dealing with the same object - and the fact that it's an Atomic* object means they'll see the latest value within that object.
I'd say no, it's not thread-safe, if you define thread-safe as producing the same result in single-threaded and multithreaded modes. In single-threaded mode the count will never go above 10, but in multithreaded mode it can.
The issue is that get and incrementAndGet are each atomic, but the if check combined with the increment is not. Keep in mind that a non-atomic sequence of operations can be interrupted at any point. For example:
count = 9 currently.
Thread A runs if (count.get() < 10), gets true, and is paused there.
Thread B runs if (count.get() < 10), gets true too, so it runs count.incrementAndGet() and finishes. Now count = 10.
Thread A resumes and runs count.incrementAndGet(); now count = 11, which can never happen in single-threaded mode.
If you want to make it thread-safe without using synchronized which is slower, try this implementation instead:
class A {
    private final AtomicInteger count = new AtomicInteger(0);

    void someMethod() {
        // do something
        if (count.getAndIncrement() < 10) {
            // safe now
        } else {
            count.getAndDecrement(); // roll back, so this thread did nothing to count
        }
    }
}
To maintain the original semantics, and support multiple threads, you could do something like:
public class A {
    private AtomicInteger count = new AtomicInteger(0);

    public void someMethod() {
        int i = count.get();
        while (i < 10 && !count.compareAndSet(i, i + 1)) {
            i = count.get();
        }
    }
}
This ensures the count never exceeds 10, no matter how the threads interleave.
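A quick, hedged sanity check of that claim (the harness below re-creates the class inline so the snippet is self-contained; the thread and call counts are arbitrary):
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCounterCheck {
    private final AtomicInteger count = new AtomicInteger(0);

    public void someMethod() {
        int i = count.get();
        while (i < 10 && !count.compareAndSet(i, i + 1)) {
            i = count.get();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedCounterCheck a = new BoundedCounterCheck();
        Runnable caller = () -> {
            for (int i = 0; i < 1_000; i++) {
                a.someMethod();
            }
        };
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(caller);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // No interleaving can push the count past 10.
        System.out.println("final count = " + a.count.get());
    }
}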
The answer is in this code:
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/concurrent/atomic/AtomicInteger.java
This is the source code of AtomicInteger.
The backing value field is declared volatile.
So AtomicInteger uses volatile internally.
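The real implementation delegates the atomic update to sun.misc.Unsafe, but a simplified, runnable model of the same structure (not the JDK source, just a sketch of it) looks roughly like this: a volatile backing field plus a compare-and-set retry loop.
class SimplifiedAtomicInteger {
    private volatile int value;                   // the field the linked source declares volatile

    public int get() {
        return value;                             // volatile read
    }

    public synchronized boolean compareAndSet(int expect, int update) {
        // The real class does this atomically with a hardware CAS; synchronized stands in here.
        if (value == expect) {
            value = update;                       // volatile write
            return true;
        }
        return false;
    }

    public int incrementAndGet() {
        for (;;) {                                // retry until the CAS succeeds
            int current = get();
            int next = current + 1;
            if (compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public static void main(String[] args) {
        SimplifiedAtomicInteger n = new SimplifiedAtomicInteger();
        System.out.println(n.incrementAndGet()); // prints 1
    }
}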
Your question can be answered in two parts, because there are really two questions in it:
1)
Referring to Oracle's tutorial documentation for atomic variables:
https://docs.oracle.com/javase/tutorial/essential/concurrency/atomicvars.html
The java.util.concurrent.atomic package defines classes that support atomic operations on single variables. All classes have get and set methods that work like reads and writes on volatile variables. That is, a set has a happens-before relationship with any subsequent get on the same variable. The atomic compareAndSet method also has these memory consistency features, as do the simple atomic arithmetic methods that apply to integer atomic variables.
So AtomicInteger does use volatile inside, as other answers here have mentioned, and there is no point in also making your AtomicInteger field volatile. What you need to do is synchronize your method.
You should watch John Purcell's free video on Udemy, where he shows the failure of the volatile keyword when multiple threads try to modify a shared counter. It is a simple and beautiful example.
https://www.udemy.com/course/java-multithreading/learn/lecture/108950#overview
If you change the volatile counter in John's example into an atomic variable, his code is guaranteed to succeed without the synchronized keyword he uses in his tutorial.
2) Coming to your code:
Say thread 1 kicks into action and someMethod does a get and checks the count. It is possible that before the increment executes on thread 1, another thread (say thread 2) kicks in, increases the count to 10, and exits; after which thread 1 resumes and increases the count to 11. This is erroneous output, and it happens because someMethod is not protected in any way against synchronization problems.
I would still recommend watching John Purcell's videos to see where volatile fails, so that you get a better understanding of the keyword. Replace the volatile counter with an AtomicInteger in his example and see the difference.
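As a concrete illustration of part 1's advice, here is a minimal hedged sketch of the question's class with the whole check-then-increment made synchronized; the class name and the limit of 10 come from the question, everything else is mine.
import java.util.concurrent.atomic.AtomicInteger;

// The field is final (no volatile needed on the reference) and the whole
// check-then-increment runs inside one synchronized method, so no other thread
// can slip in between the get and the increment.
class A {
    private final AtomicInteger count = new AtomicInteger(0);

    synchronized void someMethod() {
        // do something
        if (count.get() < 10) {
            count.incrementAndGet();
        }
    }
}
Once the whole method is synchronized, a plain int would serve just as well as the AtomicInteger; the atomic class only pays off when you avoid the lock, as in the compareAndSet loop shown in an earlier answer.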
When synchronization is used there is a performance impact. Can volatile be used in combination with synchronized to reduce that overhead? For example, an instance of Counter will be shared among many threads, and each thread can access Counter's public methods. In the code below, volatile is used for the getter and synchronized for the setter:
public class Counter
{
    private volatile int count;

    public Counter()
    {
        count = 0;
    }

    public int getCount()
    {
        return count;
    }

    public synchronized void increment()
    {
        ++count;
    }
}
Please let me know in which scenarios this might break.
Yes, you definitely can. In fact, if you look at the source code of AtomicInteger, it's essentially what they do: AtomicInteger.get simply returns value, which is a volatile int. The only real difference between what you've done and what they do is that they use a CAS for the increment instead of synchronization. On modern hardware a CAS avoids the need for any mutual exclusion; on older hardware the JVM will put some sort of mutex around the increment.
Volatile reads are about as fast as non-volatile ones, so the reads will be quite fast.
Not only that, but volatile fields are guaranteed not to tear: JLS 17.7 specifies that reads and writes of volatile long and double values are always atomic. So your code would work with a long just as well as with an int.
As Diego Frehner points out, you might not see the result of an increment if you get the value "right as" the increment happens -- you'll either see the before or the after. Of course, if get were synchronized you'd have exactly the same behavior from the read thread -- you'd either see the before-increment or post-increment value. So it's really the same either way. In other words, it doesn't make sense to say that you won't see the value as it's happening -- unless you meant word tearing, which (a) you won't get and (b) you would never want.
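For comparison, this is roughly what the CAS-based alternative the answer describes would look like if the same Counter were written around AtomicInteger (a sketch, not the asker's code):
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public int getCount() {
        return count.get();          // volatile-read semantics, no lock taken
    }

    public void increment() {
        count.incrementAndGet();     // CAS retry loop inside, no monitor needed
    }
}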
1. I have personally used this mechanism of volatile combined with synchronized.
2. You could use synchronized alone and always get a consistent result, but using volatile alone will not always yield the same result.
3. This is because the volatile keyword is not a synchronization primitive. It merely prevents caching of the value in a thread, but it does not prevent two threads from modifying the same value and writing it back concurrently.
4. volatile gives threads concurrent access without a lock, while synchronized allows only one thread at a time to access this and all the other synchronized methods of the class.
5. Using both volatile and synchronized gives you the following:
volatile - the changed value is made visible to every thread, and caching is prevented;
synchronized - only one thread at a time gets access to the synchronized methods of the class.
You will not always get the most up-to-date count when calling getCount(). An AtomicInteger could be appropriate for you.
There wouldn't be a performance gain from using both. volatile guarantees that reads and writes of the variable by threads executing in parallel see a consistent value, by preventing caching. synchronized, when applied to a method (as you do in your example), allows only a single thread to enter that method at a time and blocks the others until execution completes.
Sorry this is such a long question.
I've been doing lots of research lately into multithreading as I slowly implement it into a personal project. However, probably due to an abundance of slightly incorrect examples, the use of synchronized blocks and volatility in certain situations is still a bit unclear to me.
My core question is this: Are changes to references and primitives automatically volatile (that is, performed on the main memory and not a cache) when a thread is inside a synchronized block, or does the read also have to be synchronized for it to work properly?
If so: What is the purpose of synchronizing a simple getter method? (See example 1.) Also, are ALL changes sent to main memory as long as the thread has synchronized on anything? E.g. if it is sent off to do loads of work all over the place inside a very high-level sync, will every single change it then makes go to main memory, and nothing ever to the cache, until it's unlocked again?
If not: Does the change have to be explicitly inside a synchronized block, or can Java actually pick up on, for example, uses of the Lock object? (See example 3.)
If either: Does the synchronized object need to be related to the reference/primitive being changed in any way (e.g. the immediate object that contains it)? Can I write by syncing on one object and read by syncing on another, if it's otherwise safe? (See example 2.)
(please note for the following examples that I know that synchronized methods and synchronized(this) are frowned upon and why, but discussion about that is beyond the scope of my question)
Example 1:
class Counter{
    int count = 0;

    public synchronized void increment(){
        count++;
    }

    public int getCount(){
        return count;
    }
}
In this example, increment() needs to be synchronized since ++ is not an atomic operation. As such, two threads incrementing at the same time may result in an overall increase of only 1 to the count. The count field also needs to be one whose reads and writes are atomic (e.g. not a long or double), and an int is, so that much is fine.
Does getCount() need to be synchronized here, and why exactly? The explanation I have heard most often is that I will have no guarantee whether the count returned will be the pre- or post-increment value. However, this seems like an explanation for something slightly different that has found itself in the wrong place. I mean, if I were to synchronize getCount(), I still see no such guarantee: it now comes down to not knowing the locking order, instead of not knowing whether the actual read happens before or after the actual write.
Example 2:
Is the following example thread-safe, if you assume through trickery not shown here that none of these methods will ever be called at the same time? Will count increment in the expected way if a randomly chosen increment method is used each time, and then be read properly, or does the lock have to be the same object? (By the way, I fully realise how ridiculous this example is, but I'm more interested in theory than practice.)
class Counter{
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    private final Object lock3 = new Object();

    int count = 0;

    public void increment1(){
        synchronized(lock1){
            count++;
        }
    }

    public void increment2(){
        synchronized(lock2){
            count++;
        }
    }

    public int getCount(){
        synchronized(lock3){
            return count;
        }
    }
}
Example 3:
Is the happens-before relationship simply a Java concept, or is it an actual thing built into the JVM? Even though I can guarantee a conceptual happens-before relationship for this next example, is Java smart enough to pick it up if it is a built-in thing? I am assuming it is not, but is this example actually thread-safe? And if it is thread-safe, what about if getCount() did no locking?
class Counter{
    private final Lock lock = new ReentrantLock(); // Lock is an interface; ReentrantLock is a concrete implementation

    int count = 0;

    public void increment(){
        lock.lock();
        count++;
        lock.unlock();
    }

    public int getCount(){
        lock.lock();
        int count = this.count;
        lock.unlock();
        return count;
    }
}
Yes, the read has to be synchronized as well. This page says:
The results of a write by one thread are guaranteed to be visible to a read by another thread only if the write operation happens-before the read operation.
[...]
An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor
The same page says:
Actions prior to "releasing" synchronizer methods such as Lock.unlock, Semaphore.release, and CountDownLatch.countDown happen-before actions subsequent to a successful "acquiring" method such as Lock.lock
So locks offer the same visibility guarantees as synchronized blocks.
Whether you use synchronized blocks or locks, the visibility is only guaranteed if the reader thread uses the same monitor or lock as the writer thread.
Your Example 1 is incorrect: the getter must be synchronized as well if you want to see the latest value of the count.
Your example 2 is incorrect because it uses different locks to guard the same count.
Your example 3 is OK. If the getter did not lock, you could see an older value of the count. The happens-before is something that is guaranteed by the JVM. The JVM has to respect the rules specified, by flushing caches to the main memory for example.
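Going back to Example 2, here is a minimal sketch (mine, not from the question) of what it would have to look like for the visibility guarantee to hold: every access to count goes through the same lock object.
// Every read and write of count synchronizes on the same monitor, so each
// unlock happens-before the next lock and the reader sees the latest value.
class Counter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() {
        synchronized (lock) {
            return count;
        }
    }
}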
Try to view it in terms of two distinct, simple operations:
Locking (mutual exclusion),
Memory barrier (cache sync, instruction reordering barrier).
Entering a synchronized block entails both locking and a memory barrier; leaving the synchronized block entails unlocking plus a memory barrier; reading or writing a volatile field entails a memory barrier only. Thinking in these terms, I think you can clarify all the questions above for yourself.
As for Example 1, the reading thread will not cross any kind of memory barrier. It's not just a matter of seeing the value from before or after the read; the reader may never observe any change to the variable after its thread has started.
Example 2 is the most interesting issue you raise. You are indeed given no guarantees by the JLS in this case. In practice you won't be given any ordering guarantees (it's as if the locking aspect wasn't there at all), but you'll still have the benefit of the memory barriers, so you will observe changes, unlike in the first example. Basically, this is exactly the same as removing synchronized and tagging the int as volatile (apart from the runtime costs of acquiring the locks).
Regarding Example 3, by "just a Java thing" I feel you have generics with erasure in mind, something that only the static code checking is aware of. This is not like that -- both locks and memory barriers are pure runtime artifacts. In fact, the compiler can't reason about them at all.