Let's say I have a static int that affects the behaviour of the class.
class A {
    public static int classFlag = 0;
    private int myFlag = 0;

    public void doSomething() {
        if (myFlag != classFlag) {
            myFlag = classFlag;
        }
        /* myFlag-dependent behaviour */
    }
}
There's exactly one thread in the system that changes classFlag, and /*myFlag-dependent behaviour*/ does not require that an update to classFlag is immediately visible to all threads.
I would therefore like to keep classFlag non-volatile to avoid introducing a costly and completely unnecessary memory barrier.
Can I rely on an update to classFlag being eventually visible?
The reader thread that executes a piece of code based on myFlag may never see your update, which can result in
1) very uncertain behavior
2) a missed update (you never know when doSomething() will be called again)
I think the cost of volatile is low enough to warrant its correct usage rather than leaving code with such bugs.
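A minimal sketch of the fix being argued for here (simply the question's class with the field made volatile):

class A {
    public static volatile int classFlag = 0;
    private int myFlag = 0;

    public void doSomething() {
        if (myFlag != classFlag) {
            myFlag = classFlag; // the volatile read guarantees the writer's update eventually becomes visible here
        }
        /* myFlag-dependent behaviour */
    }
}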
Related
I need to create a class that has a shared-between-threads object (let's call it SharedObject). The special thing about SharedObject is that it holds a String that will be returned in a multithreaded environment, and sometimes the entire SharedObject will be replaced by changing the field reference to a newly created object.
I do not want to make the read and write both synchronized on the same monitor because the write scenario happens rarely while the read scenario is quite common. Therefore I did the following:
public class ObjectHolder {
    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        if (!isObjectStillValid()) {
            obtainNewSharedObject();
        }
        return sharedObject.getCommonString();
    }

    public synchronized void obtainNewSharedObject() {
        /* This is in case multiple threads wait on this lock;
           after the first one obtains a new object the others can just
           use it and should not obtain a new one */
        if (!isObjectStillValid()) {
            sharedObject = new SharedObject(/*some parameters from somewhere*/);
        }
    }
}
From what I have read in the documentation and on Stack Overflow, the synchronized keyword will ensure that only one thread can access the synchronized block on the same object instance (therefore a write race / multiple unnecessary writes is a non-issue), while the volatile keyword on the field reference will ensure the reference value is written directly to main memory (not cached locally).
Are there any other pitfalls I am missing?
I want to be sure that within the synchronized block, when sharedObject is written to, the new value of sharedObject is visible to any other thread at the latest when the lock for obtainNewSharedObject() is released. Should this not be guaranteed, I could encounter unnecessary writes that replace correct values, which would be a big problem in this case.
I know that to be absolutely safe I could just make getSharedObjectString() synchronized as well; however, as stated previously, I do not want to block reading if it is not needed.
This way reading is non-blocking; only when a write scenario occurs does it block.
I should probably mention that the method isObjectStillValid() is thread-independent (entirely SharedObject and system-clock based) and is therefore a valid thread-free check to be used for write scenarios.
Edit: Please note I could not find a similar topic on stackoverflow, but it may exist. Sorry if that is the case.
Edit2: Thank you for all the comments. Edit because apparently I cannot upvote yet (I can, but it does not show). While my solution is functional as long as isObjectStillValid is thread-safe, it can suffer from decreased performance due to multiple accesses to the volatile field. I will most likely improve it using the upgraded double-checked locking solution. I will also analyse all the other possibilities mentioned here in depth.
Why don't you use AtomicReference? It uses optimistic locking, meaning that no actual thread locking is involved. Internally it uses compare-and-swap. If you look at the implementation, it uses volatile internally, and I would trust Doug Lea to implement it correctly :)
Apart from this, there are many more ways of synchronizing between many readers and a few writers, e.g. a ReadWriteLock.
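As a rough sketch of what the holder could look like built around an AtomicReference (my own illustration; it assumes a hypothetical SharedObject.isStillValid() instance method in place of the question's holder-level isObjectStillValid()):

import java.util.concurrent.atomic.AtomicReference;

public class ObjectHolder {
    private final AtomicReference<SharedObject> ref =
            new AtomicReference<>(new SharedObject(/* initial parameters */));

    public String getSharedObjectString() {
        SharedObject current = ref.get();
        if (!current.isStillValid()) {
            SharedObject replacement = new SharedObject(/* some parameters from somewhere */);
            // compareAndSet installs the replacement only if no other thread beat us to it;
            // if it fails, simply use whatever the winning thread installed.
            if (!ref.compareAndSet(current, replacement)) {
                current = ref.get();
            } else {
                current = replacement;
            }
        }
        return current.getCommonString();
    }
}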
This looks like a classic double-checked locking pattern. While your implementation is logically correct - thanks to the use of volatile on sharedObject - it might not be the most performant.
The recommended pattern for Java 1.5 and later is shown on the linked Wikipedia page.
// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }

    // other functions and members...
}
Note the use of a localRef for accessing the helper field. This limits access to the volatile field in the common case to a single read instead of potentially two: once for the check and once for the return. See the Wikipedia page again, just after the recommended pattern sample:
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 25 percent.[7]
Depending on how isObjectStillValid() accesses sharedObject, you might benefit from a similar pattern.
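Applied to the question's class, the fast path could look roughly like this (a sketch only; it assumes a hypothetical isObjectStillValid(SharedObject) overload that validates the copy it is handed instead of re-reading the field):

public String getSharedObjectString() {
    SharedObject localRef = sharedObject;   // single volatile read on the fast path
    if (!isObjectStillValid(localRef)) {
        obtainNewSharedObject();            // synchronized slow path from the question
        localRef = sharedObject;            // re-read after the refresh
    }
    return localRef.getCommonString();
}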
This sounds like a use case where a ReadWriteLock would be appropriate.
The basic idea is that there can be multiple readers simultaneously, or one writer exclusively. Here you can find an example of how to use it in a List implementation.
Copy-pasted in case the site goes down:
import java.util.*;
import java.util.concurrent.locks.*;

/**
 * ReadWriteList.java
 * This class demonstrates how to use ReadWriteLock to add concurrency
 * features to a non-threadsafe collection
 * @author www.codejava.net
 */
public class ReadWriteList<E> {
    private List<E> list = new ArrayList<>();
    private ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public ReadWriteList(E... initialElements) {
        list.addAll(Arrays.asList(initialElements));
    }

    public void add(E element) {
        Lock writeLock = rwLock.writeLock();
        writeLock.lock();
        try {
            list.add(element);
        } finally {
            writeLock.unlock();
        }
    }

    public E get(int index) {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.get(index);
        } finally {
            readLock.unlock();
        }
    }

    public int size() {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.size();
        } finally {
            readLock.unlock();
        }
    }
}
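A short usage sketch (my own addition, assuming the ReadWriteList class above): any number of threads may call get() and size() concurrently, while add() takes the exclusive write lock.

ReadWriteList<String> names = new ReadWriteList<>("a", "b");
names.add("c");
System.out.println(names.get(2) + " of " + names.size()); // prints "c of 3"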
I am having difficulties understanding memory barriers and cache coherence in Java, and how these concepts relate to arrays.
I have the following scenario, where one thread modifies an array (both the reference to it and one of its internal values) and another thread reads from it.
int[] integers;
volatile boolean memoryBarrier;

public void resizeAndAddLast(int value) {
    integers = Arrays.copyOf(integers, integers.length + 1);
    integers[integers.length - 1] = value;
    memoryBarrier = true;
}

public int read(int index) {
    boolean memoryBarrier = this.memoryBarrier;
    return integers[index];
}
My question is, does this do what I think it does, i.e. does "publishing" to memoryBarrier and subsequently reading the variable force a cache-coherence action and make sure that the reader thread will indeed get both the latest array reference and the correct underlying value at the specified index?
My understanding is that the array reference does not have to be declared volatile, it should be enough to force a cache-coherence action using any volatile field. Is this reasoning correct?
EDIT: there is precisely one writer thread and many reader threads.
Nope, your code is thread-unsafe. A variation which would make it safe is as follows:
void raiseFlag() {
    if (memoryBarrier == true)
        throw new IllegalStateException("Flag already raised");
    memoryBarrier = true;
}

public int read(int index) {
    if (memoryBarrier == false)
        throw new IllegalStateException("Flag not raised yet");
    return integers[index];
}
You only get to raise the flag once and you don't get to publish more than one integers array. This would be quite useless for your use case, though.
Now, as to the why... You do not guarantee that between the first and second line of read() there wasn't an intervening write to integers which was observed by the second line. The lack of a memory barrier does not prevent another thread from observing an action. It makes the result unspecified.
There is a simple idiom that would make your code thread-safe (specialized for the assumption that a single thread calls resizeAndAddLast; otherwise more code, and an AtomicReference, would be necessary):
volatile int[] integers;

public void resizeAndAddLast(int value) {
    int[] copy = Arrays.copyOf(integers, integers.length + 1);
    copy[copy.length - 1] = value;
    integers = copy;
}

public int read(int index) {
    return integers[index];
}
In this code you never touch an array once it has been published, therefore whatever you dereference from read() will be observed as intended, with the index updated.
There are multiple reasons why it won't work in general:
Java doesn't say anything about memory barriers or about the ordering of unrelated variables. Global memory barriers are a side effect of x86.
Even with global memory barriers: the write order of the array reference and the indexed array value is undefined. It is guaranteed that both happen-before the memory barrier, but in which order? An unsynchronized read may see the reference but not the array value. Your read barrier doesn't help here in the case of multiple reads/writes.
Beware of arrays of references: visibility of the referenced values requires special attention.
A slightly better approach would be to declare the array itself as volatile and treat its values as immutable:
volatile int[] integers; // volatile (or maybe better AtomicReference)

public void resizeAndAddLast(int value) {
    // enforce exactly one volatile read!
    int[] copy = integers;
    copy = Arrays.copyOf(copy, copy.length + 1);
    copy[copy.length - 1] = value;
    // may lose concurrent updates. Add synchronization or a compareExchange-loop!
    integers = copy;
}

public int read(int index) {
    return integers[index];
}
Unless you declare a variable volatile, there is no guarantee that the thread will see the correct value. Volatile guarantees that a change to the variable is visible to other threads: conceptually it is written to and read from main memory instead of a CPU-local cache.
You will also need synchronization so that the reading thread does not read before the write is complete. Any reason for going with an array rather than an ArrayList, given that you are already using Arrays.copyOf and resizing?
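As a rough sketch of that alternative (my own illustration, not from the answer; the class name IntStore is made up): let the list handle resizing and guard both operations with a single lock.

import java.util.ArrayList;
import java.util.List;

class IntStore {
    private final List<Integer> values = new ArrayList<>();

    public synchronized void addLast(int value) {
        values.add(value);
    }

    public synchronized int read(int index) {
        return values.get(index);
    }
}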
I am new to volatile variables, but I was going through an article which states: "2) Volatile variables can be used as an alternative way of achieving synchronization in Java in some cases, like visibility. With a volatile variable it is guaranteed that all reader threads will see the updated value of the volatile variable once the write operation is completed; without the volatile keyword different reader threads may see different values."
Could you guys please show this to me with a small Java program, so that it is also technically clear to me?
What I take from my understanding is:
Volatile means each thread accessing the variable will have its own private copy which is the same as the original one. But if the thread is going to change that private copy, then the original one will not get updated.
public class Test1 {
    volatile int i = 0, j = 0;

    public void add1() {
        i++;
        j++;
    }

    public void printing() {
        System.out.println("i==" + i + " j==" + j);
    }

    public static void main(String[] args) {
        Test1 t1 = new Test1();
        Test1 t2 = new Test1();
        t1.add1();      // for t1, i=1, j=1
        t2.printing();  // for t2 the value of i and j is still i=0, j=0
        t1.printing();  // prints the value of i and j for t1, i.e. i=1, j=1
        t2.add1();      // for t2 the value of i and j is changed to i=1, j=1
        t2.printing();  // prints the value of i and j for t2: i=1, j=1
    }
}
Could you please show a small program demonstrating volatile's behaviour, so that it is also technically clear to me?
A volatile variable, as you have read, guarantees visibility but doesn't guarantee atomicity - another important aspect of thread safety. I will try to explain with an example:
public class Counter {
    private volatile int counter;

    public int increment() {
        System.out.println("Counter:" + counter); // reading always gives the correct value
        return counter++; // atomicity isn't guaranteed; this will eventually lead to skew/error in the expected value of counter.
    }

    public int decrement() {
        System.out.println("Counter:" + counter);
        return counter--;
    }
}
In the example, you can see that the read operation will always give the correct value of counter at an instant in time; however, for compound operations (like evaluating a condition and then acting on it, or reading and writing on the basis of the read value) thread safety is not guaranteed.
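To see the visibility aspect itself, a small self-contained demo along these lines (my own sketch, not from the quoted article) usually makes the difference obvious: with volatile the reader thread is guaranteed to eventually see the update and stop; without it, the loop may spin forever.

public class VisibilityDemo {
    private static volatile boolean stop = false; // try removing volatile to see the problem

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait until the writer's update becomes visible
            }
            System.out.println("Reader saw the updated flag and stopped.");
        });
        reader.start();

        Thread.sleep(1000); // let the reader spin for a while
        stop = true;        // volatile write: guaranteed to become visible to the reader
        reader.join();
    }
}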
You can refer to this answer for additional details.
Volatile means each thread accessing the variable will have its own
private copy which is the same as the original one. But if the thread is going
to change that private copy, then the original one will not get updated.
I am not sure I understand you correctly, but volatile fields imply they are read from and written to the main memory accessible to all threads - there are no thread-specific copies (caching) of the variable.
From JLS,
A field may be declared volatile, in which case the Java Memory Model
ensures that all threads see a consistent value for the variable
Intro:
I want to create a multithreaded Android app. My problem is the communication between the threads. I read about communication between threads and I came across things like the Looper/Handler design, which seemed quite involved, and atomic variables like AtomicInteger. For now, I used AtomicInteger as a means of communication, but since I am not very experienced in Java, I am not sure if that is bad in my case / if there is a better solution for my particular purpose. I also got a little suspicious of my approach when I noticed I actually need something like AtomicFloat, but it doesn't exist. I felt like I was misusing the concept. I also found that you can write an AtomicFloat yourself, but I am just not sure if I am on the right track or if there is a better technique.
Question:
Is it OK/good to use atomic variables and also implement an AtomicFloat for my particular purpose (described below), or is there a better way of handling the communication?
Purpose/Architecture of the App using AtomicVariables so far:
I have 4 Threads with the following purpose:
1. SensorThread: reads sensor data and saves the most recent values in atomic variables like AtomicFloat gyro_z, AtomicFloat gyro_y, ...
2. CommunicationThread: communicates with the PC, interprets commands which come from the socket, and sets the state of the app in terms of an AtomicInteger state.
3. UIThread: displays the current sensor values from AtomicFloat gyro_z, AtomicFloat gyro_y, ...
4. ComputationThread: uses the sensor values AtomicFloat gyro_z, AtomicFloat gyro_y, ... and the state AtomicInteger state to perform calculations and send commands over USB.
You basically have a readers-writers problem, with two readers and (for the moment) only one writer. If you just want to pass simple types between threads, an AtomicInteger or a similarly implemented AtomicFloat will be just fine.
However, a more accommodating solution, which would enable you to work with more complex data types, would be a ReadWriteLock protecting the code where you read or write your object data, e.g.:
private ReadWriteLock readWriteLock = new ReentrantReadWriteLock(); // the reentrant impl
....

public void readMethod() {
    readWriteLock.readLock().lock();
    try {
        // code that simply _reads_ your object
    } finally {
        readWriteLock.readLock().unlock();
    }
}

public void writeMethod() {
    readWriteLock.writeLock().lock();
    try {
        // ... code that modifies your shared object / objects
    } finally {
        readWriteLock.writeLock().unlock();
    }
}
This will only allow either one writer or multiple concurrent readers to access your shared objects.
This would enable you for example to work with a complex type that looks like this:
public class SensorRead {
    public java.util.Date dateTimeForSample;
    public float value;
}
While using this data type you need to make sure the two fields are set and modified safely and atomically; the AtomicXXX types are not useful anymore here.
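For illustration, a sketch of guarding such a type with the ReadWriteLock approach from above (my own addition; the name SensorReadHolder is made up), so the two fields are always read and written together:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SensorReadHolder {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final SensorRead latest = new SensorRead();

    public void publish(java.util.Date when, float value) {
        rwLock.writeLock().lock();
        try {
            latest.dateTimeForSample = when;
            latest.value = value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public SensorRead snapshot() {
        rwLock.readLock().lock();
        try {
            SensorRead copy = new SensorRead();
            copy.dateTimeForSample = latest.dateTimeForSample;
            copy.value = latest.value;
            return copy;
        } finally {
            rwLock.readLock().unlock();
        }
    }
}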
You have to first ask yourself if you truly need the functionality of a theoretical AtomicFloat. The only benefit you could have over a simple volatile float is the compareAndSet and the addAndGet operations (since I guess increment and decrement don't really make sense in the case of floats).
If you really need those, you could probably implement them by studying the code of AtomicInteger e.g.:
public final int addAndGet(int delta) {
    for (;;) {
        int current = get();
        int next = current + delta;
        if (compareAndSet(current, next))
            return next;
    }
}
Now the only problem here is that compareAndSet uses platform-specific calls that don't exist for floats, so you'll probably need to emulate it by using the Float.floatToIntBits method to obtain an int and then using the CAS of AtomicInteger, something like:
// Back the float with an AtomicInteger holding its raw bits,
// so the int CAS can be reused for floats.
private final AtomicInteger bits = new AtomicInteger(Float.floatToIntBits(0f));

public final float get() {
    return Float.intBitsToFloat(bits.get());
}

public final void set(float newValue) {
    bits.set(Float.floatToIntBits(newValue));
}

public final boolean compareAndSet(float expect, float next) {
    return bits.compareAndSet(Float.floatToIntBits(expect),
                              Float.floatToIntBits(next));
}

public final float addAndGet(float delta) {
    for (;;) {
        float current = get();
        float next = current + delta;
        if (compareAndSet(current, next))
            return next;
    }
}
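As a usage sketch (assuming the methods above are wrapped in a class named AtomicFloat with a no-arg constructor, which is my own naming):

AtomicFloat gyroZ = new AtomicFloat();
gyroZ.set(0.5f);
gyroZ.addAndGet(0.25f);                              // lock-free read-modify-write
boolean swapped = gyroZ.compareAndSet(0.75f, 1.0f);  // true if no other thread intervened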
A warning shows up every time I synchronize on a non-final class field. Here is the code:
public class X
{
    private Object o;

    public void setO(Object o)
    {
        this.o = o;
    }

    public void x()
    {
        synchronized (o) // synchronization on a non-final field
        {
        }
    }
}
so I changed the coding in the following way:
public class X
{
    private final Object o;

    public X()
    {
        o = new Object();
    }

    public void x()
    {
        synchronized (o)
        {
        }
    }
}
I am not sure the above code is the proper way to synchronize on a non-final class field. How can I synchronize a non final field?
First of all, I encourage you to really try hard to deal with concurrency issues on a higher level of abstraction, i.e. solving it using classes from java.util.concurrent such as ExecutorServices, Callables, Futures etc.
That being said, there's nothing wrong with synchronizing on a non-final field per se. You just need to keep in mind that if the object reference changes, the same section of code may be run in parallel. I.e., if one thread runs the code in the synchronized block and someone calls setO(...), another thread can run the same synchronized block on the same instance concurrently.
Synchronize on the object which you need exclusive access to (or, better yet, an object dedicated to guarding it).
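A minimal sketch of the "object dedicated to guarding it" idea (my own illustration, applied to the question's class): both the writer and the reader synchronize on a final lock that never changes, regardless of what happens to the o reference.

public class X
{
    private final Object lock = new Object();
    private Object o;

    public void setO(Object o)
    {
        synchronized (lock)
        {
            this.o = o;
        }
    }

    public void x()
    {
        synchronized (lock)
        {
            // work with o while holding the same lock the writer uses
        }
    }
}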
It's really not a good idea - because your synchronized blocks are no longer really synchronized in a consistent way.
Assuming the synchronized blocks are meant to be ensuring that only one thread accesses some shared data at a time, consider:
Thread 1 enters the synchronized block. Yay - it has exclusive access to the shared data...
Thread 2 calls setO()
Thread 3 (or still 2...) enters the synchronized block. Eek! It thinks it has exclusive access to the shared data, but thread 1 is still furtling with it...
Why would you want this to happen? Maybe there are some very specialized situations where it makes sense... but you'd have to present me with a specific use case (along with ways of mitigating the sort of scenario I've given above) before I'd be happy with it.
I agree with one of John's comments: you must always use a final lock dummy while accessing a non-final variable, to prevent inconsistencies in case the variable's reference changes. So in any case, and as a first rule of thumb:
Rule#1: If a field is non-final, always use a (private) final lock dummy.
Reason #1: You hold the lock and change the variable's reference by yourself. Another thread waiting outside the synchronized lock will be able to enter the guarded block.
Reason #2: You hold the lock and another thread changes the variable's reference. The result is the same: Another thread can enter the guarded block.
But when using a final lock dummy, there is another problem: you might get wrong data, because your non-final object will only be synchronized with RAM when calling synchronized(object). So, as a second rule of thumb:
Rule#2: When locking a non-final object you always need to do both: Using a final lock dummy and the lock of the non-final object for the sake of RAM synchronisation. (The only alternative will be declaring all fields of the object as volatile!)
These locks are also called "nested locks". Note that you must always acquire them in the same order, otherwise you will get a deadlock:
public class X {
    private final Object LOCK = new Object();
    private Object o;

    public void setO(Object o) {
        this.o = o;
    }

    public void x() {
        synchronized (LOCK) {
            synchronized (o) {
                // do something with o...
            }
        }
    }
}
As you can see I write the two locks directly on the same line, because they always belong together. Like this, you could even do 10 nesting locks:
synchronized (LOCK1) {
    synchronized (LOCK2) {
        synchronized (LOCK3) {
            synchronized (LOCK4) {
                // entering the locked space
            }
        }
    }
}
Note that this code won't break if another thread just acquires an inner lock like synchronized (LOCK3). But it will break if another thread calls something like this:
synchronized (LOCK4) {
    synchronized (LOCK1) { // deadlock!
        synchronized (LOCK3) {
            synchronized (LOCK2) {
                // will never enter here...
            }
        }
    }
}
There is only one workaround for such nested locks while handling non-final fields:
Rule #2 - Alternative: Declare all fields of the object as volatile. (I won't talk here about the disadvantages of doing this, e.g. preventing any storage in x-level caches even for reads, and so on.)
So therefore aioobe is quite right: Just use java.util.concurrent. Or begin to understand everything about synchronisation and do it by yourself with nested locks. ;)
For more details why synchronisation on non-final fields breaks, have a look into my test case: https://stackoverflow.com/a/21460055/2012947
And for more details why you need synchronized at all due to RAM and caches have a look here: https://stackoverflow.com/a/21409975/2012947
I'm not really seeing the correct answer here, which is: it's perfectly alright to do it.
I'm not even sure why it's a warning, there is nothing wrong with it. The JVM makes sure that you get some valid object back (or null) when you read a value, and you can synchronize on any object.
If you plan on actually changing the lock while it's in use (as opposed to e.g. changing it from an init method, before you start using it), you have to make the variable that you plan to change volatile. Then all you need to do is synchronize on both the old and the new object, and you can safely change the value:
public volatile Object lock;
...

synchronized (lock) {
    synchronized (newObject) {
        lock = newObject;
    }
}
There. It's not complicated; writing code with locks (mutexes) is actually quite easy. Writing code without them (lock-free code) is what's hard.
EDIT: So this solution (as suggested by Jon Skeet) might have an issue with the atomicity of the implementation of synchronized(object){} while the object reference is changing. I asked separately and according to Mr. erickson it is not thread-safe - see: Is entering synchronized block atomic?. So take it as an example of how NOT to do it - with links explaining why ;)
See the code for how it would work if synchronized() were atomic:
public class Main {
    static class Config {
        char a = '0';
        char b = '0';
        public void log() {
            synchronized (this) {
                System.out.println("" + a + "," + b);
            }
        }
    }
    static Config cfg = new Config();

    static class Doer extends Thread {
        char id;
        Doer(char id) {
            this.id = id;
        }
        public void mySleep(long ms) {
            try { Thread.sleep(ms); } catch (Exception ex) { ex.printStackTrace(); }
        }
        public void run() {
            System.out.println("Doer " + id + " beg");
            if (id == 'X') {
                synchronized (cfg) {
                    cfg.a = id;
                    mySleep(1000);
                    // do not forget to put synchronized(cfg) around setting the new cfg - otherwise the following will happen:
                    // here it would be modifying a different cfg (because Y will change it).
                    // Another problem would be that the new cfg would be modified in parallel by Z, because synchronized is applied on the new object
                    cfg.b = id;
                }
            }
            if (id == 'Y') {
                mySleep(333);
                synchronized (cfg) // comment this out and you will see inconsistency in the log - if you keep it, I think all is ok
                {
                    cfg = new Config(); // introduce new configuration
                    // be aware - don't expect to be synchronized on the new cfg here!
                    // Z might already have acquired a lock
                }
            }
            if (id == 'Z') {
                mySleep(666);
                synchronized (cfg) {
                    cfg.a = id;
                    mySleep(100);
                    cfg.b = id;
                }
            }
            System.out.println("Doer " + id + " end");
            cfg.log();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Doer X = new Doer('X');
        Doer Y = new Doer('Y');
        Doer Z = new Doer('Z');
        X.start();
        Y.start();
        Z.start();
    }
}
AtomicReference suits your requirement.
From the Java documentation about the atomic package:
A small toolkit of classes that support lock-free thread-safe programming on single variables. In essence, the classes in this package extend the notion of volatile values, fields, and array elements to those that also provide an atomic conditional update operation of the form:
boolean compareAndSet(expectedValue, updateValue);
Sample code:
String initialReference = "value 1";
AtomicReference<String> someRef =
        new AtomicReference<String>(initialReference);
String newReference = "value 2";
boolean exchanged = someRef.compareAndSet(initialReference, newReference);
System.out.println("exchanged: " + exchanged);
In the above example, you would replace String with your own object type.
Related SE question:
When to use AtomicReference in Java?
If o never changes for the lifetime of an instance of X, the second version is better style irrespective of whether synchronization is involved.
Now, whether there's anything wrong with the first version is impossible to answer without knowing what else is going on in that class. I would tend to agree with the compiler that it does look error-prone (I won't repeat what the others have said).
Just adding my two cents: I had this warning when I used a component that is instantiated through a designer, so its field cannot really be final, because the constructor cannot take parameters. In other words, I had a quasi-final field without the final keyword.
I think that's why it is just warning: you are probably doing something wrong, but it might be right as well.