How can a thread see a stale reference to a safely initialized object? - java

I have been trying to figure out how immutable objects that are safely published could still be observed through a stale reference.
public final class Helper {
    private final int n;

    public Helper(int n) {
        this.n = n;
    }
}

class Foo {
    private Helper helper;

    public Helper getHelper() {
        return helper;
    }

    public void setHelper(int num) {
        helper = new Helper(num);
    }
}
So far I understand that Helper is immutable and can be safely published: a reading thread either reads null or a fully initialized Helper object, because the reference is not published until the object is fully constructed. The suggested solution is to make the helper field in Foo volatile, which I don't understand.

The fact that you are publishing a reference to an immutable object is irrelevant here.
If you are reading the value of a reference from multiple threads, you need to ensure that the write happens before a read if you care about all threads using the most up-to-date value.
Happens before is a precisely-defined term in the language spec, specifically the part about the Java Memory Model, which allows threads to make optimisations for example by not always updating things in main memory (which is slow), instead holding them in their local cache (which is much faster, but can lead to threads holding different values for the "same" variable). Happens-before is a relation that helps you to reason about how multiple threads interact when using these optimisations.
Unless you actually create a happens-before relationship, there is no guarantee that you will see the most recent value. In the code you have shown, there is no such relationship between writes and reads of helper, so your threads are not guaranteed to see "new" values of helper. They might, but they likely won't.
The easiest way to make sure that the write happens before the read would be to make the helper member variable final: the writes to values of final fields are guaranteed to happen before the end of the constructor, so all threads always see the correct value of the field (provided this wasn't leaked in the constructor).
Making it final isn't an option here, apparently, because you have a setter. So you have to employ some other mechanism.
Taking the code at face value, the simplest option would be to use a (final) AtomicInteger instead of the Helper class: writes to AtomicInteger are guaranteed to happen before subsequent reads. But I guess your actual helper class is probably more complicated.
So, you have to create that happens-before relationship yourself. Three mechanisms for this are:
Using AtomicReference<Helper>: this has similar semantics to AtomicInteger, but allows you to store a reference-typed value. (Thanks for pointing this out, @Thilo).
Making the field volatile: this guarantees visibility of the most recently-written value, because it causes writes to flush to main memory (as opposed to reading from a thread's cache), and reads to read from main memory. It effectively stops the JVM making this particular optimization.
Accessing the field in a synchronized block. The easiest thing to do would be to make the getter and setter methods synchronized. Significantly, you should not synchronize on helper, since this field is being changed.
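As an illustration (a sketch, not the only way to write it), here is each mechanism applied to the Foo class from the question; the class names are made up:
import java.util.concurrent.atomic.AtomicReference;

// Option 1: an AtomicReference gives the same visibility guarantee as volatile,
// plus atomic operations such as compareAndSet if you later need them.
class AtomicFoo {
    private final AtomicReference<Helper> helper = new AtomicReference<>();

    public Helper getHelper() {
        return helper.get();
    }

    public void setHelper(int num) {
        helper.set(new Helper(num));
    }
}

// Option 2: a volatile field - every read is guaranteed to see the most recent write.
class VolatileFoo {
    private volatile Helper helper;

    public Helper getHelper() {
        return helper;
    }

    public void setHelper(int num) {
        helper = new Helper(num);
    }
}

// Option 3: synchronized accessors - reads and writes go through the same monitor.
class SynchronizedFoo {
    private Helper helper;

    public synchronized Helper getHelper() {
        return helper;
    }

    public synchronized void setHelper(int num) {
        helper = new Helper(num);
    }
}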

Quoting from Volatile vs Static in Java:
This means that if two threads update a variable of the same Object concurrently, and the variable is not declared volatile, there could be a case in which one of the thread has in cache an old value.
Given your code, the following can happen:
Thread 1 calls getHelper() and gets null
Thread 2 calls getHelper() and gets null
Thread 1 calls setHelper(42)
Thread 2 calls setHelper(24)
And in this case your trouble starts regarding which Helper object will be used in which thread. The keyword volatile will at least solve the caching problem.

The variable helper is being read by multiple threads simultaneously. At the very least, you have to make it volatile; otherwise the compiler may cache it in registers local to each thread, and updates to the variable may never be reflected in main memory. With volatile, when a thread reads the shared variable it invalidates its local cache and fetches a fresh value from main memory, and when it writes the variable it flushes the write out to main memory so that other threads can see the updated value.

Related

Do I need a lock at all if only 1 thread updates the value? [duplicate]

public class Test {

    private MyObj myobj = new MyObj(); // it is not volatile

    public class Updater extends Thread {
        public void run() {
            myobj = getNewObjFromDb(); // now I am setting a new object
        }
    }

    public MyObj getData() {
        // getting stale data is fine for me
        return myobj;
    }
}
The Updater regularly updates myobj.
Other classes fetch the data using getData().
Is this code thread safe without using the volatile keyword?
I think yes. Can someone confirm?
No, this is not thread safe. (What makes you think it is?)
If you are updating a variable in one thread and reading it from another, you must establish a happens-before relationship between the write and the subsequent read.
In short, this basically means making both the read and write synchronized (on the same monitor), or making the reference volatile.
Without that, there are no guarantees that the reading thread will see the update - and it wouldn't even be as simple as "well, it would either see the old value or the new value". Your reader threads could see some very odd behaviour with the data corruption that would ensue. Look at how lack of synchronization can cause infinite loops, for example (the comments to that article, especially Brian Goetz', are well worth reading):
The moral of the story: whenever mutable data is shared across threads, if you don’t use synchronization properly (which means using a common lock to guard every access to the shared variables, read or write), your program is broken, and broken in ways you probably can’t even enumerate.
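For illustration, a minimal sketch of the volatile fix applied to the class from the question (the update method here just stands in for whatever the Updater thread does):
public class Test {

    private volatile MyObj myobj = new MyObj(); // the volatile write/read pair creates the happens-before edge

    public MyObj getData() {
        return myobj; // guaranteed to see the most recently published MyObj
    }

    public void update(MyObj fresh) { // called from the updater thread
        myobj = fresh;
    }
}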
No, it isn't.
Without volatile, calling getData() from a different thread may return a stale cached value.
volatile forces assignments from one thread to be visible on all other threads immediately.
Note that if the object itself is not immutable, you are likely to have other problems.
You may get a stale reference. You may not get an invalid reference.
The reference you get is the value of the variable at some point in time: an object it points to now, pointed to earlier, or will point to.
Note that there are no guarantees about how stale the reference may be, but it is still a reference to some object, and that object still exists. In other words, writing a reference is atomic (nothing can happen in the middle of the write) but not synchronized (it is subject to instruction reordering, thread-local caching, et al.).
If you declare the reference as volatile, you create a synchronization point around the variable. Simply speaking, that means all of the accessing thread's cached state is flushed (writes are written out and cached reads are discarded).
The only types whose reads and writes are not guaranteed to be atomic are non-volatile long and double, because they are 64 bits wide and a JVM may write them as two separate 32-bit operations.
If MyObj is immutable (all fields are final), you don't need volatile.
The big problem with this sort of code is the lazy initialization. Without the volatile or synchronized keyword, you could assign to myobj a new object that has not been fully initialized. The Java memory model allows part of an object's construction to be executed after its constructor has returned. This reordering of memory operations is why the memory barrier is so critical in multi-threaded situations.
Without a memory-barrier limitation, there is no happens-before guarantee so you do not know if the MyObj has been fully constructed. This means that another thread could be using a partially initialized object with unexpected results.
Here are some more details around constructor synchronization:
Constructor synchronization in Java
A volatile reference would also give you visibility here, but since myobj behaves like a cached object, an AtomicReference is a natural fit. Since your code extracts the value from the DB, I'll let that part stay as is and add the AtomicReference around it.
import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceTest {

    private AtomicReference<MyObj> myobj = new AtomicReference<MyObj>();

    public class Updater extends Thread {

        public void run() {
            MyObj newMyobj = getNewObjFromDb();
            updateMyObj(newMyobj);
        }

        public void updateMyObj(MyObj newMyobj) {
            myobj.compareAndSet(myobj.get(), newMyobj);
        }

        private MyObj getNewObjFromDb() {
            return new MyObj(); // stub; the real code would fetch this from the database
        }
    }

    public MyObj getData() {
        return myobj.get();
    }
}

class MyObj {
}

Do I have to extend ConcurrentHashMap, or can I have a ConcurrentHashMap variable, for thread safety?

I am creating a socket-based server-client reservation service and have a question about a class that will be accessed by multiple threads: does it need to extend ConcurrentHashMap, or is it enough to have a ConcurrentHashMap variable for it to be thread safe?
I have two ideas, but I am not sure whether the first one will work. The first is a class which only implements Serializable, has a date variable, and then a ConcurrentHashMap variable on which the threads operate; the second is a class which extends ConcurrentHashMap, so it just is a CHM but with an additional variable to make sure it is distinguishable from the others.
import java.io.Serializable;
import java.time.LocalDate;
import java.time.LocalTime;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.IntStream;

import static java.lang.Boolean.TRUE;

public class Day implements Serializable {

    private LocalDate date;
    private ConcurrentHashMap<String, Boolean> schedule;

    public Day(LocalDate date) {
        this.date = date;
        this.schedule = new ConcurrentHashMap<>();
        IntStream.range(10, 18).forEachOrdered(
                n -> this.schedule.put(LocalTime.of(n, 0).toString(), TRUE));
    }

    public void changeaval(String key, Boolean status) {
        this.schedule.replace(key, status);
    }

    public boolean aval(String key) {
        return this.schedule.get(key);
    }

    public LocalDate getDate() { return this.date; }

    public ConcurrentHashMap<String, Boolean> getSchedule() { return this.schedule; }
}
I just want to have a class/object which can be accessed by multiple threads, can be distinguished from the others / compared, and has a ConcurrentHashMap which maps Int -> Boolean.
This is the first time I am using Stack Overflow and it is my first project in Java, so I don't know much; sorry if something is not right.
There are basically two things to look out for when dealing with objects accessed by multiple threads:
Race condition - Due to thread scheduling by the operating system and instruction-reordering optimizations by the compiler, the instructions may be executed in an order not intended by the programmer, causing bugs.
Memory visibility - In a multiprocessor system, changes made by one processor are not always immediately visible to the other processors. Processors keep values in their local registers and caches for performance reasons, so those values are not visible to threads being executed by other processors.
Luckily we can handle both of these situations using proper synchronization.
Let's talk about this particular program.
LocalDate by itself is an immutable and thread-safe class. If we look at its source code, we'll see that all of its fields are final. This means that as soon as the constructor of LocalDate finishes initializing the object, the object itself is safely visible across threads. But when it is assigned to a reference variable in a different object, whether the assignment (in other words, the content of the reference variable) is visible to other threads is what we need to watch out for.
Given the constructor in your case, we can ensure the visibility of the field date across threads provided date is either final or volatile. Since you are not modifying the date field in your class, you can very well make it final and that ensures safe initialization. If you later decide to have a setter method for this field (depending on your business logic and your design), you should make the field volatile instead of final. volatile creates a happens-before relationship which means that any instruction that is executed in the particular thread before writing to the volatile variable would be immediately visible to the other threads as soon as they read the same volatile variable.
Same goes for ConcurrentHashMap. You should make the field schedule final. Since ConcurrentHashMap by itself has all the necessary synchronizations in it, any value you set against a key would be visible to the other threads when they try to read it.
Note, however, that if you had some mutable objects as ConcurrentHashMap values instead of Boolean, you would have to design it in the same way as mentioned above.
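To make the final-field advice concrete, here is a sketch with only the two declarations changed (everything else in the Day class stays as posted in the question):
public class Day implements Serializable {

    private final LocalDate date;                              // safely published once the constructor returns
    private final ConcurrentHashMap<String, Boolean> schedule; // the map handles its own internal synchronization

    public Day(LocalDate date) {
        this.date = date;
        this.schedule = new ConcurrentHashMap<>();
        IntStream.range(10, 18).forEachOrdered(
                n -> this.schedule.put(LocalTime.of(n, 0).toString(), TRUE));
    }

    // changeaval, aval, getDate and getSchedule stay exactly as posted
}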
Also, it may be good to know that there is a concept called piggy-backing: if one thread writes to its plain fields and then writes to a volatile variable, everything written by that thread before the volatile write becomes visible to other threads, provided they first read the volatile variable after it has been written. But when you do this, you have to reason very carefully about the order of reads and writes, and it is error prone. So this is done only when you want to squeeze the last drop of performance out of a piece of code, which is rare. Favor safety, maintainability and readability over performance.
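For illustration only, a sketch of the piggy-backing idea described above (class and field names are invented; plain synchronization, or volatile on each field, is usually the safer choice):
class PiggyBack {

    private int payload;            // plain, non-volatile field
    private volatile boolean ready; // the volatile "carrier"

    void writer() {                 // runs in thread A
        payload = 42;               // plain write...
        ready = true;               // ...published by the volatile write that follows it
    }

    void reader() {                 // runs in thread B
        if (ready) {                         // volatile read first...
            System.out.println(payload);    // ...so this is guaranteed to see 42, not a stale value
        }
    }
}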
Finally, there is no race condition in the code. The only write that is happening is on the ConcurrentHashMap which is thread safe by itself.
Basically, both approaches are equivalent. From an architectural point of view, keeping the map as a field inside a dedicated class is preferred because it gives better control over which methods are accessible to the user. When extending, a user can access many methods of the underlying ConcurrentHashMap and misuse them.

What is difference between getXXXVolatile vs getXXX in java unsafe?

I am trying to understand these two methods in Java's Unsafe:
public native short getShortVolatile(Object var1, long var2);
vs
public native short getShort(Object var1, long var2);
What is the real difference here? What does volatile here really work for? I found API doc here: http://www.docjar.com/docs/api/sun/misc/Unsafe.html#getShortVolatile(Object,%20long)
But it does not really explain anything for the difference between the two functions.
My understanding is that, for volatile, it only matters when we write. To me, it would make sense that we call putShortVolatile for writing and then, for reading, we can simply call getShort(), since the volatile write already guarantees the new value has been flushed to main memory.
Please kindly correct me if anything is wrong. Thanks!
Here there is an article: http://mydailyjava.blogspot.it/2013/12/sunmiscunsafe.html
Unsafe supports all primitive values and can even write values without hitting thread-local caches by using the volatile forms of the methods
getXXX(Object target, long offset): Will read a value of type XXX from target's address at the specified offset.
getXXXVolatile(Object target, long offset): Will read a value of type XXX from target's address at the specified offset and not hit any thread local caches.
putXXX(Object target, long offset, XXX value): Will place value at target's address at the specified offset.
putXXXVolatile(Object target, long offset, XXX value): Will place value at target's address at the specified offset and not hit any thread local caches.
UPDATE:
You can find more information about memory management and volatile fields on this article: http://cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html (it contains also some example of reordering).
In multiprocessor systems, processors generally have one or more layers of memory cache, which improves performance both by speeding access to data (because the data is closer to the processor) and reducing traffic on the shared memory bus (because many memory operations can be satisfied by local caches.) Memory caches can improve performance tremendously, but they present a host of new challenges. What, for example, happens when two processors examine the same memory location at the same time? Under what conditions will they see the same value?
Some processors exhibit a strong memory model, where all processors see exactly the same value for any given memory location at all times. Other processors exhibit a weaker memory model, where special instructions, called memory barriers, are required to flush or invalidate the local processor cache in order to see writes made by other processors or make writes by this processor visible to others.
The issue of when a write becomes visible to another thread is compounded by the compiler's reordering of code. If a compiler defers an operation, another thread will not see it until it is performed; this mirrors the effect of caching. Moreover, writes to memory can be moved earlier in a program; in this case, other threads might see a write before it actually "occurs" in the program.
Java includes several language constructs, including volatile, final, and synchronized, which are intended to help the programmer describe a program's concurrency requirements to the compiler. The Java Memory Model defines the behavior of volatile and synchronized, and, more importantly, ensures that a correctly synchronized Java program runs correctly on all processor architectures.
As you can see in the section What does volatile do?
Volatile fields are special fields which are used for communicating state between threads. Each read of a volatile will see the last write to that volatile by any thread; in effect, they are designated by the programmer as fields for which it is never acceptable to see a "stale" value as a result of caching or reordering. The compiler and runtime are prohibited from allocating them in registers. They must also ensure that after they are written, they are flushed out of the cache to main memory, so they can immediately become visible to other threads. Similarly, before a volatile field is read, the cache must be invalidated so that the value in main memory, not the local processor cache, is the one seen.
There are also additional restrictions on reordering accesses to volatile variables. Accesses to volatile variables cannot be reordered with each other, and it is now no longer so easy to reorder normal field accesses around them. Writing to a volatile field has the same memory effect as a monitor release, and reading from a volatile field has the same memory effect as a monitor acquire. In effect, because the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not, anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f.
So the difference is that putXXX() and getXXX() can be reordered or can use cached values not yet synchronized between the threads, while putXXXVolatile() and getXXXVolatile() won't be reordered and will always use the latest value.
The thread-local cache is temporary storage used by the JVM and the hardware to improve performance: data is written to / read from the cache before being flushed to memory.
In a single-threaded context you can use either the non-volatile or the volatile version of those methods; there will be no difference. When you write something, it doesn't matter whether it is written immediately to memory or just to the thread-local cache: when you try to read it, you'll be in the same thread, so you'll get the latest value for sure (the thread-local cache contains the latest value).
In a multi-threaded context, instead, the cache could give you some trouble.
If you initialize an object with Unsafe and share it between two or more threads, each of those threads may have a copy of it in its local cache (the threads could be running on different processors, each with its own cache).
If you use the putXXX() method on a thread, the new value could be written to the thread-local cache but not yet to memory. So it could happen that just one of the threads sees the new value, while memory and the other threads' local caches still contain the old value. This can lead to unexpected results. The putXXXVolatile() method writes the new value directly to memory, so the other threads will be able to see the new value (if they use the getXXXVolatile() methods).
If you use the getXXX() method, you'll get the local cache value. So if another thread has changed the value in memory, the current thread's local cache could still contain the old value, and you'll get unexpected results. If you use the getXXXVolatile() method, you'll read directly from memory and get the latest value for sure.
Using the example of the previous link:
class DirectIntArray {

    private final static long INT_SIZE_IN_BYTES = 4;
    private final long startIndex;
    // "unsafe" is a sun.misc.Unsafe instance obtained beforehand (e.g. via reflection on the theUnsafe field)

    public DirectIntArray(long size) {
        startIndex = unsafe.allocateMemory(size * INT_SIZE_IN_BYTES);
        unsafe.setMemory(startIndex, size * INT_SIZE_IN_BYTES, (byte) 0);
    }

    public void setValue(long index, int value) {
        unsafe.putInt(index(index), value);
    }

    public int getValue(long index) {
        return unsafe.getInt(index(index));
    }

    private long index(long offset) {
        return startIndex + offset * INT_SIZE_IN_BYTES;
    }

    public void destroy() {
        unsafe.freeMemory(startIndex);
    }
}
This class uses putInt and getInt to write and read values in an array allocated off-heap (outside the heap space).
As said before, those methods write the data to the thread-local cache, not immediately to memory. So when you use the setValue() method, the local cache is updated immediately, while the allocated memory may be updated a while later (it depends on the JVM implementation).
In a single-threaded context that class will work without problems.
In a multi-threaded context it could fail.
DirectIntArray directIntArray = new DirectIntArray(maximum);
Runnable t1 = new MyThread(directIntArray);
Runnable t2 = new MyThread(directIntArray);
new Thread(t1).start();
new Thread(t2).start();
Where MyThread is:
public class MyThread implements Runnable {

    DirectIntArray directIntArray;

    public MyThread(DirectIntArray parameter) {
        directIntArray = parameter;
    }

    public void run() {
        call();
    }

    public void call() {
        synchronized (this) {
            // the other thread could already have changed this value: the assert fails
            // if that write is already visible to this thread, and passes otherwise
            assertEquals(0, directIntArray.getValue(0L));
            directIntArray.setValue(0L, 10);
            assertEquals(10, directIntArray.getValue(0L));
        }
    }
}
With putIntVolatile() and getIntVolatile(), one of the two threads will fail for sure (the second thread will read 10 instead of 0).
With putInt() and getInt(), both threads could finish successfully (because the local caches of both threads could still contain 0, if the writer's cache wasn't flushed or the reader's cache wasn't refreshed).
I think that getShortVolatile is reading a plain short from an Object, but treats it as a volatile; it's like reading a plain variable and inserting the needed barriers (if any) yourself.
Much simplified (and to some degree wrong, but just to get the idea). Release/Acquire semantics:
Unsafe.weakCompareAndSetIntAcquire // Acquire
update some int here
Unsafe.weakCompareAndSetIntRelease // Release
As to why this is needed (the example is for getIntVolatile, but the case still stands): probably to enforce ordering and prevent reorderings. Again, this is a bit beyond me, and Gil Tene explaining it would be FAR more suitable.
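To make the plain vs. volatile accessors concrete, a minimal sketch (the Holder class is invented for illustration; Unsafe is obtained through the usual reflection workaround because its constructor is not public):
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeVolatileDemo {

    static class Holder {
        int value; // a plain, non-volatile field
    }

    public static void main(String[] args) throws Exception {
        Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Unsafe unsafe = (Unsafe) theUnsafe.get(null);

        Holder holder = new Holder();
        long offset = unsafe.objectFieldOffset(Holder.class.getDeclaredField("value"));

        unsafe.putInt(holder, offset, 1);         // plain write: may linger in local caches / be reordered
        unsafe.putIntVolatile(holder, offset, 2); // volatile write: immediately visible to other threads

        System.out.println(unsafe.getInt(holder, offset));         // plain read
        System.out.println(unsafe.getIntVolatile(holder, offset)); // volatile read: never a stale value
    }
}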

Two threads have references to unshared instances of unsynchronized class. Threading issues?

I haven't done any threading in years, need a bit of a reset:
If I have multiple instances of a class will two threads need synchronization even if they talk to different instances?
Example
Let's say I have a class with a method. The method increments a counter and returns the current count.
There are two threads. Each thread has its own instance of the counter class and calls the method repeatedly. There is no locking or synchronization. Will the threads step on each other?
There are two threads. Each thread has its own instance of the counter class and calls the method repeatedly. There is no locking or synchronization. Will the threads step on each other?
No, they won't, as long as the data written in one thread is not read from another thread.
That specific multithreading strategy is called Thread confinement: you don't share anything across threads. That is one of the simplest way to make your program thread safe.
There is no need for any locking or synchronization unless both of the threads update the same instance of the counter. If they both have a counter instance, and they only read/write their own counter instance, there will be no problems.
If only 1 Thread is accessing a given Object / field in an object, it will be thread-safe.
Example:
public class ThreadSafe {
    int counter;
    public void increment() {...}
}

public class NotThreadSafe {
    static int counter;
    public static void increment() {...}
}
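Putting the answers above together, a small runnable sketch of thread confinement, where each thread increments only its own Counter instance and therefore needs no locking (names are made up for illustration):
public class ConfinementDemo {

    static class Counter {
        private int count;
        int increment() { return ++count; }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            Counter counter = new Counter(); // created inside the task: confined to this thread
            int last = 0;
            for (int i = 0; i < 1_000_000; i++) {
                last = counter.increment();
            }
            // always prints 1000000 - no lost updates, because nothing is shared
            System.out.println(Thread.currentThread().getName() + ": " + last);
        };
        Thread t1 = new Thread(work, "t1");
        Thread t2 = new Thread(work, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}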
Well, this is a bit complicated, but seeing your rating on Stack Exchange I assume you will be able to digest it.
As we both know, an object comprises data and methods.
Whenever we create an object, the data part of the object (instance variables / fields, all names for data that is global within a class) is stored in a data structure in memory that we call the heap space. This data structure can be referenced through the "this" pointer.
So it means that if we have two different objects, they will have two different "this" pointers.
Methods do not have any independent allocation.
Whenever we execute a method like
obj1.feature()
it means: apply "feature" to obj1. During this process the runtime system passes the "this" pointer of obj1 to "feature".
Local variables in a method are allocated space in a data structure called a frame that is pushed onto the stack. Any variable that is not declared inside the method is assumed to live in the object's data structure, and the pointer to it is prepended automatically; thus globalVariable becomes this.globalVariable.
So we can see clearly that if we pass different "this" pointers to the method, it will access completely different memory locations and hence does not require thread synchronization.
If there aren't any static fields (which are inherently shared between class instances) nor concurrent external data access (files, streams, DDE, databases etc) you shouldn't encounter any threading issues.
Since both static fields and all concurrently accessed external data objects are "unique", you'd have to synchronize access to them. That's exactly why it's advised to use immutables and not to use static data in multi-threaded runs, unless you keep the data mutable/static for some synchronization-related reason (e.g. as locks).
Note that a counter is essentially mutable by definition (it changes its state) - but in most real-life cases you can safely use immutable objects (Strings, immutable collections etc)
Further reading: https://docs.oracle.com/javase/tutorial/essential/concurrency/immutable.html

Immutable objects are thread safe, but why?

Let's say, for example, that a thread is creating and populating the reference variable of an immutable class by creating its object, and another thread kicks in before the first one completes and creates another object of the immutable class. Won't the usage of the immutable class then be thread unsafe?
Creating an immutable object also means that all fields have to be marked as final.
it may be necessary to ensure correct behavior if a reference to a newly created instance is passed from one thread to another without synchronization
Are they trying to say that the other thread may re-point the reference variable to some other object of the immutable class and that way the threads will be pointing to different objects leaving the state inconsistent?
Actually, immutable objects are always thread-safe, but their references may not be.
Confused? You shouldn't be:
Going back to basics:
Thread-safe simply means that two or more threads must work in coordination on a shared resource or object. They shouldn't override the changes made by any other thread.
Now, String is an immutable class: whenever a thread tries to change it, it simply ends up creating a new object. So even the same thread can't make any changes to the original object, let alone another thread. The catch here is that we generally reuse the same old reference to point to the newly created object.
When we write code, we observe any change to an object only through its reference.
Statement 1:
String str = "123"; // initially string shared to two threads
Statement 2:
str = str+"FirstThread"; // to be executed by thread one
Statement 3:
str=str+"SecondThread"; // to be executed by thread two
Now, since there is no synchronized, volatile or final keyword to tell the compiler to skip using its intelligence for optimization (any reordering or caching), this code can run in the following manner:
Load Statement2, so str = "123"+"FirstThread"
Load Statement3, so str = "123"+"SecondThread"
Store Statement3, so str = "123SecondThread"
Store Statement2, so str = "123FirstThread"
and finally the value in the reference is str = "123FirstThread". And if we assume that, luckily, our GC thread is sleeping for a while, our immutable objects still exist untouched in the string pool.
So, Immutable objects are always thread-safe, but their references may not be. To make their references thread-safe, we may need to access them from synchronized blocks/methods.
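For instance, a minimal sketch of guarding the shared reference with synchronized accessors (the class name is made up; note that volatile alone would not fix the lost update in the str example above, because the read-modify-write is a compound action):
class SharedStringHolder {

    private String value = "123";

    public synchronized String get() {
        return value;
    }

    public synchronized void append(String suffix) {
        value = value + suffix; // the read-modify-write happens atomically under the monitor
    }
}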
In addition to the other answers posted already: immutable objects, once created, cannot be modified further. Hence they are essentially read-only.
And as we all know, read-only things are always thread-safe. Even in databases, multiple queries can read same rows simultaneously, but if you want to modify something, you need exclusive lock for that.
Immutable objects are thread safe, but why?
An immutable object is an object that is no longer modified once it has been constructed. If in addition, the immutable object is only made accessible to other thread after it has been constructed, and this is done using proper synchronization, all threads will see the same valid state of the object.
If one thread is creating and populating the reference variable of the immutable class by creating its object, and a second thread kicks in before the first thread completes and creates another object of the immutable class, won't the immutable class usage be thread unsafe?
No. What makes you think so? An object's thread safety is completely unaffected by what you do to other objects of the same class.
Are they trying to say that the other thread may re-point the reference variable to some other object of the immutable class and that way the threads will be pointing to different objects leaving the state inconsistent?
They are trying to say that whenever you pass something from one thread to another, even if it is just a reference to an immutable object, you need to synchronize the threads. (For instance, if you pass the reference from one thread to another by storing it in an object or a static field, that object or field is accessed by several threads, and must be thread-safe)
Thread safety is data-sharing safety, and because in your code you make decisions based on the data your objects hold, the integrity and deterministic behaviour of that data is vital. For example:
Imagine we have a shared boolean instance variable across two threads that are about to execute a method with the following logic
If flag is false, then I print "false" and then I set the flag back to true.
If flag is true, then I print "true" and then I set the flag back to false.
If you run continuously in a single thread loop, you will have a deterministic output which will look like:
false - true - false - true - false - true - false ...
But if you run the same code with two threads, then the output is not deterministic anymore. The reason is that thread A can wake up, read the flag, and see that it is false, but before it can do anything, thread B wakes up and reads the flag, which is also false! So both will print false... And this is only one problematic scenario I can think of. As you can see, this is bad.
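A small runnable sketch of that scenario (names are invented for illustration; run it a few times and the strict false/true alternation disappears):
public class FlagFlipper implements Runnable {

    private boolean flag; // shared between the two threads, with no synchronization at all

    public void run() {
        for (int i = 0; i < 5; i++) {
            if (!flag) {                    // both threads can observe false here...
                System.out.println("false");
                flag = true;
            } else {
                System.out.println("true");
                flag = false;
            }
        }
    }

    public static void main(String[] args) {
        FlagFlipper shared = new FlagFlipper();
        new Thread(shared).start(); // both threads operate on the same instance,
        new Thread(shared).start(); // so the printed sequence is no longer deterministic
    }
}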
If you take the updates out of the equation, the problem is gone, just because you are eliminating all the risks associated with data synchronization. That's why we say that immutable objects are thread safe.
It is important to note, though, that immutable objects are not always the solution. You may have data that you need to share among different threads; in those cases there are many techniques that go beyond plain synchronization and can make a big difference to the performance of your application, but that is a completely different subject.
Immutable objects are important to guarantee that the areas of the application we are sure don't need to be updated are never updated, so we know for certain that we won't have multithreading issues there.
You probably might be interested in taking a look at a couple of books:
This is the most popular: http://www.amazon.co.uk/Java-Concurrency-Practice-Brian-Goetz/dp/0321349601/ref=sr_1_1?ie=UTF8&qid=1329352696&sr=8-1
But I personally prefer this one: http://www.amazon.co.uk/Concurrency-State-Models-Java-Programs/dp/0470093552/ref=sr_1_3?ie=UTF8&qid=1329352696&sr=8-3
Be aware that multithreading is probably the trickiest aspect of any application!
Immutability doesn't imply thread safety, in the sense that the reference to an immutable object can be altered even after the object is created.
// No setters provided
class ImmutableValue
{
    private final int value;

    public ImmutableValue(int value)
    {
        this.value = value;
    }

    public int getValue()
    {
        return value;
    }
}

public class ImmutableValueUser
{
    // the currentValue reference can be changed even after the referenced
    // ImmutableValue object has been constructed
    private ImmutableValue currentValue = null;

    public ImmutableValue getValue()
    {
        return currentValue;
    }

    public void setValue(ImmutableValue newValue)
    {
        this.currentValue = newValue;
    }
}
Two threads will not be creating the same object, so no problem there.
With regards to 'it may be necessary to ensure...', what they are saying is that if you DON'T make all fields final, you will have to ensure correct behavior yourself.
