How do I reduce the granularity of locking while maintaining thread safety? - java

I have a simple, managed group of Stacks that need to be accessed in a thread-safe manner. My first implementation works correctly but uses synchronized methods for all access, i.e. locking is at the coarsest level. I'd like to make locking as granular as possible but I'm unsure of the best way to go about it.
Here's the basics of my Stack manager class (with some details elided for brevity):
public class StackManager {

    private final Map<String, Deque<String>> myStacks;

    public StackManager() {
        myStacks = new ConcurrentHashMap<String, Deque<String>>();
    }

    public synchronized void addStack(String name) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }

    public synchronized void removeStack(String name) {
        if (!myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.remove(name);
    }

    public synchronized void push(String stack, String payload) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        myStacks.get(stack).push(payload);
    }

    public synchronized String pop(String stack) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        return myStacks.get(stack).pop();
    }
}
The stack-level methods (addStack(), removeStack()) are not used that often, but I'd like to know if their level of locking can be reduced. For example, if these methods were unsynchronized and instead locked on myStacks, would this reduce contention? For example:
public void addStack(String name) {
    synchronized (myStacks) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }
}
The per-stack methods (push(), pop()) are where I feel the most gains can be made. I'd like to achieve per-stack locking if I could. That is, only lock the single stack within the stack manager that is being operated on. However I cannot see a good way to do this. Any suggestions?
While we're here, is it necessary to use the concurrent implementations of both Map and Deque?

Both data structures are thread safe, so every isolated operation on them is thread safe.
The problem is performing more than one operation when there's a dependency between them.
In your case, checking for existence must be atomic with the actual operation to avoid race conditions.
To add a new stack, you can use putIfAbsent, which is atomic without any explicit synchronization.
To remove a stack, you don't need to check for existence first. If you want to know whether it existed, just check the return value of remove(): if it's null, the stack didn't exist.
To push and pop, just get the stack first and assign it to a local variable. If it's null, the stack didn't exist; if it's non-null, you can safely push or pop on it.
The field myStacks must be either final or volatile to be thread safe.
With these changes you don't need any synchronization at all. I would also prefer a design without exceptions; only addStack() really seems to call for one. If the condition can occur in a correct program it should be a checked exception; a runtime exception is more suitable when it indicates a bug.
Oh, and triple-check and test it, as concurrent programming is tricky.
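Putting those suggestions together, here is a minimal sketch of what the manager could look like (a sketch only, not a drop-in implementation; the exception messages are illustrative):
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

public class StackManager {

    private final Map<String, Deque<String>> myStacks = new ConcurrentHashMap<String, Deque<String>>();

    public void addStack(String name) {
        // putIfAbsent is atomic: only one thread can win the race to create a given stack
        if (myStacks.putIfAbsent(name, new ConcurrentLinkedDeque<String>()) != null) {
            throw new IllegalArgumentException("stack already exists: " + name);
        }
    }

    public void removeStack(String name) {
        // remove() returns null if the key was absent, so no separate containsKey check is needed
        if (myStacks.remove(name) == null) {
            throw new IllegalArgumentException("no such stack: " + name);
        }
    }

    public void push(String stack, String payload) {
        Deque<String> deque = myStacks.get(stack); // single read, assigned to a local
        if (deque == null) {
            throw new IllegalArgumentException("no such stack: " + stack);
        }
        deque.push(payload);
    }

    public String pop(String stack) {
        Deque<String> deque = myStacks.get(stack);
        if (deque == null) {
            throw new IllegalArgumentException("no such stack: " + stack);
        }
        return deque.pop();
    }
}
One caveat not raised in the answer: a push() that races with removeStack() can still land its payload on a deque that has just been unlinked from the map, so that payload is silently lost. Whether that matters depends on your requirements.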

Related

Java concurrency: volatile for reading, synchronization for writing

I need to create a class that has a shared-between-threads object (let's call it SharedObject). The special thing about SharedObject is that it holds a String that will be returned in a multithreaded environment, and sometimes the entire SharedObject will be replaced by changing the field reference to a newly created object.
I do not want to make the read and the write both synchronize on the same monitor, because the write scenario happens rarely while the read scenario is quite common. Therefore I did the following:
public class ObjectHolder {

    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        if (!isObjectStillValid()) {
            obtainNewSharedObject();
        }
        return sharedObject.getCommonString();
    }

    public synchronized void obtainNewSharedObject() {
        /* This is in case multiple threads wait on this lock;
           after the first one obtains a new object the others can just
           use it and should not obtain a new one */
        if (!isObjectStillValid()) {
            sharedObject = new SharedObject(/* some parameters from somewhere */);
        }
    }
}
From what I have read in the documentation and on Stack Overflow, the synchronized keyword ensures that only one thread can be inside the synchronized block on the same object instance (therefore a write race / multiple unnecessary writes is a non-issue), while the volatile keyword on the field reference ensures that the reference value is written directly to main memory (not cached locally).
Are there any other pitfalls I am missing?
I want to be sure that when sharedObject is written to within the synchronized block, the new value is visible to any other thread at the latest when the lock for obtainNewSharedObject() is released. Should this not be guaranteed, I could encounter scenarios of unnecessary writes replacing correct values, which would be a big problem for this case.
I know that to be absolutely safe I could just make getSharedObjectString() synchronized as well; however, as stated previously, I do not want to block reading when it is not needed.
This way reading is non-blocking, and only when a write scenario occurs is it blocking.
I should probably mention that isObjectStillValid() is thread-independent (it is based entirely on SharedObject and the system clock) and is therefore a valid lock-free check to use for write scenarios.
Edit: Please note I could not find a similar topic on Stack Overflow, but it may exist. Sorry if that is the case.
Edit 2: Thank you for all the comments. While my solution is functional as long as isObjectStillValid() is thread-safe, it can suffer from decreased performance due to multiple accesses to the volatile field. I will most likely improve it using the updated double-checked locking solution, and I will also analyse the other possibilities mentioned here in depth.
Why don't you use AtomicReference? It is effectively optimistic: no actual thread blocking is involved. Internally it uses compare-and-swap, and if you look at the implementation it relies on volatile; I would trust Doug Lea to implement it correctly :)
Apart from this, there are many more ways to synchronize between a lot of readers and a few writers, such as ReadWriteLock.
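For illustration, a minimal sketch of the AtomicReference idea applied to the ObjectHolder from the question. SharedObject.isStillValid() stands in for the question's isObjectStillValid() check, and the constructor arguments are placeholders:
import java.util.concurrent.atomic.AtomicReference;

public class ObjectHolder {

    private final AtomicReference<SharedObject> sharedObject = new AtomicReference<SharedObject>();

    public String getSharedObjectString() {
        SharedObject current = sharedObject.get();
        if (current == null || !current.isStillValid()) {
            SharedObject fresh = new SharedObject(/* parameters from somewhere */);
            // install the fresh object only if nobody else beat us to it;
            // either way, re-read the current winner and use it
            sharedObject.compareAndSet(current, fresh);
            current = sharedObject.get();
        }
        return current.getCommonString();
    }
}
One trade-off of the compare-and-set approach: a thread that loses the race may construct a SharedObject that is immediately discarded, which matters if construction is expensive.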
This looks like a classic double-checked locking pattern. While your implementation is logically correct - thanks to the use of volatile on sharedObject - it might not be the most performant.
The recommended pattern for Java 1.5 and later is shown on the Wikipedia page for double-checked locking:
// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {

    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }

    // other functions and members...
}
Note the use of localRef for accessing the helper field. This limits access to the volatile field, in the common case, to a single read instead of potentially two: once for the check and once for the return. See the Wikipedia page again, just after the recommended pattern sample:
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 25 percent.[7]
Depending on how isObjectStillValid() accesses sharedObject, you might benefit from a similar pattern.
This sounds like a case where a ReadWriteLock would be appropriate.
The basic idea is that there can be multiple readers simultaneously, or one writer exclusively. Here is an example of how to use it in a List implementation.
Copied here in case the site goes down:
import java.util.*;
import java.util.concurrent.locks.*;

/**
 * ReadWriteList.java
 * This class demonstrates how to use ReadWriteLock to add concurrency
 * features to a non-threadsafe collection
 * @author www.codejava.net
 */
public class ReadWriteList<E> {

    private List<E> list = new ArrayList<>();
    private ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public ReadWriteList(E... initialElements) {
        list.addAll(Arrays.asList(initialElements));
    }

    public void add(E element) {
        Lock writeLock = rwLock.writeLock();
        writeLock.lock();
        try {
            list.add(element);
        } finally {
            writeLock.unlock();
        }
    }

    public E get(int index) {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.get(index);
        } finally {
            readLock.unlock();
        }
    }

    public int size() {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.size();
        } finally {
            readLock.unlock();
        }
    }
}

Java synchronized method around parameter value

Consider the following method:
public void upsert(int customerId, int somethingElse) {
    // some code which is prone to race conditions
}
I want to protect this method from race conditions, but a race can only occur if two threads with the same customerId call it at the same time. Making the whole method synchronized would reduce efficiency and isn't really needed; what I really want is to synchronize around the customerId. Is this possible somehow in Java? Are there any built-in tools for that, or would I need a Map of Integers to use as locks?
Also feel free to advise if you think I'm doing something wrong here :)
Thanks!
The concept you're looking for is called segmented locking or striped locking. It is too wasteful to have a separate lock for each customer (locks are quite heavyweight). Instead you want to partition your customer ID space into a reasonable number of partitions, matching the desired degree of parallelism. Typically 8-16 would be enough, but this depends on the amount of work the method does.
This outlines a simple approach:
private final Object[] locks = new Object[8];
{ for (int i = 0; i < locks.length; i++) locks[i] = new Object(); } // real lock objects, not nulls

public void upsert(int customerId, int somethingElse) {
    synchronized (locks[customerId % locks.length]) {
        // ...implementation...
    }
}
An alternative is to track which IDs are currently locked in a shared set, and have a thread wait until its ID becomes free:
private static final Set<Integer> lockedIds = new HashSet<>();

private void lock(Integer id) throws InterruptedException {
    synchronized (lockedIds) {
        while (!lockedIds.add(id)) {
            lockedIds.wait();
        }
    }
}

private void unlock(Integer id) {
    synchronized (lockedIds) {
        lockedIds.remove(id);
        lockedIds.notifyAll();
    }
}

public void upsert(int customerId) throws InterruptedException {
    lock(customerId); // acquire before the try, so an interrupted lock() cannot trigger a spurious unlock()
    try {
        // Put your code here.
        // For different ids it is executed in parallel.
        // For equal ids it is executed synchronously.
    } finally {
        unlock(customerId);
    }
}
The id need not be an Integer; it can be any class with correctly overridden equals and hashCode methods.
The try-finally is very important: you must guarantee that the ID is unlocked (and waiting threads notified) after your operation, even if the operation threw an exception.
It will not work if your back-end is distributed across multiple servers/JVMs.

Better solution instead of nested synchronized blocks in Java?

I have a Bank class with a list of Account. The bank has a transfer() method to transfer a value from one account to another. The idea is to lock both the from and to accounts within a transfer.
To solve this issue I have the following code (please bear in mind that this is a very trivial example because it's just that, an example):
public class Account {

    private int mBalance;

    public Account() {
        mBalance = 0;
    }

    public void withdraw(int value) {
        mBalance -= value;
    }

    public void deposit(int value) {
        mBalance += value;
    }
}

public class Bank {

    private List<Account> mAccounts;
    private int mSlots;

    public Bank(int slots) {
        mAccounts = new ArrayList<Account>(Collections.nCopies(slots, new Account()));
        mSlots = slots;
    }

    public void transfer(int fromId, int toId, int value) {
        synchronized (mAccounts.get(fromId)) {
            synchronized (mAccounts.get(toId)) {
                mAccounts.get(fromId).withdraw(value);
                mAccounts.get(toId).deposit(value);
            }
        }
    }
}
This works, but does not prevent deadlocks. To fix that, we need to change the synchronization to the following:
synchronized (mAccounts.get(Math.min(fromId, toId))) {
    synchronized (mAccounts.get(Math.max(fromId, toId))) {
        mAccounts.get(fromId).withdraw(value);
        mAccounts.get(toId).deposit(value);
    }
}
But the compiler warns me about nested synchronization blocks and I trust that that is a bad thing to do? Also, I'm not very fond of the max/min solution (I was not the one who came up with that idea) and I would like to avoid that if possible.
How would one fix those 2 problems above? If we could lock on more than one object, we would lock both the from and to account, but we can't do that (as far as I know). What's the solution then?
I personally prefer to avoid any but the most trivial synchronization scenario. In a case like yours I would probably use a synchronized queue collection to funnel deposits and withdrawals into a single-threaded process that manipulates your unprotected variables. The nice thing about these queues is that when you put all the code into the object you drop onto the queue, the code pulling objects from the queue stays absolutely trivial and generic (commandQueue.getNext().execute();), yet the code being executed can be arbitrarily flexible or complex because it has an entire "Command" object for its implementation. This is the kind of pattern that OO-style programming excels at.
This is a great general-purpose solution and can solve quite a few threading problems without explicit synchronization. Synchronization still exists inside your queue, but it is usually minimal and deadlock-free; often only the "put" method needs to be synchronized at all, and that's internal.
Another solution to some threading problems is to ensure that every shared variable you might possibly write to can only be written to by a single process; then you can generally leave off synchronization altogether (although you may need to scatter a few volatiles around).
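As a rough sketch of that queue idea (the TransferProcessor, Command and submitTransfer names are invented for illustration, and Account is taken from the question), deposits and withdrawals become command objects handed to a single worker thread:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TransferProcessor {

    // each queued command carries its own logic; the consumer stays trivial
    public interface Command { void execute(); }

    private final BlockingQueue<Command> commandQueue = new LinkedBlockingQueue<>();

    public TransferProcessor() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    commandQueue.take().execute(); // only this thread ever touches the accounts
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // stop when interrupted
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public void submitTransfer(Account from, Account to, int value) {
        commandQueue.add(() -> {
            from.withdraw(value);
            to.deposit(value);
        });
    }
}
Because one thread performs all mutations, the Account class itself needs no synchronization at all in this design.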
Lock ordering is indeed the solution, so you're right. The compiler warns you because it cannot verify that all your locking is ordered: it is not smart enough to check your code, but it is smart enough to know that nested locking can go wrong.
An alternative solution could be locking on an enclosing object; e.g. for transfers within one user's accounts you could lock on the user. That does not work for transfers between users, though.
Having said that, you are probably not going to rely on Java locking to make a transfer: you need some data storage, usually a database. When a database is used, the locking moves to the storage layer. Still, the same principles apply: you order locks to avoid deadlocks, and you escalate locks to make locking simpler.
I would advise you to look into the lock objects in java.util.concurrent.locks, and have a look at Condition objects too. Each of your account objects can expose a condition on which a thread waits; once a transaction is complete, await() or signal() is called on it.
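That answer is terse, so here is one possible, hedged reading of it using java.util.concurrent.locks: a ReentrantLock per account and tryLock() with back-off, which avoids deadlock without imposing a lock order (the Condition-based signalling the answer mentions is not shown, and the class name is invented):
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

public class LockingBank {

    // one lock per account slot, same index as the account list
    private final ReentrantLock[] locks;
    private final List<Account> accounts;

    public LockingBank(List<Account> accounts) {
        this.accounts = accounts;
        this.locks = new ReentrantLock[accounts.size()];
        for (int i = 0; i < locks.length; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    public void transfer(int fromId, int toId, int value) throws InterruptedException {
        while (true) {
            if (locks[fromId].tryLock()) {
                try {
                    if (locks[toId].tryLock()) {
                        try {
                            accounts.get(fromId).withdraw(value);
                            accounts.get(toId).deposit(value);
                            return;
                        } finally {
                            locks[toId].unlock();
                        }
                    }
                } finally {
                    locks[fromId].unlock();
                }
            }
            Thread.sleep(1); // brief back-off before retrying, to reduce livelock
        }
    }
}
Because a lock that cannot be acquired is released again before retrying, two opposing transfers can never hold one lock each and wait forever.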
If you haven't already, you may want to look at the more advanced locking classes in java.util.concurrent.
While you still have to take care to avoid deadlock, the ReadWriteLocks in particular are useful to allow multi-threaded read access while still locking exclusively for modification.
You can make this easy with polyglot programming: use Software Transactional Memory from Clojure, called from Java.
Software Transactional Memory (STM) is a concurrency control technique, analogous to database transactions, for controlling access to shared memory in concurrent computing. It is an alternative to lock-based synchronization.
Example solution
Account.java
import clojure.lang.Ref;

public class Account {

    private Ref mBalance;

    public Account() {
        mBalance = new Ref(0);
    }

    public void withdraw(int value) {
        mBalance.set(getBalance() - value);
    }

    public void deposit(int value) {
        mBalance.set(getBalance() + value);
    }

    private int getBalance() {
        return (int) mBalance.deref();
    }
}
Bank.java
import clojure.lang.LockingTransaction;
import java.util.*;
import java.util.concurrent.Callable;

public class Bank {

    private List<Account> mAccounts;
    private int mSlots;

    public Bank(int slots) {
        mAccounts = new ArrayList<>(Collections.nCopies(slots, new Account()));
        mSlots = slots;
    }

    public void transfer(int fromId, int toId, int value) {
        try {
            LockingTransaction.runInTransaction(
                new Callable() {
                    @Override
                    public Object call() throws Exception {
                        mAccounts.get(fromId).withdraw(value);
                        mAccounts.get(toId).deposit(value);
                        return null;
                    }
                });
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Dependencies
<dependency>
    <groupId>org.clojure</groupId>
    <artifactId>clojure</artifactId>
    <version>1.6.0</version>
</dependency>

How to correctly create a SynchronizedStack class?

I made a simple synchronized Stack object in Java, just for training purposes.
Here is what I did:
import java.util.ArrayDeque;
import java.util.Iterator;

public class SynchronizedStack {

    private ArrayDeque<Integer> stack;

    public SynchronizedStack() {
        this.stack = new ArrayDeque<Integer>();
    }

    public synchronized Integer pop() {
        return this.stack.pop();
    }

    public synchronized int forcePop() {
        while (isEmpty()) {
            System.out.println(" Stack is empty");
            try {
                wait();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        return this.stack.pop();
    }

    public synchronized void push(int i) {
        this.stack.push(i);
        notifyAll();
    }

    public boolean isEmpty() {
        return this.stack.isEmpty();
    }

    public synchronized void pushAll(int[] d) {
        for (int i = 0; i < d.length; i++) {
            this.stack.push(d[i]);
        }
        notifyAll();
    }

    public synchronized String toString() {
        String s = "[";
        Iterator<Integer> it = this.stack.iterator();
        while (it.hasNext()) {
            s += it.next() + ", ";
        }
        s += "]";
        return s;
    }
}
Here are my questions:
Is it OK not to synchronize the isEmpty() method? I figured it was, because even if another thread is modifying the stack at the same time it would still return a coherent result (there is no intermediate state that is neither the initial nor the final one). Or is it better design to have all the methods of a synchronized object synchronized?
I don't like the forcePop() method. I just wanted to create a thread that could wait until an item was pushed onto the stack before popping an element, and I thought the best option was to do the loop with the wait() in the run() method of the thread, but I can't because it throws an IllegalMonitorStateException. What is the proper way to do something like this?
Any other comment/suggestion?
Thank you!
Stack itself is already synchronized, so it doesn't make sense to apply synchronization again (use ArrayDeque if you want a non-synchronized stack implementation).
It's NOT OK (aside from the previous point), because the lack of synchronization may cause memory visibility effects.
forcePop() is pretty good, though it should propagate InterruptedException rather than catching it, to follow the contract of an interruptible blocking method. That would allow you to interrupt a thread blocked in a forcePop() call by calling Thread.interrupt().
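In other words, a small sketch of that last point applied to the forcePop() from the question, letting the exception propagate:
public synchronized int forcePop() throws InterruptedException {
    while (isEmpty()) {
        System.out.println(" Stack is empty");
        wait(); // InterruptedException propagates to the caller,
                // so Thread.interrupt() can unblock a waiting thread
    }
    return this.stack.pop();
}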
Assuming that stack.isEmpty() won't need synchronization might be true, but you are relying on an implementation detail of a class that you have no control over.
The javadocs of Stack state that the class is not thread-safe, so you should synchronize all access.
I think you're mixing idioms a little. You are backing your SynchronizedStack with java.util.Stack, which in turn is backed by java.util.Vector, which is synchronized. I think you should encapsulate the wait() and notify() behavior in another class.
The only problem with not synchronizing isEmpty() is that you don't know what's happening underneath. While your reasoning is, well, reasonable, it assumes that the underlying Stack is also behaving in a reasonable manner. Which it probably is in this case, but you can't rely on it in general.
And the second part of your question, there's nothing wrong with a blocking pop operation, see this for a complete implementation of all the possible strategies.
And one other suggestion: if you're creating a class that is likely to be re-used in several parts of an application (or even several applications), don't use synchronized methods. Do this instead:
public class Whatever {

    private final Object lock = new Object();

    public void doSomething() {
        synchronized (lock) {
            ...
        }
    }
}
The reason for this is that you don't really know if users of your class want to synchronize on your Whatever instances or not. If they do, they might interfere with the operation of the class itself. This way you've got your very own private lock which nobody can interfere with.

How do I create a thread-safe write-once read-many value in Java?

This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. The systems have to target Java 4 at the latest, so answers that suggest support available only in later JVMs are not useful either.
By way of example, let's say I have a class that is structured to fit into an application framework like so:
public class MyClass
{
    private /*ideally-final*/ SomeObject someObject;

    MyClass() {
        someObject = null;
    }

    public void startup() {
        someObject = new SomeObject(...arguments from environment which are not available until startup is called...);
    }

    public void shutdown() {
        someObject = null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
    }
}
I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization.
The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so (UPDATE: corrected the threading semantics of this class):
public class WormRef<T>
{
    private volatile T reference; // wrapped reference

    public WormRef() {
        reference = null;
    }

    public WormRef<T> init(T val) {
        if (reference != null) { throw new IllegalStateException("The WormRef container is already initialized"); }
        reference = val;
        return this;
    }

    public T get() {
        if (reference == null) { throw new IllegalStateException("The WormRef container is not initialized"); }
        return reference;
    }
}
and then in MyClass, above, do:
private final WormRef<SomeObject> someObject;

MyClass() {
    someObject = new WormRef<SomeObject>();
}

public void startup() {
    someObject.init(new SomeObject(...));
}

public void sometimeLater() {
    someObject.get().doSomething();
}
Which raises some questions for me:
Is there a better way, or existing Java object (would have to be available in Java 4)?
Secondarily, in terms of thread safety:
Is this thread-safe, provided that no other thread accesses someObject.get() until after its init() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this.
Given the completely unsynchronized WormRef container, is it ever possible under the JMM to see a value of reference which is neither null nor a reference to a SomeObject? In other words, does the JMM always guarantee that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated? I believe the answer is "Yes" because allocation explicitly zeroes the allocated memory - but can CPU caching result in something else being observed at a given memory location?
Is it sufficient to make WormRef.reference volatile to ensure proper multithreaded semantics?
Note the primary thrust of this question is how to express and enforce the finalness of someObject without being able to actually mark it final; secondary is what is necessary for thread-safety. That is, don't get too hung up on the thread-safety aspect of this.
I would start by declaring your someObject volatile.
private volatile SomeObject someObject;
The volatile keyword creates a memory barrier, which means separate threads will always see the updated memory when referencing someObject.
In your current implementation some threads may still see someObject as null even after startup has been called.
Actually this volatile technique is used a lot by the collections declared in the java.util.concurrent package.
And as some other posters suggest here, if all else fails fall back to full synchronization.
I would remove the setter method in WormRef, and provide a synchronized init() method which throws an exception if the reference is already non-null.
Consider using AtomicReference as a delegate in this object-container you're trying to create. For example:
import java.util.concurrent.atomic.AtomicReference;

public class Foo<Bar> {

    private final AtomicReference<Bar> myBar = new AtomicReference<Bar>();

    public Bar get() {
        if (myBar.get() == null) myBar.compareAndSet(null, init());
        return myBar.get();
    }

    Bar init() { /* ... */ }

    // ...
}
EDITED: That will set the reference once, with some lazy-initialization method. It's not perfect for preventing multiple calls to a (presumably expensive) init(), but it could be worse. You could move the instantiation of myBar into the constructor, and then later add a constructor that allows assignment as well, if provided the correct info.
There's some general discussion of thread-safe, singleton instantiation (which is pretty similar to your problem) at, for example, this site.
In theory it would be sufficient to rewrite startup() as follows:
public synchronized void startup() {
    if (someObject == null) someObject = new SomeObject();
}
By the way, although the WormRef field is final, threads can still invoke init() multiple times. You'll really need to add some synchronization.
Update: I played around with it a bit and created an SSCCE; you may find it useful to experiment with it :)
package com.stackoverflow.q2428725;

import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Test {

    public static void main(String... args) throws Exception {
        Bean bean = new Bean();
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);
        executor.schedule(new StartupTask(bean), 2, TimeUnit.SECONDS);
        executor.schedule(new StartupTask(bean), 2, TimeUnit.SECONDS);
        Future<String> result1 = executor.submit(new GetTask(bean));
        Future<String> result2 = executor.submit(new GetTask(bean));
        System.out.println("Result1: " + result1.get());
        System.out.println("Result2: " + result2.get());
        executor.shutdown();
    }
}

class Bean {

    private String property;
    private CountDownLatch latch = new CountDownLatch(1);

    public synchronized void startup() {
        if (property == null) {
            System.out.println("Setting property.");
            property = "foo";
            latch.countDown();
        } else {
            System.out.println("Property already set!");
        }
    }

    public String get() {
        try {
            latch.await();
        } catch (InterruptedException e) {
            // handle.
        }
        return property;
    }
}

class StartupTask implements Runnable {

    private Bean bean;

    public StartupTask(Bean bean) {
        this.bean = bean;
    }

    public void run() {
        System.out.println("Starting up bean...");
        bean.startup();
        System.out.println("Bean started!");
    }
}

class GetTask implements Callable<String> {

    private Bean bean;

    public GetTask(Bean bean) {
        this.bean = bean;
    }

    public String call() {
        System.out.println("Getting bean property...");
        String property = bean.get();
        System.out.println("Bean property got!");
        return property;
    }
}
The CountDownLatch will cause all await() calls to block until the countdown reaches zero.
It is most likely thread safe, judging from your description of the framework. There must be a memory barrier somewhere between calling myobj.startup() and making myobj available to other threads. That guarantees that the writes in startup() will be visible to other threads, so you don't have to worry about thread safety because the framework does it for you. There is no free lunch, though: every time another thread obtains access to myobj through the framework, it must involve a sync or a volatile read.
If you look into the framework and follow the code path, you'll see sync/volatile in the proper places that make your code thread safe. That is, if the framework is correctly implemented.
Let's examine a typical Swing example, where a worker thread does some calculation, saves the result in a global variable x, then sends a repaint event. The GUI thread, upon receiving the repaint event, reads the result from the global variable x and repaints accordingly.
Neither the worker thread nor the repaint code does any synchronization or volatile read/write on anything. There must be tens of thousands of implementations like this. Luckily they are all thread safe even though the programmers paid no special attention. Why? Because the event queue is synchronized; we have a nice happens-before chain:
write x -> insert event -> read event -> read x
Therefore write x and read x are properly synchronized, implicitly via the event framework.
how about synchronization?
No it is not thread safe. Without synchronization, the new state of your variable might never get communicated to other threads.
Yes, as far as I know references are atomic so you will see either null or the reference. Note that the state of the referenced object is a completely different story
Could you use a ThreadLocal that only allows each thread's value to be set once?
There are a LOT of wrong ways to do lazy instantiation, especially in Java.
In short, the naive approach is to create a private object, a public synchronized init method, and a public unsynchronized get method that performs a null check on your object and calls init if necessary. The intricacies of the problem come in performing the null check in a thread safe way.
This article should be of use: http://en.wikipedia.org/wiki/Double-checked_locking
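For reference, the naive approach described above looks roughly like this (LazyHolder is an invented name, SomeObject is the question's type, and the volatile modifier is what keeps the unsynchronized check safe on Java 5 and later, as the linked article explains):
public class LazyHolder {

    // volatile makes the unsynchronized null check in get() safe under the
    // Java 5+ memory model; without it this is the classic broken double-checked locking
    private volatile SomeObject value;

    public synchronized void init() {
        if (value == null) {
            value = new SomeObject(/* ... */);
        }
    }

    public SomeObject get() {
        if (value == null) { // unsynchronized fast path
            init();          // synchronized, idempotent slow path
        }
        return value;
    }
}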
This specific topic, in Java, is discussed in depth in Doug Lea's 'Concurrent Programming in Java' which is somewhat out of date, and in 'Java Concurrency in Practice' coauthored by Lea and others. In particular, CPJ was published before the release of Java 5, which significantly improved Java's concurrency controls.
I can post more specifics when I get home and have access to said books.
This is my final answer, Regis1 :
/**
 * Provides a simple write-once, read-many wrapper for an object reference for those situations
 * where you have an instance variable which you would like to declare as final but can't because
 * the instance initialization extends beyond construction.
 * <p>
 * An example would be <code>java.awt.Applet</code> with its constructor, <code>init()</code> and
 * <code>start()</code> methods.
 * <p>
 * Threading Design : [ ] Single Threaded [x] Threadsafe [ ] Immutable [ ] Isolated
 *
 * @since Build 2010.0311.1923
 */
public class WormRef<T>
extends Object
{
    private volatile T reference; // wrapped reference

    public WormRef() {
        super();
        reference = null;
    }

    public WormRef<T> init(T val) {
        // Use synchronization to prevent a race condition whereby the following interleaving
        // could happen between three threads:
        //
        //    Thread 1         Thread 2         Thread 3
        //    ---------------  ---------------  ---------------
        //    init-read null
        //                     init-read null
        //    init-write A
        //                                      get A
        //                     init-write B
        //                                      get B
        //
        // whereby Thread 3 sees A on the first get and B on subsequent gets.
        synchronized (this) {
            if (reference != null) { throw new IllegalStateException("The WormRef container is already initialized"); }
            reference = val;
        }
        return this;
    }

    public T get() {
        if (reference == null) { throw new IllegalStateException("The WormRef container is not initialized"); }
        return reference;
    }
} // END PUBLIC CLASS
(1) Confer the game show "Who Wants to Be a Millionaire", hosted by Regis Philbin.
Just my little version based on AtomicReference. It's probably not the best, but I believe it to be clean and easy to use:
public static class ImmutableReference<V> {

    private final AtomicReference<V> ref = new AtomicReference<V>(null);

    public boolean trySet(V v)
    {
        if (v == null)
            throw new IllegalArgumentException("ImmutableReference cannot hold null values");
        return ref.compareAndSet(null, v);
    }

    public void set(V v)
    {
        if (!trySet(v)) throw new IllegalStateException("Trying to modify an immutable reference");
    }

    public V get()
    {
        V v = ref.get();
        if (v == null)
            throw new IllegalStateException("Not initialized immutable reference.");
        return v;
    }

    public V tryGet()
    {
        return ref.get();
    }
}
First question: why can't you just make startup() a private method, called in the constructor? Then the field can be final. This would ensure thread safety after the constructor is called, since the field is invisible before and only read after the constructor returns. Or refactor your class structure so that the startup method can create the MyClass object as part of its construction. In many ways this particular case looks like a case of poor structure, where you really just want the field to be final and immutable.
The easy approach: if the class is immutable and is only read after it is created, wrap it in an ImmutableList from Guava. You can also write your own immutable wrapper which defensively copies when asked to return the reference, which prevents a client from changing the reference. If it is internally immutable, no further synchronization is needed and unsynchronized reads are permissible. You can have your wrapper defensively copy on request, so even attempts to write to it fail cleanly (they just don't do anything). You may need a memory barrier, or you may be able to do lazy initialisation, although note that lazy initialisation may require further synchronization, as you may get several unsynchronized read requests while the object is being constructed.
A slightly more involved approach would be to use an enumeration. Since enum constants are guaranteed singletons, as soon as the enum is created it is fixed for ever. You still have to make sure that the object is internally immutable, but it does guarantee the singleton status without much effort.
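A hedged sketch of that enum idea (the names are invented; note that it only applies if the construction parameters are already available when the enum class is first used, rather than only at startup()):
public enum SomeObjectHolder {
    INSTANCE;

    private final SomeObject value;

    SomeObjectHolder() {
        // runs exactly once, when the enum class is initialized;
        // class initialization gives the necessary happens-before guarantee
        value = new SomeObject(/* parameters available at class-initialization time */);
    }

    public SomeObject get() {
        return value;
    }
}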
The following class could answer your question. Some thread-safety is achieved by using a volatile intermediate variable in conjunction with a final value keeper in the provided generic. You may consider increasing it further by using a synchronized setter/getter. Hope it helps.
https://stackoverflow.com/a/38290652/6519864
