I need to create a class that has a shared-between-threads Object (let's call it SharedObject). The special thing about SharedObject is that it holds a String which will be returned in a multithreaded environment, and occasionally the entire SharedObject will be replaced by changing the field reference to a newly created object.
I do not want to make the read and write both synchronized on the same monitor, because the write scenario happens rarely while the read scenario is quite common. Therefore I did the following:
public class ObjectHolder {
    private volatile SharedObject sharedObject;

    public String getSharedObjectString() {
        if (!isObjectStillValid()) {
            obtainNewSharedObject();
        }
        return sharedObject.getCommonString();
    }

    public synchronized void obtainNewSharedObject() {
        /* This is in case multiple threads wait on this lock;
           after the first one obtains a new object the others can just
           use it and should not obtain a new one */
        if (!isObjectStillValid()) {
            sharedObject = new SharedObject(/* some parameters from somewhere */);
        }
    }
}
From what I have read in the documentation and on Stack Overflow, the synchronized keyword ensures that only one thread can execute the synchronized block on the same object instance (therefore a write race / multiple unnecessary writes is a non-issue), while the volatile keyword on the field reference ensures the reference value is written to main memory rather than remaining in a thread-local cache.
Are there any other pitfalls I am missing?
I want to be sure that when sharedObject is written to within the synchronized block, the new value of sharedObject is visible to every other thread at the latest when the lock for obtainNewSharedObject() is released. Should this not be guaranteed, I could run into unnecessary writes replacing correct values, which would be a big problem for this case.
I know that to be absolutely safe I could just make getSharedObjectString() synchronized as well; however, as stated previously, I do not want to block reading when it is not needed.
This way reading is non-blocking; when a write scenario occurs, it is blocking.
I should probably mention that isObjectStillValid() is thread-independent (it is based entirely on SharedObject and the system clock), and is therefore a valid lock-free check to use for deciding on write scenarios.
Edit: Please note I could not find a similar topic on stackoverflow, but it may exist. Sorry if that is the case.
Edit2: Thank you for all the comments. Edit because apparently I cannot upvote yet (I can, but it does not show). While my solution is functional as long as isObjectStillValid is thread-safe, it can suffer from decreased performance due to multiple accesses to the volatile field. I will most likely improve it using the upgraded double-checked locking solution. I will also analyse in depth all the other possibilities mentioned here.
Why don't you use AtomicReference? It uses optimistic locking, meaning that no actual thread locking is involved. Internally it uses compare-and-swap. If you look at the implementation, it uses volatile, and I would trust Doug Lea to implement it correctly :)
Apart from this, there are many more ways to synchronize between a lot of readers and a few writers - for example a ReadWriteLock.
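For the question's holder, a sketch on top of AtomicReference could look something like this. SharedObject is the class from the question; isStillValid() is a hypothetical per-object variant of isObjectStillValid(), and I assume an initial object can be constructed eagerly. Note the trade-off: under contention several threads may each construct a candidate SharedObject, but only one of them is ever installed.

import java.util.concurrent.atomic.AtomicReference;

public class ObjectHolder {

    private final AtomicReference<SharedObject> ref =
            new AtomicReference<SharedObject>(new SharedObject(/* initial parameters */));

    public String getSharedObjectString() {
        SharedObject current = ref.get();
        if (!current.isStillValid()) {                        // hypothetical staleness check on the snapshot
            SharedObject fresh = new SharedObject(/* parameters from somewhere */);
            // Exactly one thread wins the CAS; the losers throw away their candidate
            // and re-read the winner's object instead of installing another one.
            if (ref.compareAndSet(current, fresh)) {
                current = fresh;
            } else {
                current = ref.get();
            }
        }
        return current.getCommonString();
    }
}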
This looks like a classic double-checked locking pattern. While your implementation is logically correct - thanks to the use of volatile on sharedObject - it might not be the most performant.
The recommended pattern for Java 1.5 onwards is shown on the Wikipedia page for double-checked locking (http://en.wikipedia.org/wiki/Double-checked_locking).
// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
    private volatile Helper helper;

    public Helper getHelper() {
        Helper localRef = helper;
        if (localRef == null) {
            synchronized (this) {
                localRef = helper;
                if (localRef == null) {
                    helper = localRef = new Helper();
                }
            }
        }
        return localRef;
    }

    // other functions and members...
}
Note the use of localRef for accessing the helper field. This limits access to the volatile field in the common case to a single read, instead of potentially two: once for the check and once for the return. See the Wikipedia page again, just after the recommended pattern sample:
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 25 percent.[7]
Depending on how isObjectStillValid() accesses sharedObject, you might benefit from a similar pattern.
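Applied to your class, the read path could be sketched like this. Here isStillValid(localRef) stands in for your isObjectStillValid(), rewritten (an assumption on my part) to check the snapshot instead of re-reading the field:

public String getSharedObjectString() {
    SharedObject localRef = sharedObject;              // single volatile read on the fast path
    if (!isStillValid(localRef)) {                     // hypothetical check against the snapshot
        synchronized (this) {
            localRef = sharedObject;
            if (!isStillValid(localRef)) {
                sharedObject = localRef = new SharedObject(/* parameters from somewhere */);
            }
        }
    }
    return localRef.getCommonString();
}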
This sounds like a case where a ReadWriteLock would be appropriate.
The basic idea is that there can be multiple simultaneous readers or one exclusive writer. Here you can find an example of how to use it in a List implementation.
Copy-paste in case the site goes down:
import java.util.*;
import java.util.concurrent.locks.*;
/**
* ReadWriteList.java
* This class demonstrates how to use ReadWriteLock to add concurrency
* features to a non-threadsafe collection
* @author www.codejava.net
*/
public class ReadWriteList<E> {

    private List<E> list = new ArrayList<>();
    private ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public ReadWriteList(E... initialElements) {
        list.addAll(Arrays.asList(initialElements));
    }

    public void add(E element) {
        Lock writeLock = rwLock.writeLock();
        writeLock.lock();
        try {
            list.add(element);
        } finally {
            writeLock.unlock();
        }
    }

    public E get(int index) {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.get(index);
        } finally {
            readLock.unlock();
        }
    }

    public int size() {
        Lock readLock = rwLock.readLock();
        readLock.lock();
        try {
            return list.size();
        } finally {
            readLock.unlock();
        }
    }
}
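Adapted to the ObjectHolder from the question, a sketch might look like this (assuming the same SharedObject and the clock-based staleness check from the question; note that ReentrantReadWriteLock does not allow upgrading a read lock to a write lock, so the read lock is released first and the condition re-checked under the write lock):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ObjectHolder {

    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private SharedObject sharedObject;   // guarded by rwLock, so no volatile needed

    public String getSharedObjectString() {
        rwLock.readLock().lock();
        try {
            if (isObjectStillValid()) {
                return sharedObject.getCommonString();
            }
        } finally {
            rwLock.readLock().unlock();
        }
        rwLock.writeLock().lock();
        try {
            // Re-check: another thread may have renewed the object while we waited for the write lock.
            if (!isObjectStillValid()) {
                sharedObject = new SharedObject(/* parameters from somewhere */);
            }
            return sharedObject.getCommonString();
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    private boolean isObjectStillValid() {
        return sharedObject != null /* && SharedObject/clock based check, as in the question */;
    }
}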
I have a simple, managed group of Stacks that need to be accessed in a thread-safe manner. My first implementation works correctly but uses synchronized methods for all access, i.e. locking is at the coarsest level. I'd like to make locking as granular as possible but I'm unsure of the best way to go about it.
Here's the basics of my Stack manager class (with some details elided for brevity):
public class StackManager {

    private final Map<String, Deque<String>> myStacks;

    public StackManager() {
        myStacks = new ConcurrentHashMap<String, Deque<String>>();
    }

    public synchronized void addStack(String name) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }

    public synchronized void removeStack(String name) {
        if (!myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.remove(name);
    }

    public synchronized void push(String stack, String payload) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        myStacks.get(stack).push(payload);
    }

    public synchronized String pop(String stack) {
        if (!myStacks.containsKey(stack)) {
            throw new IllegalArgumentException();
        }
        return myStacks.get(stack).pop();
    }
}
The stack-level methods (addStack(), removeStack()) are not used that often, but I'd like to know whether their level of locking can be reduced. For example, if these methods were unsynchronized and instead locked on myStacks, would that reduce contention? For example:
public void addStack(String name) {
    synchronized (myStacks) {
        if (myStacks.containsKey(name)) {
            throw new IllegalArgumentException();
        }
        myStacks.put(name, new ConcurrentLinkedDeque<String>());
    }
}
The per-stack methods (push(), pop()) are where I feel the most gains can be made. I'd like to achieve per-stack locking if I could. That is, only lock the single stack within the stack manager that is being operated on. However I cannot see a good way to do this. Any suggestions?
While we're here, is it necessary to use the concurrent implementations of both Map and Deque?
Both data structures are thread safe, so every isolated operation on them is thread safe.
The problem is performing more than one operation when there's a dependency between them.
In your case, checking for existence must be atomic with the actual operation to avoid race conditions.
To add a new stack, you can use the method putIfAbsent, which is atomic without being synchronized.
To remove a stack, you don't need to check for existence first. If you want to know whether it existed, just use the return value of remove: if it's null, the stack didn't exist.
To push and pop, get the stack first and assign it to a local variable. If it's null, the stack didn't exist; if it's non-null, you can safely push or pop on it.
The attribute myStacks must be either final or volatile to be thread safe.
Now you don't need any synchronization. I would also prefer a solution without exceptions; only adding a new stack really seems to call for one. If a situation can happen in a correct program, a checked exception is appropriate; a runtime exception is more suitable when it indicates a bug.
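Putting those pieces together, a sketch of the manager along these lines might look like this (same exception policy as the original, kept for comparison; note a small race remains where a push can land on a deque that has just been removed from the map):

import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

public class StackManager {

    private final Map<String, Deque<String>> myStacks = new ConcurrentHashMap<>();

    public void addStack(String name) {
        // putIfAbsent is atomic: only one thread can create the stack for a given name.
        if (myStacks.putIfAbsent(name, new ConcurrentLinkedDeque<>()) != null) {
            throw new IllegalArgumentException();
        }
    }

    public void removeStack(String name) {
        if (myStacks.remove(name) == null) {
            throw new IllegalArgumentException();
        }
    }

    public void push(String stack, String payload) {
        Deque<String> deque = myStacks.get(stack);   // read the map once, then work on the snapshot
        if (deque == null) {
            throw new IllegalArgumentException();
        }
        deque.push(payload);
    }

    public String pop(String stack) {
        Deque<String> deque = myStacks.get(stack);
        if (deque == null) {
            throw new IllegalArgumentException();
        }
        return deque.pop();
    }
}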
Oh, and triplecheck and test it, as concurrent programming is tricky.
Consider the following method:
public void upsert(int customerId, int somethingElse) {
// some code which is prone to race conditions
}
I want to protect this method from race conditions, but a race can only occur if two threads with the same customerId call it at the same time. If I make the whole method synchronized it will reduce efficiency, and that isn't really needed. What I really want is to synchronize around the customerId. Is this possible somehow in Java? Are there any built-in tools for that, or would I need a Map of Integers to use as locks?
Also feel free to advice if you think I'm doing something wrong here :)
Thanks!
The concept you're looking for is called segmented locking or striped locking. It is too wasteful to have a separate lock for each customer (locks are quite heavyweight). Instead you want to partition your customer ID space into a reasonable number of partitions, matching the desired degree of parallelism. Typically 8-16 would be enough, but this depends on the amount of work the method does.
This outlines a simple approach:
private final Object[] locks = new Object[8];
{
    // fill the stripes up front: synchronizing on a null array element would throw a NullPointerException
    for (int i = 0; i < locks.length; i++) locks[i] = new Object();
}

synchronized (locks[customerId % locks.length]) {
    // ...implementation...
}
private static final Set<Integer> lockedIds = new HashSet<>();

private void lock(Integer id) throws InterruptedException {
    synchronized (lockedIds) {
        while (!lockedIds.add(id)) {
            lockedIds.wait();
        }
    }
}

private void unlock(Integer id) {
    synchronized (lockedIds) {
        lockedIds.remove(id);
        lockedIds.notifyAll();
    }
}

public void upsert(int customerId) throws InterruptedException {
    lock(customerId);  // acquire before the try, so an interrupted lock() cannot release someone else's id in the finally
    try {
        // Put your code here.
        // For different ids it is executed in parallel.
        // For equal ids it is executed synchronously.
    } finally {
        unlock(customerId);
    }
}
id can be not only an 'Integer' but any class with correctly overridden 'equals' and 'hashCode' methods.
try-finally is very important: you must guarantee that waiting threads are unlocked after your operation, even if the operation throws an exception.
It will not work if your back-end is distributed across multiple servers/JVMs.
This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful. The systems have to target Java 4 at the latest, so answers that suggest support available only in later JVMs are not useful either.
By way of example, let's say I have a class that is structured to fit into an application framework like so:
public class MyClass
{
    private /*ideally-final*/ SomeObject someObject;

    MyClass() {
        someObject = null;
    }

    public void startup() {
        someObject = new SomeObject(/* arguments from environment which are not available until startup is called */);
    }

    public void shutdown() {
        someObject = null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
    }
}
I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization.
The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so (UPDATE - corrected threading semantics of this class):
public class WormRef<T>
{
    private volatile T reference; // wrapped reference

    public WormRef() {
        reference = null;
    }

    public WormRef<T> init(T val) {
        if (reference != null) { throw new IllegalStateException("The WormRef container is already initialized"); }
        reference = val;
        return this;
    }

    public T get() {
        if (reference == null) { throw new IllegalStateException("The WormRef container is not initialized"); }
        return reference;
    }
}
and then in MyClass, above, do:
private final WormRef<SomeObject> someObject;

MyClass() {
    someObject = new WormRef<SomeObject>();
}

public void startup() {
    someObject.init(new SomeObject(...));
}

public void sometimeLater() {
    someObject.get().doSomething();
}
Which raises some questions for me:
Is there a better way, or existing Java object (would have to be available in Java 4)?
Secondarily, in terms of thread safety:
Is this thread-safe, provided that no other thread calls someObject.get() until after init() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown() - the framework guarantees this.
Given the completely unsynchronized WormRef container, is it ever possible under either JMM (old or new) to see a value of reference which is neither null nor a reference to a SomeObject? In other words, does the JMM guarantee that no thread can observe the memory of an object as whatever values happened to be on the heap when the object was allocated? I believe the answer is "Yes", because allocation explicitly zeroes the allocated memory - but can CPU caching result in something else being observed at a given memory location?
Is it sufficient to make WormRef.reference volatile to ensure proper multithreaded semantics?
Note the primary thrust of this question is how to express and enforce the finalness of someObject without being able to actually mark it final; secondary is what is necessary for thread-safety. That is, don't get too hung up on the thread-safety aspect of this.
I would start by declaring your someObject volatile.
private volatile SomeObject someObject;
The volatile keyword creates a memory barrier, which means separate threads will always see updated memory when referencing someObject.
In your current implementation some threads may still see someObject as null even after startup has been called.
Actually this volatile technique is used a lot by the collections declared in the java.util.concurrent package.
And as some other posters suggest here, if all else fails fall back to full synchronization.
I would remove the setter method in WormRef, and provide a synchronized init() method which throws an exception if (reference != null).
Consider using AtomicReference as a delegate in this object-container you're trying to create. For example:
import java.util.concurrent.atomic.AtomicReference;

public class Foo<Bar> {

    private final AtomicReference<Bar> myBar = new AtomicReference<Bar>();

    public Bar get() {
        if (myBar.get() == null) myBar.compareAndSet(null, init());
        return myBar.get();
    }

    Bar init() { /* ... construct the Bar here ... */ }

    // ...
}
EDITED: That will set the value once, with some lazy-initialization method. It's not perfect for preventing multiple calls to a (presumably expensive) init(), but it could be worse. You could move the instantiation of myBar into the constructor, and later add a constructor that allows assignment as well, if provided the correct info.
There's some general discussion of thread-safe, singleton instantiation (which is pretty similar to your problem) at, for example, this site.
In theory it would be sufficient to rewrite startup() as follows:
public synchronized void startup() {
if (someObject == null) someObject = new SomeObject();
}
By the way, although the WormRef field is final, threads can still invoke set() multiple times. You'll really need to add some synchronization.
update: I played around with it a bit and created an SSCCE; you may find it useful to experiment with it :)
package com.stackoverflow.q2428725;
import java.util.concurrent.Callable;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
public class Test {
public static void main(String... args) throws Exception {
Bean bean = new Bean();
ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);
executor.schedule(new StartupTask(bean), 2, TimeUnit.SECONDS);
executor.schedule(new StartupTask(bean), 2, TimeUnit.SECONDS);
Future<String> result1 = executor.submit(new GetTask(bean));
Future<String> result2 = executor.submit(new GetTask(bean));
System.out.println("Result1: " + result1.get());
System.out.println("Result2: " + result2.get());
executor.shutdown();
}
}
class Bean {
private String property;
private CountDownLatch latch = new CountDownLatch(1);
public synchronized void startup() {
if (property == null) {
System.out.println("Setting property.");
property = "foo";
latch.countDown();
} else {
System.out.println("Property already set!");
}
}
public String get() {
try {
latch.await();
} catch (InterruptedException e) {
// handle.
}
return property;
}
}
class StartupTask implements Runnable {
private Bean bean;
public StartupTask(Bean bean) {
this.bean = bean;
}
public void run() {
System.out.println("Starting up bean...");
bean.startup();
System.out.println("Bean started!");
}
}
class GetTask implements Callable<String> {
private Bean bean;
public GetTask(Bean bean) {
this.bean = bean;
}
public String call() {
System.out.println("Getting bean property...");
String property = bean.get();
System.out.println("Bean property got!");
return property;
}
}
The CountDownLatch will cause all await() calls to block until the countdown reaches zero.
It is most likely thread safe, from your description of the framework. There must have been a memory barrier somewhere between calling myobj.startup() and making myobj available to other threads. That guarantees that the writes in startup() will be visible to other threads. Therefore you don't have to worry about thread safety, because the framework does it. There is no free lunch though; every time another thread obtains access to myobj through the framework, it must involve a sync or a volatile read.
If you look into the framework and list the code in the path, you'll see sync/volatile in proper places that make your code thread safe. That is, if the framework is correctly implemented.
Let's examine a typical swing example, where a worker threads does some calculation, saves the results in a global variable x, then sends a repaint event. The GUI thread upon receiving the repaint event, reads the results from the global variable x, and repaints accordingly.
Neither the worker thread nor the repaint code does any synchronization or volatile read/write on anything. There must be tens of thousands of implementations like this. Luckily they are all thread safe even though the programmers paid no special attention. Why? Because the event queue is synchronized; we have a nice happens-before chain:
write x - insert event - read event - read x
Therefore write x and read x are properly synchronized, implicitly, via the event framework.
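To make that chain concrete, here is a minimal, hypothetical sketch of the same shape; the only synchronization involved is the one hidden inside the event queue that invokeLater posts to:

import javax.swing.SwingUtilities;

public class RepaintExample {

    private int x;                                   // shared result: no volatile, no locks

    void workerStep() {
        x = expensiveComputation();                  // (1) plain write on the worker thread
        SwingUtilities.invokeLater(this::repaintFromX);  // (2) post the event (synchronized queue)
    }

    private void repaintFromX() {
        // (3) runs on the event dispatch thread; reading x here is safe because
        // (1) happens-before (2), and posting/taking the event is synchronized,
        // so (2) happens-before (3).
        System.out.println("painting with x = " + x);
    }

    private int expensiveComputation() { return 42; }

    public static void main(String[] args) {
        RepaintExample e = new RepaintExample();
        new Thread(e::workerStep).start();
    }
}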
how about synchronization?
No it is not thread safe. Without synchronization, the new state of your variable might never get communicated to other threads.
Yes, as far as I know references are atomic so you will see either null or the reference. Note that the state of the referenced object is a completely different story
Could you use a ThreadLocal that only allows each thread's value to be set once?
There are a LOT of wrong ways to do lazy instantiation, especially in Java.
In short, the naive approach is to create a private object, a public synchronized init method, and a public unsynchronized get method that performs a null check on your object and calls init if necessary. The intricacies of the problem come in performing the null check in a thread safe way.
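In code, that naive shape looks roughly like this (a sketch with a hypothetical Helper class; the unsynchronized null check in get() is exactly where the subtleties described in the article below come in):

class Helper { /* whatever needs lazy construction */ }

class LazyHolder {

    private Helper helper;                 // note: not volatile in the naive version

    public synchronized void init() {
        if (helper == null) {
            helper = new Helper();
        }
    }

    public Helper get() {
        if (helper == null) {
            init();
        }
        // Danger: a thread that saw a non-null helper above skipped the lock entirely and
        // may observe a partially constructed Helper, because nothing orders this read
        // against the writes in init(). Making helper volatile (or always synchronizing) fixes this.
        return helper;
    }
}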
This article should be of use: http://en.wikipedia.org/wiki/Double-checked_locking
This specific topic, in Java, is discussed in depth in Doug Lea's 'Concurrent Programming in Java' which is somewhat out of date, and in 'Java Concurrency in Practice' coauthored by Lea and others. In particular, CPJ was published before the release of Java 5, which significantly improved Java's concurrency controls.
I can post more specifics when I get home and have access to said books.
This is my final answer, Regis(1):
/**
* Provides a simple write-one, read-many wrapper for an object reference for those situations
* where you have an instance variable which you would like to declare as final but can't because
* the instance initialization extends beyond construction.
* <p>
* An example would be <code>java.awt.Applet</code> with its constructor, <code>init()</code> and
* <code>start()</code> methods.
* <p>
* Threading Design : [ ] Single Threaded [x] Threadsafe [ ] Immutable [ ] Isolated
*
* @since Build 2010.0311.1923
*/
public class WormRef<T>
extends Object
{
private volatile T reference; // wrapped reference
public WormRef() {
super();
reference=null;
}
public WormRef<T> init(T val) {
// Use synchronization to prevent a race-condition whereby the following interation could happen between three threads
//
// Thread 1 Thread 2 Thread 3
// --------------- --------------- ---------------
// init-read null
// init-read null
// init-write A
// get A
// init-write B
// get B
//
// whereby Thread 3 sees A on the first get and B on subsequent gets.
synchronized(this) {
if(reference!=null) { throw new IllegalStateException("The WormRef container is already initialized"); }
reference=val;
}
return this;
}
public T get() {
if(reference==null) { throw new IllegalStateException("The WormRef container is not initialized"); }
return reference;
}
} // END PUBLIC CLASS
(1) Confer the game show "Who Wants to Be a Millionaire", hosted by Regis Philbin.
Just my little version based on AtomicReference. It's probably not the best, but I believe it to be clean and easy to use:
public static class ImmutableReference<V> {

    private AtomicReference<V> ref = new AtomicReference<V>(null);

    public boolean trySet(V v)
    {
        if (v == null)
            throw new IllegalArgumentException("ImmutableReference cannot hold null values");
        return ref.compareAndSet(null, v);
    }

    public void set(V v)
    {
        if (!trySet(v)) throw new IllegalStateException("Trying to modify an immutable reference");
    }

    public V get()
    {
        V v = ref.get();
        if (v == null)
            throw new IllegalStateException("Not initialized immutable reference.");
        return v;
    }

    public V tryGet()
    {
        return ref.get();
    }
}
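Used from the MyClass in the question, that would look something like this (a sketch, mirroring the original startup pattern):

private final ImmutableReference<SomeObject> someObject = new ImmutableReference<SomeObject>();

public void startup() {
    someObject.set(new SomeObject(/* arguments from the environment */));
}

public void sometimeLater() {
    someObject.get().doSomething();
}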
First question: why can't you just make startup a private method, called in the constructor? Then the field can be final. This would ensure thread safety after the constructor is called, as the object is invisible before, and only read after, the constructor returns. Or refactor your class structure so that the startup method can create the MyClass object as part of its own construction. In many ways this particular case looks like a case of poor structure, where you really just want to make the field final and immutable.
The easy approach: if the class is immutable and is only read after it is created, then wrap it in an ImmutableList from Guava. You can also make your own immutable wrapper which defensively copies when asked to return the reference, so a client cannot change the reference. If it is immutable internally, then no further synchronization is needed and unsynchronized reads are permissible. You can set your wrapper to defensively copy on request, so even attempts to write to it fail cleanly (they just don't do anything). You may need a memory barrier, or you may be able to do lazy initialisation, although note that lazy initialisation may require further synchronization, as you may get several unsynchronized read requests while the object is being constructed.
The slightly more involved approach would be to use an enumeration. Since enum constants are guaranteed singletons, as soon as the enumeration is created it is fixed forever. You still have to make sure that the object is internally immutable, but this does guarantee its singleton status without much effort.
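A minimal sketch of the enum approach (note the catch for this question: the enum constant is built on first use of the enum, so the construction arguments must already be obtainable at that point rather than handed in through startup()):

public enum SomeObjectHolder {
    INSTANCE;

    private final SomeObject value;

    SomeObjectHolder() {
        // Runs exactly once, when the enum class is initialized; the JVM guarantees
        // both the singleton property and safe publication of the final field.
        value = new SomeObject(/* parameters gathered here, not from startup() */);
    }

    public SomeObject get() {
        return value;
    }
}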
The following class could answer your question. Some thread safety is achieved by using a volatile intermediate variable in conjunction with a final value keeper in the provided generic. You may consider increasing it further by using a synchronized setter/getter. Hope it helps.
https://stackoverflow.com/a/38290652/6519864
For a travel booking web application, where there are 100 concurrent users logged in,
should ticket booking and generating an "E-Ticket Number" be implemented by a "synchronized" or a "static synchronized" method?
Well, are you aware of the difference between a static method and an instance method in general?
The only difference that synchronized makes is that before the VM starts running that method, it has to acquire a monitor. For an instance method, the lock acquired is the one associated with the object you're calling the method on. For a static method, the lock acquired is associated with the type itself - so no other threads will be able to call any other synchronized static methods at the same time.
In other words, this:
class Test
{
    static synchronized void Foo() { ... }

    synchronized void Bar() { ... }
}
is roughly equivalent to:
class Test
{
    static void Foo()
    {
        synchronized (Test.class)
        {
            ...
        }
    }

    void Bar()
    {
        synchronized (this)
        {
            ...
        }
    }
}
Generally I tend not to use synchronized methods at all - I prefer to explicitly synchronize on a private lock reference:
private final Object lock = new Object();

...

void Bar()
{
    synchronized (lock)
    {
        ...
    }
}
You haven't provided nearly enough information to determine whether your method should be a static or instance method, or whether it should be synchronized at all. Multithreading is a complex issue - I strongly suggest that you read up on it (through books, tutorials etc).
Jon's answer covers the difference hinted at in your question title.
However, I would say that neither should be used for generating a ticket number. On the assumption that these are being stored in a database, somewhere - the database should be responsible for generating the number when you insert the new record (presumably by an autoincrementing primary key, or something similar).
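With plain JDBC, that might look roughly like the fragment below (a sketch assuming a hypothetical tickets table with an auto-increment primary key, an already-open connection, and the usual java.sql imports; the database serializes the number generation, so no locking is needed in application code):

long bookTicket(Connection connection, int customerId, String details) throws SQLException {
    try (PreparedStatement ps = connection.prepareStatement(
            "INSERT INTO tickets (customer_id, details) VALUES (?, ?)",
            Statement.RETURN_GENERATED_KEYS)) {
        ps.setInt(1, customerId);
        ps.setString(2, details);
        ps.executeUpdate();
        try (ResultSet keys = ps.getGeneratedKeys()) {
            keys.next();
            return keys.getLong(1);   // the database-assigned e-ticket number
        }
    }
}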
Failing that, if you must generate the number within Java code, I suspect that the synchronisation overhead might be quite noticeable with 100 concurrent users. If you are running on Java 1.5 or later, I'd use a java.util.concurrent.AtomicInteger to get the ticket number, which you can use like this:
private static final AtomicInteger ticketSequence;

static
{
    final int greatestTicket = getHighestExistingTicketNumber(); // May not be needed if you can start from zero each time
    ticketSequence = new AtomicInteger(greatestTicket + 1);
}

public /*static*/ int getNextTicketNumber()
{
    return ticketSequence.incrementAndGet();
}
This gives you the concurrent global uniqueness you need in a much more efficient fashion than synchronizing every time you need an integer.