Options for synchronizing access to a Set in Java

I am writing a multithreaded webcrawler, where there is one WebCrawler object which uses an ExecutorService to process WebPages and extract anchors from each page. I have a method defined in the WebCrawler class which can be called by WebPages to add extracted sublinks to the WebCrawler's Set of nextPagestoVisit, and the method currently looks like this:
public synchronized void addSublinks(Set<WebPage> sublinks) {
    this.nextPagestoVisit.addAll(sublinks);
}
Currently I am using a synchronized method. However, I am considering other possible options.
Making the Set a synchronizedSet:
public Set<WebPage> nextPagestoVisit = Collections.synchronizedSet(new HashSet<WebPage>());
Making the Set volatile:
public volatile Set<WebPage> nextPagestoVisit = new HashSet<WebPage>();
Are both of these alternatives sufficient on their own? (I am assuming that the synchronized method approach is sufficient.) Or would I have to combine them with other safety measures? If they all work, which one would be the best approach? If one or both do not work, please provide a short explanation of why (i.e. what kind of scenario would cause problems). Thanks
Edit: To be clear, my goal is to ensure that if two WebPages both try to add their sublinks at the same time, one write will not be overwritten by the other (i.e. all sublinks will successfully be added to the Set).

Making the variable that holds the set volatile will do nothing for you. volatile only affects the reference (the "pointer") to the set, not the set itself: it guarantees that writes to that reference are visible to all threads, but it provides no atomicity or mutual exclusion for operations on the Set's contents.
Making the Set a synchronizedSet does what you want. So would synchronized blocks or Semaphores, but both add more boilerplate than just using synchronizedSet and are an additional vector for bugs.
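A minimal sketch of the synchronizedSet approach, assuming the WebPage type from the question (the processPending method is illustrative). Note the Javadoc caveat that iteration still requires manual synchronization on the set:
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

private final Set<WebPage> nextPagestoVisit =
        Collections.synchronizedSet(new HashSet<WebPage>());

public void addSublinks(Set<WebPage> sublinks) {
    // addAll is a single synchronized call on the wrapper, so no extra locking needed
    nextPagestoVisit.addAll(sublinks);
}

public void processPending() {
    // Iteration is the exception: it must be guarded manually, per the Javadoc
    synchronized (nextPagestoVisit) {
        for (WebPage page : nextPagestoVisit) {
            // process page
        }
    }
}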

I am not sure that you know what the volatile keyword actually does. It does not ensure mutual exclusion. Quoting from here:
"Using volatile, on the other hand, forces all accesses (read or write) to the volatile variable to occur to main memory, effectively keeping the volatile variable out of CPU caches. This can be useful for some actions where it is simply required that visibility of the variable be correct and order of accesses is not important."
You do have however several alternatives:
Using a synchronized block
synchronized (lockObject) {
    // synchronized code
}
Using alternatives like semaphores
Semaphore semaphore = new Semaphore(1);
semaphore.acquire();
// ...critical section...
semaphore.release();
Again, note that you said you are trying to achieve synchronized access. If all you need is to ensure that reads always see the freshest value, then volatile is a fairly simple solution.

Related

Emulating a memory barrier in Java to get rid of volatile reads

Assume I have a field that's accessed concurrently and it's read many times and seldom written to.
public Object myRef = new Object();
Let's say a Thread T1 will be setting myRef to another value, once a minute, while N other Threads will be reading myRef billions of times continuously and concurrently. I only need that myRef is eventually visible to all threads.
A simple solution would be to use an AtomicReference or simply volatile like this:
public volatile Object myRef = new Object();
However, afaik volatile reads do incur a performance cost. I know it's minuscule; this is more something I wonder about than something I actually need. So let's not be concerned with performance and assume this is a purely theoretical question.
So the question boils down to: is there a way to safely bypass volatile reads for references that are only seldom written to, by doing something at the write site?
After some reading, it looks like memory barriers could be what I need. So if a construct like this existed, my problem would be solved:
Write
Invoke Barrier (sync)
Everything is synced and all threads will see the new value. (without a permanent cost at read sites, it can be stale or incur a one time cost as the caches are synced, but after that it's all back to regular field gets till next write).
Is there such a construct in Java, or in general? At this point I can't help but think if something like this existed, it would have been already incorporated into the atomic packages by the much smarter people maintaining those. (Disproportionately frequent read vs write might not have been a case to care for?) So maybe there is something wrong in my thinking and such a construct is not possible at all?
I have seen some code samples use volatile for a similar purpose, exploiting its happens-before contract. There is a separate sync field, e.g.:
public Object myRef = new Object();
public volatile int sync = 0;
and at writing thread/site:
myRef = new Object();
sync += 1; // volatile write to emulate barrier
I am not sure this works, and some argue it works on the x86 architecture only. After reading the related sections of the JMM, I think it's only guaranteed to work if that volatile write is coupled with a volatile read from the threads that need to see the new value of myRef. (So it doesn't get rid of the volatile read.)
Returning to my original question; is this possible at all? Is it possible in Java? Is it possible in one of the new APIs in Java 9 VarHandles?
So basically you want the semantics of a volatile without the runtime cost.
I don't think it is possible.
The problem is that the runtime cost of volatile is due to the instructions that implement the memory barriers in the writer and the reader code. If you "optimize" the reader by getting rid of its memory barrier, then you are no longer guaranteed that the reader will see the "seldom written" new value when it is actually written.
FWIW, some versions of the sun.misc.Unsafe class provide explicit loadFence, storeFence and fullFence methods, but I don't think that using them will give any performance benefit over using a volatile.
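Since Java 9 the same fences are exposed through VarHandle; a minimal sketch, and as noted it is unlikely to beat a plain volatile:
import java.lang.invoke.VarHandle;

VarHandle.acquireFence(); // like the barrier that follows a volatile read
VarHandle.releaseFence(); // like the barrier that precedes a volatile write
VarHandle.fullFence();    // full two-way fence, including StoreLoad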
Hypothetically ...
what you want is for one processor in a multi-processor system to be able to tell all of the other processors:
"Hey! Whatever you are doing, invalidate your memory cache for address XYZ, and do it now."
Unfortunately, modern ISAs don't support this.
In practice, each processor controls its own cache.
Not quite sure if this is correct but I might solve this using a queue.
Create a class that wraps an ArrayBlockingQueue attribute. The class has an update method and a read method. The update method posts the new value onto the queue and removes all values except the last value. The read method returns the result of a peek operation on the queue, i.e. read but do not remove. Threads peeking the element at the front of the queue do so unimpeded. Threads updating the queue do so cleanly.
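A rough sketch of that idea; the class and method names are mine, not the poster's. Writers are serialized so the post-then-drain step stays atomic:
import java.util.concurrent.ArrayBlockingQueue;

class LatestValueHolder<T> {
    private final ArrayBlockingQueue<T> queue = new ArrayBlockingQueue<>(2);

    // Post the new value, then remove all values except the last one.
    synchronized void update(T value) {
        queue.offer(value);
        while (queue.size() > 1) {
            queue.poll();
        }
    }

    // Peek: read but do not remove; may return null before the first update.
    T read() {
        return queue.peek();
    }
}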
You can use ReentrantReadWriteLock, which is designed for the few-writes-many-reads scenario.
You can use StampedLock, which is designed for the same few-writes-many-reads case, but reads can also be attempted optimistically. Example:
private StampedLock lock = new StampedLock();

public void modify() { // write method
    long stamp = lock.writeLock();
    try {
        modifyStateHere();
    } finally {
        lock.unlockWrite(stamp);
    }
}

public Object read() { // read method
    long stamp = lock.tryOptimisticRead();
    Object result = doRead(); // try without lock, method should be fast
    if (!lock.validate(stamp)) { // optimistic read failed
        stamp = lock.readLock(); // acquire read lock and repeat read
        try {
            result = doRead();
        } finally {
            lock.unlockRead(stamp);
        }
    }
    return result;
}
Make your state immutable and allow controlled modifications only by cloning the existing object and altering only necessary properties via constructor. Once the new state is constructed, you assign it to the reference being read by the many reading threads. This way reading threads incur zero cost.
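A minimal sketch of that immutable-swap approach; the State class and its fields are illustrative, and the reference is kept volatile here so each swap is safely published (the zero-cost claim assumes readers can tolerate plain reads):
final class State {
    final int count;
    final String label;

    State(int count, String label) {
        this.count = count;
        this.label = label;
    }
}

class StateHolder {
    private volatile State state = new State(0, "init");

    State read() { // readers just load the reference
        return state;
    }

    synchronized void updateLabel(String newLabel) { // writers serialized
        State old = state;
        state = new State(old.count, newLabel); // clone, altering one property
    }
}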
X86 provides TSO; you get [LoadLoad][LoadStore][StoreStore] fences for free.
A volatile read requires acquire semantics:
r1 = Y
[LoadLoad]
[LoadStore]
...
And as you can see, this is already provided by the X86 for free.
In your case most of the calls are a read and the cacheline will already be in the local cache.
There is a price to pay in lost compiler-level optimizations, but on the hardware level a volatile read is just as expensive as a regular read.
On the other hand, the volatile write is more expensive because it requires a [StoreLoad] to guarantee sequential consistency (in the JVM this is done using a lock addl $0, (%rsp) or an MFENCE). Since writes are very seldom in your situation, this isn't an issue.
I would be careful with optimizations on this level because it is very easy to make the code more complex than is actually needed. Best to guide your development efforts by some benchmarks e.g. using JMH and preferably test it on real hardware. Also there could be other nasty creatures hidden like false sharing.

Java: when (and for how long) can a thread cache the value of a non-volatile variable?

From this post: http://www.javamex.com/tutorials/synchronization_volatile_typical_use.shtml
public class StoppableTask extends Thread {
    private volatile boolean pleaseStop;

    public void run() {
        while (!pleaseStop) {
            // do some stuff...
        }
    }

    public void tellMeToStop() {
        pleaseStop = true;
    }
}
If the variable were not declared volatile (and without other
synchronization), then it would be legal for the thread running the
loop to cache the value of the variable at the start of the loop and
never read it again.
In Java 5 or later:
is the last paragraph correct?
So, exactly at what moment can a thread cache the value of the pleaseStop variable (and for how long)? Just before calling one of StoppableTask's methods (run, tellMeToStop) on the object? (And must the thread update the variable when exiting the method, at the latest?)
can you point me to a documentation/tutorial reference about this (Java 5 or later)?
Update: here it is my compilation of answers posted on this question:
Without using volatile nor synchronized, there are actually two problems with the above program:
1- Threads can cache the variable pleaseStop from the very first moment the thread starts and never update it again, so the loop would keep going forever. This can be solved by using either volatile or synchronized. This thread-caching mechanism does not exist in C.
2- The Java compiler can optimise the code, replacing while (!pleaseStop) {...} with if (!pleaseStop) { while (true) {...} }, so the loop would keep going forever. Again, this can be solved by using either volatile or synchronized. This compiler optimisation also exists in C.
Some more info:
https://www.ibm.com/developerworks/library/j-5things15/
When can it cache?
As for your question about "when can it cache" the value, the answer to that is "always". To understand what that means, read on. Processors have storage called caches, which make it possible for the running thread to access values in memory by reading from the cache rather than from memory. The running thread can also write to this cache as if it were writing the value to memory. Thus, so long as the thread is running, it could be using the cache to store the data it's using. Something has to explicitly happen to flush the value from the cache to memory. For a single-threaded process, this is all well and dandy, but if you have another thread, it might be trying to read the data from memory while the other thread is plugging away reading and writing it to the processor cache without flushing to memory.
How long can it cache?
As for the "for how long" part- the answer is unfortunately forever unless you do something about it. Synchronizing on the data in question is one way to force a flush from the cache so that all threads see the updates to the value. For more detail about ways to cause a flush, see the next section.
Where's some Documentation?
As for the "where's the documentation" question, a good place to start is here. For specifically how you can force a flush, java refers to this by discussing whether one action (such as a data write) appears to "happen before" another (like a data read). For more about this, see here.
What about volatile?
volatile in essence prevents the type of processor caching described above. This ensures that all writes to a variable are visible from other threads. To learn more, the tutorial you linked to in your post seems like a good start.
The relevant documentation is on the volatile keyword (Java Language Specification, Chapter 8.3.1.4) here and the Java memory model (Java Language Specification, Chapter 17.4) here
Declaring the field volatile ensures that there is some synchronization of actions by other threads that might change its value. Without declaring volatile, Java can reorder operations taken on a field by different threads.
As the spec says (see 8.3.1.4), for fields declared volatile, "accesses ... occur exactly as many times, and in exactly the same order, as they appear to occur during execution of the program text by each thread..."
So the caching you speak of can happen at any time if the field is not volatile. But the Java memory model enforces consistent access to a field that is declared volatile. No such enforcement takes place otherwise (unless the threads are synchronized).
The official documentation is in section 17 of the Java Language Specification, especially 17.4 Memory Model.
The correct viewpoint is to start by assuming multi-threaded code won't work, and try to force it to work whether it likes it or not. Without the volatile declaration, or similar, there would be nothing forcing the read of pleaseStop to ever see the write if it happens in another thread.
I agree with the Java Concurrency in Practice recommendation. It is a good distillation of the implications of the JLS material for practical Java programming.

Synchronize file object

From what I know and have researched, the synchronized keyword in Java lets you synchronize a method or code block to handle multi-threaded access. If I want to lock a file for writing purposes in a multi-threaded environment, I should use the classes in the Java NIO package to get the best results. Yesterday, I came across a question about handling a shared servlet for file I/O operations, and BalusC's comments helped toward the solution, but the code in this answer confuses me. I'm not asking the community to "burn that post" or "let's downvote him" (note: I haven't downvoted it or anything, and I have nothing against the answer); I'm asking for an explanation of whether the code fragment can be considered good practice.
private static File theFile = new File("theonetoopen.txt");

private void someImportantIOMethod(Object stuff) {
    /*
     * This is the line that confuses me. You can use any object as a lock, but
     * is it good to use a File object for this purpose?
     */
    synchronized (theFile) {
        // Your file output writing code here.
    }
}
The problem is not about locking on a File object - you can lock on any object and it does not really matter (to some extent).
What strikes me is that you are using a non-final monitor, so if another part of your code reassigns theFile (e.g. theFile = new File("other.txt");), the next thread that comes along will lock on a different object, and you no longer have any guarantee that your code won't be executed by two threads simultaneously.
Had theFile been final, the code would be ok, although it is preferable to use private monitors, just to make sure there is not another piece of code that uses it for other locking purposes.
If you only need to lock the file within a single application then it's OK (assuming final is added).
Note that the solution won't work if you load the class more than once using different class loaders. For example, if you have a web application that is deployed twice in the same web server, each instance of the application will have its own lock object.
As you mention, if you want the locking to be robust and have the file locked from other programs too, you should use FileLock (see the docs, on some systems it is not guaranteed that all programs must respect the lock).
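A minimal FileLock sketch for inter-process locking, reusing the file name from the question; note the documented caveat that on some systems the lock is advisory only:
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

try (RandomAccessFile raf = new RandomAccessFile("theonetoopen.txt", "rw");
     FileChannel channel = raf.getChannel();
     FileLock lock = channel.lock()) { // blocks until the OS-level lock is acquired
    // write to the file while holding the inter-process lock
} catch (IOException e) {
    // handle the failure to lock or write
}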
Had you seen final Object lock = new Object(), would you be asking?
As @assylias pointed out, the problem is that the lock is not final here.
Every object in Java can act as a lock for synchronization. They are called intrinsic locks. Only one thread at a time can execute a block of code guarded by a given lock.
More on that: http://docs.oracle.com/javase/tutorial/essential/concurrency/locksync.html
Using the synchronized keyword on a whole method can have a performance impact on your application; that's why you sometimes use a synchronized block instead.
You should remember that the lock reference must not change, so the best solution is to make it final.
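A sketch of the private final monitor recommended above; the field name is illustrative:
private static final Object fileLock = new Object(); // dedicated, final, private lock

private void someImportantIOMethod(Object stuff) {
    synchronized (fileLock) {
        // Your file output writing code here.
    }
}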

In Java can I depend on reference assignment being atomic to implement copy on write?

If I have an unsynchronized java collection in a multithreaded environment, and I don't want to force readers of the collection to synchronize[1], is a solution where I synchronize the writers and use the atomicity of reference assignment feasible? Something like:
private Collection global = new HashSet(); // start threading after this

void allUpdatesGoThroughHere(Object exampleOperand) {
    // My hypothesis is that this prevents operations in the block being re-ordered
    synchronized (global) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        // Given my hypothesis, we should have a fully constructed object here. So a
        // reader will either get the old or the new Collection, but never an
        // inconsistent one.
        global = copy;
    }
}
// Do multithreaded reads here. All reads are done through a reference copy like:
// Collection copy = global;
// for (Object elm: copy) {...
// so the global reference being updated half way through should have no impact
Rolling your own solution seems to often fail in these types of situations, so I'd be interested in knowing other patterns, collections, or libraries I could use to prevent object creation and blocking for my data consumers.
[1] The reasons being a large proportion of time spent in reads compared to writes, combined with the risk of introducing deadlocks.
Edit: A lot of good information in several of the answers and comments, some important points:
A bug was present in the code I posted. Synchronizing on global (a badly named variable) can fail to protect the synchronized block after a swap.
You could fix this by synchronizing on something stable (e.g. moving the synchronized keyword to the method declaration), but there may be other bugs. A safer and more maintainable solution is to use something from java.util.concurrent.
There is no "eventual consistency guarantee" in the code I posted, one way to make sure that readers do get to see the updates by writers is to use the volatile keyword.
On reflection, the general problem that motivated this question was trying to implement lock-free reads with locked writes in Java; however, my (solved) problem was with a collection, which may be unnecessarily confusing for future readers. So in case it is not obvious: the code I posted works by allowing one writer at a time to perform edits to "some object" that is being read, unprotected, by multiple reader threads. Commits of the edit are done through an atomic operation, so readers can only get the pre-edit or post-edit "object". When/if a reader thread gets the update, it cannot occur in the middle of a read, as the read is occurring on the old copy of the "object". A simple solution that had probably been discovered, and proved to be broken in some way, prior to the availability of better concurrency support in Java.
Rather than trying to roll your own solution, why not use a ConcurrentHashMap as your set and just set all the values to some standard value? (A constant like Boolean.TRUE would work well.)
I think this implementation works well with the many-readers-few-writers scenario. There's even a constructor that lets you set the expected "concurrency level".
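A quick sketch of the map-as-set idea with an illustrative element type; keys are the set members and Boolean.TRUE is a dummy value:
import java.util.concurrent.ConcurrentHashMap;

ConcurrentHashMap<String, Boolean> setLikeMap = new ConcurrentHashMap<>();
setLikeMap.put("someElement", Boolean.TRUE);             // "add"
boolean present = setLikeMap.containsKey("someElement"); // "contains"
setLikeMap.remove("someElement");                        // "remove"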
Update: Veer has suggested using the Collections.newSetFromMap utility method to turn the ConcurrentHashMap into a Set. Since the method takes a Map<E,Boolean>, my guess is that it does the same thing, setting all the values to Boolean.TRUE behind the scenes.
Update: Addressing the poster's example
That is probably what I will end up going with, but I am still curious about how my minimalist solution could fail. – MilesHampson
Your minimalist solution would work just fine with a bit of tweaking. My worry is that, although it's minimal now, it might get more complicated in the future. It's hard to remember all of the conditions you assume when making something thread-safe—especially if you're coming back to the code weeks/months/years later to make a seemingly insignificant tweak. If the ConcurrentHashMap does everything you need with sufficient performance then why not use that instead? All the nasty concurrency details are encapsulated away and even 6-months-from-now you will have a hard time messing it up!
You do need at least one tweak before your current solution will work. As has already been pointed out, you should probably add the volatile modifier to global's declaration. I don't know if you have a C/C++ background, but I was very surprised when I learned that the semantics of volatile in Java are actually much more complicated than in C. If you're planning on doing a lot of concurrent programming in Java then it'd be a good idea to familiarize yourself with the basics of the Java memory model. If you don't make the reference to global a volatile reference then it's possible that no thread will ever see any changes to the value of global until they try to update it, at which point entering the synchronized block will flush the local cache and get the updated reference value.
However, even with the addition of volatile there's still a huge problem. Here's a problem scenario with two threads:
We begin with the empty set, or global={}. Threads A and B both have this value in their thread-local cached memory.
Thread A obtains the synchronized lock on global and starts the update by making a copy of global and adding the new key to the set.
While Thread A is still inside the synchronized block, Thread B reads its local value of global onto the stack and tries to enter the synchronized block. Since Thread A is currently inside the monitor Thread B blocks.
Thread A completes the update by setting the reference and exiting the monitor, resulting in global={1}.
Thread B is now able to enter the monitor and makes a copy of the global={1} set.
Thread A decides to make another update, reads in its local global reference and tries to enter the synchronized block. Since Thread B currently holds the lock on {} there is no lock on {1} and Thread A successfully enters the monitor!
Thread A also makes a copy of {1} for purposes of updating.
Now Threads A and B are both inside the synchronized block and they have identical copies of the global={1} set. This means that one of their updates will be lost! This situation is caused by the fact that you're synchronizing on an object stored in a reference that you're updating inside your synchronized block. You should always be very careful which objects you use to synchronize. You can fix this problem by adding a new variable to act as the lock:
private volatile Collection global = new HashSet(); // start threading after this
private final Object globalLock = new Object(); // final reference used for synchronization

void allUpdatesGoThroughHere(Object exampleOperand) {
    // My hypothesis is that this prevents operations in the block being re-ordered
    synchronized (globalLock) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        // Given my hypothesis, we should have a fully constructed object here. So a
        // reader will either get the old or the new Collection, but never an
        // inconsistent one.
        global = copy;
    }
}
This bug was insidious enough that none of the other answers have addressed it yet. It's these kinds of crazy concurrency details that cause me to recommend using something from the already-debugged java.util.concurrent library rather than trying to put something together yourself. I think the above solution would work—but how easy would it be to screw it up again? This would be so much easier:
private final Set<Object> global = Collections.newSetFromMap(new ConcurrentHashMap<Object,Boolean>());
Since the reference is final you don't need to worry about threads using stale references, and since the ConcurrentHashMap handles all the nasty memory model issues internally you don't have to worry about all the nasty details of monitors and memory barriers!
According to the relevant Java Tutorial,
We have already seen that an increment expression, such as c++, does not describe an atomic action. Even very simple expressions can define complex actions that can decompose into other actions. However, there are actions you can specify that are atomic:
Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double).
Reads and writes are atomic for all variables declared volatile (including long and double variables).
This is reaffirmed by §17.7 of the Java Language Specification:
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
It appears that you can indeed rely on reference access being atomic; however, recognize that this does not ensure that all readers will read an updated value for global after this write -- i.e. there is no memory ordering guarantee here.
If you use an implicit lock via synchronized on all access to global, then you can force some memory consistency here... but it might be better to use an alternative approach.
You also appear to want the collection in global to remain immutable... luckily, there is Collections.unmodifiableSet which you can use to enforce this. As an example, you should likely do something like the following...
private volatile Collection global = Collections.unmodifiableSet(new HashSet());
... that, or using AtomicReference,
private AtomicReference<Collection> global = new AtomicReference<>(Collections.unmodifiableSet(new HashSet()));
You would then use Collections.unmodifiableSet for your modified copies as well.
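A sketch of what that write path might look like, borrowing the globalLock fix from the earlier answer; names are illustrative:
private volatile Collection global = Collections.unmodifiableSet(new HashSet());
private final Object globalLock = new Object();

void allUpdatesGoThroughHere(Object exampleOperand) {
    synchronized (globalLock) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        global = Collections.unmodifiableSet(copy); // publish an unmodifiable copy
    }
}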
// ... All reads are done through a reference copy like:
// Collection copy = global;
// for (Object elm: copy) {...
// so the global reference being updated half way through should have no impact
You should know that making a copy here is redundant, as internally for (Object elm : global) creates an Iterator as follows...
final Iterator it = global.iterator();
while (it.hasNext()) {
    Object elm = it.next();
}
There is therefore no chance of switching to an entirely different value for global in the midst of reading.
All that aside, I agree with the sentiment expressed by DaoWen... is there any reason you're rolling your own data structure here when there may be an alternative available in java.util.concurrent? I figured maybe you're dealing with an older Java, since you use raw types, but it won't hurt to ask.
You can find copy-on-write collection semantics provided by CopyOnWriteArrayList, or its cousin CopyOnWriteArraySet (which implements a Set using the former).
Also suggested by DaoWen, have you considered using a ConcurrentHashMap? They guarantee that using a for loop as you've done in your example will be consistent.
Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
Internally, an Iterator is used for enhanced for over an Iterable.
You can craft a Set from this by utilizing Collections.newSetFromMap like follows:
final Set<E> safeSet = Collections.newSetFromMap(new ConcurrentHashMap<E, Boolean>());
...
/* guaranteed to reflect the state of the set at read-time */
for (final E elem : safeSet) {
    ...
}
I think your original idea was sound, and DaoWen did a good job getting the bugs out. Unless you can find something that does everything for you, it's better to understand these things than hope some magical class will do it for you. Magical classes can make your life easier and reduce the number of mistakes, but you do want to understand what they are doing.
ConcurrentSkipListSet might do a better job for you here. It could get rid of all your multithreading problems.
However, it is slower than a HashSet (usually; HashSets and SkipLists/Trees are hard to compare). If you are doing a lot of reads for every write, what you've got will be faster. More importantly, if you update more than one entry at a time, your readers could see inconsistent results. If you expect that whenever there is an entry A there is an entry B, and vice versa, the skip list could give you one without the other.
With your current solution, to the readers, the contents of the set are always internally consistent. A read can be sure there's an A for every B. It can be sure that the size() method gives the precise number of elements that will be returned by the iterator. Two iterations will return the same elements in the same order.
In other words, allUpdatesGoThroughHere and ConcurrentSkipListSet are two good solutions to two different problems.
Can you use the Collections.synchronizedSet method? From the HashSet Javadoc (http://docs.oracle.com/javase/6/docs/api/java/util/HashSet.html):
Set s = Collections.synchronizedSet(new HashSet(...));
Replace the synchronized by making global volatile and you'll be alright as far as the copy-on-write goes.
Although the assignment is atomic, in other threads it is not ordered with the writes to the object referenced. There needs to be a happens-before relationship which you get with a volatile or synchronising both reads and writes.
The problem of multiple updates happening at once is separate - use a single thread or whatever you want to do there.
If you used synchronized for both reads and writes then it'd be correct, but the performance may not be great with reads needing to hand off. A ReadWriteLock may be appropriate, but you'd still have writes blocking reads.
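A sketch of the ReadWriteLock variant, with illustrative names; many readers can hold the read lock concurrently, while writers take it exclusively:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
private Collection global = new HashSet();

Collection snapshot() {
    rwLock.readLock().lock(); // shared among readers
    try {
        return new HashSet(global);
    } finally {
        rwLock.readLock().unlock();
    }
}

void update(Object exampleOperand) {
    rwLock.writeLock().lock(); // exclusive: writes still block reads
    try {
        global.remove(exampleOperand);
    } finally {
        rwLock.writeLock().unlock();
    }
}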
Another approach to the publication issue is to use final field semantics to create an object that is (in theory) safe to be published unsafely.
Of course, there are also concurrent collections available.

How to Ensure Memory Visibility in Java when passing data across threads

I have a producer-consumer-like pattern where some threads create data and periodically pass chunks of that data to be consumed by some other threads.
Keeping the Java Memory Model in mind, how do I ensure that the data passed to the consumer thread has full 'visibility'?
I know there are data structures in java.util.concurrent like ConcurrentLinkedQueue that are built specifically for this, but I want to do this as low level as possible without utilizing those and have full transparency on what is going on under the covers to ensure the memory visibility part.
If you want "low level" then look into volatile and synchronized.
To transfer data, you need a field somewhere available to all threads. In your case it really needs to be some sort of collection to handle multiple entries. If you made the field final, referencing, say, a ConcurrentLinkedQueue, you'd pretty much be done. The field could be made public and everyone could see it, or you could make it available with a getter.
If you use an unsynchronized queue, you have more work to do, because you have to manually synchronize all access to it, which means you have to track down all usages; not easy when there's a getter method. Not only do you need to protect the queue from simultaneous access, you must make sure interdependent calls end up in the same synchronized block. For instance:
if (!queue.isEmpty()) obj = queue.remove();
If the whole thing is not synchronized, queue is perfectly capable of telling you it is not empty, then throwing a NoSuchElementException when you try to get the next element. (ConcurrentLinkedQueue's interface is specifically designed to let you do operations like this with one method call. Take a good look at it even if you don't want to use it.)
The simple solution is to wrap the queue in another object whose methods are carefully chosen and all synchronized. The wrapper class, even if it wraps a LinkedList or ArrayList, will now act (if you do it right) like CLQ, and it can be freely released to the rest of the program.
So you would have what is really a global field with an immutable (final) reference to a wrapper class, which contains a LinkedList (for example) and has synchronized methods that use the LinkedList to store and access data. The wrapper class, like CLQ, would be thread-safe.
Some variants on this might be desirable. It might make sense to combine the wrapper with some other high-level class in your program. It might also make sense to create and make available instances of nested classes: perhaps one that only adds to the queue and one that only removes from it. (You couldn't do this with CLQ.)
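A sketch of the synchronized wrapper described above; class and method names are illustrative. The point is that check-then-act pairs like isEmpty/remove collapse into a single atomic method:
import java.util.LinkedList;
import java.util.Queue;

final class SafeQueue<T> {
    private final Queue<T> queue = new LinkedList<>();

    public synchronized void put(T item) {
        queue.add(item);
    }

    // One atomic call replaces the racy isEmpty()/remove() pair;
    // returns null when empty, like ConcurrentLinkedQueue.poll().
    public synchronized T poll() {
        return queue.isEmpty() ? null : queue.remove();
    }
}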
A final note: having synchronized everything, the next step is to figure out how to unsynchronize (to keep threads from waiting too much) without breaking thread safety. Work really hard on this, and you'll end up rewriting ConcurrentLinkedQueue.
