Is LinkedTransferQueue thread safe? - java

The javadoc for ConcurrentLinkedQueue clearly states that it is an unbounded thread-safe queue, whereas the javadoc for LinkedTransferQueue only mentions the unbounded nature of the queue and says nothing about thread safety.
I am not referring to the transfer method.
The producer calls the add method and the consumer calls the poll method.

In short the answer is yes: the class j.u.c.LinkedTransferQueue is thread safe. Since the class is a thread-safe collection, you can call any of its methods from any thread safely, including add and poll.
The following words from the javadoc should be considered proof of that:
Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a LinkedTransferQueue happen-before actions subsequent to the access or removal of that element from the LinkedTransferQueue in another thread.
Also, j.u.c.BlockingQueue doesn't make much sense in a single-threaded environment. You could use it there, but more lightweight options exist, such as the plain j.u.Queue interface. The main application area of BlockingQueue is producer-consumer designs, where the consumer is able to block waiting for the next element, which can only arrive from another thread because the current one is blocked. Since j.u.c.TransferQueue extends BlockingQueue, its implementations are also supposed to be thread safe.
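To make the question's scenario concrete, here is a minimal sketch (class and element names are my own, not from the question) of one thread calling add and another calling poll on the same LinkedTransferQueue:

```java
import java.util.concurrent.LinkedTransferQueue;

public class TransferQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<Integer> queue = new LinkedTransferQueue<>();

        // Producer thread: add() never blocks because the queue is unbounded.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                queue.add(i);
            }
        });

        // Consumer thread: poll() returns null when the queue is empty,
        // so keep polling until all 100 elements have been drained.
        final int[] count = {0};
        Thread consumer = new Thread(() -> {
            while (count[0] < 100) {
                Integer item = queue.poll();
                if (item != null) {
                    count[0]++;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();

        System.out.println("drained " + count[0] + " elements, queue empty: " + queue.isEmpty());
    }
}
```

No extra locking is needed around add or poll; the happens-before guarantee quoted above is what makes the consumer's reads of each element safe.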

Related

Java's BlockingQueue Take Policy

In a multithreaded environment, what is the policy regarding the take() method (removing an Object) for the various implementations of Java's BlockingQueue (for example LinkedBlockingQueue)?
Does the thread that calls take() first get the first available object? That is, is it first come, first served, is it random, or is there some other policy describing how the queue is accessed by multiple threads? I cannot seem to find anything in the docs.
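For what it's worth, most BlockingQueue implementations deliberately leave the wake-up order of blocked takers unspecified; ArrayBlockingQueue is the documented exception, with an optional fairness flag that serves blocked threads in FIFO order. A minimal sketch (names are my own):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FairTakeDemo {
    public static void main(String[] args) throws InterruptedException {
        // The second constructor argument requests a fair ordering policy:
        // threads blocked in take() (or put()) are served first come, first served.
        BlockingQueue<String> fairQueue = new ArrayBlockingQueue<>(10, true);

        fairQueue.put("first");
        String taken = fairQueue.take();   // no waiters here, so it returns immediately
        System.out.println(taken);
    }
}
```

Fairness generally costs throughput, which is why it is off by default.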

Synchronizing the run() method of a thread object

Caution: Don't synchronize a thread object's run() method because
situations arise where multiple threads need to execute run(). Because
those threads attempt to synchronize on the same object, only one
thread at a time can execute run(). As a result, each thread must wait
for the previous thread to terminate before it can access run().
From : http://www.javaworld.com/article/2074318/java-concurrency/java-101--understanding-java-threads--part-2--thread-synchronization.html?page=2
How can different threads execute the run() of the same Thread object?
Some general advice about synchronizing that seems relevant here: don't put synchronization in your threads or runnables; put it in the data structures that the threads are accessing. If you guard the data structure with locks, then you are assured that no thread can access it in an unsafe way, because the data structure enforces safe access. If you leave synchronization up to the threads, then someone can write a new thread that doesn't do the appropriate locking and possibly corrupt the data structures being accessed. Remember, the point of synchronization is to protect data from unsafe concurrent modification.
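A minimal sketch of that advice (the Counter class is hypothetical): the data structure owns the lock, so callers need no synchronization code of their own:

```java
// The data structure guards its own state, so no caller can corrupt it
// by forgetting to synchronize.
public class Counter {
    private long count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized long get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();   // the threads carry no locking code of their own
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());   // always 20000, never a lost update
    }
}
```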
(If you look at the JavaWorld article, listings 2 and 3 illustrate this; listing 3 is noticeably saner than listing 2 in that the FinTrans data protects its own integrity, whereas in listing 2 the threads do the synchronizing. The author argues for listing 3 as having better granularity for locking, and doesn't address the point about having data structures protect their own integrity. Maybe that's because he's whipping out toy examples and isn't taking any of them too seriously; after all, at the top of the page he's using a string as a lock, which is a pretty bad idea.)
Also, the Java API documentation discourages you from locking on thread objects. The Java threading implementation itself locks on threads, for instance when joining a thread, so any locking you do may get tangled up with what the threading API does: if you lock on threads, any notify calls you make may get consumed by other threads trying to join. You may also see some strange behavior; for instance, when a thread terminates, it sends a notification to anything waiting on its monitor. If you make the run method of a Thread subclass synchronized, then the running thread has to acquire its own lock. If another thread then wants to join on it (unless the subclassed Thread gives up the lock by waiting), joining becomes impossible while run executes, since joining involves waiting, which requires acquiring the lock on the Thread object. So instead of briefly acquiring the lock and settling down to wait, the joining thread is likely to sit blocked contending for the lock until the to-be-joined-on thread terminates.
Another point: it's better to implement your tasks as Runnables rather than as Thread subclasses. I cannot think of a situation where it would be preferable to override the run method of a Thread rather than implement Runnable, unless I was trying to create a confusing situation on purpose, or was typing out a quick-and-dirty demo. (I really wonder whether the reason Thread implements Runnable is to make it more convenient to write quick-and-dirty demo code.) Making your task a Runnable makes it clear that you have some logic that isn't tied to being run as a new thread but can instead be handed off to an executor, which is in charge of how that task is executed. (You can do this with a Thread object, but it's confusing.) So another reason not to make a synchronized run method on a Thread object is that you shouldn't be subclassing Thread to override run in the first place (and in general it's usually preferable to use executors with Runnables over spinning up your own threads).
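A quick sketch of the Runnable-plus-executor style described above (class and pool-size choices are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RunnableVsThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // The task is a plain Runnable: it says nothing about *how* it runs.
        Runnable task = () ->
                System.out.println("running in " + Thread.currentThread().getName());

        // The executor decides the threading policy; the task doesn't care.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(task);
        pool.execute(task);

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The same Runnable could just as easily be passed to `new Thread(task)` or a single-threaded executor; nothing in the task itself would change.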

Synchronization of Queues

In my program I have a "Sender" and "Receiver" Thread, both of which act on one queue.
I have defined my queue in the class as:
static Queue<my_class> queue = new LinkedList<my_class>();
However, I think I am encountering problems because my Queues aren't synchronized. In my "Receiver" thread, I sometimes have to remove items from the Queue which will affect how the "Sender" Thread operates.
I was reading about BlockingQueues and was wondering whether that was what I need to use in my situation? If so, how do I change my declaration? Do I also need to declare the BlockingQueue in both the "Sender" and "Receiver" threads?
Would the BlockingQueue ensure that only one thread accessed the queue at any given time?
Sorry, I am quite new to the concept of synchronization and I find it quite confusing.
Thank you for your help.
The main advantage is that a BlockingQueue provides a correct, thread-safe implementation. This runtime implementation is developed, reviewed, and maintained by concurrency experts.
A blocking queue is a queue that blocks when you try to dequeue from it and the queue is empty, or if you try to enqueue items to it and the queue is already full. A thread trying to dequeue from an empty queue is blocked until some other thread inserts an item into the queue. A thread trying to enqueue an item in a full queue is blocked until some other thread makes space in the queue, either by dequeuing one or more items or clearing the queue completely.
You will need to declare a BlockingQueue in the receiver so that it can use the take method; the sender can still use a Queue declaration with its offer method, but you'll need to declare a BlockingQueue if you want to use the offer(E e, long timeout, TimeUnit unit) method.
Some BlockingQueue implementations let one thread add while another thread simultaneously takes (LinkedBlockingQueue, for example, uses separate locks for the two ends of the queue, and LinkedTransferQueue is largely lock-free; lock-free implementations are usually more scalable than lock-based ones). Regardless of the implementation, a BlockingQueue is thread-safe.
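Putting the answer together, a minimal sketch of the declaration change and the two threads (String stands in for the asker's my_class; other names are my own):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SenderReceiverDemo {
    // One shared queue; the field type is BlockingQueue so that
    // the receiver can call take().
    static BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        Thread sender = new Thread(() -> queue.offer("hello"));

        Thread receiver = new Thread(() -> {
            try {
                // take() blocks until the sender has put something in.
                String msg = queue.take();
                System.out.println("received: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        receiver.start();   // starting the receiver first is fine: it just waits
        sender.start();
        sender.join();
        receiver.join();
    }
}
```

There is only one queue object; both threads simply hold a reference to it. No additional synchronization around offer/take is required.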

How to access the underlying queue of a ThreadPoolExecutor in a thread-safe way

The getQueue() method provides access to the underlying blocking queue in the ThreadPoolExecutor, but this does not seem to be safe.
A traversal over the queue returned by this function might miss updates made to the queue by the ThreadPoolExecutor.
"Method getQueue() allows access to the work queue for purposes of monitoring and debugging. Use of this method for any other purpose is strongly discouraged."
What would you do if you wanted to traverse the workQueue used by the ThreadPoolExecutor? Or is there an alternate approach?
This is a continuation of..
Choosing a data structure for a variant of producer consumer problem
Now, I am trying the multiple-producer multiple-consumer case, but I want to use some existing thread pool, since I don't want to manage the thread pool myself. I also want a callback when the ThreadPoolExecutor has finished executing a task, along with the ability to examine the "in-progress transactions" data structure in a thread-safe way.
You can override the beforeExecute and afterExecute methods to let you know that a task has started and finished. You can override execute() to know when a task is added.
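A minimal sketch of the beforeExecute/afterExecute overrides described above (class and counter names are my own):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MonitoredPool extends ThreadPoolExecutor {
    final AtomicInteger started = new AtomicInteger();
    final AtomicInteger finished = new AtomicInteger();

    MonitoredPool(int threads) {
        super(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        started.incrementAndGet();      // the task is about to run on thread t
        super.beforeExecute(t, r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        finished.incrementAndGet();     // the task completed (t is non-null if it threw)
    }

    public static void main(String[] args) throws InterruptedException {
        MonitoredPool pool = new MonitoredPool(2);
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> { });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(pool.started.get() + " started, " + pool.finished.get() + " finished");
    }
}
```

These hooks run on the worker thread, so anything they touch (here, the counters) must itself be thread-safe.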
The problem you have is that the Queue is not designed to be queried, and a task can be consumed before you see it. One way around this is to create your own implementation of a Queue (perhaps overriding/wrapping a ConcurrentLinkedQueue).
BTW: The queue is thread-safe; however, it is not guaranteed that you will see every entry.
A ConcurrentLinkedQueue.iterator() is documented as
Returns an iterator over the elements in this queue in proper sequence. The returned iterator is a "weakly consistent" iterator that will never throw ConcurrentModificationException, and guarantees to traverse elements as they existed upon construction of the iterator, and may (but is not guaranteed to) reflect any modifications subsequent to construction.
If you wish to copy the items in the queue and ensure that what you have in the queue has not been executed, you might try this:
a) Introduce the ability to pause and resume execution. See: http://download.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/ThreadPoolExecutor.html
b) First pause the executor, then copy the queue, then resume the executor.
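A sketch of the pause/resume idea, adapted from the PausableThreadPoolExecutor example in the ThreadPoolExecutor javadoc (class name and constructor choices are my own):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// While paused, workers block in beforeExecute before starting any new task,
// so a pause/copy/resume sequence sees a stable work queue.
public class PausablePool extends ThreadPoolExecutor {
    private boolean isPaused;
    private final ReentrantLock pauseLock = new ReentrantLock();
    private final Condition unpaused = pauseLock.newCondition();

    public PausablePool(int threads) {
        super(threads, threads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        pauseLock.lock();
        try {
            while (isPaused) unpaused.await();   // hold the worker here while paused
        } catch (InterruptedException ie) {
            t.interrupt();
        } finally {
            pauseLock.unlock();
        }
    }

    public void pause() {
        pauseLock.lock();
        try {
            isPaused = true;
        } finally {
            pauseLock.unlock();
        }
    }

    public void resume() {
        pauseLock.lock();
        try {
            isPaused = false;
            unpaused.signalAll();
        } finally {
            pauseLock.unlock();
        }
    }
}
```

With the pool paused, `new ArrayList<>(pool.getQueue())` gives a snapshot that won't race with workers picking up new tasks; call resume() afterwards. Note that a task a worker had already dequeued before the pause is held mid-flight rather than appearing in the queue.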
And then I have my own question. The problem I see is that when you execute your "Runnable", it is not the "Runnable" itself that is placed in the queue but a FutureTask "wrapper", and I cannot find any way to determine which one of my runnables I'm looking at. So grabbing and examining the queue is pretty useless. Does anybody know what I missed there?
If you are following Jon Skeet's advice in your accepted answer from your previous question, then you'll be controlling access to your queues via locks. If you acquire a lock on the in-progress queue then you can guarantee that a traversal will not miss any items in it.
The problem with this of course is that while you are doing the traverse all other operations on the queue (other producers and consumers trying to access it) will block, which could have a pretty dire effect on performance.

Do I need extra synchronization when using a BlockingQueue?

I have a simple bean, @Entity Message.java, that has some normal properties. The life-cycle of that object is as follows:
A Message is instantiated on thread A and then enqueued into a BlockingQueue.
Another thread from a pool obtains that object, does some work with it, and changes the state of the Message; after that, the object is put back into the BlockingQueue. This step is repeated until a condition makes it stop. Each read/write of the object is potentially from a different thread, but with the guarantee that only one thread at a time will be reading/writing to it.
Given those circumstances, do I need to synchronize the getters/setters? Perhaps make the properties volatile? Or can I leave them without synchronization?
Thanks, and I hope I have clarified what I'm asking here.
No, you do not need to synchronize access to the object properties, or even use volatile on the member variables.
All actions performed by a thread before it queues an object on a BlockingQueue "happen-before" the actions performed after that object is dequeued. That means that any changes made by the first thread are visible to the second. This is common behavior for concurrent collections. See the last paragraph of the BlockingQueue class documentation:
Memory consistency effects: As with other concurrent collections, actions in a thread prior to placing an object into a BlockingQueue happen-before actions subsequent to the access or removal of that element from the BlockingQueue in another thread.
As long as the first thread doesn't make any modifications after queueing the object, it will be safe.
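A minimal sketch of that guarantee (class and field names are my own): the Message fields are neither synchronized nor volatile, yet the consumer is guaranteed to see the producer's write:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class HandoffDemo {
    // A plain mutable object: no synchronized, no volatile.
    static class Message {
        int state;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> queue = new LinkedBlockingQueue<>();

        Thread producer = new Thread(() -> {
            Message m = new Message();
            m.state = 42;        // this write happens-before the offer() below...
            queue.offer(m);
        });

        producer.start();
        // ...so this take() is guaranteed to observe state == 42.
        Message received = queue.take();
        System.out.println(received.state);
        producer.join();
    }
}
```

The pattern only stays safe if the producer stops touching the object once it has been queued, per the caveat above.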
You don't need to do synchronization yourself, because the queue does it for you already.
Visibility is also guaranteed.
If you're sure that only one thread at a time will access your object, then you don't need synchronisation.
However, you can ensure that by using the synchronized keyword: each time you want to access this object and be sure that no other thread is using the same instance, wrap your code in a synchronized block:
Message myMessage = // ...
synchronized (myMessage) {
    // You're the only one to have access to this instance, do what you want
}
The synchronized block will acquire an implicit lock on the myMessage object. So, no other synchronized block will have access to the same instance until you leave this block.
It sounds like you could leave the synchronized off the methods. synchronized simply locks the object so that only a single thread can access it at a time, and you've already handled that with the blocking queue.
Making the properties volatile would ensure that each thread sees the latest value rather than a stale cached one, but it isn't strictly needed here: the queue's happens-before guarantee already makes one thread's writes visible to the next.
