Ada rendezvous counterpart in Java

In the Ada programming language, a rendezvous is a mechanism for inter-process synchronization and message passing. How do I implement this mechanism in Java (along with task suspension and selective wait)? I have looked at Java's remote method invocation and the Exchanger class, but I have yet to find a suitable solution.

The hardest parts to implement in Java will be selective wait and entry queuing. A blocking queue approximates a protected entry in Ada, but without a selective wait.
I do not believe there is any equivalent to the Ada select statement in Java. There is also no way to provide the equivalent of an entry queue with a programmer-selectable queuing policy. The Java wait/notify combination will activate a waiting thread, but you never know which one. The thread actually activated by a notify call is chosen under race conditions, with an effect that is apparently random. Analysis shows that every waiting thread can be expected to be activated by a notify at some point in program execution, but Java gives no guarantee about the order of thread activation, or even that a given thread will ever leave its wait state.

I'm not familiar with Ada, but a quick search on Ada rendezvous suggests you may be looking for one of the BlockingQueue implementations, possibly SynchronousQueue.
Perhaps if you describe what you want to happen when a message is passed we could help more.
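In the meantime, here is a rough sketch of how two SynchronousQueues could approximate a rendezvous; the class and method names are my own, and with multiple concurrent callers you would want a per-call reply queue so that replies cannot be mismatched:
import java.util.concurrent.SynchronousQueue;

// Hypothetical sketch: a rendezvous channel built from two SynchronousQueues.
// The caller blocks in call() until the server has taken the request and
// handed back a reply, which is roughly the Ada entry-call semantics.
class Rendezvous<Req, Rep> {
    private final SynchronousQueue<Req> requests = new SynchronousQueue<>();
    private final SynchronousQueue<Rep> replies = new SynchronousQueue<>();

    // Client side: behaves like "r := server.entry(x);"
    Rep call(Req request) throws InterruptedException {
        requests.put(request);      // blocks until the server accepts
        return replies.take();      // blocks until the server replies
    }

    // Server side: behaves like "accept entry(x) do ... end;"
    Req accept() throws InterruptedException {
        return requests.take();
    }

    void reply(Rep reply) throws InterruptedException {
        replies.put(reply);
    }
}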

You would need an event-queue type with a peekWait() method (disclaimer: this is just a sketch, not real code):
public class EventSemaphore<Event_T>
{
    public void signal(Event_T t);
    public Event_T await();    // blocking wait; named await() because Object.wait() is final
    public Event_T peekWait(); // returns the signalled value if there is one, otherwise returns null without blocking
    // atomically signal requestQ and wait on responseQ
    public static <Signal_T, Return_T> Return_T signalAndWait(EventSemaphore<Signal_T> requestQ, Signal_T sigVal, EventSemaphore<Return_T> responseQ);
}
An entry has two such events:
public class Entry<Param_T, Return_T>
{
    EventSemaphore<Param_T> request_Q = new EventSemaphore<Param_T>();
    EventSemaphore<Return_T> response_Q = new EventSemaphore<Return_T>();
}
So suppose that we have a task that looks like this:
task serverTask is
entry foo(x:integer) returns integer;
end serverTask;
task body serverTask is
...
begin
loop
select
when guardOnFoo() =>
accept foo(x) returns integer do
...
return z; -- not sure if that's the correct syntax
end foo;
may be implemented as:
public class serverTask extends Thread
{
    Entry<Integer,Integer> foo = new Entry<Integer,Integer>();

    public void run()   // Thread subclasses override run(), not execute()
    {
        while (true)
        {
            Integer t;
            if (guardOnFoo() && null != (t = foo.request_Q.peekWait()))
            {
                // an Ada select statement is essentially just a series of if (peekWait()) tests
                ...
                foo.response_Q.signal(z);
            }
        }
    }
}
So a client entry call such as
r := serverTask.foo(x);
becomes
r = signalAndWait(serverTask.foo.request_Q,x,serverTask.foo.response_Q);
Unfortunately, I don't know the exact implementation of EventSemaphore<> and peekWait() off the top of my head, but that's the general idea (at least as I understand it, and I may be wrong).
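For what it's worth, here is one minimal way such an EventSemaphore could be written with intrinsic locks; this is my own untested sketch, with the blocking wait named await() because Object.wait() is final:
import java.util.ArrayDeque;
import java.util.Deque;

// Rough sketch of an EventSemaphore: signalled values are queued in FIFO order.
class EventSemaphore<T> {
    private final Deque<T> signalled = new ArrayDeque<>();

    public synchronized void signal(T value) {
        signalled.addLast(value);
        notifyAll();                       // wake any thread blocked in await()
    }

    // Blocking wait: returns the next signalled value.
    public synchronized T await() throws InterruptedException {
        while (signalled.isEmpty()) {
            wait();
        }
        return signalled.removeFirst();
    }

    // Non-blocking probe used by the select-style loop: returns a value if one
    // has been signalled, otherwise null.
    public synchronized T peekWait() {
        return signalled.pollFirst();
    }

    // Not truly atomic, but sufficient for this sketch: the response can only
    // be produced after the server has seen the request.
    public static <S, R> R signalAndWait(EventSemaphore<S> requestQ, S sigVal,
                                         EventSemaphore<R> responseQ) throws InterruptedException {
        requestQ.signal(sigVal);
        return responseQ.await();
    }
}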

Related

Incremental Future of list extensions

I essentially have a Future<List<T>> that is fetched in batches from the server. For some clients I'd like to provide incremental results while it loads in addition to the whole collection when future is fulfilled.
Is there a common Future extension defined somewhere for this? What typical patterns/combinators exist for such futures?
I assume that given an IncrementalListFuture<T> I can easily define a map operation. What else comes to mind?
Is there a common Future extension defined somewhere for this?
I assume you are talking about incremental results from an ExecutorService. You should consider using an ExecutorCompletionService which allows you to be informed as soon as one of the Future objects is get-able.
To quote from the javadocs:
CompletionService<Result> ecs = new ExecutorCompletionService<Result>(e);
for (Callable<Result> s : solvers) {
ecs.submit(s);
}
int n = solvers.size();
for (int i = 0; i < n; ++i) {
// this waits for one of the futures to finish and provide a result
Future<Result> future = ecs.take();
Result result = future.get();
if (result != null) {
// do something with the result
}
}
Sorry. I initially misread the question and thought that you were asking about a List<Future<?>>. It may be that you could refactor your code to actually return a number of Futures so I'll leave this for posterity.
I would not pass the list back in a Future in this case. You aren't going to be able to get the result until the job finishes.
If possible, I would pass in some sort of BlockingQueue so both the caller and the thread can access it:
final BlockingQueue<T> queue = new LinkedBlockingQueue<T>();
// build out job with the queue
threadPool.submit(new SomeJob(queue));
threadPool.shutdown();
// now we can consume from the queue as it is built:
while (true) {
T result = queue.take();
// you could use some constant result object to mean that the job finished
if (result == SOME_END_OBJECT) {
break;
}
// provide intermediate results
}
You could also have some sort of SomeJob.take() method which calls through to a BlockingQueue defined inside of your job class.
// the blocking queue in this case is hidden inside your job object
T result = someJob.take();
...
Here's what I would do:
In the thread that populates the List, make it thread-safe by wrapping the list using Collections.synchronizedList
Make the list publicly available, but not modifiable, by adding a public method to the thread which returns the list wrapped by Collections.unmodifiableList
Instead of giving clients a Future<List<T>>, give them a handle to the thread, or some kind of wrapper of it, so that they can call the public method above (a rough sketch follows below).
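A rough sketch of that approach (class and method names are my own) might look like this:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the "thread-safe list with a read-only view" idea.
class BatchLoader<T> extends Thread {
    private final List<T> results =
            Collections.synchronizedList(new ArrayList<T>());

    @Override
    public void run() {
        // fetch batches from the server and append them as they arrive:
        // results.add(item);
    }

    // Clients can call this at any time to see what has loaded so far,
    // without being able to modify the underlying list. Note that a client
    // iterating this live view while the loader is still adding should guard
    // against ConcurrentModificationException, e.g. by copying it first.
    public List<T> partialResults() {
        return Collections.unmodifiableList(results);
    }
}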
Alternatively, as Gray has suggested, BlockingQueues are great for thread coordination like this. This may require more changes to your client code, however.
To answer my own question: there has been a lot of development in this area recently. Among the most used are Play iteratees (http://www.playframework.org/documentation/2.0/Iteratees) and Rx for .NET (http://msdn.microsoft.com/en-us/data/gg577609.aspx)
Instead of Future they define something like:
interface Observable<T> {
Disposable subscribe(Observer<T> observer);
}
interface Observer<T> {
void onCompleted();
void onError(Exception error);
void onNext(T value);
}
and lots of combinators.
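For a flavour of what one of those combinators could look like against the interfaces above, here is a hedged sketch of map in plain Java 8; it assumes Disposable is a simple cancellation handle:
import java.util.function.Function;

// Hypothetical sketch of a map combinator over the Observable/Observer interfaces above.
final class Observables {
    static <T, R> Observable<R> map(Observable<T> source, Function<T, R> f) {
        // subscribing to the mapped Observable subscribes to the source and
        // forwards each event, transforming the values with f
        return observer -> source.subscribe(new Observer<T>() {
            @Override public void onCompleted()            { observer.onCompleted(); }
            @Override public void onError(Exception error) { observer.onError(error); }
            @Override public void onNext(T value)          { observer.onNext(f.apply(value)); }
        });
    }
}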
As an alternative to Observables, you can take a look at Twitter's approach.
They use Spool, which is an asynchronous version of the Stream.
Basically it is a simple trait, similar to List:
trait Spool[+A] {
def head: A
/**
* The (deferred) tail of the spool. Invalid for empty spools.
*/
def tail: Future[Spool[A]]
}
that allows you to do functional stuff like map, filter and foreach on top of it.
Future is really designed to return a single (atomic) result, not for communicating intermediate results in this manner. What you will really want to do is to use multiple futures, one per batch.
We have a similar requirement where we have a bunch of things that we need to get from different remote servers, and each will come back at a different time. We don't want to wait until the last one has returned, but rather process them in the order they return. For this we created the AsyncCompleter, which takes an Iterable<Callable<T>> and returns an Iterable<T> that blocks on iteration, completely abstracting away the Future interface.
If you look at how that class is implemented, you'll see how to use a CompletionService to receive results from an Executor in the order in which they become available, if you need to build this for yourself.
Edit: I just saw that the second half of Gray's answer is similar, basically using an ExecutorCompletionService.

Java class as a Monitor

I need to write a Java program, but I would like some advice before starting on my own.
The program I will be writing has to do the following:
Simulate a shop that takes advance orders for donuts
The shop will not take further orders once 5000 donuts have been ordered
I am stuck deciding whether I should write the Java class to act as a monitor or whether I should use the Java Semaphore class instead.
Please advise me. Thanks for the help.
Any java object can work as a monitor via the wait/notify methods inherited from Object:
Object monitor = new Object();
// thread 1
synchronized(monitor) {
monitor.wait();
}
// thread 2
synchronized(monitor) {
monitor.notify();
}
Just make sure to hold the lock on the monitor object when calling these methods (don't worry about the wait, the lock is released automatically to allow other threads to acquire it). This way, you have a convenient mechanism for signalling among threads.
It seems to me like you are implementing a bounded producer-consumer queue. In this case:
The producer will keep putting items in a shared queue.
If the queue size reaches 5000, it will call wait on a shared monitor and go to sleep.
When it puts an item, it will call notify on the monitor to wake up the consumer if it's waiting.
The consumer will keep taking items from the queue.
When it takes an item, it will call notify on the monitor to wake up the producer.
If the queue size reaches 0 the consumer calls wait and goes to sleep.
For an even simpler approach, have a look at the various implementations of BlockingQueue, which provide the above features out of the box (see the sketch below)!
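As an illustration (class and variable names are my own), the put/take calls below do all of the waiting and waking described above; the queue capacity plays the role of the 5000 limit:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch: put() blocks the producer when the queue is full,
// take() blocks the consumer when it is empty.
class DonutQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<String> orders = new ArrayBlockingQueue<>(5000);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    orders.put("donut-order-" + i);   // blocks if the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println("processed " + orders.take()); // blocks if empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}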
It seems to me that the core of this exercise is updating a counter (number of orders taken), in a thread-safe and atomic fashion. If implemented incorrectly, your shop could end up taking more than 5000 pre-orders due to missed updates and possibly different threads seeing stale values of the counter.
The simplest way to update a counter atomically is to use synchronized methods to get and increment it:
class DonutShop {
private int ordersTaken = 0;
public synchronized int getOrdersTaken() {
return ordersTaken;
}
public synchronized void increaseOrdersBy(int n) {
ordersTaken += n;
}
// Other methods here
}
The synchronized methods mean that only one thread can be calling either method at any time (and they also provide a memory barrier to ensure that different threads see the same value rather than locally cached ones which may be outdated). This ensures a consistent view of the counter across all threads in your application.
(Note that I didn't provide a "set" method but an "increment" method. The problem with "set" is that if a client has to call shop.set(shop.get() + 1), another thread could have incremented the value between the calls to get and set, so this update would be lost. By making the whole increment operation atomic - because it's in a synchronized method - this situation cannot occur.)
In practice I would probably use an AtomicInteger instead, which is basically a wrapper around an int to allow for atomic queries and updates, just like the DonutShop class above. It also has the advantage that it's more efficient in terms of minimising exclusive blocking, and it's part of the standard library so will be more immediately familiar to other developers than a class you've written yourself.
In terms of correctness, either will suffice.
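For illustration, here is a hedged sketch of that AtomicInteger variant with the 5000-donut cap enforced in a single atomic step; the class and method names are made up:
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: cap enforcement with a compareAndSet loop so the
// shop can never overshoot 5000 ordered donuts.
class AtomicDonutShop {
    private static final int LIMIT = 5000;
    private final AtomicInteger ordersTaken = new AtomicInteger(0);

    // Returns true if the order fits under the cap and was recorded.
    public boolean tryOrder(int n) {
        while (true) {
            int current = ordersTaken.get();
            if (current + n > LIMIT) {
                return false;              // would exceed the cap: reject the order
            }
            if (ordersTaken.compareAndSet(current, current + n)) {
                return true;               // recorded atomically
            }
            // another thread updated the counter first; retry with the new value
        }
    }

    public int getOrdersTaken() {
        return ordersTaken.get();
    }
}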
Like Tudor wrote, you can use any object as monitor for general purpose locking and synchronization.
However, if you have the requirement that only x orders (x=5000 in your case) can be processed at any one time, you could use the java.util.concurrent.Semaphore class. It is made specifically for use cases where you can only have a fixed number of jobs running - these are called permits in Semaphore terminology.
If you do the processing immediately, you can go with
private Semaphore semaphore = new Semaphore(5000);
public void process(Order order)
{
if (semaphore.tryAcquire())
{
try
{
//do your processing here
}
finally
{
semaphore.release();
}
}
else
{
throw new IllegalStateException("can't take more orders");
}
}
If it takes longer than that (human input required, starting another thread/process, etc.), you need to add a callback for when the processing is over, like:
private Semaphore semaphore = new Semaphore(5000);
public void process(Order order)
{
if (semaphore.tryAcquire())
{
//start a new job to process order
}
else
{
throw new IllegalStateException("can't take more orders");
}
}
//call this from the job you started, once it is finished
public void processingFinished(Order order)
{
semaphore.release();
//any other post-processing for that order
}

Piping data between threads with Java

I am writing a multi-threaded application that mimics a movie theater. Each person involved is its own thread and concurrency must be done completely by semaphores. The only issue I am having is how to basically link threads so that they can communicate (via a pipe for instance).
For instance:
Customer[1] which is a thread, acquires a semaphore that lets it walk up to the Box Office. Now Customer[1] must tell the Box Office Agent that they want to see movie "X". Then BoxOfficeAgent[1] also a thread, must check to make sure the movie isn't full and either sell a ticket or tell Customer[1] to pick another movie.
How do I pass that data back and forth while still maintaining concurrency with the semaphores?
Also, the only class I can use from java.util.concurrent is the Semaphore class.
One easy way to pass data back and forth between threads is to use the implementations of the interface BlockingQueue<E>, located in the package java.util.concurrent.
This interfaces has methods to add elements to the collection with different behaviors:
add(E): adds if possible, otherwise throws exception
boolean offer(E): returns true if the element has been added, false otherwise
boolean offer(E, long, TimeUnit): tries to add the element, waiting the specified amount of time
put(E): blocks the calling thread until the element has been added
It also defines methods for element retrieval with similar behaviors:
take(): blocks until there's an element available
poll(long, TimeUnit): retrieves an element, waiting up to the specified time; returns null if none becomes available
The implementations I use most frequently are: ArrayBlockingQueue, LinkedBlockingQueue and SynchronousQueue.
The first one, ArrayBlockingQueue, has a fixed size, defined by a parameter passed to its constructor.
The second, LinkedBlockingQueue, is effectively unbounded by default. It will always accept new elements, that is, offer will return true immediately and add will never throw an exception.
The third, and to me the most interesting one, SynchronousQueue, is exactly a pipe. You can think of it as a queue with size 0. It will never keep an element: this queue will only accept elements if there's some other thread trying to retrieve elements from it. Conversely, a retrieval operation will only return an element if there's another thread trying to push it.
To fulfill the homework requirement of synchronization done exclusively with semaphores, you could get inspired by the description I gave you about the SynchronousQueue, and write something quite similar:
class Pipe<E> {
    private E e;
    private final Semaphore read = new Semaphore(0);
    private final Semaphore write = new Semaphore(1);

    public final void put(final E e) throws InterruptedException {
        write.acquire();
        this.e = e;
        read.release();
    }

    public final E take() throws InterruptedException {
        read.acquire();
        E e = this.e;
        write.release();
        return e;
    }
}
Notice that this class presents similar behavior to what I described about the SynchronousQueue.
Once the method put(E) gets called, it acquires the write semaphore, which is then left empty, so that another call to the same method would block at its first line. This method then stores a reference to the object being passed, and releases the read semaphore. This release makes it possible for any thread calling the take() method to proceed.
The first step of the take() method is then, naturally, to acquire the read semaphore, in order to disallow any other thread to retrieve the element concurrently. After the element has been retrieved and kept in a local variable (exercise: what would happen if that line, E e = this.e, were removed?), the method releases the write semaphore, so that the method put(E) may be called again by any thread, and returns what has been saved in the local variable.
As an important remark, observe that the reference to the object being passed is kept in a private field, and the methods take() and put(E) are both final. This is of utmost importance, and often missed. If these methods were not final (or worse, the field not private), an inheriting class would be able to alter the behavior of take() and put(E) breaking the contract.
Finally, you could avoid the need to declare a local variable in the take() method by using try {} finally {} as follows:
class Pipe<E> {
    // ...
    public final E take() throws InterruptedException {
        try {
            read.acquire();
            return e;
        } finally {
            write.release();
        }
    }
}
Here, the point of this example is just to show a use of try/finally that goes unnoticed among inexperienced developers. Obviously, in this case, there's no real gain.
Oh damn, I've mostly finished your homework for you. In return -- and for you to test your knowledge of semaphores -- why don't you implement some of the other methods defined by the BlockingQueue contract? For example, you could implement an offer(E) method and a poll(long, TimeUnit)!
Good luck.
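To tie this back to the theatre scenario, a hedged usage sketch might pass the request and the reply through two Pipe instances; the Customer and BoxOfficeAgent logic here is reduced to stand-ins:
// Hypothetical usage of the Pipe above: one pipe carries the movie request,
// the other carries the agent's yes/no answer back to the customer.
class BoxOfficeDemo {
    public static void main(String[] args) {
        Pipe<String> requests = new Pipe<>();
        Pipe<Boolean> replies = new Pipe<>();

        Thread agent = new Thread(() -> {
            try {
                String movie = requests.take();          // wait for a customer
                boolean hasSeats = !"X".equals(movie);   // stand-in for a real seat check
                replies.put(hasSeats);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread customer = new Thread(() -> {
            try {
                requests.put("X");                       // ask for movie "X"
                boolean gotTicket = replies.take();
                System.out.println(gotTicket ? "Got a ticket" : "Pick another movie");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        agent.start();
        customer.start();
    }
}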
Think of it in terms of shared memory with a read/write lock.
Create a buffer to hold the message.
Access to the buffer should be controlled using a lock/semaphore.
Use this buffer for inter-thread communication.

killing an infinite loop in java

I am using a third-party library to process a large number of data sets. The process very occasionally goes into an infinite loop (or blocks - I don't know why and can't get into the code). I'd like to kill this after a set time and continue to the next case. A simple example is:
for (Object data : dataList) {
Object result = TheirLibrary.processData(data);
store(result);
}
processData normally takes 1 second max. I'd like to set a timer which kills processData() after, say, 10 seconds.
EDIT
I would appreciate a code snippet (I am not practiced in using threads). The Executor approach looks useful, but I don't quite know how to start. Also, the pseudocode for the more conventional approach is too general for me to code.
@Steven Schlansker suggests that unless the third-party library anticipates the interrupt it won't work. Again, detail and examples would be appreciated.
EDIT
I got the precise solution I was wanting from my colleague Sam Adams, which I am appending as an answer. It has more detail than the other answers, but I will give them both a vote. I'll mark Sam's as the accepted answer.
One of the ExecutorService.invokeAll(...) methods takes a timeout argument. Create a single Callable that calls the library, and wrap it in a List as an argument to that method. The Future returned indicates how it went.
(Note: untested by me)
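To make that concrete, here is a hedged sketch of the approach; it reuses TheirLibrary.processData and store from the question, and the 10-second budget is just the figure mentioned there:
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: give each processData call a 10-second budget. A task
// that overruns comes back cancelled and we move on to the next item.
void processAll(List<Object> dataList) throws InterruptedException, ExecutionException {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    try {
        for (Object data : dataList) {
            Callable<Object> call = () -> TheirLibrary.processData(data);
            List<Future<Object>> futures =
                    executor.invokeAll(Collections.singletonList(call), 10, TimeUnit.SECONDS);
            Future<Object> future = futures.get(0);
            if (!future.isCancelled()) {
                store(future.get());
            }
            // else: timed out; note the library call may keep running
            // unless it responds to interruption
        }
    } finally {
        executor.shutdownNow();
    }
}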
Put the call to the library in another thread and kill this thread after a timeout. That way you could also process multiple objects at the same time if they are not dependent on each other.
EDIT: demo code as requested
This is pseudocode, so you will have to improve and extend it. Also, error checking whether a call was successful or not will help.
for (Object data : dataList) {
Thread t = new LibThread(data);
// store the thread somewhere with an id
// tid and starting time tstart
// threads
t.start();
}
while(!all threads finished)
{
for (Thread t : threads)
{
// get start time of thread
// and check the timeout
if (runtime > timeout)
{
t.stop();
}
}
}
class LibThread extends Thread {
    Object data;
    public LibThread(Object data)   // constructor name must match the class name
    {
        this.data = data;
    }
    public void run()               // start() invokes run(), so the work goes here
    {
        Object result = TheirLibrary.processData(data);
        store(result);
    }
}
Sam Adams sent me the following answer, which is the one I accepted:
Thread thread = new Thread(myRunnableCode);
thread.start();
thread.join(timeoutMs);
if (thread.isAlive()) {
thread.interrupt();
}
and myRunnableCode regularly checks Thread.isInterrupted(), and exits cleanly if this returns true.
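For illustration, such a cooperative runnable might look like the following sketch (the work loop is a stand-in):
// Illustrative sketch: a runnable that polls the interrupt flag and exits
// cleanly when the watchdog thread interrupts it.
Runnable myRunnableCode = () -> {
    while (!Thread.currentThread().isInterrupted()) {
        // do one bounded chunk of work here, then loop and re-check the flag
    }
    // clean up and return; the thread then ends normally
};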
Alternatively you can do:
Thread thread = new Thread(myRunnableCode);
thread.start();
thread.join(timeoutMs);
if (thread.isAlive()) {
thread.stop();
}
But this method has been deprecated since it is DANGEROUS.
http://download.oracle.com/javase/1.4.2/docs/api/java/lang/Thread.html#stop()
"This method is inherently unsafe. Stopping a thread with Thread.stop causes it to unlock all of the monitors that it has locked (as a natural consequence of the unchecked ThreadDeath exception propagating up the stack). If any of the objects previously protected by these monitors were in an inconsistent state, the damaged objects become visible to other threads, potentially resulting in arbitrary behavior."
I've implemented the second and it does what I want at present.

Is this java code thread-safe?

I am planning to use this schema in my application, but I was not sure whether this is safe.
To give a little background, a bunch of servers will compute results of sub-tasks that belong to a single task and report them back to the central server. This piece of code is used to register the results, and also check whether all the subtasks for the task has completed and if so, report that fact only once.
The important point is that all tasks must be reported once and only once, as soon as they are completed (i.e., all subTaskResults are set).
Can anybody help? Thank you! (Also, if you have a better idea to solve this problem, please let me know!)
*Note that I simplified the code for brevity.
Solution I
class Task {
    //Populated with a bunch of (Long, new AtomicReference()) pairs
    //Actual app uses a read-only HashMap
    Map<Id, AtomicReference<SubTaskResult>> subtasks = populatedMap();
    Semaphore permission = new Semaphore(1);

    public Task set(Id id, SubTaskResult result){
        //null check omitted
        subtasks.get(id).set(result);
        return check() ? this : null;
    }

    private boolean check(){
        for(AtomicReference<SubTaskResult> ref : subtasks.values()){
            if(ref.get()==null){
                return false;
            }
        }//for
        return permission.tryAcquire();
    }
}//class
Stephen C kindly suggested to use a counter. Actually, I have considered that once, but I reasoned that the JVM could reorder the operations and thus, a thread can observe a decremented counter (by another thread) before the result is set in AtomicReference (by that other thread).
*EDIT: I now see this is thread safe. I'll go with this solution. Thanks, Stephen!
Solution II
class Task {
    //Populated with a bunch of (Long, new AtomicReference()) pairs
    //Actual app uses a read-only HashMap
    Map<Id, AtomicReference<SubTaskResult>> subtasks = populatedMap();
    AtomicInteger counter = new AtomicInteger(subtasks.size());

    public Task set(Id id, SubTaskResult result){
        //null check omitted
        subtasks.get(id).set(result);
        //In the actual app, if !compareAndSet(null, result) return null;
        return check() ? this : null;
    }

    private boolean check(){
        return counter.decrementAndGet() == 0;
    }
}//class
I assume that your use-case is that there are multiple threads calling set, but for any given value of id, the set method will be called only once. I'm also assuming that populatedMap creates the entries for all used id values, and that subtasks and permission are really private.
If so, I think that the code is thread-safe.
Each thread should see the initialized state of the subtasks Map, complete with all keys and all AtomicReference references. This state never changes, so subtasks.get(id) will always give the right reference. The set(result) call operates on an AtomicReference, so the subsequent get() method calls in check() will give the most up-to-date values ... in all threads. Any potential races with multiple threads calling check seem to sort themselves out.
However, this is a rather complicated solution. A simpler solution would be to use a concurrent counter; e.g. replace the Semaphore with an AtomicInteger and use decrementAndGet instead of repeatedly scanning the subtasks map in check.
In response to this comment in the updated solution:
Actually, I have considered that once, but I reasoned that the JVM could reorder the operations and thus, a thread can observe a decremented counter (by another thread) before the result is set in the AtomicReference (by that other thread).
The AtomicInteger and AtomicReference by definition are atomic. Any thread that tries to access one is guaranteed to see the "current" value at the time of the access.
In this particular case, each thread calls set on the relevant AtomicReference before it calls decrementAndGet on the AtomicInteger. This cannot be reordered. Actions performed by a thread are performed in order. And since these are atomic actions, the effects will be visible to other threads in order as well.
In other words, it should be thread-safe ... AFAIK.
The atomicity guaranteed (per class documentation) explicitly for AtomicReference.compareAndSet extends to set and get methods (per package documentation), so in that regard your code appears to be thread-safe.
I am not sure, however, why you have Semaphore.tryAcquire as a side effect there; without complementary code to release the semaphore, that part of your code looks wrong.
The second solution does provide a thread-safe latch, but it's vulnerable to calls to set() that provide an ID that's not in the map -- which would trigger a NullPointerException -- or more than one call to set() with the same ID. The latter would mistakenly decrement the counter too many times and falsely report completion when there are presumably other subtask IDs for which no result has been submitted. My criticism isn't with regard to the thread safety, but rather to the invariant maintenance; the same flaw would be present even without the thread-related concern.
Another way to solve this problem is with AbstractQueuedSynchronizer, but it's somewhat gratuitous: you can implement a stripped-down counting semaphore, where each call set() would call releaseShared(), decrementing the counter via a spin on compareAndSetState(), and tryAcquireShared() would only succeed when the count is zero. That's more or less what you implemented above with the AtomicInteger, but you'd be reusing a facility that offers more capabilities you can use for other portions of your design.
To flesh out the AbstractQueuedSynchronizer-based solution requires adding one more operation to justify the complexity: being able to wait on the results from all the subtasks to come back, such that the entire task is complete. That's Task#awaitCompletion() and Task#awaitCompletion(long, TimeUnit) in the code below.
Again, it's possibly overkill, but I'll share it for the purpose of discussion.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.AbstractQueuedSynchronizer;
final class Task
{
private static final class Sync extends AbstractQueuedSynchronizer
{
public Sync(int count)
{
setState(count);
}
@Override
protected int tryAcquireShared(int ignored)
{
return 0 == getState() ? 1 : -1;
}
@Override
protected boolean tryReleaseShared(int ignored)
{
int current;
do
{
current = getState();
if (0 == current)
return true;
}
while (!compareAndSetState(current, current - 1));
return 1 == current;
}
}
public Task(int count)
{
if (count < 0)
throw new IllegalArgumentException();
sync_ = new Sync(count);
}
public boolean set(int id, Object result)
{
// Ensure that "id" refers to an incomplete task. Doing so requires
// additional synchronization over the structure mapping subtask
// identifiers to results.
// Store result somehow.
return sync_.releaseShared(1);
}
public void awaitCompletion()
throws InterruptedException
{
sync_.acquireSharedInterruptibly(0);
}
public void awaitCompletion(long time, TimeUnit unit)
throws InterruptedException
{
sync_.tryAcquireSharedNanos(0, unit.toNanos(time));
}
private final Sync sync_;
}
I have a weird feeling reading your example program, but what to do about it depends on the larger structure of your program. A set function that also checks for completion is almost a code smell. :-) Just a few ideas.
If you have synchronous communication with your servers you might use an ExecutorService with the same number of threads like the number of servers that do the communication. From this you get a bunch of Futures, and you can naturally proceed with your calculation - the get calls will block at the moment the result is needed but not yet there.
If you have asynchronous communication with the servers, you might also use a CountDownLatch after submitting the tasks to the servers. The await call blocks the main thread until all subtasks have completed, and other threads can receive the results and call countDown for each received result.
With all these methods you don't need special thread-safety measures beyond ensuring that the concurrent storing of the results in your structure is thread-safe. And I bet there are even better patterns for this.
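To make the CountDownLatch idea concrete, here is a hedged sketch; the class and field names are hypothetical:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Hypothetical sketch: each subtask result is stored once, then the latch is
// counted down; whichever thread wants the whole task simply awaits the latch.
class LatchedTask {
    private final Map<Long, Object> results = new ConcurrentHashMap<>();
    private final CountDownLatch remaining;

    LatchedTask(int subtaskCount) {
        remaining = new CountDownLatch(subtaskCount);
    }

    void set(long id, Object result) {
        if (results.putIfAbsent(id, result) == null) {
            remaining.countDown();     // only the first result for an id counts
        }
    }

    void awaitCompletion() throws InterruptedException {
        remaining.await();             // blocks until every subtask has reported
    }
}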
