I'm creating a media player app for Android. I have two threads: one producing audio frames, and another consuming those frames.
I want my customer to be able to experiment with different sizes of ArrayBlockingQueues, from "no" buffering (really 1) up to 10 blocks of buffering.
I can't seem to find any classes in Java that provide functionality similar to ArrayBlockingQueue but allow me to dynamically make the list of items longer or shorter.
Question 1) Does anyone know of a class that functions like ArrayBlockingQueue, yet allows me to change the number of items it holds?
Then I had a strange thought: could I fudge it? Could I create a new ArrayBlockingQueue with the new size, step through the 1-10 items currently in the old ArrayBlockingQueue, copying them into the new one, and then store a reference to the new ArrayBlockingQueue over the old one?
Since there'll never be more than 10 (or whatever my buffer limit is), it shouldn't take too much time copying the items to the new array.
Question 2) Is that a "reasonable" way to approach an ArrayBlockingQueue implementation that still gives me flexibility?
Question 3) Is there a better way to approach this?
-Ken
You will probably need to create your own BlockingQueue implementation that wraps the old queue and the new queue - poll from the old queue until it's empty, then set it to null to prevent a memory leak. This way you won't lose any pending puts on the old queue:
class MyBlockingQueue<E> {
    private MyBlockingQueue<E> oldQueue;
    private final ArrayBlockingQueue<E> newQueue;

    MyBlockingQueue(int newCapacity, MyBlockingQueue<E> oldQueue) throws InterruptedException {
        this.oldQueue = oldQueue;
        this.newQueue = new ArrayBlockingQueue<E>(newCapacity);
        E oldVal;
        while (newQueue.remainingCapacity() > 0 && (oldVal = oldPoll()) != null) {
            newQueue.put(oldVal);
        }
    }

    boolean isEmpty() {
        return (oldQueue == null || oldQueue.isEmpty()) && newQueue.isEmpty();
    }

    void put(E e) throws InterruptedException {
        newQueue.put(e);
    }

    E take() throws InterruptedException {
        E oldVal = oldPoll();
        return oldVal != null ? oldVal : newQueue.take();
    }

    E poll() {
        E oldVal = oldPoll();
        return oldVal != null ? oldVal : newQueue.poll();
    }

    private E oldPoll() {
        // If you have more than one consumer thread, copy oldQueue into a local
        // variable first - otherwise it might be set to null between the null check
        // and the call to poll()
        if (oldQueue == null) {
            return null;
        }
        E oldVal = oldQueue.poll();
        if (oldVal != null) {
            return oldVal;
        }
        oldQueue = null; // old queue is drained; drop it so it can be garbage collected
        return null;
    }
}
To your questions:
1) There isn't one that allows you to manually change the queue size, although something like a LinkedBlockingQueue will grow up to the max that you set for it.
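For instance, a minimal sketch of a bounded LinkedBlockingQueue (Frame is just a placeholder for whatever element type your producer emits) - it allocates nodes one at a time as elements arrive, but the capacity is still fixed at construction:
// the capacity is a hard upper bound; below it, the queue grows node by node
BlockingQueue<Frame> queue = new LinkedBlockingQueue<Frame>(10);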
2 and 3) You could do what you described (create a new ArrayBlockingQueue) using the 3rd constructor described in the docs:
ArrayBlockingQueue(int capacity, boolean fair, Collection c)
Creates an ArrayBlockingQueue with the given (fixed) capacity, the specified access policy and initially containing the elements of the given collection, added in traversal order of the collection's iterator.
This gives you the copy construction that you're looking for, and allows you to set the new capacity. Sizing up:
// create the first queue
Queue smallQueue = new ArrayBlockingQueue(5);
// copy small queue over to big queue
Queue bigQueue = new ArrayBlockingQueue(10, false, smallQueue);
Sizing down (pseudocode):
Queue bigQueue = new ArrayBlockingQueue(10);
// start processing data with your producer / consumer.
// then...
Queue smallQueue = new ArrayBlockingQueue(1);
// 1) change producer to start doing puts into the smallQueue
// 2) let consumer continue consuming from the bigQueue until it is empty
// 3) change consumer to start polling from the smallQueue
Your puts from step 1 will block until you switch the consumer over.
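If you want to automate that hand-over, here is a rough sketch of the idea. All names are made up, and it assumes a single producer and a single consumer, with resize() only ever called from the producer thread so that no put can land in the old queue after the swap:
class ResizableBuffer<E> {
    // the queue the producer currently writes to; replaced on resize
    private volatile BlockingQueue<E> producerQueue = new ArrayBlockingQueue<E>(10);
    // the queue the consumer currently drains; lags behind producerQueue until empty
    private BlockingQueue<E> consumerQueue = producerQueue; // touched by the consumer thread only

    // call from the producer thread only
    void resize(int newCapacity) {
        producerQueue = new ArrayBlockingQueue<E>(newCapacity);
    }

    void put(E e) throws InterruptedException {
        producerQueue.put(e);
    }

    E take() throws InterruptedException {
        while (true) {
            E e = consumerQueue.poll();
            if (e != null) return e;
            BlockingQueue<E> current = producerQueue;
            if (current == consumerQueue) {
                // no resize pending - wait briefly for the producer, then re-check
                e = consumerQueue.poll(10, TimeUnit.MILLISECONDS);
                if (e != null) return e;
            } else {
                // a resize happened; check the old queue one last time before abandoning it,
                // in case the producer slipped an element in just before swapping
                e = consumerQueue.poll();
                if (e != null) return e;
                consumerQueue = current;
            }
        }
    }
}
The timed poll in take() is only there so the consumer periodically re-reads producerQueue instead of blocking forever on a queue the producer has already abandoned.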
I have two separate ChronicleQueues that were created by independent threads that monitor web socket streams in a Java application. When I read each queue independently in a separate single-thread program, I can traverse each entire queue as expected - using the following minimal code:
final ExcerptTailer queue1Tailer = queue1.createTailer();
final ExcerptTailer queue2Tailer = queue2.createTailer();
while (true)
{
try( final DocumentContext context = queue1Tailer.readingDocument() )
{
if ( isNull(context.wire()) )
break;
counter1++;
queue1Data = context.wire()
.bytes()
.readObject(Queue1Data.class);
queue1Writer.write(String.format("%d\t%d\t%d%n", counter1, queue1Data.getEventTime(), queue1Data.getEventContent()));
}
}
while (true)
{
try( final DocumentContext context = queue2Tailer.readingDocument() )
{
if ( isNull(context.wire()) )
break;
counter2++;
queue2Data = context.wire()
.bytes()
.readObject(Queue2Data.class);
queue2Writer.write(String.format("%d\t%d\t%d%n", counter2, queue2Data.getEventTime(), queue2Data.getEventContent()));
}
}
In the above, I am able to read all the Queue1Data objects, then all the Queue2Data objects, and access values as expected. However, when I try to interleave reading the queues (read an object from one queue, then, based on a property of the Queue1Data object (a time stamp), read Queue2Data objects until the first object after that time stamp (the limit variable below) is found - then do something with it), an exception is thrown after only one object has been read from the queue2Tailer: DecoratedBufferUnderflowException: readCheckOffset0 failed. The simplified code that fails is below (I have tried putting the outer while(true) loop both inside and outside the queue2Tailer try block):
final ExcerptTailer queue1Tailer = queue1Queue.createTailer("label1");
try( final DocumentContext queue1Context = queue1Tailer.readingDocument() )
{
final ExcerptTailer queue2Tailer = queue2Queue.createTailer("label2");
while (true)
{
try( final DocumentContext queue2Context = queue2Tailer.readingDocument() )
{
if ( isNull(queue2Context.wire()) )
{
terminate = true;
break;
}
queue2Data = queue2Context.wire()
.bytes()
.readObject(Queue2Data.class);
while(true)
{
queue1Data = queue1Context.wire()
.bytes()
.readObject(Queue1Data.class); // first read succeeds
if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues
{ // but the second read fails
// cache a value
break;
}
}
// continue working with queue2Data object and cached values
} // end try block for queue2 tailer
} // end outer while loop
} // end outer try block for queue1 tailer
I have tried as above, and also with both tailers created at the beginning of the function that does the processing (a private function executed when a button is clicked in a relatively simple Java application). Basically I took the loop that worked independently and put it inside another loop in the function, expecting no problems. I think I am missing something crucial in how tailers are positioned and used to read objects, but I cannot figure out what it is, since the same basic code works when reading the queues independently. The use of isNull(context.wire()) to determine when there are no more objects in a queue I took from one of the examples, though I am not sure it is the proper way to detect the end of a queue when processing it sequentially.
Any suggestions would be appreciated.
You're not writing it correctly in the first instance.
Now, there's a hardcore way of achieving what you are trying to achieve (that is, doing everything explicitly, at a lower level), and there's the MethodReader/MethodWriter magic provided by Chronicle.
Hardcore way
Writing
// write first event type
try (DocumentContext dc = queueAppender.writingDocument()) {
dc.wire().writeEventName("first").text("Hello first");
}
// write second event type
try (DocumentContext dc = queueAppender.writingDocument()) {
dc.wire().writeEventName("second").text("Hello second");
}
This will write different types of messages into the same queue, and you will be able to easily distinguish those when reading.
Reading
StringBuilder reusable = new StringBuilder();
while (true) {
try (DocumentContext dc = tailer.readingDocument()) {
if (!dc.isPresent()) {
continue;
}
dc.wire().readEventName(reusable);
if ("first".contentEquals(reusable)) {
// handle first
} else if ("second".contentEquals(reusable)) {
// handle second
}
// optionally handle other events
}
}
The Chronicle Way (aka Peter's magic)
This works with any marshallable types, as well as any primitive types and CharSequence subclasses (i.e. Strings), and Bytes. For more details have a read of MethodReader/MethodWriter documentation.
Suppose you have some data classes:
public class FirstDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
// data fields...
}
public class SecondDataType implements Marshallable { // alternatively - extends SelfDescribingMarshallable
// data fields...
}
Then, to write those data classes to the queue, you just need to define the interface, like this:
interface EventHandler {
void first(FirstDataType first);
void second(SecondDataType second);
}
Writing
Then, writing data is as simple as:
final EventHandler writer = appender.methodWriterBuilder(EventHandler.class).get();
// assuming firstDatum and secondDatum are created earlier
writer.first(firstDatum);
writer.second(secondDatum);
What this does is the same as in the hardcore section - it writes event name (which is taken from the method name in method writer, i.e. "first" or "second" correspondingly), and then the actual data object.
Reading
Now, to read those events from the queue, you need to provide an implementation of the above interface, that will handle corresponding event types, e.g.:
// you implement this to read data from the queue
private class MyEventHandler implements EventHandler {
public void first(FirstDataType first) {
// handle first type of events
}
public void second(SecondDataType second) {
// handle second type of events
}
}
And then you read as follows:
EventHandler handler = new MyEventHandler();
MethodReader reader = tailer.methodReader(handler);
while (true) {
reader.readOne(); // readOne returns boolean value which can be used to determine if there's no more data, and pause if appropriate
}
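For example, a minimal sketch of that pause handling ('running' is just whatever stop flag you use, and the sleep is a simple stand-in for a proper pauser):
while (running) {
    if (!reader.readOne()) {
        Thread.sleep(10); // nothing to read right now - back off briefly
        // (handle InterruptedException as appropriate for your app)
    }
}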
Misc
You don't have to use the same interface for reading and writing. In case you want to only read events of second type, you can define another interface:
interface OnlySecond {
void second(SecondDataType second);
}
Now, if you create a handler implementing this interface and give it to tailer#methodReader() call, the readOne() calls will only process events of second type while skipping all others.
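Something along these lines (a sketch, reusing the tailer and data types from above; assumes Java 8 lambdas):
OnlySecond onlySecond = second -> {
    // handle only the second type of events; everything else is skipped by readOne()
};
MethodReader secondReader = tailer.methodReader(onlySecond);
while (secondReader.readOne()) {
    // keep draining while there is data
}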
This also works for MethodWriters, i.e. if you have several processes writing different types of data and one process consuming all that data, it is not uncommon to define multiple interfaces for writing and then a single interface extending all the others for reading, e.g.:
interface FirstOut {
void first(String first);
}
interface SecondOut {
void second(long second);
}
interface ThirdOut {
void third(ThirdDataType third);
}
interface AllIn extends FirstOut, SecondOut, ThirdOut {
}
(I deliberately used different data types for method parameters to show how it is possible to use various types)
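A quick sketch of how that fan-in might be wired up (appenderA, appenderB and consumerTailer are placeholders for whatever appenders/tailer you already have):
// each producer writes through its own narrow interface
FirstOut firstWriter = appenderA.methodWriterBuilder(FirstOut.class).get();
SecondOut secondWriter = appenderB.methodWriterBuilder(SecondOut.class).get();
firstWriter.first("hello");
secondWriter.second(42L);

// the single consumer reads everything through the combined interface
MethodReader reader = consumerTailer.methodReader(new AllIn() {
    public void first(String first) { /* ... */ }
    public void second(long second) { /* ... */ }
    public void third(ThirdDataType third) { /* ... */ }
});
while (reader.readOne()) {
}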
With further testing, I have found that using nested loops to read multiple queues containing data in different POJO classes is possible. The problem with the code in the question above is that queue1Context is obtained once, OUTSIDE the loop in which I expected to read queue1Data objects. My fundamental misconception was that DocumentContext objects manage stepping through the objects in a queue, whereas it is actually ExcerptTailer objects that manage stepping (maintaining indices) when reading a queue sequentially.
In case it might help someone else just getting started with ChronicleQueues, the inner loop in the original question should be:
while(true)
{
try (final DocumentContext queue1Context = queue1Tailer.readingDocument() )
{
queue1Data = queue1Context.wire()
.bytes()
.readObject(Queue1Data.class); // first read succeeds
if (queue1Data.getFieldValue() > limit) // if this fails the inner loop continues as expected
{ // and second and subsequent reads now succeed
// cache a value
break;
}
}
}
And of course the outer-most try block containing queue1Context (in the original code) should be removed.
I am looking into the implementation of Streams::findLast from Guava, and while trying to understand it, there were a couple of things that I simply could not grasp. Here is its implementation:
public static <T> java.util.Optional<T> findLast(Stream<T> stream) {
class OptionalState {
boolean set = false;
T value = null;
void set(@Nullable T value) {
set = true;
this.value = value;
}
T get() {
checkState(set);
return value;
}
}
OptionalState state = new OptionalState();
Deque<Spliterator<T>> splits = new ArrayDeque<>();
splits.addLast(stream.spliterator());
while (!splits.isEmpty()) {
Spliterator<T> spliterator = splits.removeLast();
if (spliterator.getExactSizeIfKnown() == 0) {
continue; // drop this split
}
// Many spliterators will have trySplits that are SUBSIZED even if they are not themselves
// SUBSIZED.
if (spliterator.hasCharacteristics(Spliterator.SUBSIZED)) {
// we can drill down to exactly the smallest nonempty spliterator
while (true) {
Spliterator<T> prefix = spliterator.trySplit();
if (prefix == null || prefix.getExactSizeIfKnown() == 0) {
break;
} else if (spliterator.getExactSizeIfKnown() == 0) {
spliterator = prefix;
break;
}
}
// spliterator is known to be nonempty now
spliterator.forEachRemaining(state::set);
return java.util.Optional.of(state.get());
}
Spliterator<T> prefix = spliterator.trySplit();
if (prefix == null || prefix.getExactSizeIfKnown() == 0) {
// we can't split this any further
spliterator.forEachRemaining(state::set);
if (state.set) {
return java.util.Optional.of(state.get());
}
// fall back to the last split
continue;
}
splits.addLast(prefix);
splits.addLast(spliterator);
}
return java.util.Optional.empty();
}
In essence the implementation is not that complicated to be honest, but here are the things that I find a bit weird (and I'll take the blame here if this question gets closed as "opinion-based", I understand it might happen).
First of all, there is the creation of the OptionalState class; this could have been replaced with an array of a single element:
T[] state = (T[]) new Object[1];
and used as simply as:
spliterator.forEachRemaining(x -> state[0] = x);
Then the entire method could be split into 3 pieces:
when a certain Spliterator is known to be empty:
if (spliterator.getExactSizeIfKnown() == 0)
In this case it's easy - just drop it.
then, if the Spliterator is known to be SUBSIZED. This is the "happy-path" scenario, since in this case we can keep splitting until we get to the last element. Basically the implementation says: split until the prefix is either null or empty (in which case consume the "right" spliterator), or, if after a split the "right" spliterator is known to be empty, consume the prefix instead. This is done via:
// spliterator is known to be nonempty now
spliterator.forEachRemaining(state::set);
return java.util.Optional.of(state.get());
The second question I have is actually about this comment:
// Many spliterators will have trySplits that are SUBSIZED
// even if they are not themselves SUBSIZED.
This is very interesting, but I could not find such an example; I would appreciate it if someone could show me one. As a matter of fact, because this comment exists, the code in the next (third) part of the method cannot be done with a while(true) like the second part, because it assumes that after a trySplit we could obtain a Spliterator that is SUBSIZED even if our initial one was not, so it has to go back to the very beginning of findLast.
The third part of the method is for when a Spliterator is known not to be SUBSIZED, and in this case it does not have a known size; thus it relies on how the Spliterator from the source is implemented, and here a findLast actually makes little sense... for example, a Spliterator from a HashSet will return whatever the last entry happens to be in the last bucket...
When you iterate a Spliterator of an unknown size, you have to track whether an element has been encountered. This can be done by calling tryAdvance and using the return value or by using forEachRemaining with a Consumer which records whether an element has been encountered. When you go the latter route, a dedicated class is simpler than an array. And once you have a dedicated class, why not use it for the SIZED spliterator as well.
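To illustrate the two options with a tiny sketch (sp is a Spliterator of unknown size):
Object[] last = new Object[1];
boolean found = false;
// option 1: tryAdvance tells you via its return value whether an element was seen
while (sp.tryAdvance(x -> last[0] = x)) {
    found = true;
}
// option 2: with forEachRemaining, the Consumer itself must record that an element
// was encountered - which is exactly what Guava's OptionalState class does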
What’s strange to me, is that this local class, which only exists to be used as a Consumer, doesn’t implement Consumer but requires the binding via state::set.
Consider
Stream.concat(
Stream.of("foo").filter(s -> !s.isEmpty()),
Stream.of("bar", "baz"))
The Spliterator representing the entire stream can’t have the SIZED characteristic. But when splitting off the first substream with the unknown size, the remaining stream has a known size.
Test code:
Spliterator<String> sp = Stream.concat(
Stream.of("foo").filter(s -> !s.isEmpty()),
Stream.of("bar", "baz"))
.spliterator();
do {
System.out.println(
"SIZED: "+sp.hasCharacteristics(Spliterator.SIZED)
+ ", SUBSIZED: "+sp.hasCharacteristics(Spliterator.SUBSIZED)
+ ", exact size if known: "+sp.getExactSizeIfKnown());
} while(sp.trySplit() != null);
Result:
SIZED: false, SUBSIZED: false, exact size if known: -1
SIZED: true, SUBSIZED: true, exact size if known: 2
SIZED: true, SUBSIZED: true, exact size if known: 1
But to me, it looks weird when someone states in a comment that splitting can change the characteristics, and then still does a pre-test with SUBSIZED instead of just doing the split and checking whether the result has a known size. After all, the code does the split anyway, in the alternative branch, when the characteristic is not present. In my old answer, I did the pre-test to avoid allocating data structures, but here, the ArrayDeque is always created and used. But I think even my old answer could be simplified.
I’m not sure what you are aiming at. When a Spliterator has the ORDERED characteristic, the order of traversal and splitting is well-defined. Since HashSet is not ordered, the term “last” is meaningless. If you are radical, you could optimize the operation to just return the first element for unordered streams; that’s valid and much faster.
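If one wanted that optimization, a hypothetical shortcut at the top of findLast (this is not in Guava, just an illustration of the point) could look roughly like:
// for an unordered source, "last" is meaningless, so any element is a valid answer
Spliterator<T> sp = stream.spliterator();
if (!sp.hasCharacteristics(Spliterator.ORDERED)) {
    Object[] any = new Object[1];
    boolean found = sp.tryAdvance(x -> any[0] = x);
    @SuppressWarnings("unchecked")
    T first = (T) any[0];
    return found ? java.util.Optional.of(first) : java.util.Optional.empty();
}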
What is strange, is this condition:
if (prefix == null || prefix.getExactSizeIfKnown() == 0) {
// we can't split this any further
(and a similar loop termination in the SUBSIZED path)
Just because one prefix happened to have a known zero size, it does not imply that the suffix can’t split further. Nothing in the specification says that.
As a consequence of this condition, Stream.concat(Stream.of("foo"), Stream.of("bar","baz")) can be handled optimally, whereas for Stream.concat(Stream.of(), Stream.of("bar", "baz")), it will fall back to a traversal, because the first prefix has a known size of zero.
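A small probe in the same style as the test code above shows what findLast sees in that case - the prefix split off for the empty first stream should report an exact size of 0, which is exactly what trips the condition:
Spliterator<String> sp2 = Stream.concat(
        Stream.<String>of(),
        Stream.of("bar", "baz"))
    .spliterator();
Spliterator<String> prefix = sp2.trySplit();
System.out.println("prefix exact size: "
    + (prefix == null ? "null" : prefix.getExactSizeIfKnown()));
System.out.println("suffix exact size: " + sp2.getExactSizeIfKnown());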
I'm modeling a fastfood drive-through using a priority queue of Event objects (yep, homework). There are three stations, an order, payment and collection station, each with their own queues. I'm having an issue removing an item from the collection station (the last station visited by a Customer). When the collection queue fills up, the simulation begins to loop indefinitely and the timer no longer increments. I assume it's because of this line:
public void processFoodCollection(Customer c, Event e) {
collection.remove(c);//this is the issue I believe
collection.setServerStatus(false);
However, if I attempt to use my standard remove() method (which just calls queue.poll() in the station class), it returns null. I have no idea why this would happen after having just added to the queue; it has given me zero problems any other time I've used it at the other stations (and I just copy-pasted the methods for each station class, they're all identical). I'd really appreciate any help identifying what is causing this loop, or, if it is the remove(c), how I can fix it.
Here's my Restaurant class (contains the simulation, lazy I know), but hopefully it's not necessary to look at since my documentation isn't complete yet: http://pastebin.com/cHj3xqJN
Here's the method in particular I'm having an issue with (collection is a CollectionStation variable composed of a queue field; the remove(Customer c) and remove() methods are just for access to the queue methods):
public void processFoodCollection(Customer c, Event e) {
collection.remove(c);//Customer c should be head of queue
collection.setServerStatus(false);//cashier not helping anyone
if (collection.getQueueSize() < 2) {
//process event if room available in collection queue
collection.add(payment.remove());//remove customer from payment queue, add to collection queue
payment.setServerStatus(false);
if (!collection.getServerStatus())
createFinishCollection(collection.peek());//creates event to be processed in the future
//generate new finish payment event for new head of payment queue
if (payment.getQueueSize()>0) {
double b = this.exponentialPay.next();//minutes until finished paying
while (b == 0.0) {
b = this.exponentialPay.next();//ensure return > 0.0
}
createFinishPayment(b, payment.peek());
}
//check if head of order queue can move up
if (order.getQueueSize() > 0) {
if (order.peek().getTimeFinished() == this.clockTime && order.getServerStatus()) {
processFinishOrder(order.peek(), eventList.peek());
} else if (!order.getServerStatus()) {
double timeToOrder = (order.peek().getOrderSize() * this.timeToOrderItem);
createFinishOrder(timeToOrder, order.peek());
}
}
}
}//end method processFoodCollection
If I try to use the remove() method, I get a null pointer exception at:
Event newE = new Event(Events.FINISHFOODCOLLECTION, this.clockTime + (c.getOrderSize() * this.timeToProcessItem), c);
from the null customer object (c). This traces back to the createFinishCollection() method call in the above code. So this is how I know I get a null from calling remove, but I don't understand why my queue would say it's empty when I just added to it. Is there some trick to look out for when indirectly removing data structure elements?
Here's the remove method I'm calling (in the CollectionStation class):
/**
 * Removes and returns the object at the front of the queue.
 * @return the Customer at the front of the queue, or null if the queue is empty
 */
public Customer remove() {
return customerQueue.poll();
}//end method remove
I'm honestly pretty stumped why this wouldn't work. Any guidance would be appreciated (not looking for answers, just help).
I'm trying to find an alternative to using java.util.TreeMap in a threaded environment, due to the memory TreeMap consumes and doesn't free, using Sun JDK 1.6. We have a constantly resizing TreeMap, which needs to stay sorted by key:
public class WKey implements Comparable<Object> {
private Long ms = null;
private Long id = null;
public WKey(Long ms, Long id) {
this.ms = ms;
this.id = id;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((id == null) ? 0 : id.hashCode());
result = prime * result + ((ms == null) ? 0 : ms.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
WKey other = (WKey) obj;
if (id == null) {
if (other.id != null)
return false;
} else if (!id.equals(other.id))
return false;
if (ms == null) {
if (other.ms != null)
return false;
} else if (!ms.equals(other.ms))
return false;
return true;
}
@Override
public int compareTo(Object arg0) {
WKey k = (WKey) arg0;
if (this.ms < k.ms)
return -1;
else if (this.ms.equals(k.ms)) {
if (this.id < k.id)
return -1;
else if (this.id.equals(k.id)) {
return 0;
}
}
return 1;
}
}
Thread 1
-------------------------
Iterator<WKey> it = result.keySet().iterator();
if (it.hasNext()) {
WKey key = it.next();
/// Some processing here
result.remove(key);
}
Constantly retrieves the first element within the TreeMap and then
removes it.
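As an aside, both TreeMap and ConcurrentSkipListMap implement NavigableMap, whose pollFirstEntry() does the "read the first key, then remove it" step in a single call. A rough sketch of that consumer step, assuming the ConcurrentSkipListMap variant mentioned later so the reference can be typed as a ConcurrentNavigableMap (this only illustrates the idiom; it is not a fix for the memory behaviour described below):
ConcurrentNavigableMap<WKey, Object[]> result = new ConcurrentSkipListMap<WKey, Object[]>();
// consumer loop body:
Map.Entry<WKey, Object[]> first = result.pollFirstEntry();
if (first != null) {
    // Some processing here, using first.getKey() / first.getValue()
}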
Threads 2, 3, and 4
-------------------------
for (Object r : rs) {
Object[] row = (Object[]) r;
Long ms = ((Calendar) row[1]).getTimeInMillis();
Long id = (Long) row[0];
WKey key = new WKey(ms, id);
result.put(key, row);
}
These are bulk processing threads which process results returned from various services; the results are generally basic POJOs. A key is generated for each POJO based on its id and timestamp, using the key class above. I cannot modify the POJO itself to be comparable, so I must use this key. After keys have been generated and processed, the entries are inserted into a shared tree map, from which they are pulled off in sorted order by the processing thread.
We were using:
Map<WKey, Object[]> result =
Collections.synchronizedMap(new TreeMap<WKey, Object[]>());
We also tried using ConcurrentSkipListMap:
SortedMap<WKey, Object[]> result =
new ConcurrentSkipListMap<WKey, Object[]>();
We are experimenting with big data and need a collection which uses memory efficiently whenever remove or put is called in a threaded environment. We are inserting records by the hundreds of thousands and removing elements from the top on an as-needed basis. We need a container which can scale. The problem with TreeMap is that it never releases memory unless you recreate the container with new Collections.synchronizedMap(new TreeMap()). That is an expensive operation to perform in a threaded environment every time an entry is removed.
Alternatively, I've been experimenting with Javolution. It has a FastSortedMap, which seems to fit in nicely. However, I find their implementation and usage of the collection rather quirky and lacking sufficient documentation and examples.
They do have a few examples listed in the doc, which relate to the classes FastSortedMap is derived from, but nothing seems to work:
A high-performance hash map with real-time behavior. Related to FastCollection, fast map supports various views.
atomic() - Thread-safe view for which all reads are mutex-free and map updates (e.g. putAll) are atomic.
shared() - View allowing concurrent modifications.
parallel() - A view allowing parallel processing including updates.
sequential() - View disallowing parallel processing.
unmodifiable() - View which does not allow any modifications.
entrySet() - FastSet view over the map entries allowing entries to be added/removed.
keySet() - FastSet view over the map keys allowing keys to be added (map entry with null value).
values() - FastCollection view over the map values (add not supported).
I instantiated the following collection as a replacement for TreeMap:
private FastMap<WKey, Object[]> result =
new FastSortedMap<WKey, Object[]>().shared();
However, once another thread touches the container, all the member functions start to fail. I still encounter null values returned from result.iterator().next(), size() sometimes hangs, result.keySet().min() is very sluggish, and result.get returns null. None of the examples in the doc really show how the concurrent views listed above are used. It's really frustrating.
I've looked at Apache Collections, but I'm afraid I might run into the same issue, as many of their sorted collections are derived from java.util HashMaps and TreeMaps. I looked into Guava as well, but their sorted containers require you to implement Comparable on both key and value. I was trying to avoid implementing Comparable on the 'value'. I don't need to sort both objects. If I implemented Comparable on the value, I would just use a sorted list, queue, or table. Highscale and Trove don't have ordered maps. Fastutil may be a candidate, but I'd have to synchronize everything manually, and I'm trying to save time.
I've reviewed others listed in the stackoverflow benchmark post, but the projects listed previously seem to be my best alternatives.
So far, I'm not convinced Javolution is everything they advertise on their site. My experience is that their implementation is very inconsistent, lacking in documentation, and performs rather sluggishly in threaded environments. TreeMap performs great; I just wish it wouldn't allocate in such large bursts and GC every now and then. However, I'm hoping there might be somebody out there to prove me wrong, and maybe even demonstrate appropriate usage of Javolution's collections in a threaded environment.
Otherwise, if somebody knows a way around resizing TreeMaps without using 'new', or has solved similar or alternative problems involving threading and sorted maps, any info would be greatly appreciated!
I have two processes (producer/consumer). The first one puts elements in a Collection, the second one reads them.
I want the second process not to read every individual element, but wait until:
There are at least N elements in the collection OR
The last element was received T seconds ago.
Is there any Collection in Java 5+ that allows this kind of behaviour? I was thinking about an implementation of Queue, but I've only found DelayQueue, which is not exactly what I need.
Thank you.
I'd implement an observable collection. The second process will listen to events signalling that N elements are in the collection (events based on the size attribute) and that no element has been added for a certain time (this needs a timer that is reset on every add operation).
Something like this (just drafting the size requirement):
public class ObservableCollection implements Collection {
    private int sizeTrigger;
    private Collection collection;
    private Collection<Listener> listeners = new ArrayList<Listener>();

    public ObservableCollection(Collection collection) {
        this.collection = collection;
    }

    @Override
    public boolean add(Object element) {
        boolean added = collection.add(element);
        if (collection.size() >= sizeTrigger) {
            fireSizeEvent();
        }
        return added;
    }

    private void fireSizeEvent() {
        for (Listener listener : listeners) {
            listener.thresholdReached(this);
        }
    }

    // addListener, removeListener and implementations of the remaining Collection methods
}
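Hypothetical usage of the draft above (Listener is assumed to be a small callback interface with a single thresholdReached(Collection) method, and the size trigger is assumed to have a setter):
ObservableCollection buffer = new ObservableCollection(new ArrayList());
buffer.setSizeTrigger(10);                 // hypothetical setter for sizeTrigger
buffer.addListener(new Listener() {
    public void thresholdReached(Collection source) {
        // second process: at least N elements are available, drain them here
    }
});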