LinkedHashMap access by index vs Performance - java

I would like to discuss the performance of a particular collection, LinkedHashMap, for a particular requirement, and how the new features of Java 8 or 9 could help with it.
Let's suppose I have the following LinkedHashMap:
private Map<Product, Item> items = new LinkedHashMap<>();
Using the default constructor means this Map follows insertion order when iterated.
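For instance, a quick sketch of that guarantee (the keys here are just illustrative):
import java.util.LinkedHashMap;
import java.util.Map;

Map<String, Integer> m = new LinkedHashMap<>();
m.put("first", 1);
m.put("third", 3);
m.put("second", 2);
// prints first, third, second -- insertion order, not hash order
m.keySet().forEach(System.out::println);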
--EDITED--
Just to be clear here: I understand that a Map is not the right data structure to be accessed by index. It happens that this class actually needs two remove methods: one by Product (the key - the right way), and one by position, or index, which is not common; hence my concern about performance. By the way, it's not MY requirement.
I have to implement a removeItem() method by index. For those who don't know: a LinkedHashMap has no map.get(index)-style method available.
So I will list a couple of solutions:
Solution 1:
public boolean removeItem(int position) {
    List<Product> orderedList = new ArrayList<>(items.keySet());
    Product key = orderedList.get(position);
    return items.remove(key) != null;
}
Solution 2:
public boolean removeItem(int position) {
    int counter = 0;
    Product key = null; // assuming there are no null keys
    for (Map.Entry<Product, Item> entry : items.entrySet()) {
        if (counter == position) {
            key = entry.getKey();
            break;
        }
        counter++;
    }
    return items.remove(key) != null;
}
Considerations about these two solutions:
S1: I understand that an ArrayList has fast iteration and access, so I believe the problem here is that a whole new collection is created; memory would suffer if I had a huge collection.
S2: I understand that LinkedHashMap iteration is faster than a HashMap's but not as fast as an ArrayList's, so I believe iteration time would suffer with a huge collection, but not memory.
Considering all of that, and that my considerations are correct, can I say that both solutions have O(n) complexity?
Is there a better solution for this case in terms of performance, using the latest features of Java 8 or 9?
Cheers!

As said by Stephen C, the time complexity is the same: in either case you have a linear iteration. The efficiency still differs, though, as the second variant only iterates up to the specified element instead of creating a complete copy.
You could optimize this even further by not performing an additional lookup after finding the entry. To use the pointer to the actual location within the Map, you have to use its Iterator explicitly:
public boolean removeItem(int position) {
    if (position >= items.size()) return false;
    Iterator<?> it = items.values().iterator();
    for (int counter = 0; counter < position; counter++) it.next();
    boolean result = it.next() != null;
    it.remove();
    return result;
}
This follows the logic of your original code to return false if the key was mapped to null. If you never have null values in the map, you could simplify the logic:
public boolean removeItem(int position) {
    if (position >= items.size()) return false;
    Iterator<?> it = items.entrySet().iterator();
    for (int counter = 0; counter <= position; counter++) it.next();
    it.remove();
    return true;
}
You may retrieve a particular element using the Stream API, but the subsequent remove operation requires a lookup, which makes it less efficient than calling remove on an iterator that already has a reference to the position in the map, for most implementations.
public boolean removeItem(int position) {
    if (position >= items.size() || position < 0)
        return false;
    Product key = items.keySet().stream()
                       .skip(position)
                       .findFirst()
                       .get();
    items.remove(key);
    return true;
}

Considering all of that, and that my considerations are correct, can I say that both solutions have O(n) complexity?
Yes. The average complexity is the same.
In the first solution the new ArrayList<>(items.keySet()) step is O(N).
In the second solution the loop is O(N).
There is difference in the best case complexity though. In the first solution you always copy the entire list. In the second solution, you only iterate as far as you need to. So the best case is that it can stop iterating at the first element.
But while the average complexity is O(N) in both cases, my gut feeling is that the second solution will be fastest. (If it matters to you, benchmark it ...)
Is there a better solution for this case in terms of performance, using the latest features of Java 8 or 9?
Java 8 and Java 9 don't offer any performance improvements.
If you want better that O(N) average complexity, you will need a different data structure.
The other thing to note is that indexing a Map's entries is generally not a useful thing to do. Whenever an entry is removed, the index values of some of the other entries change ....
Mimicking this "unstable" indexing behavior efficiently is difficult. If you want stable behavior, then you can augment your primary HashMap<K,V> / LinkedHashMap<K,V> with a HashMap<Integer,K> which you use for positional lookup / insertion / retrieval. But even that is a bit awkward ... consider what happens if you need to insert a new entry between the entries at positions i and i + 1.
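To illustrate, here is a minimal sketch of that augmented approach (the class and method names are hypothetical, and it deliberately sidesteps the insert-between-positions problem by never re-packing positions):
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

class PositionalMap<K, V> {
    private final Map<K, V> data = new LinkedHashMap<>();
    private final Map<Integer, K> positions = new HashMap<>();
    private int nextPos = 0;

    void put(K key, V value) {
        // each new key gets the next position; positions are never shifted
        if (!data.containsKey(key)) positions.put(nextPos++, key);
        data.put(key, value);
    }

    // O(1) removal by position, but it leaves a "hole" at pos instead of
    // renumbering later entries -- the "stable" behavior described above
    V removeByPosition(int pos) {
        K key = positions.remove(pos);
        return key == null ? null : data.remove(key);
    }
}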

Related

Java <key, value> collection to retrieve minimum element in O(1) or O(log(n)) at worst

I'm iterating through a huge file, reading a key and a value from every line. I need to keep a specific number (say 100k) of elements with the highest values. To store them, I figured I need a collection that lets me check the minimum value in O(1) or O(log(n)); if the value currently read is higher, the element with the minimum value is removed and the new one inserted. What collection enables me to do that? Values are not unique, so a BiMap is probably not adequate here.
EDIT:
Ultimate goal is to obtain best [key, value] that will be used later. Say my file looks like below (first column - key, second value):
3 6
5 9
2 7
1 6
4 5
Let's assume I'm looking for the best two elements and an algorithm to achieve that. I figured I'll use a key-based collection to store the best elements. The first two elements (<3, 6>, <5, 9>) will obviously be added to the collection, as its capacity is 2. But when I get to the third line, I need to check whether <2, 7> is eligible to be added, so I need to be able to check whether 7 is higher than the minimum value in the collection (6).
It sounds like you don't actually need a structure because you are simply looking for the largest N values with their corresponding keys, and the keys are not actually used for sorting or retrieval for the purpose of this problem.
I would use a PriorityQueue with the minimum value at the root. This allows you to retrieve the smallest element in constant time and, if your next value is larger, do removal and insertion in O(log N) time.
class V {
    int key;
    int value;
}

class ComparatorV implements Comparator<V> {
    @Override
    public int compare(V a, V b) {
        return Integer.compare(a.value, b.value);
    }
}
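A minimal sketch of how such a queue keeps the n best items (assuming the V and ComparatorV classes above; input stands for whatever yields the parsed lines):
import java.util.PriorityQueue;

PriorityQueue<V> best = new PriorityQueue<>(new ComparatorV());
int n = 100_000;
for (V candidate : input) {
    if (best.size() < n) {
        best.add(candidate);                          // still filling up, O(log n)
    } else if (candidate.value > best.peek().value) { // peek is O(1)
        best.poll();                                  // evict the current minimum, O(log n)
        best.add(candidate);                          // insert the better candidate, O(log n)
    }
}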
For your specific situation, you can use a TreeSet, and to get around the uniqueness of elements in a set you can store pairs which are comparable but which never appear equal when compared. This will allow you to violate the contract with Set which specifies that the Set not contain equal values.
The documentation for TreeSet contains:
"The behavior of a set is well-defined even if its ordering is inconsistent with equals; it just fails to obey the general contract of the Set interface."
So using the TreeSet with the Comparable inconsistent with equals should be fine in this situation. If you ever need to compare your chess pairs for a different reason (perhaps some other algorithm you are also running in this app) where the comparison should be consistent with equals, then provide a Comparator for the other use. Notice that TreeSet has a constructor which takes a Comparator, so you can use that instead of having ChessPair implement Comparable.
Notice: A TreeSet provides more flexibility than a PriorityQueue in general because of all of its utility methods, but by violating the "comparable consistent with equals" contract of Set, some of the functionality of the TreeSet is lost. For example, you can still remove the first element of the set using pollFirst (from NavigableSet), but you cannot remove an arbitrary element using remove, since that relies on the elements being equivalent.
Per your "O(1) or O(log(n)) at worst" requirement, the documentation also states:
"This implementation provides guaranteed log(n) time cost for the basic operations (add, remove and contains)."
Also, I provide an optimization below which reduces the minimum-value query to O(1).
Example
Set<ChessPair> s = new TreeSet<>();
and
public class ChessPair implements Comparable<ChessPair>
{
    final int location;
    final int value;

    public ChessPair(final int location, final int value)
    {
        this.location = location;
        this.value = value;
    }

    @Override
    public int compareTo(ChessPair o)
    {
        // never returns 0, so pairs with equal values can coexist in the TreeSet
        if (value < o.value) return -1;
        return 1;
    }
}
Now you have an ordered set containing your pairs of numbers, ordered by value, with duplicate values allowed, and you can get the associated locations. You can also easily grab the first element (set.first()), the last (set.last()), get a sub-set (set.subSet(a, b)), or iterate over the first (or last, by using descendingSet) n elements. This provides everything you asked for.
Example Use
You specified wanting to keep the 100 000 best elements. So I would use one algorithm for the first 100 000 possibilities which simply adds every time.
for (int i = 0; i < 100000 && dataSource.hasNext(); i += 1)
{
    ChessPair p = dataSource.next(); // or whatever you do to get the next line
    set.add(p);
}
and then a different one after that
while (dataSource.hasNext())
{
    ChessPair p = dataSource.next();
    if (p.value > set.first().value)
    {
        set.pollFirst(); // removes and returns the smallest element
        set.add(p);
    }
}
Optimization
In your case, you can insert an optimization into the algorithm where you compare against the lowest value. The above, simple version performs an O(log(n)) operation every time it compares against minimum-value since set.first() is O(log(n)). Instead, you can store the minimum value in a local variable.
This optimization works well for scaling this algorithm because the impact is negligible - no gain, no loss - when n is close to the total data set size (ie: you want best 100 values out of 110), but when the total data set is vastly larger than n (ie: best 100 000 out of 100 000 000 000) the query for the minimum value is going to be your most common operation and will now be constant.
So now we have (after loading the initial n values)...
int minimum = set.first().value;
while(dataSource.hasNext())
{
ChessPair p = dataSource.next();
if(p.value > minimum)
{
set.remove(set.pollFirst());
set.add(p);
minimum = set.first().value;
}
}
Now your most common operation - querying the minimum value - is constant time (O(1)), your second most common operation - add - is worst case log(n) time, and your least common operation - remove - is worst case log(n) time.
For arbitrarily large data sets, each input that does not beat the current minimum is now processed in constant O(1) time.
See java.util.TreeSet
Previous answer (now obsolete)
Based on question edits and discussion in the question's comments, I no longer believe my original answer to be correct. I am leaving it below for reference.
If you want a Map collection which allows fast access to elements based on order, then you want an ordered Map, for which there is a sub-interface: SortedMap. Fortunately for you, Java has a great implementation of SortedMap: TreeMap, a Map backed by a "red-black" tree, an ordered tree structure.
Red-black trees are nice since they rotate branches in order to keep the tree balanced. That is, you will not end up with a tree that branches n times in one direction, yielding n layers, just because your data may already have been sorted. You are guaranteed approximately log(n) layers in the tree, so it is always fast and guarantees log(n) queries even in the worst case.
For your situation, try out the java.util.TreeMap. On the page linked in the previous sentence, there are links also to Map and SortedMap. You should check out the one for SortedMap too, so you can see where TreeMap gets some of the specific functionality that you are looking for. It allows you to get the first key, the last key, and a sub-map that fetches a range from within this map.
For your situation though, it is probably sufficient to just grab an iterator from the TreeMap and iterate over the first n pairs, where n is the number of lowest (or highest) values that you want.
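A short sketch of those lookups on a TreeMap, keyed by value here (which silently assumes the values are unique - the very limitation that makes this answer obsolete for the edited question):
import java.util.TreeMap;

TreeMap<Integer, String> byValue = new TreeMap<>();
byValue.put(6, "key 3");
byValue.put(9, "key 5");
byValue.put(7, "key 2");
byValue.firstKey();       // 6, the smallest value, O(log n)
byValue.lastKey();        // 9, the largest value, O(log n)
byValue.headMap(9);       // view of all entries with value < 9
byValue.pollFirstEntry(); // removes and returns the 6="key 3" entry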
Use a TreeSet, which offers O(log n) insertion and O(log n) retrieval of either the highest or lowest scored item.
Your item class must:
Implement Comparable
Not override equals()
To keep the top 100K items only, use this code:
Item item; // to add
if (treeSet.size() == 100_000) {
    if (treeSet.first().compareTo(item) < 0) {
        treeSet.remove(treeSet.first());
        treeSet.add(item);
    }
} else {
    treeSet.add(item);
}
If you want a collection ordered by values, you can use a TreeSet which stores tuples of your keys and values. A TreeSet has O(log(n)) access times.
class KeyValuePair<Key, Value extends Comparable<Value>>
        implements Comparable<KeyValuePair<Key, Value>> {
    Key key;
    Value value;

    KeyValuePair(Key key, Value value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public int compareTo(KeyValuePair<Key, Value> other) {
        return this.value.compareTo(other.value);
    }
}
or instead of implementing Comparable, you can pass a Comparator to the set at creation time.
You can then retrieve the first value using treeSet.first().value.
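A brief usage sketch, passing a Comparator at creation time as suggested (note that two pairs with equal values compare as equal here and would be deduplicated - the caveat discussed in the TreeSet answer above):
import java.util.TreeSet;

TreeSet<KeyValuePair<Integer, Integer>> best =
        new TreeSet<>((p, q) -> Integer.compare(p.value, q.value));
best.add(new KeyValuePair<>(3, 6));
best.add(new KeyValuePair<>(5, 9));
int minValue = best.first().value; // 6, the smallest stored value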
Something like this?
An entry for your data structure that can be sorted based on the value:
class Entry implements Comparable<Entry> {
    public final String key;
    public final long value;

    public Entry(String key, long value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public int compareTo(Entry other) {
        // subtracting longs yields a long, which compareTo cannot return
        return Long.compare(this.value, other.value);
    }

    @Override
    public int hashCode() {
        // hashCode based on the same values on which equals works
        return Long.hashCode(value);
    }
}
Actual code that works with a PriorityQueue. The sorting is based on the value, not on the key as with a TreeMap; this comes from the compareTo method defined in Entry. If the set grows above maxSize, the entry with the lowest value is removed.
public class ProcessData {
    private int maxSize;
    private PriorityQueue<Entry> largestEntries;

    public ProcessData(int maxSize) {
        this.maxSize = maxSize;
        // create the queue here: a field initializer would run before
        // maxSize is assigned, and new PriorityQueue<>(0) throws
        this.largestEntries = new PriorityQueue<>(maxSize);
    }

    public void addKeyValue(String key, long value) {
        largestEntries.add(new Entry(key, value));
        if (largestEntries.size() > maxSize) {
            largestEntries.poll(); // evict the smallest entry
        }
    }
}

What is the time and space complexity of method retainAll when used on HashSets in Java?

For example in the code below:
public int commonTwo(String[] a, String[] b)
{
    Set<String> common = new HashSet<>(Arrays.asList(a));
    common.retainAll(new HashSet<>(Arrays.asList(b)));
    return common.size();
}
Let's take a look at the code. The method retainAll is inherited from AbstractCollection and (at least in OpenJDK) looks like this:
public boolean retainAll(Collection<?> c) {
    boolean modified = false;
    Iterator<E> it = iterator();
    while (it.hasNext()) {
        if (!c.contains(it.next())) {
            it.remove();
            modified = true;
        }
    }
    return modified;
}
There is one big thing to note here: we loop over this.iterator() and call c.contains. So the time complexity is n calls to c.contains, where n = this.size(), plus at most n calls to it.remove().
The important thing is that the contains method is called on the other Collection, so the complexity depends on the complexity of the other Collection's contains.
So, whilst:
Set<String> common = new HashSet<>(Arrays.asList(a));
common.retainAll(new HashSet<>(Arrays.asList(b)));
Would be O(a.length), as HashSet.contains and HashSet.remove are both O(1) (amortized).
If you were to call
common.retainAll(Arrays.asList(b));
Then, due to the O(n) contains on the list returned by Arrays.asList, this would become O(a.length * b.length) - i.e. by spending O(n) copying the array to a HashSet you actually make the call to retainAll much faster.
As far as space complexity goes, no additional space (beyond the Iterator) is required by retainAll, but your invocation is actually quite expensive space-wise, as you allocate two new HashSets, each of which is a fully fledged HashMap underneath.
Two further things can be noted:
There is no reason to build a HashSet from the elements of a - a cheaper collection that also has O(1) removal from the middle, such as a LinkedList, can be used (cheaper in memory and in build time, as no hash table is built).
Your modifications are lost, as you create new collection instances and only return the resulting size.
The implementation can be found in the java.util.AbstractCollection class. The way it is implemented looks like this:
public boolean retainAll(Collection<?> c) {
    Objects.requireNonNull(c);
    boolean modified = false;
    Iterator<E> it = iterator();
    while (it.hasNext()) {
        if (!c.contains(it.next())) {
            it.remove();
            modified = true;
        }
    }
    return modified;
}
So it will iterate everything in your common set and check if the collection that was passed as a parameter contains this element.
In your case both are HashSets, thus it will be O(n), as contains should be O(1) amortized and iterating over your common set is O(n).
One improvement you can make is simply not to copy a into a new HashSet: since it will be iterated anyway, you can keep a list.
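A sketch of that improvement, keeping the method's original shape (ArrayList.retainAll does a single-pass batch removal, so only the argument needs O(1) contains):
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public int commonTwo(String[] a, String[] b) {
    List<String> common = new ArrayList<>(Arrays.asList(a));
    common.retainAll(new HashSet<>(Arrays.asList(b)));
    return common.size();
}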

Good way to get *any* value from a Java Set?

Given a simple Set<T>, what is a good way (fast, few lines of code) to get any value from the Set?
With a List, it's easy:
List<T> things = ...;
return things.get(0);
But, with a Set, there is no .get(...) method because Sets are not ordered.
A Set<T> is an Iterable<T>, so iterating to the first element works:
Set<T> things = ...;
return things.iterator().next();
Guava has a method to do this, though the above snippet is likely better.
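For reference, the Guava method in question is along the lines of Iterables.getFirst, which also lets you supply a default for the empty case:
import com.google.common.collect.Iterables;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

Set<String> things = new HashSet<>(Arrays.asList("a", "b"));
String any = Iterables.getFirst(things, null); // null if the set is empty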
Since streams are present, you can do it that way too, but you have to use the class java.util.Optional. Optional is a wrapper class for an element or explicitly no element (avoiding a NullPointerException).
// returns an Optional
Optional<T> optT = set.stream().findAny();
// Optional.isPresent() yields false if the set was empty, avoiding a NullPointerException
if (optT.isPresent()) {
    // Optional.get() returns the actual element
    return optT.get();
}
Edit:
As I use Optional quite often myself: there is a way to access the element or get a default in case it's not present:
optT.orElse(other) returns either the element or, if not present, other (other may be null, by the way).
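Putting it together, the whole check above collapses to a one-liner, with null (or any other default) standing in for the empty case:
String any = set.stream().findAny().orElse(null); // assuming a Set<String> set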
Getting any element from a Set or Collection may seem like an uncommon demand, but it is quite common when, for example, one needs to calculate statistics on the keys or values of a Map and must initialise min/max values. Any element from the Set/Collection (returned by Map.keySet() or Map.values()) will be used for this initialisation prior to updating min/max over each element.
So, what options does one have when faced with this problem while trying to keep memory use and execution time small and the code clear?
Often you get the usual: "convert the Set to an ArrayList and get the first element". Great! Another array of millions of items, and extra processing cycles to retrieve the objects from the Set, allocate the array and populate it:
HashMap<K, V> map;
List<K> list = new ArrayList<>(map.keySet()); // min/max of keys
min = max = list.get(0).some_property(); // initialisation step
for (int i = list.size(); i-- > 1; ) {
    if (min > list.get(i).some_property()) { ... }
    ...
}
Or one may use looping with an Iterator, using a flag to denote that min/max need to be initialised and a conditional statement to check if that flag is set for all the iterations in the loop. This implies a lot of conditional checking.
boolean flag = true;
Iterator<K> it = map.keySet().iterator();
K akey;
while (it.hasNext()) {
    akey = it.next();
    if (flag) {
        // initialisation step
        min = max = akey.some_property();
        flag = false;
    } else {
        if (min > akey.some_property()) { min = akey.some_property(); }
        ...
    }
}
Or do the initialisation outside the loop:
HashMap<K, V> map;
Iterator<K> it = map.keySet().iterator();
K akey;
if (it.hasNext()) {
    // initialisation step:
    akey = it.next();
    min = max = akey.some_property();
    while (it.hasNext() && ((akey = it.next()) != null)) {
        if (min > akey.some_property()) { min = akey.some_property(); }
    }
}
But is it really worth all this manoeuvring on behalf of the programmer (and setting up the Iterator on behalf of the JVM) whenever min/max is needed?
The suggestion from a javally-correct ol' sport could well be: "wrap your Map in a class which keeps track of min and max values when put or deleted!".
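A minimal sketch of that suggestion (a hypothetical class; it only tracks puts - removals would force a rescan or a secondary structure, which is exactly the hidden cost):
import java.util.HashMap;

class MinMaxMap<K> extends HashMap<K, Integer> {
    private int min = Integer.MAX_VALUE;
    private int max = Integer.MIN_VALUE;

    @Override
    public Integer put(K key, Integer value) {
        // maintain the extremes on every insertion, so reads are O(1)
        if (value < min) min = value;
        if (value > max) max = value;
        return super.put(key, value);
    }

    public int min() { return min; }
    public int max() { return max; }
}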
There is another situation in which, in my experience, the need for just any item from a Map arises: when the map contains objects that share a common property - the same for all of them in that map - and you need to read that property. For example, suppose there is a Map holding bins of the same histogram, all with the same number of dimensions. Given such a Map, you may need to know the number of dimensions of just any Histobin in the Map in order to, say, create another Histobin of the same dimensions. Do I need to set up an iterator again and dispose of it after calling next() just once? I will skip the javally-correct person's suggestion for this situation.
And if all the trouble of getting the any element costs only insignificant extra memory and CPU cycles, then what about all the code one has to write just to get the hard-to-get any element?
We need the any element. Give it to us!

Any big difference between using contains or loop through a list?

Performance wise, is there really a big difference between using:
ArrayList.contains(o) vs foreach|iterator
LinkedList.contains(o) vs foreach|iterator
Of course, for the foreach|iterator loops, I'll have to compare the objects explicitly and return true or false accordingly.
The objects I'm comparing have equals() and hashCode() both properly overridden.
EDIT: I don't need to know about containsValue after all, sorry about that. I also realized my question about containsKey vs foreach made no sense, so never mind that part. I basically want to know about the ones above (I edited out the others).
EDITED:
With the new form of the question no longer including HashMap and TreeMap, my answer is entirely different. I now say no.
I'm sure that other people have answered this, but in both LinkedList and ArrayList, contains() just calls indexOf(), which iterates over the collection.
It's possible that there are tiny performance differences, both between LinkedList and ArrayList and between contains and foreach, but there aren't any big differences.
This makes no difference, since contains(o) calls indexOf(o), which simply loops like this:
for (int i = 0; i < size; i++)
    if (o.equals(elementData[i]))
        return i;
(Checked in ArrayList)
Without benchmarking, contains should be faster or the same in all cases.
For 1 and 2, it doesn't need to call the iterator methods. It can loop internally. Both ArrayList and LinkedList implement contains in terms of indexOf
ArrayList - indexOf is a C-style for loop on the backing array.
LinkedList - indexOf walks the linked list in a C-style for loop.
For 3 and 4, you have to distinguish between containsKey and containsValue.
3. HashMap, containsKey is O(1). It works by hashing the key, getting the associated bucket, then walking the linked list. containsValue is O(n) and works by simply checking every value in every bucket in a nested for loop.
4. TreeMap, containsKey is O(log n). It checks whether it's in range, then searches the red-black tree. containsValue, which is O(n), uses an in-order walk of the tree.
ArrayList.contains does
return indexOf(o) >= 0;
where
public int indexOf(Object o) {
    if (o == null) {
        for (int i = 0; i < size; i++)
            if (elementData[i] == null)
                return i;
    } else {
        for (int i = 0; i < size; i++)
            if (o.equals(elementData[i]))
                return i;
    }
    return -1;
}
It's similar for LinkedList, only it uses .next() to iterate through the elements, so not much difference there.
public int indexOf(Object o) {
    int index = 0;
    if (o == null) {
        for (Entry e = header.next; e != header; e = e.next) {
            if (e.element == null)
                return index;
            index++;
        }
    } else {
        for (Entry e = header.next; e != header; e = e.next) {
            if (o.equals(e.element))
                return index;
            index++;
        }
    }
    return -1;
}
HashMap.containsKey uses the hash of the key to fetch the bucket of keys with that hash (which is fast) and then calls equals only on those keys, so there's an improvement there; but containsValue() goes through the values with a plain loop.
TreeMap.containsKey seems to do an informed search, using a comparator to find the key faster, so still better; but containsValue still seems to go through the entire tree until it finds a value.
Overall I think you should use the methods, since they're easier to write than doing a loop every time :).
I think using contains is better, because generally the library implementation is more efficient than a manual implementation of the same thing. Also check whether you can pass, at construction time or afterwards, a comparator you have written that takes care of your custom equals and hashCode implementation.
Traversing the container with foreach/iterator is always O(n) time.
ArrayList/LinkedList search is O(n) as well.
HashMap.containsKey() is O(1) amortized time.
TreeMap.containsKey() is O(log n) time.
For both HashMap and TreeMap, containsValue() is O(n), but there may be implementations optimized for containsValue() to be as fast as containsKey().

Java PriorityQueue with fixed size

I am calculating a large number of possible resulting combinations of an algorithm. To sort these combinations, I rate them with a double value and store them in a PriorityQueue. Currently, there are about 200k items in that queue, which is pretty memory-intensive. Actually, I only need, let's say, the best 1000 or 100 of all the items in the list.
So I just started to ask myself whether there is a way to have a priority queue with a fixed size in Java. It should behave like this:
Is the new item better than one of those already stored? If yes, insert it at the according position and throw away the element with the lowest rating.
Does anyone have an idea?
que.add(d);
if (que.size() > YOUR_LIMIT)
    que.poll();
or did I misunderstand your question?
Edit: I forgot to mention that for this to work you probably have to invert your compareTo function, since it will throw away the one with the highest priority each cycle. (If a is "better" than b, compare(a, b) should return a positive number.)
For example, to keep the biggest numbers, use something like this:
public int compare(Double first, Double second) {
    // keep the biggest values; Double.compare also handles equal inputs
    return Double.compare(first, second);
}
MinMaxPriorityQueue, Google Guava
There is indeed a class for maintaining a queue that, when adding an item that would exceed the maximum size of the collection, compares the items to find an item to delete and thereby create room: MinMaxPriorityQueue found in Google Guava as of version 8.
EvictingQueue
By the way, if you merely want deleting the oldest element without doing any comparison of the objects’ values, Google Guava 15 gained the EvictingQueue class.
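A quick sketch of both, assuming Guava is on the classpath:
import com.google.common.collect.EvictingQueue;
import com.google.common.collect.MinMaxPriorityQueue;

// bounded by comparison: once past 1000 elements, the greatest one
// (per natural ordering here) is evicted on each further add
MinMaxPriorityQueue<Double> best =
        MinMaxPriorityQueue.maximumSize(1000).create();

// bounded FIFO: once past 3 elements, the oldest is evicted, no comparison
EvictingQueue<Double> recent = EvictingQueue.create(3);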
There is a fixed size priority queue in Apache Lucene: http://lucene.apache.org/java/2_4_1/api/org/apache/lucene/util/PriorityQueue.html
It has excellent performance based on my tests.
Use SortedSet:
SortedSet<Item> items = new TreeSet<Item>(new Comparator<Item>(...));
...
void addItem(Item newItem) {
    if (items.size() >= 100) {
        Item lowest = items.first();
        if (newItem.greaterThan(lowest)) {
            items.remove(lowest); // make room by evicting the lowest
            items.add(newItem);
        }
        // otherwise drop newItem: the set is full and it is not better
    } else {
        items.add(newItem);
    }
}
Just poll() the queue if its least element is less than (in your case, has worse rating than) the current element.
static <V extends Comparable<? super V>>
PriorityQueue<V> nbest(int n, Iterable<V> valueGenerator) {
    PriorityQueue<V> values = new PriorityQueue<V>();
    for (V value : valueGenerator) {
        if (values.size() == n && value.compareTo(values.peek()) > 0)
            values.poll(); // remove least element, current is better
        if (values.size() < n) // we removed one or haven't filled up, so add
            values.add(value);
    }
    return values;
}
This assumes that you have some sort of combination class that implements Comparable that compares combinations on their rating.
Edit: Just to clarify, the Iterable in my example doesn't need to be pre-populated. For example, here's an Iterable<Integer> that will give you all natural numbers an int can represent:
Iterable<Integer> naturals = new Iterable<Integer>() {
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            int current = 0;

            @Override
            public boolean hasNext() {
                // becomes false once current overflows into negative values
                return current >= 0;
            }

            @Override
            public Integer next() {
                return current++;
            }

            @Override
            public void remove() {
                throw new UnsupportedOperationException();
            }
        };
    }
};
Memory consumption is very modest, as you can see - for over 2 billion values, you need two objects (the Iterable and the Iterator) plus one int.
You can of course rather easily adapt my code so it doesn't use an Iterable - I just used it because it's an elegant way to represent a sequence (also, I've been doing too much Python and C# ☺).
A better approach would be to moderate more tightly what goes onto the queue, removing and appending to it as the program runs. It sounds like there would be some room to exclude some of the items before you add them to the queue. That would be simpler than reinventing the wheel, so to speak.
It seems natural to just keep the top 1000 each time you add an item, but the PriorityQueue doesn't offer anything to achieve that gracefully. Maybe, instead of using a PriorityQueue, you can do something like this in a method:
List<Double> list = new ArrayList<Double>();
...
list.add(newOutput);
Collections.sort(list, Collections.reverseOrder()); // best values first
if (list.size() > 1000)
    list = new ArrayList<>(list.subList(0, 1000)); // copy, so the tail can be collected
