O(1) in a Java algorithm

I have two endpoints: one responsible for receiving transactions and the other for generating stats based on transactions from the last minute only.
To store them, I'm using a ConcurrentNavigableMap:
@Component
@Log
public class DatastoreComponent {

    private ConcurrentNavigableMap<Long, List<Transaction>> transactions;

    public DatastoreComponent() {
        this.transactions = new ConcurrentSkipListMap<>();
    }

    public synchronized List<Transaction> addTransaction(Transaction t) {
        log.info("Adding transaction: " + t);
        List<Transaction> transactionAtGivenTime = transactions.get(t.getTimestamp());
        if (transactionAtGivenTime == null) transactionAtGivenTime = new ArrayList<>();
        transactionAtGivenTime.add(t);
        return transactions.put(t.getTimestamp(), transactionAtGivenTime);
    }
I use the timestamp as key, so that I can get all transactions from the last minute just by tailing the map, as follows:
public StatisticsFacade aggregate() {
    List<Transaction> validTransactions = new ArrayList<>();
    dataStore.getTransactions().tailMap(sixtySecondsAgo())
             .values()
             .parallelStream()
             .forEach(list -> validTransactions.addAll(list));
    statsAgg.aggreate(validTransactions);
    return this;
}
So far, so good (I guess?). Anyway, the processing happens in the statsAgg.aggreate() method, and this method should be O(1). My implementation looks like this:
public synchronized void aggreate(List<Transaction> validTransactions) {
    if (validTransactions == null || validTransactions.isEmpty())
        return;
    this.avg = validTransactions.parallelStream().mapToDouble(a -> a.getAmount()).average().getAsDouble();
    this.sum = validTransactions.parallelStream().mapToDouble(a -> a.getAmount()).sum();
    this.max = validTransactions.parallelStream().mapToDouble(a -> a.getAmount()).max().getAsDouble();
    this.min = validTransactions.parallelStream().mapToDouble(a -> a.getAmount()).min().getAsDouble();
    this.count = new Long(validTransactions.size());
}
I'm not really sure that this is O(1), since I'm running through the list four times. I tried extracting validTransactions.parallelStream().mapToDouble(a -> a.getAmount()) to a variable and re-using it, but of course, once a stream has been consumed it is closed and can't be used again.
So the question is: is this O(1), and if not, is there a way to run through the stream and do all these calculations at once?

An algorithm that solves your problem has to be at least O(n), as you have to go through each element in validTransactions at least once.
And it wouldn't become O(1) even if you ran validTransactions.parallelStream().mapToDouble(a -> a.getAmount()) just once.
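That said, you can at least compute all four statistics in a single pass using the JDK's DoubleSummaryStatistics; a minimal sketch, assuming Transaction exposes getAmount() as in your code:

// still O(n), but one traversal instead of four
DoubleSummaryStatistics stats = validTransactions.stream()
        .mapToDouble(Transaction::getAmount)
        .summaryStatistics();

this.avg   = stats.getAverage();
this.sum   = stats.getSum();
this.max   = stats.getMax(); // Double.NEGATIVE_INFINITY for an empty stream
this.min   = stats.getMin(); // Double.POSITIVE_INFINITY for an empty stream
this.count = stats.getCount();

Your existing empty-list guard already runs before these assignments, so the empty-stream edge cases never apply.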

Related

How to remove all elements that match a certain condition except for N greatest of them with Stream API

My question is: is there a better way to implement this task?
I have a list of orderable elements (in this example by age, the youngest first).
And I want to delete all elements that fulfill a condition (in this example red elements) but keep the first 2 of them.
// note: each pipeline needs its own Stream; a Stream cannot be consumed twice
Stream<ElementsVO> redStream = allElements.stream()
        .filter(elem -> elem.getColor() == RED)
        .sorted((c1, c2) -> c1.getAge() - c2.getAge())
        .limit(2);
Stream<ElementsVO> nonRedStream = allElements.stream()
        .filter(elem -> elem.getColor() != RED);
List<ElementsVO> resultList = Stream.concat(redStream, nonRedStream)
        .sorted((c1, c2) -> c1.getAge() - c2.getAge())
        .collect(Collectors.toList());
Any idea to improve this? Any way to implement an accumulator function or something like that with streams?
You can technically do this with a stateful predicate:
Predicate<ElementsVO> statefulPredicate = new Predicate<ElementsVO>() {
    private int reds = 0;

    @Override
    public boolean test(ElementsVO e) {
        if (e.getColor() == RED) {
            reds++;
            return reds <= 2; // keep the first two red elements
        }
        return true;
    }
};
Then:
List<ElementsVO> resultList =
    allElements.stream()
               .sorted(comparingInt(ElementsVO::getAge))
               .filter(statefulPredicate)
               .collect(toList());
This might work, but it is a violation of the Stream API: the documentation for Stream.filter says that the predicate should be stateless, which in general allows the stream implementation to apply the filter in any order. For small input lists, streamed sequentially, this will almost certainly be the appearance order in the list, but it's not guaranteed.
Caveat emptor. Your current way works, although you could do the partitioning of the list more efficiently using Collectors.partitioningBy to avoid iterating it twice, as sketched below.
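A minimal sketch of that partitioning idea, assuming the ElementsVO accessors used in the question:

// split the list in a single pass: true -> red elements, false -> everything else
Map<Boolean, List<ElementsVO>> partitioned = allElements.stream()
        .collect(Collectors.partitioningBy(elem -> elem.getColor() == RED));

List<ElementsVO> reds = partitioned.get(true);
reds.sort(Comparator.comparingInt(ElementsVO::getAge)); // youngest first

// keep the two youngest reds, merge with the rest, and sort the result by age
List<ElementsVO> resultList = Stream.concat(reds.stream().limit(2),
                                            partitioned.get(false).stream())
        .sorted(Comparator.comparingInt(ElementsVO::getAge))
        .collect(Collectors.toList());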
You can implement a custom collector that maintains two separate collections of RED and non-RED elements.
And since you only need the two red elements with the greatest age, you can introduce partial sorting to improve performance: the collection of red elements maintains an order and is always at most of size 2, so the overhead of sorting is far less significant than fully sorting all the red elements just to pick two of them.
In order to create a custom collector, you can use the static method Collector.of(), which expects the following arguments:
Supplier Supplier<A> provides the mutable container that stores elements of the stream. Because we need to separate elements by color into two groups, we can use as the container a map with only two keys (true and false), denoting whether the elements mapped to a key are red. To store the red elements and perform the partial sorting, we need a collection capable of maintaining order; PriorityQueue is a good choice for that purpose. To store all other elements, I've used ArrayDeque, which doesn't maintain order and is as fast as ArrayList.
Accumulator BiConsumer<A,T> defines how to add elements to the mutable container provided by the supplier. For this task, the accumulator has to guarantee that the queue containing red elements never exceeds the given size, by rejecting values smaller than the lowest value previously added and by removing the lowest value when the size limit has been reached and a new value needs to be added. This functionality is extracted into a separate method, tryAdd().
Combiner BinaryOperator<A> establishes the rule for merging two containers obtained while executing the stream in parallel. Here the combiner relies on the same logic described for the accumulator.
Finisher Function<A,R> produces the final result by transforming the mutable container. In the code below, the finisher dumps the contents of both queues into a stream, sorts it, and collects it into an immutable list.
Characteristics allow fine-tuning the collector by providing additional information on how it should function. Here the characteristic Collector.Characteristics.UNORDERED is applied, which indicates that the order in which partial results of the reduction are produced is not significant; that can improve the performance of this collector with parallel streams.
The code might look like this:
public static void main(String[] args) {
    List<ElementsVO> allElements =
        List.of(new ElementsVO(Color.RED, 25), new ElementsVO(Color.RED, 23), new ElementsVO(Color.RED, 27),
                new ElementsVO(Color.BLACK, 19), new ElementsVO(Color.GREEN, 23), new ElementsVO(Color.GREEN, 29));

    Comparator<ElementsVO> byAge = Comparator.comparing(ElementsVO::getAge);

    List<ElementsVO> resultList = allElements.stream()
        .collect(getNFiltered(byAge, element -> element.getColor() != Color.RED, 2));

    resultList.forEach(System.out::println);
}
The method below is responsible for creating a collector that partitions the elements based on the given predicate and sorts them in accordance with the provided comparator.
public static <T> Collector<T, ?, List<T>> getNFiltered(Comparator<T> comparator,
                                                        Predicate<T> condition,
                                                        int limit) {
    return Collector.of(
        () -> Map.of(true, new PriorityQueue<>(comparator),
                     false, new ArrayDeque<>()),
        (Map<Boolean, Queue<T>> isRed, T next) -> {
            if (condition.test(next)) isRed.get(false).add(next);
            else tryAdd(isRed.get(true), next, comparator, limit);
        },
        (Map<Boolean, Queue<T>> left, Map<Boolean, Queue<T>> right) -> {
            left.get(false).addAll(right.get(false));
            // merge the right-hand red queue into the left one, respecting the size limit
            right.get(true).forEach(next -> tryAdd(left.get(true), next, comparator, limit));
            return left;
        },
        (Map<Boolean, Queue<T>> isRed) -> isRed.values().stream()
            .flatMap(Queue::stream).sorted(comparator).toList(),
        Collector.Characteristics.UNORDERED
    );
}
This method is responsible for adding the next red element to the priority queue. It expects a comparator, in order to determine whether the next element should be added or discarded, and the maximum size of the queue (2), to check whether it has been exceeded.
public static <T> void tryAdd(Queue<T> queue, T next, Comparator<T> comparator, int size) {
    // if the queue is full and the next element is greater than its smallest element,
    // the smallest element has to be removed to make room
    if (queue.size() == size && comparator.compare(queue.element(), next) < 0)
        queue.remove();
    if (queue.size() < size) queue.add(next);
}
Output
ElementsVO{color=BLACK, age=19}
ElementsVO{color=GREEN, age=23}
ElementsVO{color=RED, age=25}
ElementsVO{color=RED, age=27}
ElementsVO{color=GREEN, age=29}
I wrote a generic Collector with a predicate and a limit on the number of matching elements to add:
public class LimitedMatchCollector<T> implements Collector<T, List<T>, List<T>> {

    private Predicate<T> filter;
    private int limit;
    // note: this shared mutable counter makes the collector unsuitable for parallel streams
    private int count = 0;

    public LimitedMatchCollector(Predicate<T> filter, int limit) {
        this.filter = filter;
        this.limit = limit;
    }

    @Override
    public Supplier<List<T>> supplier() {
        return () -> new ArrayList<T>();
    }

    @Override
    public BiConsumer<List<T>, T> accumulator() {
        return this::accumulator;
    }

    @Override
    public BinaryOperator<List<T>> combiner() {
        return this::combiner;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return Stream.of(Characteristics.IDENTITY_FINISH)
                     .collect(Collectors.toCollection(HashSet::new));
    }

    public List<T> accumulator(List<T> list, T e) {
        if (filter.test(e)) {
            if (count >= limit) {
                return list; // limit reached: drop further matching elements
            }
            count++;
        }
        list.add(e);
        return list;
    }

    public List<T> combiner(List<T> left, List<T> right) {
        right.forEach(e -> {
            if (filter.test(e)) {
                if (count < limit) {
                    left.add(e);
                    count++;
                }
            } else {
                left.add(e); // non-matching elements are always kept
            }
        });
        return left;
    }

    @Override
    public Function<List<T>, List<T>> finisher() {
        return Function.identity();
    }
}
Usage:
List<ElementsVO> list = Arrays.asList(new ElementsVO("BLUE", 1)
        , new ElementsVO("BLUE", 2) // made color a String
        , new ElementsVO("RED", 3)
        , new ElementsVO("RED", 4)
        , new ElementsVO("GREEN", 5)
        , new ElementsVO("RED", 6)
        , new ElementsVO("YELLOW", 7)
);
System.out.println(list.stream()
        .collect(new LimitedMatchCollector<ElementsVO>(e -> "RED".equals(e.getColor()), 2)));
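For the input above, this keeps the first two matching elements in encounter order: RED 3 and RED 4 survive, RED 6 is dropped, and all non-red elements pass through untouched.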

Java stream, how to apply a function to the previous result

I was looking for the right answer, but found nothing that fits my purpose.
I have a simple for loop like this:
String test = "hi";
for(Something something : somethingList) {
if(something.getSomething() != null) {
test = cleaner.clean(test, something.getSomething());
} else if(something.getOther() != null) {
test = StaticClass.clean(test, something.getOther());
}
}
and I never understood whether the same result can be achieved using a Java stream. With reduce, maybe? I need to pass the result of the previous iteration (saved in the "test" variable) to the next one (see the clean method, where I pass test). How can I do that?
If you want to do something for each element in a list (as with a for-each loop), I would suggest using the forEach or forEachOrdered methods. These correspond to your for(Object o : objects). You can easily define your own Consumer class, which handles all the state for you:
class CustomConsumer implements Consumer<Integer> {

    private Integer previous;

    public CustomConsumer(Integer initialValue) {
        previous = initialValue;
    }

    @Override
    public void accept(Integer current) {
        // do stuff with your current / previous object :)
        System.out.println("previous: " + previous);
        previous = current;
    }
}
List<Integer> values = getValues();
values.stream()
      .forEachOrdered(new CustomConsumer(-1));
This example uses Integer as a provided class, if you want to use your own just replace Integer. You can even use generics:
class CustomConsumer<T> implements Consumer<T> {

    private T previous;

    public CustomConsumer(T initialValue) {
        previous = initialValue;
    }

    @Override
    public void accept(T current) {
        // do stuff with your current / previous object :)
        System.out.println("previous: " + previous);
        previous = current;
    }
}
List<Integer> values = new ArrayList<>();
for (int i = 0; i < 6; i++)
    values.add(i);

values.stream()
      .forEachOrdered(new CustomConsumer<>("hello"));
Output:
previous: hello
previous: 0
previous: 1
previous: 2
previous: 3
previous: 4
If you want to learn more about streams, the Oracle docs provide some good material.
To expand on my comment, with streams you basically could use reduction, e.g. by using the reduce() method.
Example:
//some list simulating your somethingList
List<Integer> list = List.of(2, 4, 6, 1, 3, 5);

String result = list.stream()
    //make sure the stream is sequential to keep processing order
    .sequential()
    //start reduction with an initial value
    .reduce("initial",
        //in the accumulator you get the previous reduction result and the current element
        (test, element) -> {
            //simulates your conditions, just adding the new element for demonstration purposes
            //test could also be replaced
            if (element % 2 == 0) {
                test += ", even:" + element;
            } else {
                test += ", odd: " + element;
            }
            //return the new reduction result
            return test;
        },
        //the combiner is not used in sequential streams, so just return one of the elements
        (l, r) -> l);
This would result in:
initial, even:2, even:4, even:6, odd: 1, odd: 3, odd: 5
Note, however, that streams are not a silver bullet and sometimes a simple loop like your initial code is just fine or even better. This seems to be such a case.

Java Streams: Is the complexity of collecting a stream of long same as filtering it based on Set::contains?

I have an application which accepts employee ids as user input and then filters the employee list for matching ids. The user input is expected to be 3-4 ids and the employee list a few thousand entries.
I came up with the following 2 methods using Stream filters, based on performance concerns.
Method1
The motivation here is to not run the filter for each employee, but rather on the requested-ids list, which is guaranteed to be very short.
private static Set<Long> identifyEmployees(CustomRequest request) {
    List<Long> requestedIds = request.getRequestedIDs();
    if (!requestedIds.isEmpty()) {
        Set<Long> allEmployeeIds =
            employeeInfoProvider
                .getEmployeeInfoList() // returns List<EmployeeInfo>
                .stream()
                .map(EmployeeInfo::getEmpId) // getEmpId() returns a Long
                .collect(Collectors.toSet());
        return requestedIds.stream().filter(allEmployeeIds::contains).collect(Collectors.toSet());
    }
    return Collections.emptySet();
}
Method2
The motivation here is to replace the collect() in Method1 with a filter, as the complexity would be the same; collect() here would actually run on a very small number of elements.
private static Set<Long> identifyEmployees(CustomRequest request) {
    Set<Long> requestedIds = request.getRequestedIDs() // returns List<Long>
                                    .stream()
                                    .collect(Collectors.toSet());
    if (!requestedIds.isEmpty()) {
        return employeeInfoProvider
            .getEmployeeInfoList() // returns List<EmployeeInfo>
            .stream()
            .map(EmployeeInfo::getEmpId) // getEmpId() returns a Long
            .filter(requestedIds::contains)
            .collect(Collectors.toSet());
    }
    return Collections.emptySet();
}
Does Method2 perform as good as Method1? Or does Method1 perform better?
I would expect Method2 to perform as good or better in all scenarios.
Collecting to an intermediate set adds allocation overhead. It reduces the number of requestedIds::contains calls you have to do later if there are lots of duplicates, but even then, you're exchanging each Set::add call for a Set::contains call, each of which should be a small win.
A potentially faster (though not cleaner) option would be to return immediately once all the requestedIds have been found, but I'm not sure whether it could be implemented with the Stream API.
private static Set<Long> identifyEmployees(CustomRequest request) {
    Set<Long> requestedIds = request.getRequestedIDs() // returns List<Long>
                                    .stream()
                                    .collect(Collectors.toSet());
    Set<Long> result = new HashSet<>();
    if (!requestedIds.isEmpty()) {
        Iterator<EmployeeInfo> employees = employeeInfoProvider.getEmployeeInfoList().iterator();
        while (result.size() < requestedIds.size() && employees.hasNext()) {
            Long employeeId = employees.next().getEmpId();
            if (requestedIds.contains(employeeId)) {
                result.add(employeeId);
            }
        }
    }
    return result;
}
However, this makes sense only if employeeInfoProvider.getEmployeeInfoList() returns many duplicate employees with the same IDs. Otherwise, as mentioned above, Method2 is the better choice.
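As a side note, a short-circuiting version is possible with the Stream API after all, because limit stops pulling elements once enough distinct matches have been seen; a sketch reusing the requestedIds set and provider from the snippets above:

// stops traversing the employee list as soon as all requested ids are found
return employeeInfoProvider
        .getEmployeeInfoList()
        .stream()
        .map(EmployeeInfo::getEmpId)
        .filter(requestedIds::contains)
        .distinct()                 // guard against duplicate employee ids
        .limit(requestedIds.size()) // short-circuits once all ids are matched
        .collect(Collectors.toSet());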

Group and Reduce list of objects

I have a list of objects with many duplicates and some fields that need to be merged. I want to reduce this down to a list of unique objects using only Java 8 streams (I know how to do this via old-school means, but this is an experiment).
This is what I have right now. I don't really like it because the map-building seems extraneous and the values() collection is a view of the backing map, so you need to wrap it in a new ArrayList<>(...) to get a more specific collection. Is there a better approach, perhaps using the more general reduction operations?
@Test
public void reduce() {
    Collection<Foo> foos = Stream.of("foo", "bar", "baz")
        .flatMap(this::getfoos)
        .collect(Collectors.toMap(f -> f.name, f -> f, (l, r) -> {
            l.ids.addAll(r.ids);
            return l;
        })).values();

    assertEquals(3, foos.size());
    foos.forEach(f -> assertEquals(10, f.ids.size()));
}
private Stream<Foo> getfoos(String n) {
    return IntStream.range(0, 10).mapToObj(i -> new Foo(n, i));
}

public static class Foo {
    private String name;
    private List<Integer> ids = new ArrayList<>();

    public Foo(String n, int i) {
        name = n;
        ids.add(i);
    }
}
If you break the grouping and reducing steps up, you can get something cleaner:
Stream<Foo> input = Stream.of("foo", "bar", "baz").flatMap(this::getfoos);
Map<String, Optional<Foo>> collect = input.collect(Collectors.groupingBy(f -> f.name, Collectors.reducing(Foo::merge)));
Collection<Optional<Foo>> collected = collect.values();
This assumes a few convenience methods in your Foo class:
public Foo(String n, List<Integer> ids) {
    this.name = n;
    this.ids.addAll(ids);
}

public static Foo merge(Foo src, Foo dest) {
    List<Integer> merged = new ArrayList<>();
    merged.addAll(src.ids);
    merged.addAll(dest.ids);
    return new Foo(src.name, merged);
}
As already pointed out in the comments, a map is a very natural thing to use when you want to identify unique objects. If all you needed to do was find the unique objects, you could use the Stream::distinct method. This method hides the fact that there is a map involved, but apparently it does use a map internally, as hinted at by this question, which shows that you should implement a hashCode method or distinct may not behave correctly.
In the case of the distinct method, where no merging is necessary, it is possible to return some of the results before all of the input has been processed. In your case, unless you can make additional assumptions about the input that haven't been mentioned in the question, you do need to finish processing all of the input before you return any results. Thus this answer uses a map.
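As a trivial illustration of that equals/hashCode contract (a sketch; any type with consistent equals and hashCode behaves the same way):

List<String> unique = Stream.of("a", "b", "a")
        .distinct() // relies on equals/hashCode to detect the duplicate "a"
        .collect(Collectors.toList()); // ["a", "b"]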
It is easy enough to use streams to process the values of the map and turn it back into an ArrayList, though. I show that in this answer, as well as providing a way to avoid the appearance of an Optional<Foo>, which shows up in one of the other answers.
public void reduce() {
    ArrayList<Foo> foos = Stream.of("foo", "bar", "baz").flatMap(this::getfoos)
        .collect(Collectors.collectingAndThen(Collectors.groupingBy(f -> f.name,
                     Collectors.reducing(Foo.identity(), Foo::merge)),
                 map -> map.values().stream()
                           .collect(Collectors.toCollection(ArrayList::new))));

    assertEquals(3, foos.size());
    foos.forEach(f -> assertEquals(10, f.ids.size()));
}

private Stream<Foo> getfoos(String n) {
    return IntStream.range(0, 10).mapToObj(i -> new Foo(n, i));
}

public static class Foo {
    private String name;
    private List<Integer> ids = new ArrayList<>();

    private static final Foo BASE_FOO = new Foo("", 0);

    public static Foo identity() {
        return BASE_FOO;
    }

    // use only if side effects to the argument objects are okay
    public static Foo merge(Foo fooOne, Foo fooTwo) {
        if (fooOne == BASE_FOO) {
            return fooTwo;
        } else if (fooTwo == BASE_FOO) {
            return fooOne;
        }
        fooOne.ids.addAll(fooTwo.ids);
        return fooOne;
    }

    public Foo(String n, int i) {
        name = n;
        ids.add(i);
    }
}
If the input elements are supplied in random order, then an intermediate map is probably the best solution. However, if you know in advance that all the foos with the same name are adjacent (this condition is actually met in your test), the algorithm can be greatly simplified: you just need to compare the current element with the previous one and merge them if the name is the same.
Unfortunately there's no Stream API method which would allow you to do such a thing easily and efficiently. One possible solution is to write a custom collector like this:
public static List<Foo> withCollector(Stream<Foo> stream) {
    return stream.collect(Collector.<Foo, List<Foo>>of(ArrayList::new,
        (list, t) -> {
            Foo f;
            if (list.isEmpty() || !(f = list.get(list.size() - 1)).name.equals(t.name))
                list.add(t);
            else
                f.ids.addAll(t.ids);
        },
        (l1, l2) -> {
            if (l1.isEmpty())
                return l2;
            if (l2.isEmpty())
                return l1;
            if (l1.get(l1.size() - 1).name.equals(l2.get(0).name)) {
                l1.get(l1.size() - 1).ids.addAll(l2.get(0).ids);
                l1.addAll(l2.subList(1, l2.size()));
            } else {
                l1.addAll(l2);
            }
            return l1;
        }));
}
My tests show that this collector is always faster than collecting to map (up to 2x depending on average number of duplicate names), both in sequential and parallel mode.
Another approach is to use my StreamEx library which provides a bunch of "partial reduction" methods including collapse:
public static List<Foo> withStreamEx(Stream<Foo> stream) {
    return StreamEx.of(stream)
        .collapse((l, r) -> l.name.equals(r.name), (l, r) -> {
            l.ids.addAll(r.ids);
            return l;
        }).toList();
}
This method accepts two arguments: a BiPredicate which is applied to two adjacent elements and should return true if the elements should be merged, and a BinaryOperator which performs the merging. This solution is a little bit slower in sequential mode than the custom collector (in parallel the results are very similar), but it's still significantly faster than the toMap solution, and it's simpler and somewhat more flexible, as collapse is an intermediate operation, so you can collect in another way.
Again, both of these solutions work only if foos with the same name are known to be adjacent. It's a bad idea to sort the input stream by foo name and then use these solutions, because the sorting will drastically reduce the performance, making it slower than the toMap solution.
As already pointed out by others, an intermediate Map is unavoidable, as that’s the way of finding the objects to merge. Further, you should not modify source data during reduction.
Nevertheless, you can achieve both without creating multiple Foo instances:
List<Foo> foos = Stream.of("foo", "bar", "baz")
    .flatMap(n -> IntStream.range(0, 10).mapToObj(i -> new Foo(n, i)))
    .collect(collectingAndThen(groupingBy(f -> f.name),
        m -> m.entrySet().stream()
              .map(e -> new Foo(e.getKey(),
                                e.getValue().stream().flatMap(f -> f.ids.stream()).collect(toList())))
              .collect(toList())));
This assumes that you add a constructor
public Foo(String n, List<Integer> l) {
    name = n;
    ids = l;
}
to your Foo class, as it should have if Foo is really supposed to be capable of holding a list of IDs. As a side note, having a type which serves as a single item as well as a container for merged results seems unnatural to me. This is exactly why the code turns out to be so complicated.
If the source items had a single id, using something like groupingBy(f -> f.name, mapping(f -> f.id, toList())), followed by mapping the entries of (String, List<Integer>) to the merged items, would be sufficient.
Since this is not the case and Java 8 lacks the flatMapping collector, the flat-mapping step is moved to the second step, making it look much more complicated.
But in both cases, the second step is not superfluous, as it is where the result items are actually created, and converting the map to the desired list type comes for free.
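For illustration, the single-id variant described above might look like this (a sketch, assuming a hypothetical item type with a single id field, the list-taking Foo constructor from above, and the usual static imports from Collectors):

// group ids by name in one pass, then build one merged Foo per name; items is hypothetical
Map<String, List<Integer>> grouped = items.stream()
    .collect(groupingBy(item -> item.name, mapping(item -> item.id, toList())));

List<Foo> merged = grouped.entrySet().stream()
    .map(e -> new Foo(e.getKey(), e.getValue()))
    .collect(toList());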

java 8 stream groupingBy sum of composite variable

I have a class Something which contains a list of Anything objects as an instance variable.
class Anything {
    private final int id;
    private final int noThings;

    public Anything(int id, int noThings) {
        this.id = id;
        this.noThings = noThings;
    }
}

class Something {
    private final int parentId;
    private final List<Anything> anythings;

    public Something(int parentId, List<Anything> anythings) {
        this.parentId = parentId;
        this.anythings = anythings;
    }

    private int getParentId() {
        return parentId;
    }

    private List<Anything> getAnythings() {
        return anythings;
    }
}
Given a list of Somethings
List<Something> mySomethings = Arrays.asList(
    new Something(123, Arrays.asList(new Anything(45, 65),
                                     new Anything(568, 15),
                                     new Anything(145, 27))),
    new Something(547, Arrays.asList(new Anything(12, 123),
                                     new Anything(678, 76),
                                     new Anything(98, 81))),
    new Something(685, Arrays.asList(new Anything(23, 57),
                                     new Anything(324, 67),
                                     new Anything(457, 87))));
I want to sort them so that the Something objects are ordered by the descending total sum of their Anythings' noThings, and the Anythings within each Something by descending noThings:
123 = 65+15+27 = 107 (3rd)
547 = 123+76+81 = 280 (1st)
685 = 57+67+87 = 211 (2nd)
So that I end up with
List<Something> orderedSomethings = Arrays.asList(
    new Something(547, Arrays.asList(new Anything(12, 123),
                                     new Anything(98, 81),
                                     new Anything(678, 76))),
    new Something(685, Arrays.asList(new Anything(457, 87),
                                     new Anything(324, 67),
                                     new Anything(23, 57))),
    new Something(123, Arrays.asList(new Anything(45, 65),
                                     new Anything(145, 27),
                                     new Anything(568, 15))));
I know that I can get the list of Anythings per parent Id
Map<Integer, List<Anything>> anythings
    = mySomethings.stream()
                  .collect(Collectors.toMap(p -> p.getParentId(),
                                            p -> p.getAnythings()));
But after that I'm a bit stuck.
Unless I'm mistaken, you cannot do both sorts in one go. But since they are independent of each other (the sum of the noThings in the Anythings of a Something is independent of their order), this does not matter much. Just sort one after the other.
To sort the Anythings inside the Somethings by their noThings:
mySomethings.stream().map(Something::getAnythings)
            .forEach(as -> as.sort(Comparator.comparing(Anything::getNoThings)
                                             .reversed()));
To sort the Somethings by the sum of the noThings of their Anythings:
mySomethings.sort(Comparator.comparing((Something s) -> s.getAnythings().stream()
                                                         .mapToInt(Anything::getNoThings).sum())
                            .reversed());
Note that both those sorts will modify the respective lists in-place.
As pointed out by @Tagir, the second sort will recalculate the sum of the noThings for each pair of Somethings compared during the sort. If the lists are long, this can be very wasteful. Instead, you could first calculate the sums in a map and then just look up the values.
Map<Something, Integer> sumsOfThings = mySomethings.stream()
    .collect(Collectors.toMap(s -> s,
                              s -> s.getAnythings().stream()
                                    .mapToInt(Anything::getNoThings).sum()));

mySomethings.sort(Comparator.comparing(sumsOfThings::get).reversed());
The problem with the other solutions is that the sums are not stored anywhere during sorting, so when sorting a large input the sums will be calculated several times per row, reducing performance. An alternative solution is to create intermediate pairs of (something, sum), sort by sum, then extract the something and forget the sum. Here's how it can be done with the Stream API and SimpleImmutableEntry as the pair class:
List<Something> orderedSomethings = mySomethings.stream()
    .map(smth -> new AbstractMap.SimpleImmutableEntry<>(smth, smth
        .getAnythings().stream()
        .mapToInt(Anything::getNoThings).sum()))
    .sorted(Entry.<Something, Integer>comparingByValue().reversed())
    .map(Entry::getKey)
    .collect(Collectors.toList());
There's some syntactic sugar available in my free StreamEx library which makes the code a little bit cleaner:
List<Something> orderedSomethings = StreamEx.of(mySomethings)
    .mapToEntry(smth -> smth.getAnythings().stream()
                            .mapToInt(Anything::getNoThings).sum())
    .reverseSorted(Entry.comparingByValue())
    .keys().toList();
As for sorting the Anythings inside the Somethings: the other solutions are fine.
In the end I added an extra method to the Something class:
public int getTotalNoThings() {
    return anythings.stream().collect(Collectors.summingInt(Anything::getNoThings));
}
Then I used this method to sort by total noThings (descending):
somethings = somethings.stream()
    .sorted(Comparator.comparing(Something::getTotalNoThings).reversed())
    .collect(Collectors.toList());
and then I used the code suggested above (thanks!) to sort the Anything instances by noThings:
somethings.stream().map(Something::getAnythings)
          .forEach(as -> as.sort(Comparator.comparing(Anything::getNoThings).reversed()));
Thanks again for help.
