Java stream collect to Map<String, Map<Integer, MyObject>> - java

I'm using Java 11 and I have a List<MyObject> called myList of the following object:
public class MyObject {
private final String personalId;
private final Integer rowNumber;
private final String description;
<...>
}
and I want, using streams, to collect these objects into a Map<String, Map<Integer, List<MyObject>>>
(that is, Map<personalId, Map<rowNumber, List<MyObject>>>), and I don't want to use Collectors.groupingBy(), because it has issues with null values.
I tried to do it using Collectors.toMap(), but it doesn't seem to be possible:
myList
.stream()
.Collectors.toMap(s -> s.getPersonalId(), s -> Collectors.toMap(t-> s.getRowNumber(), ArrayList::new))
My question: is it possible to build a Map<String, Map<Integer, List<MyObject>>> using streams without Collectors.groupingBy(), or should I write a full method myself?

In your case I would create the maps first and then loop through the elements in this list as shown:
Map<String, List<MyObject>> rows = new HashMap<>();
list.forEach(element -> rows.computeIfAbsent(element.personalId, s -> new ArrayList<>()).add(element));
You can use computeIfAbsent to create a new list/map as the value of the map before putting your data in.
The same works for the second data structure:
Map<String, Map<Integer, MyObject>> persons = new HashMap<>();
list.forEach(element -> persons.computeIfAbsent(element.personalId, s -> new HashMap<>()).put(element.rowNumber, element));
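The same idea extends to the nested structure from the question by chaining two computeIfAbsent calls; a minimal sketch, accessing the fields as in the snippets above:
Map<String, Map<Integer, List<MyObject>>> byIdAndRow = new HashMap<>();
list.forEach(element -> byIdAndRow
        .computeIfAbsent(element.personalId, id -> new HashMap<>())
        .computeIfAbsent(element.rowNumber, row -> new ArrayList<>())
        .add(element));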
Here is a way to solve this with streams. But note that Collectors.toMap() without a merge function throws an IllegalStateException on duplicate keys, so each personalId must be unique here:
Map<String, List<MyObject>> rows = list.stream().collect(
Collectors.toMap(element -> element.personalId,
element -> new ArrayList<MyObject>(Arrays.asList(element))));
As well as for the other map:
Map<String, Map<Integer, MyObject>> persons = list.stream().collect(
Collectors.toMap(e -> e.personalId,
e -> new HashMap<>(Map.of(e.rowNumber, e))));

Map<String, Map<Integer, List<MyObject>>> object using streams without using Collectors.groupingBy()
By looking at the map type, I can assume that a combination of personalId and rowNumber is not unique, i.e. there could be multiple occurrences of each combination (otherwise you wouldn't need to group objects into lists), and there could be different rowNumber values associated with each personalId. Only if these conclusions are correct does this nested collection have even a vague justification for existing.
Otherwise, you can probably substitute multiple collections for different use-cases, for example a Map<String, MyObject> that maps each object by its id (if every id is unique):
Map<String, MyObject> objById = myList.stream()
.collect(Collectors.toMap(
MyObject::getPersonalId,
Function.identity()
));
I'll proceed assuming that you really need such a nested collection.
Now, let's address the issue with groupingBy(). This collector internally uses Objects.requireNonNull() to make sure that the key produced by the classifier function is non-null.
If you tried to use it and failed because of this hostility to null keys, that implies that either personalId, or rowNumber, or both can be null.
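For illustration, a minimal (hypothetical) reproduction of the failure:
List<MyObject> sample = List.of(new MyObject(null, 1, "desc"));
sample.stream()
      .collect(Collectors.groupingBy(MyObject::getPersonalId)); // NullPointerException: element cannot be mapped to a null key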
Now let's make a small detour and ask what it implies when a property that is considered significant (you're going to use personalId and rowNumber to access the data, hence they are definitely important) turns out to be null.
null signifies the absence of data and nothing else. If in your application null values have an additional special meaning, that's a design flaw. If properties that are significant for managing your data appear to be null for some reason, you need to fix that.
You might claim that you're quite comfortable with null values. If so, let's pause for a moment and imagine a situation: a person enters a restaurant, orders soup, and asks the waiter to bring them a fork instead of a spoon (because they have had a negative experience with a spoon, and they are comfortable enough with a fork).
null isn't data; it's an indicator of the absence of data, and storing null is an antipattern. If you're storing null, it acquires a special meaning because you're forced to treat it separately.
To replace personalId and rowNumber values that are null with defaults, we need only a single replaceAll call:
public static void replaceNullWithDefault(List<MyObject> list,
String defaultId,
Integer defaultNum) {
list.replaceAll(obj -> obj.getPersonalId() != null && obj.getRowNumber() != null ? obj :
new MyObject(Objects.requireNonNullElse(obj.getPersonalId(), defaultId),
Objects.requireNonNullElse(obj.getRowNumber(), defaultNum),
obj.getDescription()));
}
After that we can use the proper tool instead of eating soup with a fork; I mean, we can process the list data with groupingBy():
public static void main(String[] args) {
List<MyObject> myList = new ArrayList<>(
List.of(
new MyObject("id1", 1, "desc1"),
new MyObject("id1", 1, "desc2"),
new MyObject("id1", 2, "desc3"),
new MyObject("id1", 2, "desc4"),
new MyObject("id2", 1, "desc5"),
new MyObject("id2", 1, "desc6"),
new MyObject("id2", 1, "desc7"),
new MyObject(null, null, "desc8")
));
replaceNullWithDefault(myList, "id0", 0); // replacing null values
Map<String, Map<Integer, List<MyObject>>> byIdAndRow = myList // generating a map
.stream()
.collect(Collectors.groupingBy(
MyObject::getPersonalId,
Collectors.groupingBy(MyObject::getRowNumber)
));
byIdAndRow.forEach((k, v) -> { // printing the map
System.out.println(k);
v.forEach((k1, v1) -> System.out.println(k1 + " -> " + v1));
});
}
Output:
id0
0 -> [MyObject{'id0', 0, 'desc8'}]
id2
1 -> [MyObject{'id2', 1, 'desc5'}, MyObject{'id2', 1, 'desc6'}, MyObject{'id2', 1, 'desc7'}]
id1
1 -> [MyObject{'id1', 1, 'desc1'}, MyObject{'id1', 1, 'desc2'}]
2 -> [MyObject{'id1', 2, 'desc3'}, MyObject{'id1', 2, 'desc4'}]
A link to Online Demo
Now, please pay attention to the usage of groupingBy() and notice its conciseness. That's the right tool, and it can generate even such a clumsy nested map.
And now we're going to eat the soup with a fork! All null properties will be used as-is:
public static void main(String[] args) {
List<MyObject> myList = new ArrayList<>(
List.of(
new MyObject("id1", 1, "desc1"),
new MyObject("id1", 1, "desc2"),
new MyObject("id1", 2, "desc3"),
new MyObject("id1", 2, "desc4"),
new MyObject("id2", 1, "desc5"),
new MyObject("id2", 1, "desc6"),
new MyObject("id2", 1, "desc7"),
new MyObject(null, null, "desc8")
));
Map<String, Map<Integer, List<MyObject>>> byIdAndRow = myList // generating a map
.stream()
.collect(
HashMap::new,
(Map<String, Map<Integer, List<MyObject>>> mapMap, MyObject next) ->
mapMap.computeIfAbsent(next.getPersonalId(), k -> new HashMap<>())
.computeIfAbsent(next.getRowNumber(), k -> new ArrayList<>())
.add(next),
(left, right) -> right.forEach((k, v) -> left.merge(k, v,
(oldV, newV) -> {
newV.forEach((k1, v1) -> oldV.merge(k1, v1,
(listOld, listNew) -> {
listOld.addAll(listNew);
return listOld;
}));
return oldV;
}))
);
byIdAndRow.forEach((k, v) -> { // printing the map
System.out.println(k);
v.forEach((k1, v1) -> System.out.println(k1 + " -> " + v1));
});
}
Output:
null
null -> [MyObject{'null', null, 'desc8'}]
id2
1 -> [MyObject{'id2', 1, 'desc5'}, MyObject{'id2', 1, 'desc6'}, MyObject{'id2', 1, 'desc7'}]
id1
1 -> [MyObject{'id1', 1, 'desc1'}, MyObject{'id1', 1, 'desc2'}]
2 -> [MyObject{'id1', 2, 'desc3'}, MyObject{'id1', 2, 'desc4'}]
A link to Online Demo
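For completeness, querying the resulting nested map is straightforward; a small usage sketch with empty defaults for missing keys:
List<MyObject> found = byIdAndRow
        .getOrDefault("id1", Map.of())
        .getOrDefault(1, List.of());
System.out.println(found); // the objects grouped under id1 / row 1 in the sample data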

Related

java 8 map.get multiple key values

I have the following code, where I want to get the value for multiple keys which start with the same name:
for example contents_of_a1, contents_of_ab2, contents_of_abc3
Optional.ofNullable(((Map<?, ?>) fieldValue))
.filter(Objects::nonNull)
.map(coverages -> coverages.get("contents_of_%"))
.filter(Objects::nonNull)
.filter(LinkedHashMap.class::isInstance)
.map(LinkedHashMap.class::cast)
.map(contents -> contents.get("limit"))
.map(limit -> new BigDecimal(String.valueOf(limit)))
.orElse(new BigDecimal(number));
How can I pass contents_of_% as a prefix/wildcard?
I don't know the reasons behind the data structure and what you want to achieve.
However, that is not important, as this can be easily reproduced.
Using Optional is a good start; however, for iterating over and processing multiple inputs, you need a java-stream instead, with Optional used inside the collecting (I assume you want a Map<String, BigDecimal> output, but it can be adjusted easily).
Also, note that .filter(Objects::nonNull) is meaningless, as Optional handles null internally and null is never passed to the next method.
final Map<String, Map<?, ?>> fieldValue = Map.of(
"contents_of_a", new LinkedHashMap<>(Map.of("limit", "10")),
"contents_of_b", new HashMap<>(Map.of("limit", "11")), // Different
"contents_of_c", new LinkedHashMap<>(Map.of("amount", "12")), // No amount
"contents_of_d", new LinkedHashMap<>(Map.of("limit", "13")));
final List<String> contents = List.of(
"contents_of_a",
"contents_of_b",
"contents_of_c",
// d is missing, e is requested instead
"contents_of_e");
final int number = -1;
final Map<String, BigDecimal> resultMap = contents.stream()
.collect(Collectors.toMap(
Function.identity(), // key
content -> Optional.of(fieldValue) // value
.map(coverages -> fieldValue.get(content))
.filter(LinkedHashMap.class::isInstance)
// casting here to LinkedHashMap is not required
// unless its specific methods are to be used
// but we only get a value using Map#get
.map(map -> map.get("limit"))
.map(String::valueOf)
.map(BigDecimal::new)
// prefer this over orElse as Optional#orElseGet
// does not create an object if not required
.orElseGet(() -> new BigDecimal(number))));
// check out the output below the code
resultMap.forEach((k, v) -> System.out.println(k + " -> " + v));
Only the content for a is used, as the remaining entries either were not an instance of LinkedHashMap, didn't contain a limit key, or were not among the requested contents.
contents_of_a -> 10
contents_of_b -> -1
contents_of_e -> -1
contents_of_c -> -1
If you want to filter a map for keys starting with "contents_of_", you can do this in Java 8:
Map<String, Object> filteredFieldValue = fieldValue.entrySet().stream().filter(e -> {
String k = e.getKey();
return Stream.of("contents_of_").anyMatch(k::startsWith);
}).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
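If the goal is both to keep only keys with that prefix and to extract their limit values, the two ideas can be combined; a sketch, assuming fieldValue is a Map<String, Map<?, ?>> and number is the fallback, as in the snippets above:
Map<String, BigDecimal> limitsByPrefix = fieldValue.entrySet().stream()
        .filter(e -> e.getKey().startsWith("contents_of_"))
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> Optional.ofNullable(e.getValue().get("limit"))
                        .map(String::valueOf)
                        .map(BigDecimal::new)
                        .orElseGet(() -> new BigDecimal(number))));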

How to merge two Maps based on values with Java 8 streams?

I have a Collection of Maps containing inventory information:
0
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "60"
1
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "1000"
2
"subtype" -> "FRESH"
"itemNumber" -> "EU999"
"quantity" -> "800"
3
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I need to condense this list based on the itemNumber, while summing the quantity and retaining unique subtypes in a comma separated string. Meaning, new Maps would look like this:
0
"subtype" -> "DAIRY, FRESH"
"itemNumber" -> "EU999"
"quantity" -> "1860"
1
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I've tried variations of streams, collectors, groupingBy, etc., and I'm lost.
This is what I have so far:
public Collection<Map> mergeInventoryPerItemNumber(Collection<Map> InventoryMap){
Map condensedInventory = null;
InventoryMap.stream()
.collect(groupingBy(inv -> new ImmutablePair<>(inv.get("itemNumber"), inv.get("subtype")))), collectingAndThen(toList(), list -> {
long count = list.stream()
.map(list.get(Integer.parseInt("quantity")))
.collect(counting());
String itemNumbers = list.stream()
.map(list.get("subtype"))
.collect(joining(" , "));
condensedInventory.put("quantity", count);
condensedInventory.put("subtype", itemNumbers);
return condensedInventory;
});
Here is one approach:
First, iterate through the list of maps.
For each map, process the keys as required.
The special keys are itemNumber and quantity:
itemNumber is the joining element for all the values.
quantity is the value that must be treated as an integer.
The others are strings and are treated as such (for every other value, if the value already exists in the string of concatenated values, it is not added again).
Some data
List<Map<String, String>> mapList = List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999",
"quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100",
"quantity", "100"));
The building process
Map<String, Map<String, String>> result = new HashMap<>();
for (Map<String, String> m : mapList) {
result.compute(m.get("itemNumber"), (k, v) -> {
for (Entry<String, String> e : m.entrySet()) {
String key = e.getKey();
String value = e.getValue();
if (v == null) {
v = new HashMap<String, String>();
v.put(key, value);
} else {
if (key.equals("quantity")) {
v.compute(key,
(kk, vv) -> vv == null ? value :
Integer.toString(Integer
.valueOf(vv)
+ Integer.valueOf(
value)));
} else {
v.compute(key, (kk, vv) -> vv == null ?
value : (vv.contains(value) ? vv :
vv + ", " + value));
}
}
}
return v;
});
}
List<Map<String,String>> list = new ArrayList<>(result.values());
for (int i = 0; i < list.size(); i++) {
System.out.println(i + " " + list.get(i));
}
prints
0 {itemNumber=EU100, quantity=100, subtype=FRESH}
1 {itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
Note that the map of maps may be more useful than a list of maps. For example, you can retrieve the map for an itemNumber by simply specifying the desired key.
System.out.println(result.get("EU999"));
prints
{itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
You are misusing a Map here. Every map contains the same keys ("subtype", "itemNumber", "quantity"), and they are treated almost like object properties in your code. They are expected to be present in every map, and each of them is expected to have a specific range of values, although they are stored as strings according to your example.
Side note: avoid using raw types (like Map without generic information in angle brackets <>); in such a case, all elements inside a collection will be treated as Objects.
Item clearly has to be defined as a class. By storing these data inside a map, you're losing the possibility to define an appropriate data type for each property, and you're also not able to define behaviour that manipulates these properties (for a more elaborate explanation, take a look at this answer).
public class Item {
private final String itemNumber;
private Set<Subtype> subtypes;
private long quantity;
public Item combine(Item other) {
Set<Subtype> combinedSubtypes = new HashSet<>(subtypes);
combinedSubtypes.addAll(other.subtypes);
return new Item(this.itemNumber,
combinedSubtypes,
this.quantity + other.quantity);
}
// + constructor, getters, hashCode/equals, toString
}
Method combine represents the logic for merging two items together. By placing it inside this class, you could easily reuse and change it when needed.
The best choice for the type of the subtype field is an enum, because it allows you to avoid mistakes caused by misspelled string values, and enums have extensive language support (switch expressions and statements, special data structures designed especially for enums, and enums can be used with annotations).
This custom enum can look like this.
public enum Subtype {DAIRY, FRESH}
With all these changes, the code inside the mergeInventoryPerItemNumber() becomes concise and easier to comprehend. Collectors.groupingBy() is used to create a map by grouping items with the same itemNumber. A downstream collector Collectors.reducing() is used to combine items grouped under the same key to a single object.
Note that Collectors.reducing() produces an Optional result. Therefore, filter(Optional::isPresent) is used as a precaution to make sure that the result exists, and the subsequent operation map(Optional::get) extracts the item from the optional object.
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
return inventory.stream()
.collect(Collectors.groupingBy(Item::getItemNumber,
Collectors.reducing(Item::combine)))
.values().stream()
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
}
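If the intermediate Optional feels noisy, a hedged alternative (not from the original answer) is Collectors.toMap() with Item::combine as the merge function, which yields the same grouping without wrapping values in Optional:
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
    return inventory.stream()
            .collect(Collectors.toMap(Item::getItemNumber,
                                      Function.identity(),
                                      Item::combine))
            .values();
}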
main()
public static void main(String[] args) {
List<Item> inventory =
List.of(new Item("EU999", Set.of(Subtype.DAIRY), 60),
new Item("EU999", Set.of(Subtype.DAIRY), 1000),
new Item("EU999", Set.of(Subtype.FRESH), 800),
new Item("EU100", Set.of(Subtype.FRESH), 100));
Collection<Item> combinedItems = mergeInventoryPerItemNumber(inventory);
combinedItems.forEach(System.out::println);
}
Output
Item{itemNumber='EU100', subtypes=[FRESH], quantity=100}
Item{itemNumber='EU999', subtypes=[FRESH, DAIRY], quantity=1860}
It may be possible to do this with a single sweep, but here I have solved it with two passes: one to group like items together, and another over the items in each group to build a representative item (which seems similar in spirit to your code, where you were also attempting to stream elements from groups).
public static Collection<Map<String, String>>
mergeInventoryPerItemNumber(Collection<Map<String, String>> m){
return m.stream()
// returns a map of itemNumber -> list of products with that number
.collect(Collectors.groupingBy(o -> o.get("itemNumber")))
// for each item number, builds new representative product
.entrySet().stream().map(e -> Map.of(
"itemNumber", e.getKey(),
// ... merging non-duplicate subtypes
"subtype", e.getValue().stream()
.map(v -> v.get("subtype"))
.distinct() // avoid duplicates
.collect(Collectors.joining(", ")),
// ... adding up quantities
"quantity", ""+e.getValue().stream()
.map(v -> Integer.parseInt(v.get("quantity")))
.reduce(0, Integer::sum)))
.collect(Collectors.toList());
}
public static void main(String ... args) {
Collection<Map<String, String>> c = mkMap();
dump(c);
dump(mergeInventoryPerItemNumber(c));
}
public static Collection<Map<String, String>> mkMap() {
return List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999", "quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100", "quantity", "100"));
}
public static void dump(Collection<Map<String, String>> col) {
int i = 0;
for (Map<String, String> m : col) {
System.out.println(i++);
for (Map.Entry e : m.entrySet()) {
System.out.println("\t" + e.getKey() + " -> " + e.getValue());
}
}
}

Loop through n number of maps

Right now I have the following code, which takes 2 recipes and finds duplicates in the recipes and "merges" them.
public void mergeIngredients(Recipe recipe1, Recipe recipe2) {
Map<String, Ingredients> recipe1Map = recipe1.getIngredientsMap();
Map<String, Ingredients> recipe2Map = recipe2.getIngredientsMap();
for (Map.Entry<String, Ingredients> s : recipe1Map.entrySet()) {
if (recipe2Map.containsKey(s.getKey())) {
double newValue = recipe1.getAmount(s.getKey()) + recipe2.getAmount(s.getKey());
System.out.println(newValue);
}
}
}
I want to change this code so that, instead of only being able to check 2 maps against each other, it can take N maps and compare all of them.
Example: The user inputs 8 different recipes, it should loop through all of these and merge ingredients if duplicates are found. What is the best way to achieve this?
I would first extract all keys from all maps into a Set. This gives you all unique ingredient keys.
Then iterate that Set, get the values from all the recipes, and merge them.
For example:
public void mergeIngredients(Set<Recipe> recipes) {
Set<String> keys = recipes.stream() //
.map(Recipe::getIngredientsMap) // Get the map
.flatMap(m -> m.keySet().stream()) // Get all keys and make 1 big stream
.collect(Collectors.toSet()); // Collect them to a set
for (String k : keys)
{
double newValue = recipes.stream()
        .map(Recipe::getIngredientsMap)
        .map(i -> i.get(k))
        .filter(Objects::nonNull) // a recipe may not contain this ingredient
        .mapToDouble(i -> i.getAmount())
        .sum();
System.out.println(newValue);
}
}
You can probably do this more efficiently, but I think this is easier to follow.
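For reference, the whole merge can also be done in a single stream pipeline with groupingBy and summingDouble; a sketch, assuming Ingredients exposes getAmount() as used above:
public void mergeIngredients(Set<Recipe> recipes) {
    Map<String, Double> totals = recipes.stream()
            .map(Recipe::getIngredientsMap)
            .flatMap(m -> m.entrySet().stream())
            .collect(Collectors.groupingBy(
                    Map.Entry::getKey,
                    Collectors.summingDouble(e -> e.getValue().getAmount())));
    totals.forEach((ingredient, amount) -> System.out.println(ingredient + " -> " + amount));
}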
You can use the approach from "Merging Multiple Maps Using Java 8 Streams" to handle duplicate keys:
public void mergerMap() throws Exception {
Map<String, Integer> m1 = ImmutableMap.of("a", 2, "b", 3);
Map<String, Integer> m2 = ImmutableMap.of("a", 3, "c", 4);
Map<String, Integer> mx = Stream.of(m1, m2)
.map(Map::entrySet) // converts each map into an entry set
.flatMap(Collection::stream) // converts each set into an entry stream, then
// "concatenates" it in place of the original set
.collect(
Collectors.toMap( // collects into a map
Map.Entry::getKey, // where each entry is based
Map.Entry::getValue, // on the entries in the stream
Integer::max // such that if a value already exist for
// a given key, the max of the old
// and new value is taken
)
)
;
Map<String, Integer> expected = ImmutableMap.of("a", 3, "b", 3, "c", 4);
assertEquals(expected, mx);
}
I don't really see the need for a Map for your ingredients, so here is an alternative solution.
If you make your Ingredients class implement equals and hashCode, you can use it directly in a Set. You will of course also need a method in Recipe that returns all its ingredients as a List. Then the following will return all unique ingredients:
Set<Ingredients> merge(List<Recipe> recipes) {
    return recipes.stream()
            .flatMap(r -> r.allIngredients().stream()) // flatten each recipe's ingredient list
            .collect(Collectors.toSet());
}

Converting List of object to Guava Table data structure using lambda

I have a list of ImmutableTriple objects where, for a given first and middle value, there can be a collection of last values (first, middle and last are the triple's fields).
Now, in order to make it queryable, I need to convert it to a Guava Table data structure. I am able to achieve this with a for loop as below, but I am wondering if I can achieve it more functionally using lambda expressions.
Here is the piece of code:
public static void main(String[] args) {
//In real world, this list is coming from various transformation of lamda
final List<ImmutableTriple<LocalDate, Integer, String>> list = ImmutableList.of(
ImmutableTriple.of(LocalDate.now(), 1, "something"),
ImmutableTriple.of(LocalDate.now(), 1, "anotherThing")
);
Table<LocalDate, Integer, List<String>> table = HashBasedTable.create();
//is it possible to avoid this forEach and use side effect free lambda.
list.forEach(s -> {
final List<String> strings = table.get(s.left, s.middle);
final List<String> slotList = strings == null ? new ArrayList<>() : strings;
slotList.add(s.right);
table.put(s.left, s.middle, slotList);
});
System.out.println(table);
}
There is a Tables class which contains a Collector to get your desired result.
Table<LocalDate, Integer, ImmutableList<String>> collect = list.stream()
.collect(Tables.toTable(
it -> it.left,
it -> it.middle,
it -> ImmutableList.of(it.right),
(l1, l2) -> ImmutableList.<String>builder()
.addAll(l1).addAll(l2).build(),
HashBasedTable::create));
If you really want a mutable List then you can use:
Table<LocalDate, Integer, List<String>> collect = list.stream()
.collect(Tables.toTable(
it -> it.left,
it -> it.middle,
it -> Lists.newArrayList(it.right),
(l1, l2) -> {l1.addAll(l2); return l1;},
HashBasedTable::create));
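As a brief usage note, the resulting Table can then be queried by row and column key via Table#get; a small sketch against the sample data above:
List<String> values = collect.get(LocalDate.now(), 1);
System.out.println(values); // [something, anotherThing] for the sample list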

Use of stream, filter and average on list and jdk8

I have a list of data that looks like this:
{id, datastring}
{1,"a:1|b:2|d:3"}
{2,"a:2|c:2|c:4"}
{3,"a:2|bb:2|a:3"}
{4,"a:3|e:2|ff:3"}
What I need to do here is perform operations like computing averages, or finding all ids for which an element in the string is less than a certain value.
Here are some examples:
Averages
{a,2}{b,2}{bb,2}{c,3}{d,3}{e,2}{ff,3}
Find all id's where c<4
{2}
Find all id's where a<3
{1,2,3}
Would this be a good use of stream() and filter()?
Yes, you can use stream operations to achieve that, but I would suggest creating a class for this data, so that each row corresponds to one specific instance. That will make your life easier, IMO.
class Data {
private int id;
private Map<String, List<Integer>> map;
....
}
That said, let's take a look at how you could implement this. First, the "find all ids" implementation:
public static Set<Integer> ids(List<Data> list, String value, Predicate<Integer> boundPredicate) {
return list.stream()
.filter(d -> d.getMap().containsKey(value))
.filter(d -> d.getMap().get(value).stream().anyMatch(boundPredicate))
.map(d -> d.getId())
.collect(toSet());
}
This one is simple to read. You get a Stream<Data> from the list. Then you apply a filter so that you only keep instances whose map contains the given key and has at least one value satisfying the predicate you give. Then you map each instance to its corresponding id and collect the resulting stream into a Set.
Example of call:
Set<Integer> set = ids(list, "a", value -> value < 3);
which outputs:
[1, 2, 3]
The average request was a bit trickier. I ended up with another implementation: you get a Map<String, IntSummaryStatistics> at the end (which does contain the average), along with other information.
Map<String, IntSummaryStatistics> stats = list.stream()
.flatMap(d -> d.getMap().entrySet().stream())
.collect(toMap(Map.Entry::getKey,
e -> e.getValue().stream().mapToInt(i -> i).summaryStatistics(),
(i1, i2) -> {i1.combine(i2); return i1;}));
You first get a Stream<Data>, then you flatMap each entry set of each map to get a Stream<Entry<String, List<Integer>>>. Now you collect this stream into a map where each key is the entry's key and each List<Integer> is mapped to its corresponding IntSummaryStatistics value. If two keys are identical, you combine their respective IntSummaryStatistics values.
Given your data set, you get a Map<String, IntSummaryStatistics>:
ff => IntSummaryStatistics{count=1, sum=3, min=3, average=3.000000, max=3}
bb => IntSummaryStatistics{count=1, sum=2, min=2, average=2.000000, max=2}
a => IntSummaryStatistics{count=5, sum=11, min=1, average=2.200000, max=3}
b => IntSummaryStatistics{count=1, sum=2, min=2, average=2.000000, max=2}
c => IntSummaryStatistics{count=2, sum=6, min=2, average=3.000000, max=4}
d => IntSummaryStatistics{count=1, sum=3, min=3, average=3.000000, max=3}
e => IntSummaryStatistics{count=1, sum=2, min=2, average=2.000000, max=2}
from which you can easily grab the average.
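If only the averages are needed, they can be pulled out of the statistics map with one more pass; a small sketch:
Map<String, Double> averages = stats.entrySet().stream()
        .collect(toMap(Map.Entry::getKey, e -> e.getValue().getAverage()));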
Here's a full working example; the implementation can certainly be improved, though.
I know that you already have your answer, but here are my versions too:
Map<String, Double> result = list.stream()
.map(Data::getElements)
.flatMap((Multimap<String, Integer> map) -> {
return map.entries().stream();
})
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.averagingInt((Entry<String, Integer> token) -> {
return token.getValue();
})));
System.out.println(result);
List<Integer> result2 = list.stream()
.filter((Data data) -> {
return data.getElements().get("c").stream().anyMatch(i -> i < 4);
})
.map(Data::getId)
.collect(Collectors.toList());
System.out.println(result2);
