So I have a piece of code where I'm iterating over a list of data. Each one is a ReportData that contains a case with a Long caseId and one Ruling. Each Ruling has one or more Payments. I want a Map with caseIds as keys and sets of payments as values (i.e. a Map<Long, Set<Payment>>).
Cases are not unique across rows, but rulings are.
In other words, I can have several rows with the same case, but they will have unique rulings.
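For reference, a bare-bones sketch of the classes involved looks roughly like this (accessor names are approximated from the snippets in this thread, so treat the exact signatures as assumptions):
import java.util.Set;

// Simplified sketch of the domain; accessors trimmed to what the snippets below use.
class Payment { }

class Ruling {
    private Set<Payment> payments;   // one or more payments per ruling
    Set<Payment> getPayments() { return payments; }
}

class Case {
    private Long caseId;
    Long getCaseId() { return caseId; }
}

class ReportData {
    private Case reportCase;         // "case" is a reserved word, hence the accessor below
    private Ruling ruling;
    Case getCase() { return reportCase; }
    Long getCaseId() { return reportCase.getCaseId(); }
    Ruling getRuling() { return ruling; }
}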
The following code gets me a Map<Long, Set<Set<Payment>>>, which is almost what I want, but I've been struggling to find the correct way to flatMap the final set in this context. I've been doing workarounds to make the logic work using this map as is, but I'd very much like to fix the algorithm to combine the sets of payments into one single set instead of creating a set of sets.
I've searched around and couldn't find a problem with the same kind of iteration, although flatMapping with Java streams seems like a somewhat popular topic.
rowData.stream()
    .collect(Collectors.groupingBy(
        r -> r.getCase().getCaseId(),
        Collectors.mapping(
            r -> r.getRuling(),
            Collectors.mapping(
                ruling -> ruling.getPayments(),
                Collectors.toSet()))));
Another JDK8 solution:
Map<Long, Set<Payment>> resultSet =
    rowData.stream()
        .collect(Collectors.toMap(p -> p.getCase().getCaseId(),
            p -> new HashSet<>(p.getRuling().getPayments()),
            (l, r) -> { l.addAll(r); return l; }));
or as of JDK9 you can use the flatMapping collector:
rowData.stream()
    .collect(Collectors.groupingBy(r -> r.getCase().getCaseId(),
        Collectors.flatMapping(e -> e.getRuling().getPayments().stream(),
            Collectors.toSet())));
The cleanest solution is to define your own collector:
Map<Long, Set<Payment>> result = rowData.stream()
.collect(Collectors.groupingBy(
ReportData::getCaseId,
Collector.of(HashSet::new,
(s, r) -> s.addAll(r.getRuling().getPayments()),
(s1, s2) -> { s1.addAll(s2); return s1; })
));
Two other solutions that I thought of first, which are less efficient and less readable, but still avoid constructing an intermediate Map:
Merging the inner sets using Collectors.reducing():
Map<Long, Set<Payment>> result = rowData.stream()
.collect(Collectors.groupingBy(
ReportData::getCaseId,
Collectors.reducing(Collections.emptySet(),
r -> r.getRuling().getPayments(),
(s1, s2) -> {
Set<Payment> r = new HashSet<>(s1);
r.addAll(s2);
return r;
})
));
where the reducing operation merges the Set<Payment> of entries with the same caseId. This can, however, create a lot of copies of the sets if many merges are needed.
Another solution is with a downstream collector that flatmaps the nested collections:
Map<Long, Set<Payment>> result = rowData.stream()
.collect(Collectors.groupingBy(
ReportData::getCaseId,
Collectors.collectingAndThen(
Collectors.mapping(r -> r.getRuling().getPayments(), Collectors.toList()),
s -> s.stream().flatMap(Set::stream).collect(Collectors.toSet())))
);
Basically it puts all sets of matching caseId together in a List, then flatmaps that list into a single Set.
There are probably better ways to do this, but this is the best I found:
Map<Long, Set<Payment>> result =
rowData.stream()
// First group by caseIds.
.collect(Collectors.groupingBy(r -> r.getCase().getCaseId()))
.entrySet().stream()
// By streaming over the entrySet, I map the values to the set of payments.
.collect(Collectors.toMap(
Map.Entry::getKey,
entry -> entry.getValue().stream()
.flatMap(r -> r.getRuling().getPayments().stream())
.collect(Collectors.toSet())));
Related
Here's what I have so far:
Map<Care, List<Correlative>> mapOf = quickSearchList
.stream()
.map(QuickSearch::getFacility)
.collect(Collectors.flatMapping(facility -> facility.getFacilityCares().stream(),
Collectors.groupingBy(FacilityCare::getCare,
Collectors.mapping(c -> {
final Facility facility = new Facility();
facility.setId(c.getFacilityId());
return Correlative.createFromFacility(facility);
}, Collectors.toList()))));
I have a list of Quick Searches to begin with. Each item in the quick search has a single facility as in:
public class QuickSearch {
Facility facility;
}
In every Facility, there's a List of FacilityCare as in:
public class Facility {
List<FacilityCare> facilityCares;
}
And finally, FacilityCare has Care property as in:
public class FacilityCare {
Care care;
}
Now, the idea is to convert a List of QuickSearch to a Map of <Care, List<Correlative>>.
The code within the mapping() function in the example above is bogus: FacilityCare only has a facilityId, not a Facility entity. I want the facility object that was passed as the parameter to flatMapping to be available again as the parameter of the mapping() function, as in:
Collectors.mapping(c -> Correlative.createFromFacility(facility))
where "facility" is the same object as the one in flatMapping.
Is there any way to achieve this? Please let me know if things need to be explained further.
Edit:
Here's a solution that doesn't fully utilize Collectors.
final Map<Care, List<Correlative>> mapToHydrate = new HashMap<>();
quickSearchList
.stream()
.map(QuickSearch::getFacility)
.forEach(facility -> {
facility.getFacilityCares()
.stream()
.map(FacilityCare::getCare)
.distinct()
.forEach(care -> {
mapToHydrate.computeIfAbsent(care, k -> new ArrayList<>());
mapToHydrate.computeIfPresent(care, (c, list) -> {
list.add(Correlative.createFromFacility(facility));
return list;
});
});
});
Sometimes, streams are not the best solution. This seems to be the case, because you are losing each facility instance when going down the pipeline.
Instead, you could do it as follows:
Map<Care, List<Correlative>> mapToHydrate = new LinkedHashMap<>();
quickSearchList.forEach(q -> {
Facility facility = q.getFacility();
facility.getFacilityCares().forEach(fCare ->
mapToHydrate.computeIfAbsent(fCare.getCare(), k -> new ArrayList<>())
.add(Correlative.createFromFacility(facility)));
});
This uses the return value of Map.computeIfAbsent (which is either the newly created list of correlatives or the already present one).
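As a tiny standalone illustration of that return value (plain strings only; the usual java.util imports are assumed):
Map<String, List<String>> m = new HashMap<>();
m.computeIfAbsent("k", k -> new ArrayList<>()).add("first");  // creates the list and returns it
m.computeIfAbsent("k", k -> new ArrayList<>()).add("second"); // returns the already present list
System.out.println(m); // {k=[first, second]}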
It is not clear from your question why you need distinct cares before adding them to the map.
EDIT: Starting from Java 16, you might want to use Stream.mapMulti:
Map<Care, List<Correlative>> mapToHydrate = quickSearchList.stream()
.map(QuickSearch::getFacility)
// the explicit type witness keeps the element type from being inferred as Object
.<Map.Entry<Care, Facility>>mapMulti((facility, consumer) -> facility.getFacilityCares()
.forEach(fCare -> consumer.accept(Map.entry(fCare.getCare(), facility))))
.collect(Collectors.groupingBy(
e -> e.getKey(),
Collectors.mapping(
e -> Correlative.createFromFacility(e.getValue()),
Collectors.toList())));
This is what I came up with based on the information provided. The Facility and Care are stored in a temp array to be processed later in the desired map.
Map<Care, List<Correlative>> mapOf = quickSearchList.stream()
.map(QuickSearch::getFacility)
.flatMap(facility -> facility
.getFacilityCares().stream()
.map(facCare->new Object[]{facility, facCare.getCare()}))
.collect(Collectors.groupingBy(obj->(Care)obj[1], Collectors
.mapping(obj -> Correlative.createFromFacility(
(Facility)obj[0]),
Collectors.toList())));
I prepared some simple test data and this seems to work assuming I understand the ultimate goal. For each type of care offered, it puts all the facilities that offer that care in an associated list of facilities.
Inspired by @fps's answer, I was able to come up with a solution that will work for the time being (pre-Java 16).
Map<Care, List<Correlative>> mapOf = quickSearchList
.stream()
.map(QuickSearch::getFacility)
.map(expandIterable())
.collect(
Collectors.flatMapping(map -> map.entrySet().stream(),
Collectors.groupingBy(Map.Entry::getKey,
Collectors.mapping(entry -> Correlative.createFromFacility(entry.getValue()),
Collectors.toList()
)
)
));
}
public Function<Facility, Map<Care, Facility>> expandIterable() {
return facility -> facility.getFacilityCares()
.stream()
.map(FacilityCare::getCare)
.distinct()
.collect(Collectors.toMap(c -> c, c -> facility));
}
Basically, I added a method call that returns a Function that takes in Facility as argument and returns a Map of Care as key with Facility as value. That map is used in the collection of the previous stream.
I have the below multilevel map:
Map<String, List<Map<String, Map<String, Map<String, Map<String, String>>>>>> input =
ImmutableMap.of("A",
ImmutableList.of(ImmutableMap.of("2",
ImmutableMap.of("3",
ImmutableMap.of("4",
ImmutableMap.of("5", "a"))))));
In short, it looks like this:
{
"A":[{"2":{"3":{"4":{"5":"a"}}}}],
"B":[{"2":{"3":{"4":{"5":"b"}}}}]
}
My requirement is to construct a map of the form
{
"A":"a",
"B":"b"
}
I tried the below code but for some reason myMap is always empty even though I'm populating it. What am I missing?
Map<String, String> myMap = new HashMap<>();
input.entrySet()
.stream()
.map(l -> l.getValue().stream().map(m -> m.get(m.keySet().toArray()[0]))
.map(n -> n.get(n.keySet().toArray()[0]))
.map(o -> o.get(o.keySet().toArray()[0]))
.map(p -> myMap.put(l.getKey(), p.get(p.keySet().toArray()[0])))).collect(Collectors.toList());
System.out.println(myMap);
Here's what I get when I add two peek calls to your pipeline:
input.entrySet().stream()
.peek(System.out::println) //<- this
.map(l -> ...)
.peek(System.out::println) //<- and this
.collect(Collectors.toList());
output:
A=[{2={3={4={5=a}}}}]
java.util.stream.ReferencePipeline$3#d041cf
If you notice, the problem is that you're collecting streams, and those inner streams are never executed because no terminal operation is called on them. When I try adding something like .count() to the inner stream, your expected output is produced:
...
.map(l -> l.getValue().stream().map(m -> m.get(m.keySet().toArray()[0]))
.map(n -> n.get(n.keySet().toArray()[0]))
.map(o -> o.get(o.keySet().toArray()[0]))
.map(p -> myMap.put(l.getKey(), p.get(p.keySet().toArray()[0])))
.count()) //just an example
...
Now, I suppose you know that a terminal operation needs to be called for the intermediate ones to run.
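For what it's worth, the side effect on myMap can be avoided altogether by collecting straight into the result map. The following is only a sketch: it assumes the exact nesting shown in your input and that no collection in the tree is empty.
Map<String, String> result = input.entrySet().stream()
        .collect(Collectors.toMap(
                Map.Entry::getKey,
                e -> e.getValue().stream()                 // the "2"-level maps
                        .flatMap(m -> m.values().stream()) // down to the "3" level
                        .flatMap(m -> m.values().stream()) // down to the "4" level
                        .flatMap(m -> m.values().stream()) // down to the "5" level
                        .flatMap(m -> m.values().stream()) // the String leaves
                        .findFirst()
                        .orElseThrow(IllegalStateException::new)));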
In a rather desperate attempt to simplify this code, since the stream seems to make it simply hard to read, I thought you might be interested in this. It assumes that no collection in the tree is empty, but at least it addresses the record as one object rather than as a collection of records (though I'm sure no code will look clean for a map of maps that deep).
String key = input.keySet().iterator().next();
String value = input.entrySet().iterator().next()
.getValue().get(0)
.values().iterator().next()
.values().iterator().next()
.values().iterator().next()
.values().iterator().next();
myMap.put(key, value);
I have the following TreeMap:
TreeMap<Long,String> gasType = new TreeMap<>(); // Long, "Integer-Double"
gasType.put(1L, "7-1.50");
gasType.put(2L, "7-1.50");
gasType.put(3L, "7-3.00");
gasType.put(4L, "8-5.00");
gasType.put(5L, "8-7.00");
Map<Integer, TreeSet<Long>> capacities = new TreeMap<>();
The key is of the form 1L (a Long), and value of the form "7-1.50" (a String concatenation of an int and a double separated by a -).
I need to create a new TreeMap where the keys are obtained by taking the int part of the values of the original Map (for example, for the value "7-1.50", the new key will be 7). The value of the new Map would be a TreeSet containing all the keys of the original Map matching the new key.
So, for the input above, the value for the 7 key will be the Set {1L,2L,3L}.
I can do this without Streams, but I would like to do it with Streams. Any help is appreciated. Thank you.
Here's one way to do it:
Map<Integer,TreeSet<Long>> capacities =
gasType.entrySet()
.stream()
.collect(Collectors.groupingBy(e -> Integer.parseInt(e.getValue().substring(0, e.getValue().indexOf("-"))),
        TreeMap::new,
        Collectors.mapping(Map.Entry::getKey,
                Collectors.toCollection(TreeSet::new))));
I modified the original code to support integers of multiple digits, since it appears you want that.
This produces the Map:
{7=[1, 2, 3], 8=[4, 5]}
If you don't care about the ordering of the resulting Map and Sets, you can let the JDK decide on the implementations, which would somewhat simplify the code:
Map<Integer,Set<Long>> capacities =
gasType.entrySet()
.stream()
.collect(Collectors.groupingBy(e -> Integer.parseInt(e.getValue().substring(0, e.getValue().indexOf("-"))),
        Collectors.mapping(Map.Entry::getKey,
                Collectors.toSet())));
You may try this out,
final Map<Integer, Set<Long>> map = gasType.entrySet().stream()
.collect(Collectors.groupingBy(entry -> Integer.parseInt(entry.getValue().substring(0, 1)),
Collectors.mapping(Map.Entry::getKey, Collectors.toSet())));
UPDATE
If you want to split the value on "-", since there may be more than one digit, you can change it like so:
final Map<Integer, Set<Long>> map = gasType.entrySet().stream()
.collect(Collectors.groupingBy(entry -> Integer.parseInt(entry.getValue().split("-")[0]),
Collectors.mapping(Map.Entry::getKey, Collectors.toSet())));
Another solution would be like this:
List<Map.Entry<Integer, Long>> list = gasType.entrySet()
.stream()
.map(m -> new AbstractMap.SimpleImmutableEntry<Integer, Long>(Integer.valueOf(m.getValue().split("-")[0]), m.getKey()))
.collect(Collectors.toList());
and second step:
list.stream()
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.mapping(Map.Entry::getValue,Collectors.toCollection(TreeSet::new))));
or in one step:
gasType.entrySet()
.stream()
.map(m -> new AbstractMap.SimpleImmutableEntry<>(Integer.valueOf(m.getValue().split("-")[0]), m.getKey()))
.collect(Collectors.groupingBy(Map.Entry::getKey,
Collectors.mapping(Map.Entry::getValue, Collectors.toCollection(TreeSet::new))))
My class has two fields:
MyKey - the key that I want to group by
Set<MyEnum> - the set that I want to be flattened and merged.
I have a list of such objects, and what I want is to obtain a Map<MyKey, Set<MyEnum>> whose value is joined from all the myEnums of the objects with the same key.
For example, if I have three objects:
myKey: key1, myEnums: [E1]
myKey: key1, myEnums: [E2]
myKey: key2, myEnums: [E1, E3]
The expected result should be:
key1 => [E1, E2], key2 => [E1, E3]
I came up with this code:
Map<MyKey, Set<MyEnum>> map = myObjs.stream()
.collect(Collectors.groupingBy(
MyType::getMyKey,
Collectors.reducing(
new HashSet<MyEnum>(),
MyType::getMyEnums,
(a, b) -> {
a.addAll(b);
return a;
})));
There are two problems with it:
The HashSet inside the reducing seems to be shared between all keys. As a result, the actual output for the example above is key1 => [E1, E2, E3], key2 => [E1, E2, E3]. Why is that the case?
Even if this code worked, it looks ugly, especially the reducing part, where I have to handle the logic of constructing the joined collection manually. Is there a better way of doing this?
Thank you!
Notice that you are only ever creating one identity object: new HashSet<MyEnum>().
The BinaryOperator you supply as the third argument must not modify its operands, the same way common math operators don't: x = y + z doesn't change the values of y and z.
This means you need to merge the two input sets a and b, without updating either.
Also, working with enums, you should use EnumSet, not HashSet.
Map<MyKey, Set<MyEnum>> map = myObjs.stream()
.collect(Collectors.groupingBy(
MyType::getMyKey,
Collectors.reducing(
EnumSet.noneOf(MyEnum.class), // <-- EnumSet
MyType::getMyEnums,
(a, b) -> {
EnumSet<MyEnum> c = EnumSet.copyOf(a); // <-- copy
c.addAll(b);
return c;
})));
UPDATE
A shorter, more streamlined version that doesn't have to keep creating new sets while accumulating the result:
Map<MyKey, Set<MyEnum>> map = myObjs.stream()
.collect(Collectors.groupingBy(
MyType::getMyKey,
Collector.of(
() -> EnumSet.noneOf(MyEnum.class),
(r, myObj) -> r.addAll(myObj.getMyEnums()),
(r1, r2) -> { r1.addAll(r2); return r1; }
)));
Not ideal, but using a mutable container makes it fairly easy to understand.
myObjs.stream()
    .collect(groupingBy(MyType::getMyKey))
    .entrySet().stream()
    .collect(toMap(
        Map.Entry::getKey,
        e -> e.getValue()
            .stream()
            .flatMap(v -> v.getMyEnums().stream())
            .collect(toSet())
    ));
Collectors.mapping(Function, Collector) is so nearly a perfect fit for what you want to do here, if only it were Collectors.flatMapping.
EDIT: until Java 9 is out, there is a handy implementation of flatMapping in this answer. With it, our solution looks like this:
myObjs.stream()
    .collect(
        groupingBy(MyType::getMyKey,
            flatMapping(v -> v.getMyEnums().stream(), toSet()))
    );
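Since that answer isn't reproduced here, a rough sketch of what such a Java 8 backport of flatMapping can look like (modelled on the general shape of the JDK 9 collector, not the exact code from the linked answer; the usual java.util.function and java.util.stream imports are assumed):
static <T, U, A, R> Collector<T, ?, R> flatMapping(
        Function<? super T, ? extends Stream<? extends U>> mapper,
        Collector<? super U, A, R> downstream) {
    BiConsumer<A, ? super U> acc = downstream.accumulator();
    return Collector.of(
            downstream.supplier(),
            (A container, T element) -> {
                // feed every element of the mapped stream into the downstream accumulator
                try (Stream<? extends U> result = mapper.apply(element)) {
                    if (result != null) {
                        result.sequential().forEach(u -> acc.accept(container, u));
                    }
                }
            },
            downstream.combiner(),
            downstream.finisher(),
            downstream.characteristics().toArray(new Collector.Characteristics[0]));
}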
Say I have the following two maps:
Map<Member, List<Message>> one = ...;//one constructed somehow
Map<Member, List<Message>> two = ...;//two also constructed somehow
I would like to obtain a third map containing the content of one and two.
So if a member key is in both one and two, the list value from the entry in two will be added to the list value from the entry in one.
What is the best and cleanest way to achieve this, possibly using Java 8?
You can use the merge method.
Map<Member, List<Message>> third = new HashMap<>(one);
two.forEach((k, v) -> third.merge(k, v, (v1, v2) -> {v1.addAll(v2); return v1;}));
This will also modify the lists in the map one, since you're manipulating the same references. If you don't want that, i.e. you want to create a new list, you can do it like this:
Map<Member, List<Message>> third = new HashMap<>(one);
two.forEach((k, v) -> third.merge(k, v, (v1, v2) -> Stream.concat(v1.stream(), v2.stream()).collect(toList())));
The merged list will be a new list but be careful that this won't be the case for the keys which weren't in both maps. You'd need to deep copy all the lists to achieve this.
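For completeness, a sketch of that deep-copy variant: every value list in third is a fresh ArrayList, so neither one nor two can be modified through the merged map (the Message elements themselves are still shared, of course).
Map<Member, List<Message>> third = new HashMap<>();
// copy every list from one
one.forEach((k, v) -> third.put(k, new ArrayList<>(v)));
// copy every list from two, merging where the key already exists
two.forEach((k, v) -> third.merge(k, new ArrayList<>(v),
        (l1, l2) -> { l1.addAll(l2); return l1; }));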
Just added another solution with the Stream API:
Map<Member, List<Message>> third = Stream.of(one, two)
.flatMap(m -> m.entrySet().stream())
.collect(toMap(Map.Entry::getKey,
Map.Entry::getValue,
(l1, l2) -> Stream.concat(l1.stream(), l2.stream()).collect(toList())));
I also added my contribution to the proton-pack library. With this you could also achieve it like this:
Map<Member, List<Message>> third = MapStream.ofMaps(one, two)
.mergeKeys((l1, l2) -> Stream.concat(l1.stream(), l2.stream()).collect(toList()))
.collect();