How to merge two Maps based on values with Java 8 streams? - java

I have a Collection of Maps containing inventory information:
0
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "60"
1
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "1000"
2
"subtype" -> "FRESH"
"itemNumber" -> "EU999"
"quantity" -> "800"
3
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I need to condense this list based on the itemNumber, while summing the quantity and retaining unique subtypes in a comma-separated string. Meaning, the new Maps would look like this:
0
"subtype" -> "DAIRY, FRESH"
"itemNumber" -> "EU999"
"quantity" -> "1860"
1
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I've tried variations of streams, collectors, groupingBy, etc., and I'm lost.
This is what I have so far:
public Collection<Map> mergeInventoryPerItemNumber(Collection<Map> InventoryMap){
Map condensedInventory = null;
InventoryMap.stream()
.collect(groupingBy(inv -> new ImmutablePair<>(inv.get("itemNumber"), inv.get("subtype")))), collectingAndThen(toList(), list -> {
long count = list.stream()
.map(list.get(Integer.parseInt("quantity")))
.collect(counting());
String itemNumbers = list.stream()
.map(list.get("subtype"))
.collect(joining(" , "));
condensedInventory.put("quantity", count);
condensedInventory.put("subtype", itemNumbers);
return condensedInventory;
});

Here is one approach:
First, iterate through the list of maps.
For each map, process the keys as required.
The special keys are itemNumber and quantity.
itemNumber is the joining element for all the values.
quantity is the value that must be treated as an integer.
All other values are treated as strings; if a value already exists in the string of concatenated values, it is not added again.
Some data
List<Map<String, String>> mapList = List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999",
"quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100",
"quantity", "100"));
The building process
Map<String, Map<String, String>> result = new HashMap<>();
for (Map<String, String> m : mapList) {
    result.compute(m.get("itemNumber"), (k, v) -> {
        for (Entry<String, String> e : m.entrySet()) {
            String key = e.getKey();
            String value = e.getValue();
            if (v == null) {
                v = new HashMap<String, String>();
                v.put(key, value);
            } else {
                if (key.equals("quantity")) {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : Integer.toString(Integer.valueOf(vv) + Integer.valueOf(value)));
                } else {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : (vv.contains(value) ? vv : vv + ", " + value));
                }
            }
        }
        return v;
    });
}
List<Map<String,String>> list = new ArrayList<>(result.values());
for (int i = 0; i < list.size(); i++) {
System.out.println(i + " " + list.get(i));
}
prints
0 {itemNumber=EU100, quantity=100, subtype=FRESH}
1 {itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
Note that the map of maps may be more useful than a list of maps. For example, you can retrieve the map for an itemNumber by simply specifying the desired key.
System.out.println(result.get("EU999"));
prints
{itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}

You are misusing a Map here. Every map contains the same keys ("subtype", "itemNumber", "quantity"), and they are treated almost like object properties in your code. They are expected to be present in every map, and each of them is expected to have a specific range of values, although all are stored as strings according to your example.
Side note: avoid using raw types (like Map without generic information in angle brackets <>); otherwise all elements inside a collection will be treated as Objects.
Item clearly has to be defined as a class. By storing this data inside a map, you're losing the possibility to define an appropriate data type for each property, and you're also unable to define behaviour that manipulates these properties (for a more elaborate explanation take a look at this answer).
public class Item {
private final String itemNumber;
private Set<Subtype> subtypes;
private long quantity;
public Item combine(Item other) {
Set<Subtype> combinedSubtypes = new HashSet<>(subtypes);
combinedSubtypes.addAll(other.subtypes);
return new Item(this.itemNumber,
combinedSubtypes,
this.quantity + other.quantity);
}
// + constructor, getters, hashCode/equals, toString
}
Method combine represents the logic for merging two items together. By placing it inside this class, you could easily reuse and change it when needed.
The best choice for the type of the subtype field is an enum, because it helps avoid mistakes caused by misspelled string values, and enums have extensive language support (switch expressions and statements, special data structures designed especially for enums, and enums can be used with annotations).
This custom enum can look like this.
public enum Subtype {DAIRY, FRESH}
With all these changes, the code inside the mergeInventoryPerItemNumber() becomes concise and easier to comprehend. Collectors.groupingBy() is used to create a map by grouping items with the same itemNumber. A downstream collector Collectors.reducing() is used to combine items grouped under the same key to a single object.
Note that Collectors.reducing() produces an Optional result. Therefore, filter(Optional::isPresent) is used as a precaution to make sure that the result exists and subsequent operation map(Optional::get) extracts the item from the optional object.
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
return inventory.stream()
.collect(Collectors.groupingBy(Item::getItemNumber,
Collectors.reducing(Item::combine)))
.values().stream()
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
}
main()
public static void main(String[] args) {
List<Item> inventory =
List.of(new Item("EU999", Set.of(Subtype.DAIRY), 60),
new Item("EU999", Set.of(Subtype.DAIRY), 1000),
new Item("EU999", Set.of(Subtype.FRESH), 800),
new Item("EU100", Set.of(Subtype.FRESH), 100));
Collection<Item> combinedItems = mergeInventoryPerItemNumber(inventory);
combinedItems.forEach(System.out::println);
}
Output
Item{itemNumber='EU100', subtypes=[FRESH], quantity=100}
Item{itemNumber='EU999', subtypes=[FRESH, DAIRY], quantity=1860}
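As a side note, the Optional handling can be avoided by collecting with Collectors.toMap() and passing Item::combine as the merge function. A minimal sketch of that variant, assuming the same Item class as above:
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
    // toMap() merges items sharing an itemNumber via Item::combine,
    // so no Optional values appear in the resulting map
    return inventory.stream()
            .collect(Collectors.toMap(Item::getItemNumber,
                    Function.identity(),
                    Item::combine))
            .values();
}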

It may be possible to do this with a single sweep, but here I have solved it with two passes: one to group like items together, and another over the items in each group to build a representative item (which seems similar in spirit to your code, where you were also attempting to stream elements from groups).
public static Collection<Map<String, String>>
mergeInventoryPerItemNumber(Collection<Map<String, String>> m){
return m.stream()
// returns a map of itemNumber -> list of products with that number
.collect(Collectors.groupingBy(o -> o.get("itemNumber")))
// for each item number, builds new representative product
.entrySet().stream().map(e -> Map.of(
"itemNumber", e.getKey(),
// ... merging non-duplicate subtypes
"subtype", e.getValue().stream()
.map(v -> v.get("subtype"))
.distinct() // avoid duplicates
.collect(Collectors.joining(", ")),
// ... adding up quantities
"quantity", ""+e.getValue().stream()
.map(v -> Integer.parseInt(v.get("quantity")))
.reduce(0, Integer::sum)))
.collect(Collectors.toList());
}
public static void main(String ... args) {
Collection<Map<String, String>> c = mkMap();
dump(c);
dump(mergeInventoryPerItemNumber(c));
}
public static Collection<Map<String, String>> mkMap() {
return List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999", "quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100", "quantity", "100"));
}
public static void dump(Collection<Map<String, String>> col) {
int i = 0;
for (Map<String, String> m : col) {
System.out.println(i++);
for (Map.Entry<String, String> e : m.entrySet()) {
System.out.println("\t" + e.getKey() + " -> " + e.getValue());
}
}
}
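As hinted at above, a single sweep is also possible by collecting straight into a map with Collectors.toMap() and a merge function. A rough sketch for sequential use, keeping the same string-map representation (named ...SinglePass here only to distinguish it):
public static Collection<Map<String, String>>
        mergeInventoryPerItemNumberSinglePass(Collection<Map<String, String>> m) {
    Map<String, Map<String, String>> merged = m.stream()
            .collect(Collectors.toMap(
                    p -> p.get("itemNumber"),
                    p -> new HashMap<>(p), // mutable copy used as the accumulated value
                    (a, b) -> {
                        // concatenate distinct subtypes
                        a.merge("subtype", b.get("subtype"),
                                (x, y) -> x.contains(y) ? x : x + ", " + y);
                        // add up quantities
                        a.merge("quantity", b.get("quantity"),
                                (x, y) -> String.valueOf(Integer.parseInt(x) + Integer.parseInt(y)));
                        return a;
                    }));
    return new ArrayList<>(merged.values());
}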

Related

Group strings into multiple groups when using stream groupingBy

A simplified example of what I am trying to do:
Suppose I have a list of strings which needs to be grouped into 4 groups depending on whether specific substrings are contained. If a string contains Foo it should fall into the group FOO, if it contains Bar it should fall into the group BAR, and if it contains both it should appear in both groups.
List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");
A naive approach for the above input doesn't work as expected since the string is grouped into the first matching group:
Map<String,List<String>> result1 =
strings.stream()
.collect(Collectors.groupingBy(
str -> str.contains("Foo") ? "FOO" :
str.contains("Bar") ? "BAR" :
str.contains("Baz") ? "BAZ" : "DEFAULT"));
result1 is
{FOO=[Foo, FooBar, FooBarBaz], DEFAULT=[XXX]}
where as the desired result should be
{FOO=[Foo, FooBar, FooBarBaz], BAR=[FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
After searching for a while I found another approach which comes close to my desired result, but not quite:
Map<String,List<String>> result2 =
List.of("Foo", "Bar", "Baz", "Default").stream()
.flatMap(str -> strings.stream().filter(s -> s.contains(str)).map(s -> new String[]{str.toUpperCase(), s}))
.collect(Collectors.groupingBy(arr -> arr[0], Collectors.mapping(arr -> arr[1], Collectors.toList())));
System.out.println(result2);
result2 is
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz]}
While this correctly groups strings containing the substrings into the needed groups, the strings which don't contain any of the substrings, and therefore should fall into the default group, are ignored. The desired result is as already mentioned above (order doesn't matter):
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
For now I'm using both result maps and doing an extra step:
result2.put("DEFAULT", result1.get("DEFAULT"));
Can the above be done in one step? Is there a better approach better than what I have above?
This is ideal for using mapMulti. mapMulti takes a BiConsumer of the streamed value and a Consumer.
The Consumer is used to simply place something back on the stream. This was added to Java because flatMap can incur undesirable overhead.
This works by building a String array, as you did before, of the token and the containing string, and collecting (also as you did before). If a key is found in the string, accept a String array with that key and the containing string. Otherwise, accept a String array with the default key and the string.
List<String> strings =
List.of("Foo", "FooBar", "FooBarBaz", "XXX", "YYY");
Map<String, List<String>> result = strings.stream()
        .<String[]>mapMulti((str, consumer) -> {
            boolean found = false;
            String temp = str.toUpperCase();
            for (String token : List.of("FOO", "BAR", "BAZ")) {
                if (temp.contains(token)) {
                    consumer.accept(new String[] { token, str });
                    found = true;
                }
            }
            if (!found) {
                consumer.accept(new String[] { "DEFAULT", str });
            }
        })
        .collect(Collectors.groupingBy(arr -> arr[0],
                Collectors.mapping(arr -> arr[1], Collectors.toList())));
result.entrySet().forEach(System.out::println);
prints
BAR=[FooBar, FooBarBaz]
FOO=[Foo, FooBar, FooBarBaz]
BAZ=[FooBarBaz]
DEFAULT=[XXX, YYY]
Keep in mind that streams are meant to make your coding world easier. But sometimes, a regular loop using some Java 8 constructs is all that is needed. Outside of an academic exercise, I would probably do the task like so.
Map<String,List<String>> result2 = new HashMap<>();
for (String str : strings) {
boolean added = false;
String temp = str.toUpperCase();
for (String token : List.of("FOO","BAR","BAZ")) {
if(temp.contains(token)) {
result2.computeIfAbsent(token, v->new ArrayList<>()).add(str);
added = true;
}
}
if (!added) {
result2.computeIfAbsent("DEFAULT", v-> new ArrayList<>()).add(str);
}
}
Instead of operating with strings "Foo", "Bar", etc. and their corresponding uppercase versions, it would be more convenient and cleaner to define an enum.
Let's call it Keys:
public enum Keys {
FOO("Foo"), BAR("Bar"), BAZ("Baz"), DEFAULT("");
private static final Set<Keys> nonDefaultKeys = EnumSet.range(FOO, BAZ); // Set of enum constants (does not include DEFAULT), needed to avoid creating an EnumSet or array of constants via `values()` at every invocation of getKeys()
private String keyName;
Keys(String keyName) {
this.keyName = keyName;
}
public static List<String> getKeys(String str) {
List<String> keys = nonDefaultKeys.stream()
.filter(key -> str.contains(key.keyName))
.map(Enum::name)
.toList();
// if non-default keys not found, i.e. keys.isEmpty() - return the DEFAULT
return keys.isEmpty() ? List.of(DEFAULT.name()) : keys;
}
}
It has a method getKeys(String) which expects a string and returns a list of keys to which the given string should be mapped.
By using the functionality encapsulated in the Keys enum, we can create a map of strings split into groups corresponding to the names of the Keys constants, using collect(supplier, accumulator, combiner).
main()
public static void main(String[] args) {
List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");
Map<String, List<String>> stringsByGroup = strings.stream()
.collect(
HashMap::new, // mutable container - which will contain results of mutable reduction
(Map<String, List<String>> map, String next) -> Keys.getKeys(next)
.forEach(key -> map.computeIfAbsent(key, k -> new ArrayList<>()).add(next)), // accumulator function - defines how to store stream elements into the container
(left, right) -> right.forEach((k, v) ->
left.merge(k, v, (oldV, newV) -> { oldV.addAll(newV); return oldV; }) // combiner function - defines how to merge container while executing the stream in parallel
));
stringsByGroup.forEach((k, v) -> System.out.println(k + " -> " + v));
}
Output:
BAR -> [FooBar, FooBarBaz]
FOO -> [Foo, FooBar, FooBarBaz]
BAZ -> [FooBarBaz]
DEFAULT -> [XXX]
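For comparison, the same Keys.getKeys() helper also works with flatMap() and the classic groupingBy()/mapping() collectors; a sketch:
Map<String, List<String>> stringsByGroup = strings.stream()
        // pair every string with each group name it belongs to
        .flatMap(s -> Keys.getKeys(s).stream().map(key -> Map.entry(key, s)))
        .collect(Collectors.groupingBy(Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toList())));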

Java stream collect to Map<String, Map<Integer, MyObject>>

I'm using Java 11 and I have a List<MyObject> called myList of the following object:
public class MyObject {
private final String personalId;
private final Integer rowNumber;
private final String description;
<...>
}
and I want to use streams to collect these objects into a Map<String, Map<Integer, List<MyObject>>>
(that is, Map<personalId, Map<rowNumber, List<MyObject>>>), and I don't want to use Collectors.groupingBy() because it has issues with null values.
I tried to do it using Collectors.toMap(), but it seems that it is not possible to do it
myList
.stream()
.Collectors.toMap(s -> s.getPersonalId(), s -> Collectors.toMap(t-> s.getRowNumber(), ArrayList::new))
My question: is it possible to build a Map<String, Map<Integer, List<MyObject>>> using streams without Collectors.groupingBy(), or should I write a full method myself?
In your case I would create the maps first and then loop through the elements in this list as shown:
Map<String, List<MyObject>> rows = new HashMap<>();
list.forEach(element -> rows.computeIfAbsent(element.personalId, s -> new ArrayList<>()).add(element));
You can use computeIfAbsent in order to create a new list/map as a value of the map before you can put your data in.
The same goes for the second data type you created:
Map<String, Map<Integer, MyObject>> persons = new HashMap<>();
list.forEach(element -> persons.computeIfAbsent(element.personalId, s -> new HashMap<>()).put(element.rowNumber, element));
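For the nested Map<String, Map<Integer, List<MyObject>>> asked about in the question, the same computeIfAbsent() idea can simply be chained; a sketch:
Map<String, Map<Integer, List<MyObject>>> grouped = new HashMap<>();
list.forEach(element -> grouped
        .computeIfAbsent(element.personalId, id -> new HashMap<>())     // inner map per personalId
        .computeIfAbsent(element.rowNumber, row -> new ArrayList<>())   // list per rowNumber
        .add(element));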
Here is a way to solve this with streams. But note that the objects must have a unique personalId/rowNumber:
Map<String, List<MyObject>> rows = list.stream().collect(
Collectors.toMap(element -> element.personalId,
element -> new ArrayList<MyObject>(Arrays.asList(element))));
As well as for the other map:
Map<String, Map<Integer, MyObject>> persons = list.stream().collect(
Collectors.toMap(e -> e.personalId,
e -> new HashMap<>(Map.of(e.rowNumber, e))));
Map<String, Map<Integer, List<MyObject>>> object using streams without using Collectors.groupingBy()
By looking at the map type, I can assume that a combination of personalId and rowNumber is not unique, i.e. there could be multiple occurrences of each combination (otherwise you wouldn't need to group objects into lists), and there could be different rowNumbers associated with each personalId. Only if these conclusions are correct does this nested collection have even a vague justification for existence.
Otherwise, you can probably substitute multiple collections for different use-cases, for example Map<String, MyObject>, object by id (if every id is unique):
Map<String, MyObject> objById = myList.stream()
.collect(Collectors.toMap(
MyObject::getPersonalId,
Function.identity()
));
I'll proceed assuming that you really need such a nested collection.
Now, let's address the issue with groupingBy(). Internally, this collector uses Objects.requireNonNull() to make sure that a key produced by the classifier function is non-null.
If you tried to use it and failed because of its hostility to null keys, that implies that either personalId, or rowNumber, or both can be null.
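To see that concretely, here is a minimal sketch (reusing the MyObject constructor from the example below) of how a null personalId surfaces:
List<MyObject> sample = List.of(new MyObject(null, 1, "desc"));
Map<String, List<MyObject>> byId = sample.stream()
        .collect(Collectors.groupingBy(MyObject::getPersonalId));
// throws NullPointerException, because the classifier returned a null key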
Now let's make a small detour and ask: what does it imply if a property that is considered significant (you're going to use personalId and rowNumber to access the data, hence they are definitely important) is null? And what does null mean in the first place?
null signifies the absence of data and nothing else. If in your application null values have an additional special meaning, that's a design flaw. If properties that are significant for managing your data for some reason turn out to be null, you need to fix that.
You might claim that you're quite comfortable with null values. If so, let's pause for a moment and imagine a situation: a person enters a restaurant, orders a soup, and asks the waiter to bring them a fork instead of a spoon (because they have had a negative experience with a spoon, and they are comfortable enough with a fork).
null isn't data; it's an indicator of the absence of data, and storing null is an antipattern. If you're storing null, it obtains a special meaning because you're forced to treat it separately.
To replace personalId and rowNumber that are equal to null with default values we need only one line of code.
public static void replaceNullWithDefault(List<MyObject> list,
String defaultId,
Integer defaultNum) {
list.replaceAll(obj -> obj.getPersonalId() != null && obj.getRowNumber() != null ? obj :
new MyObject(Objects.requireNonNullElse(obj.getPersonalId(), defaultId),
Objects.requireNonNullElse(obj.getRowNumber(), defaultNum),
obj.getDescription()));
}
After that, we can use the proper tool instead of eating soup with a fork; that is, we can process the list data with groupingBy():
public static void main(String[] args) {
List<MyObject> myList = new ArrayList<>(
List.of(
new MyObject("id1", 1, "desc1"),
new MyObject("id1", 1, "desc2"),
new MyObject("id1", 2, "desc3"),
new MyObject("id1", 2, "desc4"),
new MyObject("id2", 1, "desc5"),
new MyObject("id2", 1, "desc6"),
new MyObject("id2", 1, "desc7"),
new MyObject(null, null, "desc8")
));
replaceNullWithDefault(myList, "id0", 0); // replacing null values
Map<String, Map<Integer, List<MyObject>>> byIdAndRow = myList // generating a map
.stream()
.collect(Collectors.groupingBy(
MyObject::getPersonalId,
Collectors.groupingBy(MyObject::getRowNumber)
));
byIdAndRow.forEach((k, v) -> { // printing the map
System.out.println(k);
v.forEach((k1, v1) -> System.out.println(k1 + " -> " + v1));
});
}
Output:
id0
0 -> [MyObject{'id0', 0, 'desc8'}]
id2
1 -> [MyObject{'id2', 1, 'desc5'}, MyObject{'id2', 1, 'desc6'}, MyObject{'id2', 1, 'desc7'}]
id1
1 -> [MyObject{'id1', 1, 'desc1'}, MyObject{'id1', 1, 'desc2'}]
2 -> [MyObject{'id1', 2, 'desc3'}, MyObject{'id1', 2, 'desc4'}]
Now, please pay attention to the usage of groupingBy(): notice its conciseness. That's the right tool, and it can generate even such a clumsy nested map.
And now we're going to eat the soup with a fork! All null properties would be used as is:
public static void main(String[] args) {
List<MyObject> myList = new ArrayList<>(
List.of(
new MyObject("id1", 1, "desc1"),
new MyObject("id1", 1, "desc2"),
new MyObject("id1", 2, "desc3"),
new MyObject("id1", 2, "desc4"),
new MyObject("id2", 1, "desc5"),
new MyObject("id2", 1, "desc6"),
new MyObject("id2", 1, "desc7"),
new MyObject(null, null, "desc8")
));
Map<String, Map<Integer, List<MyObject>>> byIdAndRow = myList // generating a map
.stream()
.collect(
HashMap::new,
(Map<String, Map<Integer, List<MyObject>>> mapMap, MyObject next) ->
mapMap.computeIfAbsent(next.getPersonalId(), k -> new HashMap<>())
.computeIfAbsent(next.getRowNumber(), k -> new ArrayList<>())
.add(next),
(left, right) -> right.forEach((k, v) -> left.merge(k, v,
(oldV, newV) -> {
newV.forEach((k1, v1) -> oldV.merge(k1, v1,
(listOld, listNew) -> {
listOld.addAll(listNew);
return listOld;
}));
return oldV;
}))
);
byIdAndRow.forEach((k, v) -> { // printing the map
System.out.println(k);
v.forEach((k1, v1) -> System.out.println(k1 + " -> " + v1));
});
}
Output:
null
null -> [MyObject{'null', null, 'desc8'}]
id2
1 -> [MyObject{'id2', 1, 'desc5'}, MyObject{'id2', 1, 'desc6'}, MyObject{'id2', 1, 'desc7'}]
id1
1 -> [MyObject{'id1', 1, 'desc1'}, MyObject{'id1', 1, 'desc2'}]
2 -> [MyObject{'id1', 2, 'desc3'}, MyObject{'id1', 2, 'desc4'}]

Sorting a List dynamically using attributes provided at Runtime

There's a Map with keys of type String and values represented by a list of objects, as follows:
Map<String, List<ScoreAverage>> averagesMap
ScoreAverage record:
public record ScoreAverage(
    @JsonProperty("average") double average,
    @JsonProperty("name") String name
) {}
The map holds data as follows :
{
"averagesMap":{
"A":[
{
"average":4.0,
"name":"Accounting"
},
{
"average":4.0,
"name":"company-wide"
},
{
"average":4.0,
"name":"Engineering"
}
],
"B":[
{
"average":3.0,
"name":"Engineering"
},
{
"average":3.0,
"name":"company-wide"
},
{
"average":3.0,
"name":"Accounting"
}
],
"C":[
{
"average":2.0,
"name":"company-wide"
},
{
"average":2.0,
"name":"Engineering"
},
{
"average":2.0,
"en":"Accounting"
}
],
"specialAverages":[
{
"average":2.5,
"name":"Engineering"
},
{
"average":2.5,
"name":"company-wide"
},
{
"average":2.5,
"name":"Accounting"
}
]
}
}
What I want to achieve is to sort dynamically each list of objects in a map using the name attribute in the order specified at runtime, for instance:
1st item of list -> company-wide
2nd item of list -> Engineering
3rd item of list -> Accounting
What would be the easiest way of doing this?
To achieve that, first you need to establish the desired order, so that it can be encapsulated in a variable and passed as a parameter at runtime to the method that will take care of sorting. With that, the sorting order will be dynamic, dependent on the provided argument.
In the code below, a List is being used for that purpose. Sorting is based on the indexes that each name occupies in the sortingRule list.
The next step is to create a Comparator based on it. I'm using the condition sortingRule.contains(score.name()) as a precaution for cases like typos, etc., where a name doesn't appear in the sortingRule. With that, all such objects will be placed at the end of the sorted list.
Comparator<ScoreAverage> dynamicComparator =
Comparator.comparingInt(score -> sortingRule.contains(score.name()) ?
sortingRule.indexOf(score.name()) :
sortingRule.size());
If we drop the condition, the comparator boils down to
Comparator.comparingInt(score -> sortingRule.indexOf(score.name()));
With that, all unidentified objects (if any) will be grouped at the beginning of a sorted list.
And finally, we need to sort every value in a map with this comparator.
Iterative implementation (note: defensive copy of every list is meant to preserve the source intact).
public static Map<String, List<ScoreAverage>> sortByRule(Map<String, List<ScoreAverage>> averagesMap,
List<String> sortingRule) {
Comparator<ScoreAverage> dynamicComparator =
Comparator.comparingInt(score -> sortingRule.contains(score.name()) ?
sortingRule.indexOf(score.name()) :
sortingRule.size());
Map<String, List<ScoreAverage>> result = new HashMap<>();
for (Map.Entry<String, List<ScoreAverage>> entry: averagesMap.entrySet()) {
List<ScoreAverage> copy = new ArrayList<>(entry.getValue());
copy.sort(dynamicComparator);
result.put(entry.getKey(), copy);
}
return result;
}
Stream based implementation (note: lists in the source map will not get modified, each entry will be replaced with a new one).
public static Map<String, List<ScoreAverage>> sortByRule(Map<String, List<ScoreAverage>> averagesMap,
List<String> sortingRule) {
Comparator<ScoreAverage> dynamicComparator =
Comparator.comparingInt(score -> sortingRule.contains(score.name()) ?
sortingRule.indexOf(score.name()) :
sortingRule.size());
return averagesMap.entrySet().stream()
.map(entry -> Map.entry(entry.getKey(),
entry.getValue().stream()
.sorted(dynamicComparator)
.collect(Collectors.toList())))
.collect(Collectors.toMap(Map.Entry::getKey,Map.Entry::getValue));
}
main()
public static void main(String[] args) {
Map<String, List<ScoreAverage>> averagesMap =
Map.of("A", List.of(new ScoreAverage(4.0, "Accounting"),
new ScoreAverage(4.0, "company-wide"),
new ScoreAverage(4.0, "Engineering")),
"B", List.of(new ScoreAverage(3.0, "Engineering"),
new ScoreAverage(3.0, "company-wide"),
new ScoreAverage(3.0, "Accounting")),
"C", List.of(new ScoreAverage(2.0, "company-wide"),
new ScoreAverage(2.0, "Engineering"),
new ScoreAverage(2.0, "Accounting")),
"specialAverages", List.of(new ScoreAverage(2.5, "Engineering"),
new ScoreAverage(2.5, "company-wide"),
new ScoreAverage(2.5, "Accounting")));
List<String> sortingRule = List.of("company-wide", "Engineering", "Accounting");
sortByRule(averagesMap, sortingRule).forEach((k, v) -> System.out.println(k + " : " + v));
}
Output
A : [ScoreAverage[average=4.0, name=company-wide], ScoreAverage[average=4.0, name=Engineering], ScoreAverage[average=4.0, name=Accounting]]
B : [ScoreAverage[average=3.0, name=company-wide], ScoreAverage[average=3.0, name=Engineering], ScoreAverage[average=3.0, name=Accounting]]
C : [ScoreAverage[average=2.0, name=company-wide], ScoreAverage[average=2.0, name=Engineering], ScoreAverage[average=2.0, name=Accounting]]
specialAverages : [ScoreAverage[average=2.5, name=company-wide], ScoreAverage[average=2.5, name=Engineering], ScoreAverage[average=2.5, name=Accounting]]
Update
It's also possible to combine the sorting rule encapsulated in a list with a Comparator that will be responsible for sorting elements that are not present in the sorting rule. Both the sorting rule and the comparator will be provided dynamically at runtime.
For that, the method signature has to be changed (the third parameter needs to be added):
public static Map<String, List<ScoreAverage>> sortByRule(Map<String, List<ScoreAverage>> averagesMap,
List<String> sortingRule,
Comparator<ScoreAverage> downstreamComparator)
And the comparator will look like that:
Comparator<ScoreAverage> dynamicComparator =
Comparator.<ScoreAverage>comparingInt(score -> sortingRule.contains(score.name()) ?
sortingRule.indexOf(score.name()) :
sortingRule.size())
.thenComparing(downstreamComparator);
It will group all objects with names contained in the sortingRule at the beginning of the resulting list; the rest will be sorted in accordance with the downstreamComparator.
The method call in main will look like this:
sortByRule(averagesMap, sortingRule, Comparator.comparing(ScoreAverage::name))
.forEach((k, v) -> System.out.println(k + " : " + v));
If you apply these changes and provide a sortingRule containing only one string, "company-wide", you'll get this output:
A : [ScoreAverage[average=4.0, name=company-wide], ScoreAverage[average=4.0, name=Accounting], ScoreAverage[average=4.0, name=Engineering]]
B : [ScoreAverage[average=3.0, name=company-wide], ScoreAverage[average=3.0, name=Accounting], ScoreAverage[average=3.0, name=Engineering]]
C : [ScoreAverage[average=2.0, name=company-wide], ScoreAverage[average=2.0, name=Accounting], ScoreAverage[average=2.0, name=Engineering]]
specialAverages : [ScoreAverage[average=2.5, name=company-wide], ScoreAverage[average=2.5, name=Accounting], ScoreAverage[average=2.5, name=Engineering]]

Can't convert tuple list to hashmap java

I want to convert a javax.persistence.Tuple into a HashMap, but like this, it inserts the last element of the tuple and takes also the alias and data type. How can I improve this method so it takes values of the tuple?
public Map<String, Object> tuplesToMap(List<Tuple> data){
Map<String, Object> values = new HashMap<>();
data.forEach(tuple -> {
tuple.getElements().forEach(
element -> {
values.put(element.getAlias(), tuple.get(element));
}
);
});
return values;
}
With Java 8 it is simply:
return data.stream()
.collect(Collectors.toMap(
t -> t.get(0, String.class),
t -> t.get(1, Object.class)));
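Note that Collectors.toMap() without a merge function throws an IllegalStateException when two tuples produce the same key; if duplicates are possible, a third argument decides which value wins. A sketch that keeps the last one:
return data.stream()
        .collect(Collectors.toMap(
                t -> t.get(0, String.class),   // first column as key
                t -> t.get(1, Object.class),   // second column as value
                (first, second) -> second));   // on duplicate keys, keep the latest value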
This seems to work:
public static List<Map<String/*UPPERCASE*/, Object>> jpaTuplesToMaps(
List<javax.persistence.Tuple> data
){
return data.stream()
.map(tuple -> { // per each tuple of the input List
// creating a new HashMap
Map<String, Object> resultItem = new HashMap<>();
// filling the created HashMap with values of
tuple.getElements().forEach( // each column of the tuple
col -> { resultItem.put(col.getAlias(), tuple.get(col)); }
);
// returning the created HashMap instead of the current Tuple
return resultItem;
})
// collecting & returning all the created HashMap-s as a List
.collect(Collectors.toList());
}
But usually both single and list conversions are required, so let's combine them:
public static Map<String/*UPPERCASE*/, Object> jpaTupleToMap(
javax.persistence.Tuple data /*CASE INSENSITIVE*/
){
Map<String, Object> result =
new HashMap<>(); // exactly HashMap since it can handle NULL keys & values
data.getElements().forEach(
col -> { result.put(col.getAlias(), data.get(col)); }
);
return result;
}
//-------------------------
public static List<Map<String/*UPPERCASE*/, Object>> jpaTuplesToMaps(
List<javax.persistence.Tuple> data /*CASE INSENSITIVE*/
){
return data.stream() // List<Tuple> -> Tuple1,..TupleN
.map(tuple -> jpaTupleToMap(tuple)) // Tuple1 -> HashMap1,..TupleN -> HashMapN
.collect(Collectors.toList()); // HashMap1,..HashMapN -> List
}
The element.getAlias() you're using as the key for the hashmap is probably the same for some of the elements.
Map keys are unique, meaning that if you insert the entry (1, "one") and then (1, "two"), the first value will be overridden by the latter. If you want to have multiple values mapped to one key, use a Map<String, Collection<Object>>, or a Multimap from Guava, which is essentially the same thing.
You can insert into the multimap like this: if the key is not in the map, create a new ArrayList and add it to the map, otherwise use the existing one. Then insert the value into the list:
values
    .computeIfAbsent(element.getAlias(), k -> new ArrayList<>())
    .add(tuple.get(element));
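Putting that together, the original method could be rewritten so that it keeps every value per alias; a sketch, with the hypothetical name tuplesToMultiValueMap and a Map<String, List<Object>> return type:
public Map<String, List<Object>> tuplesToMultiValueMap(List<Tuple> data) {
    Map<String, List<Object>> values = new HashMap<>();
    data.forEach(tuple ->
            tuple.getElements().forEach(element ->
                    // one list per alias, so duplicate aliases no longer overwrite each other
                    values.computeIfAbsent(element.getAlias(), k -> new ArrayList<>())
                          .add(tuple.get(element))));
    return values;
}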

Loop through n number of maps

Right now I have the following code, which takes 2 recipes and finds duplicates in the recipes and "merges" them.
public void mergeIngredients(Recipe recipe1, Recipe recipe2) {
Map<String, Ingredients> recipe1Map = recipe1.getIngredientsMap();
Map<String, Ingredients> recipe2Map = recipe2.getIngredientsMap();
for (Map.Entry<String, Ingredients> s : recipe1Map.entrySet()) {
if (recipe2Map.containsKey(s.getKey())) {
double newValue = recipe1.getAmount(s.getKey()) + recipe2.getAmount(s.getKey());
System.out.println(newValue);
}
}
}
I want to change this code so that, instead of only being able to check 2 maps against each other, it can take N maps and compare all of them.
Example: The user inputs 8 different recipes, it should loop through all of these and merge ingredients if duplicates are found. What is the best way to achieve this?
I would first extract all keys from all Maps into a Set. This gives you all unique ingredient keys.
Then iterate that Set and get all the values from all the recipes and merge them.
For example:
public void mergeIngredients(Set<Recipe> recipes) {
Set<String> keys = recipes.stream() //
.map(Recipe::getIngredientsMap) // Get the map
.flatMap(m -> m.keySet().stream()) // Get all keys and make 1 big stream
.collect(Collectors.toSet()); // Collect them to a set
for (String k : keys)
{
double newValue = recipes.stream()
        .map(Recipe::getIngredientsMap)
        .map(m -> m.get(k))          // null if this recipe lacks the ingredient
        .filter(Objects::nonNull)    // skip recipes that don't contain it
        .mapToDouble(Ingredients::getAmount)
        .sum();
System.out.println(newValue);
}
}
You can probably do this more efficiently, but I think this is easier to follow.
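For instance, a single pass with flatMap() and groupingBy() plus summingDouble() avoids re-streaming the recipes for every key; a sketch, assuming Ingredients exposes getAmount():
Map<String, Double> totals = recipes.stream()
        .map(Recipe::getIngredientsMap)
        .flatMap(map -> map.entrySet().stream())           // all (key, Ingredients) entries
        .collect(Collectors.groupingBy(
                Map.Entry::getKey,                          // group by ingredient key
                Collectors.summingDouble(e -> e.getValue().getAmount())));
totals.forEach((ingredient, amount) -> System.out.println(ingredient + " -> " + amount));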
You can merge multiple maps using Java 8 streams, handling duplicate keys with a merge function:
public void mergerMap() throws Exception {
Map<String, Integer> m1 = ImmutableMap.of("a", 2, "b", 3);
Map<String, Integer> m2 = ImmutableMap.of("a", 3, "c", 4);
Map<String, Integer> mx = Stream.of(m1, m2)
.map(Map::entrySet) // converts each map into an entry set
.flatMap(Collection::stream) // converts each set into an entry stream, then
// "concatenates" it in place of the original set
.collect(
Collectors.toMap( // collects into a map
Map.Entry::getKey, // where each entry is based
Map.Entry::getValue, // on the entries in the stream
Integer::max // such that if a value already exist for
// a given key, the max of the old
// and new value is taken
)
)
;
Map<String, Integer> expected = ImmutableMap.of("a", 3, "b", 3, "c", 4);
assertEquals(expected, mx);
}
I don't really see the need for a Map for your ingredients, so here is an alternative solution.
If you make your Ingredients class implement equals and hashCode, you can use it directly in a Set. You will of course also need a method in Recipe that returns all ingredients as a List. Then the following will return all unique ingredients:
Set<Ingredients> merge(List<Recipe> recipies) {
    return recipies.stream()
            .flatMap(recipe -> recipe.allIngredients().stream())
            .collect(Collectors.toSet());
}
