What would be the simplest way to merge Map entries with keys like "55", "55004", "550009", "550012" into one key, "55", whose value is the sum of all those entries' values?
I'm trying to think of ways to use containsKey or trimming the key, but it's hard to see how. Maybe a flatMap to flatten the map and then a reduce?
@Test
public void testM() {
    Map<String, Double> map1 = new HashMap<>();
    map1.put("55", 3453.34);
    map1.put("55001", 5322.44);
    map1.put("55003", 10112.44);
    map1.put("55004", 15555.74);
    map1.put("77", 1000.74);   // instead of 1000 it should be ~1500
    map1.put("77004", 444.74);
    map1.put("77003", 66.74);
    // in the real example I'll need "77" and "88" and "101" etc.,
    // all of which have little pieces like 77004, 77006

    // INCORRECT: this reduce only produces one grand total (a Double),
    // it doesn't group anything by the "55"/"77" prefix.
    // What I actually want is to REDUCE INTO ONE KEY that startsWith "55".
    Double total = map1.entrySet().stream()
            .map(Map.Entry::getValue)
            .reduce(0d, Double::sum);
    System.out.println("Total: " + total);

    // RESULT should be:
    // Map<String, Double> result = { "55": TOTAL }
    // the real example might be "77": TOTAL, "88": TOTAL, "101": TOTAL
    // (reducing away the "77004", "88005" etc.)
}
Basically, I want code that rolls the sub-item totals up into the bigger ("parent") key.
It looks like you could use Collectors.groupingBy.
It takes a Function that decides which elements belong to the same group. For elements of the same group the function must always return the same value, and that value becomes the key in the resulting map. In your case you want to group elements whose keys share the same first two characters, which suggests mapping each entry to entry.getKey().substring(0, 2).
Once we have a way to determine which elements belong to the same group, we can specify how the map should collect them. By default they are collected into a list, giving a key -> [element0, element1, ...] mapping.
But we can specify our own way of handling the elements of each group by providing a downstream Collector. Since we want the sum of the values, we can use Collectors.summingDouble(mappingToDouble).
DEMO:
Map<String, Double> map1 = new HashMap<>();
map1.put("661", 123d);
map1.put("662", 321d);
map1.put("55", 3453.34);
map1.put("55001", 5322.44);
map1.put("55003", 10112.44);
map1.put("55004", 15555.74);
Map<String, Double> map = map1.entrySet()
        .stream()
        .collect(Collectors.groupingBy(
                entry -> entry.getKey().substring(0, 2),
                Collectors.summingDouble(Map.Entry::getValue)));
System.out.println(map);
Output: {66=444.0, 55=34443.96}
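One caveat: a fixed substring(0, 2) only works while every parent key is two characters long. For parents like "101" from the question you would need a different prefix rule. A minimal sketch, assuming the set of parent prefixes is known up front (the parents set below is hypothetical, and Set.of needs Java 9+):
// Hypothetical set of known parent keys
Set<String> parents = Set.of("55", "77", "88", "101");

Map<String, Double> totals = map1.entrySet().stream()
        .collect(Collectors.groupingBy(
                e -> parents.stream()
                        .filter(p -> e.getKey().startsWith(p))
                        .findFirst()
                        .orElse(e.getKey()),   // no known parent: keep the key as-is
                Collectors.summingDouble(Map.Entry::getValue)));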
Related
I need to make a third HashMap based on the values from the PeopleAndNumbers and PeopleAndGroups hashmaps. The third HashMap should only have the 3 groups as keys, and the total amounts from the people in each group as values.
(Also worth noting that the keys in the first two maps are the same.)
Here are the contents of the first two maps:
PeopleAndNumbers: {p1=1, p2=3, p3=2, p4=3, p5=1, p6=2}
PeopleAndGroups: {p1=GroupA, p2=GroupB, p3=GroupC, p4=GroupB, p5=GroupC, p6=GroupA}
I need to make a third HashMap that'd print out like this:
CombineMap: {GroupA=3, GroupB=6, GroupC=3}
Here is what the code looks like so far:
import java.util.HashMap;

public class HashmapTest {

    public static void main(String[] args) {
        HashMap<String, Integer> PeopleAndNumbers = new HashMap<String, Integer>();
        HashMap<String, String> PeopleAndGroups = new HashMap<String, String>();

        PeopleAndNumbers.put("p1", 1);
        PeopleAndNumbers.put("p2", 3);
        PeopleAndNumbers.put("p3", 2);
        PeopleAndNumbers.put("p4", 3);
        PeopleAndNumbers.put("p5", 1);
        PeopleAndNumbers.put("p6", 2);

        PeopleAndGroups.put("p1", "GroupA");
        PeopleAndGroups.put("p2", "GroupB");
        PeopleAndGroups.put("p3", "GroupC");
        PeopleAndGroups.put("p4", "GroupB");
        PeopleAndGroups.put("p5", "GroupC");
        PeopleAndGroups.put("p6", "GroupA");

        System.out.println(PeopleAndNumbers);
        System.out.println(PeopleAndGroups);

        HashMap<String, Integer> CombineMap = new HashMap<String, Integer>();
        // Insert method to do this here. How would I go about this?

        System.out.println("Expected Output for CombineMap should be");
        System.out.println("{GroupA=3, GroupB=6, GroupC=3}");
        System.out.println(CombineMap);
    }
}
If I understand you correctly, you want to sum Numbers by Group, using the common keys to join them. If so, you can do it pretty easily with streams:
Map<String, Integer> combined = PeopleAndGroups.entrySet()
        .stream()
        .collect(Collectors.groupingBy(e -> e.getValue(),
                Collectors.summingInt(e -> PeopleAndNumbers.get(e.getKey()))));
Or you can iterate and merge entries into your destination map:
Map<String, Integer> combined = new HashMap<>();
PeopleAndGroups.forEach((k, v) ->
combined.merge(v, PeopleAndNumbers.get(k), Integer::sum));
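For the sample data in the question, both versions produce GroupA=3, GroupB=6, GroupC=3 (a HashMap's iteration order isn't guaranteed, so the printed order may vary).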
To achieve that, you need to iterate over the entries of the PeopleAndGroups map and do the following for each entry (a full sketch follows after these steps):
Check whether combinedMap already has a key equal to the value of the current entry.
If the key doesn't exist, put the key with value 0: combinedMap.put(entry.getValue(), 0)
Get the value for the entry's key from PeopleAndNumbers and call it n: int n = PeopleAndNumbers.get(entry.getKey())
Add n to the old value in your result map:
combinedMap.put(entry.getValue(), combinedMap.get(entry.getValue()) + n)
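Put together, that loop could look roughly like this (reusing the variable names from the question and the steps above):
HashMap<String, Integer> combinedMap = new HashMap<>();
for (Map.Entry<String, String> entry : PeopleAndGroups.entrySet()) {
    // make sure the group key exists, starting at 0
    if (!combinedMap.containsKey(entry.getValue())) {
        combinedMap.put(entry.getValue(), 0);
    }
    // look up this person's number and add it to the group's running total
    int n = PeopleAndNumbers.get(entry.getKey());
    combinedMap.put(entry.getValue(), combinedMap.get(entry.getValue()) + n);
}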
Say I have mappings from Strings to mappings from Strings to Integer, such as
Map<String, Map<String, Integer>> myMap1 = new HashMap<>();
myMap1.put("A", Map.of("X", 1));
myMap1.put("B", Map.of("Y", 1));
Map<String, Map<String, Integer>> myMap2 = new HashMap<>();
myMap2.put("B", Map.of("Y", 3));
I would like to merge these mappings such that I get a mapping where the key is an inner map's key and the value is the average of the inner maps' values for that key.
So the output for the example above would be
{"X": 1, "Y": 2}
We can discard the outer map's key altogether.
What is the nicest way to do this in Java? I thought there might be some nice way to do it with the Collectors.groupingBy method, but I am quite inexperienced with this.
I’m going to assume there might be more than two maps, so let’s make a List out of them:
Collection<Map<String, Map<String, Integer>>> myMaps =
List.of(myMap1, myMap2);
Then we can use flatMap on the values() of each Map, which gives us a stream of Map<String, Integer> maps.
We can obtain the entrySet() of each of those, then apply flatMap to the streams of those entry sets, to give us a single Stream of Map.Entry<String, Integer> objects, which we can then group.
There is a groupingBy method which takes a second Collector for customizing the values of the groups, by collecting all of the grouped values seen. We can use that to get our averages, using an averaging collector.
Map<String, Double> averages = myMaps.stream()
        .flatMap(map -> map.values().stream())             // stream of Map<String, Integer>
        .flatMap(innerMap -> innerMap.entrySet().stream()) // stream of Map.Entry<String, Integer>
        .collect(Collectors.groupingBy(Map.Entry::getKey,  // group by String key
                Collectors.averagingInt(Map.Entry::getValue))); // value for each key = average of its Integers
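For the example maps above, this yields X=1.0 and Y=2.0; the values come out as Double because of the averaging collector.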
I am pretty new to Java, moving from C#. I have the following class:
class Resource {
String name;
String category;
String component;
String group;
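    // getters such as getCategory(), getComponent() and getGroup() are assumed to exist (used below)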
}
I want to know the following numbers:
1. Count of resources in each category.
2. Distinct count of components in each category. (component names can be duplicate)
3. Count of resources grouped by category and group.
I was able to achieve a little bit of success using Collectors.groupingBy. However, the result is always of the form
Map<String, List<Resource>>
so to get the counts I have to iterate the key set and compute the sizes.
Using C# LINQ, I can easily compute all of the above metrics.
I am assuming there is a better way to do this in Java as well. Please advise.
For #1, I'd use Collectors.groupingBy along with Collectors.counting:
Map<String, Long> resourcesByCategoryCount = resources.stream()
.collect(Collectors.groupingBy(
Resource::getCategory,
Collectors.counting()));
This groups Resource elements by category, counting how many of them belong to each category.
For #2, I wouldn't use streams. Instead, I'd use the Map.computeIfAbsent operation (introduced in Java 8):
Map<String, Set<String>> distinctComponentsByCategory = new LinkedHashMap<>();
resources.forEach(r -> distinctComponentsByCategory.computeIfAbsent(
        r.getCategory(),
        k -> new HashSet<>())
    .add(r.getComponent()));
This first creates a LinkedHashMap (which preserves insertion order). Then the Resource elements are iterated and put into this map in such a way that they are grouped by category, with each component added to the HashSet mapped to its category. As sets don't allow duplicates, there won't be duplicated components for any category. The distinct count of components is then simply the size of each set.
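If you then want those counts themselves in a map, one straightforward follow-up (just a sketch) is to copy the set sizes over:
Map<String, Integer> distinctComponentCountByCategory = new LinkedHashMap<>();
distinctComponentsByCategory.forEach((category, components) ->
        distinctComponentCountByCategory.put(category, components.size()));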
For #3, I'd again use Collectors.groupingBy along with Collectors.counting, but I'd use a composite key to group by:
Map<List<String>, Long> resourcesByCategoryAndGroup = resources.stream()
.collect(Collectors.groupingBy(
r -> Arrays.asList(r.getCategory(), r.getGroup()), // or List.of
Collectors.counting()));
This groups Resource elements by category and group, counting how many of them belong to each (category, group) pair. For the grouping key, a two-element List<String> is used, with the category as its 1st element and the group as its 2nd element.
Or, instead of using a composite key, you could use nested grouping:
Map<String, Map<String, Long>> resourcesByCategoryAndGroup = resources.stream()
.collect(Collectors.groupingBy(
Resource::getCategory,
Collectors.groupingBy(
Resource::getGroup,
Collectors.counting())));
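With the nested form, a single count can be read with resourcesByCategoryAndGroup.get(category).get(group), whereas the composite-key form is looked up with the same two-element list used as the key, e.g. Arrays.asList(category, group) (the names here are just placeholders).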
Thanks Fedrico for the detailed response. #1 and #3 worked great. For #2, I would like to see the output as a Map. Here's the code I'm currently using to get that count, in the old style without collectors:
HashMap<String, HashSet<String>> map = new HashMap<>();
for (Resource resource : resources) {
    if (map.containsKey(resource.getCategory())) {
        map.get(resource.getCategory()).add(resource.getComponent());
    } else {
        HashSet<String> componentSet = new HashSet<>();
        componentSet.add(resource.getComponent());
        map.put(resource.getCategory(), componentSet);
    }
}

log.info("Distinct component count in each category");
for (Map.Entry<String, HashSet<String>> entry : map.entrySet()) {
    log.info("{} - {}", entry.getKey(), entry.getValue().size());
}
I have two maps that use the same object type as keys. I want to merge these two maps by key using streams. When a key exists in both maps, I want the resulting map to apply a formula to the two values. When a key exists in only one map, I want the value to be 0.
Map<MyKey, Integer> map1;
Map<MyKey, Integer> map2;
Map<MyKey, Double> result =
        Stream.concat(map1.entrySet().stream(), map2.entrySet().stream())
              .collect(Collectors.toMap(
                      Map.Entry::getKey, Map.Entry::getValue,
                      (val1, val2) -> (val1 / (double) val2) * 12D));
This will use the formula if the key exists in both maps, but I need an easy way to set the values for keys that only existed in one of the two maps to 0D.
I can do this by doing set math and trying to calculate the inner-join of the two keySets, and then subtracting the inner-join result from the full outer join of them... but this is a lot of work that feels unnecessary.
Is there a better approach to this, or something I can easily do using the Streaming API?
Here is a simple way: stream only the keys, look up the values as needed, and leave the original maps unchanged.
Map<String, Double> result =
        Stream.concat(map1.keySet().stream(), map2.keySet().stream())
              .distinct()
              .collect(Collectors.toMap(
                      k -> k,
                      k -> map1.containsKey(k) && map2.containsKey(k)
                              ? map1.get(k) * 12d / map2.get(k)
                              : 0d));
Test
Map<String, Integer> map1 = new HashMap<>();
Map<String, Integer> map2 = new HashMap<>();
map1.put("A", 1);
map1.put("B", 2);
map2.put("A", 3);
map2.put("C", 4);
// code above here
result.entrySet().forEach(System.out::println);
Output
A=4.0
B=0.0
C=0.0
Note that for the next solution (using replaceAll) to work, your initial map1 should be a Map<MyKey, Double>. I'll try to find another solution that will work if the values are initially Integer.
You don't even need streams for this! You should simply be able to use Map#replaceAll to modify one of the Maps:
map1.replaceAll((k, v) -> map2.containsKey(k) ? 12D * v / map2.get(k) : 0D);
Now, you just need to add every key to map1 that is in map2, but not map1:
map2.forEach((k, v) -> map1.putIfAbsent(k, 0D));
If you don't want to modify either of the Maps, then you should create a deep copy of map1 first.
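A minimal sketch of that copy-first variant (assuming, as noted above, map1 is a Map<MyKey, Double>; since Double is immutable, a plain copy of the map is enough):
Map<MyKey, Double> result = new HashMap<>(map1);   // leave map1 untouched
result.replaceAll((k, v) -> map2.containsKey(k) ? 12D * v / map2.get(k) : 0D);
map2.forEach((k, v) -> result.putIfAbsent(k, 0D));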
Stream.concat is not the right approach here, as you are throwing the elements of the two maps together, creating the need to separate them afterward.
You can simplify this by directly doing the intended task: process the intersection of the keys by applying your formula, and handle the remaining keys differently. E.g. when you stream over one map instead of the concatenation of two maps, you only have to check for the key's presence in the other map to either apply the formula or use zero. Then, the keys only present in the second map need to be put with zero in a second step:
Map<MyKey, Double> result = map1.entrySet().stream()
        .collect(Collectors.collectingAndThen(
                Collectors.toMap(Map.Entry::getKey, e -> {
                    Integer val2 = map2.get(e.getKey());
                    return val2 == null ? 0.0 : e.getValue() * 12.0 / val2;
                }),
                m -> {
                    Map<MyKey, Double> rMap = m.getClass() == HashMap.class ? m : new HashMap<>(m);
                    map2.keySet().forEach(key -> rMap.putIfAbsent(key, 0.0));
                    return rMap;
                }));
This clearly suffers from the fact that Streams don’t offer convenience methods for processing map entries. Also, we have to deal with the unspecified map type for the second processing step. If we provided a map supplier, we would also have to provide a merge function, making the code even more verbose.
The simpler solution is to use the Collection API rather than the Stream API:
Map<MyKey, Double> result = new HashMap<>(Math.max(map1.size(),map2.size()));
map2.forEach((key, value) -> result.put(key, map1.getOrDefault(key, 0)*12D/value));
map1.keySet().forEach(key -> result.putIfAbsent(key, 0.0));
This is clearly less verbose and potentially more efficient as it omits some of the Stream solution’s processing steps and provides the right initial capacity to the map. It utilizes the fact that the formula evaluates to the desired zero result if we use zero as default for the first map’s value for absent keys. If you want to use a different formula which doesn’t have this property or want to avoid the calculation for absent mappings, you’d have to use
Map<MyKey, Double> result = new HashMap<>(Math.max(map1.size(),map2.size()));
map2.forEach((key, value2) -> {
Integer value1 = map1.get(key);
result.put(key, value1 != null? value1*12D/value2: 0.0);
});
map1.keySet().forEach(key -> result.putIfAbsent(key, 0.0));
I have a set of Strings as follows
Set<String> ids;
Each id is of the form #userId:#sessionId so for e.g. 1:2 where 1 is the userId and 2 is the sessionId.
I want to split these so that the userId becomes the key in a HashMap; each userId is unique, but each userId can have multiple sessions. So how do I get from the Set<String> to a Map<String, List<String>>?
For example:
If the set contains the following values {1:2, 2:2, 1:3}
The map should contain
key=1 value=<2,3>
key=2 value=<2>
By "lambdas" I'm assuming you mean streams, because a straightforward loop to build a map wouldn't really require lambdas. If so, you can get close, but not quite there, with some of the built-in Collectors.
Map<String, List<String>> map = ids.stream()
.collect(Collectors.groupingBy(id -> id.split(":")[0]));
// result: {"1": ["1:2", "1:3"], "2": ["2:2"]}
This will group by the left number, but will store the full strings in the map values rather than just the right-hand portion.
Map<String, List<String>> map = ids.stream()
        .collect(Collectors.toMap(
                id -> id.split(":")[0],
                id -> new ArrayList<>(Arrays.asList(id.split(":")[1])),
                (l1, l2) -> {
                    List<String> l3 = new ArrayList<>(l1);
                    l3.addAll(l2);
                    return l3;
                }
        ));
// result: {"1": ["2", "3"], "2": ["2"]}
This will return exactly what you want, but it suffers from severe inefficiency. Rather than accumulating the elements for a key into a single list, it creates many temporary lists and joins them together. That turns what should be an O(n) operation into an O(n²) one.
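For what it's worth, a downstream Collectors.mapping collector (available since Java 8) avoids both issues; a minimal sketch, assuming ids is the Set<String> from the question:
Map<String, List<String>> map = ids.stream()
        .collect(Collectors.groupingBy(
                id -> id.split(":")[0],                      // userId becomes the key
                Collectors.mapping(id -> id.split(":")[1],   // keep only the sessionId
                        Collectors.toList())));
// result: {"1": ["2", "3"], "2": ["2"]}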
You could also simply use a HashMap<String, Set<String>> or HashMap<String, List<String>> and fill it with a plain loop, where the value collection holds the session ids.