Group strings into multiple groups when using stream groupingBy - java

A simplified example of what I am trying to do:
Suppose I have a list of strings which need to be grouped into 4 groups according to whether a specific substring is contained or not. If a string contains Foo it should fall in the group FOO, if it contains Bar it should fall in the group BAR, and if it contains both it should appear in both groups.
List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");
A naive approach for the above input doesn't work as expected since the string is grouped into the first matching group:
Map<String, List<String>> result1 = strings.stream()
        .collect(Collectors.groupingBy(
                str -> str.contains("Foo") ? "FOO" :
                       str.contains("Bar") ? "BAR" :
                       str.contains("Baz") ? "BAZ" : "DEFAULT"));
result1 is
{FOO=[Foo, FooBar, FooBarBaz], DEFAULT=[XXX]}
whereas the desired result should be
{FOO=[Foo, FooBar, FooBarBaz], BAR=[FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
After searching for a while I found another approach, which comes close to the desired result, but not quite:
Map<String, List<String>> result2 = List.of("Foo", "Bar", "Baz", "Default").stream()
        .flatMap(str -> strings.stream()
                .filter(s -> s.contains(str))
                .map(s -> new String[]{str.toUpperCase(), s}))
        .collect(Collectors.groupingBy(arr -> arr[0],
                Collectors.mapping(arr -> arr[1], Collectors.toList())));
System.out.println(result2);
result2 is
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz]}
While this correctly groups strings containing the substrings into the needed groups, the strings which don't contain any of the substrings, and therefore should fall into the default group, are ignored. The desired result is, as already mentioned above (order doesn't matter):
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
For now I'm using both result maps and doing an extra step:
result2.put("DEFAULT", result1.get("DEFAULT"));
Can the above be done in one step? Is there a better approach than what I have above?

This is ideal for mapMulti. mapMulti takes a BiConsumer of the streamed value and a Consumer. The Consumer is used to simply place something back on the stream. It was added in Java 16 because flatMap can incur undesirable overhead.
This works by building a String array of the token and the containing string, and collecting (also as you did before). If the token is found in the string, accept a String array with the token and the containing string. Otherwise, accept a String array with the default key and the string.
List<String> strings =
        List.of("Foo", "FooBar", "FooBarBaz", "XXX", "YYY");

Map<String, List<String>> result = strings.stream()
        .<String[]>mapMulti((str, consumer) -> {
            boolean found = false;
            String temp = str.toUpperCase();
            for (String token : List.of("FOO", "BAR", "BAZ")) {
                if (temp.contains(token)) {
                    consumer.accept(new String[] { token, str });
                    found = true;
                }
            }
            if (!found) {
                consumer.accept(new String[] { "DEFAULT", str });
            }
        })
        .collect(Collectors.groupingBy(arr -> arr[0],
                Collectors.mapping(arr -> arr[1], Collectors.toList())));
result.entrySet().forEach(System.out::println);
prints
BAR=[FooBar, FooBarBaz]
FOO=[Foo, FooBar, FooBarBaz]
BAZ=[FooBarBaz]
DEFAULT=[XXX, YYY]
Keep in mind that streams are meant to make your coding world easier. But sometimes, a regular loop using some Java 8 constructs is all that is needed. Outside of an academic exercise, I would probably do the task like so.
Map<String, List<String>> result2 = new HashMap<>();
for (String str : strings) {
    boolean added = false;
    String temp = str.toUpperCase();
    for (String token : List.of("FOO", "BAR", "BAZ")) {
        if (temp.contains(token)) {
            result2.computeIfAbsent(token, v -> new ArrayList<>()).add(str);
            added = true;
        }
    }
    if (!added) {
        result2.computeIfAbsent("DEFAULT", v -> new ArrayList<>()).add(str);
    }
}

Instead of operating with strings "Foo", "Bar", etc. and their corresponding uppercase versions, it would be more convenient and cleaner to define an enum.
Let's call it Keys:
public enum Keys {
    FOO("Foo"), BAR("Bar"), BAZ("Baz"), DEFAULT("");

    // Set of non-default constants, kept as a field to avoid creating an EnumSet
    // or an array via values() at every invocation of getKeys()
    private static final Set<Keys> nonDefaultKeys = EnumSet.range(FOO, BAZ);

    private final String keyName;

    Keys(String keyName) {
        this.keyName = keyName;
    }

    public static List<String> getKeys(String str) {
        List<String> keys = nonDefaultKeys.stream()
                .filter(key -> str.contains(key.keyName))
                .map(Enum::name)
                .toList();
        // if no non-default key was found, i.e. keys.isEmpty(), return DEFAULT
        return keys.isEmpty() ? List.of(DEFAULT.name()) : keys;
    }
}
It has a method getKeys(String) which expects a string and returns a list of keys to which the given string should be mapped.
By using the functionality encapsulated in the Keys enum, we can create a map of strings split into groups corresponding to the names of the Keys constants, using collect(supplier, accumulator, combiner).
main()
public static void main(String[] args) {
    List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");

    Map<String, List<String>> stringsByGroup = strings.stream()
            .collect(
                    HashMap::new, // mutable container which will hold the result of the mutable reduction
                    (Map<String, List<String>> map, String next) -> Keys.getKeys(next)
                            .forEach(key -> map.computeIfAbsent(key, k -> new ArrayList<>()).add(next)), // accumulator - defines how to store stream elements in the container
                    (left, right) -> right.forEach((k, v) ->
                            left.merge(k, v, (oldV, newV) -> { oldV.addAll(newV); return oldV; })) // combiner - defines how to merge containers when the stream is executed in parallel
            );

    stringsByGroup.forEach((k, v) -> System.out.println(k + " -> " + v));
}
Output:
BAR -> [FooBar, FooBarBaz]
FOO -> [Foo, FooBar, FooBarBaz]
BAZ -> [FooBarBaz]
DEFAULT -> [XXX]

Related

Can I add element while using Java stream groupingby

My for-loop code is this (parameter: ArrayList<User> userList):
Map<String, User> map = new HashMap<>();
for (User user : userList) {
String[] arr = user.getStringSeq().split(DELIMITER);
String key = String.join(DELIMITER, arr[MENU_IDX], arr[GROUP_IDX]);
if (Objects.isNull(map.get(key))) {
Set<IOType> ioTypeSet = new HashSet<>();
ioTypeSet.add(IOType.valueOf(arr[IO_TYPE_IDX]));
user.setIoTypes(ioTypeSet);
map.put(key, user);
} else {
map.get(key).getIoTypes().add(IOType.valueOf(arr[IO_TYPE_IDX]));
}
}
and I want to rewrite it as a stream:
List<List<user>> userList = userList
.stream()
.collect(groupingBy(
e -> {
String[] arr = e.getStringSeq().split(DELIMITER);
return String.join(DELIMITER, arr[0], arr[1]);
},
mapping(e -> {
IOType ioType = IOType.valueOf(e.getNavAuthSeq().split(DELIMITER)[2]);
User user = new User();
user.addIoType(ioType);
return user;
}, toList())
)).values()
.stream()
.toList();
My stream code groups the list successfully, but I want to remove elements with the same key and put in the split string, like this:
List<List<user>> userList = userList
.stream()
.collect(groupingBy(
e -> {
String[] arr = e.getStringSeq().split(DELIMITER);
return String.join(DELIMITER, arr[0], arr[1]);
},
mapping(e -> {
if (e.getIoTypes() != null) {
e.getIoTypes().add(IOType.NONE);
return null;
} else {
IOType ioType = IOType.valueOf(e.getStringSeq().split(DELIMITER)[2]);
UserNavAuthsLoginDTO userNavAuthsLoginDTO = new UserNavAuthsLoginDTO();
userNavAuthsLoginDTO.addIoType(ioType);
return userNavAuthsLoginDTO;
}
}, toList())
)).values()
.stream()
.toList();
but the if-else code doesn't work.
Can I resolve this problem?
If you want to discard certain elements inside the Collector after groupingBy, you can wrap mapping() with the Collector filtering(). It expects a Predicate and retains only the elements for which the predicate evaluates to true.
.collect(Collectors.groupingBy(
e -> { }, // classifier Function of groupingBy
Collectors.filtering(e -> { }, // Predicate of filtering
Collectors.mapping(e -> { }, // mapper Function of mapping
Collectors.toList())
)
))
Note that there's a difference between using the filter() operation and the Collector filtering(). Imagine a scenario where all elements mapped to a particular key fail the predicate. In that case, the entry with this key would still be present in the resulting Map (and its value would be an empty list). If you apply filter() on the stream instead, there would be no such entry.
Alternatively, if it's not important to filter out elements after the grouping phase, you can use the filter() operation, which would be the preferred way in that case.
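Here is a minimal, self-contained sketch of that difference (the words and the length cutoff are made up purely for illustration):
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilteringVsFilter {
    public static void main(String[] args) {
        List<String> words = List.of("apple", "avocado", "banana");

        // Collector filtering(): grouping happens first, so the "b" key
        // survives with an empty list even though "banana" fails the predicate.
        Map<Character, List<String>> withFiltering = words.stream()
                .collect(Collectors.groupingBy(w -> w.charAt(0),
                        Collectors.filtering(w -> w.length() > 6,
                                Collectors.toList())));
        System.out.println(withFiltering); // contains a=[avocado] and b=[]

        // Stream filter(): elements are dropped before grouping,
        // so the "b" key never appears at all.
        Map<Character, List<String>> withFilter = words.stream()
                .filter(w -> w.length() > 6)
                .collect(Collectors.groupingBy(w -> w.charAt(0)));
        System.out.println(withFilter); // contains only a=[avocado]
    }
}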
Also, it's worth pointing out that you're performing side effects on the mutable function parameter inside mapping() (to put it simply, anything a function does besides computing its resulting value is a side effect). I'm not claiming that it will break things, but it's definitely not very clean.

How to merge two Maps based on values with Java 8 streams?

I have a Collection of Maps containing inventory information:
0
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "60"
1
"subtype" -> "DAIRY"
"itemNumber" -> "EU999"
"quantity" -> "1000"
2
"subtype" -> "FRESH"
"itemNumber" -> "EU999"
"quantity" -> "800"
3
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I need to condense this list based on the itemNumber, while summing the quantity and retaining unique subtypes in a comma separated string. Meaning, new Maps would look like this:
0
"subtype" -> "DAIRY, FRESH"
"itemNumber" -> "EU999"
"quantity" -> "1860"
1
"subtype" -> "FRESH"
"itemNumber" -> "EU100"
"quantity" -> "100"
I've tried various combinations of streams, collectors, groupingBy, etc., and I'm lost.
This is what I have so far:
public Collection<Map> mergeInventoryPerItemNumber(Collection<Map> InventoryMap){
Map condensedInventory = null;
InventoryMap.stream()
.collect(groupingBy(inv -> new ImmutablePair<>(inv.get("itemNumber"), inv.get("subtype")))), collectingAndThen(toList(), list -> {
long count = list.stream()
.map(list.get(Integer.parseInt("quantity")))
.collect(counting());
String itemNumbers = list.stream()
.map(list.get("subtype"))
.collect(joining(" , "));
condensedInventory.put("quantity", count);
condensedInventory.put("subtype", itemNumbers);
return condensedInventory;
});
Here is one approach:
First, iterate through the list of maps.
For each map, process the keys as required.
The special keys are itemNumber and quantity.
itemNumber is the joining element for all the values.
quantity is the value that must be treated as an integer.
The others are strings and are treated as such (for any other value, if it already exists in the string of concatenated values, it is not added again).
Some data
List<Map<String, String>> mapList = List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999",
"quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999",
"quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100",
"quantity", "100"));
The building process
Map<String, Map<String, String>> result = new HashMap<>();
for (Map<String, String> m : mapList) {
    result.compute(m.get("itemNumber"), (k, v) -> {
        for (Entry<String, String> e : m.entrySet()) {
            String key = e.getKey();
            String value = e.getValue();
            if (v == null) {
                v = new HashMap<String, String>();
                v.put(key, value);
            } else {
                if (key.equals("quantity")) {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : Integer.toString(Integer.valueOf(vv) + Integer.valueOf(value)));
                } else {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : (vv.contains(value) ? vv : vv + ", " + value));
                }
            }
        }
        return v;
    });
}

List<Map<String, String>> list = new ArrayList<>(result.values());
for (int i = 0; i < list.size(); i++) {
    System.out.println(i + " " + list.get(i));
}
prints
0 {itemNumber=EU100, quantity=100, subtype=FRESH}
1 {itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
Note that the map of maps may be more useful than a list of maps. For example, you can retrieve the map for an itemNumber by simply specifying the desired key.
System.out.println(result.get("EU999"));
prints
{itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
You are misusing a Map here. Every map contains the same keys ("subtype", "itemNumber", "quantity"), and they are treated almost like object properties in your code. They are expected to be present in every map, and each of them is expected to have a specific range of values, although they are stored as strings in your example.
Side note: avoid using raw types (like Map without generic type information in angle brackets <>); otherwise all elements inside a collection are treated as Objects.
Item clearly has to be defined as a class. By storing this data inside a map, you lose the possibility to define an appropriate data type for each property, and you're also unable to define behaviour to manipulate these properties (for a more elaborate explanation take a look at this answer).
public class Item {
private final String itemNumber;
private Set<Subtype> subtypes;
private long quantity;
public Item combine(Item other) {
Set<Subtype> combinedSubtypes = new HashSet<>(subtypes);
combinedSubtypes.addAll(other.subtypes);
return new Item(this.itemNumber,
combinedSubtypes,
this.quantity + other.quantity);
}
// + constructor, getters, hashCode/equals, toString
}
Method combine represents the logic for merging two items together. By placing it inside this class, you could easily reuse and change it when needed.
The best choice for the type of the subtype field is an enum, because it avoids mistakes caused by misspelled string values, and enums have extensive language support (switch expressions and statements, special data structures designed especially for enums, and enums can be used with annotations).
This custom enum can look like this.
public enum Subtype {DAIRY, FRESH}
With all these changes, the code inside the mergeInventoryPerItemNumber() becomes concise and easier to comprehend. Collectors.groupingBy() is used to create a map by grouping items with the same itemNumber. A downstream collector Collectors.reducing() is used to combine items grouped under the same key to a single object.
Note that Collectors.reducing() produces an Optional result. Therefore, filter(Optional::isPresent) is used as a precaution to make sure that the result exists and subsequent operation map(Optional::get) extracts the item from the optional object.
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
return inventory.stream()
.collect(Collectors.groupingBy(Item::getItemNumber,
Collectors.reducing(Item::combine)))
.values().stream()
.filter(Optional::isPresent)
.map(Optional::get)
.collect(Collectors.toList());
}
main()
public static void main(String[] args) {
List<Item> inventory =
List.of(new Item("EU999", Set.of(Subtype.DAIRY), 60),
new Item("EU999", Set.of(Subtype.DAIRY), 1000),
new Item("EU999", Set.of(Subtype.FRESH), 800),
new Item("EU100", Set.of(Subtype.FRESH), 100));
Collection<Item> combinedItems = mergeInventoryPerItemNumber(inventory);
combinedItems.forEach(System.out::println);
}
Output
Item{itemNumber='EU100', subtypes=[FRESH], quantity=100}
Item{itemNumber='EU999', subtypes=[FRESH, DAIRY], quantity=1860}
It may be possible to do this with a single sweep, but here I have solved it with two passes: one to group like items together, and another over the items in each group to build a representative item (which seems similar in spirit to your code, where you were also attempting to stream elements from groups).
public static Collection<Map<String, String>>
mergeInventoryPerItemNumber(Collection<Map<String, String>> m){
return m.stream()
// returns a map of itemNumber -> list of products with that number
.collect(Collectors.groupingBy(o -> o.get("itemNumber")))
// for each item number, builds new representative product
.entrySet().stream().map(e -> Map.of(
"itemNumber", e.getKey(),
// ... merging non-duplicate subtypes
"subtype", e.getValue().stream()
.map(v -> v.get("subtype"))
.distinct() // avoid duplicates
.collect(Collectors.joining(", ")),
// ... adding up quantities
"quantity", ""+e.getValue().stream()
.map(v -> Integer.parseInt(v.get("quantity")))
.reduce(0, Integer::sum)))
.collect(Collectors.toList());
}
public static void main(String ... args) {
Collection<Map<String, String>> c = mkMap();
dump(c);
dump(mergeInventoryPerItemNumber(c));
}
public static Collection<Map<String, String>> mkMap() {
return List.of(
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "60"),
Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "1000"),
Map.of("subtype", "FRESH", "itemNumber", "EU999", "quantity", "800"),
Map.of("subtype", "FRESH", "itemNumber", "EU100", "quantity", "100"));
}
public static void dump(Collection<Map<String, String>> col) {
int i = 0;
for (Map<String, String> m : col) {
System.out.println(i++);
for (Map.Entry e : m.entrySet()) {
System.out.println("\t" + e.getKey() + " -> " + e.getValue());
}
}
}

Getting last occurrences of specific string in a list

I have a simple list of strings. My goal is to get the last occurrences of each string in the list by group.
This is my code:
List<String> newData = new ArrayList<>();
newData.add("A-something");
newData.add("A-fdfdsfds");
newData.add("A-fdsfdsfgs");
newData.add("B-something");
newData.add("B-dsafdrsafd");
newData.add("B-dsdfsad");
I wish to get only the last occurrence of each group. In other words, I want to get "A-fdsfdsfgs" and "B-dsdfsad" only.
How to do so?
To get the last occurrence of each group you can use the Stream API with groupingBy:
import static java.util.stream.Collectors.*;
Map<String, Optional<String>> collect = newData.stream()
.collect(groupingBy(strings -> strings.split("-")[0],
mapping(s -> s, maxBy(Comparator.comparingInt(newData::lastIndexOf)))));
Note: the map has Optional as its value type.
To get it without Optional, use toMap instead of groupingBy:
Map<String, String> collect = newData.stream()
.collect(toMap(s -> s.split("-")[0],
Function.identity(),
(s1, s2) -> newData.lastIndexOf(s1) > newData.lastIndexOf(s2) ? s1 : s2));
Also, if you want the map values without the group name, replace Function.identity() with s -> s.split("-")[1].
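A small sketch of that variant. Note that with the split values, the lastIndexOf-based merge function above would no longer find the elements in newData, so this sketch simply keeps the later element in the merge function, which works for a sequential stream:
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LastPerGroup {
    public static void main(String[] args) {
        List<String> newData = List.of(
                "A-something", "A-fdfdsfds", "A-fdsfdsfgs",
                "B-something", "B-dsafdrsafd", "B-dsdfsad");

        Map<String, String> lastPerGroup = newData.stream()
                .collect(Collectors.toMap(
                        s -> s.split("-")[0],   // group key: "A" or "B"
                        s -> s.split("-")[1],   // value without the group name
                        (s1, s2) -> s2));       // on duplicate keys, keep the later element
        System.out.println(lastPerGroup); // {A=fdsfdsfgs, B=dsdfsad} (order may vary)
    }
}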
import java.util.*;
class Solution {
public static void main(String[] args) {
List<String> newData = new ArrayList<>();
newData.add("A-something");
newData.add("A-fdfdsfds");
newData.add("A-fdsfdsfgs");
newData.add("B-something");
newData.add("B-dsafdrsafd");
newData.add("B-dsdfsad");
System.out.println(lastOccurrences(newData).toString());
}
private static List<String> lastOccurrences(List<String> data){
Set<String> set = new HashSet<>();
List<String> ans = new ArrayList<>();
for(int i=data.size()-1;i>=0;--i){
String group = data.get(i).substring(0,data.get(i).indexOf("-"));
if(set.contains(group)) continue;
set.add(group);
ans.add(data.get(i));
}
return ans;
}
}
Output:
[B-dsdfsad, A-fdsfdsfgs]
Algorithm:
Move from last to first, instead of first to last, because you want the last occurrences. This makes the bookkeeping easier and the code a little cleaner.
Get the group the string belongs to using the substring() method.
Use a set to keep track of already-visited groups.
If a group is not in the set, add it to the set and add the current string to the answer (since this is the last occurrence for this group).
Finally, return the list.
There are several ways to do this, as the other answers already show. I'd find something like the following natural:
Collection<String> lastOfEach = newData.stream()
.collect(Collectors.groupingBy((String s) -> s.split("-")[0],
Collectors.reducing("", s -> s, (l, r) -> r)))
.values();
lastOfEach.forEach(System.out::println);
With your list the output is:
A-fdsfdsfgs
B-dsdfsad
My grouping is the same as in a couple of other answers. On the grouped values I perform a reduction: each time I get two strings, I take the latter of them. In the end this gives us the last string from each group, as requested. Since groupingBy produces a map, I use values() to discard the keys (A and B) and keep only the original strings.
Collecting via grouping should be sufficient.
final Map<String, List<String>> grouped =
newData.stream()
.collect(groupingBy(s -> s.split("-")[0]));
final List<String> lastOccurrences =
grouped.values()
.stream()
.filter(s -> !s.isEmpty())
.map(s -> s.get(s.size() - 1))
.collect(toList());
For Java 11, the filter becomes filter(not(List::isEmpty))
This will give you A-fdsfdsfgs and B-dsdfsad.
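For reference, a self-contained Java 11+ version of the same pipeline using Predicate.not (the class name is just for illustration):
import static java.util.function.Predicate.not;
import static java.util.stream.Collectors.groupingBy;
import static java.util.stream.Collectors.toList;

import java.util.List;

public class LastOccurrencesJava11 {
    public static void main(String[] args) {
        List<String> newData = List.of(
                "A-something", "A-fdfdsfds", "A-fdsfdsfgs",
                "B-something", "B-dsafdrsafd", "B-dsdfsad");

        List<String> lastOccurrences = newData.stream()
                .collect(groupingBy(s -> s.split("-")[0]))   // group by the prefix before "-"
                .values()
                .stream()
                .filter(not(List::isEmpty))                  // Predicate.not, Java 11+
                .map(s -> s.get(s.size() - 1))               // last element of each group
                .collect(toList());

        System.out.println(lastOccurrences); // [A-fdsfdsfgs, B-dsdfsad] (order may vary)
    }
}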
Using a temporary Map, the List finalList will contain only the required values:
Map<String, String> tempMap = new HashMap<>();
List<String> finalList = new ArrayList<>();
newData.forEach((val) -> tempMap.put(val.split("-")[0], val.split("-")[1]));
tempMap.forEach((key, val) -> finalList.add(key + "-" + val));

java stream Collectors.groupingBy() multiple fields

Stream<Map.Entry<String, Long>> duplicates = notificationServiceOrderItemDto.getService()
.getServiceCharacteristics()
.stream()
.collect(
Collectors.groupingBy(
ServiceCharacteristicDto::getName, Collectors.counting()
)
)
.entrySet()
.stream()
.filter(e -> e.getValue() > 1);
Optional<String> dupName = duplicates.map(Map.Entry::getKey).findFirst();
works perfectly. But I would like to find duplicates not just by name but by name + value + key.
That means if name + value + key is the same, it is a duplicate.
I am looking at Collectors.groupingBy():
http://www.technicalkeeda.com/java-8-tutorials/java-8-stream-grouping
but I cannot find the correct solution.
The following works for me:
public class Groupingby
{
static class Obj{
String name;
String value;
String key;
Obj(String name, String val, String key)
{
this.name = name;
this.value = val;
this.key = key;
}
}
public static void main(String[] args)
{
List<Obj> objects = new ArrayList<>();
objects.add(new Obj("A", "K", "Key1"));
objects.add(new Obj("A", "K", "Key1"));
objects.add(new Obj("A", "X", "Key1"));
objects.add(new Obj("A", "Y", "Key2"));
Map<List<String>, Long> collected = objects.stream().collect(Collectors.groupingBy(x -> Arrays.asList(x.name, x.value, x.key), Collectors.counting()));
System.out.println(collected);
}
}
// Output
// {[A, K, Key1]=2, [A, Y, Key2]=1, [A, X, Key1]=1}
Note that I am using a list of attributes for grouping, not a string concatenation of attributes. This also works with non-string attributes.
If you use string concatenation, you can run into corner cases: for example, the attributes (A, BC, D) and (AB, C, D) result in the same string.
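A small self-contained sketch of that corner case (the Obj stand-in class is only for illustration):
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ConcatCollision {
    // Minimal stand-in for the Obj class above, just to show the corner case.
    static class Obj {
        final String name, value, key;
        Obj(String name, String value, String key) {
            this.name = name; this.value = value; this.key = key;
        }
    }

    public static void main(String[] args) {
        List<Obj> objects = List.of(new Obj("A", "BC", "D"), new Obj("AB", "C", "D"));

        // Plain concatenation collapses both objects into the same key "ABCD".
        Map<String, Long> byConcat = objects.stream()
                .collect(Collectors.groupingBy(x -> x.name + x.value + x.key, Collectors.counting()));
        System.out.println(byConcat); // {ABCD=2}

        // A List key keeps them distinct.
        Map<List<String>, Long> byList = objects.stream()
                .collect(Collectors.groupingBy(x -> Arrays.asList(x.name, x.value, x.key), Collectors.counting()));
        System.out.println(byList); // {[A, BC, D]=1, [AB, C, D]=1} (order may vary)
    }
}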
Instead of
.collect(Collectors.groupingBy(ServiceCharacteristicDto::getName, Collectors.counting()))
you can write
.collect(Collectors.groupingBy(s->s.getName()+'-'+s.getValue()+'-'+s.getKey(), Collectors.counting()))
You can replace ServiceCharacteristicDto::getName with:
x -> x.getName() + x.getValue() + x.getKey()
Use a lambda instead of a method reference.
But also think of what findFirst would actually mean here... you are collecting to a HashMap that has no encounter order, streaming its entries and getting the first element - whatever that is. You do understand that this findFirst can give different results on different input, right? Even re-shuffling the HashMap could return you a different findFirst result.
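If a deterministic result is needed, one option (a sketch assuming the same DTO and getters as in the question) is to impose an order on the duplicate keys before picking one:
Optional<String> dupName = notificationServiceOrderItemDto.getService()
        .getServiceCharacteristics()
        .stream()
        .collect(Collectors.groupingBy(
                x -> x.getName() + x.getValue() + x.getKey(),
                Collectors.counting()))
        .entrySet()
        .stream()
        .filter(e -> e.getValue() > 1)
        .map(Map.Entry::getKey)
        .sorted()      // stable order, so findFirst() always picks the same duplicate
        .findFirst();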
EDIT
To get away from possible unintentional duplicates caused by String concatenation, you could use:
x -> Arrays.asList(x.getName(), x.getValue(), x.getKey())

Find the most common attribute value from a List of objects using Stream

I have two classes that are structured like this:
public class Company {
private List<Person> person;
...
public List<Person> getPerson() {
return person;
}
...
}
public class Person {
private String tag;
...
public String getTag() {
return tag;
}
...
}
Basically the Company class has a List of Person objects, and each Person object can get a Tag value.
If I get the List of the Person objects, is there a way to use a Stream from Java 8 to find the one Tag value that is the most common among all the Person objects (in case of a tie, just pick any one of the most common)?
String mostCommonTag;
if (!company.getPerson().isEmpty()) {
mostCommonTag = company.getPerson().stream() //How to do this in Stream?
}
String mostCommonTag = getPerson().stream()
// filter some person without a tag out
.filter(it -> Objects.nonNull(it.getTag()))
// summarize tags
.collect(Collectors.groupingBy(Person::getTag, Collectors.counting()))
// fetch the max entry
.entrySet().stream().max(Map.Entry.comparingByValue())
// map to tag
.map(Map.Entry::getKey).orElse(null);
And since the getTag method appears twice, you can simplify the code further:
String mostCommonTag = getPerson().stream()
// map person to tag & filter null tag out
.map(Person::getTag).filter(Objects::nonNull)
// summarize tags
.collect(Collectors.groupingBy(Function.identity(), Collectors.counting()))
// fetch the max entry
.entrySet().stream().max(Map.Entry.comparingByValue())
// map to tag
.map(Map.Entry::getKey).orElse(null);
You could collect the counts to a Map, then get the key with the highest value
List<String> foo = Arrays.asList("a","b","c","d","e","e","e","f","f","f","g");
Map<String, Long> f = foo
.stream()
.collect(Collectors.groupingBy(v -> v, Collectors.counting()));
String maxOccurence =
Collections.max(f.entrySet(), Comparator.comparing(Map.Entry::getValue)).getKey();
System.out.println(maxOccurence);
This should work for you:
private void run() {
List<Person> list = Arrays.asList(() -> "foo", () -> "foo", () -> "foo",
() -> "bar", () -> "bar");
Map<String, Long> commonness = list.stream()
.collect(Collectors.groupingBy(Person::getTag, Collectors.counting()));
Optional<String> mostCommon = commonness.entrySet().stream()
.max(Map.Entry.comparingByValue())
.map(Map.Entry::getKey);
System.out.println(mostCommon.orElse("no elements in list"));
}
public interface Person {
String getTag();
}
The commonness map records how often each tag was found. The variable mostCommon contains the tag that was found most often. Also, mostCommon is empty if the original list was empty.
If you are open to using a third-party library, you can use Collectors2 from Eclipse Collections with a Java 8 Stream to create a Bag and request the topOccurrences, which will return a MutableList of ObjectIntPair which is the tag value and the count of the number of occurrences.
MutableList<ObjectIntPair<String>> topOccurrences = company.getPerson()
.stream()
.map(Person::getTag)
.collect(Collectors2.toBag())
.topOccurrences(1);
String mostCommonTag = topOccurrences.getFirst().getOne();
In the case of a tie, the MutableList will have more than one result.
Note: I am a committer for Eclipse Collections.
This may be helpful for you:
Map<String, Long> count = persons.stream().collect(
        Collectors.groupingBy(Person::getTag, Collectors.counting()));
Optional<Entry<String, Long>> maxValue = count.entrySet()
        .stream()
        .max((entry1, entry2) -> entry1.getValue() > entry2.getValue() ? 1 : -1);
String mostCommonTag = maxValue.map(Entry::getKey).orElse(null);
One more solution, using abacus-common:
// Compared to the JDK stream solution, there is no
// "collect(Collectors.groupingBy(Person::getTag, Collectors.counting())).entrySet().stream()"
Stream.of(company.getPerson()).map(Person::getTag).skipNull() //
.groupBy(Fn.identity(), Collectors.counting()) //
.max(Comparators.comparingByValue()).map(e -> e.getKey()).orNull();
// Or by multiset
Stream.of(company.getPerson()).map(Person::getTag).skipNull() //
.toMultiset().maxOccurrences().map(e -> e.getKey()).orNull();
