Can I add an element while using Java Stream groupingBy?

My for-loop code is this (the parameter is an ArrayList<User> userList):
Map<String, User> map = new HashMap<>();
for (User user : userList) {
    String[] arr = user.getStringSeq().split(DELIMITER);
    String key = String.join(DELIMITER, arr[MENU_IDX], arr[GROUP_IDX]);
    if (Objects.isNull(map.get(key))) {
        Set<IOType> ioTypeSet = new HashSet<>();
        ioTypeSet.add(IOType.valueOf(arr[IO_TYPE_IDX]));
        user.setIoTypes(ioTypeSet);
        map.put(key, user);
    } else {
        map.get(key).getIoTypes().add(IOType.valueOf(arr[IO_TYPE_IDX]));
    }
}
and I want to rewrite it with a stream:
List<List<User>> userList = userList
        .stream()
        .collect(groupingBy(
                e -> {
                    String[] arr = e.getStringSeq().split(DELIMITER);
                    return String.join(DELIMITER, arr[0], arr[1]);
                },
                mapping(e -> {
                    IOType ioType = IOType.valueOf(e.getNavAuthSeq().split(DELIMITER)[2]);
                    User user = new User();
                    user.addIoType(ioType);
                    return user;
                }, toList())
        )).values()
        .stream()
        .toList();
My stream code groups the list successfully, but I want to drop elements with a duplicate key and add the split string's value to the existing element instead, like this:
List<List<User>> userList = userList
        .stream()
        .collect(groupingBy(
                e -> {
                    String[] arr = e.getStringSeq().split(DELIMITER);
                    return String.join(DELIMITER, arr[0], arr[1]);
                },
                mapping(e -> {
                    if (e.getIoTypes() != null) {
                        e.getIoTypes().add(IOType.NONE);
                        return null;
                    } else {
                        IOType ioType = IOType.valueOf(e.getStringSeq().split(DELIMITER)[2]);
                        UserNavAuthsLoginDTO userNavAuthsLoginDTO = new UserNavAuthsLoginDTO();
                        userNavAuthsLoginDTO.addIoType(ioType);
                        return userNavAuthsLoginDTO;
                    }
                }, toList())
        )).values()
        .stream()
        .toList();
but the if/else code doesn't work. How can I resolve this problem?

If you want to discard certain elements inside the Collector after groupingBy, you can wrap mapping() with the collector filtering(). It expects a Predicate and retains only the elements for which the predicate evaluates to true.
.collect(Collectors.groupingBy(
        e -> { },                   // classifier Function of groupingBy
        Collectors.filtering(
                e -> { },           // Predicate of filtering
                Collectors.mapping(
                        e -> { },   // mapper Function of mapping
                        Collectors.toList()))
))
Note that there's a difference between using the filter() operation and the collector filtering(). Imagine a scenario where none of the elements mapped to a particular key pass the predicate. With filtering(), the entry with that key would still be present in the resulting Map (its value would be an empty list). If you apply filter() on the stream instead, there would be no such entry.
Alternatively, if it's not important to filter out elements after the grouping phase, you can use the filter() operation; that would be the preferred way in such a case.
Also, it's worth pointing out that you're performing side effects on the mutable function parameter inside mapping() (to put it simply, anything a function does besides computing its resulting value is a side effect). I'm not claiming that it will break things somehow, but it's definitely not very clean.
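To make the difference concrete, here is a minimal, self-contained sketch of my own (assuming Java 9+ for Collectors.filtering(); the word list and length predicate are made up purely for illustration):
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class FilteringVsFilter {
    public static void main(String[] args) {
        List<String> words = List.of("apple", "avocado", "banana", "blueberry", "cherry");

        // Collector filtering(): a key whose elements all fail the predicate still appears, with an empty list
        Map<Character, List<String>> withFiltering = words.stream()
                .collect(Collectors.groupingBy(
                        w -> w.charAt(0),                               // classifier
                        Collectors.filtering(w -> w.length() > 6,       // predicate
                                Collectors.mapping(String::toUpperCase, // mapper
                                        Collectors.toList()))));
        System.out.println(withFiltering); // prints something like {a=[AVOCADO], b=[BLUEBERRY], c=[]}

        // filter() before collecting: such a key is absent from the map entirely
        Map<Character, List<String>> withFilter = words.stream()
                .filter(w -> w.length() > 6)
                .collect(Collectors.groupingBy(
                        w -> w.charAt(0),
                        Collectors.mapping(String::toUpperCase, Collectors.toList())));
        System.out.println(withFilter);    // prints something like {a=[AVOCADO], b=[BLUEBERRY]}
    }
}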

Related

Group strings into multiple groups when using stream groupingBy

A simplified example of what I am trying to do:
Suppose I have a list of strings, which need to be grouped into 4 groups according to a condition if a specific substring is contained or not. If a string contains Foo it should fall in the group FOO, if it contains Bar it should fall in the group BAR, if it contains both it should appear in both groups.
List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");
A naive approach for the above input doesn't work as expected since the string is grouped into the first matching group:
Map<String, List<String>> result1 = strings.stream()
        .collect(Collectors.groupingBy(
                str -> str.contains("Foo") ? "FOO" :
                       str.contains("Bar") ? "BAR" :
                       str.contains("Baz") ? "BAZ" : "DEFAULT"));
result1 is
{FOO=[Foo, FooBar, FooBarBaz], DEFAULT=[XXX]}
whereas the desired result should be
{FOO=[Foo, FooBar, FooBarBaz], BAR=[FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
After searching for a while I found another approach, which comes close to my desired result, but not quite:
Map<String, List<String>> result2 =
        List.of("Foo", "Bar", "Baz", "Default").stream()
                .flatMap(str -> strings.stream()
                        .filter(s -> s.contains(str))
                        .map(s -> new String[]{str.toUpperCase(), s}))
                .collect(Collectors.groupingBy(arr -> arr[0],
                        Collectors.mapping(arr -> arr[1], Collectors.toList())));
System.out.println(result2);
result2 is
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz]}
While this correctly groups strings containing the substrings into the needed groups, the strings that don't contain any of the substrings, and therefore should fall into the default group, are ignored. The desired result is, as already mentioned above (order doesn't matter):
{BAR=[FooBar, FooBarBaz], FOO=[Foo, FooBar, FooBarBaz], BAZ=[FooBarBaz], DEFAULT=[XXX]}
For now I'm using both result maps and doing an extra:
result2.put("DEFAULT", result1.get("DEFAULT"));
Can the above be done in one step? Is there a better approach than what I have above?
This is ideal for mapMulti. mapMulti takes a BiConsumer of the streamed value and a Consumer. The Consumer simply places something back on the stream; mapMulti was added to Java because flatMap can incur undesirable overhead.
This works by building a String array of the token and the containing string and collecting, as you did before. If a token is found in the string, accept a String array with that token and the containing string; otherwise, accept a String array with the default key and the string.
List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX", "YYY");

Map<String, List<String>> result = strings.stream()
        .<String[]>mapMulti((str, consumer) -> {
            boolean found = false;
            String temp = str.toUpperCase();
            for (String token : List.of("FOO", "BAR", "BAZ")) {
                if (temp.contains(token)) {
                    consumer.accept(new String[] { token, str });
                    found = true;
                }
            }
            if (!found) {
                consumer.accept(new String[] { "DEFAULT", str });
            }
        })
        .collect(Collectors.groupingBy(arr -> arr[0],
                Collectors.mapping(arr -> arr[1], Collectors.toList())));

result.entrySet().forEach(System.out::println);
prints
BAR=[FooBar, FooBarBaz]
FOO=[Foo, FooBar, FooBarBaz]
BAZ=[FooBarBaz]
DEFAULT=[XXX, YYY]
Keep in mind that streams are meant to make your coding world easier. But sometimes, a regular loop using some Java 8 constructs is all that is needed. Outside of an academic exercise, I would probably do the task like so.
Map<String, List<String>> result2 = new HashMap<>();
for (String str : strings) {
    boolean added = false;
    String temp = str.toUpperCase();
    for (String token : List.of("FOO", "BAR", "BAZ")) {
        if (temp.contains(token)) {
            result2.computeIfAbsent(token, v -> new ArrayList<>()).add(str);
            added = true;
        }
    }
    if (!added) {
        result2.computeIfAbsent("DEFAULT", v -> new ArrayList<>()).add(str);
    }
}
Instead of operating with strings "Foo", "Bar", etc. and their corresponding uppercase versions, it would be more convenient and cleaner to define an enum.
Let's call it Keys:
public enum Keys {
    FOO("Foo"), BAR("Bar"), BAZ("Baz"), DEFAULT("");

    // Set of non-DEFAULT constants, cached to avoid creating an EnumSet or values() array
    // at every invocation of getKeys()
    private static final Set<Keys> nonDefaultKeys = EnumSet.range(FOO, BAZ);

    private final String keyName;

    Keys(String keyName) {
        this.keyName = keyName;
    }

    public static List<String> getKeys(String str) {
        List<String> keys = nonDefaultKeys.stream()
                .filter(key -> str.contains(key.keyName))
                .map(Enum::name)
                .toList();
        // if no non-default key was found, i.e. keys.isEmpty() - return DEFAULT
        return keys.isEmpty() ? List.of(DEFAULT.name()) : keys;
    }
}
It has a method getKeys(String) which expects a string and returns a list of keys to which the given string should be mapped.
By using the functionality encapsulated in the Keys enum, we can create a map of strings split into groups corresponding to the names of the Keys constants, using collect(supplier, accumulator, combiner).
main()
public static void main(String[] args) {
    List<String> strings = List.of("Foo", "FooBar", "FooBarBaz", "XXX");

    Map<String, List<String>> stringsByGroup = strings.stream()
            .collect(
                    HashMap::new, // mutable container that will hold the results of the mutable reduction
                    (Map<String, List<String>> map, String next) -> Keys.getKeys(next)
                            .forEach(key -> map.computeIfAbsent(key, k -> new ArrayList<>()).add(next)), // accumulator - defines how to store stream elements in the container
                    (left, right) -> right.forEach((k, v) ->
                            left.merge(k, v, (oldV, newV) -> { oldV.addAll(newV); return oldV; })) // combiner - defines how to merge containers when the stream runs in parallel
            );

    stringsByGroup.forEach((k, v) -> System.out.println(k + " -> " + v));
}
Output:
BAR -> [FooBar, FooBarBaz]
FOO -> [Foo, FooBar, FooBarBaz]
BAZ -> [FooBarBaz]
DEFAULT -> [XXX]

How to merge two Maps based on values with Java 8 streams?

I have a Collection of Maps containing inventory information:
0
  "subtype" -> "DAIRY"
  "itemNumber" -> "EU999"
  "quantity" -> "60"
1
  "subtype" -> "DAIRY"
  "itemNumber" -> "EU999"
  "quantity" -> "1000"
2
  "subtype" -> "FRESH"
  "itemNumber" -> "EU999"
  "quantity" -> "800"
3
  "subtype" -> "FRESH"
  "itemNumber" -> "EU100"
  "quantity" -> "100"
I need to condense this list based on the itemNumber, while summing the quantities and retaining the unique subtypes in a comma-separated string. Meaning, the new Maps would look like this:
0
  "subtype" -> "DAIRY, FRESH"
  "itemNumber" -> "EU999"
  "quantity" -> "1860"
1
  "subtype" -> "FRESH"
  "itemNumber" -> "EU100"
  "quantity" -> "100"
I've tried variations of streams, collectors, groupingBy, etc., and I'm lost.
This is what I have so far:
public Collection<Map> mergeInventoryPerItemNumber(Collection<Map> InventoryMap) {
    Map condensedInventory = null;
    InventoryMap.stream()
            .collect(groupingBy(inv -> new ImmutablePair<>(inv.get("itemNumber"), inv.get("subtype")))),
            collectingAndThen(toList(), list -> {
                long count = list.stream()
                        .map(list.get(Integer.parseInt("quantity")))
                        .collect(counting());
                String itemNumbers = list.stream()
                        .map(list.get("subtype"))
                        .collect(joining(" , "));
                condensedInventory.put("quantity", count);
                condensedInventory.put("subtype", itemNumbers);
                return condensedInventory;
            });
Here is one approach:
- First iterate through the list of maps.
- For each map, process the keys as required.
- The special keys are itemNumber and quantity.
- itemNumber is the joining element for all the values.
- quantity is the value that must be treated as an integer.
- The others are strings and are treated as such (for all other values, if the value already exists in the string of concatenated values, it is not added again).
Some data
List<Map<String, String>> mapList = List.of(
        Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "60"),
        Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "1000"),
        Map.of("subtype", "FRESH", "itemNumber", "EU999", "quantity", "800"),
        Map.of("subtype", "FRESH", "itemNumber", "EU100", "quantity", "100"));
The building process
Map<String, Map<String, String>> result = new HashMap<>();
for (Map<String, String> m : mapList) {
    result.compute(m.get("itemNumber"), (k, v) -> {
        for (Entry<String, String> e : m.entrySet()) {
            String key = e.getKey();
            String value = e.getValue();
            if (v == null) {
                v = new HashMap<String, String>();
                v.put(key, value);
            } else {
                if (key.equals("quantity")) {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : Integer.toString(Integer.valueOf(vv) + Integer.valueOf(value)));
                } else {
                    v.compute(key, (kk, vv) -> vv == null ? value
                            : (vv.contains(value) ? vv : vv + ", " + value));
                }
            }
        }
        return v;
    });
}

List<Map<String, String>> list = new ArrayList<>(result.values());
for (int i = 0; i < list.size(); i++) {
    System.out.println(i + " " + list.get(i));
}
prints
0 {itemNumber=EU100, quantity=100, subtype=FRESH}
1 {itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
Note that the map of maps may be more useful than a list of maps. For example, you can retrieve the map for an itemNumber by simply specifying the desired key.
System.out.println(result.get("EU999"));
prints
{itemNumber=EU999, quantity=1860, subtype=DAIRY, FRESH}
You are misusing a Map here. Every map contains the same keys ("subtype", "itemNumber", "quantity"), and they are treated almost like object properties in your code. They are expected to be present in every map, and each of them is expected to have a specific range of values, although they are stored as strings according to your example.
Side note: avoid using raw types (like Map without generic information in angle brackets <>); in such a case all elements inside a collection will be treated as Objects.
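For instance, a trivial sketch of my own just to illustrate the side note:
Collection<Map<String, String>> typed = new ArrayList<>(); // keys and values are checked as Strings
Collection<Map> raw = new ArrayList<>();                   // raw type: contents are effectively treated as Objects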
Item clearly has to be defined as a class. By storing this data inside a map, you're losing the possibility to define an appropriate data type for each property, and you're also not able to define behaviour to manipulate these properties (for a more elaborate explanation, take a look at this answer).
public class Item {
    private final String itemNumber;
    private Set<Subtype> subtypes;
    private long quantity;

    public Item combine(Item other) {
        Set<Subtype> combinedSubtypes = new HashSet<>(subtypes);
        combinedSubtypes.addAll(other.subtypes);
        return new Item(this.itemNumber,
                combinedSubtypes,
                this.quantity + other.quantity);
    }

    // + constructor, getters, hashCode/equals, toString
}
The combine method represents the logic for merging two items together. By placing it inside this class, you can easily reuse and change it when needed.
The best choice for the type of the subtype field is an enum, because it avoids mistakes caused by misspelled string values, and enums have extensive language support (switch expressions and statements, data structures designed especially for enums, and use with annotations).
This custom enum can look like this.
public enum Subtype {DAIRY, FRESH}
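As a quick aside of mine (not part of the original answer) illustrating that language support, assuming the Subtype enum above:
// EnumSet is a compact Set implementation designed specifically for enums
Set<Subtype> perishable = EnumSet.of(Subtype.DAIRY, Subtype.FRESH);

// Exhaustive switch expression over the enum (Java 14+)
Subtype subtype = Subtype.DAIRY;
String storage = switch (subtype) {
    case DAIRY -> "keep refrigerated";
    case FRESH -> "keep cool";
};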
With all these changes, the code inside the mergeInventoryPerItemNumber() becomes concise and easier to comprehend. Collectors.groupingBy() is used to create a map by grouping items with the same itemNumber. A downstream collector Collectors.reducing() is used to combine items grouped under the same key to a single object.
Note that Collectors.reducing() produces an Optional result. Therefore, filter(Optional::isPresent) is used as a precaution to make sure that the result exists, and the subsequent map(Optional::get) operation extracts the item from the optional object.
public static Collection<Item> mergeInventoryPerItemNumber(Collection<Item> inventory) {
    return inventory.stream()
            .collect(Collectors.groupingBy(Item::getItemNumber,
                    Collectors.reducing(Item::combine)))
            .values().stream()
            .filter(Optional::isPresent)
            .map(Optional::get)
            .collect(Collectors.toList());
}
main()
public static void main(String[] args) {
    List<Item> inventory =
            List.of(new Item("EU999", Set.of(Subtype.DAIRY), 60),
                    new Item("EU999", Set.of(Subtype.DAIRY), 1000),
                    new Item("EU999", Set.of(Subtype.FRESH), 800),
                    new Item("EU100", Set.of(Subtype.FRESH), 100));

    Collection<Item> combinedItems = mergeInventoryPerItemNumber(inventory);
    combinedItems.forEach(System.out::println);
}
Output
Item{itemNumber='EU100', subtypes=[FRESH], quantity=100}
Item{itemNumber='EU999', subtypes=[FRESH, DAIRY], quantity=1860}
It may be possible to do this with a single sweep, but here I have solved it with two passes: one to group like items together, and another over the items in each group to build a representative item (which seems similar in spirit to your code, where you were also attempting to stream elements from groups).
public static Collection<Map<String, String>>
        mergeInventoryPerItemNumber(Collection<Map<String, String>> m) {
    return m.stream()
            // returns a map of itemNumber -> list of products with that number
            .collect(Collectors.groupingBy(o -> o.get("itemNumber")))
            // for each item number, builds a new representative product
            .entrySet().stream().map(e -> Map.of(
                    "itemNumber", e.getKey(),
                    // ... merging non-duplicate subtypes
                    "subtype", e.getValue().stream()
                            .map(v -> v.get("subtype"))
                            .distinct() // avoid duplicates
                            .collect(Collectors.joining(", ")),
                    // ... adding up quantities
                    "quantity", "" + e.getValue().stream()
                            .map(v -> Integer.parseInt(v.get("quantity")))
                            .reduce(0, Integer::sum)))
            .collect(Collectors.toList());
}
public static void main(String... args) {
    Collection<Map<String, String>> c = mkMap();
    dump(c);
    dump(mergeInventoryPerItemNumber(c));
}

public static Collection<Map<String, String>> mkMap() {
    return List.of(
            Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "60"),
            Map.of("subtype", "DAIRY", "itemNumber", "EU999", "quantity", "1000"),
            Map.of("subtype", "FRESH", "itemNumber", "EU999", "quantity", "800"),
            Map.of("subtype", "FRESH", "itemNumber", "EU100", "quantity", "100"));
}

public static void dump(Collection<Map<String, String>> col) {
    int i = 0;
    for (Map<String, String> m : col) {
        System.out.println(i++);
        for (Map.Entry<String, String> e : m.entrySet()) {
            System.out.println("\t" + e.getKey() + " -> " + e.getValue());
        }
    }
}

How to filter based on list returned by map param using Java 8 streams

I'm trying to use Java streams to filter some values based on certain conditions. I can achieve this using traditional for loops and a little bit of streams, but I want to rewrite the same logic fully in streams.
Original code:
public List<String> getProductNames(Hub hub, String requestedGroup) {
    List<SupportedProduct> configuredProducts = repo.getSupportedProducts(hub);
    List<String> productNames = new ArrayList<>();
    for (SupportedProduct supportedProduct : configuredProducts) {
        List<String> categoryNameList = new ArrayList<>();
        String activeCategoryName = supportedProduct.getCategoryDetails().getActiveCategoryName();
        if (activeCategoryName == null) {
            Optional.ofNullable(supportedProduct.getCategoryDetails().getCategories())
                    .orElse(Collections.emptyList())
                    .forEach(category -> categoryNameList.add(category.getName()));
        } else {
            categoryNameList.add(activeCategoryName);
        }
        for (String catName : categoryNameList) {
            Division division = divisionRepo.getDivisionByCatName(catName);
            if (division != null && division.getGroup() == requestedGroup) {
                productNames.add(supportedProduct.getProductName());
            }
        }
    }
    return productNames;
}
My try:
return Optional.ofNullable(configuredProducts).orElse(Collections.emptyList()).stream()
        .map(supportedProduct -> {
            List<String> categoryNameList = new ArrayList<>();
            String activeCategoryName = supportedProduct.getCategoryDetails().getActiveCategoryName();
            if (activeCategoryName == null) {
                Optional.ofNullable(supportedProduct.getCategoryDetails().getCategories())
                        .orElse(Collections.emptyList())
                        .forEach(category -> categoryNameList.add(category.getName()));
            } else {
                categoryNameList.add(activeCategoryName);
            }
            return categoryNameList;
        })
        .filter(catName -> {
            Division division = divisionRepo.getDivisionByCatName(catName);
            return division != null && division.getGroup() == requestedGroup;
        })........
But I'm lost beyond this.
Please help me to write the same using streams.
The logic inside is quite complicated, however, try this out:
public List<String> getProductNames(Hub hub, String requestedGroup) {
    List<SupportedProduct> configuredProducts = repo.getSupportedProducts(hub);

    // extract pairs:
    //   key   = SupportedProduct::getProductName
    //   value = List with one activeCategoryName OR the names of all the categories
    Map<String, List<String>> namedActiveCategoryNamesMap = configuredProducts.stream()
            .collect(Collectors.toMap(
                    SupportedProduct::getProductName,
                    p -> Optional.ofNullable(p.getCategoryDetails().getActiveCategoryName())
                            .map(Collections::singletonList)
                            .orElse(Optional.ofNullable(p.getCategoryDetails().getCategories())
                                    .stream()
                                    .flatMap(Collection::stream)
                                    .map(Category::getName)
                                    .collect(Collectors.toList()))));

    // look-up based on the categories' names, group equality comparison, and returning a List
    return namedActiveCategoryNamesMap.entrySet().stream()
            .filter(entry -> entry.getValue().stream()
                    .map(catName -> divisionRepo.getDivisionByCatName(catName))
                    .filter(Objects::nonNull)
                    .map(Division::getGroup)
                    .anyMatch(requestedGroup::equals))
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
}
I recommend splitting this into separate methods for the sake of readability (the best way to go). The verbose Optional chain with its two orElse calls can surely be simplified, but it gives you the idea.
You can also perform everything within one stream using Collectors.collectingAndThen. In that case, I'd extract the finisher Function elsewhere, for example:
public List<String> getProductNames(Hub hub, String requestedGroup) {
    return repo.getSupportedProducts(hub).stream()
            .collect(Collectors.collectingAndThen(
                    Collectors.toMap(
                            SupportedProduct::getProductName,
                            categoryNamesFunction()),
                    productNamesFunction(requestedGroup)));
}

private Function<Map<String, List<String>>, List<String>> productNamesFunction(String requestedGroup) {
    return map -> map.entrySet().stream()
            .filter(entry -> entry.getValue().stream()
                    .map(divisionRepo::getDivisionByCatName)
                    .filter(Objects::nonNull)
                    .map(Division::getGroup)
                    .anyMatch(requestedGroup::equals))
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
}

private Function<SupportedProduct, List<String>> categoryNamesFunction() {
    return p -> Optional.ofNullable(p.getCategoryDetails().getActiveCategoryName())
            .map(Collections::singletonList)
            .orElse(Optional.ofNullable(p.getCategoryDetails().getCategories())
                    .stream()
                    .flatMap(Collection::stream)
                    .map(Category::getName)
                    .collect(Collectors.toList()));
}

How to return the entire list if filter is false

I have this list
List<String> lstStr = new ArrayList<>();
lstStr.add("1");
lstStr.add("2");
lstStr.add("3");
lstStr.add("4");
lstStr.add("5");
When I search for the string "1" it should return a List<String> = ["1"], and if the search string is not in the list, for example "0", it should return the entire List<String> = ["1","2","3","4","5"]. Can this be achieved using a Java stream? Please show an example.
I have tried the code below, but I couldn't get the entire list when I search for, say, "0":
List<String> filteredLst = lstStr.stream()
        .filter(data -> "1".equalsIgnoreCase(data))
        .collect(Collectors.toList());
filteredLst.forEach(data2 -> System.out.println(data2));
Thanks in advance.
You can partition on a predicate and return the non-empty list:
Map<Boolean, List<String>> split = lstStr.stream()
        .collect(Collectors.partitioningBy("1"::equalsIgnoreCase));
List<String> filteredLst = split.get(Boolean.TRUE).isEmpty()
        ? split.get(Boolean.FALSE)  // can also use lstStr instead
        : split.get(Boolean.TRUE);
Collectors.partitioningBy("1"::equals) will return a 2-entry map, where true will be the key of entries that meet your filter, and false the key of the rest.
filteredLst should contain the value mapped to true if that is not empty, or the value of false otherwise (which would surely be the same as he original list)
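For instance, a small standalone sketch of what the partitioning produces (the data is taken from the question):
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionDemo {
    public static void main(String[] args) {
        List<String> lstStr = List.of("1", "2", "3", "4", "5");

        // searching for a value that is present
        Map<Boolean, List<String>> hit = lstStr.stream()
                .collect(Collectors.partitioningBy("1"::equalsIgnoreCase));
        System.out.println(hit);  // prints something like {false=[2, 3, 4, 5], true=[1]} -> result is [1]

        // searching for a value that is absent
        Map<Boolean, List<String>> miss = lstStr.stream()
                .collect(Collectors.partitioningBy("0"::equalsIgnoreCase));
        System.out.println(miss); // prints something like {false=[1, 2, 3, 4, 5], true=[]} -> result is the whole list
    }
}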
If you don't need to handle duplicates then you can do the following :
static List<String> getOneOrAll(List<String> list, String element) {
    return list.stream()
            .filter(element::equalsIgnoreCase)
            .findFirst()
            .map(Collections::singletonList)
            .orElse(list);
}
...
List<String> result = getOneOrAll(lstStr, "1");
Otherwise you can pass in a predicate and filter the duplicates:
static <T> List<T> getOneOrAll(List<T> list, Predicate<T> predicate) {
    List<T> filteredList = list.stream()
            .filter(predicate)
            .collect(toList());
    return filteredList.isEmpty() ? list : filteredList;
}
...
List<String> result = getOneOrAll(lstStr, "1"::equals);
// or
List<String> resultIgnoringCase = getOneOrAll(lstStr, "1"::equalsIgnoreCase);
A possible simple util for this would be using contains :
List<String> findAndReturnValue(List<String> lstStr, String value) {
    return lstStr.contains(value) ? Arrays.asList(value) : lstStr;
}
and for possible duplicates in the list:
List<String> findAndReturnValue(List<String> lstStr, String value) {
    return lstStr.contains(value)
            ? lstStr.stream()
                    .filter(a -> a.equalsIgnoreCase(value)) // condition as in the question
                    .collect(Collectors.toList())
            : lstStr;
}
To reduce the complexity for the cases where the element would be present in the list, the rather simpler solution would be collecting to a list and then checking for the size :
List<String> findAndReturn(List<String> lstStr, String value) {
    List<String> filteredLst = lstStr.stream()
            .filter(data -> data.equalsIgnoreCase(value))
            .collect(Collectors.toList());
    return filteredLst.isEmpty() ? lstStr : filteredLst;
}
You can use a custom Collector for this purpose.
li.stream().filter(d->"0".equalsIgnoreCase(d)).collect(
Collector.of(
ArrayList::new,
ArrayList::add,
(a, b) -> {
a.addAll(b);
return a;
},
a -> a.isEmpty() ? li : a
)
);
See the Collector docs: https://docs.oracle.com/javase/8/docs/api/java/util/stream/Collector.html
First check if your list contains the value. If it does, filter the list; otherwise just print the original list:
if(!lstStr.contains("0")) {
lstStr.forEach(data2 -> System.out.println(data2));
}else {
List<String> filteredLst = lstStr.stream()
.filter(data-> "1".equalsIgnoreCase(data))
.collect(Collectors.toList());
filteredLst.forEach(data2 -> System.out.println(data2));
}

Accumulator not working properly in parallel stream

I made a collector that reduces a stream to a map whose keys are the items that certain customers want to buy and whose values are the customers' names. My implementation works properly on a sequential stream,
but when I try to use a parallel stream it doesn't work at all; the resulting sets always contain only one customer name.
List<Customer> customerList = this.mall.getCustomerList();

Supplier<Object> supplier = ConcurrentHashMap<String, Set<String>>::new;

BiConsumer<Object, Customer> accumulator = (o, customer) -> customer.getWantToBuy().stream()
        .map(Item::getName)
        .forEach(item -> ((ConcurrentHashMap<String, Set<String>>) o)
                .merge(item,
                        new HashSet<String>(Collections.singleton(customer.getName())),
                        (s, s2) -> {
                            HashSet<String> res = new HashSet<>(s);
                            res.addAll(s2);
                            return res;
                        }));

BinaryOperator<Object> combiner = (o, o2) -> {
    ConcurrentHashMap<String, Set<String>> res =
            new ConcurrentHashMap<>((ConcurrentHashMap<String, Set<String>>) o);
    res.putAll((ConcurrentHashMap<String, Set<String>>) o2);
    return res;
};

Function<Object, Map<String, Set<String>>> finisher =
        o -> new HashMap<>((ConcurrentHashMap<String, Set<String>>) o);

Collector<Customer, ?, Map<String, Set<String>>> toItemAsKey =
        new CollectorImpl<>(supplier, accumulator, combiner, finisher, EnumSet.of(
                Collector.Characteristics.CONCURRENT,
                Collector.Characteristics.IDENTITY_FINISH));

Map<String, Set<String>> itemMap = customerList.stream().parallel().collect(toItemAsKey);
There is certainly a problem in my accumulator implementation or in another function, but I cannot figure it out. Could anyone suggest what I should do?
Your combiner is not implemented correctly. You overwrite all entries that have the same key; what you want is to add the values to the existing keys.
BinaryOperator<ConcurrentHashMap<String, Set<String>>> combiner = (o, o2) -> {
    ConcurrentHashMap<String, Set<String>> res = new ConcurrentHashMap<>(o);
    o2.forEach((key, set) -> set.forEach(string ->
            res.computeIfAbsent(key, k -> new HashSet<>()).add(string)));
    return res;
};
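A quick standalone sketch of mine to check the fixed combiner's behaviour (the item and customer names here are made up purely for illustration):
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BinaryOperator;

public class CombinerDemo {
    public static void main(String[] args) {
        BinaryOperator<ConcurrentHashMap<String, Set<String>>> combiner = (o, o2) -> {
            ConcurrentHashMap<String, Set<String>> res = new ConcurrentHashMap<>(o);
            o2.forEach((key, set) -> set.forEach(string ->
                    res.computeIfAbsent(key, k -> new HashSet<>()).add(string)));
            return res;
        };

        // two partial results, as two parallel substreams might produce them
        ConcurrentHashMap<String, Set<String>> left = new ConcurrentHashMap<>(
                Map.of("apple", new HashSet<>(Set.of("Alice"))));
        ConcurrentHashMap<String, Set<String>> right = new ConcurrentHashMap<>(
                Map.of("apple", new HashSet<>(Set.of("Bob")),
                       "milk", new HashSet<>(Set.of("Carol"))));

        // the fixed combiner merges the sets; putAll would have replaced
        // left's {"Alice"} with right's {"Bob"} for the key "apple"
        System.out.println(combiner.apply(left, right));
        // prints something like {apple=[Bob, Alice], milk=[Carol]}
    }
}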
