We have a map from Student to record: Map<Student, StudentRecord>.
The Student class is as follows:
class Student {
    String id;
    String grade;
    int age;
}
Additionally, we are given a list of student ids (List<String>).
Using Java streams, what would be the most efficient way to filter out the records of students whose id exists in the provided list?
The expected outcome is the filtered records keyed by id (String): Map<String, StudentRecord>.
You can stream the set of entries:
map.entrySet().stream()
   .filter(e -> list.contains(e.getKey().getId()))
   .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
If you also want to map the keys to the id field, then:
map.entrySet().stream()
   .filter(e -> list.contains(e.getKey().getId()))
   .collect(Collectors.toMap(e -> e.getKey().getId(), Map.Entry::getValue));
First of all, I'd convert your List to a Set, to avoid linear search time:
List<String> ids = ...
Set<String> idsSet = new HashSet<>(ids);
Now, you can stream over the entries of the Map, filter out those having ids in the List/Set, and collect the remaining ones into an output Map:
Map<String, StudentRecord> filtered =
    input.entrySet()
         .stream()
         .filter(e -> !idsSet.contains(e.getKey().getId()))
         .collect(Collectors.toMap(e -> e.getKey().getId(), Map.Entry::getValue));
The other answers are correct, but I don't think they are as efficient, since they either use temporary memory or their complexity is not O(n). An alternative is:
provided.stream()
    .map(id -> new AbstractMap.SimpleEntry<>(id, map.entrySet()
            .stream().filter(st -> st.getKey().getId().equals(id))
            .map(Map.Entry::getValue).findFirst()))
    .filter(simpleEntry -> simpleEntry.getValue().isPresent())
    .map(entry -> new AbstractMap.SimpleEntry<>(entry.getKey(), entry.getValue().get()))
    .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
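If O(n + m) work really is the goal, another option (a sketch, not from the answers above; it assumes student ids are unique, the provided ids are distinct, and Student exposes getId()) is to re-key the map by id once and then do constant-time lookups for each provided id:
// Build an id-keyed view of the map once: O(n)
Map<String, StudentRecord> byId = map.entrySet().stream()
        .collect(Collectors.toMap(e -> e.getKey().getId(), Map.Entry::getValue));

// Look up each provided id directly: O(m), assuming the provided ids are distinct
Map<String, StudentRecord> result = provided.stream()
        .filter(byId::containsKey)
        .collect(Collectors.toMap(Function.identity(), byId::get));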
I have a list of students and a delegate with a function that returns the servers for a given student (getServersForStudent(student)). I would like to build a map from each server to the list of students on it. A student can be on many servers.
private Map<Server, List<Student>> getStudentsByServer(List<Student> students) {
    Map<Server, List<Student>> map = new HashMap<>();
    students.forEach(student -> {
        List<Server> servers = delegate.getServersForStudent(student);
        if (!servers.isEmpty()) {
            servers.forEach(server ->
                map.computeIfAbsent(server, s -> new ArrayList<>()).add(student));
        }
    });
    return map;
}
This works perfectly, but I would like to refactor this to use streams in order to make an immutable collection instead. I tried doing this with groupingBy, but I wasn't able to get the right result:
students
.stream()
.collect(
Collectors.groupingBy(
student -> delegate.getServersForStudent(student),
Collectors.mapping(Function.identity(), Collectors.toList())
)
);
This grouping doesn't have the same result as above since it is grouping by lists. Does anyone have any suggestions on how to best do this with Java streams?
Streams are not required to end up with an immutable collection; simply copy your collection into an immutable one at the end, or wrap it in an unmodifiable view:
private Map<Server, List<Student>> getStudentsByServer(final List<Student> students) {
    final Map<Server, List<Student>> map = new HashMap<>();
    for (final Student student : students) {
        for (final Server server : delegate.getServersForStudent(student)) {
            map.computeIfAbsent(server, s -> new ArrayList<>())
               .add(student);
        }
    }
    // wrap:
    // return Collections.unmodifiableMap(map);
    // or copy:
    return Map.copyOf(map);
}
If you really want to do it stream-based, you have to first create a stream of tuples (student, server), which you can then group. Java does not have a specific tuple type, but short of creating a custom type, you can misuse Map.Entry<K, V> for that:
students
.stream()
.flatMap(student -> delegate.getServersForStudent(student)
.stream()
.map(server -> Map.entry(student, server)))
.collect(
Collectors.groupingBy(
tuple -> tuple.getValue(),
Collectors.mapping(
tuple -> tuple.getKey(),
Collectors.toList())));
Note that the collections returned by Collectors don't make any promises about (im)mutability. If you require immutability, you have to add another collection step using Collectors.collectingAndThen:
.collect(
Collectors.collectingAndThen(
Collectors.groupingBy(
tuple -> tuple.getValue(),
Collectors.mapping(
tuple -> tuple.getKey(),
Collectors.toList())),
Map::copyOf));
// or wrap with: Collections::unmodifiableMap
And it's definitely worth mentioning that an unmodifiable/immutable map as in the example above still allows the value lists (the lists of students) to be modified, because Collectors.toList() currently returns a plain ArrayList. If you require the map values to be immutable too, you have to take care of that yourself, e.g. by using Collectors.toUnmodifiableList() or by copying/wrapping each list again.
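For example, a minimal sketch of the pipeline above with Collectors.toUnmodifiableList() (available since Java 10) as the downstream collector, so the value lists are unmodifiable as well:
.collect(
    Collectors.collectingAndThen(
        Collectors.groupingBy(
            tuple -> tuple.getValue(),
            Collectors.mapping(
                tuple -> tuple.getKey(),
                Collectors.toUnmodifiableList())),
        Map::copyOf));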
I need to map a list of pairs of objects into <occurrences, list of Objs with that number of occurrences>. I've tried using streams directly on the input list of pairs, but I'm still fairly new to Java and couldn't figure it out, so I was trying something like the code below, which is probably not close to the best way to do it.
public Map<Integer,ArrayList<Obj>> numBorders(List<Pair<Obj,Obj>> lf) {
Map<Integer,ArrayList<Obj>> nBorders = new HashMap<>();
List<Obj> list = new ArrayList<>();
for(Pair<Obj, Obj> pair : lf) {
list.add(pair.getKey());
list.add(pair.getValue());
}
nBorders = list.stream().collect(Collectors.groupingBy(...));
return nBorders;
}
So, for example, for lf = {(o1,o2),(o3,o2),(o5,o4),(o4,o1),(o3,o4),(o7,o1),(o5,o8),(o3,o10),(o4,o5),(o3,o7),(o9,o8)} the result should be {(1,{o9,o10}),(2,{o2,o7,o8}),(3,{o1,o5}),(4,{o3,o4})}.
I'm really confused about how to do this; if someone could help, I'd appreciate it. Thanks.
This can be done this way:
flatten each pair into its first and second values using Stream::flatMap
count the occurrences: build an intermediate map <Obj, Integer> using Collectors.groupingBy + Collectors.summingInt (to get an Integer count)
create an inverse map <Integer, List<Obj>> from the stream of the entries in the intermediate map using Collectors.groupingBy + Collectors.mapping
Optionally, if the order in the resulting map is important, a LinkedHashMap may be created from the entries of the intermediate frequency map sorted by value.
// value type is List<Obj>: Collectors.toList() does not guarantee an ArrayList
public Map<Integer, List<Obj>> numBorders(List<Pair<Obj, Obj>> lf) {
return lf.stream() // Stream<Pair>
.flatMap(p -> Stream.of(p.getKey(), p.getValue())) // Stream<Obj>
.collect(Collectors.groupingBy(
obj -> obj,
Collectors.summingInt(obj -> 1)
)) // Map<Obj, Integer>
.entrySet()
.stream() // Stream<Map.Entry<Obj, Integer>>
.sorted(Map.Entry.comparingByValue())
.collect(Collectors.groupingBy(
Map.Entry::getValue, // frequency is key
LinkedHashMap::new,
Collectors.mapping(Map.Entry::getKey, Collectors.toList())
)); // Map<Integer, List<Obj>>
}
I have a large list of items that I need to convert into a map grouping items of the same type:
List<Item> items = //10^6 items of different types
Map<Type, List<Item>> itemsByType = new ConcurrentHashMap<>();
for (Item item : items) {
itemsByType.computeIfAbsent(
item.getType(),
i -> new ArrayList<>()
).add(item);
}
Each type is then ordered by its long type identifier, each list of items of a type is ordered by its long item identifier, and finally the ordered lists are processed.
This works fine, but I'm wondering if there's a more efficient way to do all of this...?
You can use Java 8's Collectors.groupingBy:
Map<Type, List<Item>> itemsByType = items.stream()
    .sorted(Comparator.comparingLong(Item::getId)) // optional: any sorting you need
    .collect(Collectors.groupingBy(Item::getType));
If you want a ConcurrentMap, you can use groupingByConcurrent:
ConcurrentMap<Type, List<Item>> itemsByType = items.stream()
.collect(Collectors.groupingByConcurrent(Item::getType));
You can use the overloaded groupingBy with a TreeMap supplier so the map is already sorted by key:
TreeMap<Type, List<Item>> map = items
    .stream()
    .collect(Collectors.groupingBy(
        Item::getType,
        () -> new TreeMap<>(Comparator.comparingLong(Type::getId)),
        Collectors.toList()));
You can also collect the map with sorted keys and sorted values in one chain:
Map<Type, List<Item>> sorted = items.stream()
    .collect(
        Collectors.groupingBy(
            Item::getType,
            () -> new TreeMap<>(Comparator.comparingLong(Type::getId)),
            Collectors.collectingAndThen(
                Collectors.toList(),
                list -> list.stream()
                    .sorted(Comparator.comparingLong(Item::getId))
                    .collect(Collectors.toList()))));
You could use a Multimap, e.g., Guava's. Here is their code example:
ListMultimap<String, String> multimap = ArrayListMultimap.create();
for (President pres : US_PRESIDENTS_IN_ORDER) {
multimap.put(pres.firstName(), pres.lastName());
}
for (String firstName : multimap.keySet()) {
List<String> lastNames = multimap.get(firstName);
out.println(firstName + ": " + lastNames);
}
... produces output such as:
Zachary: [Taylor]
John: [Adams, Adams, Tyler, Kennedy] // Remember, Quincy!
George: [Washington, Bush, Bush]
Grover: [Cleveland, Cleveland] // Two, non-consecutive terms, rep'ing NJ!
...
A TreeMultimap has sorted keys and values, which is what you want, if I understood your title correctly.
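As a rough sketch (assuming Guava is on the classpath), the presidents example above with a TreeMultimap, which keeps both keys and values sorted by their natural order:
TreeMultimap<String, String> sorted = TreeMultimap.create();
for (President pres : US_PRESIDENTS_IN_ORDER) {
    sorted.put(pres.firstName(), pres.lastName());
}
// keySet() iterates first names alphabetically, and each value set is sorted as well
for (String firstName : sorted.keySet()) {
    out.println(firstName + ": " + sorted.get(firstName));
}
Note that TreeMultimap is a SetMultimap, so duplicate (key, value) pairs such as the two John Adamses collapse into a single entry; if you need to keep duplicates, stick with an ArrayListMultimap and sort when reading.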
A Multimap is particularly useful in case you need to check if a certain value is present for a certain key, because that is supported without getting the collection for the key and then searching that:
multimap.containsEntry("John", "Adams");
The question title may seem to be the same as some other posts, but the content is different, so please don't mark it as a duplicate.
Problem:
I have the below class:
public class SCDTO extends RDTO {
private List<String> sCPairs = Collections.emptyList();
public SCDTO(List<String> sCPairs) {
this.sCPairs = sCPairs;
}
//Getter setter
}
I am trying to use the stream expression below to set sCPairs.
sCPairsObject.setSCPairs(
util.getSCMap().entrySet().stream()
.filter(entry -> entry.getValue().contains("abc"))
.collect(Collectors.toCollection(ArrayList<String>::new))
);
But I get a compilation error saying:
no instance(s) of type variable(s) exist so that Entry<String, List<String>> conforms to String
util.getSCMap() returns Map<String, List<String>>.
Can anyone please explain why this is happening and how to solve it?
Thanks.
You are streaming entries from the map:
sCPairsObject.setSCPairs(util.getSCMap().entrySet().stream()
Then filtering out some of them:
.filter(entry -> entry.getValue().contains("abc"))
Now you need to map each entry to its value list, for example:
.map(entry -> entry.getValue())
and stream the contents of all these lists as one stream:
.flatMap(List::stream)
Finally, collect the values into a single list:
.collect(Collectors.toList());
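Putting those steps together, the whole pipeline looks roughly like this (a sketch; it collects the elements of every matching value list into one List<String>):
sCPairsObject.setSCPairs(util.getSCMap().entrySet().stream()
        .filter(entry -> entry.getValue().contains("abc"))
        .map(entry -> entry.getValue())
        .flatMap(List::stream)
        .collect(Collectors.toList()));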
Your stream pipeline finds all the Map entries whose value List<String> contains the String "abc", and tries to collect them into a List<String>.
You didn't specify how you intend to convert each Map.Entry<String,List<String>> element that passes the filter into a String. Depending on the required logic, perhaps you are missing a map() step after the filter.
For example, if you wish to collect all the keys having a value that passes the filter into a List<String>:
sCPairsObject.setSCPairs(util.getSCMap()
.entrySet()
.stream()
.filter(entry -> entry.getValue().contains("abc"))
.map(Map.Entry::getKey)
.collect(Collectors.toCollection(ArrayList<String>::new)));
You don't need to iterate over the entrySet since you are not using the keys at all. You can get the values of the Map and filter those. It looks like this:
sCPairsObject.setSCPairs(util.getSCMap()
    .values()
    .stream()
    .filter(list -> list.contains("abc"))
    .flatMap(List::stream)
    .collect(Collectors.toList()));
Since you want to convert a Map<String, List<String>> to a List<String> that is a union of all matching value lists, you'll need a flatMap() to join the streams of those value lists into a single stream.
Additionally, you don't seem to need the keys, so just stream over the map's values:
List<String> scPairs = util.getSCMap().values().stream() //stream on the values only
.filter( l -> l.contains( "abc" ) ) //filter the lists
.flatMap( Collection::stream ) //join the individual streams
.collect( Collectors.toList() ); //collect the result into a single list
Say I have a list of People with attributes name and age. How do I get all instances of People that have the largest value for attribute age, using a stream?
Currently, I am using a two-step approach:
1) Finding the maximum value of age
int maxAge = group
.stream()
.mapToInt(person -> person.getAge())
.max()
.orElse(-1);
2) Creating a list of People with that age
List<People> peopleWithMaxAge = group
.stream()
.filter(person -> person.getAge() == maxAge)
.collect(Collectors.toList());
No worries, this works. However, consider the case where calculating the age is an expensive operation. In that case, it would be nice if you could do it in one go, wouldn't it?
You can also use groupingBy with TreeMap as a mapFactory:
List<People> peopleWithMaxAge = group.stream()
    .collect(groupingBy(People::getAge, TreeMap::new, toList())) // assumes static imports of Collectors.groupingBy and Collectors.toList
    .lastEntry()
    .getValue();
A cleaner way would be to use Stream.max as:
List<People> peopleWithMaxAge = group.stream() // Stream<People>
    .collect(Collectors.groupingBy(People::getAge)) // Map<Integer, List<People>>
    .entrySet() // Set<Entry<Integer, List<People>>>
    .stream().max(Comparator.comparingInt(Entry::getKey)) // Optional<Entry<Integer, List<People>>>
    .map(Entry::getValue) // Optional<List<People>>
    .orElse(new ArrayList<>());
An alternative is to group and pick the max key (age):
List<People> peopleWithMaxAge = group.stream()
.collect(Collectors.groupingBy(People::getAge))
.entrySet()
.stream()
.sorted(Comparator.<Entry<Integer, List<People>>>comparingInt(Entry::getKey)
.reversed())
.findFirst()
.map(Entry::getValue)
.orElse(new ArrayList<>()); //empty list if original was empty
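If computing the age really is expensive and you also want to avoid building the intermediate map, a plain single-pass loop is an alternative sketch (not stream-based; it calls getAge() exactly once per person):
List<People> oldest = new ArrayList<>();
int maxAge = Integer.MIN_VALUE;
for (People person : group) {
    int age = person.getAge(); // computed once per person
    if (age > maxAge) {
        maxAge = age;
        oldest.clear();
        oldest.add(person);
    } else if (age == maxAge) {
        oldest.add(person);
    }
}
The groupingBy-based answers above also call getAge() only once per element, so the main difference here is avoiding the intermediate Map.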