I have a text file that contains URLs and emails. I need to extract all of them from the file. Each URL and email can be found more than once, but the result shouldn't contain duplicates.
I can extract all URLs using the following code:
Files.lines(filePath)
.map(urlPattern::matcher)
.filter(Matcher::find)
.map(Matcher::group)
.distinct();
I can extract all emails using the following code:
Files.lines(filePath)
.map(emailPattern::matcher)
.filter(Matcher::find)
.map(Matcher::group)
.distinct();
Can I extract all URLs and emails reading the stream returned by Files.lines(filePath) only one time?
Something like splitting stream of lines to stream of URLs and stream of emails.
You can use the partitioningBy collector, though it's still not a very elegant solution.
Map<Boolean, List<String>> map = Files.lines(filePath)
.filter(str -> urlPattern.matcher(str).matches() ||
emailPattern.matcher(str).matches())
.distinct()
.collect(Collectors.partitioningBy(str -> urlPattern.matcher(str).matches()));
List<String> urls = map.get(true);
List<String> emails = map.get(false);
If you don't want to apply the regexp twice, you can do it using an intermediate pair object (for example, SimpleEntry):
public static String classify(String str) {
return urlPattern.matcher(str).matches() ? "url" :
emailPattern.matcher(str).matches() ? "email" : null;
}
Map<String, Set<String>> map = Files.lines(filePath)
.map(str -> new AbstractMap.SimpleEntry<>(classify(str), str))
.filter(e -> e.getKey() != null)
.collect(Collectors.groupingBy(e -> e.getKey(),
Collectors.mapping(e -> e.getValue(), Collectors.toSet())));
Using my free StreamEx library the last step would be shorter:
Map<String, Set<String>> map = StreamEx.of(Files.lines(filePath))
.mapToEntry(str -> classify(str), Function.identity())
.nonNullKeys()
.grouping(Collectors.toSet());
You can perform the matching within a Collector:
Map<String, Set<String>> map = Files.lines(filePath)
    .collect(HashMap::new,
        (hm, line) -> {
            Matcher m = emailPattern.matcher(line);
            if (m.matches())
                hm.computeIfAbsent("mail", x -> new HashSet<>()).add(line);
            else if (m.usePattern(urlPattern).matches())
                hm.computeIfAbsent("url", x -> new HashSet<>()).add(line);
        },
        (m1, m2) -> m2.forEach((k, v) -> m1.merge(k, v,
            (s1, s2) -> { s1.addAll(s2); return s1; }))
    );
Set<String> mail = map.get("mail"), url = map.get("url");
Note that this can easily be adapted to find multiple matches within a line:
Map<String, Set<String>> map = Files.lines(filePath)
    .collect(HashMap::new,
        (hm, line) -> {
            Matcher m = emailPattern.matcher(line);
            while (m.find())
                hm.computeIfAbsent("mail", x -> new HashSet<>()).add(m.group());
            m.usePattern(urlPattern).reset();
            while (m.find())
                hm.computeIfAbsent("url", x -> new HashSet<>()).add(m.group());
        },
        (m1, m2) -> m2.forEach((k, v) -> m1.merge(k, v,
            (s1, s2) -> { s1.addAll(s2); return s1; }))
    );
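As an aside, not part of the original answer: on Java 12+ you can get the same single pass with Collectors.teeing, which feeds every element into two downstream collectors. A minimal sketch, assuming the same urlPattern and emailPattern fields as above; Matcher.results() (Java 9+) extracts every match per line:
Map.Entry<Set<String>, Set<String>> result = Files.lines(filePath)
    .collect(Collectors.teeing(
        Collectors.flatMapping(
            line -> urlPattern.matcher(line).results().map(MatchResult::group),
            Collectors.toSet()),
        Collectors.flatMapping(
            line -> emailPattern.matcher(line).results().map(MatchResult::group),
            Collectors.toSet()),
        Map::entry)); // Map.entry pairs the two result sets
Set<String> urls = result.getKey(), emails = result.getValue();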
Since you can't reuse a Stream, the only option would be to "do it manually", I think.
Files.lines(filePath).forEach(s -> { /* match and sort into two lists */ });
If there's another solution for this though I'd be happy to learn about it!
The overall question should be: Why would you want to stream only once?
Extracting the URLs and extracting the emails are different operations and thus should be handled in their own streaming operations. Even if the underlying stream source contains hundreds of thousands of records, the time for iteration is negligible compared to the mapping and filtering operations.
The only thing you should consider as a possible performance issue is the IO operation. The cleanest solution therefore is to read the file only once and then stream on a resulting collection twice:
List<String> allLines = Files.readAllLines(filePath);
allLines.stream() ... // here do the URLs
allLines.stream() ... // here do the emails
Of course this requires some memory.
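For concreteness, a sketch of those two passes, reusing the pipelines from the question:
List<String> allLines = Files.readAllLines(filePath);
List<String> urls = allLines.stream()
    .map(urlPattern::matcher)
    .filter(Matcher::find)
    .map(Matcher::group)
    .distinct()
    .collect(Collectors.toList());
List<String> emails = allLines.stream()
    .map(emailPattern::matcher)
    .filter(Matcher::find)
    .map(Matcher::group)
    .distinct()
    .collect(Collectors.toList());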
I am trying to rewrite the method below using streams, but I am not sure what the best approach is. If I use flatMap on the values of the entrySet(), I lose the reference to the current key.
private List<String> asList(final Map<String, List<String>> map) {
final List<String> result = new ArrayList<>();
for (final Entry<String, List<String>> entry : map.entrySet()) {
final List<String> values = entry.getValue();
values.forEach(value -> result.add(String.format("%s-%s", entry.getKey(), value)));
}
return result;
}
The best I managed to do is the following:
return map.keySet().stream()
.flatMap(key -> map.get(key).stream()
.map(value -> new AbstractMap.SimpleEntry<>(key, value)))
.map(e -> String.format("%s-%s", e.getKey(), e.getValue()))
.collect(Collectors.toList());
Is there a simpler way without resorting to creating new Entry objects?
A stream is a sequence of values (possibly unordered / parallel). map() is what you use when you want to map a single value in the sequence to some single other value. Say, map "alturkovic" to "ALTURKOVIC". flatMap() is what you use when you want to map a single value in the sequence to 0, 1, or many other values. That's why a flatMap lambda needs to turn a value into a stream of values. flatMap can thus be used to take, say, a list of lists of strings, and turn that into a stream of just strings.
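As a tiny illustration of that last point (the names here are made up):
List<List<String>> nested = Arrays.asList(
    Arrays.asList("a", "b"),
    Arrays.asList("c"));
List<String> flat = nested.stream()
    .flatMap(List::stream)         // each inner list contributes 0..n elements
    .collect(Collectors.toList()); // [a, b, c]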
Here, you want to map a single entry from your map (a single key/value pair) into a single element (a string describing it). 1 value to 1 value. That means flatMap is not appropriate. You're looking for just map.
Furthermore, you need both key and value to perform your mapping op, so keySet() is also not appropriate. You're looking for entrySet(), which gives you a set of all k/v pairs, just what we need.
That gets us to:
map.entrySet().stream()
.map(e -> String.format("%s-%s", e.getKey(), e.getValue()))
.collect(Collectors.toList());
Your original code makes no effort to treat a single value from a map (which is a List<String>) as separate values; you just call .toString() on the entire ordeal, and be done with it. This means the produced string looks like, say, [Hello, World] given a map value of List.of("Hello", "World"). If you don't want this, you still don't want flatMap, because streams are also homogeneous - the values in a stream are all of the same kind, and thus a stream of 'key1 value1 value2 key2 valueA valueB' is not what you'd want:
map.entrySet().stream()
.map(e -> String.format("%s-%s", e.getKey(), myPrint(e.getValue())))
.collect(Collectors.toList());
public static String myPrint(List<String> in) {
// write your own algorithm here
}
Stream API just isn't the right tool to replace that myPrint method.
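That said, if a simple comma-joined rendering happens to be what you want (an assumption on my part, not something the question states), the stub can be filled in with a one-liner:
public static String myPrint(List<String> in) {
    // e.g. ["Hello", "World"] becomes "Hello, World"
    return String.join(", ", in);
}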
A third alternative is that you want to smear out the map; you want each string in a mapvalue's List<String> to first be matched with the key (so that's re-stating that key rather a lot), and then do something to that. NOW flatMap IS appropriate - you want a stream of k/v pairs first, and then do something to that, and each element is now of the same kind. You want to turn the map:
key1 = [value1, value2]
key2 = [value3, value4]
first into a stream:
key1:value1
key1:value2
key2:value3
key2:value4
and take it from there. This explodes a single k/v entry in your map into more than one, thus flatmapping is needed:
return map.entrySet().stream()
    .flatMap(e -> e.getValue().stream()
        .map(v -> String.format("%s-%s", e.getKey(), v)))
    .collect(Collectors.toList());
Going inside-out, it maps a single entry within a list that belongs to a single k/v pair into the string Key-SingleItemFromItsList.
Adding my two cents to the excellent answer by @rzwitserloot; flatMap and map are already explained there.
List<String> resultLists = myMap.entrySet().stream()
    .flatMap(mapEntry -> printEntries(mapEntry.getKey(), mapEntry.getValue()))
    .collect(Collectors.toList());
System.out.println(resultLists);
Splitting this into a separate method gives good readability IMO:
private static Stream<String> printEntries(String key, List<String> values) {
return values.stream().map(val -> String.format("%s-%s",key,val));
}
Using Java 8 (if that matters), I have a behavior I struggle to understand.
Let's say I have an Entry class as such:
static class Entry {
String key;
List<String> values;
public Entry(String key, String... values) {
this.key = key;
this.values = Arrays.asList(values);
}
}
And a list of instances:
List<Entry> entries = Arrays.asList(
    new Entry("a", "a1"),
    new Entry("b", "b1"),
    new Entry("a", "a2"));
Now I want to collect all entries having the same key (keeping distinct values), and I stumbled upon an "IllegalStateException: stream has already been operated upon or closed".
The minimal code for producing it is:
entries.stream().collect(
Collectors.groupingBy(
e -> e.key,
Collectors.mapping(
e -> e.values.stream(),
Collectors.reducing(Stream.<String>empty(), Stream::concat))
)
);
(I'd add a collectingAndThen to meet my requirement, but it's not the point of my question)
I fail to see which part of the code consumes / acts on the streams. Furthermore, if I change the code to the following, it works :
entries.stream().collect(
Collectors.groupingBy(
e -> e.key,
Collectors.mapping(
e -> e.values.stream(),
Collectors.reducing(Stream::concat))
)
);
I'd rather use the former code, because the latter gives me a Map<K, Optional<V>> while the former gives a Map<K, V>.
But the question is: what difference does the use of a neutral element make in the reduction that ultimately causes (at least) one of the streams to be consumed?
The main problem can be reduced to this similar example:
Stream<String> identity = Stream.empty();
Stream<String> stream1 = Stream.of("1");
Stream<String> stream2 = Stream.of("2");
Stream.concat(identity, stream1); //works
Stream.concat(identity, stream2); //java.lang.IllegalStateException
In other words,
Collectors.reducing(Stream.<String>empty(), Stream::concat)
Creates one stream object with Stream.<String>empty(), and reuses it as the identity value in your multi-level reduction. Fortunately, you already have a workaround.
As warned against in the docs, and also pointed out in comments, repeated stream concatenation is discouraged:
Use caution when constructing streams from repeated concatenation. Accessing an element of a deeply concatenated stream can result in deep call chains, or even StackOverflowException.
One alternative approach I can think of is to flatten the stream before grouping:
//This yields a Map<String, List<String>>
entries.stream()
.flatMap(v -> v.values.stream().map(val -> new SimpleEntry<>(v.key, val)))
.collect(Collectors.groupingBy(
Map.Entry::getKey,
Collectors.mapping(Map.Entry::getValue,
Collectors.toList())));
The main problem is you cannot have a stream as identity element because streams cannot be reused, so when it tries to reuse it, throws saying it is operated upon or closed.
This is an alternative to the approach (returning List instead of Optional):
Map<String, List<String>> collect = entries.stream().collect(
    Collectors.groupingBy(
        e -> e.key,
        Collectors.flatMapping(e -> e.values.stream(), Collectors.toList())));
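Note that Collectors.flatMapping only exists since Java 9. Since the question mentions Java 8, a rough equivalent can be sketched with toMap and a merge function (my variant, not part of the original answer):
Map<String, List<String>> collect = entries.stream()
    .collect(Collectors.toMap(
        e -> e.key,
        e -> new ArrayList<>(e.values), // copy, so the merge below may mutate it
        (l1, l2) -> { l1.addAll(l2); return l1; }));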
Let's say I have one list with elements like:
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
And I have another large list of strings from which I would want to select all elements ending with any of the strings from the above list.
List<String> fullList= Arrays.asList("111.AAA", "222.AAA", "111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD");
Ideally I would want a way to partition the second list so that it contains four groups, each group containing only those elements ending with one of the strings from first list. So in the above case the results would be 4 groups of 2 elements each.
I found this example but I am still missing the part where I can filter by all endings which are contained in a different list.
Map<Boolean, List<String>> grouped = fullList.stream().collect(Collectors.partitioningBy((String e) -> !e.endsWith("AAA")));
UPDATE: MC Emperor's answer does work, but it crashes on lists containing millions of strings, so it doesn't work that well in practice.
Update
This one is similar to the approach from the original answer, but now fullList is no longer traversed many times. Instead, it is traversed once, and for each element, the list of endings is searched for a match. This is mapped to an Entry(ending, fullListItem), and then grouped by the ending. While grouping, the value elements are unwrapped to a List.
Map<String, List<String>> obj = fullList.stream()
.map(item -> endings.stream()
.filter(item::endsWith)
.findAny()
.map(ending -> new AbstractMap.SimpleEntry<>(ending, item))
.orElse(null))
.filter(Objects::nonNull)
.collect(groupingBy(Map.Entry::getKey, mapping(Map.Entry::getValue, toList())));
Original answer
You could use this:
Map<String, List<String>> obj = endings.stream()
.map(ending -> new AbstractMap.SimpleEntry<>(ending, fullList.stream()
.filter(str -> str.endsWith(ending))
.collect(Collectors.toList())))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
It takes all endings and traverses the fullList for elements ending with the value.
Note that with this approach, it traverses the full list once per ending. This is rather inefficient, and I think you are better off using another way to map the elements. For instance, if you know something about the structure of the elements in fullList, then you can group it immediately.
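For example, under the assumption that every element has the shape prefix-dot-suffix (which the sample data suggests but the question does not guarantee), the suffix can serve directly as the grouping key:
Map<String, List<String>> grouped = fullList.stream()
    .collect(Collectors.groupingBy(s -> s.substring(s.lastIndexOf('.') + 1)));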
To partition a stream means putting each element into one of two groups. Since you have more than two suffixes, you want grouping instead, i.e. use groupingBy instead of partitioningBy.
If you want to support an arbitrary endings list, you might prefer something better than a linear search.
One approach is using a sorted collection, using a suffix-based comparator.
The comparator can be implemented like
Comparator<String> backwards = (s1, s2) -> {
for(int p1 = s1.length(), p2 = s2.length(); p1 > 0 && p2 > 0;) {
int c = Integer.compare(s1.charAt(--p1), s2.charAt(--p2));
if(c != 0) return c;
}
return Integer.compare(s1.length(), s2.length());
};
The logic is similar to the natural order of string, with the only difference that it runs from the end to the beginning. In other words, it’s equivalent to Comparator.comparing(s -> new StringBuilder(s).reverse().toString()), but more efficient.
Then, given an input like
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
List<String> fullList= Arrays.asList("111.AAA", "222.AAA",
"111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD");
you can perform the task as
// prepare collection with faster lookup
TreeSet<String> suffixes = new TreeSet<>(backwards);
suffixes.addAll(endings);
// use it for grouping
Map<String, List<String>> map = fullList.stream()
.collect(Collectors.groupingBy(suffixes::floor));
But if you are only interested in the count of each group, you should count right while grouping, to avoid storing lists of elements:
Map<String, Long> map = fullList.stream()
.collect(Collectors.groupingBy(suffixes::floor, Collectors.counting()));
If the list can contain strings which match no suffix of the list, you have to replace suffixes::floor with s -> { String g = suffixes.floor(s); return g!=null && s.endsWith(g)? g: "_None"; } or a similar function.
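Assembled into a complete pipeline, that defensive variant might look like this (the "_None" bucket name is just a placeholder):
Map<String, List<String>> map = fullList.stream()
    .collect(Collectors.groupingBy(s -> {
        String g = suffixes.floor(s);
        return g != null && s.endsWith(g) ? g : "_None";
    }));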
Use groupingBy.
Map<String, List<String>> grouped = fullList
.stream()
.collect(Collectors.groupingBy(s -> s.split("\\.")[1]));
s.split("\\.")[1] will take the yyy part of xxx.yyy.
EDIT : if you want to empty the values for which the ending is not in the list, you can filter them out:
grouped.keySet().forEach(key->{
if(!endings.contains(key)){
grouped.put(key, Collections.emptyList());
}
});
If your fullList has some elements with suffixes that are not present in your endings, you could try something like:
List<String> endings= Arrays.asList("AAA", "BBB", "CCC", "DDD");
List<String> fullList= Arrays.asList("111.AAA", "222.AAA", "111.BBB", "222.BBB", "111.CCC", "222.CCC", "111.DDD", "222.DDD", "111.EEE");
Function<String,String> suffix = s -> endings.stream()
.filter(e -> s.endsWith(e))
.findFirst().orElse("UnknownSuffix");
Map<String,List<String>> grouped = fullList.stream()
.collect(Collectors.groupingBy(suffix));
System.out.println(grouped);
If you create a helper method getSuffix() that accepts a String and returns its suffix (for example getSuffix("111.AAA") will return "AAA"), you can filter the Strings having suffix contained in the other list and then group them:
Map<String,List<String>> grouped =
fullList.stream()
.filter(s -> endings.contains(getSuffix(s)))
.collect(Collectors.groupingBy(s -> getSuffix(s)));
For example, if the suffix always begins at index 4, you can have:
public static String getSuffix(String s) {
return s.substring(4);
}
and the above Stream pipeline will return the Map:
{AAA=[111.AAA, 222.AAA], CCC=[111.CCC, 222.CCC], BBB=[111.BBB, 222.BBB], DDD=[111.DDD, 222.DDD]}
P.S. note that the filter step would be more efficient if you change the endings List to a HashSet.
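That tweak could look like this; contains on a HashSet is O(1) instead of O(n) on a List:
Set<String> endingSet = new HashSet<>(endings);
Map<String, List<String>> grouped = fullList.stream()
    .filter(s -> endingSet.contains(getSuffix(s)))
    .collect(Collectors.groupingBy(s -> getSuffix(s)));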
One can use groupingBy of substrings with filter to ensure that the final Map has just the collection of relevant values. This could be done as:
Map<String, List<String>> grouped = fullList.stream()
.collect(Collectors.groupingBy(a -> getSuffix(a)))
.entrySet().stream()
.filter(e -> endings.contains(e.getKey()))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
private static String getSuffix(String a) {
    return a.split("\\.")[1]; // the dot must be escaped: split takes a regex
}
You can use groupingBy with a filter on the endings list:
fullList.stream()
    .collect(groupingBy(str -> endings.stream()
        .filter(ele -> str.endsWith(ele))
        .findFirst()
        .get()));
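Be aware that get() throws NoSuchElementException as soon as an element matches no ending; a slightly safer variant (the "NO_MATCH" bucket name is invented here) would be:
fullList.stream()
    .collect(groupingBy(str -> endings.stream()
        .filter(str::endsWith)
        .findFirst()
        .orElse("NO_MATCH")));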
How can I convert the below condition to the Java 8 streams way?
List<String> name = Arrays.asList("A", "B", "C");
String id;
if(name.contains("A")){
id = "123";
}else if(name.contains("B")){
id = "234";
}else if(name.contains("C")){
id = "345";
}
I am in the process of learning streams and was wondering how I can convert this one. I tried forEach, map, and filter, but I wasn't getting anywhere.
Yet another (but compact) solution:
Arrays.asList("B", "C", "A", "D").stream()
.map(s -> s.equals("A") ? new SimpleEntry<>(1, "123")
: s.equals("B") ? new SimpleEntry<>(2, "234")
: s.equals("C") ? new SimpleEntry<>(3, "345")
: null)
.filter(x -> x != null)
.reduce((a, b) -> a.getKey() < b.getKey() ? a : b)
.map(Entry::getValue)
.ifPresent(System.out::println);
I cannot see why you have to convert it to a stream; this doesn't seem like a Stream API case to me.
But if you want to easily add new items and make the code more readable, I suggest you use a map instead.
private static final ImmutableMap<String, String> nameToId = new ImmutableMap.Builder<String, String>()
.put("A", "123")
.put("B", "234")
.put("C", "345")
.build();
Now you can add new items without changing much code, and just call nameToId.get(name) to fetch an id by name.
You can add more flexibility here using streams
Stream.of("A", "B", "C").map(nameToId::get)collect(Collectors.toList());
Inspired by Serghey Bishyr's answer to use a map, I also used a map (but an ordered one), and I would rather go through the keys of the map instead of the list to find the appropriate id. That might of course not be the best solution, but you can play with Streams that way ;-)
Map<String, String> nameToId = new LinkedHashMap<>();
// the following order reflects the order of your conditions! (if your first condition would contain "B", you would move "B" at the first position)
nameToId.put("A", "123");
nameToId.put("B", "234");
nameToId.put("C", "345");
List<String> name = Arrays.asList("A", "B", "C");
String id = nameToId.keySet()
.stream()
.filter(name::contains)
.findFirst()
.map(nameToId::get)
.orElse(null);
You gain nothing really... don't try to put too much into the filtering predicates or mapping functions, because then your Stream solution might not be that readable anymore.
The problem you describe is to get a single value (id) from applying a function to two input sets: the input values and the mappings.
id = f(list,mappings)
So basically your question is to find an f that is based on streams (in other words, solutions that return a list don't solve your problem).
First of all, the original if-else-if-else construct mixes three concerns:
input validation (only considering the value set "A","B","C")
mapping an input value to an output value ("A" -> "123", "B" -> "234", "C" -> "345")
defining an implicit prioritization of input values according to their natural order (not sure if that is intentional or coincidental), "A" before "B" before "C"
When you want to apply this to a stream of input value, you have to make all of them explicit:
a Filter function, that ignores all input value without a mapping
a Mapper function, that maps the input to the id
a Reduce function (BinaryOperator) that performs the prioritization logic implied by the if-else-if-else construct
Mapping Function
The mapper is a discrete function mapping each input value to an Optional holding at most one output value:
Function<String,Optional<String>> idMapper = s -> {
if("A".equals(s)){
return Optional.of("123");
} else if("B".equals(s)){
return Optional.of("234");
} else if("C".equals(s)){
return Optional.of("345");
}
return Optional.empty();
} ;
For more mappings an immutable map should be used:
Map<String,String> mapping = Collections.unmodifiableMap(new HashMap<String,String>(){{
put("A", "123");
put("B", "234");
put("C", "345");
}}); //the instance initializer is just one way to initialize the map :)
Function<String,Optional<String>> idMapper = s -> Optional.ofNullable(mapping.get(s));
Filter Function
As we only allow input values for which we have a mapping, we could use the keyset of the mapping map:
Predicate<String> filter = s -> mapping.containsKey(s);
Reduce Function
To find the top-priority element of the stream using their natural order, use this BinaryOperator:
BinaryOperator<String> prioritizer = (a, b) -> a.compareTo(b) < 0 ? a : b;
If there is another logic to prioritize, you have to adapt the implementation accordingly.
This operator is used in a .reduce() call. If you prioritize based on natural order, you could use .min(Comparator.naturalOrder()) on the stream instead; since the natural order is exactly what we prioritize by in this example, both are equivalent here.
Stream Pipeline
Now you first have to reduce the stream to a single value using the prioritizer; the result is an Optional which you flatMap by applying the idMapper function (flatMap so you don't end up with Optional<Optional<String>>):
Optional<String> id = Arrays.asList("C", "B", "A")
.stream()
.filter(filter) //concern: input validation
.reduce(prioritizer) //concern: prioritization
.flatMap(idMapper); //concern: id-mapping
Final Result
To wrap it up, for your particular problem, the most concise version (without defining the functions first), using a stream and input validation, would be:
//define the mapping in an immutable map (that's just one way to do it)
final Map<String,String> mapping = Collections.unmodifiableMap(
new HashMap<String,String>(){{
put("A", "123");
put("B", "234");
put("C", "345");
}});
Optional<String> result = Arrays.asList("C", "D", "A", "B")
.stream()
.filter(mapping::containsKey)
.min(Comparator.naturalOrder())
.flatMap(s -> Optional.ofNullable(mapping.get(s)));
which is the sought-for f:
BiFunction<List<String>,Map<String,String>,Optional<String>> f =
(list,map) -> list.stream()
.filter(map::containsKey)
.min(Comparator.naturalOrder())
.flatMap(s -> Optional.ofNullable(map.get(s)));
There is certainly some appeal to this approach, but the elegance-through-simplicity of the if-else approach cannot be denied either ;)
But for the sake of completeness, let's look at complexity. Assuming the number of mappings and the number of input values is rather large (otherwise it wouldn't really matter).
Solutions based on iterating over the map and searching using contains (as in your if-else construct):
Best case: O(1) (first branch in the if-else construct, first item in the list)
Worst case: O(n^2) (last branch in the if-else construct, last item in the list)
For the streaming solution with reduce, you have to iterate completely through the input list (O(n)), while the map lookup is O(1):
Best case: O(n)
Worst case: O(n)
Thanks to Hamlezz for the reduce idea and to Holger for pointing out that applying the mapper function directly to the stream does not yield the same result (as the first match wins, not the first entry in the if-else construct) and for the min(Comparator.naturalOrder()) option.
I have a Java lambda stream that parses a file and stores the results into a collection, based on some basic filtering.
I'm just learning lambdas so bear with me here if this is ridiculously bad. But please feel free to point out my mistakes.
For a given file:
#ignored
this
is
#ignored
working
fine
The code:
List<String> matches;
Stream<String> g = Files.lines(Paths.get(givenFile));
matches = g.filter(line -> !line.startsWith("#"))
.collect(Collectors.toList());
["this", "is", "working", "fine"]
Now, how would I go about collecting the ignored lines into a second list within this same stream? Something like:
List<String> matches;
List<String> ignored; // to store lines that start with #
Stream<String> g = Files.lines(Paths.get(exclusionFile.toURI()));
matches = g.filter(line -> !line.startsWith("#"))
// how can I add a condition to throw these
// non-matching lines into the ignored collection?
.collect(Collectors.toList());
I realize it would be pretty trivial to open a new stream, alter the logic a bit, and .collect() the ignored lines easily enough. But I don't want to have to loop through this file twice if I can do it all in one stream.
Instead of two streams, you can use the partitioningBy collector:
List<String> strings = Arrays.asList("#ignored", "this", "is", "#ignored", "working", "fine");
Map<Boolean, List<String>> map = strings.stream().collect(Collectors.partitioningBy(s -> s.startsWith("#")));
System.out.println(map);
output
{false=[this, is, working, fine], true=[#ignored, #ignored]}
Here I used a Boolean key, but you can change it to a meaningful string or enum.
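For instance, a variant with an enum key might look like this (the LineKind name is made up for illustration):
enum LineKind { IGNORED, KEPT }

Map<LineKind, List<String>> byKind = strings.stream()
    .collect(Collectors.groupingBy(
        s -> s.startsWith("#") ? LineKind.IGNORED : LineKind.KEPT));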
EDIT
If the strings can start with some other special characters, you could use groupingBy:
List<String> strings = Arrays.asList("#ignored", "this", "is", "#ignored", "working", "fine", "!Someother", "*star");
Function<String, String> classifier = s -> {
if (s.matches("^[!##$%^&*]{1}.*")) {
return Character.toString(s.charAt(0));
} else {
return "others";
}
};
Map<String, List<String>> maps = strings.stream().collect(Collectors.groupingBy(classifier));
System.out.println(maps);
Output
{!=[!Someother], #=[#ignored, #ignored], *=[*star], others=[this, is, working, fine]}
You can also nest groupingBy and partitioningBy, as sketched below.
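A sketch of such nesting; the inner classifier (first character) is arbitrary, chosen only to show the shape:
Map<Boolean, Map<Character, List<String>>> nested = strings.stream()
    .collect(Collectors.partitioningBy(s -> s.startsWith("#"),
        Collectors.groupingBy(s -> s.charAt(0))));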
I think the closest you could come to a generic approach for this would be something like peek:
g.peek(line -> {
        if (line.startsWith("#")) {
            ignored.add(line); // side effect: collect the ignored lines
        }
    })
    .filter(line -> !line.startsWith("#"))
    .collect(Collectors.toList());
I mention it because, unlike with the partitioning Collector, you could, at least in theory, chain together however many peeks you want--but, as you can see, you have to duplicate logic, so it's not ideal.