I thought that all stream pipelines written using flatMap() could be converted to use mapMulti(). It looks like I was wrong when the flatMap() or mapMulti() returns/operates on an infinite stream.
Note: this is for educational purposes only.
When we map an element to an infinite stream inside a flatMap() followed by a limit(), the stream pipeline is lazy and evaluates only the required number of elements.
list.stream()
.flatMap(element -> Stream.generate(() -> 1))
.limit(3)
.forEach(System.out::println);
Output:
1
1
1
But when doing the same in a mapMulti(), I expected the pipeline to still be lazy, i.e. not to consume the infinite stream. However, when running this in an IDE (IntelliJ IDEA), it prints the three elements but then hangs (I guess waiting to consume further elements) and never comes out of the stream pipeline execution.
With a mapMulti(),
list.stream()
.mapMulti((element, consumer) -> {
Stream.generate(() -> 1)
.forEach(consumer);
})
.limit(3)
.forEach(System.out::println);
System.out.println("Done"); //Never gets here
Output:
1
1
1
But the last print (Done) doesn't get executed.
Is this the expected behaviour?
I couldn't find any warning or notes about infinite streams and mapMulti() in the Javadoc.
The advantage of mapMulti() is that it pushes the new elements that become part of the stream directly to a consumer, replacing the initial element (as opposed to flatMap(), which internally generates a new stream for each element). If you run a fully-fledged stream with a terminal operation inside the mapMulti() body, that terminal operation is executed right there. And you've created an infinite stream whose terminal operation can never finish (as @Lino has pointed out in the comments).
On the contrary, flatMap() expects a function producing a stream, i.e. the function only returns the stream, it does not process it.
Here's a quote from the API note that emphasizes the difference between the two operations:
API Note:
This method is similar to flatMap in that it applies a one-to-many
transformation to the elements of the stream and flattens the result
elements into a new stream. This method is preferable to flatMap in
the following circumstances:
When replacing each stream element with a small (possibly zero) number of elements. Using this method avoids the overhead of creating
a new Stream instance for every group of result elements, as required
by flatMap.
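A minimal sketch of a workaround, assuming the goal is just the first few elements: bound the inner generator before pushing to the consumer, so the mapper body can return and the pipeline can terminate (the class name and source list here are hypothetical):

```java
import java.util.List;
import java.util.stream.Stream;

public class MapMultiBounded {
    public static void main(String[] args) {
        List<String> list = List.of("a", "b", "c"); // hypothetical source list
        List<Integer> result = list.stream()
                .<Integer>mapMulti((element, consumer) ->
                        // bound the inner generator so the mapper body can return
                        Stream.generate(() -> 1).limit(3).forEach(consumer))
                .limit(3)
                .toList();
        System.out.println(result); // [1, 1, 1]
        System.out.println("Done"); // reached, because no mapper call runs forever
    }
}
```

Any extra elements pushed after the downstream limit(3) is satisfied are silently dropped, so the pipeline still produces exactly three elements.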
Related
I have a question on the intermediate stages sequential state - are the operations from a stage applied to all the input stream (items) or are all the stages / operations applied to each stream item?
I'm aware the question might not be easy to understand, so I'll give an example. On the following stream processing:
List<String> strings = Arrays.asList("Are Java streams intermediate stages sequential?".split(" "));
strings.stream()
.filter(word -> word.length() > 4)
.peek(word -> System.out.println("f: " + word))
.map(word -> word.length())
.peek(length -> System.out.println("m: " + length))
.forEach(length -> System.out.println("-> " + length + "\n"));
My expectation for this code is that it will output:
f: streams
f: intermediate
f: stages
f: sequential?
m: 7
m: 12
m: 6
m: 11
-> 7
-> 12
-> 6
-> 11
Instead, the output is:
f: streams
m: 7
-> 7
f: intermediate
m: 12
-> 12
f: stages
m: 6
-> 6
f: sequential?
m: 11
-> 11
Are the items just displayed for all the stages, due to the console output? Or are they also processed for all the stages, one at a time?
I can further detail the question, if it's not clear enough.
This behaviour enables optimisation of the code. If each intermediate operation were to process all elements of a stream before proceeding to the next intermediate operation then there would be no chance of optimisation.
So to answer your question, each element moves along the stream pipeline vertically one at a time (except for some stateful operations discussed later), therefore enabling optimisation where possible.
Explanation
Given the example you've provided, each element will move along the stream pipeline vertically one by one as there is no stateful operation included.
Another example: say you were looking for the first String whose length is greater than 4. Processing all the elements before providing the result would be unnecessary and time-consuming.
Consider this simple illustration:
List<String> stringsList = Arrays.asList("1","12","123","1234","12345","123456","1234567");
int result = stringsList.stream()
.filter(s -> s.length() > 4)
.mapToInt(Integer::valueOf)
.findFirst().orElse(0);
The filter intermediate operation above will not find all the elements whose length is greater than 4 and return a new stream of them. Rather, as soon as the first element whose length is greater than 4 is found, that element goes through to mapToInt, upon which findFirst says "I've found the first element" and execution stops there. Therefore the result will be 12345.
Behaviour of stateful and stateless intermediate operations
Note that when a stateful intermediate operation such as sorted is included in a stream pipeline, that specific operation will traverse the entire stream. If you think about it, this makes complete sense: in order to sort elements you'll need to see all the elements to determine which come first in the sort order.
The distinct intermediate operation is also a stateful operation; however, as @Holger has mentioned, unlike sorted it does not require traversing the entire stream, as each distinct element can be passed down the pipeline immediately and may fulfil a short-circuiting condition.
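A small sketch of that point (hypothetical class name): distinct over an infinite source still terminates, because each first-seen value flows downstream immediately and limit() can short-circuit the whole pipeline.

```java
import java.util.List;
import java.util.stream.Stream;

public class DistinctShortCircuit {
    public static void main(String[] args) {
        // Infinite stream 0, 1, 2, 0, 1, 2, ...; distinct passes each new value
        // downstream immediately, so limit(2) can short-circuit the pipeline.
        List<Integer> firstTwo = Stream.iterate(0, i -> i + 1)
                .map(i -> i % 3)
                .distinct()
                .limit(2)
                .toList();
        System.out.println(firstTwo); // [0, 1] -- terminates despite the infinite source
    }
}
```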
Stateless intermediate operations such as filter and map do not have to traverse the entire stream and can freely process one element at a time vertically, as mentioned above.
Last but not least, it's also important to note that when the terminal operation is a short-circuiting operation, it can finish before traversing all the elements of the underlying stream.
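For example (a hedged sketch; class name is hypothetical), a short-circuiting terminal operation like anyMatch() can finish even on an infinite stream:

```java
import java.util.stream.Stream;

public class ShortCircuitTerminal {
    public static void main(String[] args) {
        // anyMatch() is a short-circuiting terminal operation: it stops pulling
        // elements from the infinite source as soon as the predicate is satisfied.
        boolean found = Stream.iterate(1, i -> i + 1)
                .anyMatch(i -> i * i > 50); // satisfied at i = 8 (64 > 50)
        System.out.println(found); // true
    }
}
```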
Further reading: Java 8 stream tutorial
Your answer is loop fusion. What we see is that the four intermediate operations filter() – peek() – map() – peek(), together with the terminal operation forEach() (using println), have been logically joined together to constitute a single pass. They are executed in order for each individual element. This joining together of operations into a single pass is an optimization technique known as loop fusion.
Further reading: Source
An intermediate operation is always lazily executed. That is to say
they are not run until the point a terminal operation is reached.
A few of the most popular intermediate operations used in a stream
filter – the filter operation returns a stream of elements that
satisfy the predicate passed in as a parameter to the operation. The
elements themselves before and after the filter will have the same
type, however the number of elements will likely change
map – the map operation returns a stream of elements after they have
been processed by the function passed in as a parameter. The
elements before and after the mapping may have a different type, but
there will be the same total number of elements.
distinct – the distinct operation is a special case of the filter
operation. Distinct returns a stream of elements such that each
element is unique in the stream, based on the equals method of the
elements
(source: Java 8 Streams cheat sheet)
Apart from optimisation, the order of processing you'd describe wouldn't work for streams of indeterminate length, like this:
DoubleStream.generate(Math::random).filter(d -> d > 0.9).findFirst();
Admittedly this example doesn't make much sense in practice, but the point is that rather than being backed by a fixed-size collection, DoubleStream.generate() creates a potentially infinite stream. The only way to process this is element by element.
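A deterministic variant of the same idea, as a hedged sketch (class name is hypothetical): findFirst() stops pulling from the infinite source as soon as one element survives the filter.

```java
import java.util.stream.IntStream;

public class InfiniteFindFirst {
    public static void main(String[] args) {
        // The only way to process an infinite source is element by element;
        // findFirst() stops the pull as soon as one element passes the filter.
        int first = IntStream.iterate(1, i -> i + 1)
                .filter(i -> i % 7 == 0)
                .findFirst()
                .orElseThrow();
        System.out.println(first); // 7
    }
}
```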
I have run the following code in Eclipse:
Stream.generate(() -> "Elsa")
.filter(n -> n.length() ==4)
.sorted()
.limit(2)
.forEach(System.out::println);
The output is:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
What I was expecting since the limit is two:
Elsa
Elsa
Can someone please explain why this is an infinite stream?
The first thing is that Stream::generate creates an infinite stream. That's why the stream is initially infinite.
You limit the stream to two elements by using Stream::limit, which would make it finite.
However, the problem is that you call sorted(), which tries to consume the whole stream. You need to limit the stream before you sort:
Stream.generate(() -> "Elsa")
.filter(n -> n.length() == 4)
.limit(2)
.sorted()
.forEach(System.out::println);
The documentation says that Stream::sorted() "is a stateful intermediate operation". The Streams documentation about a stateful intermediate operation explains it very well:
Stateful operations may need to process the entire input before producing a result. For example, one cannot produce any results from sorting a stream until one has seen all elements of the stream.
Emphasis mine.
There it is. Also note that for all Stream operations, their operation type is mentioned in the Javadocs.
Can someone please explain why this is an infinite stream?
Because the javadoc says that is precisely what Stream.generate() creates:
Returns an infinite sequential unordered stream where each element is generated by the provided Supplier
Then when you combine that with sorted(), you tell it to start a sort on an infinite sequence, which will obviously cause the JVM to run out of memory.
In the OCP studybook there is a line of code I don't entirely understand. It goes like this:
Stream<String> infinite = Stream.generate(() -> "chimp");
Does this create an infinite stream with just one element called chimp, or does it infinitely generate "chimp" strings? Thank you.
Stream java.util.stream.Stream.generate(Supplier s)
Returns an infinite sequential unordered stream where each element is generated by the provided Supplier. This is suitable for generating constant streams, streams of random elements, etc.
It will create an infinite Stream, which means a Stream with an infinite number of elements. All the elements will be the same String instance, since the literal "chimp" always resolves to the same interned instance from the string pool.
If you change it to
Stream<String> infinite = Stream.generate(() -> new String("chimp"));
each String element of this Stream will be a unique instance.
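A quick sketch checking both identity claims above (class name is hypothetical; this relies on string-literal interning):

```java
import java.util.List;
import java.util.stream.Stream;

public class ChimpIdentity {
    public static void main(String[] args) {
        List<String> interned = Stream.generate(() -> "chimp").limit(2).toList();
        // The literal "chimp" is interned, so every call yields the same instance.
        System.out.println(interned.get(0) == interned.get(1)); // true

        List<String> fresh = Stream.generate(() -> new String("chimp")).limit(2).toList();
        // new String(...) allocates a distinct object on every call.
        System.out.println(fresh.get(0) == fresh.get(1));       // false
        System.out.println(fresh.get(0).equals(fresh.get(1)));  // true -- same contents
    }
}
```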
Nothing will happen unless there is a terminal operation in the stream pipeline; see the "Stream operations and pipelines" section of the javadoc. For example, this code:
infinite.forEach(System.out::println); // chimp ....
will print "chimp" an infinite number of times.
However, this line will print only one String:
infinite.limit(1).forEach(System.out::println); // chimp
On the other hand,
Stream.generate(() -> "chimp");
has no effect, since there is no terminal operation in the stream's pipeline.
Intermediate operations return a new stream. They are always lazy;
executing an intermediate operation such as filter() does not actually
perform any filtering, but instead creates a new stream that, when
traversed, contains the elements of the initial stream that match the
given predicate. Traversal of the pipeline source does not begin until
the terminal operation of the pipeline is executed.
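A small sketch illustrating that quote (class name is hypothetical; the side-effect counter just makes the laziness observable):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyIntermediate {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        Stream<String> s = Stream.of("a", "bb", "ccc")
                .filter(x -> {
                    calls.incrementAndGet(); // side effect to observe evaluation
                    return x.length() > 1;
                });
        System.out.println(calls.get()); // 0 -- building the pipeline ran nothing
        long n = s.count();              // traversal happens here (filter clears SIZED)
        System.out.println(calls.get()); // 3 -- predicate ran once per element
        System.out.println(n);           // 2
    }
}
```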
Why this code in java 8:
IntStream.range(0, 10)
.peek(System.out::print)
.limit(3)
.count();
outputs:
012
I'd expect it to output 0123456789, because peek precedes limit.
It seems to me even more peculiar because of the fact that this:
IntStream.range(0, 10)
.peek(System.out::print)
.map(x -> x * 2)
.count();
outputs 0123456789 as expected (not 024681012141618).
P.S.: .count() here is used just to consume stream, it can be replaced with anything else
The most important thing to know about streams is that they do not contain elements themselves (like collections) but work like a pipe whose values are lazily evaluated. That means that the statements that build up a stream - including mapping, filtering, or whatever - are not evaluated until the terminal operation runs.
In your first example, the stream counts from 0 to 9, doing the following for each value:
print out the value
check whether 3 values are passed (if yes, terminate)
So you really get the output 012.
In your second example, the stream again counts from 0 to 9, doing the following for each value:
print out the value
mapping x to x*2, thus forwarding double the value to the next step
As you can see the output comes before the mapping and thus you get the result 0123456789. Try to switch the peek and the map calls. Then you will get your expected output.
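A sketch of that switch (class name is hypothetical). Note it uses sum() rather than count() so that traversal is guaranteed: on Java 9+, count() may skip a size-preserving pipeline entirely.

```java
import java.util.stream.IntStream;

public class PeekAfterMap {
    public static void main(String[] args) {
        // peek() now sits after map(), so it observes the doubled values.
        int sum = IntStream.range(0, 10)
                .map(x -> x * 2)
                .peek(System.out::print) // prints 024681012141618
                .sum();
        System.out.println();
        System.out.println(sum); // 90
    }
}
```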
From the docs:
limit() is a short-circuiting stateful intermediate operation.
map() is an intermediate operation
Again from the docs, what that essentially means is that limit() will return a stream with at most x values from the stream it received.
An intermediate operation is short-circuiting if, when presented with infinite input, it may produce a finite stream as a result.
Streams are defined to do lazy processing. So in order to complete your count() operation it doesn’t need to look at the other items. Otherwise, it would be broken, as limit(…) is defined to be a proper way of processing infinite streams in a finite time (by not processing more than limit items).
In principle, it would be possible to complete your request without ever looking at the int values at all, as the operation chain limit(3).count() doesn’t need any processing of the previous operations (other than verifying whether the stream has at least 3 items).
Streams use lazy evaluation, the intermediate operations, i.e. peek() are not executed till the terminal operation runs.
For instance, the following code will just print 1. In fact, as soon as the first element of the stream, 1, reaches the terminal operation findAny(), the stream execution ends.
Arrays.asList(1,2,3)
.stream()
.peek(System.out::print)
.filter((n)->n<3)
.findAny();
Conversely, in the following example 123 will be printed. In fact, the terminal operation noneMatch() needs to evaluate all the elements of the stream in order to make sure there is no match with its predicate n > 4:
Arrays.asList(1, 2, 3)
.stream()
.peek(System.out::print)
.noneMatch(n -> n > 4);
For future readers struggling to understand how the count method doesn't execute the peek method before it, I thought I add this additional note:
As per Java 9, the Java documentation for the count method states that:
An implementation may choose to not execute the stream pipeline
(either sequentially or in parallel) if it is capable of computing the
count directly from the stream source.
This means terminating the stream with count is no longer enough to ensure the execution of all previous steps, such as peek.
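A sketch of that behaviour, assuming Java 9+ semantics (class name is hypothetical; the counter just makes the skipped peek observable):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class CountSkipsPeek {
    public static void main(String[] args) {
        AtomicInteger peeks = new AtomicInteger();
        long n = Stream.of("a", "b", "c")
                .peek(s -> peeks.incrementAndGet())
                .count();
        // On Java 9+ the count is computed from the sized source; peek never runs.
        System.out.println(n + " " + peeks.get()); // 3 0

        peeks.set(0);
        long m = Stream.of("a", "b", "c")
                .peek(s -> peeks.incrementAndGet())
                .filter(s -> true) // filter may change the size, forcing traversal
                .count();
        System.out.println(m + " " + peeks.get()); // 3 3
    }
}
```

If you need the side effect to run, terminate with something that must traverse, such as forEach(), or include a size-changing operation like filter() in the pipeline.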