I am looking for an operation on a Stream that enables me to perform a non-terminal (and/or terminal) operation every nth item. Although I use a stream of primes for example, the stream could just as easily be web-requests, user actions, or some other cold data or live feed being produced.
From this:
Duration start = Duration.ofNanos(System.nanoTime());
IntStream.iterate(2, n -> n + 1)
.filter(Findprimes::isPrime)
.limit(1_000_000 * 10)
.forEach(System.out::println);
System.out.println("Duration: " + Duration.ofNanos(System.nanoTime()).minus(start));
To a stream function like this:
IntStream.iterate(2, n -> n + 1)
.filter(Findprimes::isPrime)
.limit(1_000_000 * 10)
.peekEvery(10, System.out::println)
.forEach( it -> {});
Create a helper method to wrap the peek() consumer:
public static IntConsumer every(int count, IntConsumer consumer) {
    if (count <= 0)
        throw new IllegalArgumentException("Count must be > 0: Got " + count);
    return new IntConsumer() {
        private int i;

        @Override
        public void accept(int value) {
            if (++this.i == count) {
                consumer.accept(value);
                this.i = 0;
            }
        }
    };
}
You can now use it almost exactly like you wanted:
IntStream.rangeClosed(1, 20)
.peek(every(5, System.out::println))
.count();
Output
5
10
15
20
The helper method can be put in a utility class and statically imported, similar to how the Collectors class is nothing but static helper methods.
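For illustration, such a utility class might look roughly like this (a sketch; the class name and packaging are placeholders, not part of the original answer):
import java.util.function.IntConsumer;

// Hypothetical utility class; static helpers only, like java.util.stream.Collectors
public final class StreamConsumers {
    private StreamConsumers() {}

    public static IntConsumer every(int count, IntConsumer consumer) {
        if (count <= 0)
            throw new IllegalArgumentException("Count must be > 0: Got " + count);
        return new IntConsumer() {
            private int i;

            @Override
            public void accept(int value) {
                if (++this.i == count) {
                    consumer.accept(value);
                    this.i = 0;
                }
            }
        };
    }
}
With a static import of StreamConsumers.every, the call site stays exactly as shown above.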
As noted by @user140547 in a comment, this code is not thread-safe, so it cannot be used with parallel streams. Besides, the output order would be messed up, so it doesn't really make sense to use it with parallel streams anyway.
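If you nevertheless wanted a counter that does not lose updates under parallel execution, one option (a sketch, not part of the original answer) is to back it with an AtomicInteger; which elements are forwarded would still depend on the threads' encounter order:
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;

public static IntConsumer everyConcurrent(int count, IntConsumer consumer) {
    if (count <= 0)
        throw new IllegalArgumentException("Count must be > 0: Got " + count);
    AtomicInteger i = new AtomicInteger();
    // incrementAndGet is atomic, so concurrent accept() calls cannot lose updates;
    // the consumer fires on every count-th call, whatever element that happens to be
    return value -> {
        if (i.incrementAndGet() % count == 0)
            consumer.accept(value);
    };
}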
It is not a good idea to rely on peek() and count(): the peek action might not be invoked at all if count() can be computed without traversing the whole stream. Even if it works now, that does not mean it will keep working in the future; see the Javadoc of Stream.count() in Java 9.
Better to use forEach().
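For instance, the earlier example could let forEach drive the stream and pass the wrapped consumer directly (a sketch reusing the every helper from the answer above):
IntStream.rangeClosed(1, 20)
         .forEach(every(5, System.out::println)); // the consumer is guaranteed to run for every element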
For the problem itself: in special cases like a simple iteration, you could just filter your objects, like this:
Stream.iterate(2, n->n+1)
.limit(20)
.filter(n->(n-2)%5==0 && n!=2)
.forEach(System.out::println);
This of course won't work for other cases, where you might use a stateful IntConsumer. If iterate() is used, it is probably not that useful to use parallel streams anyway.
If you want a generic solution, you could also try to use a "normal" Stream, which may not be as efficient as an IntStream, but should still suffice in many cases:
class Tuple{ // ctor, getter/setter omitted
int index;
int value;
}
Then you could do:
Stream.iterate( new Tuple(1,2),t-> new Tuple(t.index+1,t.value*2))
.limit(30)
.filter(t->t.index %5 == 0)
.forEach(System.out::println);
If you have to use peek(), you can also do
.peek(t->{if (t.index %5 == 0) System.out.println(t);})
Or if you add methods
static Tuple initialTuple(int value){
return new Tuple(1,value);
}
static UnaryOperator<Tuple> createNextTuple(IntUnaryOperator f){
return current -> new Tuple(current.index+1,f.applyAsInt(current.value));
}
static Consumer<Tuple> every(int n,IntConsumer consumer){
return tuple -> {if (tuple.index % n == 0) consumer.accept(tuple.value);};
}
you can also do (with static imports):
Stream.iterate( initialTuple(2), createNextTuple(x->x*2))
.limit(30)
.peek(every(5,System.out::println))
.forEach(System.out::println);
Try this.
int[] counter = {0};
long result = IntStream.iterate(2, n -> n + 1)
.filter(Findprimes::isPrime)
.limit(100)
.peek(x -> { if (counter[0]++ % 10 == 0) System.out.print(x + " ");} )
.count();
result:
2 31 73 127 179 233 283 353 419 467
Related
I was looking for the right answer, but found nothing that fits my purpose.
I have a simple for loop like this:
String test = "hi";
for(Something something : somethingList) {
if(something.getSomething() != null) {
test = cleaner.clean(test, something.getSomething());
} else if(something.getOther() != null) {
test = StaticClass.clean(test, something.getOther());
}
}
and I never understood whether the same result can be achieved using Java streams. With reduce, maybe? I need to pass the result of the previous iteration (saved in the "test" variable) to the next iteration (see the clean method, where I pass test). How can I do that?
If you want to do something for each element in a list (as with a for-each loop), I would suggest using the forEach or forEachOrdered methods; these correspond to your for(Object o : objects). You can easily define your own Consumer class, which handles all the state for you:
class CustomConsumer implements Consumer<Integer> {
private Integer previous;
public CustomConsumer(Integer initialValue) {
previous = initialValue;
}
@Override
public void accept(Integer current) {
// do stuff with your current / previous object :)
System.out.println("previous: " + previous);
previous = current;
}
}
List<Integer> values = getValues();
values.stream()
.forEachOrdered(new CustomConsumer(-1));
This example uses Integer as the element type; if you want to use your own class, just replace Integer. You can even use generics:
class CustomConsumer<T> implements Consumer<T> {
private T previous;
public CustomConsumer(T initialValue) {
previous = initialValue;
}
@Override
public void accept(T current) {
// do stuff with your current / previous object :)
System.out.println("previous: " + previous);
previous = current;
}
}
List<Integer> values = new ArrayList<>();
for(int i = 0; i < 6; i++)
values.add(i);
values.stream()
.forEachOrdered(new CustomConsumer<>("hello"));
Output:
previous: hello
previous: 0
previous: 1
previous: 2
previous: 3
previous: 4
If you want to learn more about streams, the Oracle docs provide some good material.
To expand on my comment, with streams you basically could use reduction, e.g. by using the reduce() method.
Example:
//some list simulating your somethingList
List<Integer> list = List.of(2,4,6,1,3,5);
String result = list.stream()
//make sure the stream is sequential to keep processing order
.sequential()
//start reduction with an initial value
.reduce("initial",
//in the accumulator you get the previous reduction result and the current element
(test, element) -> {
//simulates your conditions, just adding the new element for demonstration purposes
// test could also be replaced
if( element % 2 == 0 ) {
test += ", even:" + element;
} else {
test += ", odd: " + element;
}
//return the new reduction result
return test;
},
//the combiner is not used in sequential streams, so just return one of the elements
(l, r) -> l);
This would result in:
initial, even:2, even:4, even:6, odd: 1, odd: 3, odd: 5
Note, however, that streams are not a silver bullet and sometimes a simple loop like your initial code is just fine or even better. This seems to be such a case.
Using Reactor, I'm trying to validate the beginning of a cold Flux stream and then become a pass-through.
For example, say I need to validate the first N elements. If (and only if) it passes, these and further elements are forwarded. If it fails, only an error is emitted.
This is what I have so far. It works, but is there a better or more correct way to do this? I was tempted to implement my own operator, but I'm told it's complicated and not recommended.
flux
.bufferUntil(new Predicate<>() {
private int count = 0;
@Override
public boolean test(T next) {
return ++count >= N;
}
})
// Zip with index to know the first element
.zipWith(Flux.<Integer, Integer>generate(() -> 0, (cur, s) -> {
s.next(cur);
return cur + 1;
}))
.map(t -> {
if (t.getT2() == 0 && !validate(t.getT1()))
throw new RuntimeException("Invalid");
return t.getT1();
})
// Flatten buffered elements
.flatMapIterable(identity())
I could have used doOnNext instead of the second map since it doesn't map anything, but I'm not sure it's an acceptable use of the peek methods.
I could also have used a stateful mapper in the second map to run only once instead of zipping with index; I guess that's acceptable since I'm already using a stateful predicate...
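For reference, the doOnNext variant mentioned above might look like this (a sketch; validate, N, and flux are the same placeholders as in the snippet above, and Tuple2 is reactor.util.function.Tuple2):
flux
    .bufferUntil(new Predicate<>() {
        private int count = 0;
        @Override
        public boolean test(T next) {
            return ++count >= N;
        }
    })
    // Zip with index to know which buffer is the first one
    .zipWith(Flux.<Integer, Integer>generate(() -> 0, (cur, s) -> {
        s.next(cur);
        return cur + 1;
    }))
    // validation as a pure side effect; a thrown exception becomes an onError signal
    .doOnNext(t -> {
        if (t.getT2() == 0 && !validate(t.getT1()))
            throw new RuntimeException("Invalid");
    })
    // unwrap the tuple, then flatten the buffered elements
    .map(Tuple2::getT1)
    .flatMapIterable(identity())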
Your requirement sounds interesting! We have switchOnFirst, which could be useful for validating the first element. But if you have N elements to validate, we can try something like this.
Here I assume that I have to validate the first 5 elements, which should all be <= 5; then it is a valid stream. Otherwise we simply throw an error saying validation failed.
Flux<Integer> integerFlux = Flux.range(1, 10).delayElements(Duration.ofSeconds(1));
integerFlux
.buffer(5)
.switchOnFirst((signal, flux) -> {
//first 5 elements are <= 5, then it is a valid stream
return signal.get().stream().allMatch(i -> i <= 5) ? flux : Flux.error(new RuntimeException("validation failed"));
})
.flatMapIterable(Function.identity())
.subscribe(System.out::println,
System.out::println);
However, this approach is not great, as it keeps collecting 5 elements at a time even after the first validation is done, which we might not want.
To avoid buffering N elements after the validation, we can use bufferUntil. Once we have collected and validated the first N elements, it passes each element to the downstream as soon as it is received.
AtomicInteger atomicInteger = new AtomicInteger(1);
integerFlux
.bufferUntil(i -> {
if(atomicInteger.get() < 5){
atomicInteger.incrementAndGet();
return false;
}
return true;
})
.switchOnFirst((signal, flux) -> {
return signal.get().stream().allMatch(i -> i <= 5) ? flux : Flux.error(new RuntimeException("validation failed"));
})
.flatMapIterable(Function.identity())
.subscribe(System.out::println,
System.out::println);
I am trying to learn Java 8. Is there a way to turn the method below into Java 8 streams, filter, and forEach? If so, how?
String[] couponList = coupons.split(",");
for(String coupon:couponList) {
singleCouponUsageCount = getSingleCouponUsageCount(coupon);
if(singleCouponUsageCount >= totalUsageCount)
return 0;
}
return 1;
//
for(String coupon:couponList) {
singleCouponUsageCount = getSingleCouponUsageCount(coupon);
if(singleCouponUsageCount >= totalUsageCount)
return singleCouponUsageCount;
}
return singleCouponUsageCount;
You can stream over the elements of the array, map them to their usage counts, and use anyMatch to determine whether any of the usage counts meets the criterion that should result in returning 0:
return Arrays.stream(coupons.split(","))
.map(coupon -> getSingleCouponUsageCount(coupon))
.anyMatch(count -> count >= totalUsageCount) ? 0 : 1;
EDIT:
For your second snippet, if you want to return the first count that matches the condition, you can write:
return Arrays.stream(coupons.split(","))
.map(coupon -> getSingleCouponUsageCount(coupon))
.filter(count -> count >= totalUsageCount)
.findFirst()
.orElse(someDefaultValue);
Usually, you want a search operation to be short-circuiting, in other words, to return immediately when a match has been found. But unlike operations like collect, the short-circuiting operations of the Stream API can’t be customized easily.
For your specific operation, you can split the operation into two, which can still be formulated as a single expression:
String[] couponList = coupons.split(",");
return Arrays.stream(couponList, 0, couponList.length-1)
.map(coupon -> getSingleCouponUsageCount(coupon))
.filter(singleCouponUsageCount -> singleCouponUsageCount >= totalUsageCount)
.findFirst()
.orElseGet(() -> getSingleCouponUsageCount(couponList[couponList.length-1]));
This does a short-circuiting search over all but the last element, returning immediately when a match has been found. Only if no match has been found there will the last element be processed and its result returned unconditionally.
You can do it like this
return Stream.of(coupons.split(","))
.anyMatch(coupon -> getSingleCouponUsageCount(coupon) >= totalUsageCount) ? 0 : 1;
Yes, you can do it with streams:
List<Integer> results = Arrays.stream(couponList)
    .map(coupon -> getSingleCouponUsageCount(coupon))
    .map(count -> count >= totalUsageCount ? 0 : 1)
    .collect(Collectors.toList());
You can also use Pattern.splitAsStream() for this, which directly returns a Stream:
// as a constant
private static final Pattern COMMA = Pattern.compile(",");
// somewhere else in a method
boolean found = COMMA.splitAsStream(coupons)
// effectively the same as coupon -> getSingleCouponUsageCount(coupon)
.map(this::getSingleCouponUsageCount)
.anyMatch(count -> count >= totalUsageCount);
return found ? 0 : 1;
Given the code that you've shared, an important utility for you would be to create a lookup map for coupon usage.
Map<String, Long> couponUsageCount(String[] couponList) {
return Arrays.stream(couponList)
.collect(Collectors.toMap(Function.identity(),
coupon -> getSingleCouponUsageCount(coupon)));
}
Further, this makes it easy to incorporate into the other two implementations.
// note: boolean instead of 0 and 1
boolean countUsageExceeds(String[] couponList, Long totalUsageCount) {
return couponUsageCount(couponList).values()
.stream()
.anyMatch(usage -> usage >= totalUsageCount);
}
// better to use Optional since you might not find any such value
// same as above method returning false
Optional<Long> exceededValue(String[] couponList, Long totalUsageCount) {
Map<String, Long> couponUsageCount = couponUsageCount(couponList);
// use this with orElse if you want to return an absolute value from this method
long lastValue = couponUsageCount.get(couponList[couponList.length - 1]);
return couponUsageCount.values()
.stream()
.filter(usage -> usage >= totalUsageCount)
.findFirst();
}
Java 8 introduced a Stream class that resembles Scala's Stream, a powerful lazy construct using which it is possible to do something like this very concisely:
def from(n: Int): Stream[Int] = n #:: from(n+1)
def sieve(s: Stream[Int]): Stream[Int] = {
s.head #:: sieve(s.tail filter (_ % s.head != 0))
}
val primes = sieve(from(2))
primes takeWhile(_ < 1000) print // prints all primes less than 1000
I wondered if it is possible to do this in Java 8, so I wrote something like this:
IntStream from(int n) {
return IntStream.iterate(n, m -> m + 1);
}
IntStream sieve(IntStream s) {
int head = s.findFirst().getAsInt();
return IntStream.concat(IntStream.of(head), sieve(s.skip(1).filter(n -> n % head != 0)));
}
IntStream primes = sieve(from(2));
Fairly simple, but it produces java.lang.IllegalStateException: stream has already been operated upon or closed because both findFirst() and skip() are terminal operations on Stream which can be done only once.
I don't really have to use up the stream twice since all I need is the first number in the stream and the rest as another stream, i.e. equivalent of Scala's Stream.head and Stream.tail. Is there a method in Java 8 Stream that I can use to achieve this?
Thanks.
Even if you didn't have the problem that you can't split an IntStream, your code wouldn't work, because you are invoking your sieve method recursively instead of lazily. So you get infinite recursion before you can query your resulting stream for the first value.
Splitting an IntStream s into a head and a tail IntStream (which has not yet been consumed) is possible:
PrimitiveIterator.OfInt it = s.iterator();
int head = it.nextInt();
IntStream tail = IntStream.generate(it::next).filter(i -> i % head != 0);
At this point you need a construct for invoking sieve on the tail lazily. Stream does not provide that: concat expects existing stream instances as arguments, and you can't construct a stream that invokes sieve lazily with a lambda expression, as lazy creation works with mutable state only, which lambda expressions do not support. If you don't have a library implementation hiding the mutable state, you have to use a mutable object. But once you accept the requirement of mutable state, the solution can be even simpler than your first approach:
IntStream primes = from(2).filter(i -> p.test(i)).peek(i -> p = p.and(v -> v % i != 0));
IntPredicate p = x -> true;
IntStream from(int n)
{
return IntStream.iterate(n, m -> m + 1);
}
This will recursively create a filter, but in the end it doesn't matter whether you create a tree of IntPredicates or a tree of IntStreams (like with your IntStream.concat approach, if it did work). If you don't like the mutable instance field for the filter, you can hide it in an inner class (but not in a lambda expression…).
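For illustration, a sketch (not from the original answer) of hiding the mutable predicate in a local class rather than an instance field; names are placeholders and the method is assumed to live inside some enclosing class:
import java.util.function.IntPredicate;
import java.util.stream.IntStream;

static IntStream primes() {
    // the local class keeps the mutable predicate out of the enclosing class
    class Sieve {
        IntPredicate isPrime = x -> true;

        IntStream stream() {
            return IntStream.iterate(2, n -> n + 1)
                            .filter(i -> isPrime.test(i))
                            .peek(i -> isPrime = isPrime.and(v -> v % i != 0));
        }
    }
    return new Sieve().stream();
}
A call such as primes().limit(20).forEach(System.out::println) would then print the first 20 primes without exposing the mutable predicate.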
My StreamEx library now has a headTail() operation which solves the problem:
public static StreamEx<Integer> sieve(StreamEx<Integer> input) {
return input.headTail((head, tail) ->
sieve(tail.filter(n -> n % head != 0)).prepend(head));
}
The headTail method takes a BiFunction which will be executed at most once, during the execution of the stream's terminal operation. So this implementation is lazy: it does not compute anything until traversal starts, and it computes only as many prime numbers as requested. The BiFunction receives the first stream element head and a stream of the remaining elements tail, and can modify the tail in any way it wants. You may use it with a predefined input:
sieve(IntStreamEx.range(2, 1000).boxed()).forEach(System.out::println);
But infinite streams work as well:
sieve(StreamEx.iterate(2, x -> x+1)).takeWhile(x -> x < 1000)
.forEach(System.out::println);
// Not the primes up to 1000, but the first 1000 primes
sieve(StreamEx.iterate(2, x -> x+1)).limit(1000).forEach(System.out::println);
There's also an alternative solution using headTail and predicate concatenation:
public static StreamEx<Integer> sieve(StreamEx<Integer> input, IntPredicate isPrime) {
return input.headTail((head, tail) -> isPrime.test(head)
? sieve(tail, isPrime.and(n -> n % head != 0)).prepend(head)
: sieve(tail, isPrime));
}
sieve(StreamEx.iterate(2, x -> x+1), i -> true).limit(1000).forEach(System.out::println);
It is interesting to compare the recursive solutions: how many primes are they capable of generating?
@John McClean solution (StreamUtils)
John McClean's solutions are not lazy: you cannot feed them an infinite stream. So I just found by trial and error the maximal allowed upper bound (17793); above that, a StackOverflowError occurs:
public void sieveTest(){
sieve(IntStream.range(2, 17793).boxed()).forEach(System.out::println);
}
@John McClean solution (Streamable)
public void sieveTest2(){
sieve(Streamable.range(2, 39990)).forEach(System.out::println);
}
Increasing the upper limit above 39990 results in a StackOverflowError.
@frhack solution (LazySeq)
LazySeq<Integer> ints = integers(2);
LazySeq primes = sieve(ints); // sieve method from @frhack answer
primes.forEach(p -> System.out.println(p));
Result: stuck after prime number = 53327, with enormous heap allocation and garbage collection taking more than 90% of the time. It took several minutes to advance from 53323 to 53327, so waiting longer seemed impractical.
@vidi solution
Prime.stream().forEach(System.out::println);
Result: StackOverflowError after prime number = 134417.
My solution (StreamEx)
sieve(StreamEx.iterate(2, x -> x+1)).forEach(System.out::println);
Result: StackOverflowError after prime number = 236167.
@frhack solution (rxjava)
Observable<Integer> primes = Observable.from(()->primesStream.iterator());
primes.forEach((x) -> System.out.println(x.toString()));
Result: StackOverflowError after prime number = 367663.
@Holger solution
IntStream primes=from(2).filter(i->p.test(i)).peek(i->p=p.and(v->v%i!=0));
primes.forEach(System.out::println);
Result: StackOverflowError after prime number = 368089.
My solution (StreamEx with predicate concatenation)
sieve(StreamEx.iterate(2, x -> x+1), i -> true).forEach(System.out::println);
Result: StackOverflowError after prime number = 368287.
So the three solutions involving predicate concatenation win, because each new condition adds only 2 more stack frames. I think the difference between them is marginal and should not be used to declare a winner. However, I like my first StreamEx solution more, as it is more similar to the Scala code.
The solution below does not do state mutations, except for the head/tail deconstruction of the stream.
The laziness is obtained using IntStream.iterate. The class Prime is used to keep the generator state.
import java.util.PrimitiveIterator;
import java.util.stream.IntStream;
import java.util.stream.Stream;
public class Prime {
private final IntStream candidates;
private final int current;
private Prime(int current, IntStream candidates)
{
this.current = current;
this.candidates = candidates;
}
private Prime next()
{
PrimitiveIterator.OfInt it = candidates.filter(n -> n % current != 0).iterator();
int head = it.next();
IntStream tail = IntStream.generate(it::next);
return new Prime(head, tail);
}
public static Stream<Integer> stream() {
IntStream possiblePrimes = IntStream.iterate(3, i -> i + 1);
return Stream.iterate(new Prime(2, possiblePrimes), Prime::next)
.map(p -> p.current);
}
}
The usage would be this:
Stream<Integer> first10Primes = Prime.stream().limit(10);
You can essentially implement it like this:
static <T> Tuple2<Optional<T>, Seq<T>> splitAtHead(Stream<T> stream) {
Iterator<T> it = stream.iterator();
return tuple(it.hasNext() ? Optional.of(it.next()) : Optional.empty(), seq(it));
}
In the above example, Tuple2 and Seq are types borrowed from jOOλ, a library that we developed for jOOQ integration tests. If you don't want any additional dependencies, you might as well implement them yourself:
class Tuple2<T1, T2> {
final T1 v1;
final T2 v2;
Tuple2(T1 v1, T2 v2) {
this.v1 = v1;
this.v2 = v2;
}
static <T1, T2> Tuple2<T1, T2> tuple(T1 v1, T2 v2) {
return new Tuple2<>(v1, v2);
}
}
static <T> Tuple2<Optional<T>, Stream<T>> splitAtHead(Stream<T> stream) {
Iterator<T> it = stream.iterator();
return tuple(
it.hasNext() ? Optional.of(it.next()) : Optional.empty(),
StreamSupport.stream(Spliterators.spliteratorUnknownSize(
it, Spliterator.ORDERED
), false)
);
}
If you don't mind using a 3rd-party library, cyclops-streams, a library I wrote, has a number of potential solutions.
The StreamUtils class has a large number of static methods for working directly with java.util.stream.Stream, including headAndTail.
HeadAndTail<Integer> headAndTail = StreamUtils.headAndTail(Stream.of(1,2,3,4));
int head = headAndTail.head(); //1
Stream<Integer> tail = headAndTail.tail(); //Stream[2,3,4]
The Streamable class represents a replayable Stream and works by building a lazy, caching intermediate data structure. Because it is caching and replayable, head and tail can be implemented directly and separately.
Streamable<Integer> replayable= Streamable.fromStream(Stream.of(1,2,3,4));
int head = replayable.head(); //1
Stream<Integer> tail = replayable.tail(); //Stream[2,3,4]
cyclops-streams also provides a sequential Stream extension that in turn extends jOOλ and has both Tuple based (from jOOλ) and domain object (HeadAndTail) solutions for head and tail extraction.
SequenceM.of(1,2,3,4)
.splitAtHead(); //Tuple[1,SequenceM[2,3,4]]
SequenceM.of(1,2,3,4)
.headAndTail();
Update per Tagir's request -> A Java version of the Scala sieve using SequenceM
public void sieveTest(){
sieve(SequenceM.range(2, 1_000)).forEach(System.out::println);
}
SequenceM<Integer> sieve(SequenceM<Integer> s){
return s.headAndTailOptional().map(ht ->SequenceM.of(ht.head())
.appendStream(sieve(ht.tail().filter(n -> n % ht.head() != 0))))
.orElse(SequenceM.of());
}
And another version via Streamable
public void sieveTest2(){
sieve(Streamable.range(2, 1_000)).forEach(System.out::println);
}
Streamable<Integer> sieve(Streamable<Integer> s){
return s.size()==0? Streamable.of() : Streamable.of(s.head())
.appendStreamable(sieve(s.tail()
.filter(n -> n % s.head() != 0)));
}
Note: neither Streamable nor SequenceM has an Empty implementation, hence the size check for Streamable and the use of headAndTailOptional.
Finally a version using plain java.util.stream.Stream
import static com.aol.cyclops.streams.StreamUtils.headAndTailOptional;
public void sieveTest(){
sieve(IntStream.range(2, 1_000).boxed()).forEach(System.out::println);
}
Stream<Integer> sieve(Stream<Integer> s){
return headAndTailOptional(s).map(ht ->Stream.concat(Stream.of(ht.head())
,sieve(ht.tail().filter(n -> n % ht.head() != 0))))
.orElse(Stream.of());
}
Another update: a lazy iterative solution based on @Holger's version, using objects rather than primitives (note that a primitive version is also possible):
final Mutable<Predicate<Integer>> predicate = Mutable.of(x->true);
SequenceM.iterate(2, n->n+1)
.filter(i->predicate.get().test(i))
.peek(i->predicate.mutate(p-> p.and(v -> v%i!=0)))
.limit(100000)
.forEach(System.out::println);
There are many interesting suggestions provided here, but if someone needs a solution without dependencies on third-party libraries, I came up with this:
import java.util.AbstractMap;
import java.util.Iterator;
import java.util.Map;
import java.util.Optional;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;
/**
 * Splits a stream into the head element and a tail stream.
 * Parallel streams are not supported.
 *
 * @param stream Stream to split.
 * @param <T> Type of the input stream.
 * @return A map entry where {@link Map.Entry#getKey()} contains an
 *         optional with the first element (head) of the original stream
 *         and {@link Map.Entry#getValue()} the tail of the original stream.
 * @throws IllegalArgumentException for parallel streams.
 */
public static <T> Map.Entry<Optional<T>, Stream<T>> headAndTail(final Stream<T> stream) {
    if (stream.isParallel()) {
        throw new IllegalArgumentException("parallel streams are not supported");
    }
    final Iterator<T> iterator = stream.iterator();
    return new AbstractMap.SimpleImmutableEntry<>(
            iterator.hasNext() ? Optional.of(iterator.next()) : Optional.empty(),
            StreamSupport.stream(Spliterators.spliteratorUnknownSize(iterator, 0), false)
    );
}
To get head and tail you need a lazy Stream implementation. Java 8 streams and RxJava are not suitable.
You can use, for example, LazySeq, as follows.
A lazy sequence is always traversed from the beginning using very cheap first/rest decomposition (head() and tail()).
LazySeq implements the java.util.List interface, thus it can be used in a variety of places. Moreover, it also implements the Java 8 enhancements to collections, namely streams and collectors.
package com.company;
import com.nurkiewicz.lazyseq.LazySeq;
public class Main {
public static void main(String[] args) {
LazySeq<Integer> ints = integers(2);
LazySeq<Integer> primes = sieve(ints);
primes.take(10).forEach(p -> System.out.println(p));
}
private static LazySeq<Integer> sieve(LazySeq<Integer> s) {
return LazySeq.cons(s.head(), () -> sieve(s.filter(x -> x % s.head() != 0)));
}
private static LazySeq<Integer> integers(int from) {
return LazySeq.cons(from, () -> integers(from + 1));
}
}
Here is another recipe using the approach suggested by Holger.
It uses RxJava just to add the possibility of using the take(int) method and many others.
package com.company;
import rx.Observable;
import java.util.function.IntPredicate;
import java.util.stream.IntStream;
public class Main {
public static void main(String[] args) {
final IntPredicate[] p={(x)->true};
IntStream primesStream=IntStream.iterate(2,n->n+1).filter(i -> p[0].test(i)).peek(i->p[0]=p[0].and(v->v%i!=0) );
Observable primes = Observable.from(()->primesStream.iterator());
primes.take(10).forEach((x) -> System.out.println(x.toString()));
}
}
This should work with parallel streams as well:
public static <T> Map.Entry<Optional<T>, Stream<T>> headAndTail(final Stream<T> stream) {
final AtomicReference<Optional<T>> head = new AtomicReference<>(Optional.empty());
final var spliterator = stream.spliterator();
spliterator.tryAdvance(x -> head.set(Optional.of(x)));
return Map.entry(head.get(), StreamSupport.stream(spliterator, stream.isParallel()));
}
If you want to get the head of a stream, just:
IntStream.range(1, 5).first();
If you want to get the tail of a stream, just:
IntStream.range(1, 5).skip(1);
If you want to get both the head and the tail of a stream, just:
IntStream s = IntStream.range(1, 5);
int head = s.head();
IntStream tail = s.tail();
If you want to find the primes, just:
LongStream.range(2, n)
.filter(i -> LongStream.range(2, (long) Math.sqrt(i) + 1).noneMatch(j -> i % j == 0))
.forEach(N::println);
If you want to know more, check out abacus-common.
Disclosure: I'm the developer of abacus-common.
package com.spse.pricing.client.main;
import java.util.stream.IntStream;
public class NestedParalleStream {
int total = 0;
public static void main(String[] args) {
NestedParalleStream nestedParalleStream = new NestedParalleStream();
nestedParalleStream.test();
}
void test(){
try{
IntStream stream1 = IntStream.range(0, 2);
stream1.parallel().forEach(a ->{
IntStream stream2 = IntStream.range(0, 2);
stream2.parallel().forEach(b ->{
IntStream stream3 = IntStream.range(0, 2);
stream3.parallel().forEach(c ->{
//2 * 2 * 2 = 8;
total ++;
});
});
});
//It should display 8
System.out.println(total);
}catch(Exception e){
e.printStackTrace();
}
}
}
Please help: how should the parallel streams be customized to make sure we get consistent results?
Since multiple threads are incrementing total, you must declare it volatile to avoid race conditions.
Edit: volatile makes read/write operations atomic, but total++ requires more than one operation. For that reason, you should use an AtomicInteger:
AtomicInteger total = new AtomicInteger();
...
total.incrementAndGet();
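Putting that together, a sketch of the test method with the counter replaced by a java.util.concurrent.atomic.AtomicInteger (everything else as in the question):
AtomicInteger total = new AtomicInteger();
IntStream.range(0, 2).parallel().forEach(a ->
        IntStream.range(0, 2).parallel().forEach(b ->
                IntStream.range(0, 2).parallel().forEach(c ->
                        total.incrementAndGet()))); // atomic, so no updates are lost
System.out.println(total.get()); // reliably prints 8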
The problem is in the statement total++; it is invoked from multiple threads simultaneously.
You should protect it with synchronized or use an AtomicInteger.
LongAdder or LongAccumulator are preferable to AtomicLong or AtomicInteger where multiple threads are mutating the value and it's intended to be read relatively few times, such as once at the end of the computation. The adder/accumulator objects avoid contention problems that can occur with the atomic objects. (There are corresponding adder/accumulator objects for double values.)
There is usually a way to rewrite accumulations using reduce() or collect(). These are often preferable, especially if the value being accumulated (or collected) isn't a long or a double.
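For example, a sketch of the same count done with a java.util.concurrent.atomic.LongAdder (not from the original answer):
LongAdder total = new LongAdder();
IntStream.range(0, 2).parallel().forEach(a ->
        IntStream.range(0, 2).parallel().forEach(b ->
                IntStream.range(0, 2).parallel().forEach(c ->
                        total.increment()))); // low-contention increment
System.out.println(total.sum()); // read once, after the streams complete: 8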
There is a major problem regarding mutability with the way you are solving it. A better way to get the result you want would be as follows:
int total = IntStream.range(0, 2)
        .parallel()
        .map(i -> {
            return IntStream.range(0, 2)
                    .map(j -> {
                        return IntStream.range(0, 2)
                                .map(k -> i * j * k) // mapped value is ignored below
                                // counts the two elements of the innermost range
                                .reduce(0, (acc, val) -> acc + 1);
                    }).sum();
        }).sum(); // 2 * 2 * 2 = 8