How to iterate x times using Java 8 stream? [duplicate] - java

This question already has answers here:
Is it possible to use Streams.intRange function?
(3 answers)
Closed 6 years ago.
I have an old-style for loop to do some load tests:
for (int i = 0; i < 1000; ++i) {
    if ((i + 1) % 100 == 0) {
        System.out.println("Test number " + i + " started.");
    }
    // The test itself...
}
How can I use the new Java 8 stream API to do this without the for loop?
Also, using a stream would make it easy to switch to a parallel stream. How do I switch to a parallel stream?
* I'd like to keep the reference to i.

IntStream.range(0, 1000)
    /* .parallel() */
    .filter(i -> (i + 1) % 100 == 0)
    .peek(i -> System.out.println("Test number " + i + " started."))
    /* other operations on the stream, including a terminal one */;
If the test should run on every iteration regardless of the condition, take the filter out:
IntStream.range(0, 1000)
    .peek(i -> {
        if ((i + 1) % 100 == 0) {
            System.out.println("Test number " + i + " started.");
        }
    })
    .forEach(i -> { /* the test */ });
Another approach (if you want to iterate over an index with a predefined step, as #Tunaki mentioned) is:
IntStream.iterate(0, i -> i + 100)
    .limit(1000 / 100)
    .forEach(i -> { /* the test */ });
There is an awesome overloaded method Stream.iterate(seed, hasNext, next) in JDK 9 which fits your situation perfectly: it is designed to make the stream finite, so it can replace a plain for loop:
Stream<Integer> stream = Stream.iterate(0, i -> i < 1000, i -> i + 100);
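For illustration, here is a minimal sketch (assuming JDK 9 or later; IntStream has the same three-argument iterate overload) of how that form could drive the original loop end to end, with the test itself left as a placeholder:
IntStream.iterate(0, i -> i < 1000, i -> i + 1)   // same bounds and step as the original for loop
        .peek(i -> {
            if ((i + 1) % 100 == 0) {
                System.out.println("Test number " + i + " started.");
            }
        })
        .forEach(i -> { /* the test itself... */ });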

You can use IntStream as shown below and explained in the comments:
(1) Iterate an IntStream over the range 1 to 999 (the end bound 1000 is exclusive)
(2) Convert it to a parallel stream
(3) Apply a filter predicate that keeps integers satisfying (i+1) % 100 == 0
(4) Map each remaining integer to the string "Test number " + i + " started."
(5) Print each string to the console
IntStream.range(1, 1000)                                 // iterates 1 to 999
        .parallel()                                      // converts to a parallel stream
        .filter(i -> (i + 1) % 100 == 0)                 // keeps 99, 199, 299, ...
        .mapToObj(i -> "Test number " + i + " started.") // maps each int to a String
        .forEach(System.out::println);                   // prints each String to the console

Related

Flux.range waits to emit more element once 256 elements are reached

I wrote this code:
Flux.range(0, 300)
    .doOnNext(i -> System.out.println("i = " + i))
    .flatMap(i -> Mono.just(i)
        .subscribeOn(Schedulers.elastic())
        .delayElement(Duration.ofMillis(1000))
    )
    .doOnNext(i -> System.out.println("end " + i))
    .blockLast();
When running it, the first System.out.println shows that the Flux stops emitting numbers at the 256th element, then waits for the older ones to complete before emitting new ones.
Why is this happening?
Why 256?
Why is this happening?
The flatMap operator can be characterized as an operator that (rephrased from the javadoc):
subscribes to its inners eagerly,
does not preserve the ordering of elements,
lets values from different inners interleave.
For this question the first point is important. Project Reactor restricts the number of in-flight inner sequences via the concurrency parameter.
While flatMap(mapper) uses a default value, the flatMap(mapper, concurrency) overload accepts this parameter explicitly.
The flatMap javadoc describes the parameter as:
The concurrency argument allows to control how many Publisher can be subscribed to and merged in parallel
Consider the following code using concurrency = 500
Flux.range(0, 300)
    .doOnNext(i -> System.out.println("i = " + i))
    .flatMap(i -> Mono.just(i)
            .subscribeOn(Schedulers.elastic())
            .delayElement(Duration.ofMillis(1000)),
        500   // <-- concurrency
    )
    .doOnNext(i -> System.out.println("end " + i))
    .blockLast();
In this case there is no waiting:
i = 297
i = 298
i = 299
end 0
end 1
end 2
In contrast, if you pass 1 as the concurrency, the output will be similar to:
i = 0
end 0
i = 1
end 1
with a one-second pause before each next element is emitted.
Why 256?
256 is the default value for concurrency of flatMap.
Take a look at Queues.SMALL_BUFFER_SIZE:
public static final int SMALL_BUFFER_SIZE = Math.max(16,
Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256")));
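Since the default is read from a system property (as the snippet shows), it can in principle be raised globally. A minimal sketch, assuming the property is set before any Reactor class is loaded (passing a -D flag at JVM startup is the safer route):
// Must run before Reactor's Queues class is initialized;
// equivalently, start the JVM with -Dreactor.bufferSize.small=512
System.setProperty("reactor.bufferSize.small", "512");
That said, the explicit concurrency argument shown above is the more targeted fix, since the system property changes Reactor's small buffer size everywhere.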

Flux parallel-serial execution with groupBy

Say I have this:
Flux<GroupedFlux<Integer, Integer>> intsGrouped = Flux.range(0, 12)
.groupBy(i -> i % 3);
and say I have a method:
Mono<Integer> getFromService(Integer i);
I want to call getFromService in parallel for each of the groups, but make sure the calls are serial within each group.
For the above example that would be three parallel streams with these input values:
stream 1: 0 -> 3 -> 6 -> 9
stream 2: 1 -> 4 -> 7 -> 10
stream 3: 2 -> 5 -> 8 -> 11
I tried this, but it's not doing what I want:
Flux.range(0, 12)
    .groupBy(i -> i % 3)
    .flatMap(g -> g.flatMap(i -> getFromService(i)))
This is calling the service in parallel for all the ints at once. How do I proceed?
Use either concatMap or flatMapSequential instead of the inner .flatMap
If you want sequential execution within each group (i.e. only one in-flight call to getFromService at a time within each group), then use .concatMap, like this:
Flux.range(0, 12)
    .groupBy(i -> i % 3)
    .flatMap(g -> g.concatMap(i -> getFromService(i)))
If parallel execution within a group is ok, but you just care about the order in which the sequence is emitted, then use flatMapSequential, like this:
Flux.range(0, 12)
    .groupBy(i -> i % 3)
    .flatMap(g -> g.flatMapSequential(i -> getFromService(i)))
Another option is to use .flatMap with the concurrency argument set to 1, but I'd recommend one of the above instead.
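For completeness, a sketch of that last option, the plain inner flatMap with its concurrency argument set to 1 (using the getFromService signature declared in the question), which also limits each group to one in-flight call:
Flux.range(0, 12)
    .groupBy(i -> i % 3)
    .flatMap(g -> g.flatMap(i -> getFromService(i), 1))   // inner concurrency of 1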

Recursive sum of digits of a number (until it is less than 10), Java 8 lambdas only

I am just practicing Java 8 lambdas. My problem is as follows:
Sum all the digits of an integer until the result is less than 10 (i.e. a single digit is left), then check whether it is 1.
Sample Input 1
100
Sample Output 1
1 // true because it is 1
Sample Input 2
55
Sample Output 2
1 // 5+5 = 10, then 1+0 = 1, so true
I wrote this code:
System.out.println(Arrays.asList(String.valueOf(number).split("")).stream()
    .map(Integer::valueOf)
    .mapToInt(i -> i)
    .sum() == 1);
It works for input 1 (100) but not for input 2 (55). I understand why: in the second case the output is 10, because the summation is not applied recursively.
So how can I make this lambda expression recursive so that it works in the second case as well? I could wrap the lambda in a method and call it repeatedly until the return value is < 10, but I was wondering whether there is an approach using lambdas alone.
Thanks
If you want a pure lambda solution, you should forget about making it recursive, as there is absolutely no reason to implement an iterative process as a recursion:
Stream.iterate(String.valueOf(number),
n -> String.valueOf(n.codePoints().map(Character::getNumericValue).sum()))
.filter(s -> s.length()==1)
.findFirst().ifPresent(System.out::println);
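If you need the boolean asked for in the question rather than the printed digit, a minimal variation of the snippet above (assuming number is a non-negative int defined elsewhere) could be:
boolean isOne = Stream.iterate(String.valueOf(number),
        n -> String.valueOf(n.codePoints().map(Character::getNumericValue).sum()))
    .filter(s -> s.length() == 1)   // stop at the first single-digit result
    .findFirst()
    .map("1"::equals)               // true only if that digit is 1
    .orElse(false);
System.out.println(isOne);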
Making lambdas recursive in Java is not easy because of the "variable may be uninitialized" error, but it can be done. Here is a link to an answer describing one way of doing it.
When applied to your task, this can be done as follows:
// This comes from the answer linked above
class Recursive<I> {
    public I func;
}
public static void main(String[] args) throws java.lang.Exception {
    Recursive<Function<Integer, Integer>> sumDigits = new Recursive<>();
    sumDigits.func = (Integer number) -> {
        int s = Arrays.asList(String.valueOf(number).split(""))
            .stream()
            .map(Integer::valueOf)
            .mapToInt(i -> i)
            .sum();
        return s < 10 ? s : sumDigits.func.apply(s);
    };
    System.out.println(sumDigits.func.apply(100) == 1);
    System.out.println(sumDigits.func.apply(101) == 1);
    System.out.println(sumDigits.func.apply(55) == 1);
    System.out.println(sumDigits.func.apply(56) == 1);
}
I took your code, wrapped it in { ... }s, and added a recursive invocation on the return line.

Learning Java - Do not fully understand how this sequence is calculated (Fibonacci) [duplicate]

This question already has answers here:
Java recursive Fibonacci sequence
(37 answers)
Closed 8 years ago.
I am learning Java; I got this code from the internet and am running it in Eclipse:
public class Fibonacci {
    public static void main(String[] args) {
        for (int counter = 0; counter <= 3; counter++) {
            System.out.printf("Fibonacci of %d is: %d\n", counter, fibonacci(counter));
        }
    }

    public static long fibonacci(long number) {
        if ((number == 0) || (number == 1))
            return number;
        else
            return fibonacci(number - 1) + fibonacci(number - 2);
    }
}
I've tried to understand it but cannot get it. As I run through the code, counter gets passed into the fibonacci method. counter starts at 0, which gets passed first, then 1, and I understand that the method passes back 0 and then 1.
When it reaches 2: it will return 2-1 + 2-2 = 1, and it does return this.
When it reaches 3: I expect it to return 3-1 + 3-2 = 3, but it does not return 3; it returns 2.
Please can someone explain to me why as I cannot figure this out?
Thanks
First, I have to tell you that this recursive version has a dramatic, exponential cost. Once you understand how it works, my advice is to learn about tail recursion, write a tail-recursive solution and an iterative solution, and compare them to your current method for large values of "number".
Then, your function basically uses the mathematical definition of the Fibonacci sequence:
f(0) = 0, f(1) = 1, f(n) = f(n-1) + f(n-2) for all n >= 2
For example, if we call fibonacci(3), it will return fibonacci(2) + fibonacci(1). fibonacci(2) is executed first and returns fibonacci(1) + fibonacci(0). fibonacci(1) immediately returns 1 since it is a terminal case, and the same happens with fibonacci(0), which returns 0; so we have now computed fibonacci(2) = 1 + 0 = 1. Back in fibonacci(3), which at this point has been partially evaluated as 1 + fibonacci(1), we just have to compute fibonacci(1) and can finally return 1 + 1 = 2.
Even in this little example, you can see that fibonacci(1) was evaluated twice. That is why this version is so slow: it computes the same values of the sequence many times, and it gets worse as "number" grows.
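To make the comparison concrete, here is a minimal sketch (not from the original answer) of the iterative version suggested above; it computes each value once, in linear time:
public static long fibonacciIterative(long number) {
    if (number == 0) {
        return 0;           // f(0) = 0
    }
    long previous = 0;      // f(i-2)
    long current = 1;       // f(i-1), starting from f(1) = 1
    for (long i = 2; i <= number; i++) {
        long next = previous + current;   // f(i) = f(i-1) + f(i-2)
        previous = current;
        current = next;
    }
    return current;
}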

Introduce a counter into a loop within scala

I'm writing a small program which will convert a very large file into multiple smaller files, each containing 100 lines.
I'm iterating over a lines iterator:
while (lines.hasNext) {
val line = lines.next()
}
I want to introduce a counter and, when it reaches a certain value, reset it and proceed. In Java I would do something like:
int counter = 0;
while (lines.hasNext()) {
    String line = lines.next();
    if (counter == 100) {
        counter = 0;
    }
    ++counter;
}
Is there something similar in Scala, or an alternative method?
Traditionally in Scala you use .zipWithIndex:
scala> List("foo","bar")
res0: List[java.lang.String] = List(foo, bar)
scala> for((x,i) <- res0.zipWithIndex) println(i + " : " +x)
0 : foo
1 : bar
(this will work with your lines too, as long as they are an Iterator, i.e. they have hasNext and next() methods, or some other Scala collection)
But if you need more complicated logic, like resetting the counter, you may write it the same way as in Java:
var counter = 0
while (lines.hasNext) {
  val line = lines.next()
  if (counter % 100 == 0) {
    // now write to another file
  }
  counter += 1
}
Maybe you can tell us why you want to reset the counter, so we can suggest a better way?
EDIT
According to your update, this is better done with the grouped method, as #pr1001 proposed:
lines.grouped(100).foreach(group => group.foreach(line => { /* write the line to a file */ }))
If resetting the counter reflects the fact that there are repeated groups of data in the original list, you might want to use the grouped method:
scala> val l = List("one", "two", "three", "four")
l: List[java.lang.String] = List(one, two, three, four)
scala> l.grouped(2).toList
res0: List[List[java.lang.String]] = List(List(one, two), List(three, four))
Update: Since you're reading from a file, you should be able to pretty efficiently iterate over the file:
val bigFile = io.Source.fromFile("/tmp/verybigfile")
val groupedLines = bigFile.getLines.grouped(2).zipWithIndex
groupedLines.foreach(group => {
  val (lines, index) = group
  val p = new java.io.PrintWriter("/tmp/" + index)
  lines.foreach(p.println)
  p.close()
})
Of course this could also be written as a for comprehension...
You might even be able to get better performance by converting groupedLines to a parallel collection with .par before writing out each group of lines to its own file.
This would work:
lines grouped 100 flatMap (_.zipWithIndex) foreach {
  case (line, count) => // whatever
}
You may use zipWithIndex along with some transformation.
scala> List(10, 20, 30, 40, 50).zipWithIndex.map(p => (p._1, p._2 % 3))
res0: List[(Int, Int)] = List((10,0), (20,1), (30,2), (40,0), (50,1))
