I have seen in some projects that people use Predicates instead of pure if statements, as illustrated with a simple example below:
int i = 5;
// Option 1
if (i == 5) {
// Do something
System.out.println("if statement");
}
// Option 2
Predicate<Integer> predicate = integer -> integer == 5;
if (predicate.test(i)) {
// Do something
System.out.println("predicate");
}
What's the point of preferring Predicates over if statements?
Using a predicate makes your code more flexible.
Instead of writing a condition that always checks if i == 5, you can write a condition that evaluates a Predicate, which allows you to pass different Predicates implementing different conditions.
For example, the Predicate can be passed as an argument to a method:
public void someMethod(int i, Predicate<Integer> predicate) {
    if (predicate.test(i)) {
        // do something
        System.out.println("predicate");
    }
    ...
}
This is how the filter method of Stream works.
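As a small, self-contained sketch (the method and variable names are illustrative, not from the question), the same method can be reused with different conditions, which is exactly the shape of Stream.filter:
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class PredicateDemo {
    // The caller supplies the condition instead of it being hard-coded in an if statement.
    static long countMatching(List<Integer> values, Predicate<Integer> condition) {
        return values.stream().filter(condition).count();
    }

    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(1, 2, 3, 4, 5);
        System.out.println(countMatching(values, i -> i == 5));     // 1
        System.out.println(countMatching(values, i -> i % 2 == 0)); // 2
    }
}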
For the exact example that you provided, using a Predicate is overkill. The compiler and then the runtime will create:
a method (the de-sugared predicate body)
a class that implements java.util.function.Predicate
an instance of the class created in step 2
all this versus a simple if statement.
And all this for a stateless Predicate. If your predicate is stateful, like:
Predicate<Integer> p = (Integer j) -> this.isJGood(j); // you are capturing "this"
then a new instance will be created every time the lambda expression is evaluated (at least under the current JVM implementation).
The only case where creating such a Predicate pays off, IMO, is when you re-use it in multiple places (for example, by passing it as an argument to methods).
Using if statements is the best (read: most performant) way to check binary conditions.
The switch statement may be faster for more complex situations.
A Predicate is a special form of Function. In fact, the Java language architects are working on a way to allow generic type parameters over primitives, which would make Predicate<T> roughly equivalent to Function<T, boolean> (modulo the test vs apply method name).
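For intuition, a small sketch (variable names are illustrative) of how the same test can already be written either way today, modulo boxing and the method name:
Predicate<Integer> isFive = i -> i == 5;
Function<Integer, Boolean> isFiveBoxed = i -> i == 5;

isFive.test(5);       // true (primitive boolean)
isFiveBoxed.apply(5); // Boolean.TRUE (boxed)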
If a function (resp. method) takes one or more functions as arguments, we call it a higher-order function. We say that we are passing behaviour to a function. This allows us to create powerful APIs.
String result = Match(arg).of(
Case(isIn("-h", "--help"), help()),
Case(isIn("-v", "--version"), version()),
Case($(), cmd -> "unknown command: " + cmd)
);
This example is taken from Javaslang, a library for object-functional programming in Java 8+.
Disclaimer: I'm the creator of Javaslang.
This is an old question, but I'll give it a try, since I am battling with it myself...
In my attempt to justify my own usage of predicates, I have made a rule for myself.
I believe Predicates are useful where the "logic point" is NOT the leaf / corner / end of a graph, tree, or straight line; that placement makes the logic point effectively a "logic joint".
By being a joint (aka node) it has a state, a re-usable and mutable state, that serves as a means towards an end.
In a stream, where the data is supposed to traverse a path, predicates are useful since they grant a degree of access while keeping the integrity of the stream; this is why the best predicates, IMO, are plain method references that minimize side effects.
Admittedly, the most common form of Predicate is newObject.equals(old), which is in itself a BiPredicate, but it CAN be used as a single Predicate with a side-effecting lambda, lambda -> lambda.equals(localCache) (so this may be an exception to the "only method references" rule).
IF the logic serves as the output/exit point towards a different architectural design, a component, or code that is not written by you (or, even if it is written by you, code that differs in its functionality), then an if-else is my way to go.
Another benefit of predicates in the case of reactive programming is that multiple subscribers can make use of the same defined logic gate.
But if the end point of a publisher will be a single lone subscriber (which would be a case similar to your example if I'm reaching), then the logic is better done with an if-else.
Vavr's Either seems to solve one of my problems, where some method does a lot of checks and returns either CalculationError or CalculationResult.
Either<CalculationError, CalculationResult> calculate (CalculationData calculationData) {
// either returns Either.left(new CalculationError()) or Either.right(new CalculationResult())
}
I have a wrapper which stores both errors and results
class Calculation {
List<CalculationResult> calculationResults;
List<CalculationError> calculationErrors;
}
Is there any neat solution to transform a stream over Collection<CalculationData> data into a Calculation?
This can be easily done using a custom collector. Using Either's API (isLeft, getLeft, and get for the right value):
Collector<Either<CalculationError, CalculationResult>, ?, Calculation> collector = Collector.of(
    Calculation::new,
    (calc, either) -> {
        if (either.isLeft()) {
            calc.calculationErrors.add(either.getLeft());
        } else {
            calc.calculationResults.add(either.get());
        }
    },
    (calc1, calc2) -> {
        calc1.calculationErrors.addAll(calc2.calculationErrors);
        calc1.calculationResults.addAll(calc2.calculationResults);
        return calc1;
    }
);
Calculation calc = data.stream()
.map(this::calculate)
.collect(collector);
Note that Calculation should initialize its two lists (in the declaration or a new constructor).
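For example, a minimal sketch of Calculation that initializes both lists in the declaration (field names as in the question):
class Calculation {
    List<CalculationResult> calculationResults = new ArrayList<>();
    List<CalculationError> calculationErrors = new ArrayList<>();
}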
Well, you're using vavr, so 'neat' is right out. Tends to happen when you use tools that are hostile to the idiomatic form of the language. But, then again, 'neat' is a nebulous term with no clear defined meaning, so, I guess, whatever you think is 'neat', is therefore 'neat'. Neat, huh?
Either itself has the sequence method - but both of them work the way Either is supposed to work: They are left-biased in the sense that any Lefts present is treated as erroneous conditions, and that means all the Right values are discarded if even one of your Eithers is a Left. Thus, you cannot use either of the sequence methods to let Either itself bake you a list of the Right values. Even sequenceRight won't do this for you (it stops on the first Left in the list and returns that instead). The filter stuff similarly doesn't work like that - Either very much isn't really an Either in the sense of what that word means if you open a dictionary: It does not mean: A homogenous mix of 2 types. It's solely a non-java-like take on exception management: Right contains the 'answer', left contains the 'error' (you're using it correctly), but as a consequence there's nothing in the Either API to help with this task - which in effect involves 'please filter out the errors and then do something' ("Silently ignore errors" is rarely the right move. It is what is needed here, but it makes sense that the Either API isn't going to hand you a footgun. Even if you need it here).
Thus, we just write it plain jane java:
var calculation = new Calculation();
for (var e : mix) {
if (e.isLeft()) calculation.calculationErrors.add(e.getLeft());
if (e.isRight()) calculation.calculationResults.add(e.get());
}
(This presumes your Calculation constructor at least initializes those lists to empty mutables).
NB: Rob Spoor's answer also assumes this and is much, much longer. Sometimes the functional way is the silly, slow, unwieldy, hard to read, way.
NB2: Either.sequence(mix).orElseRun(s -> calculation.calculationErrors = s.asJava()); is a rather 'neat' way (perhaps - it's in the eye of the beholder) of setting up the errors field of your Calculation class. No joy for such a 'neat' trick to fill the 'results' part of it all, however. That's what the bulk of my answer is trying to explain: There is no nice API for that in Either, and it's probably by design, as that involves intentionally ignoring the errors in the list of Eithers.
Since you are using VAVr, you may consider using Traversable instead of Collection. This will give you the method partition, which can be used to classify your list of Eithers into two groups like so:
Traversable<Either<CalculationError, CalculationResult>> calculations = ...;
var partitionedCalcs = calculations.partition(Either::isRight);
var results = partitionedCalcs._1.map(Either::get);
var errors = partitionedCalcs._2.map(Either::getLeft);
Calculation calcs = new Calculation(results, errors);
If you don't want to change your existing use of Collection to use a Traversable, then you can easily convert between them by using, for example, List.ofAll(Iterator) and Value.toJavaCollection(Function).
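For example, a sketch of both conversions (assuming Vavr's io.vavr.collection.List.ofAll(Iterable) and Value.toJavaList(), and the results value from the snippet above):
// java.util.Collection -> Vavr Traversable
io.vavr.collection.List<CalculationData> vavrData = io.vavr.collection.List.ofAll(data);

// Vavr Traversable -> java.util.List
java.util.List<CalculationResult> javaResults = results.toJavaList();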
In this program, let’s say I have a class Leader that I want to assign to a class Mission. The Mission requires a class Skill, which has a type and a strength. The Leader has a List of Skills. I want to write a method that assigns a Leader (or a number of leaders) to a Mission and check if the Leaders’ combined skill strength is enough to accomplish the Mission.
public void assignLeaderToMission(Mission m, Leader... leaders) {
List<Leader> selectedLeaders = new ArrayList(Arrays.asList(leaders));
int combinedStrength = selectedLeaders
.stream()
.mapToInt(l -> l.getSkills()
.stream()
.filter(s -> s.getType() == m.getSkillRequirement().getType())
.mapToInt(s -> s.getStrength())
.sum())
.sum();
if(m.getSkillRequirement().getStrength() > combinedStrength)
System.out.println("Leader(s) do not meet mission requirements");
else {
// assign leader to mission
}
}
Is this the appropriate way to use a stream with lambda operations? NetBeans is giving a suggestion that I use an anonymous class, but I thought that lambdas and aggregate operations were supposed to replace the need for anonymous classes with a single method, or maybe I am interpreting this incorrectly.
In this case, I am accessing a List<> within a List<> and I am not sure this is the correct way to do so. Some help would be much appreciated.
There is nothing wrong with using lambda expressions here. NetBeans just offers that code transformation because it is possible (and NetBeans can do the transformation for you). If you accept the offer and let it convert the code, it will very likely start offering to convert the anonymous class back to a lambda expression as soon as the conversion has been done, simply because that is (now) possible.
But if you want to improve your code, you should not use raw types, i.e. use
List<Leader> selectedLeaders = new ArrayList<>(Arrays.asList(leaders));
instead. But if you just want a List<Leader> without needing support for add or remove, there is no need to copy the list into an ArrayList, so you can use
List<Leader> selectedLeaders = Arrays.asList(leaders);
instead. But if all you want to do, is to stream over an array, you don’t need a List detour at all. You can simply use Arrays.stream(leaders) in the first place.
You may also use flatMap to reduce the amount of nested code, i.e.
int combinedStrength = Arrays.stream(leaders)
.flatMap(l -> l.getSkills().stream())
.filter(s -> s.getType() == m.getSkillRequirement().getType())
.mapToInt(s -> s.getStrength())
.sum();
Lambdas must be concise so that they are easy to maintain. If a lambda expression is lengthy, the code becomes hard to maintain and understand, and even debugging becomes harder.
More details on Why the perfect lambda expression is just one line can be read here.
The perilously long lambda
To better understand the benefits of writing short, concise lambda expressions, consider the opposite: a sprawling lambda that unfolds over several lines of code:
System.out.println(
values.stream()
.mapToInt(e -> {
int sum = 0;
for(int i = 1; i <= e; i++) {
if(e % i == 0) {
sum += i;
}
}
return sum;
})
.sum());
Even though this code is written in the functional style, it misses the benefits of functional-style programming. Let's consider the reasons why.
1. It's hard to read
Good code should be inviting to read. This code takes mental effort to read: your eyes strain to find the beginning and end of the different parts.
2. Its purpose isn't clear
Good code should read like a story, not like a puzzle. A long, anonymous piece of code like this one hides the details of its purpose, costing the reader time and effort. Wrapping this piece of code into a named function would make it modular, while also bringing out its purpose through the associated name.
3. Poor code quality
Whatever your code does, it's likely that you'll want to reuse it sometime. The logic in this code is embedded within the lambda, which in turn is passed as an argument to another function, mapToInt. If we needed the code elsewhere in our program, we might be tempted to rewrite it, thus introducing inconsistencies in our code base. Alternatively, we might just copy and paste the code. Neither option would result in good code or quality software.
4. It's hard to test
Code always does what was typed and not necessarily what was intended, so it stands to reason that any nontrivial code must be tested. If the code within the lambda expression can't be reached as a unit, it can't be unit tested. You could run integration tests, but that is no substitute for unit testing, especially when that code does significant work.
5. Poor code coverage
In one project, lambdas that were embedded in arguments could not easily be extracted as units, and many showed up red on the coverage report. With no insight, the team simply had to assume that those pieces worked.
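As a sketch of point 2 above (the class name MyMathUtils and the extracted method are illustrative), here is the same pipeline with the logic pulled out into a named, separately testable method:
static int sumOfDivisors(int e) {
    int sum = 0;
    for (int i = 1; i <= e; i++) {
        if (e % i == 0) {
            sum += i;
        }
    }
    return sum;
}

System.out.println(
    values.stream()
          .mapToInt(MyMathUtils::sumOfDivisors) // the intent now has a name and the unit can be tested
          .sum());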
Do lambda expressions have any use other than saving lines of code?
Are there any special features provided by lambdas which solved problems which weren't easy to solve? The typical usage I've seen is that instead of writing this:
Comparator<Developer> byName = new Comparator<Developer>() {
    @Override
    public int compare(Developer o1, Developer o2) {
        return o1.getName().compareTo(o2.getName());
    }
};
We can use a lambda expression to shorten the code:
Comparator<Developer> byName =
(Developer o1, Developer o2) -> o1.getName().compareTo(o2.getName());
Lambda expressions do not change the set of problems you can solve with Java in general, but they definitely make solving certain problems easier, for the same reason we're not programming in assembly language anymore. Removing redundant tasks from the programmer's work makes life easier and allows you to do things you wouldn't even touch otherwise, just because of the amount of code you would have to produce (manually).
But lambda expressions are not just about saving lines of code. Lambda expressions allow you to define functions, something for which you could use anonymous inner classes as a workaround before; that's why you can replace anonymous inner classes in these cases, but not in general.
Most notably, lambda expressions are defined independently of the functional interface they will be converted to, so there are no inherited members they could access; furthermore, they cannot access the instance of the type implementing the functional interface. Within a lambda expression, this and super have the same meaning as in the surrounding context, see also this answer. Also, you cannot create new local variables shadowing local variables of the surrounding context. For the intended task of defining a function, this removes a lot of error sources, but it also implies that for other use cases, there might be anonymous inner classes which cannot be converted to a lambda expression, even if they implement a functional interface.
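A minimal sketch illustrating the different meaning of this (the class name ThisDemo is illustrative):
public class ThisDemo {
    public void demo() {
        Runnable anon = new Runnable() {
            @Override
            public void run() {
                // 'this' is the anonymous Runnable instance
                System.out.println(this.getClass()); // prints something like class ThisDemo$1
            }
        };
        Runnable lambda = () -> {
            // 'this' means the same as in the surrounding context: the ThisDemo instance
            System.out.println(this.getClass()); // prints class ThisDemo
        };
        anon.run();
        lambda.run();
    }

    public static void main(String[] args) {
        new ThisDemo().demo();
    }
}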
Further, the construct new Type() { … } guarantees to produce a new distinct instance (as new always does). Anonymous inner class instances always keep a reference to their outer instance if created in a non-static context¹. In contrast, lambda expressions only capture a reference to this when needed, i.e. if they access this or a non-static member. And they produce instances of an intentionally unspecified identity, which allows the implementation to decide at runtime whether to reuse existing instances (see also “Does a lambda expression create an object on the heap every time it's executed?”).
These differences apply to your example. Your anonymous inner class construct will always produce a new instance, also it may capture a reference to the outer instance, whereas your (Developer o1, Developer o2) -> o1.getName().compareTo(o2.getName()) is a non-capturing lambda expression that will evaluate to a singleton in typical implementations. Further, it doesn’t produce a .class file on your hard drive.
Given the differences regarding both, semantic and performance, lambda expressions may change the way programmers will solve certain problems in the future, of course, also due to the new APIs embracing ideas of functional programming utilizing the new language features. See also Java 8 lambda expression and first-class values.
¹ From JDK 1.1 to JDK 17. Starting with JDK 18, inner classes may not retain a reference to the outer instance if it is not used. For compatibility reasons, this requires the inner class not be serializable. This only applies if you (re)compile the inner class under JDK 18 or newer with target JDK 18 or newer. See also JDK-8271717
Programming languages are not for machines to execute.
They are for programmers to think in.
Languages are a conversation with a compiler to turn our thoughts into something a machine can execute. One of the chief complaints about Java from people who come to it from other languages (or leave it for other languages) used to be that it forces a certain mental model on the programmer (i.e. everything is a class).
I'm not going to weigh in on whether that's good or bad: everything is trade-offs. But Java 8 lambdas allow programmers to think in terms of functions, which is something you previously could not do in Java.
It's the same thing as a procedural programmer learning to think in terms of classes when they come to Java: you see them gradually move from classes that are glorified structs and have 'helper' classes with a bunch of static methods and move on to something that more closely resembles a rational OO design (mea culpa).
If you just think of them as a shorter way to express anonymous inner classes then you are probably not going to find them very impressive in the same way that the procedural programmer above probably didn't think classes were any great improvement.
Saving lines of code can be viewed as a new feature, if it enables you to write a substantial chunk of logic in a shorter and clearer manner, which takes less time for others to read and understand.
Without lambda expressions (and/or method references) Stream pipelines would have been much less readable.
Think, for example, how the following Stream pipeline would have looked like if you replaced each lambda expression with an anonymous class instance.
List<String> names =
people.stream()
.filter(p -> p.getAge() > 21)
.map(p -> p.getName())
.sorted((n1,n2) -> n1.compareToIgnoreCase(n2))
.collect(Collectors.toList());
It would be:
List<String> names =
    people.stream()
          .filter(new Predicate<Person>() {
              @Override
              public boolean test(Person p) {
                  return p.getAge() > 21;
              }
          })
          .map(new Function<Person,String>() {
              @Override
              public String apply(Person p) {
                  return p.getName();
              }
          })
          .sorted(new Comparator<String>() {
              @Override
              public int compare(String n1, String n2) {
                  return n1.compareToIgnoreCase(n2);
              }
          })
          .collect(Collectors.toList());
This is much harder to write than the version with lambda expressions, and it's much more error prone. It's also harder to understand.
And this is a relatively short pipeline.
To make this readable without lambda expressions and method references, you would have had to define variables that hold the various functional interface instances being used here, which would have split the logic of the pipeline, making it harder to understand.
Internal iteration
When iterating Java Collections, most developers tend to get an element and then process it. That is, take the item out and then use it, or reinsert it, etc. With pre-8 versions of Java, you can implement an inner class and do something like:
numbers.forEach(new Consumer<Integer>() {
public void accept(Integer value) {
System.out.println(value);
}
});
Now with Java 8 you can do better and be less verbose:
numbers.forEach((Integer value) -> System.out.println(value));
or better
numbers.forEach(System.out::println);
Behaviors as arguments
Guess the following case:
public int sumAllEven(List<Integer> numbers) {
int total = 0;
for (int number : numbers) {
if (number % 2 == 0) {
total += number;
}
}
return total;
}
With the Java 8 Predicate interface you can do better, like so:
public int sumAll(List<Integer> numbers, Predicate<Integer> p) {
int total = 0;
for (int number : numbers) {
if (p.test(number)) {
total += number;
}
}
return total;
}
Calling it like:
sumAll(numbers, n -> n % 2 == 0);
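The same method can then be reused with different behaviors, for example (the conditions below are illustrative):
int evens = sumAll(numbers, n -> n % 2 == 0); // sum of the even numbers
int all   = sumAll(numbers, n -> true);       // sum of everything
int big   = sumAll(numbers, n -> n > 10);     // sum of the numbers greater than 10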
Source: DZone - Why We Need Lambda Expressions in Java
There are many benefits of using lambdas instead of inner classes, including the following:
Make the code more compact and expressive without introducing much additional language syntax. You already gave an example in your question.
By using lambdas you can easily program with functional-style operations on streams of elements, such as map-reduce transformations on collections. See the java.util.function & java.util.stream packages documentation.
There is no physical class file generated for lambdas by the compiler, which makes your delivered applications smaller. How Memory assigns to lambda?
The JVM will reuse a lambda instance if the lambda doesn't capture variables from its enclosing scope, which means such a lambda instance is only created once. For more details you can see @Holger's answer to the question Is method reference caching a good idea in Java 8?
Lambdas can implement additional marker interfaces besides the functional interface (via an intersection-type cast), whereas an anonymous inner class cannot implement more than one interface. For example:
// v--- create the lambda locally.
Consumer<Integer> action = (Consumer<Integer> & Serializable) it -> {/*TODO*/};
Lambdas are just syntactic sugar for anonymous classes.
Before lambdas, anonymous classes could be used to achieve the same thing. Every lambda expression can be converted to an anonymous class.
If you are using IntelliJ IDEA, it can do the conversion for you:
Put the cursor in the lambda
Press alt/option + enter
To answer your question, the fact of the matter is that lambdas don't let you do anything that you couldn't do prior to Java 8; rather, they enable you to write more concise code. The benefit is that your code will be clearer and more flexible.
One thing I don't see mentioned yet is that a lambda lets you define functionality where it's used.
So if you have some simple selection function you don't need to put it in a separate place with a bunch of boilerplate, you just write a lambda that's concise and locally relevant.
Yes, there are many advantages:
No need to define a whole class; we can pass an implementation of the function itself as a reference.
Creating a class normally produces a .class file, whereas with a lambda the compiler avoids creating a dedicated class, because you are passing a function implementation instead of a class.
Code re-usability is higher than before.
And, as you said, the code is shorter than a normal implementation.
Function composition and higher order functions.
Lambda functions can be used as building blocks towards building "higher order functions" or performing "function composition". Lambda functions can be seen as reusable building blocks in this sense.
Example of Higher Order Function via lambda:
Function<IntUnaryOperator, IntUnaryOperator> twice = f -> f.andThen(f);
IntUnaryOperator plusThree = i -> i + 3;
var g = twice.apply(plusThree);
System.out.println(g.applyAsInt(7));
Example of Function Composition
Predicate<String> startsWithA = (text) -> text.startsWith("A");
Predicate<String> endsWithX = (text) -> text.endsWith("x");
Predicate<String> startsWithAAndEndsWithX =
(text) -> startsWithA.test(text) && endsWithX.test(text);
String input = "A hardworking person must relax";
boolean result = startsWithAAndEndsWithX.test(input);
System.out.println(result);
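Note that Predicate already provides the built-in combinators and, or and negate, so the composition above could also be written as (a small sketch):
Predicate<String> composed = startsWithA.and(endsWithX);
System.out.println(composed.test("A hardworking person must relax")); // true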
One benefit not yet mentioned is my favorite: lambdas make deferred execution really easy to write.
Log4j2 uses this for example, where instead of passing a value to conditionally log (a value that may have been expensive to calculate), you can now pass a lambda to calculate that expensive value. The difference being that before, that value was being calculated every time whether it got used or not, whereas now with lambdas if your log level decides not to log that statement, then the lambda never gets called, and that expensive calculation never takes place -- a performance boost!
Could that be done without lambdas? Yes, by surrounding each log statement with if() checks, or using verbose anonymous class syntax, but at the cost of horrible code noise.
Similar examples abound. Lambdas are like having your cake and eating it too: all the efficiency of gnarly multi-line optimized code squeezed down into the visual elegance of one-liners.
Edit: As requested by commenter, an example:
Old way, where expensiveCalculation() always gets called regardless of whether this log statement will actually use it:
logger.trace("expensive value was {}", expensiveCalculation());
New lambda efficient way, where expensiveCalculation() call won't happen unless trace log level is enabled:
logger.trace("expensive value was {}", () -> expensiveCalculation());
What is considered idiomatic iteration of a Collection in Java 8, and why?
for (String foo : foos) {
String bar = bars.get(foo);
if (bar != null)
System.out.println(foo);
}
or
foos.forEach(foo -> {
String bar = bars.get(foo);
if (bar != null)
System.out.println(foo);
});
In the comment thread to this answer, user Bringer128 mentioned these questions regarding a similar issue in C#:
foreach vs someList.Foreach(){}
Generic lists: foreach or list.ForEach?
I would caution against applying the C# discussion to Java. The discussion is interesting, to be sure, and the issues are superficially similar. However, Java and C# are different languages and thus different considerations apply.
For example, this answer mentions that the C# foreach statement is preferable, because the compiler might be able to optimize the loop better in the future. This is not true of Java. In Java, the "enhanced for" loop is defined to be syntactic sugar for getting an Iterator and calling its hasNext and next methods repeatedly. This pretty much guarantees a minimum of two method calls per loop iteration (although there is a possibility for the JIT to inline small methods).
Another example is from this answer, which mentions that in C# it is legal for the delegate invoked by a list's ForEach method to modify the list that it's iterating. In Java there is a blanket prohibition of "interference" with the stream source for the Stream.forEach method, whereas for the enhanced-for loop, the behavior of modifying the underlying list (or whatever) is determined by the Iterator. Many are fail-fast and will throw ConcurrentModificationException if the underlying list is modified during iteration. Others will silently give unexpected results.
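For example, with a typical fail-fast collection (a small sketch, names are illustrative):
List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
for (String s : list) {
    if (s.equals("a")) {
        list.remove(s); // a structural modification; the iterator's next call typically throws ConcurrentModificationException
    }
}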
In any case, don't read the C# discussion and assume that similar reasoning applies to Java.
Now, to answer the question. :-)
I think it's too early to declare one style to be idiomatic or preferable to another at this point. Java 8 has just been released and very few people have much experience with it. Lambdas are new and unfamiliar, and this will make many programmers uncomfortable. They'll thus want to stick to their tried-and-true for-loops. That's perfectly sensible. In a few years, though, after everyone gets used to lambdas, it might be that for-loops will start to look distinctly old-fashioned. Time will tell.
(I think this happened with generics. When they were new, they were intimidating and scary, especially wildcards. Nowadays, though, non-generic code looks distinctly old-fashioned, and to me it has a musty odor about it.)
I have an early sense of how this might turn out. Of course, I might be wrong though.
I'd say that for short loops where the computation is fixed, such as the question posted initially:
for (String foo : foos)
System.out.println(foo);
it just doesn't matter. This could be rewritten as
foos.forEach(foo -> System.out.println(foo));
or even
foos.forEach(System.out::println);
But really, this code is so simple that it's hard to argue that one way is clearly better.
There are situations where the scales tip in one direction or another. If the loop body can throw a checked exception, a for-loop is clearly better. If the loop body is pluggable (e.g., the Consumer is passed in as a parameter) or if internal iteration has different semantics (e.g., locking of a synchronized list during the entire call to forEach) then the new forEach approach has the edge.
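To illustrate the checked-exception point, a sketch (writer and IOException stand in for any checked-exception-throwing work on the foos from the question):
// Straightforward in a for-loop: the exception simply propagates or is caught once.
for (String foo : foos) {
    writer.write(foo); // may throw IOException
}

// Inside forEach, Consumer.accept declares no checked exceptions, so it must be handled in the lambda.
foos.forEach(foo -> {
    try {
        writer.write(foo);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
});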
The updated example,
for (String foo : foos) {
String bar = bars.get(foo);
if (bar != null)
System.out.println(foo);
}
is a bit more complicated, but only slightly. I would not write this using a multi-line lambda:
foos.forEach(foo -> {
String bar = bars.get(foo);
if (bar != null)
System.out.println(foo);
});
This offers no advantage over the straight for-loop, in my opinion, and the different semantics of the lambda are signaled by the little arrow way up in the corner of the first line. However, (similar to Bringer128's answer) I would recast this from a big forEach block into a stream pipeline:
foos.stream()
.filter(foo -> bars.get(foo) != null)
    .forEach(System.out::println);
I think the lambda/streams approach starts to show a bit of an advantage here, but only a bit, as this is still a really simple example. Using lambda/streams replaces some conditional control logic with a data filtering operation. This might make sense for some operations, but not for others.
The difference between the approaches starts to become clearer as things get more complicated. The simple examples are so simple that it's obvious what they do. Real-world examples can be considerably more complex. Consider this code from the method Class.getEnclosingMethod of the JDK (scroll to lines 1023-1052):
Class<?> enclosingCandidate = enclosingInfo.getEnclosingClass();
// ...
for (Method m : enclosingCandidate.getDeclaredMethods()) {
    if (m.getName().equals(enclosingInfo.getName())) {
        Class<?>[] candidateParamClasses = m.getParameterTypes();
        if (candidateParamClasses.length == parameterClasses.length) {
            boolean matches = true;
            for (int i = 0; i < candidateParamClasses.length; i++) {
                if (!candidateParamClasses[i].equals(parameterClasses[i])) {
                    matches = false;
                    break;
                }
            }
            if (matches) { // finally, check return type
                if (m.getReturnType().equals(returnType))
                    return m;
            }
        }
    }
}
throw new InternalError("Enclosing method not found");
(Some security checks and comments have been omitted for the sake of the example.)
Here we have a couple nested for-loops with a couple levels of conditional logic and a boolean flag. Read through this code for a while and see if you can figure out what it does.
Using lambda and streams, this code can be rewritten as follows:
return Arrays.stream(enclosingInfo.getEnclosingClass().getDeclaredMethods())
.filter(m -> Objects.equals(m.getName(), enclosingInfo.getName()))
.filter(m -> Arrays.equals(m.getParameterTypes(), parameterClasses))
.filter(m -> Objects.equals(m.getReturnType(), returnType))
.findFirst()
    .orElseThrow(() -> new InternalError("Enclosing method not found"));
What's going on in the classic version is that the loop control and conditional logic is all about searching a data structure for a match. It's a bit contorted because it breaks early out of the inner loop if it detects a non-match, but returns early from the method if it does find a match. But once you stare at this code long enough, you can see that it's searching for the first element that matches a series of criteria, and returns it; and if it doesn't find one, it throws an error. Once you realize that, the lambda/streams approach just pops right out. Not only is it a lot shorter, it's much easier to understand what it's doing.
There are certainly for-loops that will have weird conditions and side effects that can't be turned easily into streams. But there are a lot of for-loops that are just searching data structures, processing elements conditionally, returning the first match, or accumulating a collection of matches, or accumulating transformed elements. These operations naturally lend themselves to being rewritten into streams, and dare I say, in an idiomatic fashion.
In general the lambda form is more idiomatic for single-statement loops, whereas the non-lambda makes more sense for multi-statement loops. (This ignores composing into a more functional style if possible).
One more style you didn't mention is the method reference:
foos.forEach(System.out::println);
EDIT:
As you're looking for a more general answer; you might find that since lambdas are new in Java, the List.forEach method is less used in practice.
In response to "So why is non-lambda more idiomatic for multi-statement?", it's more the reverse, that multi-statement lambdas are not idiomatic in most languages. Lambdas tend to be used for composition, so if I was to take the example from your question and compose it into a functional style:
// Thanks to @skiwi for fixing this code
foos.stream().filter(foo -> bars.get(foo) != null).forEach(System.out::println);
In the above example, using multi-statement lambdas would make it harder to read rather than easier.
You should only be using the new stream/list's forEach if it really makes your code more concise, else stick with the old version, especially for code that gets executed linearly.
I would rewrite your statement to the following, which does make sense with streams:
foos.stream()
.filter(foo -> (bars.get(foo) != null))
.forEach(System.out::println);
This is a functional approach, that will:
Turn your List<String> into a Stream<String>.
Filter the objects, retaining all elements for which bars.get(foo) is not null; the filter is of type Predicate<String>.
Then you call System.out::println on the Stream<String>, which resolves to bar -> System.out.println(bar) and is of type Consumer<String>.
So in more normal words:
Obtain a stream.
Filter out all unwanted elements, retain the wanted ones.
Consume all elements from the stream.
I'm looking for the inverse of Supplier<T> in Guava. I hoped it would be called Consumer – nope – or Sink – exists, but is for primitive values.
Is it hidden somewhere and I'm missing it?
I'd like to see it for the same kinds of reasons that Supplier is useful. Admittedly, uses are less common, but many of the static methods of Suppliers, for example, would apply in an analogous way, and it would be useful to express in one line things like "send this supplier every value in this iterable".
In the meantime, Predicate and Function<T,Void> are ugly workarounds.
Your alternatives are:
Java 8 introduces a Consumer interface which you can compose.
Xtend's standard library contains Procedures.
Scala has Function*; if a function's return type is Unit, it is considered a side effect.
In all of these languages, you can use functional interfaces conveniently, so you could also use e.g. Functional Java's Effect.
Otherwise, you better rely on existing language constructs for performing side effects, e.g. the built-in for loop. Java < 8 inflicts tremendous syntactic overhead when using lambdas. See this question and this discussion.
You can use a Function and set the second type argument to java.lang.Void; such a Function can only return null.
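For example, a sketch of that workaround using Guava's com.google.common.base.Function:
Function<String, Void> printer = new Function<String, Void>() {
    @Override
    public Void apply(String input) {
        System.out.println(input);
        return null; // Void has no instances, so null is the only possible return value
    }
};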
You have already found the answer. If you just want to visit each element, you can use filter with a predicate that always returns true; if you are feeling defensive, you can use any predicate and combine it with an alwaysTrue predicate via or in the filter itself - just put the alwaysTrue at the end so that short-circuiting does not skip your predicate.
The problem is that, even though I agree that conceptually Predicate and Consumer are different - a Predicate should be as stateless and side-effect-free as possible, while a Consumer is only about the side effects - in practice the only syntactic difference is that one returns a boolean (that can be ignored) and the other returns void. If Guava had a Consumer, it would either need to duplicate several of the methods that take a Predicate so that they also take a Consumer, or have Consumer inherit from Predicate.