Can use functional operations - java

I'm using the for-each construct in Java as follows:
public int getNumRStations() {
    int numRoutes = 0;
    for (ArrayList<Route> route : routes) {
        numRoutes += route.size();
    }
    return numRoutes;
}
NetBeans shows a warning/notice that says "Can use functional operations". Upon automatically resolving it, the newly generated code shows this:
public int getNumRStations() {
    int numRoutes = 0;
    numRoutes = routes.stream().map((route) -> route.size()).reduce(numRoutes, Integer::sum);
    return numRoutes;
}
Why is NetBeans warning me of this? I know I'm not supposed to blindly trust IDEs, so that's why I'm asking.
What is that new line supposed to do? I haven't seen anything like it, in real life or in class.

That is NetBeans suggesting that you refactor your sum operation into a Java 8 functional-style ("lambda") operation, using the map and reduce methods of the Stream interface. You must be using a Java 8 JDK with NetBeans.
Breaking down what it's doing:
the map() call transforms each inner ArrayList<Route> in your routes list into its size,
the reduce() call then sums those individual sizes, starting from the initial value numRoutes (0), to produce the total number of routes.
The map() and reduce() methods are documented in the Java 8 Javadoc for the Stream interface.
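For intuition, here is a self-contained sketch of the same pipeline with plain lists standing in for your Route lists (the data is made up purely for illustration):

import java.util.Arrays;
import java.util.List;

public class MapReduceDemo {
    public static void main(String[] args) {
        // Three "routes" with 3, 2, and 1 stations respectively.
        List<List<String>> routes = Arrays.asList(
                Arrays.asList("A", "B", "C"),
                Arrays.asList("D", "E"),
                Arrays.asList("F"));

        int numRoutes = 0;
        // map: each inner list -> its size; reduce: fold the sizes into a sum,
        // starting from the initial value numRoutes (0).
        numRoutes = routes.stream().map(route -> route.size()).reduce(numRoutes, Integer::sum);
        System.out.println(numRoutes); // prints 6
    }
}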
This answer addresses "what it is" but doesn't address "why it's better". I will admit to still learning about these constructs myself.

So @paisanco has already explained what each function does.
I agree that the modification the IDE suggests is more complex than the original.
If I were asked to choose between the original and the IDE's recommendation, I would choose the original.
However, here is a simpler (and preferable) way to write your example:
public int getNumRStations() {
    return routes.stream().mapToInt(x -> x.size()).sum();
}
The explanation is simpler in this case, too.
For each element x of routes, map it to x.size() and sum the results up.
x -> x.size() is called a lambda expression, or anonymous function.
It's like
int function(x) {
    return x.size();
}
(I omitted the parameter type; it is inferred by the Java compiler.)
This function is applied to each element of the collection. That is what the mapToInt(lambdaExpression) method does.
The sum() method needs no explanation.
Simple, isn't it?

Related

Java Aggregate Operations vs Anonymous class suggestion

In this program, let’s say I have a class Leader that I want to assign to a class Mission. The Mission requires a class Skill, which has a type and a strength. The Leader has a List of Skills. I want to write a method that assigns a Leader (or a number of leaders) to a Mission and check if the Leaders’ combined skill strength is enough to accomplish the Mission.
public void assignLeaderToMission(Mission m, Leader... leaders) {
    List<Leader> selectedLeaders = new ArrayList(Arrays.asList(leaders));
    int combinedStrength = selectedLeaders
            .stream()
            .mapToInt(l -> l.getSkills()
                    .stream()
                    .filter(s -> s.getType() == m.getSkillRequirement().getType())
                    .mapToInt(s -> s.getStrength())
                    .sum())
            .sum();
    if (m.getSkillRequirement().getStrength() > combinedStrength)
        System.out.println("Leader(s) do not meet mission requirements");
    else {
        // assign leader to mission
    }
}
Is this the appropriate way to use a stream with lambda operations? NetBeans suggests that I use an anonymous class, but I thought that lambdas and aggregate operations were supposed to replace the need for anonymous classes with a single method, or maybe I am interpreting this incorrectly.
In this case, I am accessing a List<> within a List<> and I am not sure this is the correct way to do so. Some help would be much appreciated.
There is nothing wrong with using lambda expressions here. NetBeans just offers that code transformation since it is possible (and NetBeans can do the transformation for you). If you accept the offer and let it convert the code, it will very likely start offering to convert the anonymous class back to a lambda expression as soon as the conversion has been done, simply because that is (now) possible.
But if you want to improve your code, you should not use raw types, i.e. use
List<Leader> selectedLeaders = new ArrayList<>(Arrays.asList(leaders));
instead. But if you just want a List<Leader> without needing support for add or remove, there is no need to copy the list into an ArrayList, so you can use
List<Leader> selectedLeaders = Arrays.asList(leaders);
instead. But if all you want to do is stream over an array, you don't need the List detour at all. You can simply use Arrays.stream(leaders) in the first place.
You may also use flatMap to reduce the amount of nested code, i.e.
int combinedStrength = Arrays.stream(leaders)
        .flatMap(l -> l.getSkills().stream())
        .filter(s -> s.getType() == m.getSkillRequirement().getType())
        .mapToInt(s -> s.getStrength())
        .sum();
Lambdas should be concise so that they are easy to maintain. If a lambda expression is lengthy, the code becomes hard to understand and maintain, and even debugging becomes harder.
More details can be found in the article "Why the perfect lambda expression is just one line".
The perilously long lambda
To better understand the benefits of writing short, concise lambda expressions, consider the opposite: a sprawling lambda that unfolds over several lines of code:
System.out.println(
    values.stream()
          .mapToInt(e -> {
              int sum = 0;
              for (int i = 1; i <= e; i++) {
                  if (e % i == 0) {
                      sum += i;
                  }
              }
              return sum;
          })
          .sum());
Even though this code is written in the functional style, it misses the benefits of functional-style programming. Let's consider the reasons why.
1. It's hard to read
Good code should be inviting to read. This code takes mental effort to read: your eyes strain to find the beginning and end of the different parts.
2. Its purpose isn't clear
Good code should read like a story, not like a puzzle. A long, anonymous piece of code like this one hides the details of its purpose, costing the reader time and effort. Wrapping this piece of code into a named function would make it modular, while also bringing out its purpose through the associated name (see the sketch after this list).
3. Poor code quality
Whatever your code does, it's likely that you'll want to reuse it sometime. The logic in this code is embedded within the lambda, which in turn is passed as an argument to another function, mapToInt. If we needed the code elsewhere in our program, we might be tempted to rewrite it, thus introducing inconsistencies in our code base. Alternatively, we might just copy and paste the code. Neither option would result in good code or quality software.
4. It's hard to test
Code always does what was typed and not necessarily what was intended, so it stands to reason that any nontrivial code must be tested. If the code within the lambda expression can't be reached as a unit, it can't be unit tested. You could run integration tests, but that is no substitute for unit testing, especially when that code does significant work.
5. Poor code coverage
In one team's experience, lambdas that were embedded in arguments were not easily extracted as units, and many showed up red on the coverage report. With no further insight, the team simply had to assume that those pieces worked.
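As promised above, here is one possible refactoring of that lambda, sketched under the assumption that it computes the sum of each value's divisors (the class and method names are invented for illustration). Extracting the logic into a named method addresses the readability, reuse, and testability points:

import java.util.Arrays;
import java.util.List;

public class SumOfDivisors {
    // The extracted, named unit: now reusable and unit-testable on its own.
    static int sumOfDivisors(int e) {
        int sum = 0;
        for (int i = 1; i <= e; i++) {
            if (e % i == 0) {
                sum += i;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(6, 28, 12);
        // The pipeline now reads like a sentence.
        System.out.println(values.stream()
                                 .mapToInt(SumOfDivisors::sumOfDivisors)
                                 .sum());
    }
}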

Scala to Java (functional programming)

I have been asked to 'translate' some Scala code to Java for a course. However, the requirements of the assignment are that Java 8 and external libraries, such as Functional Java and Totally Lazy, are not allowed. The line in Scala is:
charges.groupBy(_.cc).values.map(_.reduce(_ combine _)).toList
I have been able to write groupBy and values but .map and _.reduce still elude me. I have looked at the source code of those two libraries as well as the Scala source to try and find something to help me with putting these together but I have not been able to make any headway.
GroupBy is implemented as follows:
public Map<CreditCard, List<Charge>> groupBy(List<Charge> list) {
    Map<CreditCard, List<Charge>> map = new TreeMap<CreditCard, List<Charge>>();
    for (Charge c : list) {
        List<Charge> group = map.get(c.cc);
        if (group == null) {
            group = new ArrayList<Charge>();
            map.put(c.cc, group);
        }
        group.add(c);
    }
    return map;
}
You can use Google Guava for this; it doesn't require Java 8. You would especially be interested in the class called FluentIterable. Here are some methods that could help you:
index(Function keyFunction) - uses the given function to produce the key for each value
transform(Function function) - applies the given function to each element of this fluent iterable
and there are a lot more.
You'll have to iterate over the values. Ordinarily I'd suggest using a new style for loop. Something like:
for (ValuesType value : values) {
    // do your map and reduce here
}
The problem with that is you need to have access to more than one value at a time. (See discussion of .reduce(), below.) So perhaps the old style for would be better:
for (int i = 0; i < values.length - 1; i++) {
    // do something with values.get(i) or values[i] as needed
}
Point to ponder: why values.length - 1?
.map() simply transforms one thing into another. What's the transformation in this case? It's the .reduce()! So that should be easy enough.
The _ in _.reduce() is the equivalent of value in the for statement above. It's the one value that you're dealing with on this iteration.
.reduce() takes two values and does something to them to turn them into a single value. To make that work you'll need to deal with values.get(i) and values.get(i+1) somehow. And _ combine _, well, you tell me.
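If it helps, here is a hedged Java 7 sketch of the remaining map/reduce step, reusing the groupBy method shown above. It assumes Charge has a combine(Charge) method, as the Scala _ combine _ implies; adjust the name to whatever your Charge class actually provides.

// Plain Java 7, no libraries: for each card's group, fold the charges
// together with Charge.combine, mirroring
//   charges.groupBy(_.cc).values.map(_.reduce(_ combine _)).toList
// (uses java.util.List and java.util.ArrayList)
public List<Charge> combineByCard(List<Charge> charges) {
    List<Charge> result = new ArrayList<Charge>();
    for (List<Charge> group : groupBy(charges).values()) {
        Charge combined = group.get(0);                // seed of the reduction
        for (int i = 1; i < group.size(); i++) {
            combined = combined.combine(group.get(i)); // the "_ combine _" step
        }
        result.add(combined);
    }
    return result;
}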

Why should we prefer forEach in Java 8? [duplicate]

Which of the following is better practice in Java 8?
Java 8:
joins.forEach(join -> mIrc.join(mSession, join));
Java 7:
for (String join : joins) {
mIrc.join(mSession, join);
}
I have lots of for loops that could be "simplified" with lambdas, but is there really any advantage of using them? Would it improve their performance and readability?
EDIT
I'll also extend this question to longer methods. I know that you can't return or break the parent function from a lambda and this should also be taken into consideration when comparing them, but is there anything else to be considered?
The better practice is to use for-each. Besides violating the Keep It Simple, Stupid principle, the new-fangled forEach() has at least the following deficiencies:
Can't use non-final variables. So, code like the following can't be turned into a forEach lambda:
Object prev = null;
for (Object curr : list) {
    if (prev != null)
        foo(prev, curr);
    prev = curr;
}
Can't handle checked exceptions. Lambdas aren't actually forbidden from throwing checked exceptions, but common functional interfaces like Consumer don't declare any. Therefore, any code that throws checked exceptions must wrap them in try-catch or Throwables.propagate(). But even if you do that, it's not always clear what happens to the thrown exception. It could get swallowed somewhere in the guts of forEach()
Limited flow-control. A return in a lambda equals a continue in a for-each, but there is no equivalent to a break. It's also difficult to do things like return values, short circuit, or set flags (which would have alleviated things a bit, if it wasn't a violation of the no non-final variables rule). "This is not just an optimization, but critical when you consider that some sequences (like reading the lines in a file) may have side-effects, or you may have an infinite sequence."
Might execute in parallel, which is a horrible, horrible thing for all but the 0.1% of your code that needs to be optimized. Any parallel code has to be thought through (even if it doesn't use locks, volatiles, and other particularly nasty aspects of traditional multi-threaded execution). Any bug will be tough to find.
Might hurt performance, because the JIT can't optimize forEach()+lambda to the same extent as plain loops, especially now that lambdas are new. By "optimization" I do not mean the overhead of calling lambdas (which is small), but to the sophisticated analysis and transformation that the modern JIT compiler performs on running code.
If you do need parallelism, it is probably much faster and not much more difficult to use an ExecutorService. Streams are both automagical (read: don't know much about your problem) and use a specialized (read: inefficient for the general case) parallelization strategy (fork-join recursive decomposition).
Makes debugging more confusing, because of the nested call hierarchy and, god forbid, parallel execution. The debugger may have issues displaying variables from the surrounding code, and things like step-through may not work as expected.
Streams in general are more difficult to code, read, and debug. Actually, this is true of complex "fluent" APIs in general. The combination of complex single statements, heavy use of generics, and lack of intermediate variables conspire to produce confusing error messages and frustrate debugging. Instead of "this method doesn't have an overload for type X" you get an error message closer to "somewhere you messed up the types, but we don't know where or how." Similarly, you can't step through and examine things in a debugger as easily as when the code is broken into multiple statements, and intermediate values are saved to variables. Finally, reading the code and understanding the types and behavior at each stage of execution may be non-trivial.
Sticks out like a sore thumb. The Java language already has the for-each statement. Why replace it with a function call? Why encourage hiding side-effects somewhere in expressions? Why encourage unwieldy one-liners? Mixing regular for-each and new forEach willy-nilly is bad style. Code should speak in idioms (patterns that are quick to comprehend due to their repetition), and the fewer idioms are used the clearer the code is and less time is spent deciding which idiom to use (a big time-drain for perfectionists like myself!).
As you can see, I'm not a big fan of the forEach() except in cases when it makes sense.
Particularly offensive to me is the fact that Stream does not implement Iterable (despite actually having method iterator) and cannot be used in a for-each, only with a forEach(). I recommend casting Streams into Iterables with (Iterable<T>)stream::iterator. A better alternative is to use StreamEx which fixes a number of Stream API problems, including implementing Iterable.
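For illustration, a minimal self-contained sketch of that cast-to-Iterable trick (the class name and data here are made up):

import java.util.stream.Stream;

public class StreamIterableDemo {
    public static void main(String[] args) {
        Stream<String> stream = Stream.of("a", "b", "c");
        // A Stream is not Iterable, but its iterator() method matches the
        // Iterable functional interface, so a method reference plus a cast works:
        for (String s : (Iterable<String>) stream::iterator) {
            System.out.println(s);
        }
    }
}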
That said, forEach() is useful for the following:
Atomically iterating over a synchronized list. Prior to this, a list generated with Collections.synchronizedList() was atomic with respect to things like get or set, but was not thread-safe when iterating.
Parallel execution (using an appropriate parallel stream). This saves you a few lines of code vs using an ExecutorService, if your problem matches the performance assumptions built into Streams and Spliterators.
Specific containers which, like the synchronized list, benefit from being in control of iteration (although this is largely theoretical unless people can bring up more examples)
Calling a single function more cleanly by using forEach() and a method reference argument (ie, list.forEach (obj::someMethod)). However, keep in mind the points on checked exceptions, more difficult debugging, and reducing the number of idioms you use when writing code.
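As a tiny self-contained illustration of that last point (System.out::println is just a stand-in for your own method):

import java.util.Arrays;
import java.util.List;

public class ForEachMethodRef {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("alpha", "beta", "gamma");
        // Calling a single (unchecked) method per element via a method reference.
        list.forEach(System.out::println);
    }
}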
Articles I used for reference:
Everything about Java 8
Iteration Inside and Out (as pointed out by another poster)
EDIT: Looks like some of the original proposals for lambdas (such as http://www.javac.info/closures-v06a.html Google Cache) solved some of the issues I mentioned (while adding their own complications, of course).
The advantage comes into account when the operations can be executed in parallel. (See http://java.dzone.com/articles/devoxx-2012-java-8-lambda-and - the section about internal and external iteration)
The main advantage from my point of view is that the implementation of what is to be done within the loop can be defined without having to decide whether it will be executed in parallel or sequentially.
If you want your loop to be executed in parallel you could simply write
joins.parallelStream().forEach(join -> mIrc.join(mSession, join));
Without streams you would have to write some extra code for thread handling etc. yourself.
Note: For my answer I assumed joins to be a Collection (so that parallelStream() is available). If joins only implements java.lang.Iterable, this is no longer true.
When reading this question one can get the impression that Iterable#forEach in combination with lambda expressions is a shortcut/replacement for writing a traditional for-each loop. This is simply not true. This code from the OP:
joins.forEach(join -> mIrc.join(mSession, join));
is not intended as a shortcut for writing
for (String join : joins) {
mIrc.join(mSession, join);
}
and should certainly not be used in this way. Instead it is intended as a shortcut (although it is not exactly the same) for writing
joins.forEach(new Consumer<T>() {
    @Override
    public void accept(T join) {
        mIrc.join(mSession, join);
    }
});
And it is a replacement for the following Java 7 code:
final Consumer<T> c = new Consumer<T>() {
    @Override
    public void accept(T join) {
        mIrc.join(mSession, join);
    }
};
for (T t : joins) {
    c.accept(t);
}
Replacing the body of a loop with a functional interface, as in the examples above, makes your code more explicit: you are saying that (1) the body of the loop does not affect the surrounding code and control flow, and (2) the body of the loop may be replaced with a different implementation of the function without affecting the surrounding code. Not being able to access non-final variables of the outer scope is not a deficit of functions/lambdas; it is a feature that distinguishes the semantics of Iterable#forEach from the semantics of a traditional for-each loop. Once one gets used to the syntax of Iterable#forEach, it makes the code more readable, because you immediately get this additional information about the code.
Traditional for-each loops will certainly stay good practice (to avoid the overused term "best practice") in Java. But this doesn't mean that Iterable#forEach should be considered bad practice or bad style. It is always good practice to use the right tool for the job, and this includes mixing traditional for-each loops with Iterable#forEach where it makes sense.
Since the downsides of Iterable#forEach have already been discussed in this thread, here are some reasons why you might want to use Iterable#forEach:
To make your code more explicit: As described above, Iterable#forEach can make your code more explicit and readable in some situations.
To make your code more extensible and maintainable: Using a function as the body of a loop allows you to replace this function with different implementations (see the Strategy pattern). You could, e.g., easily replace the lambda expression with a method call that may be overridden by subclasses:
joins.forEach(getJoinStrategy());
Then you could provide default strategies using an enum that implements the functional interface. This not only makes your code more extensible, it also increases maintainability, because it decouples the loop implementation from the loop declaration.
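A minimal sketch of that enum idea, with invented names (JoinStrategy, getJoinStrategy) standing in for whatever your code would actually use:

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class JoinStrategyDemo {
    // Hypothetical default strategies provided as an enum implementing the functional interface.
    enum JoinStrategy implements Consumer<String> {
        DEFAULT {
            @Override public void accept(String join) { /* real join logic would go here */ }
        },
        DEBUG {
            @Override public void accept(String join) { System.out.println("joining " + join); }
        };

        // Each constant supplies its own accept implementation.
        @Override public abstract void accept(String join);
    }

    // Stands in for the getJoinStrategy() call above; the selection logic is up to you.
    static Consumer<String> getJoinStrategy() {
        return JoinStrategy.DEBUG;
    }

    public static void main(String[] args) {
        List<String> joins = Arrays.asList("#java", "#netbeans");
        joins.forEach(getJoinStrategy());
    }
}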
To make your code more debuggable: Separating the loop implementation from the declaration can also make debugging easier, because you could have a specialized debug implementation that prints out debug messages, without the need to clutter your main code with if(DEBUG)System.out.println(). The debug implementation could, e.g., be a delegate that decorates the actual function implementation.
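A minimal sketch of such a decorating delegate (withDebug and the log format are invented for illustration):

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class DebugDecoratorDemo {
    // Wraps any consumer so each element is logged before the real work runs.
    static <T> Consumer<T> withDebug(Consumer<T> actual) {
        return t -> {
            System.out.println("processing " + t); // debug output lives here only
            actual.accept(t);
        };
    }

    public static void main(String[] args) {
        List<String> joins = Arrays.asList("#java", "#netbeans");
        Consumer<String> join = channel -> { /* real join logic would go here */ };
        joins.forEach(withDebug(join));
    }
}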
To optimize performance-critical code: Contrary to some of the assertions in this thread, Iterable#forEach does already provide better performance than a traditional for-each loop, at least when using ArrayList and running HotSpot in "-client" mode. While this performance boost is small and negligible for most use cases, there are situations where this extra performance can make a difference. E.g. library maintainers will certainly want to evaluate whether some of their existing loop implementations should be replaced with Iterable#forEach.
To back this statement up with facts, I have done some micro-benchmarks with Caliper. Here is the test code (latest Caliper from git is needed):
@VmOptions("-server")
public class Java8IterationBenchmarks {

    public static class TestObject {
        public int result;
    }

    public @Param({"100", "10000"}) int elementCount;

    ArrayList<TestObject> list;
    TestObject[] array;

    @BeforeExperiment
    public void setup() {
        list = new ArrayList<>(elementCount);
        for (int i = 0; i < elementCount; i++) {
            list.add(new TestObject());
        }
        array = list.toArray(new TestObject[list.size()]);
    }

    @Benchmark
    public void timeTraditionalForEach(int reps) {
        for (int i = 0; i < reps; i++) {
            for (TestObject t : list) {
                t.result++;
            }
        }
    }

    @Benchmark
    public void timeForEachAnonymousClass(int reps) {
        for (int i = 0; i < reps; i++) {
            list.forEach(new Consumer<TestObject>() {
                @Override
                public void accept(TestObject t) {
                    t.result++;
                }
            });
        }
    }

    @Benchmark
    public void timeForEachLambda(int reps) {
        for (int i = 0; i < reps; i++) {
            list.forEach(t -> t.result++);
        }
    }

    @Benchmark
    public void timeForEachOverArray(int reps) {
        for (int i = 0; i < reps; i++) {
            for (TestObject t : array) {
                t.result++;
            }
        }
    }
}
And here are the results:
Results for -client and -server (charts not reproduced here; summarized below).
When running with "-client", Iterable#forEach outperforms the traditional for loop over an ArrayList, but is still slower than directly iterating over an array. When running with "-server", the performance of all approaches is about the same.
To provide optional support for parallel execution: It has already been said here that the possibility to execute the functional interface of Iterable#forEach in parallel using streams is certainly an important aspect. Since Collection#parallelStream() does not guarantee that the loop is actually executed in parallel, one must consider this an optional feature. By iterating over your list with list.parallelStream().forEach(...);, you explicitly say: this loop supports parallel execution, but it does not depend on it. Again, this is a feature and not a deficit!
By moving the decision for parallel execution away from your actual loop implementation, you allow optional optimization of your code, without affecting the code itself, which is a good thing. Also, if the default parallel stream implementation does not fit your needs, no one is preventing you from providing your own implementation. You could e.g. provide an optimized collection depending on the underlying operating system, on the size of the collection, on the number of cores, and on some preference settings:
public abstract class MyOptimizedCollection<E> implements Collection<E> {
    private enum OperatingSystem {
        LINUX, WINDOWS, ANDROID
    }
    private OperatingSystem operatingSystem = OperatingSystem.WINDOWS;
    private int numberOfCores = Runtime.getRuntime().availableProcessors();
    private Collection<E> delegate;

    @Override
    public Stream<E> parallelStream() {
        if (!System.getProperty("parallelSupport").equals("true")) {
            return this.delegate.stream();
        }
        switch (operatingSystem) {
            case WINDOWS:
                if (numberOfCores > 3 && delegate.size() > 10000) {
                    return this.delegate.parallelStream();
                } else {
                    return this.delegate.stream();
                }
            case LINUX:
                return SomeVerySpecialStreamImplementation.stream(this.delegate.spliterator());
            case ANDROID:
            default:
                return this.delegate.stream();
        }
    }
}
The nice thing here is that your loop implementation doesn't need to know or care about these details.
forEach() can be implemented to be faster than a for-each loop, because the iterable knows the best way to iterate its elements, as opposed to the standard iterator way. So the difference is looping internally versus looping externally.
For example ArrayList.forEach(action) may be simply implemented as
for (int i = 0; i < size; i++) {
    action.accept(elements[i]);
}
as opposed to the for-each loop which requires a lot of scaffolding
Iterator iter = list.iterator();
while (iter.hasNext()) {
    Object next = iter.next();
    // do something with next
}
However, we also need to account for two overhead costs by using forEach(), one is making the lambda object, the other is invoking the lambda method. They are probably not significant.
see also http://journal.stuffwithstuff.com/2013/01/13/iteration-inside-and-out/ for comparing internal/external iterations for different use cases.
TL;DR: List.stream().forEach() was the fastest.
I felt I should add my results from benchmarking iteration.
I took a very simple approach (no benchmarking frameworks) and benchmarked 5 different methods:
classic for
classic foreach
List.forEach()
List.stream().forEach()
List.parallelStream().forEach
The testing procedure and parameters:
private List<Integer> list;
private final int size = 1_000_000;

public MyClass() {
    list = new ArrayList<>();
    Random rand = new Random();
    for (int i = 0; i < size; ++i) {
        list.add(rand.nextInt(size * 50));
    }
}

private void doIt(Integer i) {
    i *= 2; //so it won't get JITed out
}
The list in this class is iterated over, and doIt(Integer i) is applied to all its members, each time via a different method.
In the Main class I run the tested method three times to warm up the JVM. I then run the test method 1000 times, summing the time each iteration takes (using System.nanoTime()). After that's done, I divide that sum by 1000 and that's the result: the average time.
example:
myClass.fored();
myClass.fored();
myClass.fored();
for (int i = 0; i < reps; ++i) {
    begin = System.nanoTime();
    myClass.fored();
    end = System.nanoTime();
    nanoSum += end - begin;
}
System.out.println(nanoSum / reps);
I ran this on an i5 4-core CPU with Java version 1.8.0_05.
classic for
for (int i = 0, l = list.size(); i < l; ++i) {
    doIt(list.get(i));
}
execution time: 4.21 ms
classic foreach
for (Integer i : list) {
    doIt(i);
}
execution time: 5.95 ms
List.forEach()
list.forEach((i) -> doIt(i));
execution time: 3.11 ms
List.stream().forEach()
list.stream().forEach((i) -> doIt(i));
execution time: 2.79 ms
List.parallelStream().forEach
list.parallelStream().forEach((i) -> doIt(i));
execution time: 3.6 ms
I feel that I need to extend my comment a bit...
About paradigm/style
That's probably the most notable aspect. FP became popular because of what you gain by avoiding side effects. I won't delve deep into the pros and cons you get from this, since it is not related to the question.
However, I will say that iteration using Iterable.forEach is inspired by FP, or rather is a result of bringing more FP into Java (ironically, I'd say there is not much use for forEach in pure FP, since it does nothing except introduce side effects).
In the end I would say it is rather a matter of the taste/style/paradigm you are currently writing in.
About parallelism
From a performance point of view there are no promised notable benefits of using Iterable.forEach over foreach(...).
According to the official docs on Iterable.forEach:
Performs the given action on the contents of the Iterable, in the order elements occur when iterating, until all elements have been processed or the action throws an exception.
...i.e. the docs are pretty clear that there will be no implicit parallelism. Adding it would be an LSP violation.
Now, there are "parallel collections" that are promised in Java 8, but to work with those you need to be more explicit and take some extra care to use them (see mschenk74's answer for example).
BTW: in this case Stream.forEach would be used, and it doesn't guarantee that the actual work will be done in parallel (it depends on the underlying collection).
UPDATE: it might not be that obvious, and a little stretched at first glance, but there is another facet of the style and readability perspective.
First of all, plain old for loops are plain and old. Everybody already knows them.
Second, and more important, you probably want to use Iterable.forEach only with one-liner lambdas. If the "body" gets heavier, they tend to be not that readable.
You have two options from here: use inner classes (yuck) or use a plain old for loop.
People often get annoyed when they see the same thing (iterating over collections) being done in various ways/styles in the same codebase, and this seems to be the case here.
Again, this might or might not be an issue. It depends on the people working on the code.
One of the most unpleasant limitations of the functional forEach is the lack of checked exception support.
One possible workaround is to replace the terminal forEach with a plain old for-each loop:
Stream<String> stream = Stream.of("", "1", "2", "3").filter(s -> !s.isEmpty());
Iterable<String> iterable = stream::iterator;
for (String s : iterable) {
    fileWriter.append(s);
}
Here is a list of the most popular questions with other workarounds for checked exception handling within lambdas and streams:
Java 8 Lambda function that throws exception?
Java 8: Lambda-Streams, Filter by Method with Exception
How can I throw CHECKED exceptions from inside Java 8 streams?
Java 8: Mandatory checked exceptions handling in lambda expressions. Why mandatory, not optional?
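Related to those workarounds, here is a minimal sketch of a wrapper that adapts an IOException-throwing consumer to a plain Consumer (the names Unchecked, IOConsumer, and io are made up for illustration, not taken from any library):

import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.function.Consumer;

public final class Unchecked {
    // A functional interface like Consumer, but allowed to throw IOException.
    @FunctionalInterface
    public interface IOConsumer<T> {
        void accept(T t) throws IOException;
    }

    // Wraps the throwing consumer; checked IOExceptions are rethrown unchecked.
    public static <T> Consumer<T> io(IOConsumer<T> consumer) {
        return t -> {
            try {
                consumer.accept(t);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        };
    }
}

With that, the earlier example could be written as stream.forEach(Unchecked.io(fileWriter::append)), at the cost of losing the checked nature of the exception.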
The advantage of the Java 1.8 forEach method over the 1.7 enhanced for loop is that while writing code you can focus on the business logic only.
The forEach method takes a java.util.function.Consumer object as an argument, so it helps keep your business logic in a separate location that you can reuse anytime.
Have a look at the snippet below.
Here I have created a new class that overrides the accept method from the Consumer interface, where you can add additional functionality beyond simple iteration.
class MyConsumer implements Consumer<Integer> {
    @Override
    public void accept(Integer o) {
        System.out.println("Here you can also add your business logic that will work with Iteration and you can reuse it." + o);
    }
}

public class ForEachConsumer {
    public static void main(String[] args) {
        // Creating a simple ArrayList.
        ArrayList<Integer> aList = new ArrayList<>();
        for (int i = 1; i <= 10; i++) aList.add(i);

        // Calling forEach with a customized Consumer.
        MyConsumer consumer = new MyConsumer();
        aList.forEach(consumer);

        // Using a lambda expression for Consumer (functional interface).
        Consumer<Integer> lambda = (Integer o) -> {
            System.out.println("Using Lambda Expression to iterate and do something else(BI).. " + o);
        };
        aList.forEach(lambda);

        // Using an anonymous inner class.
        aList.forEach(new Consumer<Integer>() {
            @Override
            public void accept(Integer o) {
                System.out.println("Calling with Anonymous Inner Class " + o);
            }
        });
    }
}


Implement Local Search (2-opt) to solve the TSP in Java

I am trying to implement this but I can't find a good paper or description of how to do it, could you guys point me in the right direction please? I do have an implementation of it in C# but I don't know enough to just convert the code to Java.
As per a comment I'm adding some of the C# Code I haven't been able to convert to Java:
//T with the smallest func(t)
static T MinBy<T, TComparable>(this IEnumerable<T> xs, Func<T, TComparable> func)
        where TComparable : IComparable<TComparable> {
    return xs.DefaultIfEmpty().Aggregate((maxSoFar, elem) => func(elem).CompareTo(func(maxSoFar)) > 0 ? maxSoFar : elem);
}

//returns an ordered set of nearest neighbors
static IEnumerable<Stop> NearestNeighbors(this IEnumerable<Stop> stops) {
    var stopsLeft = stops.ToList();
    for (var stop = stopsLeft.First(); stop != null; stop = stopsLeft.MinBy(s => Stop.Distance(stop, s))) {
        stopsLeft.Remove(stop);
        yield return stop;
    }
}
I assume you are not familiar with C#, so I will try to explain some of the things in short.
IEnumerable<T> is C#'s equivalent of Java's Iterable<T>.
Func<T, V> is an abstraction of a method whose input is T and whose return value is V. C#, unlike Java, supports closures, but they are effectively like Java anonymous classes without all the syntactic fuss around them. So basically, the second argument of MinBy is a means to extract from T the property that is relevant for the comparison. You could easily implement the very same abstraction with an anonymous class, but it would not be as concise.
The strange this modifier that comes before the first argument says that this is an extension method; it solely serves a syntactic-sugar purpose. When a method is defined like this, it means it can be called on an instance of the first argument (the one that has the this modifier before it). This allows you to write code like:
IEnumerable<String> seq = getS();
seq.MinBy(/*bla*/);
instead of explicitly specifying the Utility class the static method is defined in:
MyUtility.MinBy(s, /*bla*/);
You probably do not need this high level of abstraction (and let's face it, Java is simply not built for it today), so what you want to do is to define, instead of MinBy, a method that takes an Iterable leftStops and another Stop currentStop and finds the closest stop to currentStop among leftStops.
Something like:
Stop findClosest(Stop currentStop, Iterable<Stop> leftStops) { /* implement me */ }
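A hedged sketch of that helper, assuming a static Stop.distance(Stop, Stop) method mirroring Stop.Distance from the C# code:

static Stop findClosest(Stop currentStop, Iterable<Stop> leftStops) {
    Stop closest = null;
    double best = Double.MAX_VALUE;
    for (Stop s : leftStops) {
        double d = Stop.distance(currentStop, s);
        if (d < best) {
            best = d;
            closest = s;
        }
    }
    return closest; // null if leftStops is empty
}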
That done, let's turn to NearestNeighbors itself. What is that yield return? It is a very powerful way to implement iterators in .NET. I feel that a full explanation of its workings is beyond the scope of our discussion, so I have rewritten the method not to use this feature, in a way that preserves its functionality (and removed the this qualifier of its first argument):
static IEnumerable<Stop> NearestNeighbors(IEnumerable<Stop> stops) {
    var result = new List<Stop>();
    var stopsLeft = stops.ToList();
    for (var stop = stopsLeft.First(); stop != null; stop = stopsLeft.MinBy(s => Stop.Distance(stop, s))) {
        stopsLeft.Remove(stop);
        result.Add(stop);
    }
    return result;
}
So we are left with the following algorithm:
1. Input a list of Stops
2. next-stop = first-stop
3. Remove next-stop from the Stop list
4. Find the closest stop to next-stop and set next-stop = closest
5. If there are more stops, go to 3
6. Return the stops in the order they were visited
Hopefully it is clearer now.
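Putting the steps together, a hedged Java sketch of the nearest-neighbour ordering, reusing the findClosest helper sketched above (Stop and Stop.distance come from your own code; uses java.util.List and java.util.ArrayList):

static List<Stop> nearestNeighbors(List<Stop> stops) {
    List<Stop> stopsLeft = new ArrayList<Stop>(stops);
    List<Stop> result = new ArrayList<Stop>();
    Stop current = stopsLeft.isEmpty() ? null : stopsLeft.get(0); // step 2
    while (current != null) {
        stopsLeft.remove(current);                  // step 3
        result.add(current);                        // record visit order
        current = findClosest(current, stopsLeft);  // step 4; null when none left
    }
    return result;                                  // step 6
}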
