Do lambda expressions have any use other than saving lines of code? - java

Do lambda expressions have any use other than saving lines of code?
Are there any special features provided by lambdas which solved problems which weren't easy to solve? The typical usage I've seen is that instead of writing this:
Comparator<Developer> byName = new Comparator<Developer>() {
    @Override
    public int compare(Developer o1, Developer o2) {
        return o1.getName().compareTo(o2.getName());
    }
};
We can use a lambda expression to shorten the code:
Comparator<Developer> byName =
    (Developer o1, Developer o2) -> o1.getName().compareTo(o2.getName());

Lambda expressions do not change the set of problems you can solve with Java in general, but they definitely make solving certain problems easier, for the same reason we’re not programming in assembly language anymore. Removing redundant tasks from the programmer’s work makes life easier and allows you to do things you wouldn’t even touch otherwise, simply because of the amount of code you would have to produce (manually).
But lambda expressions are about more than just saving lines of code. Lambda expressions allow you to define functions, something for which you could previously use anonymous inner classes only as a workaround. That’s why you can replace anonymous inner classes with lambdas in these cases, but not in general.
Most notably, lambda expressions are defined independently of the functional interface they will be converted to, so there are no inherited members they could access; furthermore, they cannot access the instance of the type implementing the functional interface. Within a lambda expression, this and super have the same meaning as in the surrounding context, see also this answer. Also, you cannot declare new local variables shadowing local variables of the surrounding context. For the intended task of defining a function, this removes a lot of error sources, but it also implies that for other use cases there might be anonymous inner classes which cannot be converted to a lambda expression, even if they implement a functional interface.
Further, the construct new Type() { … } is guaranteed to produce a new, distinct instance (as new always does). Anonymous inner class instances always keep a reference to their outer instance if created in a non-static context¹. In contrast, lambda expressions only capture a reference to this when needed, i.e. if they access this or a non-static member. And they produce instances of an intentionally unspecified identity, which allows the implementation to decide at runtime whether to reuse existing instances (see also “Does a lambda expression create an object on the heap every time it's executed?”).
These differences apply to your example. Your anonymous inner class construct will always produce a new instance, and it may also capture a reference to the outer instance, whereas your (Developer o1, Developer o2) -> o1.getName().compareTo(o2.getName()) is a non-capturing lambda expression that will evaluate to a singleton in typical implementations. Further, it doesn’t produce a .class file on your hard drive.
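To make the identity difference tangible, here is a minimal sketch (the class and method names are just for illustration; the first comparison typically prints true on current HotSpot JDKs because the non-capturing lambda instance is cached and reused, but the specification deliberately leaves that unspecified):
import java.util.Comparator;

public class IdentityDemo {
    static Comparator<String> lambda() {
        return (a, b) -> a.compareTo(b);          // non-capturing lambda
    }

    static Comparator<String> anonymous() {
        return new Comparator<String>() {         // anonymous inner class
            @Override
            public int compare(String a, String b) {
                return a.compareTo(b);
            }
        };
    }

    public static void main(String[] args) {
        System.out.println(lambda() == lambda());       // usually true, but unspecified
        System.out.println(anonymous() == anonymous()); // always false: new always creates a distinct instance
    }
}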
Given the differences regarding both semantics and performance, lambda expressions may change the way programmers solve certain problems in the future, of course also due to the new APIs embracing ideas of functional programming and utilizing the new language features. See also Java 8 lambda expression and first-class values.
¹ From JDK 1.1 to JDK 17. Starting with JDK 18, inner classes may not retain a reference to the outer instance if it is not used. For compatibility reasons, this requires the inner class not be serializable. This only applies if you (re)compile the inner class under JDK 18 or newer with target JDK 18 or newer. See also JDK-8271717

Programming languages are not for machines to execute.
They are for programmers to think in.
Languages are a conversation with a compiler to turn our thoughts into something a machine can execute. One of the chief complaints about Java from people who come to it from other languages (or leave it for other languages) used to be that it forces a certain mental model on the programmer (i.e. everything is a class).
I'm not going to weigh in on whether that's good or bad: everything is trade-offs. But Java 8 lambdas allow programmers to think in terms of functions, which is something you previously could not do in Java.
It's the same thing as a procedural programmer learning to think in terms of classes when they come to Java: you see them gradually move from writing classes that are glorified structs, with 'helper' classes full of static methods, toward something that more closely resembles a rational OO design (mea culpa).
If you just think of them as a shorter way to express anonymous inner classes then you are probably not going to find them very impressive in the same way that the procedural programmer above probably didn't think classes were any great improvement.

Saving lines of code can be viewed as a new feature, if it enables you to write a substantial chunk of logic in a shorter and clearer manner, which takes less time for others to read and understand.
Without lambda expressions (and/or method references) Stream pipelines would have been much less readable.
Think, for example, how the following Stream pipeline would have looked if you replaced each lambda expression with an anonymous class instance:
List<String> names =
    people.stream()
          .filter(p -> p.getAge() > 21)
          .map(p -> p.getName())
          .sorted((n1, n2) -> n1.compareToIgnoreCase(n2))
          .collect(Collectors.toList());
It would be:
List<String> names =
    people.stream()
          .filter(new Predicate<Person>() {
              @Override
              public boolean test(Person p) {
                  return p.getAge() > 21;
              }
          })
          .map(new Function<Person, String>() {
              @Override
              public String apply(Person p) {
                  return p.getName();
              }
          })
          .sorted(new Comparator<String>() {
              @Override
              public int compare(String n1, String n2) {
                  return n1.compareToIgnoreCase(n2);
              }
          })
          .collect(Collectors.toList());
This is much harder to write than the version with lambda expressions, and it's much more error prone. It's also harder to understand.
And this is a relatively short pipeline.
To make this readable without lambda expressions and method references, you would have had to define variables that hold the various functional interface instances being used here, which would have split the logic of the pipeline, making it harder to understand.
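For illustration, here is roughly what that would look like (a sketch assuming the same hypothetical Person class as above); the condition, the mapping and the ordering end up in named variables, away from the pipeline that uses them:
Predicate<Person> isOverTwentyOne = new Predicate<Person>() {
    @Override
    public boolean test(Person p) {
        return p.getAge() > 21;
    }
};
Function<Person, String> toName = new Function<Person, String>() {
    @Override
    public String apply(Person p) {
        return p.getName();
    }
};
Comparator<String> caseInsensitive = new Comparator<String>() {
    @Override
    public int compare(String n1, String n2) {
        return n1.compareToIgnoreCase(n2);
    }
};
List<String> names =
    people.stream()
          .filter(isOverTwentyOne)
          .map(toName)
          .sorted(caseInsensitive)
          .collect(Collectors.toList());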

Internal iteration
When iterating Java collections, most developers tend to get an element and then process it, that is, take the item out, use it, perhaps put it back, and so on. Without lambda expressions, you would have to pass that behaviour with an anonymous inner class and do something like:
numbers.forEach(new Consumer<Integer>() {
    @Override
    public void accept(Integer value) {
        System.out.println(value);
    }
});
Now with Java 8 you can do better, with far less verbosity:
numbers.forEach((Integer value) -> System.out.println(value));
or better
numbers.forEach(System.out::println);
Behaviors as arguments
Consider the following case:
public int sumAllEven(List<Integer> numbers) {
    int total = 0;
    for (int number : numbers) {
        if (number % 2 == 0) {
            total += number;
        }
    }
    return total;
}
With the Java 8 Predicate interface you can do better, like so:
public int sumAll(List<Integer> numbers, Predicate<Integer> p) {
    int total = 0;
    for (int number : numbers) {
        if (p.test(number)) {
            total += number;
        }
    }
    return total;
}
Calling it like:
sumAll(numbers, n -> n % 2 == 0);
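The point of this refactoring is that the same method now accepts arbitrary behaviours; for example (hypothetical calls, reusing the numbers list from above):
int evens = sumAll(numbers, n -> n % 2 == 0);   // sum of the even numbers
int odds  = sumAll(numbers, n -> n % 2 != 0);   // sum of the odd numbers
int total = sumAll(numbers, n -> true);         // sum of everything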
Source: DZone - Why We Need Lambda Expressions in Java

There are many benefits to using lambdas instead of anonymous inner classes, including the following:
They make the code more compact and expressive without introducing additional language semantics. You already gave an example in your question.
By using lambdas you can program with functional-style operations on streams of elements, such as map-reduce transformations on collections. See the java.util.function and java.util.stream package documentation.
There is no physical class file generated for a lambda by the compiler, which makes your delivered applications smaller. See How Memory assigns to lambda?
The compiler will optimize lambda creation if the lambda doesn't capture variables outside of its scope, which means the lambda instance is only created once by the JVM. For more details, see @Holger's answer to the question Is method reference caching a good idea in Java 8?
Lambdas can implement multiple marker interfaces in addition to the functional interface (via an intersection cast), whereas anonymous inner classes cannot implement additional interfaces. For example:
// v--- create the lambda locally.
Consumer<Integer> action = (Consumer<Integer> & Serializable) it -> {/*TODO*/};

Lambdas are just syntactic sugar for anonymous classes.
Before lambdas, anonymous classes could be used to achieve the same thing. Every lambda expression can be converted to an anonymous class.
If you are using IntelliJ IDEA, it can do the conversion for you:
Put the cursor in the lambda
Press alt/option + enter

To answer your question, the fact of the matter is that lambdas don’t let you do anything you couldn’t do prior to Java 8; rather, they enable you to write more concise code. The benefit is that your code will be clearer and more flexible.

One thing I don't see mentioned yet is that a lambda lets you define functionality where it's used.
So if you have some simple selection function you don't need to put it in a separate place with a bunch of boilerplate, you just write a lambda that's concise and locally relevant.
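For example, a one-off selection rule can live exactly where it is needed; a minimal sketch (the list and the condition are made up for illustration):
List<String> names = new ArrayList<>(Arrays.asList("Ann", "Bob", "Alice"));
// The selection logic is written right at the call site; no separate class needed.
names.removeIf(name -> name.startsWith("A"));
System.out.println(names);   // [Bob]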

Yes, there are many advantages:
There is no need to define a whole class; we can pass an implementation of the function itself as a reference.
Creating a class normally produces a .class file, whereas with a lambda the compiler avoids generating a class, because you are passing a function implementation instead of a class.
Code reusability is higher than before.
And, as you said, the code is shorter than a normal implementation.

Function composition and higher order functions.
Lambda functions can be used as building blocks towards building "higher order functions" or performing "function composition". Lambda functions can be seen as reusable building blocks in this sense.
Example of Higher Order Function via lambda:
Function<IntUnaryOperator, IntUnaryOperator> twice = f -> f.andThen(f);
IntUnaryOperator plusThree = i -> i + 3;
var g = twice.apply(plusThree);        // i -> (i + 3) + 3
System.out.println(g.applyAsInt(7));   // prints 13
Example of Function Composition:
Predicate<String> startsWithA = (text) -> text.startsWith("A");
Predicate<String> endsWithX = (text) -> text.endsWith("x");
Predicate<String> startsWithAAndEndsWithX =
(text) -> startsWithA.test(text) && endsWithX.test(text);
String input = "A hardworking person must relax";
boolean result = startsWithAAndEndsWithX.test(input);
System.out.println(result);
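As a side note, the same composition can also be expressed with Predicate's built-in default methods and, or and negate, which themselves return new predicates:
Predicate<String> composed = startsWithA.and(endsWithX);
System.out.println(composed.test(input));   // true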

One benefit not yet mentioned is my favorite: lambdas make deferred execution really easy to write.
Log4j2 uses this for example, where instead of passing a value to conditionally log (a value that may have been expensive to calculate), you can now pass a lambda to calculate that expensive value. The difference being that before, that value was being calculated every time whether it got used or not, whereas now with lambdas if your log level decides not to log that statement, then the lambda never gets called, and that expensive calculation never takes place -- a performance boost!
Could that be done without lambdas? Yes, by surrounding each log statement with if() checks, or using verbose anonymous class syntax, but at the cost of horrible code noise.
Similar examples abound. Lambdas are like having your cake and eating it too: all the efficiency of gnarly multi-line optimized code squeezed down into the visual elegance of one-liners.
Edit: As requested by commenter, an example:
Old way, where expensiveCalculation() always gets called regardless of whether this log statement will actually use it:
logger.trace("expensive value was {}", expensiveCalculation());
New lambda efficient way, where expensiveCalculation() call won't happen unless trace log level is enabled:
logger.trace("expensive value was {}", () -> expensiveCalculation());
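To see why this is cheap, here is a minimal sketch of how such a Supplier-based overload could work internally (a hypothetical logger for illustration, not the actual Log4j2 implementation):
import java.util.function.Supplier;

class LazyLogger {
    private final boolean traceEnabled;

    LazyLogger(boolean traceEnabled) {
        this.traceEnabled = traceEnabled;
    }

    void trace(String message, Supplier<?> lazyValue) {
        if (traceEnabled) {
            // The expensive computation only runs if tracing is actually enabled.
            System.out.println(message.replace("{}", String.valueOf(lazyValue.get())));
        }
    }
}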

Related

List forEach with method reference explanation

I have been learning Java for the past few months and just started to get into lambda functions. I recently switched my IDE and noticed a warning saying "Can be replaced with method reference" on code like this:
List<Integer> intList = new ArrayList<>();
intList.add(1);
intList.add(2);
intList.add(3);
intList.forEach(num -> doSomething(num));

void doSomething(int num) {
    System.out.println("Number is: " + num);
}
After some digging, I realized that instead of the line
intList.forEach(num -> doSomething(num));
I can just use
intList.forEach(this::doSomething);
This is just amazing. A few days ago I did not even know about lambdas and was using for loops to do operations like this. Now I have replaced my for loops with lambdas and, even better, I can replace my lambdas with method references. The problem is that I don't really understand how all this works internally. Can anyone please explain or provide a good resource explaining how the doSomething function is called and the argument is passed to it when we use a method reference?
The double-colon operator is simply a convenience operator for doing the same thing that your lambda is doing. Check out this page for more details: https://javapapers.com/core-java/java-method-reference/
The double colon is simply syntactic sugar for defining a lambda expression whose parameters and return type are the same as an existing method. It was created to allow lambdas to be adopted more easily in existing codebases.
The forEach method of a List<Integer> takes as its parameter any object implementing the Consumer functional interface. Your lambda num -> doSomething(num) happens to fulfill the formal requirements of this interface.
Thus, you can use the double colon as syntactic sugar for that lambda expression.
In general, if you have an object obj with method func, which accepts parameters params... then writing obj::func is equivalent to the lambda (params...) -> obj.func(params...).
In your case, obj is this (the current object), which has a method doSomething that takes an integer parameter; thus, this::doSomething is equivalent to num -> doSomething(num).
Given that you've mentioned you only recently started getting into functional programming, I'd like to keep things as simple and straightforward as possible. But note that even with just the little code you've provided, we can derive a lot, from the high-level view of things as well as the low-level view.
Can anyone please explain or provide a good resource explaining how
the doSomething function is called and the argument is passed to it
when we use method reference?
How the doSomething function is called is left to the library (internal iteration), regardless of whether we use a method reference or a lambda expression. Essentially we specify the what, not the how: we provide to the forEach method a behaviour (a function) that we want executed for each element of the source intList, not how it should go about its work.
It is then left to the library to apply (execute) the specified doSomething function for each element of the source intList, roughly as sketched below.
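For reference, the body of the default Iterable.forEach method looks roughly like this (slightly simplified from the JDK source):
default void forEach(Consumer<? super T> action) {
    Objects.requireNonNull(action);
    for (T t : this) {
        action.accept(t);   // your lambda or this::doSomething is invoked here
    }
}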
Method references can be seen as shorthand for lambdas that call only a specific method. The benefit is that by referring to a specific method by name, your code gains readability; in most cases code written with method references reads like the problem statement, which is a good thing.
It's also important to know that not just any function can be passed to the forEach operation, as every method that accepts a behaviour restricts the type of function allowed. This is accomplished with the functional interfaces in the java.util.function package.
Last but not least, in terms of refactoring it's not always possible to use method references, nor is it always better to use lambda expressions over the code we wrote prior to Java 8. However, as you go on with your journey of learning the Java 8 features, a few tips to improve your code are to try:
Refactoring anonymous classes to lambda expressions
Refactoring lambda expressions to method references
Refactoring imperative-style data processing to streams

Java lambdas: replace -> operator with :: for stream filter

There is an event class:
public class Event {
    private int index;

    public int getIndex() {
        return index;
    }
}
There is also a method that selects a sublist of events with certain values of the "index" property. Extremely simple, but such functionality is widely used.
public List<Event> select(List<Event> scenario, List<Integer> indexesToInclude) {
    Predicate<Event> indexMatcher = e -> indexesToInclude.contains(e.getIndex());
    return scenario.stream().filter(indexMatcher).collect(Collectors.toList());
}
The task is to avoid the use of the -> operator in favor of the :: operator. Why? Because e -> ... looks like a workaround for such a common task.
Is it possible to do?
I expect syntax like (this won't compile of course):
Predicate<Event> indexMatcher = { indexesToInclude.contains(Event::getIndex) };
However, it could be a chain of methods or some other solution that avoids writing loops or creating new classes/methods.
Is it possible to do?
No. Lambda expressions (the so-called "workaround") are the way to do this. That's what they were added to the language for.
(Actually ... you could do this the old-school way by defining an anonymous inner class. But it won't be a one-liner.)
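For completeness, the old-school anonymous inner class version alluded to above would look something like this:
Predicate<Event> indexMatcher = new Predicate<Event>() {
    @Override
    public boolean test(Event e) {
        return indexesToInclude.contains(e.getIndex());
    }
};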
Why? Because e -> ... looks like a workaround for such common task.
I guess, it depends on your perspective. For instance, a syntax purist might consider s1 + i as a "workaround" for s1.concat(Integer.toString(i)).
In fact, these things are generally called "syntactic sugar" ... and they are added to a language to make it easier to write concise and readable code.
Obviously, to be able to read the code you first need to understand the syntax, then you need to get used to it.
Unfortunately, the real problem here seems to be that you don't like the Java lambda syntax. Sorry, but you will just need to get used to it. Fighting it is not going to work.

Predicates vs if statements

I have seen in some projects that people use Predicates instead of pure if statements, as illustrated with a simple example below:
int i = 5;

// Option 1
if (i == 5) {
    // Do something
    System.out.println("if statement");
}

// Option 2
Predicate<Integer> predicate = integer -> integer == 5;
if (predicate.test(i)) {
    // Do something
    System.out.println("predicate");
}
What's the point of preferring Predicates over if statements?
Using a predicate makes your code more flexible.
Instead of writing a condition that always checks if i == 5, you can write a condition that evaluates a Predicate, which allows you to pass different Predicates implementing different conditions.
For example, the Predicate can be passed as an argument to a method:
public void someMethod(Predicate<Integer> predicate) {
    if (predicate.test(i)) {
        // do something
        System.out.println("predicate");
    }
    ...
}
This is how the filter method of Stream works.
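For example, the same method can then be driven by different conditions without touching its body (hypothetical calls, assuming i is a field visible to someMethod):
someMethod(integer -> integer == 5);       // the original check
someMethod(integer -> integer % 2 == 0);   // "is even"
someMethod(integer -> integer > 100);      // "is large"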
For the exact example that you provided, using a Predicate is serious overkill. The compiler and then the runtime will create:
a method (the de-sugared predicate body)
a class that will implement java.util.function.Predicate
an instance of the class created in step 2
all this versus a simple if statement.
And all this for a stateless Predicate. If your predicate is stateful (capturing), like:
Predicate<Integer> p = (Integer j) -> this.isJGood(j); // you are capturing "this"
then every time this lambda expression is evaluated, a new instance will be created (at least under the current JVM).
The only viable reason, IMO, to create such a Predicate is, of course, to reuse it in multiple places (like passing it as an argument to methods).
Using if statements is the best (read: most performant) way to check binary conditions.
The switch statement may be faster for more complex situations.
A Predicate is a special form of Function. In fact, the Java language architects are working on a way to allow primitive types as generic type arguments. That would make Predicate<T> roughly equivalent to Function<T, boolean> (modulo the test vs apply method name).
If a function (resp. method) takes one or more functions as argument(s), we call it a higher-order function. We say that we are passing behaviour to a function. This allows us to create powerful APIs.
String result = Match(arg).of(
    Case(isIn("-h", "--help"), help()),
    Case(isIn("-v", "--version"), version()),
    Case($(), cmd -> "unknown command: " + cmd)
);
This example is taken from Javaslang, a library for object-functional programming in Java 8+.
Disclaimer: I'm the creator of Javaslang.
This is an old question, but I'll give it a try, since I am battling with it myself...
In my attempt to excuse my own usage of predicates I have made a self-rule.
I believe Predicates are useful where the "logic point" is NOT the leaf / corner / end of a graph, tree, or straight line, which would make the logic point effectively a "logic joint".
By being a joint (a.k.a. node) it has a state, a reusable and mutable state, that serves as a means towards an end.
In a stream, where the data is supposed to traverse a path, predicates are useful since they grant a degree of access while keeping the integrity of the stream; this is why the best predicates, IMO, are plain method references, minimizing side effects.
The most common form of predicate is newObject.equals(old), which is in itself a BiPredicate, but it CAN be expressed as a single Predicate with a side-effecting lambda, lambda -> lambda.equals(localCache) (so this may be an exception to the only-method-references rule).
IF the logic serves as the output/exit point towards a different architectural design, or component, or code that is not written by you (or, even if written by you, code that differs in its functionality), then an if-else is my way to go.
Another benefit of predicates in the case of reactive programming is that multiple subscribers can make use of the same defined logic gate.
But if the end point of a publisher will be a single lone subscriber (which would be a case similar to your example if I'm reaching), then the logic is better done with an if-else.

How can I write a higher order function like map, or reduce in java?

I read an article on Joel On Software about the idea of using higher order functions to greatly simplify code through the use of map and reduce. He mentioned that this was difficult to do in Java. The article: http://www.joelonsoftware.com/items/2006/08/01.html
The example from the article, shown below, loops through an array and applies the function fn, passed as an argument, to each element of the array:
function map(fn, a)
{
    for (i = 0; i < a.length; i++)
    {
        a[i] = fn(a[i]);
    }
}
This would be invoked similar to the below in practice:
map( function(x){return x*2;}, a );
map( alert, a );
Ideally I'd like to write a map function to work on arrays, or Collections of any type if possible.
I have been looking around on the Internet, and I am having a difficult time finding resources on the subject. Firstly, are anonymous functions possible in Java? Is this possible to do in another way? Will it be available in a future version of Java? If possible, how can I do it?
I imagine that if this is not possible in Java there is some kind of 'pattern'/technique that people use to achieve the same effect, as I imagine anonymous functions are a very powerful tool in the software world. The only similar question I was able to find was this: Java generics - implementing higher order functions like map, and it makes absolutely no sense to me.
Guava provides map (but it's called transform instead, and is in utility classes like Lists and Collections2). It doesn't provide fold/reduce, however.
In any case, the syntax for using transform feels really clunky compared to using map in Scheme. It's a bit like trying to write with your left hand, if you're right-handed. But, this is Java; what do you expect. :-P
Looks like this one?
How can I write an anonymous function in Java?
P.S: try Functional Java. Maybe it could give you hints.
Single method anonymous classes provide a similar, but much more verbose, way of writing an anonymous function in Java.
For example, you could have:
Iterable<Source> foos = ...;
Iterable<Destination> mappedFoos = foos.map(new Function<Source, Destination>()
{
public Destination apply(Source item) { return ... }
});
For an example of a Java library with a functional style, see Guava
interface Func<V, A> {
    V call(A a);
}

static <V, A> List<V> map(Func<V, A> func, List<A> as) {
    List<V> vs = new ArrayList<V>(as.size());
    for (A a : as) {
        vs.add(func.call(a));
    }
    return vs;
}
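For reference, with Java 8 this hand-rolled Func interface and map helper are essentially covered by the standard library (java.util.function.Function plus Stream.map); a roughly equivalent sketch:
static <A, V> List<V> map(Function<A, V> func, List<A> as) {
    return as.stream()
             .map(func)
             .collect(Collectors.toList());
}
// e.g. map(x -> x * 2, Arrays.asList(1, 2, 3)) yields [2, 4, 6]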
Paguro has an open-source implementation of higher order functions. Initial tests show it to be 98% as fast as the native Java forEach loop. The operations it supports are applied lazily without modifying the underlying collection. It outputs to type-safe versions of the immutable (and sometimes mutable) Clojure collections. Transformable is built into Paguro's unmodifiable and immutable collections and interfaces. To use a raw java.util collection as input, just wrap it with the xform() function.

Complexity of Java 7's current Lambda proposal? (August 2010)

Some people say that every programming language has its "complexity budget" which it can use to accomplish its purpose. But if the complexity budget is depleted, every minor change becomes increasingly complicated and hard to implement in a backward-compatible way.
After reading the current provisional syntax for Lambda (≙ Lambda expressions, exception transparency, defender methods and method references) from August 2010 I wonder if people at Oracle completely ignored Java's complexity budget when considering such changes.
These are the questions I'm thinking about - some of them more about language design in general:
Are the proposed additions comparable in complexity to approaches other languages chose?
Is it generally possible to add such additions to a language while protecting the developer from the complexity of the implementation?
Are these additions a sign of reaching the end of the evolution of Java-as-a-language or is this expected when changing a language with a huge history?
Have other languages taken a totally different approach at this point of language evolution?
Thanks!
I have not followed the process and evolution of the Java 7 lambda
proposal, I am not even sure of what the latest proposal wording is.
Consider this as a rant/opinion rather than statements of truth. Also,
I have not used Java for ages, so the syntax might be rusty and
incorrect at places.
First, what are lambdas to the Java language? Syntactic sugar. While
in general lambdas enable code to create small function objects in
place, that support was already present --to some extent-- in the Java
language through the use of inner classes.
So how much better is the syntax of lambdas? Where does it outperform
previous language constructs? Where could it be better?
For starters, I dislike the fact that there are two available syntaxes
for lambda functions (but this goes in the line of C#, so I guess my
opinion is not widespread). I guess if we want to sugar coat, then
#(int x)(x*x) is sweeter than #(int x){ return x*x; } even if the
double syntax does not add anything else. I would have preferred the
second syntax, more generic at the extra cost of writing return and
; in the short versions.
To be really useful, lambdas can take variables from the scope in
where they are defined and from a closure. Being consistent with
Inner classes, lambdas are restricted to capturing 'effectively
final' variables. Consistency with the previous features of the
language is a nice feature, but for sweetness, it would be nice to be
able to capture variables that can be reassigned. For that purpose,
they are considering that variables present in the context and
annotated with #Shared will be captured by-reference, allowing
assignments. To me this seems weird as how a lambda can use a variable
is determined at the place of declaration of the variable rather than
where the lambda is defined. A single variable could be used in more
than one lambda and this forces the same behavior in all of them.
Lambdas try to simulate actual function objects, but the proposal does
not get completely there: to keep the parser simple (since up to now
an identifier denotes either an object or a method, and that has been
kept consistent), calling a lambda requires using a ! after the lambda
name: #(int x)(x*x)!(5) will return 25. This brings a new syntax
to use for lambdas that differs from the rest of the language, where
! stands somehow as a synonym for .execute on a virtual generic
interface Lambda<Result,Args...>. But why not make it complete?
A new generic (virtual) interface Lambda could be created. It would
have to be virtual as the interface is not a real interface, but a
family of such: Lambda<Return>, Lambda<Return,Arg1>,
Lambda<Return,Arg1,Arg2>... They could define a single execution
method, which I would like to be like C++ operator(), but if that is
a burden then any other name would be fine, embracing the ! as a
shortcut for the method execution:
interface Lambda<R> {
R exec();
}
interface Lambda<R,A> {
R exec( A a );
}
Then the compiler need only translate identifier!(args) to
identifier.exec( args ), which is simple. The translation of the
lambda syntax would require the compiler to identify the proper
interface being implemented and could be matched as:
#( int x )(x *x)
// translated to
new Lambda<int,int>{ int exec( int x ) { return x*x; } }
This would also allow users to define inner classes that can be used
as lambdas in more complex situations. For example, if a lambda
function needed to capture a variable annotated as #Shared in a
read-only manner, or maintain the state of the captured object at the
place of capture, a manual implementation of the Lambda would be
available:
new Lambda<int,int>{ int value = context_value;
int exec( int x ) { return x * context_value; }
};
In a manner similar to what the current Inner classes definition is,
and thus being natural to current Java users. This could be used,
for example, in a loop to generate multiplier lambdas:
Lambda<int,int> array[10] = new Lambda<int,int>[10]();
for (int i = 0; i < 10; ++i ) {
array[i] = new Lambda<int,int>{ final int multiplier = i;
int exec( int x ) { return x * multiplier; }
};
}
// note this is disallowed in the current proposal, as `i` is
// not effectively final and as such cannot be 'captured'. Also
// if `i` was marked #Shared, then all the lambdas would share
// the same `i` as the loop and thus would produce the same
// result: multiply by 10 --probably quite unexpectedly.
//
// I am aware that this can be rewritten as:
// for (int ii = 0; ii < 10; ++ii ) { final int i = ii; ...
//
// but that is not simplifying the system, just pushing the
// complexity outside of the lambda.
This would allow usage of lambdas and methods that accept lambdas both
with the new simple syntax: #(int x){ return x*x; } or with the more
complex manual approach for specific cases where the sugar coating
interferes with the intended semantics.
Overall, I believe that the lambda proposal can be improved in
different directions, that the way it adds syntactic sugar is a
leaking abstraction (you have to deal externally with issues that are
particular to the lambda), and that by not providing a lower level
interface it makes user code less readable in use cases that do not
perfectly fit the simple use case.
Modulo some scope-disambiguation constructs, almost all of these methods follow from the actual definition of a lambda abstraction:
λx.E
To answer your questions in order:
I don't think there are any particular things that make the proposals by the Java community better or worse than anything else. As I said, it follows from the mathematical definition, and therefore all faithful implementations are going to have almost exactly the same form.
Anonymous first-class functions bolted onto imperative languages tend to end up as a feature that some programmers love and use frequently, and that others ignore completely - therefore it is probably a sensible choice to give it some syntax that will not confuse the kinds of people who choose to ignore the presence of this particular language feature. I think hiding the complexity and particulars of implementation is what they have attempted to do by using syntax that blends well with Java, but which has no real connotation for Java programmers.
It's probably desirable for them to use some bits of syntax that are not going to complicate existing definitions, and so they are slightly constrained in the symbols they can choose to use as operators and such. Certainly Java's insistence on remaining backwards-compatible limits the language evolution slightly, but I don't think this is necessarily a bad thing. The PHP approach is at the other end of the spectrum (i.e. "let's break everything every time there is a new point release!"). I don't think that Java's evolution is inherently limited except by some of the fundamental tenets of its design - e.g. adherence to OOP principles, VM-based.
I think it's very difficult to make strong statements about language evolution from Java's perspective. It is in a reasonably unique position. For one, it's very, very popular, but it's relatively old. Microsoft had the benefit of at least 10 years worth of Java legacy before they decided to even start designing a language called "C#". The C programming language basically stopped evolving at all. C++ has had few significant changes that found any mainstream acceptance. Java has continued to evolve through a slow but consistent process - if anything I think it is better-equipped to keep on evolving than any other languages with similarly huge installed code bases.
It's not much more complicated than lambda expressions in other languages.
Consider...
int square(int x) {
    return x*x;
}
Java:
#(x){x*x}
Python:
lambda x:x*x
C#:
x => x*x
I think the C# approach is slightly more intuitive. Personally I would prefer...
x#x*x
Maybe this is not really an answer to your question, but this may be comparable to the way Objective-C (which of course has a very narrow user base in contrast to Java) was extended by blocks (examples). While the syntax does not fit the rest of the language (IMHO), it is a useful addition, and the added complexity in terms of language features is rewarded, for example, with lower complexity of concurrent programming (simple things like concurrent iteration over an array or complicated techniques like Grand Central Dispatch).
In addition, many common tasks are simpler when using blocks, for example making one object a delegate (or, in Java lingo, a "listener") for multiple instances of the same class. In Java, anonymous classes can already be used for that purpose, so programmers know the concept and can simply save a few lines of source code using lambda expressions.
In Objective-C (or the Cocoa/Cocoa Touch frameworks), new functionality is now often only accessible using blocks, and it seems like programmers are adopting it quickly (given that they have to give up backwards compatibility with old OS versions).
This is really, really close to the lambda functions proposed in the new generation of C++ (C++0x),
so I think the Oracle guys looked at other implementations before cooking up their own.
http://en.wikipedia.org/wiki/C%2B%2B0x
[](int x, int y) { return x + y; }
