Has anybody written a library in Java that provides mapping functions (such as mapcar from Lisp)?
I saw this post and a few others (such as this one and this one), but sadly nothing that I could consider mainstream and/or usable.
There are a few. They are usually described as something like "functional programming in Java" libraries rather than by reference to LISP.
At my company, the functional programming druids settled on Functional Java as their preferred library, although there is a significant and vocal minority who prefer the functional-esque provisions in Guava.
Guava is a very mainstream and popular library; it's firmly in the "nobody got fired for using" category. FJ may be less well known, but we're using it pretty happily. We've even forked it so we can help improve it.
You'll be looking forward to Java 8, then! It will have Project Lambda included, which has a much, much nicer syntax for closure-like anonymous classes.† Example:
Iterable<String> strs = ...
Iterable<String> downCased = strs.map(s -> s.toLowerCase());
Any interface with a single abstract method can use this syntax, including Guava's Function and Predicate (Java 8 also brings its own functional interfaces, so these work out of the box). In this example, Iterable.map is a proposed extension method that takes a new interface type called Mapper.
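For reference, against the java.util.function.Function and Stream.map API that Java 8 ultimately shipped (rather than a Mapper interface on Iterable), a minimal, self-contained version of the same idea looks like this:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LowerCaseDemo {
    public static void main(String[] args) {
        List<String> strs = Arrays.asList("Foo", "BAR", "Baz");
        // Stream.map takes a java.util.function.Function; the lambda supplies it.
        List<String> downCased = strs.stream()
                                     .map(s -> s.toLowerCase())
                                     .collect(Collectors.toList());
        System.out.println(downCased); // [foo, bar, baz]
    }
}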
If you'd like more examples of Java 8 lambdas, just ask!
† All the usual restrictions of anonymous classes still apply, including that local free variables must be "effectively final". This means you don't have to explicitly tag the variable as final, but you're still not allowed to alter the value.
Is it a safe practice to use default methods as a poor man's version of traits in Java 8?
Some claim it may make pandas sad if you use them just for the sake of it, because it's cool, but that's not my intention. It is also often pointed out that default methods were introduced to support API evolution and backward compatibility, which is true, but this does not make it wrong or twisted to use them as traits per se.
I have the following practical use case in mind:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public interface Loggable {
    default Logger logger() {
        return LoggerFactory.getLogger(this.getClass());
    }
}
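For example, a class would pick up the logger just by implementing the interface (OrderService here is only an illustrative name):

public class OrderService implements Loggable {
    public void process() {
        // logger() is inherited as a default method from Loggable
        logger().info("processing order");
    }
}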
Or perhaps, define a PeriodTrait:
public interface PeriodTrait {
    Date getStartDate();
    Date getEndDate();

    default boolean isValid(Date atDate) {
        ...
    }
}
Admittedly, composition could be used (or even helper classes), but it seems more verbose and cluttered, and it does not let you benefit from polymorphism.
So, is it ok/safe to use default methods as basic traits, or should I be worried about unforeseen side effects?
Several questions on SO are related to Java vs Scala traits; that's not the point here. I'm not asking merely for opinions either. Instead, I'm looking for an authoritative answer or at least field insight: if you've used default methods as traits on your corporate project, did it turn out to be a timebomb?
The short answer is: it's safe if you use them safely :)
The snarky answer: tell me what you mean by traits, and maybe I'll give you a better answer :)
In all seriousness, the term "trait" is not well-defined. Many Java developers are most familiar with traits as they are expressed in Scala, but Scala is far from the first language to have traits, either in name or in effect.
For example, in Scala, traits are stateful (can have var variables); in Fortress they are pure behavior. Java's interfaces with default methods are stateless; does this mean they are not traits? (Hint: that was a trick question.)
Again, in Scala, traits are composed through linearization; if class A extends traits X and Y, then the order in which X and Y are mixed in determines how conflicts between X and Y are resolved. In Java, this linearization mechanism is not present (it was rejected, in part, because it was too "un-Java-like".)
The proximate reason for adding default methods to interfaces was to support interface evolution, but we were well aware that we were going beyond that. Whether you consider that to be "interface evolution++" or "traits--" is a matter of personal interpretation. So, to answer your question about safety ... so long as you stick to what the mechanism actually supports, rather than trying to wishfully stretch it to something it does not support, you should be fine.
A key design goal was that, from the perspective of the client of an interface, default methods should be indistinguishable from "regular" interface methods. The default-ness of a method, therefore, is only interesting to the designer and implementor of the interface.
Here are some use cases that are well within the design goals:
Interface evolution. Here, we are adding a new method to an existing interface, which has a sensible default implementation in terms of existing methods on that interface. An example would be adding the forEach method to Collection, where the default implementation is written in terms of the iterator() method.
"Optional" methods. Here, the designer of an interface is saying "Implementors need not implement this method if they are willing to live with the limitations in functionality that entails". For example, Iterator.remove was given a default which throws UnsupportedOperationException; since the vast majority of implementations of Iterator have this behavior anyway, the default makes this method essentially optional. (If the behavior from AbstractCollection were expressed as defaults on Collection, we might do the same for the mutative methods.)
Convenience methods. These are methods that are strictly for convenience, again generally implemented in terms of non-default methods on the class. The logger() method in your first example is a reasonable illustration of this.
Combinators. These are compositional methods that instantiate new instances of the interface based on the current instance. For example, the methods Predicate.and() or Comparator.thenComparing() are examples of combinators.
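For instance, a combinator as a default method can be sketched like this (SimplePredicate is a made-up interface modeled on the shape of Predicate.and, not the JDK source):

interface SimplePredicate<T> {
    boolean test(T t);

    // Combinator: builds a new SimplePredicate from this one and another.
    default SimplePredicate<T> and(SimplePredicate<? super T> other) {
        return t -> this.test(t) && other.test(t);
    }
}

// Usage (inside some method): compose two predicates without writing a new class.
SimplePredicate<String> nonEmpty = s -> !s.isEmpty();
SimplePredicate<String> shortEnough = s -> s.length() < 10;
SimplePredicate<String> both = nonEmpty.and(shortEnough);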
If you provide a default implementation, you should also provide some specification for the default (in the JDK, we use the @implSpec javadoc tag for this) to aid implementors in understanding whether they want to override the method or not. Some defaults, like convenience methods and combinators, are almost never overridden; others, like optional methods, are often overridden. You need to provide enough specification (not just documentation) about what the default promises to do, so the implementor can make a sensible decision about whether they need to override it.
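To illustrate, a hypothetical interface documenting its default with that tag might look like this (SimpleCollection is made up; only the @implSpec tag itself comes from the point above):

import java.util.Iterator;

public interface SimpleCollection<E> {
    Iterator<E> iterator();

    /**
     * Returns true if this collection contains no elements.
     *
     * @implSpec
     * The default implementation returns {@code !iterator().hasNext()}.
     * Implementations that track their size may want to override this.
     */
    default boolean isEmpty() {
        return !iterator().hasNext();
    }
}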
So the functional programmer in me likes languages like Python that treat functions as first-class citizens. It looks like Java 8 caved to the pressure and "sort of" implemented things like lambda expressions and method references.
My question is: is Java on its way to using first-class functions, or is this really just syntactic sugar to reduce the amount of code it takes to implement anonymous classes/interfaces such as Runnable? (My gut says the latter.)
My ideal scenario:
Map<String,DoubleToDoubleFunc> mymap = new HashMap<String,DoubleToDoubleFunc>();
...
mymap.put("Func 1", (double a, double b) -> a + b);
mymap.put("Func 2", Math::pow);
...
w = mymap.get("Func 1")(y,z);
x = mymap.get("Func 2")(y,z);
There is no structural function type in Java 8, but Java has always had first-class objects, and in fact they have been used as an alternative. Historically, due to the lack of first-class functions, what we have done is wrap the function inside an object. These are the famous SAM types (single abstract method types).
So, instead of
function run() {}
Thread t = new Thread(run);
We do
Runnable run = new Runnable(){ public void run(){} };
Thread t = new Thread(run);
That is, we put the function inside an object in order to be able to pass it around as a value. So, first class objects have been an alternative solution for a long time.
JDK 8 simply makes this concept easier to implement: it calls these wrapper interfaces "functional interfaces" and offers some syntactic sugar for implementing the wrapper objects.
Runnable run = () -> {};
Thread t = new Thread(run);
But ultimately, we are still using first-class objects. And they have similar properties to first-class functions. They encapsulate behavior, they can be passed as arguments and be returned as values.
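A small sketch of those properties with the java.util.function types (my example): behavior is returned from a method, assigned to a variable, and composed, even though each value is still an object implementing a functional interface.

import java.util.function.Function;

public class HigherOrderDemo {
    // Returns behavior: an instance of java.util.function.Function built from a lambda.
    static Function<Integer, Integer> adder(int n) {
        return x -> x + n;
    }

    public static void main(String[] args) {
        Function<Integer, Integer> addFive = adder(5);                 // returned as a value
        Function<Integer, Integer> addTen = addFive.andThen(adder(5)); // passed around and composed
        System.out.println(addTen.apply(3)); // 13
    }
}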
In the lambda mailing list Brian Goetz gave a good explanation of some of the reasons that motivated this design.
Along the lines that we've been discussing today, here's a peek at where we're heading. We explored the road of "maybe lambdas should just be inner class instances, that would be really simple", but eventually came to the position of "functions are a better direction for the future of the language".

This exploration played out in stages: first internally before the EG was formed, and then again when the EG discussed the issues. The following is my position on the issue. Hopefully, this fills in some of the gaps between what the spec currently says and what we say about it.

The issues that have been raised about whether lambdas are objects or not largely come down to philosophical questions like "what are lambdas really", "why will Java benefit from lambdas", and ultimately "how best to evolve the Java language and platform."

Oracle's position is that Java must evolve -- carefully, of course -- in order to remain competitive. This is, of course, a difficult balancing act.

It is my belief that the best direction for evolving Java is to encourage a more functional style of programming. The role of Lambda is primarily to support the development and consumption of more functional-like libraries; I've offered examples such as filter-map-reduce to illustrate this direction.

There is plenty of evidence in the ecosystem to support the hypothesis that, if given the tools to do so easily, object-oriented programmers are ready to embrace functional techniques (such as immutability) and work them into an object-oriented view of the world, and will write better, less error-prone code as a result. Simply put, we believe the best thing we can do for Java developers is to give them a gentle push towards a more functional style of programming. We're not going to turn Java into Haskell, nor even into Scala. But the direction is clear.

Lambda is the down-payment on this evolution, but it is far from the end of the story. The rest of the story isn't written yet, but preserving our options is a key aspect of how we evaluate the decisions we make here.

This is why I've been so dogged in my insistence that lambdas are not objects. I believe the "lambdas are just objects" position, while very comfortable and tempting, slams the door on a number of potentially useful directions for language evolution.

As a single example, let's take function types. The lambda strawman offered at Devoxx had function types. I insisted we remove them, and this made me unpopular. But my objection to function types was not that I don't like function types -- I love function types -- but that function types fought badly with an existing aspect of the Java type system, erasure. Erased function types are the worst of both worlds. So we removed this from the design.

But I am unwilling to say "Java never will have function types" (though I recognize that Java may never have function types.) I believe that in order to get to function types, we have to first deal with erasure. That may, or may not, be possible. But in a world of reified structural types, function types start to make a lot more sense.

The lambdas-are-objects view of the world conflicts with this possible future. The lambdas-are-functions view of the world does not, and preserving this flexibility is one of the points in favor of not burdening lambdas with even the appearance of object-ness.

You might think that the current design is tightly tied to an object box for lambdas -- SAM types -- making them effectively objects anyway. But this has been carefully hidden from the surface area so as to allow us to consider 'naked' lambdas in the future, or consider other conversion contexts for lambdas, or integrate lambdas more tightly into control constructs. We're not doing that now, and we don't even have a concrete plan for doing so, but the ability to possibly do that in the future is a critical part of the design.

I am optimistic about Java's future, but to move forward we sometimes have to let go of some comfortable ideas. Lambdas-are-functions opens doors. Lambdas-are-objects closes them. We prefer to see those doors left open.
They are not really first-class functions, IMO. A lambda is ultimately an instance of a functional interface. But it does give you the advantages of a first-class function: you can pass it as an argument to a method that expects an instance of a functional interface; you can assign it to a variable, in which case its type will be inferred using target typing; you can return it from a method; and so on.
Also, lambdas don't completely replace anonymous classes: not all anonymous classes can be converted to lambdas. You can go through this answer for a nice explanation of the differences between the two. So, no, lambdas are not syntactic sugar for anonymous classes.
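One concrete difference (a minimal sketch): inside an anonymous class, this refers to the anonymous instance itself, while a lambda has no this of its own.

public class ThisDemo {
    public static void main(String[] args) {
        // Anonymous class: 'this' is the anonymous instance (prints something like ThisDemo$1).
        Runnable anon = new Runnable() {
            @Override
            public void run() {
                System.out.println(this.getClass().getName());
            }
        };

        // Lambda: there is no separate 'this'; in this static context,
        // referring to 'this' in the body would not even compile.
        Runnable lambda = () -> System.out.println("lambda body");

        anon.run();
        lambda.run();
    }
}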
It's not syntactic sugar for anonymous classes; what it compiles to is somewhat more involved. (Not that it would be wrong to do that; any language will have its own implementation detail for how lambdas are compiled, and Java's is no worse than many others.)
See http://cr.openjdk.java.net/~briangoetz/lambda/lambda-translation.html for the full, nitty-gritty details.
import java.util.HashMap;
import java.util.Map;

public class FuncMapDemo {

    // Functional interface: any (double, double) -> double function fits here.
    interface DoubleDoubleToDoubleFunc {
        double f(double x, double y);
    }

    public static void main(String[] args) {
        Map<String, DoubleDoubleToDoubleFunc> mymap = new HashMap<>();
        mymap.put("Func 1", (double a, double b) -> a + b);
        mymap.put("Func 2", Math::pow);

        double y = 2.0;
        double z = 3.0;
        double w = mymap.get("Func 1").f(y, z); // 5.0
        double v = mymap.get("Func 2").f(y, z); // 8.0
    }
}
So it is still syntactic sugar, and only to some degree.
The Scala compiler compiles directly to Java bytecode (or .NET CIL). Some of the features of Scala could be re-done in Java straightforwardly (e.g. simple for comprehensions, classes, translating anonymous/inner functions, etc.). What are the features that cannot be translated that way?
That is presumably mostly of academic interest. More usefully, perhaps, what are the key features or idioms of Scala that YOU use that cannot be easily represented in Java?
Are there any the other way around? Things that can be done straightforwardly in Java that have no straightforward equivalent in Scala? Idioms in Java that don't translate?
This question, in my opinion, misses the point by asking us to compare JVM languages by looking at their generated bytecode.
Scala compiles to Java-equivalent bytecode. That is, the bytecode could have been generated by code written in Java. Indeed you can even get scalac to output an intermediate form which looks a lot like Java.
Features like traits (via static forwarders), non-local returns (via exceptions), lazy values (via references), etc. are all expressible by a Java program, although possibly in a most ugly manner!
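As a taste of the "expressible but ugly" point, here is a rough Java sketch of the idea behind a Scala lazy val: compute once on first access, guarded by a flag. scalac's actual output uses bitmap fields; this is only the conceptual shape.

public class LazyValSketch {
    private volatile boolean initialized = false;
    private String value;

    public String value() {
        if (!initialized) {
            synchronized (this) {
                if (!initialized) {
                    value = expensiveComputation();
                    initialized = true;
                }
            }
        }
        return value;
    }

    private String expensiveComputation() {
        return "computed";
    }
}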
But what makes Scala Scala, and not Java, is what scalac can do for you before the bytecode is generated. What scalac has going for it, as a statically typed language, is the ability to check a program for correctness, including type correctness (according to its type system), at compile time.
The major difference between Java and Scala (as of course Java is also statically typed), therefore, is Scala's type system, which is capable of expressing programmatic relations which java-the-language's type system cannot. For example:
class Foo[M[_], A](m : M[A])
trait Bar[+A]
These concepts, that M is a type parameter which itself has type parameters, or that Bar is covariant, just do not exist in Java-land.
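The closest Java gets to the variance half of that is use-site variance with wildcards: every call site opts in individually, and there is no way to declare once and for all that a type is covariant in its parameter (illustrative sketch):

import java.util.Arrays;
import java.util.List;

public class VarianceDemo {
    // Use-site covariance: the wildcard is repeated at each use,
    // because Java has no declaration-site variance like Scala's [+A].
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) {
            total += n.doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, 3)));  // a List<Integer> is accepted
        System.out.println(sum(Arrays.asList(1.5, 2.5))); // so is a List<Double>
    }
}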
Traits are one thing that does not have an equivalent. Traits are interfaces with code in them. You can copy the code into all classes that have a trait mixed in, but that is not the same thing.
Also, I believe Scala's type system is more complete. While it eventually maps to JVM types (and actually suffers erasure), you can express some things in the Scala type system that are not possible in Java (like variance).
I think there is no equivalent for dynamically mixing in traits. In Scala, at the moment you create a new object, you can add some traits, which are then mixed in.
For example, we create one dog which is hungry and thirsty and one dog which is just hungry.
val hungryThirstyDog = new Dog with Hungry with Thirsty
val onlyHungryDog = new Dog with Hungry
I don't know an equivalent way to do this in Java. In Java, inheritance is statically defined.
Implicit conversions don't have a straightforward equivalent in Java.
One feature of Scala that I have found a good use for is type reification through Manifests. Since the JVM strips out all type information from generics, Scala allows you to conserve this information in variables. This is something that Java reflection, AFAIK, can't handle, since there are no type arguments in the bytecode.
The case where I needed them was to pattern match on the element type of a List. That is, I had a VertexBuffer object which stored data on the GPU, and which could be constructed from a List of floats or integers. The Manifest code looked approximately like this:
class VertexBuffer[T](data: List[T])(implicit m: Manifest[T]) {
  m.toString match {
    case "float" => ...
    case "int" => ...
  }
}
This link links to a blog post with more information.
There are plenty of SO pages with more information too, like this one.
Three words: higher kinded types.
Your topic is not clear on whether you mean Java the JVM or Java the language. If you mean the JVM, the question makes little sense, as we all know Scala runs on the JVM.
Scala has "native" support for XML. You can build XML, find elements, and pattern match directly in Scala code.
Examples: http://programming-scala.labs.oreilly.com/ch10.html
Lambdaj allows the definition of closures in the Java language; various examples can be found here.
My question is about the underlying Java mechanisms in use. For instance, to define the println closure, the following code is used:
Closure println = closure();
{ of(System.out).println(var(String.class)); }
This closure can be subsequently executed via:
println.apply("foobar");
I am curious as to what mechanisms in Java would allow the call to of(...).println(...) to become associated with the println instance itself.
Naturally, the lambdaj source code is available to read but I was hoping for a slightly higher level explanation if anyone has one. My reflection skills go as far as a bit of introspection and executing methods dynamically.
I am Mario Fusco and I am the main developer of the lambdaj library.
First of all, I would like to clarify something: lambdaj is not intended to replace any functional language. As I said last week in my speech at the JUG of Zurich, if you have a chance to use Scala, go for it and never look back. Here you can find a summary of my speech where this is clearly stated:
http://ctpjava.blogspot.com/2009/10/lambdaj-new-trends-in-java.html
I am a happy Scala developer too. But sometimes you are just obliged to develop in Java (in my experience, in the real world, about 80% of the time you cannot choose the language in which you write your code), and in that case some of the lambdaj features could be helpful (or so I hope). I just wanted to bring to Java some functional features that are totally missing. Of course the result is not completely satisfying, mainly due to the limitations imposed by Java itself.
As for the internal lambdaj mechanism: yes, it uses a ThreadLocal in order to achieve that result. If you have other questions, curiosities, or, even better, suggestions and constructive criticism about lambdaj, you might be interested in registering for the lambdaj mailing list here:
http://groups.google.com/group/lambdaj
Bye
Mario
Well, of is presumably a static method which is imported statically so it can be called without the enclosing class name. I expect that var is the same. Both methods must return types which have the methods that are subsequently called:
public class Var {
    public Var(Class<?> clazz) {}
}

public class Printable {
    private final Object target;

    public Printable(Object target) { this.target = target; }

    // Illustrative stub; the real library would record this call for later execution.
    public void println(Var var) {}
}

public class Fac {
    public static Printable of(Object o) {
        return new Printable(o);
    }

    public static Var var(Class<?> clazz) {
        return new Var(clazz);
    }
}
All of a sudden,
Fac.of(System.out).println(Fac.var(String.class));
is valid Java. Using static imports, hey presto:
import static Fac.*;
of(System.out).println(var(String.class));
The curly braces are obviously valid Java, as you can add these in any method to help define a lexical scope. This API-design style is called fluent and is best showcased by the JMock testing library.
By the way, if this is supposed to introduce closures to Java, it's quite ridiculous - the syntax is unreadably awful. Their I/O example actually made me laugh out loud. Try Scala!
EDIT - the two println calls are associated, I believe, because the first sequence of calls allows the library to capture the variables which you have passed in as parameters. These are probably captured in some ThreadLocal structure. When you then call the println method on the closure, the library uses this captured data to actually execute the behaviour at that later point. Also from the testing world, the EasyMock framework uses a similar mechanism (which uses Java proxies in the background) to capture expected values.
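To make the ThreadLocal idea concrete, here is a very rough capture-and-replay sketch (my illustration, not lambdaj's actual implementation, which registers the closure first and then binds the subsequent fluent calls to it):

import java.lang.reflect.Method;

public class CaptureSketch {

    public interface Closure {
        void apply(Object arg) throws Exception;
    }

    // The fluent call records "invoke this method on this target later" in a ThreadLocal.
    private static final ThreadLocal<Object[]> LAST_CALL = new ThreadLocal<>();

    public static void record(Object target, String methodName, Class<?> paramType) {
        LAST_CALL.set(new Object[] { target, methodName, paramType });
    }

    // closure() picks up whatever was recorded on this thread and replays it reflectively.
    public static Closure closure() {
        Object[] call = LAST_CALL.get();
        return arg -> {
            Method m = call[0].getClass().getMethod((String) call[1], (Class<?>) call[2]);
            m.invoke(call[0], arg);
        };
    }

    public static void main(String[] args) throws Exception {
        record(System.out, "println", String.class);
        Closure println = closure();
        println.apply("foobar"); // prints "foobar"
    }
}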
Specifically, I'm looking for similarly clean notation to the Collection<T>.TrueForAll / Exists, etc.
It feels smelly to have to write a foreach loop to inspect the return of a method on each object, so I'm hoping there's a better Java idiom for it.
Predicates are provided in the Google Collections library.
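For instance, a minimal sketch with Guava (the successor to Google Collections), where Iterables.any plays the role of Exists and Iterables.all plays the role of TrueForAll:

import com.google.common.base.Predicate;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;

import java.util.List;

public class GuavaPredicateDemo {
    public static void main(String[] args) {
        List<String> words = ImmutableList.of("Hello", "There", "how");

        // A predicate that checks whether a string is entirely lowercase letters.
        Predicate<String> allLowerCase = new Predicate<String>() {
            @Override
            public boolean apply(String input) {
                return input.matches("^[a-z]*$");
            }
        };

        System.out.println(Iterables.any(words, allLowerCase)); // true  ("how" matches)
        System.out.println(Iterables.all(words, allLowerCase)); // false ("Hello" does not)
    }
}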
Functional Java provides first-class functions. A predicate is expressed as F<T, Boolean>. For example, here's a program that tests an array for the existence of a string that is all lowercase letters.
import fj.F;
import fj.data.Array;
import static fj.data.Array.array;
import static fj.function.Strings.matches;
public final class List_exists {
    public static void main(final String[] args) {
        final Array<String> a = array("Hello", "There", "how", "ARE", "yOU?");
        final boolean b = a.exists(matches.f("^[a-z]*$"));
        System.out.println(b); // true
    }
}
As far as I know, no. But Apache Commons Collections has something like this: Predicate
Edit: Right, as noted in the comments, Commons Collections is from the pre-generics world, so Google Collections (update: Guava) seems like a clearly better option now. Still, Commons Collections deserves to be mentioned, as it's a well-known library that does this, and also so that people know why not to use it. :)
I was just reading more about Google Collections in this nice interview with its main developers, and wanted to quote a bit that deals specifically with the "Google Collections vs. Apache Commons Collections" issue:
What is unique about your approach? How does it differ to, for example, the Apache Commons Collection?

Kevin: "Well, thank God for the Apache Commons. We'd all be in bad shape without libraries like this. That said, sadly that particular project has stalled, in a pre-generics world. They do want to adopt generics, but they recognize that this would involve a pretty nontrivial and incompatible rewrite. So far, no one seems to be actively driving such an effort. At Google we've been using Java 5 company-wide since the spring of 2005. A collections library being ungenerified was a deal-breaker for us, because we really hate getting compiler warnings. I was also concerned about the many places in which the Apache collections don't conform to the specifications of the JDK interfaces they implement."

[...]

Jared: "As Kevin implies, our library is the only collections library I know of, outside the JDK, built with Java 5 features: generics, enums, covariant return types, etc. When writing Java 5 code, you want a collections library that takes full advantage of the language. In addition, we put enormous effort into making the library complete, robust, and consistent with the JDK collection classes. Our collection classes were much more limited initially, but we've gradually improved them over the last two years. Since all library usage is in Google's source control system, we've had the flexibility to modify public interfaces. An open-source project like Apache Commons Collection doesn't have the freedom to change its behavior after the initial release. Since we'll lose that flexibility once Google Collections Library 1.0 is released, we're eager to receive feedback now so we can get things right."