Compatibility between Scala closures and Java 8 closures

After reading some OpenJDK mailing list entries, it seems that the Oracle developers are currently removing further things from the closure proposal, because earlier design mistakes in the Java language complicate the introduction of Java closures.
Considering that Scala closures are much more powerful than the closures planned for Java 8, I wonder whether it will be possible to, e.g., call a Java method that takes a closure from Scala, or define a closure in Java and pass it to a Scala function.
So will Java closures be represented like their Scala counterparts in bytecode or differently?
Will it be possible to close the gap of functionality between Java/Scala closures?

I think it's more complicated than assuming there are two groups of stakeholders here. The Project Lambda people and the Oracle people seem to be working mostly independently of each other, with one group occasionally throwing something over the wall that the other finds out about indirectly. (Scala is, of course, the third stakeholder.)
Since the latest Project Lambda proposal is to eliminate function types altogether, and just create some sort of fancy inference for implementing interfaces that have a single abstract method (SAM types), I foresee the following:
Calling Scala code that requires a Scala closure will depend entirely on the implementation of the Function* traits (and the implementation of traits in general): whether it appears to the Java compiler as a SAM (which it is in Scala-land), or whether the non-abstract methods also appear abstract to the JVM. (I would think they currently do look abstract, since traits are implemented as interfaces, but I know almost nothing about Scala's implementation. This could be a big hurdle to interoperability.)
Complications with Java generics (in particular how to express Int/int/Integer, or Unit/Nothing/void in a generic interface) may also complicate things.
Using Scala functions to implement Java SAMs will not be any different than it is now: you need to create an implicit conversion for the specific interface you wish to implement.
If the JVM gets function types (and Oracle seems not to have eliminated that possibility), it may depend on how they are implemented. If they're first-class objects implementing a particular interface, then all Scala needs to do to be compatible is make Function* implement the new interface. If an entirely new kind of type is added to the JVM, then this could be difficult: the Scala developers may wrap them using magic like they currently do for Arrays, or they may create implicit conversions. (A new language concept seems a bit far-fetched.)
I hope that one of the results of all this discussion is that the various JVM languages will agree on some standard way to represent closures, so that Scala, Groovy, JRuby, etc. can all pass closures back and forth with a minimum of hassle.
What's more interesting to me are the proposals for virtual extension methods that will allow the Java Collections API to use lambdas. Depending on how these are implemented, they may greatly simplify some of the binary-compatibility problems we've had to deal with when changing Scala code, and they may help to implement traits more easily and efficiently.
I hope that some of the Scala developers are getting involved and offering their input, but I haven't actually seen any discussion of Scala on the Project Lambda lists, nor any participants who jump out to me as being Scala developers.

You would likely be able to do this fairly easily using implicit conversions a la collection.JavaConversions, whether or not they come out of the box.
Of course, this is not obviously so, because it may be the case that Java closures are turned into types which are generated by the JVM at runtime. I seem to recall a Neal Gafter presentation saying something along these lines.

Note: five years later, Scala 2.12.0-M3 (Oct. 2015) did include this enhancement:
Scala 2.12 emits closures in the same style as Java 8.
For each lambda the compiler generates a method containing the lambda body.
At runtime, this method is passed as an argument to the LambdaMetaFactory provided by the JDK, which creates a closure object.
Compared to Scala 2.11, the new scheme has the advantage that the compiler does not generate an anonymous class for each lambda anymore. This leads to significantly smaller JAR files.
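In practice this means the interop the question asks about mostly falls out for free: a Java 8 lambda can implement a Scala function type directly. A minimal sketch, assuming a Scala 2.12+ scala-library jar on the Java classpath (scala.Function1's apply is its single abstract method; compose and andThen are defaults):

    import scala.Function1;

    public class ScalaInterop {
        public static void main(String[] args) {
            // Scala 2.12 compiles FunctionN to SAM interfaces, so a plain
            // Java lambda can implement one; no adapter class is needed.
            Function1<String, Integer> length = s -> s.length();
            System.out.println(length.apply("hello")); // prints 5
        }
    }

The reverse direction works as well: since 2.12 the Scala compiler accepts a function literal wherever a Java SAM interface is expected.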

Related

Can we use a domain model written in Scala in a Java/Spring application?

Is there a way to use a domain model written in Scala from a Spring+Java+Maven project?
Asking this after going through Scott Wlaschin's video on functional domain modelling. Would love to implement some of it using Scala and bring the power of functional DDD to our mundane old Spring code! 🙏
Any help would be wonderful!
There is no "ADT magic" in the video. There is a powerful type system in the language and a functional approach.
You can leverage Java's type system to some degree and also use functional libraries like Vavr, or maybe something like Reactor. It will improve your code greatly, but it won't be as powerful as in functional languages with a strong type system (F#, Haskell, Scala, and other ML-like languages).
If you want to leverage Scala's type system (which is even more powerful than F#'s), you should use Scala almost everywhere, because the guarantees of the type system live in the compiler. And you can use Spring from Scala (but I think that is a huge antipattern).
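To make the "functional libraries" suggestion concrete, here is a small sketch using Vavr's Either to model a validation result as data rather than as an exception (the rule itself is made up):

    import io.vavr.control.Either;

    public class Validation {
        static Either<String, Integer> parseAge(String raw) {
            try {
                int age = Integer.parseInt(raw);
                return age >= 0 ? Either.right(age) : Either.left("negative age");
            } catch (NumberFormatException e) {
                return Either.left("not a number: " + raw);
            }
        }

        public static void main(String[] args) {
            System.out.println(parseAge("42"));   // Right(42)
            System.out.println(parseAge("nope")); // Left(not a number: nope)
        }
    }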
There are two parts here.
One part has nothing to do with ADTs; rather, it concerns Scala and Java compatibility.
Everything written in Scala compiles down to JVM bytecode, so whatever 'magic' you are referring to can also be written in Java, albeit much less concisely and readably.
And it is therefore usable from Java.
The second part is the type checker.
Part of the ADT magic is compile-time safety checks, such as exhaustive match checks, amongst others.
You will not get those when you call instanceof in Java on a case class defined in Scala.
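A sketch of that limitation; the sealed Scala hierarchy below is hypothetical:

    // Hypothetical Scala side:
    //   sealed trait Shape
    //   final case class Circle(radius: Double) extends Shape
    //   final case class Square(side: Double) extends Shape

    public class Areas {
        static double area(Shape s) {
            if (s instanceof Circle) {
                double r = ((Circle) s).radius();
                return Math.PI * r * r;
            } else if (s instanceof Square) {
                double side = ((Square) s).side();
                return side * side;
            }
            // No compile-time exhaustiveness check here: adding a Triangle
            // case class on the Scala side only fails at runtime, whereas a
            // Scala match on the sealed trait would fail to compile.
            throw new IllegalArgumentException("Unhandled Shape: " + s);
        }
    }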

Enhance library for Java 8 while keeping backwards compatibility

I'm developing an open source library in Java and would like to ensure that it is convenient for Java 8 users, and takes advantage of new concepts in Java 8 wherever possible (lambdas etc.)
At the same time I absolutely need to maintain backwards compatibility (the library must still be usable for people using Java 6 or 7).
What useful features from Java 8 can I adopt that would be beneficial for library users without breaking library compatibility for users of older Java versions?
Since I don't know your library, this advice might be slightly off.
Lambdas: Don't worry about these. Any functional interface can be implemented using a lambda expression.
Method references: Same as lambdas; they should just be usable.
Streams: If these fit your library, you should use them, but keeping compatibility is harder here. Backwards compatibility could be achieved with a second library part that wraps the base library and hooks into its public API. It could therefore provide additional sugar/functionality without abandoning Java 6/7.
Default methods: By all means, use these! They are a quick/cheap/good way to enhance existing implementations without breaking them. All default methods you add will automatically be available in implementing classes. These will, however, also need the second library part, so you should provide the base interfaces in your base library and extend them in the companion library.
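To illustrate the lambda and method-reference points above, here is a sketch with made-up names (RowHandler, Library): the single-abstract-method interface compiles under Java 6, yet Java 8 clients can call the same API with lambdas.

    // Ships in the base library, compiled with -source 1.6.
    interface RowHandler {            // single abstract method
        void handle(String row);
    }

    class Library {
        static void process(RowHandler handler) {
            handler.handle("example row");
        }
    }

    public class Demo {
        public static void main(String[] args) {
            // Java 6/7 clients: anonymous inner class.
            Library.process(new RowHandler() {
                @Override public void handle(String row) {
                    System.out.println("anon: " + row);
                }
            });
            // Java 8 clients: lambda or method reference, same API.
            Library.process(row -> System.out.println("lambda: " + row));
            Library.process(System.out::println);
        }
    }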
Don't fork the library and abandon the old one; there are still many developers who cannot use Java 8, or even Java 7. If your library makes sense to use on e.g. Android, please keep that compatibility.
If you want your code to be usable by Java 6 consuming VMs, you have to write it using Java 6 language compatibility. Alas, the bytecode format changed incompatibly from 6 to 7, so code built for 7 or 8 won't load into 6. (I found this with my own code when migrating to 7; even though all I was using was multi-catch, which ought to be expressible in Java 6 bytecode, I couldn't build it for a 6 target. The compiler refused.)
I've yet to experiment with 8, but you'll have to be careful: if you depend on any new Java packages or classes, you will still be unable to work with older versions, since the old consuming VMs simply won't have the relevant classes and won't be able to load the code. In particular, lambdas definitely won't work.
Well, can we target a different classfile version, then? There's no combination of source and target that will actually make javac happy with this:
kevin$ $JAVA8/bin/javac -source 1.8 -target 1.7 *.java
javac: source release 1.8 requires target release 1.8
So there's simply no way to compile Java source with lambdas into a pre-Java 8 classfile version.
My general take is that if you want to support old versions of Java, you have to use old versions of Java to do so. You can use a Java 8 toolchain, but you need Java 7 or Java 6 source. You could do it by forking the code and maintaining versions for every Java release you want to support, but that's far more work than a lone developer could ever justify. Pick the minimum version and go with that (and kiss those juicy new features goodbye for now, alas).
If you use any of Java 8's new language features, the code also requires Java 8 bytecode.
$ javac -source 1.8 -target 1.7
javac: source release 1.8 requires target release 1.8
That means your options are quite limited. You cannot use lambdas, method references, default methods, Streams, etc. and maintain backwards compatibility.
There are still two things you can do that users of Java 8 will benefit from. The first is using Functional Interfaces in your public API. If your methods take Runnables, Callables, Comparators, etc. as parameters, then users of Java 8 will be able to pass in lambdas. You may want to create your own Single-Abstract-Method interfaces as well. If you find you need Functions and Predicates, I'd suggest reusing the ones that ship with GS Collections or Guava instead of writing your own.
The second thing you can do is use a rich collections API that benefits from functional interfaces. Again, that means using GS Collections or Guava. For example, if you have a method that would return List, return MutableList or ImmutableList instead. That way, callers of the method can chain usages of the rich API exposed by these interfaces.
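For illustration, a small sketch assuming GS Collections on the classpath; its Predicate and Function block types are single-method interfaces, so Java 8 callers can chain with lambdas and method references while Java 6/7 callers use anonymous classes:

    import com.gs.collections.api.list.MutableList;
    import com.gs.collections.impl.factory.Lists;

    public class RichApiDemo {
        // Library method returning the rich interface instead of java.util.List.
        static MutableList<String> names() {
            return Lists.mutable.of("Ada", "Grace", "Barbara");
        }

        public static void main(String[] args) {
            // Callers chain directly on the returned type.
            MutableList<Integer> lengths = names()
                    .select(name -> name.length() > 3) // Predicate
                    .collect(String::length);          // Function
            System.out.println(lengths);               // prints [5, 7]
        }
    }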
As others have said, providing and using interfaces with a single method, so that they can be implemented with lambdas or method references under Java 8, is a good way of supporting Java 8 without breaking Java 7 compatibility.
This can be complemented by providing methods in your library which fit one of the standard function types of Java 8 (e.g. Supplier, (Bi)Consumer, (Bi)Function), so that Java 8 developers can create method references to them for Java 8 API methods. This implies that their signature matches one of these functional interfaces and that they don't throw checked exceptions. This often comes naturally, e.g. getFoo() may act as a Function and isBar() as a Predicate, but sometimes methods can be improved by thinking about possible Java 8 use scenarios.
For example, if you provide a method taking two parameters, it's useful to choose the order in which the first parameter is the one more likely to be a key in a Map, so that the method works as a method reference for Map.forEach.
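A small illustration; the Registry class and its register method are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    class Registry {
        // Key-like parameter first, so the signature lines up with BiConsumer<K, V>.
        void register(String name, Integer value) {
            System.out.println(name + " -> " + value);
        }
    }

    public class ForEachDemo {
        public static void main(String[] args) {
            Map<String, Integer> scores = new HashMap<>();
            scores.put("alice", 1);
            scores.put("bob", 2);

            Registry registry = new Registry();
            // Map.forEach expects a BiConsumer<K, V>; the parameter order matches.
            scores.forEach(registry::register);
        }
    }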
And avoid methods with ambiguous signatures. E.g. if you have a class Foo with an instance method ReturnType bar() and a static method ReturnType bar(Foo), neither of them can be used as a method reference anymore, since Foo::bar would be ambiguous. Eliminate or rename one of these methods.
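For example, the following (with String standing in for ReturnType) fails to compile if the assignment is uncommented:

    import java.util.function.Function;

    class Foo {
        String bar() { return "instance"; }
        static String bar(Foo foo) { return "static"; }
    }

    public class Ambiguous {
        public static void main(String[] args) {
            // Foo::bar could mean the unbound instance method foo.bar() or
            // the static method bar(foo); both fit Function<Foo, String>.
            // Function<Foo, String> f = Foo::bar; // error: reference to bar is ambiguous
        }
    }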
It's important that such methods do not have undocumented internal state that causes surprising behavior when they are used by multiple threads; otherwise they cannot be used with parallel streams.
Another opportunity that should not be underestimated is to name classes, interfaces, and members following the patterns introduced by the Java 8 API. E.g. if you have to introduce some sort of filter interface with a test method for your library that also has to work with Java 7, name the interface Predicate and the method test, to associate it with the similarly named functional interface of Java 8.
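Such a backport-friendly interface might look like this minimal sketch; it compiles under Java 6/7 and mirrors java.util.function.Predicate in name and shape:

    public interface Predicate<T> {
        // Same method name as Java 8's java.util.function.Predicate,
        // so Java 8 users immediately recognize the intent.
        boolean test(T input);
    }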

Scala API and Java compatibility: do you have to remap?

Say I have a Scala library that exposes API methods which take Scala-specific types (say, a collection that isn't available in Java).
Does this mean you have to either change the exposed parameter type, or accept a Java-compatible type and then iterate over it to initialize the Scala-specific collection?
Basically, what I am getting at is: when designing a Scala API that will be used from Java, you either use a compatible type, or you pay a performance/memory penalty when you have to iterate over the Java collection and map it to the Scala one, right?
Are those basically the options?
From http://www.scala-lang.org/faq/4
What does it mean that Scala is compatible with Java?
The standard Scala backend is a Java VM. Scala classes are Java classes, and vice versa. You can call the methods of either language from methods in the other one. You can extend Java classes in Scala, and vice versa. The main limitation is that some Scala features do not have equivalents in Java, for example traits.
Also, to answer your question, please visit How to use scala.collection.immutable.List in a Java code.
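As for the penalty: the standard converters wrap rather than copy, so the usual cost is a constant-size wrapper, not a rebuild. A sketch of the Java side, assuming Scala 2.13's scala.jdk.javaapi.CollectionConverters (a variant designed to be called from Java):

    import java.util.Arrays;
    import java.util.List;
    import scala.jdk.javaapi.CollectionConverters;

    public class Remap {
        public static void main(String[] args) {
            List<String> javaList = Arrays.asList("a", "b", "c");
            // asScala wraps the Java list in O(1); a full copy is only
            // needed if the Scala API demands an immutable collection.
            scala.collection.mutable.Buffer<String> buf =
                    CollectionConverters.asScala(javaList);
            System.out.println(buf);
        }
    }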

Is there a non-type-erased generics extension to the Java Compiler available as a 3rd party compiler extension?

I'm becoming increasingly frustrated with the limits of type-erased Java generics. I was wondering: is there a custom Java compiler that provides a full version of generics, without the quirks associated with type erasure?
It is not just a compiler change that would be required. I think it would also be necessary to change the JVM implementation in ways that are incompatible with the JVM spec, and the Java class libraries in ways that are incompatible with the current APIs.
For example, the semantics of the checkcast instruction would change significantly, as would the objects returned by Object.getClass().
In short, the end result would not be "Java" any more and would be of little interest to the vast majority of Java developers. And any code developed using the new tools/JVM/libraries would be tainted.
Now if Sun/Oracle were proposing / making this change ... that would be interesting.
Scala (a language which runs on top of the JVM) may allow you to get around the problem of type erasure using the powerful concept of manifests, which essentially give you reified types.
More info: http://www.scala-blogs.org/2008/10/manifests-reified-types.html
The question is meaningless unless you are asserting that the JDK compiler doesn't implement the language correctly. Any compiler that didn't obey the same rules wouldn't be a Java compiler, so nobody could recommend its use.
It would be doable, but I'm not aware of anyone who has done it yet. It would require a significant rewrite of javac to make it instantiate generics when needed (creating a new .class file for each instantiation), but otherwise it should be reasonably straightforward. It could even add support for using primitive types as generic type arguments.

Is static metaprogramming possible in Java?

I am a fan of static metaprogramming in C++. I know Java now has generics. Does this mean that static metaprogramming (i.e., compile-time program execution) is possible in Java? If so, can anyone recommend any good resources where one can learn more about it?
No, this is not possible. Generics are not as powerful as templates. For instance, a template argument can be a user-defined type, a primitive type, or a value; but a generic type argument can only be Object or a subtype thereof.
Edit: This is an old answer; since then, annotations and annotation processing (available since Java 5 and Java 6, respectively) can be used for such trickery.
The short answer
This question is nearly 10 years old, but I am still missing one answer to it. And this is: yes, but not because of generics, and not quite the same as in C++.
As of Java 6, we have the pluggable annotation processing API. Static metaprogramming is (as you already stated in your question)
compile-time program execution
If you know about metaprogramming, then you also know that this is not really true, but for the sake of simplicity we will use this definition. Please look here if you want to learn more about metaprogramming in general.
The pluggable annotation processing API is called by the compiler right after the .java files are read, but before the compiler writes the bytecode to the .class files. (I had a source for this, but I cannot find it anymore; maybe someone can help me out here?)
It allows you to run logic at compile time with pure Java code. However, the world you are coding in is quite different; not specifically bad or anything, just different. The classes you are analyzing do not exist yet, and you are working on metadata of the classes. But the compiler runs in a JVM, which means you can also create classes and program normally. Furthermore, you can analyze generics, because the annotation processor is called before type erasure.
The main gist of static metaprogramming in Java is that you provide metadata (in the form of annotations), and the processor is able to find all annotated classes and process them. An introductory example can be found on Baeldung, which in my opinion is quite a good source for getting started. If you understand that, try searching for more yourself; there are multiple good sources out there, too many to list here. Also take a look at Google AutoService, which utilizes an annotation processor to take away the hassle of creating and maintaining service files. If you want to create classes, I recommend looking at JavaPoet.
Sadly, though, this API does not allow us to manipulate source code. But if you really want to, you should take a look at Project Lombok. They do it, but it is not supported.
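To make this concrete, here is a minimal processor sketch; the annotation name com.example.GenerateToString is made up, but the javax.annotation.processing API is the standard one:

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("com.example.GenerateToString") // hypothetical annotation
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class GenerateToStringProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                    // Runs inside the compiler: we can inspect the annotated
                    // element, report errors ("fail fast"), or emit new source
                    // files through processingEnv.getFiler().
                    processingEnv.getMessager().printMessage(
                            Diagnostic.Kind.NOTE,
                            "processing " + element.getSimpleName(), element);
                }
            }
            return true; // claim these annotations; no other processor handles them
        }
    }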
Why is this important (Further reading for the interested ones among you)
TL;DR: It is quite baffling to me why we don't use static metaprogramming as much as dynamic, because it has many, many advantages.
Most developers see "dynamic vs. static" and immediately jump to the conclusion that dynamic is better. Nothing wrong with that; static has a lot of negative connotations for developers. But in this case (and specifically for Java) it is exactly the other way around.
Dynamic metaprogramming requires reflection, which has some major drawbacks. There are quite a lot of them; in short: performance, security, and design.
Static metaprogramming (i.e. annotation processing) allows us to hook into the compiler, which already does most of the things we try to accomplish with reflection. We can also create classes in this process, which are again passed to the annotation processors. You can then (for example) generate classes which do what normally would have to be done using reflection. Furthermore, we can implement a "fail fast" system, because we can inform the compiler about errors, warnings, and so on.
To conclude and compare as far as possible: imagine Spring. Spring tries to find all @Component-annotated classes at runtime (which we could simplify by using service files at compile time), then generates certain proxy classes (which we could already have generated at compile time) and resolves bean dependencies (which, again, we could already have done at compile time). See Jake Wharton's talk about Dagger 2, in which he explains why they switched to static metaprogramming. I still don't understand why the big players like Spring don't use it.
This post is too short to fully explain those differences and why static would be more powerful. I am currently working on a presentation about this. If you are interested and speak German (sorry about that), you can have a look at my website, where you will find a presentation which tries to explain the differences in 45 minutes (slides only, though).
Take a look at Clojure. It's a LISP with Macros (meta-programming) that runs on the JVM and is very interoperable with Java.
What exactly do you mean by "static metaprogramming"? Yes, C++-style template metaprogramming is impossible in Java, but Java offers other methods, much more powerful than those of C++:
reflection
aspect-oriented programming (@AspectJ)
bytecode manipulation (Javassist, ObjectWeb ASM, Java agents)
code generation (Annotation Processing Tool, template engines like Velocity)
Abstract Syntax Tree manipulations (APIs provided by popular IDEs)
possibility to run Java compiler and use compiled code even at runtime
There's no best method: each of those methods has its strengths and weaknesses.
Due to flexibility of JVM, all of those methods in Java can be used both at compilation time and runtime.
No. What's more, generic types are erased to their upper bound by the compiler, so you cannot create a new instance of a generic type T at runtime.
The best way to do metaprogramming in Java is to circumvent type erasure and hand in the Class<T> object of your type T. Still, this is only a hack.
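A minimal sketch of that Class<T> hack (the Factory name is arbitrary):

    public class Factory {
        static <T> T create(Class<T> type) throws ReflectiveOperationException {
            // "new T()" is illegal under erasure; the Class token carries
            // the type information to runtime instead.
            return type.getDeclaredConstructor().newInstance();
        }

        public static void main(String[] args) throws ReflectiveOperationException {
            StringBuilder sb = create(StringBuilder.class);
            System.out.println(sb.append("works"));
        }
    }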
If you need powerful compile-time logic for Java, one way to get it is with some kind of code generation. Since, as other posters have pointed out, the Java language doesn't provide features suitable for compile-time logic, this may be your best option (if you really do need compile-time logic). Once you have exhausted the other possibilities and you are sure you want to do code generation, you might be interested in my open-source project Rjava, available at:
http://www.github.com/blak3mill3r
It is a Java code generation library written in Ruby, which I wrote in order to generate Google Web Toolkit interfaces for Ruby on Rails applications automatically. It has proved quite handy for that.
As a warning, it can be very difficult to debug Rjava code; Rjava doesn't do much checking, it just assumes you know what you're doing. That's pretty much the state of static metaprogramming anyway. I'd say it's significantly easier to debug than anything non-trivial done with C++ TMP, and it is possible to use it for the same kinds of things.
Anyway, if you were considering writing a program which outputs Java source code, stop right now and check out Rjava. It might not do what you want yet, but it's MIT licensed, so feel free to improve it, deep fry it, or sell it to your grandma. I'd be glad to have other devs who are experienced with generic programming to comment on the design.
Lombok offers a weak form of compile time metaprogramming. However, the technique they use is completely general.
See Java code transform at compile time for a related discussion
You can use a metaprogramming library for Java such as Spoon: https://github.com/INRIA/spoon/
No; generics in Java are purely a way to avoid casting from Object.
In a very reduced sense, maybe?
http://michid.wordpress.com/2008/08/13/type-safe-builder-pattern-in-java/
