I mainly work with Java 8, but I am also creating a library that targets Java 6, so that other people can use it in the future as well, as it is quite interesting.
The problem is that I could very easily solve an issue by using Java 8's Predicate<T>; however, I am unsure how to backport it.
I see the following options available, but they either have issues or I'm unsure how to use them:
Use Google Guava's Predicate<T>. This, however, introduces a relatively big dependency where I do not really need one; also, when a Java 8 user wants to use a Predicate, Google Guava's import for the Predicate class shows up.
Use my own Predicate<T>: no big dependency, but still the same issues as mentioned above.
Use a custom name like TessPredicate<T> (Tess will be a relevant name in my project); that does not feel that nice either.
Use a name that makes sense in the project setting, such as (tentatively) RegexVerificationPredicate, as it is a predicate combined with a regular expression, so that you can also do calculations on the elements; bank codes etc. usually have some checksum that you need to compute. Implemented as a functional interface, this might be the most feasible option?
Backport java.util.function from Java 8 to Java 6; is this even possible?
How can I solve this?
You can’t backport the java.util.function package, due to the heavy use of default and static methods within these interfaces; such a backport would look quite different.
I recommend creating your own Predicate<T> interface, keeping it as minimal as possible, i.e. having just that single abstract method with the same signature as the Java 8 Predicate<T>. Using the same interface name and method signature as the well-known Java 8 interface acts as self-documentation.
This implies that programmers using Java 8 can still implement your predicate using a lambda expression or method reference (without even importing your interface). And using a Java 8 predicate is as easy as passing predicate::test to your method.
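As a sketch, such a minimal interface and a library method accepting it could look like this (the Library class and method names are illustrative, not from the question):

```java
// A minimal Java 6 compatible mirror of java.util.function.Predicate:
// one abstract method with the same name and signature.
interface Predicate<T> {
    boolean test(T input);
}

// Example library method accepting the predicate; compiles under Java 6.
class Library {
    static int countMatching(Iterable<String> items, Predicate<String> predicate) {
        int count = 0;
        for (String item : items) {
            if (predicate.test(item)) {
                count++;
            }
        }
        return count;
    }
}
```

A Java 8 caller can then write `Library.countMatching(list, s -> !s.isEmpty())`, or adapt an existing `java.util.function.Predicate` via `javaPredicate::test`; a Java 6 caller uses an anonymous inner class.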
Adding a dependency to an entire 3rd party library just for one interface looks nasty to me.
I'd recommend using Guava. When I first used it, I thought the same way (too big a dependency), but over time I started to use other features provided by Guava, and now I don't understand how I would do without them (they are my must-have tools right now); the code is clearer, faster, and easily maintainable.
The fact that the Java SDK (especially Java 8) has taken a lot of features from Guava tells a lot. So even though you can write your own implementation, in the long term, use of the library is preferable.
Related
I was writing some code and was going to throw a RuntimeException as a default for something, when I noticed that there are two options for RuntimeException - one from java.lang and one from kotlin:
Inside the kotlin version of this file, there's an entire list of other declarations that work the same way:
So, from my understanding, the one from package kotlin is simply an alias for the Java equivalent (correct me if I'm wrong), which leads me to the question:
What is the point of having this alias file, and when should you use it over the "standard" Java equivalent? Does this simply save a few imports?
When using the JDK, these map to JDK classes. When using Kotlin for JavaScript, they map to a specific implementation in the Kotlin JavaScript library. Documentation about actual and expect here.
To answer your question: if there's a chance of porting your code to also work on another platform, always use the kotlin.* variant. Otherwise, it doesn't matter.
In my opinion, typealias is a strong feature provided by the Kotlin language.
Why? Because it gives the ability to extend code to multiple domains, for example providing interoperability between Kotlin and any other language.
It also becomes helpful when providing APIs or SDKs with semantic versioning, without worrying much about changes affecting lower versions of the APIs.
One good example is the collection types carried over from Java into the Kotlin language as typealiases, with some additional and powerful methods (literally, it saves a lot of development time and effort).
Another good example is the multiplatform programming support in Kotlin, which helps you create APIs using the actual and expect keywords.
So, long story short, I prefer using RuntimeException from the kotlin package to keep support across a variety of Kotlin versions (you can see newly added classes starting from version 1.3, but they don't affect existing APIs).
I have stumbled upon Hashing class from com.google.common.hash package.
IntelliJ IDEA shows the following warning if I use functions of that class:
The class itself is annotated with the @Beta annotation:
The description of the @Beta annotation says:
Signifies that a public API (public class, method or field) is subject to incompatible changes, or even removal, in a future release. An API bearing this annotation is exempt from any compatibility guarantees made by its containing library. Note that the presence of this annotation implies nothing about the quality or performance of the API ...
So the implementation of the API is fine and stable?
... in question, only the fact that it is not "API-frozen."
It is generally safe for applications to depend on beta APIs, at the cost of some extra work ...
Which kind of extra work?
... during upgrades. However it is generally inadvisable for libraries (which get included on users' CLASSPATHs, outside the library developers' control) to do so.
The question is whether it is safe / stable to use mentioned class and its functionality? What is the tradeoff while using a beta API?
The implementation of the API is fine; you can rely on it, since this is an extensively used library from Google.
As for stability: you can do a little research here and compare a couple of versions of this API a year apart, say 23.0 versus 27.0-jre:
https://google.github.io/guava/releases/23.0/api/docs/com/google/common/hash/Hashing.html
https://google.github.io/guava/releases/27.0-jre/api/docs/com/google/common/hash/Hashing.html
If you do a diff, the APIs from the different years (2017 versus 2018) are exactly the same.
Therefore, I would interpret the @Beta here as a heads-up that this API may change in the future, but in practice the API is stable, reliable, and heavily used.
Maybe at some point the Google developers will choose to remove the @Beta annotation. Or maybe they intend to and have forgotten (speculation...).
The "extra work" referred to means that, if you build an application using this API, you may need to refactor your application slightly (imagine that a method signature changes, or a method becomes deprecated and replaced) when you upgrade to the newest version of the API.
The degree of work there depends on how heavily and how often you use the API, and how deep the dependency on that API is (transitively, through other libraries, for example - those would also need to be rebuilt).
In summary, in this case: "don't worry, move along" :)
So the implementation of the API is fine and stable?
No way to know from this annotation.
To answer that you need to know how widely used it is, and for how long.
Which kind of extra work?
The kind of extra work that you have to do when a method that required only 1 parameter and returned String now requires 3 parameters, and returns a List<String>.
i.e.: Code that uses this API might need to change due to API change.
So the implementation of the API is fine and stable?
The quoted text says that the API is "subject to incompatible changes". That means that it (the API) is not stable.
Note also that the quoted text explicitly states that the annotation is saying nothing about whether or not the API's implementation works.
But also note that this is not a yes / no issue. It is actually a yes / no / maybe issue. Some questions don't have answers.
Which kind of extra work?
Rewriting some of your code ... if the API changes.
The question is whether it is safe / stable to use mentioned class and its functionality?
This requires the ability to predict the future. It is unanswerable. (Unless you ask the people who put that annotation on that API. They may be able to make a reliable prediction ...)
It is also unanswerable because it depends on what you mean by safe, and what context you intend to use the Hashing class in.
What is the tradeoff while using a beta API?
The tradeoff is self evident:
On the plus side, you get to use new API functionality that may be beneficial to your application in the long run. (And if there is no evidence that it may be beneficial, this whole discussion is moot!)
On the minus side, you may have to rewrite some of your code if the authors modify the API ... as they say they might do.
I'm developing an open source library in Java and would like to ensure that it is convenient for Java 8 users, and takes advantage of new concepts in Java 8 wherever possible (lambdas etc.)
At the same time I absolutely need to maintain backwards compatibility (the library must still be usable for people using Java 6 or 7).
What useful features from Java 8 can I adopt that would be beneficial for library users without breaking library compatibility for users of older Java versions?
Since I don't know your library, this advice might be slightly off.
Lambdas: Don't worry. Any functional interface can be implemented using a Lambda expression.
Method references: Same as lambdas, they should just be usable.
Streams: If this fits your library, you should use them, but keeping compatibility is harder here. The backwards compatibility could be achieved using a second library part, wrapping around the base library and hooking into the public API of it. It could therefore provide additional sugar/functionality without abandoning Java 6/7.
Default methods: By all means, use these! They are a quick/cheap/good way to enhance existing implementations without breaking them. All default methods you add will be automatically available for implementing classes. These will, however, also need the second library part, so you should provide the base interfaces in your base library, and extend the interfaces from the companion-library.
Don't fork the library, abandoning the old one, as there are still many developers who cannot use Java 8, or even Java 7. If your library is sensible to use on e.g. Android, please keep that compatibility.
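The default-method idea above can be sketched as follows, with hypothetical Source/RichSource names: the Java 6 base library ships the single-method interface, and the Java 8 companion adds behavior without touching existing implementations.

```java
import java.util.List;

// Base interface as it might appear in the Java 6 library (single abstract method).
interface Source<T> {
    List<T> fetchAll();
}

// Java 8 companion interface: adds a convenience default method, so every
// existing Source implementation gains fetchFirst() without any code changes.
interface RichSource<T> extends Source<T> {
    default T fetchFirst() {
        List<T> all = fetchAll();
        return all.isEmpty() ? null : all.get(0);
    }
}
```

Note that RichSource still has exactly one abstract method (the inherited fetchAll), so a Java 8 user can implement it with a lambda.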
If you want your code to be usable by Java 6 consuming VMs, you have to write using Java 6 language compatibility. Alas. The bytecode format critically changed from 6 to 7, so code for 7 and 8 won't load into 6. (I found this with my own code migrating to 7; even when all I was using was multi-catch — which should be writable in the 6 bytecode — I couldn't build it for a 6 target. The compiler refused.)
I've yet to experiment with 8, but you'll have to be careful because if you depend on any new Java packages or classes, you will still be unable to work with older versions; the old consuming VMs simply won't have access to the relevant classes and won't be able to load the code. In particular, lambdas definitely won't work.
Well, can we target a different classfile version? There's no combination of source and target that will actually make javac happy with this.
kevin$ $JAVA8/bin/javac -source 1.8 -target 1.7 *.java
javac: source release 1.8 requires target release 1.8
So there's simply no way to compile Java source with lambdas into a pre-Java 8 classfile version.
My general take is that if you want to support old versions of Java, you have to use old versions of Java to do so. You can use a Java 8 toolchain, but you need Java 7 or Java 6 source. You can do it by forking the code, maintaining versions for all the versions of Java you want to support, but that's far more work than you could ever justify as a lone developer. Pick the minimum version and go with that (and kiss those juicy new features goodbye for now, alas).
Using any of the new language features in Java 8 also requires producing Java 8 bytecode.
$ javac -source 1.8 -target 1.7
javac: source release 1.8 requires target release 1.8
That means your options are quite limited. You cannot use lambdas, method references, default methods, Streams, etc. and maintain backwards compatibility.
There are still two things you can do that users of Java 8 will benefit from. The first is using Functional Interfaces in your public API. If your methods take Runnables, Callables, Comparators, etc. as parameters, then users of Java 8 will be able to pass in lambdas. You may want to create your own Single-Abstract-Method interfaces as well. If you find you need Functions and Predicates, I'd suggest reusing the ones that ship with GS Collections or Guava instead of writing your own.
The second thing you can do is use a rich collections API that benefits from using Functional Interfaces. Again, that means using GS Collections or Guava. For example, if you have a method that would return List, return MutableList or ImmutableList instead. That way, callers of the method will be able to chain usages of the rich API exposed by these interfaces.
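As a sketch of the first point, a Java 6 compatible method that accepts an existing single-abstract-method type such as Comparator is automatically lambda-friendly for Java 8 callers (the Sorting class and method names here are illustrative):

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Compiles under Java 6. Because Comparator has a single abstract method,
// Java 8 callers can supply it as a lambda; Java 6 callers use an
// anonymous inner class.
class Sorting {
    static <T> T max(List<T> items, Comparator<? super T> comparator) {
        return Collections.max(items, comparator);
    }
}
```

A Java 8 user can call `Sorting.max(words, (a, b) -> a.length() - b.length())` with no changes on the library side.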
As said by others, providing and using interfaces with a single method such that they can be implemented using lambdas or method references when using Java 8 is a good way of supporting Java 8 without breaking Java 7 compatibility.
This can be complemented by providing methods by your library which fit into one of the standard function types of Java 8 (e.g. Supplier, (Bi)Consumer, (Bi)Function) so that Java 8 developers can create method references to them for Java 8 API methods. This implies that their signature matches one of these functional interfaces and they don’t throw checked exceptions. This often comes naturally, e.g. getFoo() may act as a Function and isBar() as a Predicate but sometimes it’s possible to improve methods by thinking about possible Java 8 use scenarios.
For example, if you provide a method taking two parameters, it’s useful to choose the order where the first parameter is the one that is more likely to be a key in a Map. So it is more likely to be useful for Map.forEach with a method reference.
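For instance, a two-parameter library method whose first parameter plays the key role matches the shape of BiConsumer and can be handed to Map.forEach directly (the Registry class below is a hypothetical example, not from the original answer):

```java
// Hypothetical library method: the parameter order (key-like first,
// value-like second) matches BiConsumer<String, Integer>, so a Java 8
// user can pass Registry::record straight to Map.forEach.
class Registry {
    static final StringBuilder LOG = new StringBuilder();

    static void record(String name, Integer count) {
        LOG.append(name).append('=').append(count).append(';');
    }
}
```

Usage from Java 8: `map.forEach(Registry::record);`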
And avoid methods with ambiguous signatures. E.g., if you have a class Foo with an instance method ReturnType bar() and a static method ReturnType bar(Foo), neither of them can be used as a method reference anymore, as Foo::bar would be ambiguous. Eliminate or rename one of these methods.
It's important that such methods do not have undocumented internal state that causes surprising behavior when used from multiple threads; otherwise, they cannot be used with parallel streams.
Another opportunity that should not be underestimated is to use names for classes, interfaces, and members that follow the patterns introduced by the Java 8 API. E.g., if you have to introduce some sort of filter interface with a test method for your library that ought to work with Java 7 as well, you should name the interface Predicate and the method test, to associate it with the similarly named functional interface of Java 8.
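A sketch of the clash and one possible fix (the Foo/size names are illustrative): had the static method also been named size, the reference Foo::size would be rejected as ambiguous.

```java
class Foo {
    private final int value;

    Foo(int value) { this.value = value; }

    // Instance method; Foo::size refers unambiguously to this one ...
    int size() { return value; }

    // ... because the static variant got a distinct name. A static
    // "int size(Foo f)" would make Foo::size ambiguous and therefore
    // unusable as a method reference.
    static int sizeOf(Foo f) { return f.size(); }
}
```

With the rename in place, `Function<Foo, Integer> f = Foo::size;` compiles without ambiguity.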
What do you think is the best way to use Guava? On the web site, the developers say that the interfaces are subject to change until they release 1.0. Taking this into account, the code you write shouldn't depend directly on those interfaces; so, are you wrapping all the Guava code you call in some kind of layer or facade in your projects, so that if those interfaces change, you at least have the changes centralized in one place?
Which is the best way to go? I am really interested in starting to use it, but I have that question hitting my mind hahah :)
I'm not sure where you're getting that about the interfaces being subject to change until version 1.0. That was true with Google Collections, Guava's predecessor, but that has had its 1.0 release and is now a part of Guava. Additionally, nothing that was part of Google Collections will be changed in a way that could break code.
Guava itself doesn't even use a release system with a concept of "1.0". It just does releases, labeled "r05", "r06" and so on. All APIs in Guava are effectively frozen unless they are marked with the @Beta annotation. If @Beta is on a class or interface, anything in that class is subject to change. If a class isn't annotated with it, but some methods in the class are, those specific methods are subject to change.
Note that even with the @Beta APIs, the functionality they provide will very likely not be removed completely; at most, they'll probably just change how that functionality is provided. Additionally, I believe they deprecate the original form of any @Beta API they change for one release before removing it completely, giving you time to see that it's changed and update to the new form of that API. @Beta also doesn't mean that a class or method isn't well-tested or suitable for production use.
Finally, this shouldn't be much of an issue if you're working on an application that uses Guava. It should be easy enough to update to a new version whenever, just making changes here and there if any @Beta APIs you were using changed. It's people writing libraries that use Guava who really need to avoid using @Beta APIs, as using one could create a situation where you're unable to switch to a newer version of Guava in your application, or to use another library that uses a newer version, because it would break code in the older library that depends on a changed or removed beta API.
After reading some OpenJDK mailinglist entries, it seems that the Oracle developers are currently further removing things from the closure proposal, because earlier design mistakes in the Java language complicate the introduction of the Java closures.
Considering that Scala closures are much more powerful than the closures planned for Java 8, I wonder whether it will be possible to, e.g., call a Java method taking a closure from Scala, or define a closure in Java and give it to a Scala function, etc.
So will Java closures be represented like their Scala counterparts in bytecode or differently?
Will it be possible to close the gap of functionality between Java/Scala closures?
I think it's more complicated than assuming there's two groups of stakeholders here. The Project Lambda people seem to be working mostly independently of the Oracle people, occasionally throwing something over the wall that the Project Lambda people find out indirectly. (Scala, is of course the third stakeholder.)
Since the latest Project Lambda proposal is to eliminate function types altogether, and just create some sort of fancy inference for implementing interfaces that have a single abstract method (SAM types), I foresee the following:
Calling Scala code that requires a Scala closure will depend entirely on the implementation of the Function* traits (and the implementation of traits in general): whether it appears to the Java compiler as a SAM (which it is in Scala-land), or whether the non-abstract methods also appear abstract to the JVM. (I would think they currently do look abstract, since traits are implemented as interfaces, but I know almost nothing about Scala's implementation. This could be a big hurdle to interoperability.)
Complications with Java generics (in particular how to express Int/int/Integer, or Unit/Nothing/void in a generic interface) may also complicate things.
Using Scala functions to implement Java SAMs will not be any different than it is now -- you need to create an implicit conversion for the specific interface you wish to implement.
If the JVM gets function types (and Oracle seems not to have eliminated that possibility), it may depend on how they're implemented. If they're first-class objects implementing a particular interface, then all that Scala needs to do to be compatible is make Function* implement the new interface. If a new kind of type is implemented in the JVM entirely, then this could be difficult; the Scala developers may wrap them using magic like they currently do for Arrays, or they may create implicit conversions. (A new language concept seems a bit far-fetched.)
I hope that one of the results of all of this discussion is that all of the various JVM languages will agree on some standard way to represent closures -- so that Scala, Groovy, JRuby, etc... can all pass closures back and forth with a minimum of hassle.
What's more interesting to me is the proposals for virtual extension methods that will allow the Java Collections API to use lambdas. Depending on how these are implemented, they may greatly simplify some of the binary compatibility problems that we've had to deal with when changing Scala code, and they may help to more easily and efficiently implement traits.
I hope that some of the Scala developers are getting involved and offering their input, but I haven't actually seen any discussion of Scala on the Project Lambda lists, nor any participants who jump out to me as being Scala developers.
You would likely be able to do this extremely easily using implicit conversions a la collection.JavaConversions whether or not they come out of the box.
Of course, this is not obviously so, because it may be the case that Java closures are turned into types which get generated by the JVM at runtime; I seem to recall a Neal Gafter presentation saying something along these lines.
Note: five years later, Scala 2.12.0-M3 (Oct. 2015) did include this enhancement:
Scala 2.12 emits closures in the same style as Java 8.
For each lambda the compiler generates a method containing the lambda body.
At runtime, this method is passed as an argument to the LambdaMetaFactory provided by the JDK, which creates a closure object.
Compared to Scala 2.11, the new scheme has the advantage that the compiler does not generate an anonymous class for each lambda anymore. This leads to significantly smaller JAR files.