I am trying out Java 7 for one project and getting warnings from annotation processors (Bindgen and Hibernate JPA modelgen) of this sort:
warning: Supported source version 'RELEASE_6' from annotation processor 'org.hibernate.jpamodelgen.JPAMetaModelEntityProcessor' less than -source '1.7'
This is caused by the @SupportedSourceVersion(SourceVersion.RELEASE_6) annotation on the annotation processor classes. Since they are compiled with Java 6, the highest value of SourceVersion available to them is RELEASE_6. The Java 7 version of SourceVersion introduces RELEASE_7.
My questions: How are annotation processors supposed to handle forward compatibility? Will there have to be separate jdk6 and jdk7 binary versions of them? Am I not understanding something else here?
I only found the following information regarding this concern:
Querydsl bug report, which used
@Override
public SourceVersion getSupportedSourceVersion() {
    return SourceVersion.latest();
}
Oracle blog post in which a commenter recommends supporting the latest source version
Forward compatibility is handled by processing unknown language constructs appropriately, for example by implementing ElementVisitor.visitUnknown.
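For illustration, a minimal sketch of that idea (the visitor class and the logging are made up; only ElementVisitor.visitUnknown comes from the API):

import javax.lang.model.element.Element;
import javax.lang.model.util.SimpleElementVisitor6;

// Sketch: instead of failing on constructs introduced after the version this
// visitor was written for, record them and carry on. The inherited
// visitUnknown implementation would throw UnknownElementException.
class TolerantVisitor extends SimpleElementVisitor6<Void, Void> {
    @Override
    public Void visitUnknown(Element e, Void p) {
        System.out.println("Skipping unknown language construct: " + e.getSimpleName());
        return null;
    }
}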
There is another entry in the mentioned Oracle blog, which suggests two policies regarding forward compatibility:
Write the processor to only work with the current language version.
Write the processor to cope with unknown future constructs.
The second one is done by returning SourceVersion.latest() as already posted in the question.
I think it's ok to do this in most cases, when you are sure additional language elements won't break anything. Of course you shouldn't just assume that everything will be fine even with newer versions, you should test it too.
Ok, I guess processing unknown language constructs appropriately sounds a bit vague, so here are some examples.
Suppose you have a processor that checks for a custom type of annotation on known language constructs (annotations on a class, for example) and creates simple documentation of what it has found. You are probably safe to assume it will also work in newer versions. Restricting it to a particular version would not be a good decision in my opinion.
Suppose you have a processor that checks every element it can find and analyses the code structure to generate a graph from it. You may run into problems with newer versions. You may be able to handle unknown language constructs somehow (for example by adding an unknown node to the graph), but only do this if it makes sense - and if it's worth the trouble. If the processor just wouldn't be useful any more when confronted with something unknown, it should probably stick to a particular Java version.
Regardless of the policy used, the best way in my opinion is to monitor upcoming changes to the language and update the processor accordingly. In Java 7, for example, Project Coin introduced some new language features which are most likely not even visible to a processor. Java 8, on the other hand, does have new constructs that will affect processing, for example type annotations. New language features don't happen that often, though, so chances are you won't need to change anything for a long time.
I'm currently working on optimizing a particular method which, unfortunately, is inlined by the JVM, and that prevents it from being properly vectorized. I've noticed that there is an annotation to forbid inlining, namely jdk.internal.vm.annotation.DontInline. However, it cannot be accessed from the default module.
Is there a clean way of gaining access to this annotation or to prevent the inlining of the offending method some other way?
DontInline, ForceInline, etc. are JDK-internal annotations; they cannot be applied to user code. Even if you somehow manage to open these annotations, the HotSpot JVM has an explicit check to disallow them for non-privileged classes.
The reasons are understandable. These annotations are an implementation detail of a particular JVM version; JDK developers are free to add, remove, or change the meaning of such annotations without notice, even in a minor JDK update.
Using @DontInline to force vectorization does not seem like a good approach anyway. In general, inlining should not prevent other optimizations. If you encounter such a problem, it's better to report an issue on the hotspot-compiler-dev mailing list.
Now the good news.
Since JDK 9, there is a public supported API to manually tune JIT compiler. This is JEP 165: Compiler Control.
The idea is to provide compiler directives in a separate JSON file and start the JVM with the -XX:CompilerDirectivesFile=<file> option. If your application is sensitive to certain compiler decisions, you may ship the directives file along with the application. For example:
{
    match: "*::*",
    inline: "-org/package/MyClass::hotMethod"
}
It is even possible to apply compiler directives programmatically in runtime using DiagnosticCommand API:
// Uses java.lang.management.ManagementFactory and javax.management.ObjectName
ManagementFactory.getPlatformMBeanServer().invoke(
        new ObjectName("com.sun.management:type=DiagnosticCommand"),
        "compilerDirectivesAdd",
        new Object[]{new String[]{"compiler.json"}},
        new String[]{"[Ljava.lang.String;"}
);
By the way, there is a Vectorize: true option among the available directives, which may help in vectorizing the particular method.
My initial question was an exact duplicate of this one; that is, why is it that this interface has a runtime retention policy.
But the accepted answer does not satisfy me at all, for two reasons:
the fact that this interface is @Documented has (I believe) nothing to do with it (although why @Documented has a runtime retention policy is a mystery to me as well);
even though many "would-be" functional interfaces existed in Java prior to Java 8 (Comparable, as the answer mentions, but also Runnable etc.), this does not prevent them from being used as "substitutes" (you can perfectly well use a DirectoryStream.Filter as a substitute for a Predicate if all you do is filter on Path, for instance).
But still, it has this retention. Which means that it has to influence the JVM behavior somehow. How?
I've found the thread in the core-libs-dev mailing list which discusses the retention of the @FunctionalInterface annotation. The main point mentioned there is to allow third-party tools to use this information for code analysis/validation and to allow non-Java JVM languages to map their lambdas correctly to functional interfaces. Some excerpts:
Joe Darcy (original committer of @FunctionalInterface):
We intentionally made this annotation have runtime retention to allow it to also be queried to various tools at runtime, etc.
Brian Goetz
There is a benefit for languages other than Java, that can use this as a means to determine whether the interface is suitable for passing to the SAM conversion machinery. The JDK support for lambda conversion is available to other languages as well.
So it seems that it's not used by the JVM itself; it's just an additional possibility for third-party tools. Making the annotation runtime-visible is not a big cost, so it seems there were no strong reasons not to do this.
The only requirement for annotations with retention policy runtime is
Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively. (https://docs.oracle.com/javase/7/docs/api/java/lang/annotation/RetentionPolicy.html#RUNTIME)
Now this has some consequences for runtime behaviour, since the class loader must load these annotations and the VM must keep them in memory for reflective access (for example by third-party libraries).
There is however no requirement for the VM to act on such annotations.
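As a small illustration of that reflective visibility (just a sketch; Runnable carries the annotation as of Java 8):

public class RetentionDemo {
    public static void main(String[] args) {
        // Because @FunctionalInterface has RUNTIME retention, any tool or
        // library can query it reflectively on a loaded class.
        boolean fi = Runnable.class.isAnnotationPresent(FunctionalInterface.class);
        System.out.println("Runnable is declared @FunctionalInterface: " + fi); // true on Java 8+
    }
}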
I am running into the issue where Rhino throws the "Encountered code generation error while compiling script: generated bytecode for method exceeds 64K limit" exception when running Rhino via the javax.script.ScriptEngine API. The accepted solution appears to be to invoke setOptimizationLevel(-1) on the sun.org.mozilla.javascript.Context.
Unfortunately, I cannot seem to access the Context that is created by the ContextFactory. I have tried adding a ContextFactory.Listener to ContextFactory.getGlobal() that would modify the Context after creation, but my listener never seems to get called. I also took a look at the RhinoScriptEngine source from Java 6 to see whether there was a property that I could set that the ContextFactory would read from in order to determine the value of the optimization level.
As far as I can tell, in Java 7, RhinoScriptEngine sets the optimization level to -1 by default and makes it possible to set the optimization level via the rhino.opt.level property. Compare the makeContext() method in the Java 7 version with the makeContext() method in the Java 6 version to see what I mean.
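(For what it's worth, on Java 7 setting that property would look roughly like this; this is just a sketch based on the makeContext() source linked above, not something that helps on Java 6:)

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class RhinoOptLevel {
    public static void main(String[] args) throws Exception {
        // Must be set before the engine creates its Context.
        System.setProperty("rhino.opt.level", "-1"); // interpreted mode, no 64K method limit
        ScriptEngine js = new ScriptEngineManager().getEngineByName("JavaScript");
        js.eval("var x = 1 + 1;");
    }
}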
As far as I can tell, I believe that my best option is to run Rhino directly, as shown in this example of using Rhino to run the CoffeeScript compiler. Though as you can see, the code is a lot messier, so I would prefer to use the javax.script.ScriptEngine API, if possible, while continuing to support Java 6. Are there any other options?
No, according to the documentation: http://docs.oracle.com/javase/6/docs/technotes/guides/scripting/programmer_guide/index.html#jsengine
Where it says:
A few components have been excluded due to footprint and security reasons:
JavaScript-to-bytecode compilation (also called "optimizer"). This feature depends on a class generation library. The removal of this feature means that JavaScript will always be interpreted. The removal of this feature does not affect script execution because the optimizer is transparent.
The optimizer class has been excluded from the Rhino bundled with JDK 6, therefore the optimization level cannot be set on Java 6.
I'm running with Java 6 and it also appears to be set to -1 by default. Or rather, it's set to -1 unless sun.org.mozilla.javascript.internal.optimizer.Codegen is on the classpath.
I am considering starting a project which is used to generate code in Java using annotations (I won't get into specifics, as it's not really relevant). I am wondering about the validity and usefulness of the project, and something that has struck me is the dependence on the Annotation Processor Tool (apt).
What I'd like to know, as I can't speak from experience, is what are the drawbacks of using annotation processing in Java?
These could be anything, including the likes of:
it is hard to do TDD when writing the processor
it is difficult to include the processing on a build system
processing takes a long time, and it is very difficult to get it to run fast
using the annotations in an IDE requires a plugin for each, to get it to behave the same when reporting errors
These are just examples, not my opinion. I am in the process of researching if any of these are true (including asking this question ;-) )
I am sure there must be drawbacks (for instance, Qi4j specifically lists not using pre-processors as an advantage) but I don't have the experience with it to tell what they are.
The only reasonable alternative to using annotation processing is probably to create plugins for the relevant IDEs to generate the code (something vaguely similar to the override/implement methods feature that generates all the signatures without method bodies). However, that step would have to be repeated each time the relevant parts of the code change, whereas annotation processing would not, as far as I can tell.
Regarding the example given with the invasive amount of annotations, I don't envision the usage needing to be anything like that; maybe a handful for any given class. That wouldn't stop it being abused, of course.
I created a set of JavaBean annotations to generate property getters/setters, delegation, and interface extraction (edit: removed link; no longer supported)
Testing
Testing them can be quite trying...
I usually approach it by creating a project in Eclipse with the test code and building it, then making a copy and turning off annotation processing.
I can then use Eclipse to compare the "active" test project to the "expected" copy of the project.
I don't have too many test cases yet (it's very tedious to generate so many combinations of attributes), but this is helping.
Build System
Using annotations in a build system is actually very easy. Gradle makes this incredibly simple, and using it in Eclipse is just a matter of making a plugin specifying the annotation processor extension and turning on annotation processing in projects that want to use it.
I've used annotation processing in a continuous build environment, building the annotations & processor, then using it in the rest of the build. It's really pretty painless.
Processing Time
I haven't found this to be an issue - be careful of what you do in the processors. I generate a lot of code in mine and it runs fine. It's a little slower in Ant.
Note that Java 6 processors can run a little faster because they are part of the normal compilation process. However, I've had trouble getting them to work properly in a code generation capacity (I think much of the problem is Eclipse's support and running multiple-phase compiles). For now, I stick with Java 5.
Error Processing
This is one of the best-thought-through things in the annotation API. The API has a Messager object that handles all errors. Each IDE provides an implementation that converts this into appropriate error messages at the right location in the code.
The only Eclipse-specific thing I did was to cast the processing environment object so I could check whether it was being run as a build or for editor reconciliation. If editing, I exit. Eventually I'll change this to just do error checking at edit time so it can report errors as you type. Be careful, though -- you need to keep it really fast for use during reconciliation or editing gets sluggish.
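A minimal sketch of reporting through it, using the Java 6 (javax.annotation.processing) flavour of the API; the processor and the message are purely illustrative:

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("*")
public class ReportingProcessor extends AbstractProcessor {
    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                // The IDE turns this into a marker at the element's location in the source.
                processingEnv.getMessager().printMessage(
                        Diagnostic.Kind.WARNING, "illustrative message", element);
            }
        }
        return false; // don't claim the annotations
    }
}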
Code Generation Gotcha
[added a little more per comments]
The annotation processor specifications state that you are not allowed to modify the class that contains the annotation. I suspect this is to simplify the processing (further rounds do not need to include the annotated classes, and it prevents infinite update loops as well).
You can generate other classes, however, and they recommend that approach.
I generate a superclass for all of the get/set methods and anything else I need to generate. I also have the processor verify that the annotated class extends the generated class. For example:
@Bean(...)
public class Foo extends FooGen
I generate a class in the same package with the name of the annotated class plus "Gen" and verify that the annotated class is declared to extend it.
I have seen someone use the compiler tree API to modify the annotated class -- this is against the spec, and I suspect they'll plug that hole at some point so it won't work.
I would recommend generating a superclass.
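A rough sketch of that generation step, using the Java 6 Filer API (the names and the generated body are made up):

import java.io.IOException;
import java.io.Writer;
import javax.annotation.processing.ProcessingEnvironment;
import javax.lang.model.element.TypeElement;

// Hypothetical helper: writes FooGen next to the annotated class Foo so that
// "public class Foo extends FooGen" compiles in the same build.
class SuperclassGenerator {
    static void generate(ProcessingEnvironment env, TypeElement annotated) throws IOException {
        String pkg = env.getElementUtils().getPackageOf(annotated).getQualifiedName().toString();
        String name = annotated.getSimpleName() + "Gen";
        Writer out = env.getFiler().createSourceFile(pkg + "." + name, annotated).openWriter();
        try {
            out.write("package " + pkg + ";\n");
            out.write("public abstract class " + name + " {\n");
            out.write("    // generated getters/setters go here\n");
            out.write("}\n");
        } finally {
            out.close();
        }
    }
}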
Overall
I'm really happy using annotation processors. Very well designed, especially looking at IDE/command-line build independence.
For now, I would recommend sticking with the Java 5 annotation processors if you're doing code generation - you need to run a separate tool called apt to process them, then do the compilation.
Note that the API for Java 5 and Java 6 annotation processors is different! The Java 6 processing API is better IMHO, but I just haven't had luck with java 6 processors doing what I need yet.
When Java 7 comes out I'll give the new processing approach another shot.
Feel free to email me if you have questions. (scott@javadude.com)
Hope this helps!
I think if you write an annotation processor then you should definitely use the Java 6 version of the API. That is the one which will be supported in the future. The Java 5 API was still in the non-official com.sun.xyz namespace.
I think we will see a lot more uses of the annotation processor API in the near future. For example, Hibernate is developing a processor for the new JPA 2 query-related static metamodel functionality. They are also developing a processor for validating Bean Validation annotations. So annotation processing is here to stay.
Tool integration is OK. The latest versions of the mainstream IDEs contain options to configure annotation processors and integrate them into the build process. The mainstream build tools also support annotation processing, though Maven can still cause some grief.
Testing, however, I find to be a big problem. All tests are indirect and somehow verify the end result of the annotation processing. I cannot write any simple unit tests which just assert simple methods working on TypeMirrors or other reflection-based classes. The problem is that one cannot instantiate these types of classes outside the processor's compilation cycle. I don't think that Sun really had testability in mind when designing the API.
One specific that would be helpful in answering the question is: as opposed to what? Not doing the project, or doing it without using annotations? And if not using annotations, what are the alternatives?
Personally, I find excessive annotations unreadable, and many times too inflexible. Take a look at this for one method on a web service to implement a vendor required WSDL:
@WebMethod(action=QBWSBean.NS+"receiveResponseXML")
@WebResult(name="receiveResponseXML"+result, targetNamespace = QBWSBean.NS)
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public int receiveResponseXML(
        @WebParam(name = "ticket", targetNamespace = QBWSBean.NS) String ticket,
        @WebParam(name = "response", targetNamespace = QBWSBean.NS) String response,
        @WebParam(name = "hresult", targetNamespace = QBWSBean.NS) String hresult,
        @WebParam(name = "message", targetNamespace = QBWSBean.NS) String message) {
I find that code highly unreadable. An XML configuration alternative isn't necessarily better, though.
I have one java program that has to be compiled as 1.4, and another program that could be anything (so, 1.4 or 1.6), and the two need to pass serialized objects back and forth. If I define a serializable class in a place where both programs can see it, will java's serialization still work, or do I need 1.6-1.6 or 1.4-1.4 only?
Make sure the classes to be serialized define and assign a value to a static final long serialVersionUID field and you should be OK.
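For example (the class itself is just illustrative):

import java.io.Serializable;

public class Payload implements Serializable {
    // Pin the stream version so the 1.4- and 1.6-compiled copies of this class agree.
    private static final long serialVersionUID = 1L;

    private String value;

    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
}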
That said, normally I would not do this. My preference is to only use normal serialization either within a single process, or between two processes that are on the same machine and get the serialized classes out of the same jar file. If that's not the case, serializing to XML is the better and safer choice.
Along with the serialVersionUID, the package structure has to remain consistent for serialization, so if you had myjar.mypackage.myclass in 1.4 you have to have myjar.mypackage.myclass in 1.6.
It is not uncommon to have the Java version or your release version somewhere in the package structure. Even if the serialVersionUID remains the same between compilations, a changed package structure will cause an incompatible-version exception to be thrown at runtime.
BTW if you implement Serializable in your classes you should get a compiler warning if serialVersionUID is missing.
In my view (and based on some years of quite bitter experience) Java native serialization is fraught with problems and ought to be avoided if possible, especially as there is excellent XML/JSON support. If you do have to serialize natively, then I recommend that you hide your classes behind interfaces and implement a factory pattern in the background which will create an object of the right class when needed.
You can also use this abstraction for detecting the incompatible version exception and doing whatever conversion is necessary behind the scenes for migration of the data in your objects.
Java library classes should have compatible serialised forms between 1.4 and 1.6 unless otherwise stated. Swing explicitly states that it is not compatible between versions, so if you are trying to serialise Swing objects then you are out of luck.
You may run into problems where the code generated by javac is slightly different. This will change the serialVersionUID. You should ensure you explicitly declare the UID in all your serialisable classes.
No, different versions of the JVM will not break the serialization itself.
If some of the objects you are serializing are from the Java runtime, and their classes have evolved incompatibly, you will see failures. Most core Java classes are careful about this, but there have been discontinuities in some packages in the past.
I've successfully used serialization (in the context of RMI) with classes from different compilations on different machines running different versions of the Java runtime for years.
I don't want to digress too far from the original question, but I want to note that evolving a serialized class always requires care, regardless of the format. It is not an issue specific to Java serialization. You have to deal with the same concepts whether you serialize in XML, JSON, ASN.1, etc. Java serialization gives a fairly clear specification of what is allowed and how to make the changes that are allowed. Sometimes this is restrictive, other times it is helpful to have a prescription.
If both sides use the same jar file, it will work most of the time. However, if you use different versions of the same package/module/framework (for instance different WebLogic jars, or extended usage of some "rare" exceptions), a lot of integration testing is needed before it can be approved.