Annotation processor handles only recently changed classes - java

In our project we want to create unique identifiers for user interface dialogs. To make sure developers do not create duplicate identifiers, I created an annotation processor which checks for a "Dialog" annotation (it contains the unique identifier) and, via the Messager class, reports an error in the Eclipse Problems view when a duplicate identifier is detected.
I expected the processor to process all classes annotated with the dialog annotation, but it only does so if I invoke a complete build within Eclipse. If I change a single dialog class and save it, the processor only processes this single dialog (I believe this is called an incremental build), which makes it impossible for me to check whether other dialogs have already been assigned the same identifier.
I tried setting "Run this container's processors in batch mode" in the .factorypath file, but it did not seem to have an effect. The Eclipse documentation on the Factory Path preferences says about this option:
This option only applies to processors using the Java 5 Mirror APIs. It does not affect processors using the Java 6 annotation processing APIs.
I use the Java 6 annotation processing API, so this does not seem to be an option either.
Is there any way to make the processor handle all annotated classes, even the unchanged ones? I'm also glad to hear about other ways to address the initial problem.

The annotation processor only sees the classes that are currently being compiled, so no wonder you are having trouble.
I assume that your processor collects all ids in memory during a compilation run. It would be better to store the collected ids in a cache (e.g. a file) and check for duplicates there, instead of scanning all annotated classes (all classes, to be exact) on every compilation. That way, a single compilation of a class is enough to register its id, and every future modification is checked against the already compiled, unmodified classes without reprocessing them.
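Roughly, such a processor could look like the sketch below. The @Dialog annotation name, the cache file location, and the way the id is read from the annotation mirror are placeholders/simplifications, not taken from your project:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.Dialog")            // hypothetical annotation
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class DialogIdProcessor extends AbstractProcessor {

    // Placeholder cache location, relative to the compiler's working directory.
    private static final File CACHE = new File("dialog-ids.properties");

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        Properties knownIds = load();
        for (TypeElement annotation : annotations) {
            for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                String owner = e.toString();                // qualified name of the dialog class
                String id = readIdFrom(e);
                String previousOwner = knownIds.getProperty(id);
                if (previousOwner != null && !previousOwner.equals(owner)) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            "Dialog id '" + id + "' is already used by " + previousOwner, e);
                } else {
                    knownIds.setProperty(id, owner);
                }
            }
        }
        store(knownIds);
        return true;
    }

    // Simplified: assumes @Dialog is the first annotation on the element and
    // that its single "id" element is set explicitly.
    private String readIdFrom(Element e) {
        return e.getAnnotationMirrors().get(0).getElementValues()
                .values().iterator().next().getValue().toString();
    }

    private Properties load() {
        Properties p = new Properties();
        if (CACHE.exists()) {
            try {
                FileInputStream in = new FileInputStream(CACHE);
                p.load(in);
                in.close();
            } catch (IOException ignored) { }
        }
        return p;
    }

    private void store(Properties p) {
        try {
            FileOutputStream out = new FileOutputStream(CACHE);
            p.store(out, "known dialog ids");
            out.close();
        } catch (IOException ignored) { }
    }
}

One thing a real implementation would also need is invalidation: when a dialog class is renamed or deleted, its entry has to be removed from the cache, otherwise you will get false duplicates.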

Related

Is there a way to use annotations to build a java class, based on the properties of multiple java classes?

Is there a way to use annotations to build a Java class, based on the properties of multiple Java classes?
I want to create a generic log history table for all operations and entities in a Spring Data JPA project. For this I was wondering whether it would be possible to get all the properties of my entities at compile time, to generate this generic entity log class.
I don't know much about annotations, but they are used to generate source files, so I believe that isn't an impossible idea.
Could someone give me some direction? If it's possible, it would be nice to point me to a good starting point, or to something already done that matches my intent.
Annotations themselves do not generate source files -- they mark pointcuts for other classes to enhance/enrich, or act as marker interfaces.
However, you can definitely use an annotation scanner to scan files and get all the fields.
Then what is left is generating a class from this.
(and then compile it). Be aware that this is a multi-step process, and it may seem a bit clunky: you create a file named GenericEntity, make sure it's in the proper package (so start it with package my.fun.project), write the imports, and write the Java class, all as strings which you send to the file.
From your scan you have an annotated field/class and you can get the type and name (see the Reflections library if necessary), and write that to your file as well. Then close the class properly with a }. Now you should have a file which gives no compilation errors when loaded in your IDE.
This GenericEntityGenerator then has to be executed (using a Maven plugin, probably) on your source code, most likely during the generate-sources phase, after which your generated class will be compiled during the compile phase... and Bob's your uncle.
In all, a fun project
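As a rough illustration of the steps above, here is a small generator sketch. The @Audited marker annotation, the output path and the naming scheme are made up for the example; in a real build you would wire this into the generate-sources phase and scan the classpath instead of passing classes in by hand:

import java.io.File;
import java.io.PrintWriter;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Hypothetical marker for entities whose properties should end up in the log class.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Audited { }

public class GenericEntityGenerator {

    // Entities would normally come from a classpath scan (e.g. the Reflections library);
    // they are passed in explicitly here to keep the sketch small.
    public static void generate(Class<?>... entities) throws Exception {
        File outFile = new File("target/generated-sources/my/fun/project/GenericEntityLog.java");
        outFile.getParentFile().mkdirs();
        try (PrintWriter out = new PrintWriter(outFile, "UTF-8")) {
            out.println("package my.fun.project;");
            out.println();
            out.println("public class GenericEntityLog {");
            for (Class<?> entity : entities) {
                if (!entity.isAnnotationPresent(Audited.class)) continue;
                for (Field f : entity.getDeclaredFields()) {
                    // One generated field per entity property, prefixed with the entity name.
                    out.println("    private " + f.getType().getCanonicalName() + " "
                            + decapitalize(entity.getSimpleName()) + capitalize(f.getName()) + ";");
                }
            }
            out.println("}");
        }
    }

    private static String capitalize(String s)   { return Character.toUpperCase(s.charAt(0)) + s.substring(1); }
    private static String decapitalize(String s) { return Character.toLowerCase(s.charAt(0)) + s.substring(1); }
}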

can we use common javax.annotation's in my servlet class

Can I use common Java annotations like javax.annotation.Resource, javax.annotation.PreDestroy, or javax.annotation.security.RolesAllowed in a servlet class?
Technically, nothing stops you from doing so. You can use those (well, pretty much any other) annotations in any class you want, provided the annotation can be applied to that type of Java element (method, field, class). The annotations themselves do not bring any logic or magic into your code. Check the source yourself: you'll see that most of them are nothing but data holders.
Annotations are nothing without processors. The processing can occur at compile time or at runtime.
Compile-time annotation processing can change the source code (simplifying things here) before it is compiled. It can introduce extra behavior (like null checks) or generate methods (like Lombok's @Getter / @Setter). Compile-time annotations are usually discarded after processing; they are not present at runtime.
The annotations you've asked about are a different kind: runtime annotations. Obviously, resource injection or role checks cannot be done at compile time. So they need a processor to take effect too, but that processor must be present on the classpath of the running app. Usually, such processors are provided by the environment you run your application in. In your case, these are Java EE annotations and they require a Java EE compatible application server to work, i.e. the magic will happen only if you run your class inside a configured application server, like GlassFish or WebSphere. If you run your application in Tomcat or Jetty (which are web servers/servlet containers), the annotations won't work; they will simply be ignored, because no processors exist to process them.
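For illustration, the following servlet compiles and deploys anywhere, but the injection and the role check only take effect on a Java EE compatible server; the JNDI name and role name below are placeholders:

import java.io.IOException;
import javax.annotation.Resource;
import javax.annotation.security.RolesAllowed;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

// Mapped in web.xml (or via @WebServlet on Servlet 3.0+).
public class ReportServlet extends HttpServlet {

    // Injected by a Java EE container; stays null on a plain Tomcat/Jetty without extra setup.
    @Resource(name = "jdbc/reportsDb")
    private DataSource dataSource;

    // Compiles fine, but is only enforced where a container/processor actually reads it.
    @RolesAllowed("report-admin")
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.getWriter().println("datasource injected: " + (dataSource != null));
    }
}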

How to use the Java Instrumentation API to reload classes when they change on the file system?

I don't want to use the URL Classloader to load classes.
I want to implement this myself.
I don't want to use a solution like JRebel (although it's great).
I've got prior experience with Javassist, bytecode generation, implementing javaagent class transformers, etc.
I would like to write a javaagent which hooks into the classloader or defines its own system classloader.
I'll store the class files in an in-memory cache, and for particular files, periodically reload them from disk.
I'd prefer to do this in a way which doesn't involve continuously polling the file system and manually invalidating specific classes. I'd much rather intercept class loading events.
I last messed around with this stuff 4 years ago, and I'm fairly sure (although my memory may deceive me) that it was possible, but 8 hours of searching Google doesn't present an obvious solution beyond building a patched JVM.
Is this actually possible?
I've created a stub implementation at https://github.com/packetops/poc_agent if anyone's interested in a simple example of javaagent use.
update
Just found this post - I may have been using the wrong approach, I'll investigate further.
It depends on what you want to do. If you want to reload your classes and define new ones, then you are fine with implementing your own classloader, as you already found.
If you want to replace existing classes, things become more involved. You can do this by implementing your own tiny Java agent. See the Java documentation on how to do this: http://docs.oracle.com/javase/7/docs/api/java/lang/instrument/package-summary.html
With the instrumentation mechanism you cannot freely redefine classes; to quote from Instrumentation.redefineClasses:
The redefinition may change method bodies, the constant pool and attributes. The redefinition must not add, remove or rename fields or methods, change the signatures of methods, or change inheritance. These restrictions may be lifted in future versions. The class file bytes are not checked, verified and installed until after the transformations have been applied; if the resultant bytes are in error this method will throw an exception.
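Within those restrictions, a bare-bones sketch of such an agent is below. The agent jar's manifest must declare Premain-Class (and Can-Redefine-Classes: true); the file-watching code that decides when to call redefine(), and the paths, are left out as placeholders:

import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReloadAgent {

    private static volatile Instrumentation instrumentation;

    // Invoked by the JVM before main() when started with -javaagent:reload-agent.jar
    public static void premain(String agentArgs, Instrumentation inst) {
        instrumentation = inst;
    }

    // Call this from your own watcher when a .class file has changed on disk.
    public static void redefine(Class<?> clazz, String pathToNewClassFile) throws Exception {
        byte[] newBytes = Files.readAllBytes(Paths.get(pathToNewClassFile));
        instrumentation.redefineClasses(new ClassDefinition(clazz, newBytes));
    }
}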
If you want to do more, you need to load the class again. This can be done under the same name by using a different classloader. The previous class definition will be unloaded if no one else is using it any more. So you also need to reload every class that uses your previous class. Ultimately, you end up reinventing something like OSGi. Take a look at: Unloading classes in java?
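For the "load it again with a different classloader" route, a rough sketch could look like this (directory and class name are placeholders; a new loader instance is created per reload so the old class can be collected once nothing references it):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskClassLoader extends ClassLoader {

    private final Path classDir;

    public DiskClassLoader(Path classDir, ClassLoader parent) {
        super(parent);
        this.classDir = classDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        Path file = classDir.resolve(name.replace('.', '/') + ".class");
        try {
            byte[] bytes = Files.readAllBytes(file);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    public static void main(String[] args) throws Exception {
        // A throwaway loader per reload; the null parent keeps the class from being
        // resolved out of the application class path, so the fresh bytes win.
        Class<?> fresh = new DiskClassLoader(Paths.get("build/classes"), null)
                .loadClass("com.example.MyDialog");
        System.out.println("loaded by: " + fresh.getClassLoader());
    }
}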

Modern Java Annotation Processing

Is annotation processing still an active part of Java 6+, or is it something that has been deprecated/discouraged/made obsolete? If obsolete, why (why is it no longer needed/useful)? And if it's still extremely useful and "active" (a new Java project developing against the Java 6+ JDK would still benefit from it), please confirm/correct my understanding of how annotation processors are used:
You create your own annotation class, say @MyAnnotation
You mark certain classes, methods, fields, etc. with @MyAnnotation
During the build process, you invoke your custom MyAnnotationProcessor (how?)
The processor scans your classpath for instances of @MyAnnotation
Typically, an annotation processor does dynamic bytecode injection, modifying/enhancing your compiled classes on-the-fly
Correct.
Correct.
Correct. Typically you will extend AbstractProcessor. You specify your MyAnnotationProcessor class using the ServiceLoader pattern - this makes it discoverable to the compiler. The Processor Javadoc contains some information on creating this class.
Correct.
This part is not correct. The annotation processor framework does not give you the ability to modify classes; you would need some post-compile step in your build to do that. What it does allow you to do is create new files. These may be simple resources, new Java source files (which will subsequently be compiled and are themselves eligible for annotation processing during the same compile), or Java class files. You may also perform custom checks on source code and report errors (causing the compile to fail).
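To make points 3-5 concrete, here is a minimal sketch of a processor that generates a new source file instead of modifying existing classes. The annotation name and the generated content are placeholders; the processor is made discoverable by listing its fully qualified name in META-INF/services/javax.annotation.processing.Processor:

import java.io.IOException;
import java.io.Writer;
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

@SupportedAnnotationTypes("com.example.MyAnnotation")      // placeholder annotation
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class MyAnnotationProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element annotated : roundEnv.getElementsAnnotatedWith(annotation)) {
                String generatedName = annotated.getSimpleName() + "Info";
                try {
                    // Files created through the Filer are compiled (and processed)
                    // in a later round of the same compilation.
                    Writer w = processingEnv.getFiler()
                            .createSourceFile("com.example.generated." + generatedName)
                            .openWriter();
                    w.write("package com.example.generated;\n"
                          + "public class " + generatedName + " { }\n");
                    w.close();
                } catch (IOException e) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            "could not generate " + generatedName, annotated);
                }
            }
        }
        return true;
    }
}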
I hope that addresses your understanding questions.

The drawbacks of annotation processing in Java?

I am considering starting a project which is used to generate code in Java using annotations (I won't get into specifics, as it's not really relevant). I am wondering about the validity and usefulness of the project, and something that has struck me is the dependence on the Annotation Processor Tool (apt).
What I'd like to know, as I can't speak from experience, is what are the drawbacks of using annotation processing in Java?
These could be anything, including the likes of:
it is hard to do TDD when writing the processor
it is difficult to include the processing in a build system
processing takes a long time, and it is very difficult to get it to run fast
using the annotations in an IDE requires a plugin for each, to get it to behave the same when reporting errors
These are just examples, not my opinion. I am in the process of researching if any of these are true (including asking this question ;-) )
I am sure there must be drawbacks (for instance, Qi4J specifically list not using pre-processors as an advantage) but I don't have the experience with it to tell what they are.
The only reasonable alternative to using annotation processing is probably to create plugins for the relevant IDEs to generate the code (something vaguely similar to the override/implement methods feature that generates all the signatures without method bodies). However, that step would have to be repeated each time the relevant parts of the code change; annotation processing would not, as far as I can tell.
In regard to the example given with the invasive amount of annotations, I don't envision the use needing to be anything like that, maybe a handful for any given class. That wouldn't stop it being abused, of course.
I created a set of JavaBean annotations to generate property getters/setters, delegation, and interface extraction (edit: removed link; no longer supported)
Testing
Testing them can be quite trying...
I usually approach it by creating a project in Eclipse with the test code and building it, then making a copy and turning off annotation processing.
I can then use Eclipse to compare the "active" test project to the "expected" copy of the project.
I don't have too many test cases yet (it's very tedious to generate so many combinations of attributes), but this is helping.
Build System
Using annotations in a build system is actually very easy. Gradle makes this incredibly simple, and using it in eclipse is just a matter of making a plugin specifying the annotation processor extension and turning on annotation processing in projects that want to use it.
I've used annotation processing in a continuous build environment, building the annotations & processor, then using it in the rest of the build. It's really pretty painless.
Processing Time
I haven't found this to be an issue -- just be careful about what you do in the processors. I generate a lot of code in mine and it runs fine. It's a little slower in Ant.
Note that Java 6 processors can run a little faster because they are part of the normal compilation process. However, I've had trouble getting them to work properly in a code-generation capacity (I think much of the problem is Eclipse's support and running multi-phase compiles). For now, I stick with Java 5.
Error Processing
This is one of the best-thought-through things in the annotation API. The API has a Messager object that handles all errors. Each IDE provides an implementation that converts this into appropriate error messages at the right location in the code.
The only Eclipse-specific thing I did was to cast the processing environment object so I could check whether it was being run as a build or for editor reconciliation. If editing, I exit. Eventually I'll change this to do just the error checking at edit time so it can report errors as you type. Be careful, though -- you need to keep it really fast for use during reconciliation, or editing gets sluggish.
Code Generation Gotcha
[added a little more per comments]
The annotation processor specification states that you are not allowed to modify the class that contains the annotation. I suspect this is to simplify the processing (further rounds do not need to include the annotated classes, and it also prevents infinite update loops).
You can generate other classes, however, and they recommend that approach.
I generate a superclass for all of the get/set methods and anything else I need to generate. I also have the processor verify that the annotated class extends the generated class. For example:
@Bean(...)
public class Foo extends FooGen
I generate a class in the same package with the name of the annotated class plus "Gen" and verify that the annotated class is declared to extend it.
I have seen someone use the Compiler Tree API to modify the annotated class -- this is against the spec, and I suspect they'll plug that hole at some point so it won't work.
I would recommend generating a superclass.
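A hedged sketch of that verification step, pulled out as a helper the processor could call for each annotated type (class and method names here are illustrative, not the actual JavaBean annotation code):

import javax.annotation.processing.Messager;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

public final class GeneratedSuperclassCheck {

    private GeneratedSuperclassCheck() { }

    // Verifies that e.g. Foo is declared as "extends FooGen" (same package, "Gen" suffix).
    public static void verify(TypeElement annotatedClass, Messager messager) {
        String expectedSuper = annotatedClass.getQualifiedName() + "Gen";
        String actualSuper = annotatedClass.getSuperclass().toString();
        if (!actualSuper.equals(expectedSuper)) {
            messager.printMessage(Diagnostic.Kind.ERROR,
                    annotatedClass.getSimpleName() + " must extend the generated class " + expectedSuper,
                    annotatedClass);
        }
    }
}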
Overall
I'm really happy using annotation processors. Very well designed, especially looking at IDE/command-line build independence.
For now, I would recommend sticking with the Java 5 annotation processors if you're doing code generation -- you need to run a separate tool called apt to process them, then do the compilation.
Note that the API for Java 5 and Java 6 annotation processors is different! The Java 6 processing API is better IMHO, but I just haven't had luck with Java 6 processors doing what I need yet.
When Java 7 comes out I'll give the new processing approach another shot.
Feel free to email me if you have questions. (scott@javadude.com)
Hope this helps!
If you write an annotation processor, then definitely use the Java 6 version of the API. That is the one which will be supported in the future. The Java 5 API was still in the non-official com.sun.xyz namespace.
I think we will see a lot more uses of the annotation processor API in the near future. For example, Hibernate is developing a processor for the new JPA 2 query-related static metamodel functionality. They are also developing a processor for validating Bean Validation annotations. So annotation processing is here to stay.
Tool integration is OK. The latest versions of the mainstream IDEs contain options to configure annotation processors and integrate them into the build process. The mainstream build tools also support annotation processing, though Maven can still cause some grief.
Testing, however, I find a big problem. All tests are indirect and somehow verify the end result of the annotation processing. I cannot write simple unit tests which just assert simple methods working on TypeMirrors or other reflection-based classes. The problem is that one cannot instantiate these types of classes outside the processor's compilation cycle. I don't think Sun really had testability in mind when designing the API.
One specific thing that would help in answering the question is: as opposed to what? Not doing the project, or doing it without annotations? And if not using annotations, what are the alternatives?
Personally, I find excessive annotations unreadable, and many times too inflexible. Take a look at this single method on a web service implementing a vendor-required WSDL:
@WebMethod(action=QBWSBean.NS+"receiveResponseXML")
@WebResult(name="receiveResponseXML"+result,targetNamespace = QBWSBean.NS)
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public int receiveResponseXML(
@WebParam(name = "ticket",targetNamespace = QBWSBean.NS) String ticket,
@WebParam(name = "response",targetNamespace = QBWSBean.NS) String response,
@WebParam(name = "hresult",targetNamespace = QBWSBean.NS) String hresult,
@WebParam(name = "message",targetNamespace = QBWSBean.NS) String message) {
I find that code highly unreadable. An XML configuration alternative isn't necessarily better, though.
