I am considering starting a project that would generate Java code using annotations (I won't get into specifics, as they're not really relevant). I am wondering about the validity and usefulness of the project, and one thing that has struck me is its dependence on the Annotation Processing Tool (apt).
What I'd like to know, as I can't speak from experience, is what are the drawbacks of using annotation processing in Java?
These could be anything, including the likes of:
it is hard to do TDD when writing the processor
it is difficult to include the processing on a build system
processing takes a long time, and it is very difficult to get it to run fast
using the annotations in an IDE requires a plugin for each, to get it to behave the same when reporting errors
These are just examples, not my opinion. I am in the process of researching whether any of these are true (including by asking this question ;-) )
I am sure there must be drawbacks (for instance, Qi4J specifically lists not using pre-processors as an advantage), but I don't have the experience with it to tell what they are.
The only reasonable alternative to using annotation processing is probably to create plugins for the relevant IDEs to generate the code (something vaguely similar to the override/implement methods feature that generates all the signatures without method bodies). However, that step would have to be repeated each time the relevant parts of the code change; annotation processing would not, as far as I can tell.
Regarding the example given with the invasive amount of annotations, I don't envision the usage needing to be anything like that; maybe a handful for any given class. That wouldn't stop it being abused, of course.
I created a set of JavaBean annotations to generate property getters/setters, delegation, and interface extraction (edit: removed link; no longer supported)
Testing
Testing them can be quite trying...
I usually approach it by creating a project in Eclipse with the test code and building it, then making a copy and turning off annotation processing.
I can then use Eclipse to compare the "active" test project to the "expected" copy of the project.
I don't have too many test cases yet (it's very tedious to generate so many combinations of attributes), but this is helping.
Build System
Using annotations in a build system is actually very easy. Gradle makes this incredibly simple, and using it in Eclipse is just a matter of making a plugin specifying the annotation processor extension and turning on annotation processing in the projects that want to use it.
I've used annotation processing in a continuous build environment, building the annotations & processor, then using it in the rest of the build. It's really pretty painless.
Processing Time
I haven't found this to be an issue - just be careful about what you do in the processors. I generate a lot of code in mine and it runs fine. It's a little slower in Ant.
Note that Java 6 processors can run a little faster because they are part of the normal compilation process. However, I've had trouble getting them to work properly in a code-generation capacity (I think much of the problem is Eclipse's support and running multiple-phase compiles). For now, I stick with Java 5.
Error Processing
This is one of the best-thought-through things in the annotation API. The API has a Messager object that handles all errors. Each IDE provides an implementation that converts this into appropriate error messages at the right location in the code.
The only Eclipse-specific thing I did was to cast the processing environment object so I could check whether it was being run as a build or for editor reconciliation. If editing, I exit. Eventually I'll change this to do just error checking at edit time so it can report errors as you type. Be careful, though -- you need to keep it really fast for use during reconciliation or editing gets sluggish.
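For illustration, here is a minimal sketch of error reporting through the Messager using the Java 6 (javax.annotation.processing) API; the com.example.Bean annotation name is just an assumption carried over from the example further down:

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.ElementKind;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    // Hypothetical processor showing how errors are reported; the IDE (or javac)
    // turns each printMessage call into an error marker at the annotated element.
    @SupportedAnnotationTypes("com.example.Bean")        // assumed annotation name
    @SupportedSourceVersion(SourceVersion.RELEASE_6)
    public class BeanProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                    if (element.getKind() != ElementKind.CLASS) {
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR,
                                "@Bean may only be applied to classes",
                                element);
                    }
                }
            }
            return true;    // no other processor needs to see this annotation
        }
    }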
Code Generation Gotcha
[added a little more per comments]
The annotation processor specification states that you are not allowed to modify the class that contains the annotation. I suspect this is to simplify the processing (further rounds do not need to include the annotated classes, and it prevents infinite update loops as well).
You can generate other classes, however, and they recommend that approach.
I generate a superclass for all of the get/set methods and anything else I need to generate. I also have the processor verify that the annotated class extends the generated class. For example:
    @Bean(...)
    public class Foo extends FooGen
I generate a class in the same package with the name of the annotated class plus "Gen" and verify that the annotated class is declared to extend it.
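As a rough illustration (the property shown here is hypothetical, not something the annotation actually declares), the generated companion might look like this:

    // Hypothetical generated source: FooGen.java, emitted into Foo's package.
    // Foo must be declared as "public class Foo extends FooGen".
    public abstract class FooGen {
        private String name;    // made-up property for illustration

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }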
I have seen someone use the compiler tree API to modify the annotated class - this is against the spec, and I suspect they'll plug that hole at some point so it won't work.
I would recommend generating a superclass.
Overall
I'm really happy using annotation processors. They're very well designed, especially with regard to IDE/command-line build independence.
For now, I would recommend sticking with the Java5 annotation processors if you're doing code generation - you need to run a separate tool called apt to process them, then do the compilation.
Note that the API for Java 5 and Java 6 annotation processors is different! The Java 6 processing API is better IMHO, but I just haven't had luck with Java 6 processors doing what I need yet.
When Java 7 comes out I'll give the new processing approach another shot.
Feel free to email me if you have questions. (scott#javadude.com)
Hope this helps!
I think if you are writing an annotation processor, you should definitely use the Java 6 version of the API. That is the one that will be supported in the future. The Java 5 API was still in the unofficial com.sun.xyz namespace.
I think we will see a lot more uses of the annotation processor API in the near future. For example, Hibernate is developing a processor for the new JPA 2 query-related static metamodel functionality. They are also developing a processor for validating Bean Validation annotations. So annotation processing is here to stay.
Tool integration is OK. The latest versions of the mainstream IDEs contain options to configure annotation processors and integrate them into the build process. The mainstream build tools also support annotation processing, although Maven can still cause some grief.
Testing, though, I find to be a big problem. All tests are indirect and somehow verify the end result of the annotation processing. I cannot write any simple unit tests that just assert simple methods working on TypeMirrors or other reflection-based classes. The problem is that one cannot instantiate these types of classes outside the processor's compilation cycle. I don't think Sun really had testability in mind when designing the API.
One specific that would be helpful in answering the question is: as opposed to what? Not doing the project, or doing it without annotations? And if not using annotations, what are the alternatives?
Personally, I find excessive annotations unreadable, and many times too inflexible. Take a look at this single method on a web service implementing a vendor-required WSDL:
    @WebMethod(action=QBWSBean.NS+"receiveResponseXML")
    @WebResult(name="receiveResponseXML"+result, targetNamespace = QBWSBean.NS)
    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public int receiveResponseXML(
            @WebParam(name = "ticket", targetNamespace = QBWSBean.NS) String ticket,
            @WebParam(name = "response", targetNamespace = QBWSBean.NS) String response,
            @WebParam(name = "hresult", targetNamespace = QBWSBean.NS) String hresult,
            @WebParam(name = "message", targetNamespace = QBWSBean.NS) String message) {
I find that code highly unreadable. An XML configuration alternative isn't necessarily better, though.
Related
I want to write a simple custom validation annotation and can't find proper info on the difference between the two ways I know of processing them, which are:
Reflection API
Custom processor extending javax.annotation.processing.AbstractProcessor
Can someone please tell me the difference between these options and which is better for which scenario? Any help would be appreciated.
An annotation processor runs when you compile your code. At compile time, it can issue warnings, generate new code, or modify existing code. (Modifying existing code is frowned upon, but some processors such as Lombok do it.)
The reflection API is accessed when your code runs. Your code can change its behavior depending on the annotations that it reads via the reflection API.
That explains the key differences. There are too many different scenarios to list them all, but for any given scenario you can determine the pros and cons based on the differences.
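To make the runtime side concrete, here is a minimal self-contained sketch; the Audited annotation and PaymentService class are made up for illustration:

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // RUNTIME retention is what makes the annotation visible to reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Audited {
        String value();
    }

    @Audited("payments")
    class PaymentService {
    }

    class ReflectionDemo {
        public static void main(String[] args) {
            Audited audited = PaymentService.class.getAnnotation(Audited.class);
            if (audited != null) {
                System.out.println("Audit category: " + audited.value());
            }
        }
    }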
I'm developing a small application that passes messages around, which are POJOs. I'd like to construct a registry of the types of messages that can be handled. I originally planned to stick them in an enumerated type as follows.
    public enum Message
    {
        INIT( InitMessage.class, "init" ),
        REFRESH( RefreshMessage.class, "refresh" );

        private Message( Class<?> messageClass, String messageCode )
        {
            // etc..
        }
    }
I don't like this approach, as I have to maintain a central enumerated type to manage the allowed classes. I'd prefer to instead annotate the POJOs.
    @MessageDefinition( "init" )
    public class InitMessage
    {
        // .. some properties with appropriate getters and setters
    }
I checked out the Tomcat source from Subversion, and even with the help of an IDE, it was taking me a very, very long time to wade through the levels of abstraction and get to some gutsy code showing me how it scans the classes in a webapp searching for annotated classes to register @WebServlet, @WebListener, etc. Even more off-putting was the fact that I could only really see references to those annotations in test classes.
My question is: how do annotation-driven frameworks scan for annotations? It's hard to know which classes you're hoping to scan, even more so before they've been loaded by the classloader. I was thinking the approach might be to look for all files that end in a .class extension, load them, and check for annotations. However, this approach sounds pretty nasty.
So I guess it boils down to:
how does Tomcat find annotated-classes? (or any other frameworks you're familiar with)
if Tomcat's (or the framework you mentioned above's) approach sucks, how would you do it?
is there a better way that I should architect this overall?
Note: "Scan the classpath for classes with custom annotation" is great, but I'd love to do it with standard Java 7 if possible.
Update:
Reflections is not suitable. It has a tonne of dependencies, and to make things worse, it relies on old versions of libraries - in particular one that takes pride in deprecating dozens of features every release, and releasing rapidly.
I'm thinking about following this guide and using ASM if possible:
http://bill.burkecentral.com/2008/01/14/scanning-java-annotations-at-runtime/
Tomcat scans for annotations using a cut-down (so only the bits Tomcat needs are present) and package renamed (so it doesn't clash if the web app ships with BCEL as well) version of Apache Commons BCEL. It looks for every .class file, reads the byte code and extracts the annotations.
For the details, you can look at the source code. Start at line 1130 of this file:
http://svn.apache.org/viewvc/tomcat/trunk/java/org/apache/catalina/startup/ContextConfig.java?annotate=1537835
The other popular library for doing this sort of thing is ASM.
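As a rough sketch of the ASM approach (the class and method names here are made up; the ClassReader/ClassVisitor calls are standard ASM API), reading one .class file and printing its class-level annotations looks like this:

    import java.io.IOException;
    import java.io.InputStream;
    import org.objectweb.asm.AnnotationVisitor;
    import org.objectweb.asm.ClassReader;
    import org.objectweb.asm.ClassVisitor;
    import org.objectweb.asm.Opcodes;

    // Reads the bytecode of a single .class file and reports its class-level
    // annotations. Walking the directories/JARs to find .class files is left out.
    public class AnnotationScanner {
        public static void scan(InputStream classFile) throws IOException {
            ClassReader reader = new ClassReader(classFile);
            reader.accept(new ClassVisitor(Opcodes.ASM4) {   // use the constant for your ASM release
                private String className;

                @Override
                public void visit(int version, int access, String name, String signature,
                                  String superName, String[] interfaces) {
                    className = name;   // internal name, e.g. com/example/InitMessage
                }

                @Override
                public AnnotationVisitor visitAnnotation(String descriptor, boolean visible) {
                    // descriptor is a JVM type descriptor, e.g. Lcom/example/MessageDefinition;
                    System.out.println(className + " is annotated with " + descriptor);
                    return null;
                }
            }, ClassReader.SKIP_CODE);
        }
    }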
What are the relative merits and demerits of annotation processing with respect to bytecode generation (e.g. with ASM)? Apart from implementation difficulty, why would you prefer one over the other?
Since a commenter asked, I'm trying to automatically generate implementations for abstract getter/setter methods, but I would like a more general answer. I'm not asking what's the better way to generate getters and setters.
Some bytecode generation libraries contain support for easy creation of getters/setters, which simplifies things significantly - you just import the library classes and write Java code. Some frameworks can even automatically generate getters and setters (along with a whole bunch of other things) based upon a simple annotation on a field.
On the other hand, bytecode generation generally has a runtime performance impact as the new classes are compiled, although that can be mitigated by caching the generated class files.
My experience with annotation processing has not been nearly as pleasant. It generally requires you to configure or even modify your build system so that the annotation processor is executed. In addition, coding an annotation processor can become very uncomfortable if you wish to modify a source code file extensively, and apparently there is nowhere near the same framework/library variety as there is for bytecode generation.
My personal favorite, to be honest, is using Java 7 method handles when possible - or just writing the **** getters and setters by hand.
EDIT:
The main problem with the annotation processing API is that (as far as I know) it does not support modifying code at compile-time. The recommended approach seems to be the generation of independent decorator classes. Sure, that is relatively easy if you use e.g. Apache Velocity but the end result is not nearly the same.
There are some hacks where the original source file is processed to add methods and re-compiled, but even getting the path of the source file is almost impossible. There is usually a lot of guesswork involved, with various assumptions about the project structure being made. In addition, the annotation processor essentially maintains a separate source tree for the processed source files.
Project Lombok (which I can't believe I forgot to mention before) uses a lot of magic of various colors to leverage the annotation processing API into something more usable. It could very well be what you need...
The best thing to do is to use your IDE's accelerators to generate the getters and setters. That way they are going to be present in the source code. That will make reading the code easier and avoid potential problems with your debugger.
Creating getters and setters is a bit tedious, but it is not worth adding a whole bunch of complexity (and potential gotchas) to avoid it. (And if it is really too tedious for you, persuade your boss that you need a "code monkey" to help you.)
Hi guys: Is there an open-source way to associate Java annotations with functional requirements or, for example, TRAC tickets, etc.? I want to do something like this:
I'm thinking along the lines of an eclipse plugin which somehow links up with another FOSS project tracking tool, wiki, or maybe even a CSV file.
A somewhat silly but exemplary illustration of what I desire is below:
    @Requirement("WalkDogTwiceADay")
    public void walkTheDog()
    {
    }

    @Requirement("WalkDogTwiceADay")
    public void dogWalkerThread() throws InterruptedException
    {
        walkTheDog();              // in the morning
        Thread.sleep(36000000);    // roughly 10 hours
        walkTheDog();              // at night
    }
Annotations are metadata; they simply add information to your code for other tools to use or for inspection at runtime via reflection.
One thing you can do is write an annotation processor that will generate the necessary artefacts. Those could be configuration files, scripts, code...
Another thing you can do is write some tool that knows how to interpret your annotations and uses reflection to find them and take the appropriate actions. For this you'd need to make sure that the annotation type is set to have runtime retention, as opposed to only source or class.
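A minimal sketch of that second approach, assuming a made-up @Requirement annotation like the one in the question (note the METHOD target and RUNTIME retention):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Method;

    @Retention(RetentionPolicy.RUNTIME)     // required for reflection to see it
    @Target(ElementType.METHOD)
    @interface Requirement {
        String value();
    }

    class RequirementReport {
        // Prints which methods of a class are tagged with which requirement.
        static void report(Class<?> type) {
            for (Method method : type.getDeclaredMethods()) {
                Requirement req = method.getAnnotation(Requirement.class);
                if (req != null) {
                    System.out.println(method.getName() + " -> " + req.value());
                }
            }
        }
    }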
Perhaps some of the stuff found in the answers to this question might prove of use. If that's the case, go ahead and use it. But writing custom annotation processors or code for handling them is not all that terribly hard. The difficult part is getting to know the Java model API that's used by annotation processors, which is like reflection but at compile time (before you have fully-formed classes).
In a previous life, we did something similar with @Requirement ##### annotations, and then had a custom Javadoc task that turned the requirement annotations into hyperlinks in the Javadocs.
I was going to write an add-in for Eclipse that turned them into links in the code as well, but never got that far.
I've been asked to work on changing a number of classes that are core to the system we work on. The classes in question each require 5 - 10 different related objects, which themselves need a similar number of objects.
Data is also pulled in from several data sources, and the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!
I'm beginning to get overwhelmed with this task. I have tried unit testing with JUnit and Easymock, but as soon as I mock or stub one thing, I find it needs lots more. Everything seems to be quite tightly coupled such that I'm reaching about 3 or 4 levels out with my stubs in order to prevent NullPointerExceptions.
Usually with this type of task, I would simply make changes and test as I went along. But the shortest build cycle is about 10 minutes, and I like to code with very short iterations between executions (probably because I'm not very confident with my ability to write flawless code).
Anyone know a good strategy / workflow to get out of this quagmire?
As you suggest, it sounds like your main problem is that the API you are working with is too tightly coupled. If you have the ability to modify the API, it can be very helpful to hide immediate dependencies behind interfaces so that you can cut off your dependency graph at the immediate dependency.
If this is not possible, an Auto-Mocking Container may be of help. This is basically a container that automatically figures out how to return a mock with good default behavior for nested abstractions. As I work on the .NET framework, I can't recommend any for Java.
If you would like to read up on unit testing patterns and best practices, I can only recommend xUnit Test Patterns.
For strategies for decoupling tightly coupled code I recommend Working Effectively with Legacy Code.
The first thing I'd try to do is shorten the build cycle. Maybe add an option to only build and test the components currently under development.
Next I'd look at decoupling some of the dependencies by introducing interfaces to sit between each component. I'd also want to move the coupling out into the open, most likely using Dependency Injection. If I could not move to DI, I would have two constructors: one no-arg constructor that uses the service locator (or what have you) and one injectable constructor, roughly as sketched below.
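Here OrderService, PaymentGateway, and ServiceLocator are stand-ins for your own classes; the point is just the two construction paths:

    // Stand-in collaborator and locator, only to make the sketch self-contained.
    interface PaymentGateway {
        boolean charge(String orderId, int amountInCents);
    }

    class ServiceLocator {
        static <T> T lookup(Class<T> type) {
            throw new UnsupportedOperationException("wire up your real locator here");
        }
    }

    // Production code uses the no-arg constructor; tests use the injectable one.
    public class OrderService {
        private final PaymentGateway gateway;

        public OrderService() {
            // container/legacy path: resolve the dependency via the service locator
            this(ServiceLocator.lookup(PaymentGateway.class));
        }

        public OrderService(PaymentGateway gateway) {
            // test path: inject a mock or stub directly
            this.gateway = gateway;
        }
    }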
the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!
Is that "without" meant to be a "with"? I would look at moving as much into POJOs as you can so it can be tested without needing to know anything EJB-y.
If your project can compile with Java 1.5, you should look at JMock. Things can get stubbed pretty quickly with the 2.x version of this framework.
The 1.x version will work with a Java 1.3+ compiler, but the mocking is much more verbose, so I would not recommend it.
As for the strategy, my advice to you is to embrace interfaces. Even if you have a single implementation of the given interface, always create an interface. They can be mocked very easily and will allow you much better decoupling when testing your code.
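For example, a minimal JMock 2 sketch against a hypothetical PaymentGateway interface (the names are made up; the Mockery/Expectations calls are JMock's standard API):

    import org.jmock.Expectations;
    import org.jmock.Mockery;

    public class OrderServiceTest {
        interface PaymentGateway {
            boolean charge(String orderId, int amountInCents);
        }

        public void testChargesTheGateway() {
            Mockery context = new Mockery();
            final PaymentGateway gateway = context.mock(PaymentGateway.class);

            // Declare what the code under test is expected to call on the mock.
            context.checking(new Expectations() {{
                oneOf(gateway).charge("order-1", 100);
                will(returnValue(true));
            }});

            // Exercise the code under test with the mock; shown directly here.
            gateway.charge("order-1", 100);

            context.assertIsSatisfied();
        }
    }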