After a bunch of XML config files, I've seen Java moving to annotation-based configuration.
Are annotations playing the role of a DSL here?
Is it because of the static nature of Java? I'm thinking of Ruby, which (AFAIK) doesn't have anything like that. Is it because Ruby has good metaprogramming capabilities?
Are there alternatives (I mean, other than using a bunch of .xml files)?
Basically, annotations are a tool that lets you process source files at compile time and take actions corresponding to the annotations found in the file (possibly deriving new source files).
They are quite useful for many purposes, like making constraints explicit without cluttering the code, or enriching the behaviour of some methods.
I wouldn't say they are that similar to Ruby DSLs, since with annotations you tag code using a fixed syntax, while in Ruby you can design your own DSL from scratch and use it as you wish.
Java ships with a tool called apt (probably the one you have in mind) that processes annotations at compile time; annotations can also be retained and read at run time, but they are usually used to attach compile-time information to your sources. This doesn't mean that in certain circumstances you can't adapt the annotation mechanism to achieve the same things you would with a DSL, but they don't exist for the same purpose.
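As a small illustration, an annotation is just a declared type plus metadata about where it may appear and how long it is retained; the @NotNull name and the class using it below are invented for the example:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical constraint annotation; a compile-time processor (or a runtime
// framework) can pick it up and act on it.
@Retention(RetentionPolicy.CLASS)   // kept in the class file for tools, not needed at run time
@Target(ElementType.PARAMETER)      // may only be placed on method parameters
public @interface NotNull {
}

// Usage: the constraint is stated right where it applies, without cluttering
// the method body with checks.
class AccountService {
    void credit(@NotNull String accountId, long amount) {
        // ...
    }
}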
As already said, annotations can be used to create DSLs quite effectively, because they add some metaprogramming capability to the language. For that purpose you could also use a bytecode injector or other Java language features.
However, the primary purpose of annotations is to annotate source code elements with metadata.
If you are asking for alternatives for creating internal DSLs in Java, have a look at Fowler's DSL book (still a work in progress) and choose from the different concepts that can be used for implementing internal DSLs; many of them are available in Java. If you are asking for alternatives for metaprogramming, there are also many: various bytecode injectors, aspect-oriented programming using AspectJ or Spring, or code generation.
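To give a feel for the AOP alternative, here is a minimal sketch using annotation-style AspectJ (the pointcut expression, package and class names are invented for the example):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;

// Annotation-style AspectJ (also usable with Spring AOP): advice is woven around
// existing methods without touching their source.
@Aspect
public class LoggingAspect {

    // Runs before any method in com.example.service or its sub-packages.
    @Before("execution(* com.example.service..*.*(..))")
    public void logEntry(JoinPoint jp) {
        System.out.println("entering " + jp.getSignature());
    }
}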
I came across the Drools fluent API which, as far as I understand, allows rules to be added/edited/deleted in working memory at runtime.
The documentation mentions it here without going into details:
http://docs.jboss.org/drools/release/5.2.0.Final/droolsjbpm-introduction-docs/html/ch02.html#d0e124
Does anyone have example code on how to use this API?
I am especially interested in adding/changing/deleting rules at runtime.
I think that section was speaking more to the fact that they have a programmatic way to create rules. I was under the impression that the "fluent" part referred to their use of the builder pattern, which lets you string methods together in the same way a rule would read.
But yes, you are able to change/edit/delete rules dynamically for a particular KnowledgeBase. An example can be found in their sample integration tests, or consult the KnowledgeBase docs - particularly the addKnowledgePackages(Collection<KnowledgePackage> kpackages) and removeRule(String packageName, String ruleName) functions.
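A rough sketch against the Drools 5.x knowledge API (the DRL text, package and rule names are invented, and a com.example.Customer fact class is assumed to exist):

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

public class DynamicRules {

    public static void main(String[] args) {
        String drl =
            "package com.example.rules\n" +
            "import com.example.Customer\n" +
            "rule \"GoldCustomerDiscount\"\n" +
            "when\n" +
            "    $c : Customer( status == \"GOLD\" )\n" +
            "then\n" +
            "    $c.setDiscount(10);\n" +
            "end\n";

        // Compile the DRL at runtime.
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newByteArrayResource(drl.getBytes()), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Add the compiled packages to a (possibly already running) knowledge base...
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        // ...and later remove a single rule again by package and rule name.
        kbase.removeRule("com.example.rules", "GoldCustomerDiscount");
    }
}

New sessions created from the KnowledgeBase will see the updated rules; as far as I recall, existing stateful sessions created from it are updated as well.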
I suppose you can combine the DescrFactory with the rule addition/creation. I'm not really able to find a public API anywhere that explains how to use it, and it's in the drools-compiler dependency, so I'm not certain that one will be published, as much of that artifact is meant for Drools-internal use.
There is also another related S/O discussion about this here.
What are the relative merits and demerits of annotation processing compared to bytecode generation (e.g. with ASM)? Implementation difficulty aside, why would you prefer one over the other?
Since a commenter asked, I'm trying to automatically generate implementations for abstract getter/setter methods, but I would like a more general answer. I'm not asking what's the better way to generate getters and setters.
Some bytecode generator libraries contain support for easy creation of getters/setters, which simplifies things significantly - you just import the library classes and write Java code. Some frameworks can even generate getters and setters automatically (along with a whole bunch of other things) based on a simple annotation on a field.
On the other hand, bytecode generation generally has a runtime performance impact as the new classes are generated and loaded, although that can be mitigated by caching the generated classes.
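For a sense of what the bytecode-generation route looks like with a low-level library like ASM, here is a minimal sketch that emits a bean class with one field and its getter (class and field names invented; written against the 4-argument visitMethodInsn of the ASM 3.x/4.x-era API):

import static org.objectweb.asm.Opcodes.*;

import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.MethodVisitor;

public class GetterGenerator {

    // Produces the bytes of: public class GeneratedBean { private String value; public String getValue() {...} }
    public static byte[] generateBean() {
        ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
        cw.visit(V1_6, ACC_PUBLIC, "GeneratedBean", null, "java/lang/Object", null);

        // private String value;
        cw.visitField(ACC_PRIVATE, "value", "Ljava/lang/String;", null, null).visitEnd();

        // public GeneratedBean() { super(); }
        MethodVisitor ctor = cw.visitMethod(ACC_PUBLIC, "<init>", "()V", null, null);
        ctor.visitCode();
        ctor.visitVarInsn(ALOAD, 0);
        ctor.visitMethodInsn(INVOKESPECIAL, "java/lang/Object", "<init>", "()V");
        ctor.visitInsn(RETURN);
        ctor.visitMaxs(0, 0);
        ctor.visitEnd();

        // public String getValue() { return this.value; }
        MethodVisitor get = cw.visitMethod(ACC_PUBLIC, "getValue", "()Ljava/lang/String;", null, null);
        get.visitCode();
        get.visitVarInsn(ALOAD, 0);
        get.visitFieldInsn(GETFIELD, "GeneratedBean", "value", "Ljava/lang/String;");
        get.visitInsn(ARETURN);
        get.visitMaxs(0, 0);
        get.visitEnd();

        cw.visitEnd();
        return cw.toByteArray();  // feed this to a ClassLoader's defineClass, and cache it
    }
}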
My experience with annotation processing has not been nearly as pleasant. It generally requires you to configure or even modify your build system so that the annotation processor is executed. In addition, coding an annotation processor can become very uncomfortable if you wish to modify a source code file extensively, and apparently there is nowhere near the same framework/library variety as there is for bytecode generation.
My personal favorite, to be honest, is using Java 7 method handles when possible - or just writing the **** getters and setters by hand.
EDIT:
The main problem with the annotation processing API is that (as far as I know) it does not support modifying code at compile time. The recommended approach seems to be generating independent decorator classes. Sure, that is relatively easy if you use e.g. Apache Velocity, but the end result is not nearly the same.
There are some hacks where the original source file is processed to add methods and re-compiled, but even getting the path of the source file is almost impossible. There is usually a lot of guesswork involved, with various assumptions about the project structure being made. In addition, the annotation processor essentially maintains a separate source tree for the processed source files.
Project Lombok (which I can't believe I forgot to mention before) uses a lot of magic of various colors to bend the annotation processing API into something more usable. It could very well be what you need...
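As a rough illustration of what that looks like in practice (class and field names invented), the source stays this short and Lombok's processor adds the accessors to the compiled class:

import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

// With Lombok on the compile classpath, the annotation processor generates the
// getters, setters and toString() during compilation.
@Getter
@Setter
@ToString
public class Person {
    private String name;
    private String email;
}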
The best thing to do is to use your IDE's accelerators to generate the getters and setters. That way they are going to be present in the source code. That will make reading the code easier and avoid potential problems with your debugger.
Creating getters and setters is a bit tedious, but it is not worth adding a whole bunch of complexity (and potential gotchas) to avoid it. (And if it is really too tedious for you, persuade your boss that you need a "code monkey" to help you.)
I have written a lot of Java bean classes using my IDE. Another person suggests a different approach: put an XML file with the bean definitions in the project, then use JAXB or XSLT to generate the classes dynamically at build time. Though it's a novel and interesting approach, I do not see any major benefit in it.
I see only one benefit in the suggested approach: the Java bean classes need not be maintained under configuration control. Any bean change will require only an update to the XML file.
Are there any major benefits in dynamically generating Java classes? Is there any other reason why this approach is taken?
I agree with @Akhilss. My experience has been in large-scale Java EE projects where code generation is common.
It all depends on your project. If you are coding only a few beans and only need basic functionality, then I don't see the need to start with XML (which is often overused anyway), especially if you don't actually need the XML itself.
However, if you are building a system which needs the XML anyway, for example a SOAP web service with a WSDL and schema, then generation is a good idea because it saves you from manually keeping schemas and beans in sync, as well as providing factory classes and other support code.
As a counter-argument, with EJB3 and similar standards it's now often easier to write the beans and generate the messy XML stuff on the fly, i.e. let the server do the grunt work.
Another reason to consider code generation is if you require more complex functionality in your beans because they represent data structures. A few years ago I trialled the Apache Tuscany project for generating SDO beans from XML. The nice thing was that I could generate functionality like property change notifications, so when I modified any of the bean's properties (including collections), other parts of the program could be notified automatically. Generated functionality like that can save you a lot of time and money if you need it.
Ultimately, I'd suggest adhering to the KISS principle. So don't add what you don't need. Generated code from XML is useful if it helps you in the long run. But like any technology, be sure you are adding it for the right reasons.
I have used Jibx and its generator in my project. My experience has been mixed.
The usual case for using JAXB's (XJC) generator is referred to in http://static.springsource.org/spring-ws/site/reference/html/why-contract-first.html
Conversion to and from XML makes it possible to store objects in the DB and retrieve them for future use, as well as to use them as test-case input for functional tests.
Using any kind of generator (JAXB, Jibx, XMLBeans, custom) can make sense for large projects. It allows for standardization of data types (like BigDecimal for financial amounts, or ArrayList for all lists) and for forcing interfaces (like Serializable or Cloneable). This enforces good practices and reduces the need for reviews of the generated files.
It also allows for injection of code through XSLT or post-processing of the generated Java files. An example is injecting rounding code with a specific decimal scale (2, 6, 9) and a specific policy (UP, DOWN, NEAR) into the setter method of each field of type financialAmount. Forcing such behaviour reduces the incidence of bugs (for incorrect financial values, which companies are liable for).
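As a rough sketch of what such an injected setter could look like (the Invoice class, field name and the chosen scale/policy are invented for the example):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class Invoice {
    private BigDecimal netAmount;

    public void setNetAmount(BigDecimal netAmount) {
        // Injected by the generator for every field of type "financialAmount":
        // force 2 decimal places, rounding up.
        this.netAmount = netAmount == null
                ? null
                : netAmount.setScale(2, RoundingMode.UP);
    }

    public BigDecimal getNetAmount() {
        return netAmount;
    }
}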
The disadvantages are:
Usually each generated Java class can only be a bean class. Any customization will be overwritten, since (in my case) the generator is tied into the build process and the classes are regenerated with every build.
You cannot implement your own interfaces on a bean class or add annotations for your own or third-party frameworks.
You cannot easily implement patterns like a factory method since default constructors are usually generated. Refactoring is usually difficult since generators do not usually support it.
You may (this was true for Jibx a couple of years ago; I'm not sure now) not be able to generate enums where they would be most applicable.
You may not be able to override the default datatype with your own, regardless of the need - for example a CopyOnWriteArrayList instead of an ArrayList for a variable shared across threads, or a custom List implementation which also implements the Observer pattern.
The benefits of a generator outweigh the costs for large (in persons, not code - think 150 developers in three locations) distributed projects. You can work around the disadvantages by defining your own classes which wrap the bean and implement the behaviour, or by post-processing (adding additional code) driven by further metadata picked up from XSD annotations or another configuration file. Remember that support and maintenance of the generator become critical, since the entire project depends on it. Use it with CAUTION.
For smaller sized projects I personally would write my own classes. For larger sized projects I personally would not use it in the middle tier mostly because of the lack of refactoring support. It can be used for simple beans meant to be bound to UI frameworks.
Are Java annotations used for adding functionality to Java code besides just adding documentation about what's going on in the code? What's the most advanced/complex functionality you could add to your code through an annotation?
Annotations are basically nothing more than a tag (with optional additional data) on a class/method/field. Other code (libraries or tools) can discover these tags and execute functionality dependent on the annotations found. I don't see a real limit on the complexity of the functionality that can be added through annotations. It can, for example, emulate AOP (adding functionality before or after a method with an annotation).
Annotations as such only add information (metadata) to a class.
One can easily build a system that uses that metadata to provide additional functionality, however.
For example you can use apt to generate classes based on the information provided by the annotation.
An annotation needs a tool to react to it. If such a tool does not exist, the annotation is merely a notation. The "tool" can be an APT-based agent or some piece of code that uses reflection (for instance, JUnit's @Test).
Several annotations are recognized by the Java compiler and thus have pre-defined semantics: @Override, @Deprecated, @Target.
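As a small illustration of "a piece of code that uses reflection" reacting to an annotation (the @MyTest annotation and the sample test class are invented; this is a toy version of what JUnit does):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class MiniRunner {

    @Retention(RetentionPolicy.RUNTIME)   // must be RUNTIME to be visible via reflection
    @Target(ElementType.METHOD)
    public @interface MyTest {
    }

    public static class SampleTests {
        @MyTest
        public void addition() {
            if (1 + 1 != 2) throw new AssertionError("math is broken");
        }
    }

    public static void main(String[] args) throws Exception {
        Object instance = SampleTests.class.getDeclaredConstructor().newInstance();
        for (Method m : SampleTests.class.getDeclaredMethods()) {
            if (m.isAnnotationPresent(MyTest.class)) {   // the annotation is just metadata...
                m.invoke(instance);                      // ...until this code reacts to it
                System.out.println("PASSED: " + m.getName());
            }
        }
    }
}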
I would understand annotations as a way to document your code in a machine-readable way.
For example, in Hibernate you can specify the whole persistence mapping for your objects as annotations. That is directly readable for you, rather than sitting in a distant XML file, but it is also readable by the tool to generate configurations, database schemas, etc.
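For illustration, a typical JPA/Hibernate mapping looks roughly like this (entity and column names invented for the example):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// The persistence information sits right next to the fields it describes,
// instead of in a separate mapping file.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "FULL_NAME", nullable = false, length = 100)
    private String name;

    // getters/setters omitted
}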
I am considering starting a project that generates Java code using annotations (I won't get into specifics, as it's not really relevant). I am wondering about the validity and usefulness of the project, and something that has struck me is the dependence on the Annotation Processor Tool (apt).
What I'd like to know, as I can't speak from experience, is what are the drawbacks of using annotation processing in Java?
These could be anything, including the likes of:
it is hard to do TDD when writing the processor
it is difficult to include the processing on a build system
processing takes a long time, and it is very difficult to get it to run fast
using the annotations in an IDE requires a plugin for each IDE, to get them to behave the same when reporting errors
These are just examples, not my opinion. I am in the process of researching if any of these are true (including asking this question ;-) )
I am sure there must be drawbacks (for instance, Qi4J specifically lists not using pre-processors as an advantage), but I don't have the experience with it to tell what they are.
The only reasonable alternative to using annotation processing is probably to create plugins for the relevant IDEs to generate the code (something vaguely similar to the override/implement-methods feature that generates all the signatures without method bodies). However, that step would have to be repeated each time the relevant parts of the code change; with annotation processing it would not, as far as I can tell.
With regard to the example given with the invasive amount of annotations, I don't envision the usage needing to be anything like that - maybe a handful for any given class. That wouldn't stop it being abused, of course.
I created a set of JavaBean annotations to generate property getters/setters, delegation, and interface extraction (edit: removed link; no longer supported)
Testing
Testing them can be quite trying...
I usually approach it by creating a project in eclipse with the test code and building it, then make a copy and turn off annotation processing.
I can then use Eclipse to compare the "active" test project to the "expected" copy of the project.
I don't have too many test cases yet (it's very tedious to generate so many combinations of attributes), but this is helping.
Build System
Using annotations in a build system is actually very easy. Gradle makes this incredibly simple, and using it in Eclipse is just a matter of creating a plugin that specifies the annotation processor extension and turning on annotation processing in the projects that want to use it.
I've used annotation processing in a continuous build environment, building the annotations & processor, then using it in the rest of the build. It's really pretty painless.
Processing Time
I haven't found this to be an issue - be careful of what you do in the processors. I generate a lot of code in mine and it runs fine. It's a little slower in ant.
Note that Java 6 processors can run a little faster because they are part of the normal compilation process. However, I've had trouble getting them to work properly in a code-generation capacity (I think much of the problem is Eclipse's support and running multiple-phase compiles). For now, I stick with Java 5.
Error Processing
This is one of the best-thought-through things in the annotation API. The API has a "Messager" object that handles all errors. Each IDE provides an implementation that converts this into appropriate error messages at the right location in the code.
The only Eclipse-specific thing I did was to cast the processing environment object so I could check whether it was being run as a build or for editor reconciliation. If editing, I exit. Eventually I'll change this to do only error checking at edit time so it can report errors as you type. Be careful, though -- you need to keep it really fast for use during reconciliation or editing gets sluggish.
Code Generation Gotcha
[added a little more per comments]
The annotation processor specification states that you are not allowed to modify the class that contains the annotation. I suspect this is to simplify the processing (further rounds do not need to include the annotated classes, which also prevents infinite update loops).
You can generate other classes, however, and they recommend that approach.
I generate a superclass for all of the get/set methods and anything else I need to generate. I also have the processor verify that the annotated class extends the generated class. For example:
@Bean(...)
public class Foo extends FooGen
I generate a class in the same package with the name of the annotated class plus "Gen" and verify that the annotated class is declared to extend it.
I have seen someone use the compiler tree API to modify the annotated class -- this is against the spec, and I suspect they'll plug that hole at some point so it won't work.
I would recommend generating a superclass.
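To make the overall shape concrete, here is a minimal sketch of such a processor against the Java 6 javax.annotation.processing API (the @Bean annotation name, the com.example package and the "Gen" suffix follow the example above; the body is illustrative rather than my actual processor, and it assumes annotated classes live in a named package):

import java.io.IOException;
import java.io.Writer;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.ElementKind;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;
import javax.tools.JavaFileObject;

@SupportedAnnotationTypes("com.example.Bean")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class BeanProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                if (element.getKind() != ElementKind.CLASS) {
                    // Errors go through the Messager so each IDE/command-line build
                    // can show them at the right place in the source.
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            "@Bean may only be applied to classes", element);
                    continue;
                }
                TypeElement annotated = (TypeElement) element;
                String pkg = processingEnv.getElementUtils()
                        .getPackageOf(annotated).getQualifiedName().toString();
                String genSimpleName = annotated.getSimpleName() + "Gen";
                try {
                    // Generate the superclass next to the annotated class; a real
                    // processor would emit the getters/setters into the body.
                    JavaFileObject file = processingEnv.getFiler()
                            .createSourceFile(pkg + "." + genSimpleName, annotated);
                    Writer w = file.openWriter();
                    w.write("package " + pkg + ";\n"
                            + "public class " + genSimpleName + " {\n"
                            + "    // generated members go here\n"
                            + "}\n");
                    w.close();
                } catch (IOException e) {
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            e.toString(), annotated);
                }
            }
        }
        return true; // claim the annotation so no other processor handles it
    }
}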
Overall
I'm really happy using annotation processors. Very well designed, especially looking at IDE/command-line build independence.
For now, I would recommend sticking with the Java 5 annotation processors if you're doing code generation - you need to run a separate tool called apt to process them, then do the compilation.
Note that the API for Java 5 and Java 6 annotation processors is different! The Java 6 processing API is better IMHO, but I just haven't had luck with java 6 processors doing what I need yet.
When Java 7 comes out I'll give the new processing approach another shot.
Feel free to email me if you have questions. (scott@javadude.com)
Hope this helps!
I think that if you write an annotation processor, you should definitely use the Java 6 version of the API. It is the one that will be supported in the future. The Java 5 API was still in the unofficial com.sun.xyz namespace.
I think we will see a lot more uses of the annotation processor API in the near future. For example, Hibernate is developing a processor for the new JPA 2 query-related static metamodel functionality. They are also developing a processor for validating Bean Validation annotations. So annotation processing is here to stay.
Tool integration is OK. The latest versions of the mainstream IDEs contain options to configure annotation processors and integrate them into the build process. The mainstream build tools also support annotation processing, though Maven can still cause some grief.
Testing, however, I find to be a big problem. All tests are indirect and somehow verify the end result of the annotation processing. I cannot write simple unit tests that just assert simple methods working on TypeMirrors or other reflection-based classes. The problem is that one cannot instantiate these types of classes outside the processor's compilation cycle. I don't think Sun really had testability in mind when designing the API.
One specific detail that would help in answering the question: as opposed to what? Not doing the project at all, or doing it without annotations? And if not using annotations, what are the alternatives?
Personally, I find excessive annotations unreadable, and many times too inflexible. Take a look at this, one method on a web service implementing a vendor-required WSDL:
@WebMethod(action=QBWSBean.NS+"receiveResponseXML")
@WebResult(name="receiveResponseXML"+result,targetNamespace = QBWSBean.NS)
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public int receiveResponseXML(
        @WebParam(name = "ticket",targetNamespace = QBWSBean.NS) String ticket,
        @WebParam(name = "response",targetNamespace = QBWSBean.NS) String response,
        @WebParam(name = "hresult",targetNamespace = QBWSBean.NS) String hresult,
        @WebParam(name = "message",targetNamespace = QBWSBean.NS) String message) {
I find that code highly unreadable. An XML configuration alternative isn't necessarily better, though.