What is the benefit in dynamically generating Java bean classes from XML? - java

I had written a lot of Java bean classes using my IDE. Another person suggests a different approach: put an XML file with bean definitions in the project, then use either JAXB or XSLT to generate the classes dynamically at build time. Though it's a novel and interesting approach, I do not see any major benefit in it.
I see only one benefit in this suggested approach: the Java bean classes need not be maintained in configuration control. Any bean change requires only an update to the XML file.
Are there any major benefits in dynamically generating Java classes? Is there any other reason why this approach is taken?

I agree with @Akhilss. My experience has been in large-scale Java EE projects where code generation is common.
It all depends on your project. If you are coding only a few beans and only need basic functionality, then I don't see the need to start with XML (which is often overused anyway). Especially if you don't actually need the XML as well.
However, if you are building a system which needs the XML anyway, an example being a SOAP web service WSDL and schema, then generation is a good idea because it saves you from manually keeping schemas and beans in sync, as well as providing factory classes and other support code.
As a counter-argument, with EJB3 and similar standards, it's now often easier to write the beans and generate the messy XML stuff on the fly, i.e. let the server do the grunt work.
Another reason to consider code generation is if you require more complex functionality in your beans because they represent data structures. A few years ago I trialled the Apache Tuscany project for generating SDO beans from XML. The nice thing about that was that I could generate functionality like property change notifications, so when I modified any of the bean's properties (including collections), other parts of the program could be notified automatically. Generated functionality like that can save you a lot of time and money if you need it.
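To make the idea concrete, here is a minimal hand-written sketch of the kind of change-notification code a generator can emit for every property. This is not Tuscany/SDO output; the bean and property names are made up, and it uses plain java.beans.PropertyChangeSupport.

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;

    // Hypothetical generated-style bean: every setter fires a change event so
    // other components can react automatically.
    public class CustomerBean {

        private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
        private String name;

        public String getName() {
            return name;
        }

        public void setName(String newName) {
            String oldName = this.name;
            this.name = newName;
            // PropertyChangeSupport skips the notification if old and new values are equal.
            changes.firePropertyChange("name", oldName, newName);
        }

        public void addPropertyChangeListener(PropertyChangeListener listener) {
            changes.addPropertyChangeListener(listener);
        }

        public void removePropertyChangeListener(PropertyChangeListener listener) {
            changes.removePropertyChangeListener(listener);
        }
    }

Writing this by hand for two or three beans is trivial; generating it for hundreds of beans with dozens of properties each is where the approach starts to pay off.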
Ultimately, I'd suggest adhering to the KISS principle. So don't add what you don't need. Generated code from XML is useful if it helps you in the long run. But like any technology, be sure you are adding it for the right reasons.

I have used Jibx and its generator in my project. My experience has been mixed.
The usual case for using JAXB's (XJC) generator is referred to in http://static.springsource.org/spring-ws/site/reference/html/why-contract-first.html
Conversion to and from XML makes it possible to store data in the DB and retrieve it for future use, as well as to use it as test-case input for functional tests.
Using any kind of generator (JAXB, JiBX, XMLBeans, custom) might make sense for large-sized projects. It allows for standardization of data types (like BigDecimal for financial amounts, or ArrayList for all lists) and for forcing interfaces (like Serializable or Cloneable). This enforces good practices and reduces the need for reviews of generated files.
It also allows for injection of code through XSLT or post-processing of the generated Java file. An example is injecting rounding code to a specific decimal size (2, 6, 9) with a specific policy (UP, DOWN, NEAR) within the setter method for each field of type financialAmount. Forcing such behaviour reduces the incidence of bugs (for incorrect financial values, which companies are liable for).
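As an illustration, this is roughly what such an injected setter could look like in a generated bean. The class and field names are invented, and RoundingMode.HALF_UP stands in for the "NEAR" policy mentioned above; a real generator would pick the scale and policy from XSD annotations or a configuration file.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    // Illustrative fragment of a generated bean: the generator injects rounding
    // into the setter so every financialAmount field is stored with a fixed
    // scale and policy, regardless of what the caller passes in.
    public class TradeBean {

        private BigDecimal settlementAmount;

        public BigDecimal getSettlementAmount() {
            return settlementAmount;
        }

        public void setSettlementAmount(BigDecimal value) {
            // Injected post-processing step: normalize the scale before assignment.
            this.settlementAmount = (value == null)
                    ? null
                    : value.setScale(2, RoundingMode.HALF_UP);
        }
    }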
The disadvantages are:
Usually each generated Java class can only be a bean class. Any customization made to it will be overwritten, since (in my case) the generator is tied into the build process and the classes get regenerated with every build.
You cannot implement your own custom interfaces on a bean class or add annotations for your own or third-party frameworks.
You cannot easily implement patterns like a factory method, since default constructors are usually generated. Refactoring is usually difficult, since generators do not usually support it.
You may (not sure now; it was true a couple of years ago for JiBX) not be able to generate enums where they would be most applicable.
You may not be able to override the default datatype with your own, regardless of the need, e.g. a CopyOnWriteArrayList instead of an ArrayList for a variable shared across threads, or a custom implementation of a List which also implements the Observer pattern.
The benefits of a generator outweigh the costs for large (in persons, not code; think 150 developers in three locations) distributed projects. You can work around the disadvantages by defining custom classes which wrap the bean and implement behaviour, or by post-processing (adding additional code) with further metadata picked up from XSD annotations or another configuration file. Remember that support and maintenance of the generator become critical, since the entire project depends on it. Use it with CAUTION.
For smaller projects I would personally write my own classes. For larger projects I would personally not use it in the middle tier, mostly because of the lack of refactoring support. It can be used for simple beans meant to be bound to UI frameworks.

Related

Generating multiple java Enums from a single table, using JOOQ

I have a table in my database schema which contains configuration information for an application that I'm building. I'd like to generate a number of enums based on the contents of the table. I'm currently using JOOQ in my build scripts to generate other, standard JOOQ classes from the same database, and am hoping that I can obtain this new functionality through JOOQ.
For example, if the table contained the following data
Product       Component  PresentationOrder
HydroProduct  Boat       1
HydroProduct  Canoe      2
HydroProduct  Ship       3
LandProduct   Car        1
LandProduct   Bike       2
Then I'd want to generate two enums
hydroProduct.Components { Boat, Canoe, Ship; }
and
landProduct.Components {Car, Bike; }
where hydroProduct and landProduct are packages, and the enums are both called Components.
The precise details are up for grabs (e.g. I'd be fine with different naming conventions, so any suggestions are welcome), but the principle (two enums from the one table, based on the data therein) is, at least for this question, the thing that I need.
Having read the JOOQ docs, I see that generating Enums was once a part of JOOQ, and then removed. I can't see the "obvious" way of doing what I want in JOOQ, but it's a pretty amazing library, so I'm guessing that there might be one.
EDIT
A number of commenters have questioned the overall approach, which I understand. I'd like to avoid that kind of conversation - it's basically impossible for me to debate that level of design over this forum. This approach (code generation based on DDL and SQL) is very well tested within my organisation, and has had a very large amount of design scrutiny.
Just for the record, here's an outline of the pipeline. I'm putting it in because I appreciate that people have spent time considering my original question, and I'd like to make it clear how the question fits into our overall dev system.
Project One holds some bootstrapping classes, some DDL and some SQL scripts that contain the data constants that our overall system uses. Its output includes (amongst other stuff) some .jar files containing compiled versions of the generated classes. These basically comprise the core agreed vocabulary that the rest of our system uses to communicate.
Project Two holds java source code, which makes use of the various artefacts generated from project one. Before project two ever gets compiled, artefacts from project one can be assumed to have been generated. So, for example, an Enum generated from data held in project one can be used, at compile time, in project two.
Our build scripts build project one and make the outputs available for project two. This includes generating Eclipse project files so that users of project two can check out, build, and open Eclipse for any branch of the project and it "just works".
There are other projects (including in different languages, e.g. Javascript) that also make use of the artefacts generated by project one.
The key benefit of this approach is that, when the data in project one changes, and the definitions of classes and enums in project one therefore change, project two reflects this change with compile-time errors, rather than run-time errors. The benefits of this in practice are absolutely enormous. There is a setup-cost, yes, but in our experience tools like JOOQ (which we use to generate the outputs of Project 1) have made us incredibly more productive than we used to be.
This is especially the case where we ship desktop, in-browser, and J2EE components and they all need to talk together.
This won't be an exhaustive answer, as the question is a bit opinionated, but I can provide some authoritative background on the history of this feature in jOOQ:
Having read the JOOQ docs, I see that generating Enums was once a part of JOOQ, and then removed.
Yes, the reason for removal was precisely because the then-existing implementation was far from sophisticated enough to accommodate, for instance, your particular use-case.
Code generation is a very ad-hoc "science", and it is extremely difficult to maintain, backwards-compatibly, a configuration API which can accommodate all the possible use-cases... For instance, the fact that you want to map parts of your master data onto packages, with class names constant and ordering custom, is rather particular, not generic.
Furthermore, where should generated enums be referenced from? The original implementation replaced foreign keys (e.g. Integers) by enum references in the generated code. This might not at all be what you want in some cases, where you do want to join the master data in its raw form, not as a generated enum.
Long story short, this was an eagerly added feature in jOOQ 1.x, which was extremely hard to maintain during major internal refactorings that were needed to create jOOQ 3.x, which is why it was dropped.
I can't see the "obvious" way of doing what I want in JOOQ, but it's a pretty amazing library, so I'm guessing that there might be one.
Not out of the box. I'd recommend using a templating language like Velocity or Xtend and generating the code manually as part of your build. You could, in fact, even generate custom Converter or Binding implementations to bind your enums to the relevant referencing columns.
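A bare-bones sketch of what such a build-time step could look like, using plain JDBC and string concatenation instead of a real template engine (a production setup would more likely use Velocity or Xtend as suggested above). The table and column names come from the question; the JDBC URL and output directory are placeholders for your build.

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.LinkedHashMap;
    import java.util.LinkedHashSet;
    import java.util.Map;
    import java.util.Set;

    // Minimal build-time generator: reads the configuration table and writes one
    // Components enum per product into a package named after the product.
    public class ComponentsEnumGenerator {

        public static void main(String[] args) throws SQLException, IOException {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./config");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "select Product, Component from ProductComponent order by Product, PresentationOrder")) {

                Map<String, Set<String>> componentsByProduct = new LinkedHashMap<>();
                while (rs.next()) {
                    componentsByProduct
                            .computeIfAbsent(rs.getString("Product"), p -> new LinkedHashSet<>())
                            .add(rs.getString("Component"));
                }

                for (Map.Entry<String, Set<String>> e : componentsByProduct.entrySet()) {
                    String pkg = decapitalize(e.getKey());          // e.g. "hydroProduct"
                    Path dir = Paths.get("target/generated-sources", pkg);
                    Files.createDirectories(dir);
                    String source = "package " + pkg + ";\n\n"
                            + "public enum Components {\n    "
                            + String.join(", ", e.getValue())
                            + ";\n}\n";
                    Files.write(dir.resolve("Components.java"),
                            source.getBytes(StandardCharsets.UTF_8));
                }
            }
        }

        private static String decapitalize(String s) {
            return Character.toLowerCase(s.charAt(0)) + s.substring(1);
        }
    }

Hook something like this into the same build phase that runs the jOOQ code generator, and add target/generated-sources to the compile source path, so project two sees the enums at compile time.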

Annotation processor vs bytecode generation

What are the relative merits and demerits of annotation processing with respect to bytecode generation (e.g. with ASM)? Apart from implementation difficulty, why would you prefer one over the other?
Since a commenter asked, I'm trying to automatically generate implementations for abstract getter/setter methods, but I would like a more general answer. I'm not asking what's the better way to generate getters and setters.
Some bytecode generation libraries contain support for easy creation of getters/setters, which simplifies things significantly - you just import the library classes and write Java code. Some frameworks can even automatically generate getters and setters (along with a whole bunch of other things) based upon a simple annotation on a field.
On the other hand, bytecode generation generally has a runtime performance impact as the new classes are generated and loaded, although that can be mitigated by caching the generated classes.
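For concreteness, here is a minimal ASM sketch that emits a single getter at runtime. The class and field names are made up, and a real generator would also emit a constructor and the matching setter; this only shows the shape of the API.

    import org.objectweb.asm.ClassWriter;
    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;

    // Emits the bytecode equivalent of:  public String getName() { return this.name; }
    // for a hypothetical class com.example.Person with a private String field "name".
    public class GetterGenerator {

        public static byte[] generate() {
            ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_FRAMES);
            cw.visit(Opcodes.V1_8, Opcodes.ACC_PUBLIC, "com/example/Person", null,
                    "java/lang/Object", null);

            // The backing field the getter reads from.
            cw.visitField(Opcodes.ACC_PRIVATE, "name", "Ljava/lang/String;", null, null).visitEnd();

            MethodVisitor mv = cw.visitMethod(Opcodes.ACC_PUBLIC, "getName",
                    "()Ljava/lang/String;", null, null);
            mv.visitCode();
            mv.visitVarInsn(Opcodes.ALOAD, 0);                        // push 'this'
            mv.visitFieldInsn(Opcodes.GETFIELD, "com/example/Person",
                    "name", "Ljava/lang/String;");                    // read this.name
            mv.visitInsn(Opcodes.ARETURN);                            // return it
            mv.visitMaxs(1, 1);                                       // recomputed because of COMPUTE_FRAMES
            mv.visitEnd();

            cw.visitEnd();
            return cw.toByteArray();    // load via a ClassLoader, or cache to disk to avoid regenerating
        }
    }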
My experience with annotation processing has not been nearly as pleasant. It generally requires you to configure or even modify your build system so that the annotation processor is executed. In addition, coding an annotation processor can become very uncomfortable if you wish to modify a source code file extensively, and apparently there is nowhere near the same framework/library variety as there is for bytecode generation.
My personal favorite, to be honest, is using Java 7 method handles when possible - or just writing the **** getters and setters by hand.
EDIT:
The main problem with the annotation processing API is that (as far as I know) it does not support modifying code at compile-time. The recommended approach seems to be the generation of independent decorator classes. Sure, that is relatively easy if you use e.g. Apache Velocity but the end result is not nearly the same.
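To show what "generating an independent decorator class" looks like in practice, here is a bare-bones processor sketch. The @GenerateDecorator annotation, the target package and the generated body are all assumptions; a real processor would produce the body from a template (e.g. Velocity) and delegate to the annotated class.

    import java.io.IOException;
    import java.io.Writer;
    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    // Generates an empty companion class Foo_Decorator next to every class
    // annotated with the hypothetical @GenerateDecorator annotation.
    @SupportedAnnotationTypes("com.example.GenerateDecorator")
    @SupportedSourceVersion(SourceVersion.RELEASE_8)
    public class DecoratorProcessor extends AbstractProcessor {

        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
                    String simpleName = element.getSimpleName() + "_Decorator";
                    try (Writer out = processingEnv.getFiler()
                            .createSourceFile("com.example." + simpleName)
                            .openWriter()) {
                        // In a real processor this body would come from a template.
                        out.write("package com.example;\n\n"
                                + "public class " + simpleName + " {\n}\n");
                    } catch (IOException e) {
                        processingEnv.getMessager().printMessage(
                                Diagnostic.Kind.ERROR, e.getMessage(), element);
                    }
                }
            }
            return true;
        }
    }

Note that the processor can only add new files; it never touches the original class, which is exactly the limitation described above.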
There are some hacks where the original source file is processed to add methods and re-compiled, but even getting the path of the source file is almost impossible. There is usually a lot of guesswork involved, with various assumptions about the project structure being made. In addition, the annotation processor essentially maintains a separate source tree for the processed source files.
Project Lombok (which I can't believe I forgot to mention before) uses a lot of magic of various colors to leverage the annotation processing API into something more usable. It could very well be what you need...
The best thing to do is to use your IDE's accelerators to generate the getters and setters. That way they are going to be present in the source code. That will make reading the code easier and avoid potential problems with your debugger.
Creating getters and setters is a bit tedious, but it is not worth adding a whole bunch of complexity (and potential gotchas) to avoid it. (And if it is really too tedious for you, persuade your boss that you need a "code monkey" to help you.)

Spring annotation-based DI vs xml configuration?

Recently our team started discussing the use of Spring annotations in code to define Spring dependencies. Currently we are using context.xml to define our dependencies. Could you give me some clues about either approach, and when one is better to use than the other?
Edit: I know this seems a duplicate question to a more-general one, but I am interested in the impacts of annotations vs configuration for dependency injection only, which I believe would have different answers and attitude than the general question.
After reading some related posts here and having further discussion in the team, we came to the following conclusions. I hope they will be useful to others here.
Regarding XML configuration (which we have been using up to now), we decided to keep it for dependencies defined by libraries (regardless of whether they are developed by us or by third parties).
Libraries, by definition, provide particular functionality and can be used in various scenarios, not necessarily involving DI. Therefore, using annotations in the library projects we develop ourselves would create a dependency from the library to the DI framework (Spring, in our case), making the library unusable in a non-DI context. Having extra dependencies is not considered good practice in our team (and in general, IMHO).
When we are assembling an application, the application context would define the necessary dependencies. This will simplify dependency tracking as the application becomes the central unit of combining all the referenced components, and usually this is indeed where all the wiring up should happen.
XML is also good for us when providing mock implementations for many components without recompiling the application modules that will use them. This gives us flexibility when testing in a local or production environment.
Regarding annotations, we decided that we can benefit from using them when the injected components will not vary -- for instance, when only one particular implementation of a component will be used throughout the application.
The annotations will be very useful for small components/applications that will not change or support different implementations of a dependency at once, and that are unlikely to be composed in a different way (for instance using different dependencies for different builds). Simple micro-services would fit in this category.
Small enough components, wired up with annotations, can be used right out of the box in different projects, without requiring the respective applications to cover them in their XML configuration. This simplifies application dependency wiring and reduces repetitive setup.
However, we agreed that such components should have the dependencies well described in our technical documentation, so that when assembling the entire application, one can have an idea of these dependencies without scrolling through the code, or even loading the module in the IDE.
A negative side effect of annotation-configured components is that different components could bring in clashing transitive dependencies, and again it is up to the final application to resolve the conflicts. When these dependencies are not defined in XML, the conflict-resolution approaches become quite limited and stray far from best practices, if they are possible at all.
So, when going with annotations, the component has to be mature enough about what dependencies it is going to use.
In general, if our dependencies may vary between scenarios, or a module can be used with different components, we decided to stick to XML. Clearly, there MUST be a right balance between both approaches, and a clear idea of the usages.
An important update regarding the mixed approach. Recently we had a case with a test framework we created for our QA team, which required dependencies from another project. The framework was designed to use the annotation approach and Spring configuration classes, while the referenced project had some XML contexts that we needed to reference. Unfortunately, the test classes (where we used org.testng with Spring support) could only work with either the XML or the Java configuration classes, not a mix of both.
This situation illustrates a case where mixing the approaches would clash, and clearly one must be discarded. In our case, we migrated the test framework to use Spring XML contexts, but other use cases could imply the other way around.
Some advantages of using XML configuration:
The XML configuration is in one place, instead of being scattered all over the source code as with annotations. Some people may argue that IDEs like STS let you look at all annotation-based configuration in one place, but I never like having dependencies on IDEs.
It takes a little more effort to write XML config, but it saves a lot of time later when you search for dependencies and try to understand the project.
XML keeps configuration well organized and simple, hence it is easier to understand and helps new, relatively inexperienced team members get up to speed quickly.
It allows you to change the config without needing to recompile and redeploy code, so it is better when it comes to production support.
So in short, XML configuration takes a little more effort, but it saves you a lot of time and headache later in big projects.
2.5 years later:
We use annotations mostly these days, but the most crucial change is that we create many small projects (instead of one big project). Hence understanding dependencies is not a problem anymore, as each project has its unique purpose and a relatively small codebase.
From my experience, I would prefer (or rather am forced by limitations) to use a combination of XML and annotation-based DI. If I need to inject a Map of elements inside a bean, I have to define a util:map and autowire it. I also need to use XML DI to inject a datasource into the sessionFactory if I have multiple datasources, and so on. So a combination of both is required.
I prefer using component-scan to autodetect the services and DAOs. This cuts down a lot of configuration (we cut our configuration files down by around 50% by switching to component-scan). Annotation-based DI supports both by-name (@Resource) and by-type (@Autowired) injection.
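A small sketch of what that looks like in component-scanned code; the class and bean names are illustrative, and the classes are assumed to live under a package covered by component-scan, with no explicit <bean> definitions.

    import javax.annotation.Resource;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Repository;
    import org.springframework.stereotype.Service;

    // Both classes are picked up by component-scan; no XML bean definitions needed.
    @Repository("customerDao")
    class CustomerDao {
        // JDBC access would go here.
    }

    @Service
    class CustomerService {

        @Autowired                       // injected by type
        private CustomerDao byTypeDao;

        @Resource(name = "customerDao")  // injected by bean name
        private CustomerDao byNameDao;
    }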
In short, my advice would be to go for a mixture of both. I feel that more annotation support will definitely be on the cards in future Spring releases.
Take a look at this answer here: Xml configuration versus Annotation based configuration
A short quote directly from there:
Annotations have their use, but they are not the one silver bullet to kill XML configuration. I recommend mixing the two!
For instance, if using Spring, it is entirely intuitive to use XML for the dependency injection portion of your application. This gets the code's dependencies away from the code which will be using it; by contrast, using some sort of annotation in the code that needs the dependencies makes the code aware of this automatic configuration.
However, instead of using XML for transactional management, marking a method as transactional with an annotation makes perfect sense, since this is information a programmer would probably wish to know.
EDIT: Also, take a look at the answers here: Java Dependency injection: XML or annotations. They most probably target the area of your interest much better.
From my own experience, annotations are better than XML configuration. I think in any case you can override the XML and use annotations. Also, Spring 4 gives us huge support for annotations; we can move security from XML to annotations, etc., so instead of 100 lines of XML we have 10 lines of Java code.
Are annotations better than XML for configuring Spring?
The introduction of annotation-based configurations raised the question of whether this approach is 'better' than XML. The short answer is it depends. The long answer is that each approach has its pros and cons, and usually it is up to the developer to decide which strategy suits them better. Due to the way they are defined, annotations provide a lot of context in their declaration, leading to shorter and more concise configuration. However, XML excels at wiring up components without touching their source code or recompiling them. Some developers prefer having the wiring close to the source, while others argue that annotated classes are no longer POJOs and, furthermore, that the configuration becomes decentralized and harder to control.
No matter the choice, Spring can accommodate both styles and even mix them together. It's worth pointing out that through its JavaConfig option, Spring allows annotations to be used in a non-invasive way, without touching the target components' source code, and that in terms of tooling, all configuration styles are supported by the Spring Tool Suite.
My personal opinion is that XML is better, since you have everything in one place and you do not need to dig into your packages to search for the class.
We cannot tell which method is good; it depends on your project. We can neither avoid XML nor annotations. One advantage of using XML is that we can understand the project structure just by looking at the XML context files, whereas annotations reduce a lot of the meta configuration. So I prefer 30% XML and 70% annotations.
By using XML, you prevent code from being polluted with framework-specific annotations and thus creating an undesired coupling. Keep the framework at the application boundary so you can always replace it should the need arise.
Frameworks come and go, but many applications live for decades. Fortunately, Spring is a non-invasive framework and doesn't bend your architecture. Keeping the configuration in XML will make it even more detached from your application.
Remark: in order to benefit from all this, your application should be well-designed in the first place.

Java Dependency injection: XML or annotations

Annotations are becoming popular. Spring 3 supports them. CDI depends on them heavily (I cannot use CDI without annotations, right?).
My question is why?
I heard several issues:
"It helps get rid of XML". But what is bad about xml? Dependencies are declarative by nature, and XML is very good for declarations (and very bad for imperative programming).
With a good IDE (like IDEA) it is very easy to edit and validate XML, isn't it?
"In many cases there is only one implementation for each interface". That is not true!
Almost all interfaces in my system have mock implementations for tests.
Any other issues?
And now my pluses for XML:
You can inject anything anywhere (not only code that has annotations)
What should I do if I have several implementations of one interface? Use qualifiers? But it forces my class to know what kind of injection it needs.
It is not good for design.
XML based DI makes my code clear: each class has no idea about injection, so I can configure it and unit-test it in any way.
What do you think?
I can only speak from experience with Guice, but here's my take. The short of it is that annotation-based configuration greatly reduces the amount you have to write to wire an application together and makes it easier to change what depends on what... often without even having to touch the configuration files themselves. It does this by making the most common cases absolutely trivial at the expense of making certain relatively rare cases slightly more difficult to handle.
I think it's a problem to be too dogmatic about having classes have "no idea about injection". There should be no reference to the injection container in the code of a class. I absolutely agree with that. However, we must be clear on one point: annotations are not code. By themselves, they change nothing about how a class behaves... you can still create an instance of a class with annotations as if they were not there at all. So you can stop using a DI container completely and leave the annotations there and there will be no problem whatsoever.
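To illustrate that point (a minimal sketch; the class names and JSR-330's javax.inject.Inject annotation are used here purely as an example), a class annotated for injection can still be constructed directly, with no container anywhere:

    import javax.inject.Inject;

    class EmailSender {
        void send(String to, String body) { /* ... */ }
    }

    class WelcomeService {

        private final EmailSender sender;

        @Inject                                   // metadata only: a container *may* use it,
        WelcomeService(EmailSender sender) {      // but nothing forces you to use one
            this.sender = sender;
        }

        void welcome(String address) {
            sender.send(address, "Welcome!");
        }
    }

    class ManualWiring {
        public static void main(String[] args) {
            // No DI container involved: the annotation is simply ignored.
            WelcomeService service = new WelcomeService(new EmailSender());
            service.welcome("user@example.com");
        }
    }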
When you choose not to provide metadata hints about injection within a class (i.e. annotations), you are throwing away a valuable source of information on what dependencies that class requires. You are forced to either repeat that information elsewhere (in XML, say) or to rely on unreliable magic like autowiring which can lead to unexpected issues.
To address some of your specific questions:
It helps get rid of XML
Many things are bad about XML configuration.
It's terribly verbose.
It isn't type-safe without special tools.
It mandates the use of string identifiers. Again, not safe without special tool support.
Doesn't take any advantage of the features of the language, requiring all kinds of ugly constructs to do what could be done with a simple method in code.
That said, I know a lot of people have been using XML for long enough that they are convinced that it is just fine and I don't really expect to change their minds.
In many cases there is only one implementation for each interface
There is often only one implementation of each interface for a single configuration of an application (e.g. production). The point is that when starting up your application, you typically only need to bind an interface to a single implementation. It may then be used in many other components. With XML configuration, you have to tell every component that uses that interface to use this one particular binding of that interface (or "bean" if you like). With annotation-based configuration, you just declare the binding once and everything else is taken care of automatically. This is very significant, and dramatically reduces the amount of configuration you have to write. It also means that when you add a new dependency to a component, you often don't have to change anything about your configuration at all!
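As a concrete illustration of "declare the binding once", here is a minimal Guice sketch; the interface and class names are invented.

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;

    interface PaymentService {
        void charge(int cents);
    }

    class CreditCardPaymentService implements PaymentService {
        public void charge(int cents) { /* call the payment gateway */ }
    }

    // The one place the interface-to-implementation binding is declared.
    class ProductionModule extends AbstractModule {
        @Override
        protected void configure() {
            bind(PaymentService.class).to(CreditCardPaymentService.class);
        }
    }

    class CheckoutService {
        private final PaymentService payments;

        @Inject
        CheckoutService(PaymentService payments) {   // any number of components can depend on
            this.payments = payments;                // PaymentService without further configuration
        }
    }

    class Bootstrap {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new ProductionModule());
            CheckoutService checkout = injector.getInstance(CheckoutService.class);
        }
    }

Adding a new dependency to CheckoutService means adding one constructor parameter; no configuration file has to change.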
That you have mock implementations of some interface is irrelevant. In unit tests you typically just create the mock and pass it in yourself... it's unrelated to configuration. If you set up a full system for integration tests with certain interfaces using mocks instead... that doesn't change anything. For the integration test run of the system, you're still only using 1 implementation and you only have to configure that once.
XML: You can inject anything anywhere
You can do this easily in Guice and I imagine you can in CDI too. So it's not like you're absolutely prevented from doing this by using an annotation-based configuration system. That said, I'd venture to say that the majority of injected classes in the majority of applications are classes that you can add an @Inject to yourself if it isn't already there. The existence of a lightweight standard Java library for annotations (JSR-330) makes it even easier for more libraries and frameworks to provide components with an @Inject annotated constructor in the future, too.
More than one implementation of an interface
Qualifiers are one solution to this, and in most cases should be just fine. However, in some cases you do want to do something where using a qualifier on a parameter in a particular injected class would not work... often because you want to have multiple instances of that class, each using a different interface implementation or instance. Guice solves this with something called PrivateModules. I don't know what CDI offers in this regard. But again, this is a case that is in the minority and it's not worth making the rest of your configuration suffer for it as long as you can handle it.
I have the following principle: configuration-related beans are defined with XML. Everything else - with annotations.
Why? Because you don't want to change configuration in classes. On the other hand, it's much simpler to write @Service and @Inject in the class that you want to enable.
This does not interfere with testing in any way - annotations are only metadata that is parsed by the container. If you like, you can set different dependencies.
As for CDI - it has an extension for XML configuration, but you are right it uses mainly annotations. That's something I don't particularly like in it though.
In my opinion, this is more a matter of taste.
1) In our project (using Spring 3), we want the XML configuration files to be just that: configuration. If it doesn't need to be configured (from the end-user perspective), or some other issue doesn't force it to be done in XML, don't put the bean definitions/wirings into the XML configuration; use @Autowired and such.
2) With Spring, you can use @Qualifier to match a certain implementation of the interface if multiple exist. Yes, this means you have to name the actual implementations, but I don't mind.
In our case, using XML for handling all the DI would bloat the XML configuration files a lot, although it could be done in a separate XML file (or files), so it's not that valid a point ;). As I said, it's a matter of taste and I just think it's easier and cleaner to handle the injections via annotations (you can see what services/repositories/whatever something uses just by looking at the class, instead of going through the XML file looking for the bean declaration).
Edit: Here's an opinion about @Autowired vs. XML that I completely agree with: Spring @Autowired usage
I like to keep my code clear, as you pointed out. XML fits better, at least for me, with the IoC principle.
The fundamental principle of Dependency Injection for configuration is that application objects should not be responsible for looking up the resources or collaborators they depend on. Instead, an IoC container should configure the objects, externalizing resource lookup from application code into the container. (J2EE Development without EJB - Rod Johnson - page 131)
Again, it's just my point of view, no fundamentalism in there :)
EDIT: Some useful discussions out there:
http://forum.springsource.org/showthread.php?t=95126
http://www.theserverside.com/discussions/thread.tss?thread_id=61217
"But what is bad about xml?" It's yet another file to manage and yet another place to have to go look for a bug. If your annotations are right next to your code it's much easier to mange and debug.
Like all things, dependency injection should be used in moderation. Moreover, all trappings of the injections should be segregated from the application code and relegated to the code associated with main.
In general, applications should have a boundary that separates the abstract application code from the concrete implementation details. All the source code dependencies that cross that boundary should point towards the application. I call the concrete side of that boundary the main partition, because that's where 'main' (or its equivalent) should live.
The main partition consists of factory implementations, strategy implementations, etc. And it is on this side of the boundary that the dependency injection framework should do its work. Then those injected dependencies can be passed across the boundary into the application by normal means (e.g. as arguments).
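A minimal sketch of that shape (all names invented): the application side depends only on an abstraction, and the main partition supplies the concrete implementation, whether wired by hand as here or by a DI framework.

    // Application side of the boundary: depends only on abstractions.
    interface ReportRepository {
        String load(String id);
    }

    class ReportPrinter {
        private final ReportRepository repository;

        ReportPrinter(ReportRepository repository) {   // dependency arrives as a plain argument
            this.repository = repository;
        }

        void print(String id) {
            System.out.println(repository.load(id));
        }
    }

    // Main partition: concrete details and wiring live here.
    class SqlReportRepository implements ReportRepository {
        public String load(String id) {
            return "report " + id;                     // stand-in for a real SQL query
        }
    }

    class Main {
        public static void main(String[] args) {
            ReportPrinter printer = new ReportPrinter(new SqlReportRepository());
            printer.print("42");
        }
    }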
The number of injected dependencies should be relatively small. A dozen or less. In which case, the decision between XML or annotations is moot.
Also don't forget Spring JavaConfig.
In my case the developers writing the application are different from the ones configuring it (different departments, different technologies/languages), and the latter group doesn't even have access to the source code (which is the case in many enterprise setups). That makes Guice unusable, since I would have to expose source code rather than consume the XML files configured by the developers implementing the app.
Overall, I think it is important to recognize that providing the components and assembling/configuring an application are two different exercises, and to provide this separation of concerns where needed.
I just have a couple of things to add to what's already here.
To me, DI configuration is code. I would like to treat it as such, but the very nature of XML prevents this without extra tooling.
Spring JavaConfig is a major step forward in this regard, but it still has complications. Component scanning, auto-magic selection of interface implementations, and the semantics around CGLIB interception of @Configuration annotated classes make it more complex than it needs to be. But it's still a step forward from XML.
The benefit of separating IoC metadata from application objects is overstated, especially with Spring. Perhaps if you confined yourself to the Spring IoC container only, this would be true. But Spring offers a wide application stack built on the IoC container (Security, Web MVC, etc). As soon as you leverage any of that, you're tied to the container anyway.
XML's only benefit is a declarative style that is clearly separated from the application code itself and stays independent of DI concerns. The downsides are verbosity, poor refactoring robustness, and a general runtime-failure behaviour. There is only general (XML) tool support, with little benefit compared to IDE support for e.g. Java. Besides this, XML comes with a performance overhead, so it is usually slower than code-based solutions.
Annotations are often said to be more intuitive and robust when refactoring application code. They also benefit from better IDE guidance, like Guice provides. But they mix application code with DI concerns: the application becomes dependent on a framework, and clear separation is almost impossible. Annotations are also limited when describing different injection behaviour at the same place (constructor, field) depending on other circumstances (e.g. the robot legs problem). Moreover, they don't allow you to treat external classes (library code) like your own source. On the other hand, they are considered to run faster than XML.
Both techniques have serious downsides. Therefore I recommend using Silk DI. It is declaratively defined in code (great IDE support) but 100% separated from your application code (no framework dependency). It allows you to treat all code the same, no matter whether it comes from your own source or an external library. Problems like the robot legs problem are easy to solve with the usual bindings. Furthermore, it has good support for adapting it to your needs.

Are annotations some sort of DSL in Java?

After a bunch of XML config files, I've seen Java moving to annotation-based configurations.
Are annotations playing the role of DSL here?
Is it because of the static nature of Java? I'm thinking of Ruby, which doesn't have (afaik) things like that. Is it because Ruby has good metaprogramming capabilities?
Are there alternatives (I mean, other than using a bunch of .xml files)?
Basically, annotations are a tool that allows you to process source files at compile time and perform actions corresponding to the annotations found in the file (possibly deriving a new source file).
They are quite useful for many purposes, like making constraints explicit without cluttering the code, or enriching the behaviour of some methods.
I wouldn't say that they are so similar to the DSLs of Ruby, since in this case you annotate code with a particular syntax, while in Ruby you can design your own DSL from scratch and use it as you wish.
Java ships a tool called apt (like the one you suspect) for working with annotations; annotations can also be retained and read at run time, but they are usually used to give compile-time information about your sources. This doesn't mean that in certain circumstances you can't adapt the annotation mechanism to achieve the same things that you would obtain with a DSL, but they don't exist for the same purpose.
As already said, annotations can be used to create DSLs quite efficiently, because they add some sort of metaprogramming capability to the language. However, for that purpose you could also use a bytecode injector or other Java language features.
The primary purpose of annotations, however, is to annotate source code elements with metadata.
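A minimal example of that metadata role (the @MaxLength annotation is hypothetical): the annotation carries no behaviour of its own, and some other tool, here plain reflection at run time, decides what it means. Compile-time processing via apt or the annotation processing API follows the same idea.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import java.lang.reflect.Field;

    // The annotation itself is pure metadata.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface MaxLength {
        int value();
    }

    class RegistrationForm {
        @MaxLength(30)
        String username;
    }

    class MetadataReader {
        public static void main(String[] args) throws NoSuchFieldException {
            Field field = RegistrationForm.class.getDeclaredField("username");
            MaxLength constraint = field.getAnnotation(MaxLength.class);
            // A validator, processor, or DSL interpreter decides what this constraint means.
            System.out.println("username is limited to " + constraint.value() + " characters");
        }
    }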
If you are asking for alternatives for creating internal DSLs in Java, just look at Fowler's DSL book (a work in progress) and choose from the different concepts which can be used for implementing internal DSLs; many of them are available in Java. If you are asking for alternatives for metaprogramming, then there are also many: various bytecode injectors, aspect-oriented programming using AspectJ or Spring, or code generation.
