In a project of mine I have two packages full of DTOs: POJOs with just getters and setters. While it's important that they stay simple JavaBeans (e.g. because Apache CXF uses them to create Web Service XSDs etc.), it's also awful and error-prone to program like that:
Foo foo = new Foo();
foo.setBar("baz");
foo.setPhleem(123);
return foo;
I prefer fluent interfaces and builder objects, so I use Maven/GMaven to automatically create builders for the DTOs. For the above code, a FooBuilder is automatically generated, which I can use like this:
Foo foo = new FooBuilder()
.bar("baz")
.phleem(123)
.build();
I also automatically generate unit tests for the generated builders. A unit test generates both of the above snippets (builder version and non-builder version) and asserts that the two resulting objects are equivalent in terms of equals() and hashCode(). The way I achieve that is to have a globally accessible Map with a default value for every property type. Something like this:
public final class Defaults {

    private static final Map<Class<?>, Object> DEFAULT_VALUES =
        new HashMap<Class<?>, Object>();

    static {
        DEFAULT_VALUES.put(String.class, "baz");
        // argh, autoboxing is necessary :-)
        DEFAULT_VALUES.put(int.class, 123);
        // etc. etc.
    }

    private Defaults() {}

    public static Object getPropertyValue(Class<?> type) {
        return DEFAULT_VALUES.get(type);
    }
}
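To make that concrete, here is a sketch of the kind of test the generator could emit for Foo (JUnit 4 is assumed here; Defaults and FooBuilder are the classes from above):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical generated test: builds Foo both ways from the shared
// defaults and asserts equivalence via equals() and hashCode().
public class FooBuilderTest {

    @Test
    public void builderIsEquivalentToSetters() {
        String bar = (String) Defaults.getPropertyValue(String.class);
        int phleem = (Integer) Defaults.getPropertyValue(int.class);

        Foo viaSetters = new Foo();
        viaSetters.setBar(bar);
        viaSetters.setPhleem(phleem);

        Foo viaBuilder = new FooBuilder()
            .bar(bar)
            .phleem(phleem)
            .build();

        assertEquals(viaSetters, viaBuilder);
        assertEquals(viaSetters.hashCode(), viaBuilder.hashCode());
    }
}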
Another non-trivial aspect is that the POJOs sometimes have collection members, e.g.:
foo.setBings(List<Bing> bings)
but in my builder I would like this case to generate two methods: a set method and an add method:
fooBuilder.bings(List<Bing> bings); // set method
fooBuilder.addBing(Bing bing); // add method
I have solved this by adding a custom annotation to the property fields in Foo:
@ComponentType(Bing.class)
private List<Bing> bings;
The builder builder (sic) reads the annotation and uses the value as the generic type of the methods to generate.
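For illustration, the generated pair could look roughly like this (a sketch only; the setProperty/getProperty helpers are assumed to live in the builder base class described further below):

// Hypothetical generated methods for the annotated bings property.
public FooBuilder bings(List<Bing> bings) {
    setProperty("bings", bings);
    return this;
}

public FooBuilder addBing(Bing bing) {
    @SuppressWarnings("unchecked")
    List<Bing> bings = (List<Bing>) getProperty("bings");
    if (bings == null) {
        bings = new ArrayList<Bing>();
        setProperty("bings", bings);
    }
    bings.add(bing);
    return this;
}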
We are now getting closer to the question (sorry, brevity is not one of my merits :-)).
I have realized that this builder approach could be used in more than one project, so I am thinking of turning it into a Maven plugin. I am perfectly clear about how to write a Maven plugin, so that's not part of the question (nor is how to generate valid Java source code). My problem is: how can I deal with the two problems above without introducing any common dependencies between project and plugin:
<Question>
I need a Defaults class (or a similar mechanism) for getting default values for generated unit tests (this is a key part of the concept; I would not trust automatically generated builders if they weren't fully tested). Please help me come up with a good and generic way to solve this problem, given that each project will have its own domain objects.
I need a common way of communicating generic types to the builder generator. The current annotation-based version I am using is not satisfactory, as both project and plugin need to be aware of the same annotation.
</Question>
Any ideas?
BTW: I know that the real key point of using builders is making objects immutable. I can't make mine immutable, because standard JavaBeans are necessary, but I use AspectJ to enforce that neither set methods nor constructors are called anywhere in my code base except in the builders, so for practical purposes the resulting objects are immutable.
Also: yes, I am aware of existing builder-generator IDE plugins. That doesn't fit my purpose; I want an automated solution that's always up to date whenever the underlying code changes.
Matt B requested some info about how I generate my builders. Here's what I do:
I read a class via reflection and use Introspector.getBeanInfo(clazz).getPropertyDescriptors() to get an array of property descriptors. All my builders have a base class AbstractBuilder<T>, where T would be Foo in the above case (here's the code of the AbstractBuilder class). For every property in the PropertyDescriptor array, a method is generated with the name of the property. This would be the implementation of FooBuilder.bar(String):
public FooBuilder bar(String bar) {
    setProperty("bar", bar);
    return this;
}
The build() method in AbstractBuilder instantiates the object and assigns all properties in its property map. A rough sketch of such a base class follows below.
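As a stand-in for the linked code, a minimal AbstractBuilder<T> along the lines described might look like this (simplified; not the author's actual class):

import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the described base class: properties are collected in
// a map and written to a fresh instance via the bean's setters on build().
public abstract class AbstractBuilder<T> {

    private final Class<T> type;
    private final Map<String, Object> properties = new HashMap<String, Object>();

    protected AbstractBuilder(Class<T> type) {
        this.type = type;
    }

    protected void setProperty(String name, Object value) {
        properties.put(name, value);
    }

    protected Object getProperty(String name) {
        return properties.get(name);
    }

    public T build() {
        try {
            T instance = type.newInstance();
            for (PropertyDescriptor descriptor
                    : Introspector.getBeanInfo(type).getPropertyDescriptors()) {
                if (properties.containsKey(descriptor.getName())) {
                    descriptor.getWriteMethod().invoke(
                        instance, properties.get(descriptor.getName()));
                }
            }
            return instance;
        } catch (Exception e) {
            throw new IllegalStateException("Could not build " + type.getName(), e);
        }
    }
}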
A POJO is an object which doesn't follow the JavaBean spec, i.e. it doesn't have setters/getters.
JavaBeans are not required to have setters; if you don't want them to be called, don't generate them. (Your builder can call a package-local or private constructor to create your immutable objects.)
Have you looked at Diezel?
It's a Builder generator.
It handles generic types, so it might be helpful here for question 2.
It generates all the interfaces and implementation boilerplate based on a description XML file. You might be able, through introspection, to generate this XML (or even go directly to the lower-level API).
It is bundled as a maven plugin.
Related
Is there a way to do compile-time annotation processing in Java?
Consider this example:
#Name("appName")
private Field<String> appName;
public void setAppName(String name) {
appName.setValue(name);
}
public String getAppName(String name) {
return appName.getValue();
}
public void someFunction() {
String whatFieldName = appName.getName();
}
Here the annotation @Name would be processed at compile time to set the value for the Field, without the usual runtime annotation processing. As such, when appName.getName() (the Field) is accessed, it will return the typed value.
Yes, there is, but no, it cannot change existing files. You can 'plug in' to the compiler and be informed of any annotations; as part of this, you can see signatures (field declarations, method signatures, types, etc.) but no contents (so not the expression used to initialize a field, and not the contents of the {} of a method declaration), and you can make NEW files, even Java files, but you can't edit existing ones.
Project Lombok does edit them, but that is quite the framework to make that possible.
There are some crazy tricks you can use. Project Lombok uses one trick (reflecting its way into compiler internals, fixing everything from there, and installing agents and plugins in IDEs). Another trick is to use a Java source file as a template of sorts. You name your class something funky: if you want, say, public class AppDescriptor, you'd actually make the java file AppDescriptorTemplate.java and put public class AppDescriptorTemplate inside. This file carries the annotation exactly as you pasted it. Your annotation processor can then, during compilation, generate AppDescriptor.java, writing the impls of all methods as simple pass-throughs (a field of type AppDescriptorTemplate is generated, and all methods in ADT are copied over, with implementations that are one-liners invoking the corresponding method on the template class). The template class can be package private. In this specific scenario it sounds like you could generate virtually the whole thing based on pretty much only "appName", though.
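To make the template trick concrete, the generated pass-through could look something like this (hypothetical output, following the AppDescriptor example above):

// Hypothetical generated file AppDescriptor.java: every method is a
// one-liner delegating to the package-private template instance.
public class AppDescriptor {

    private final AppDescriptorTemplate template = new AppDescriptorTemplate();

    public void setAppName(String name) {
        template.setAppName(name);
    }

    public String getAppName() {
        return template.getAppName();
    }
}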
Lombok plugs straight into the build and is therefore virtually transparent, in the sense that you simply type in your IDE and the methods it generates appear as you type, whereas 'normal' annotation processors that e.g. use the template trick above do not work that way and require the build system to kick in every time. That can be a bit of a productivity drain.
I'm working on a Maven Plugin and I need to modify one class of an external jar (used during maven execution), to add:
a new field on this class
a getter for the field
some behavior to an existing setter to populate the field
The library code should use my 'new' class, and I want to be able to use the getter to retrieve some additional information.
Instances of this class are created within the library code (I'm not creating them in my code, I just need to access them).
Class is a non-final public class.
Do you know if this is feasible and what the best way to do it is? Is it possible to do it with ByteBuddy?
EDIT: I cannot wrap the class, because it's not instantiated in my own code; let me elaborate a bit.
There's a library class named "Parser" that instantiate and populate some "Element" instances. My code looks like:
List<library.Element> elements = new library.Parser(file).parse();
the Parser.parse() method calls "Element.setProperties(List properties)" on each element.
I want to be able to enrich the setProperties method to store the original list of properties (that are lost otherwise)
Thanks
Giulio
In the end I managed to obtain the wanted result with Byte Buddy (I don't know if it's the best solution, but it works):
instrument the library class (library.Element) using the 'rebase' strategy, delegating calls to 'setProperties' to an interceptor.
Note: this must be done as the first instruction of the Maven plugin execution, while library.Element is not yet loaded (I didn't want to use a JVM agent)
define the above interceptor as a class that stores the original 'properties' value in a global map (key = identity hash code of the object, value = properties) and then calls the original 'setProperties' method of library.Element
change my code to get the properties from the map instead of the library.Element getter (I already had a wrapper for this class).
If someone is interested, I can show some code.
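A sketch along those lines is shown below. This is not the exact plugin code: library.Element, the property type, and the specific Byte Buddy calls (rebase via a TypePool so the class isn't loaded early, plus MethodDelegation to a static interceptor) are my assumptions based on the description above.

import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ConcurrentHashMap;

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.dynamic.ClassFileLocator;
import net.bytebuddy.dynamic.loading.ClassLoadingStrategy;
import net.bytebuddy.implementation.MethodDelegation;
import net.bytebuddy.implementation.bind.annotation.Argument;
import net.bytebuddy.implementation.bind.annotation.SuperCall;
import net.bytebuddy.implementation.bind.annotation.This;
import net.bytebuddy.pool.TypePool;

import static net.bytebuddy.matcher.ElementMatchers.named;

public class ElementRebaser {

    // key = identity hash code of the element, value = original properties
    public static final Map<Integer, List<?>> ORIGINAL_PROPERTIES =
        new ConcurrentHashMap<Integer, List<?>>();

    // Must run before library.Element is loaded by the JVM, hence the
    // TypePool lookup instead of a Class literal.
    public static void rebaseElement() {
        TypePool typePool = TypePool.Default.ofSystemLoader();
        new ByteBuddy()
            .rebase(typePool.describe("library.Element").resolve(),
                    ClassFileLocator.ForClassLoader.ofSystemLoader())
            .method(named("setProperties"))
            .intercept(MethodDelegation.to(ElementRebaser.class))
            .make()
            .load(ClassLoader.getSystemClassLoader(),
                  ClassLoadingStrategy.Default.INJECTION);
    }

    // Interceptor: remember the original list, then run the real setter.
    public static void intercept(@This Object element,
                                 @Argument(0) List<?> properties,
                                 @SuperCall Callable<Void> original) throws Exception {
        ORIGINAL_PROPERTIES.put(System.identityHashCode(element), properties);
        original.call();
    }
}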
Your suggestion sounds a bit hackish, and in general what you want to do would not even be possible without generating sources from the JAR, modifying them, and then repackaging. If I faced this problem, I would consider using a decorator pattern instead. You can create a wrapper which exposes all the methods of your JAR, except it would add one extra field along with a getter and setter.
import com.giulio.old.jar.everything.*;

public class WrapperOfJar {

    private String newField;

    public String getNewField() { return newField; }
    public void setNewField(String newField) { this.newField = newField; }

    // all getters and setters from current JAR, delegating to the wrapped instance
}
Now your users will see the same interface, plus one new method for your field. This answer assumes that the class in question is not final.
TL;DR
Can I use Java serialization/deserialization (the Serializable interface, the ObjectOutputStream and ObjectInputStream classes, and possibly readObject and writeObject methods in the classes implementing Serializable) as a valid implementation of the Prototype pattern, or not?
Note
This question is not meant to discuss whether using a copy constructor is better than serialization/deserialization.
I'm aware of the Prototype Pattern concept (from Wikipedia, emphasis mine):
The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to:
avoid subclasses of an object creator in the client application, like the abstract factory pattern does.
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
And from this Q/A: Examples of GoF Design Patterns in Java's core libraries, BalusC explains that the prototype pattern in Java is implemented by Object#clone, provided the class implements the Cloneable interface (a marker interface similar to Serializable for serializing/deserializing objects). The problem with this approach is noted in blog posts/related Q&As like these:
Copy Constructor versus Cloning
Java: recommended solution for deep cloning/copying an instance
So, another alternative is using a copy constructor to clone your objects (the DIY way), but this fails to satisfy the part of the prototype pattern I emphasized above:
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword)
AFAIK the only way to create an object without invoking its constructor is by deserialization, as noted in the example of the accepted answer of this question: How are constructors called during serialization and deserialization?
So, I'm just asking if using object deserialization through ObjectOutputStream (knowing what you're doing, marking the necessary fields as transient and understanding all the implications of this process) or a similar approach would be a proper implementation of the Prototype pattern.
Note: I don't think unmarshalling XML documents is a right implementation of this pattern, because it invokes the class constructor. Probably this also happens when unmarshalling JSON content as well.
People would advise using the object constructor, and I wouldn't mind that option when working with simple objects. This question is more oriented toward deep copying complex objects, where I may have 5 levels of objects to clone. For example:
//"fields" is an abbreviation for primitive-type and String fields
//that can vary between 1 and 20 (or more) declared fields in the class,
//all of them filled during application execution

class CustomerType {
    //fields...
}

class Customer {
    CustomerType customerType;
    //fields
}

class Product {
    //fields
}

class Order {
    List<Product> productList;
    Customer customer;
    //fields
}

class InvoiceStatus {
    //fields
}

class Invoice {
    List<Order> orderList;
    InvoiceStatus invoiceStatus;
    //fields
}

//class to communicate invoice data to external systems
class InvoiceOutboundMessage {
    List<Invoice> invoice;
    //fields
}
Let's say I want/need to copy an instance of InvoiceOutboundMessage. I don't think a copy constructor would apply in this case. IMO, having a lot of copy constructors doesn't seem like good design here.
Using Java object serialization directly is not quite the Prototype pattern, but serialization can be used to implement the pattern.
The Prototype pattern puts the responsibility of copying on the object to be copied. If you use serialization directly, the client needs to provide the deserialization and serialization code. If you own, or plan to write, all of the classes that are to be copied, it is easy to move the responsibility to those classes:
define a Prototype interface which extends Serializable and adds an instance method copy
define a concrete class PrototypeUtility with a static method copy that implements the serialization and deserialization in one place
define an abstract class AbstractPrototype that implements Prototype. Make its copy method delegate to PrototypeUtility.copy.
A class which needs to be a Prototype can either implement Prototype itself and use PrototypeUtility to do the work, or can just extend AbstractPrototype. By doing so it also advertises that it is safely Serializable.
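A minimal sketch of that arrangement, using the names from the outline above (the in-memory serialization round trip is the standard deep-copy idiom):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

interface Prototype extends Serializable {
    Prototype copy();
}

final class PrototypeUtility {

    private PrototypeUtility() {}

    // Serialize to a byte array and read it back: a deep copy without
    // invoking any constructor of the copied object graph.
    static <T extends Serializable> T copy(T original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                @SuppressWarnings("unchecked")
                T clone = (T) in.readObject();
                return clone;
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("Copy failed", e);
        }
    }
}

abstract class AbstractPrototype implements Prototype {

    @Override
    public Prototype copy() {
        return PrototypeUtility.copy(this);
    }
}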
If you don't own the classes whose instances are to be copied, you can't follow the Prototype pattern exactly, because you can't move the responsibility for copying to those classes. However, if those classes implement Serializable, you can still get the job done by using serialization directly.
Regarding copy constructors, those are a fine way to copy Java objects whose classes you know, but they don't meet the requirement of the Prototype pattern that the client should not need to know the class of the object instance it is copying. A client which doesn't know an instance's class but wants to use its copy constructor would have to use reflection to find a constructor whose only argument has the same class as the class it belongs to. That's ugly, and the client couldn't be sure that the constructor it found was a copy constructor. Implementing an interface addresses those issues cleanly.
Wikipedia's comment that the Prototype pattern avoids the cost of creating a new object seems misguided to me. (I see nothing about that in the Gang of Four description.) Wikipedia's example of an object that is expensive to create is an object which lists the occurrences of a word in a text, which of course are expensive to find. But it would be foolish to design your program so that the only way to get an instance of WordOccurrences was to actually analyze a text, especially if you then needed to copy that instance for some reason. Just give it a constructor with parameters that describe the entire state of the instance and assigns them to its fields, or a copy constructor.
So unless you're working with a third-party library that hides its reasonable constructors, forget about that performance canard. The important points of Prototype are that
it allows the client to copy an object instance without knowing its class, and
it accomplishes that goal without creating a hierarchy of factories, as meeting the same goal with the AbstractFactory pattern would.
I'm puzzled by this part of your requirements:
Note: I don't think unmarshalling XML documents is a right implementation of this pattern, because it invokes the class constructor. Probably this also happens when unmarshalling JSON content as well.
I understand that you might not want to implement a copy constructor, but you will always have a regular constructor. If this constructor is invoked by a library, then what does it matter? Furthermore, object creation in Java is cheap. I've used Jackson for marshalling/unmarshalling Java objects with great success. It is performant and has a number of awesome features that might be very helpful in your case. You could implement a deep copier as follows:
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;

public class MyCloner {

    private ObjectMapper cloner; // with getter and setter

    // Round-trip through a JSON string: serialize, then deserialize as a copy.
    @SuppressWarnings("unchecked")
    public <T> T clone(T toClone) throws IOException {
        String stringCopy = cloner.writeValueAsString(toClone);
        T deepClone = (T) cloner.readValue(stringCopy, toClone.getClass());
        return deepClone;
    }
}
Note that Jackson will work automatically with Beans (getter + setter pairs, no-arg constructor). For classes that break that pattern it needs additional configuration. One nice thing about this configuration is that it won't require you to edit your existing classes, so you can clone using JSON without any other part of your code knowing that JSON is being used.
Another reason I like this approach over serialization is that it is more human-debuggable (just look at the string to see what the data is). Additionally, there are tons of tools out there for working with JSON (online JSON formatters, viewers that render JSON as an HTML page), whereas the tooling for Java serialization isn't great.
One drawback to this approach is that, by default, duplicate references in the original object will become distinct objects in the copy. Here is an example:
import com.fasterxml.jackson.databind.ObjectMapper;

public class CloneTest {

    public static class MyObject { }

    public static class MyObjectContainer {
        MyObject refA;
        MyObject refB;
        // Getters and setters omitted
    }

    public static void runTest() throws Exception {
        MyCloner cloner = new MyCloner();
        cloner.setCloner(new ObjectMapper());

        MyObjectContainer container = new MyObjectContainer();
        MyObject duplicateReference = new MyObject();
        container.setRefA(duplicateReference);
        container.setRefB(duplicateReference);

        MyObjectContainer cloned = cloner.clone(container);
        System.out.println(cloned.getRefA() == cloned.getRefB());       // Will print false
        System.out.println(container.getRefA() == container.getRefB()); // Will print true
    }
}
Given that there are several approaches to this problem, each with its own pros and cons, I would claim there isn't a 'proper' way to implement the prototype pattern in Java. The right approach depends heavily on the environment you find yourself coding in. If you have constructors which do heavy computation (and can't circumvent them), then I suppose you don't have much option but to use deserialization. Otherwise, I would prefer the JSON/XML approach. If external libraries weren't allowed and I could modify my beans, then I'd use Dave's approach.
Your question is really interesting, Luiggi (I voted for it because the idea is great); it's a pity you don't say what you are really concerned about. So I'll try to answer what I know and let you choose what you find arguable:
Advantages:
In terms of memory use, you will get very good memory consumption by using serialization, since it serializes your objects in binary format (not in text like JSON or, worse, XML). You may have to choose a strategy to keep your "pattern" objects in memory as long as you need them, and persist them with a "least used, first persisted" or "first used, first persisted" strategy.
Coding it is pretty direct. There are some rules to respect, but if you don't have many complex structures, this remains maintainable.
No need for external libraries; this is quite an advantage in institutions with strict security/legal rules (validations for each library to be used in a program).
If you don't need to maintain your objects between versions of the program/JVM, you can profit from each JVM update: speed is a real concern for Java programs, and it's closely related to I/O operations (JMX, memory reads/writes, NIO, etc.), so there's a good chance that new versions will have optimized I/O, memory usage and serialization code, and you will find yourself writing/reading faster with no code change.
Disadvantages:
You lose all your prototypes if you change any object in the tree; serialization only works with the same object definition.
You need to deserialize an object to see what is inside it, as opposed to a prototype that is "self-documenting" when you take it from a Spring/Guice configuration file. The binary objects saved to disk are pretty opaque.
If you're planning a reusable library, you're imposing a pretty strict pattern on your library's users (implementing Serializable on each object, or using transient for fields that are not serializable). In addition, these constraints cannot be checked by the compiler; you have to run the program to see if there's something wrong (which might not be visible immediately if an object in the tree is null during the tests). Naturally, I'm comparing it to other prototyping technologies (Guice, for example, had the main feature of being compile-time checked; Spring added that later too).
I think that's all that comes to my mind for now; I'll add a comment if any new aspect comes up :)
Naturally, I don't know how fast writing an object as bytes is compared to invoking a constructor; the answer would require mass write/read tests. But the question is worth thinking about.
There are cases where creating a new object using a copy constructor is different from creating a new object "in a standard way". One example is explained in the Wikipedia link in your question. In that example, to create a new WordOccurrences using the constructor WordOccurrences(text, word), we need to perform a heavyweight computation. If we use the copy constructor WordOccurrences(wordOccurrences) instead, we immediately get the result of that computation (in the Wikipedia example, a clone method is used, but the principle is the same).
I'd like to implement annotation processor that will generate new class based on existing "prototype" class.
import java.util.List;

@MyAnnotation
class MySuperClassPrototype {

    static MySuperClassPrototype createInstance() {
        return new MySuperClassPrototype();
    }
}
As a result, the following new source file (compilation unit) will be generated:
import java.util.List;

class MySuperClass {

    static MySuperClass createInstance() {
        return new MySuperClass();
    }

    public void specialAddedMethod() {
        /*...*/
    }
}
I'd like to copy all top-level import statements and all static and non-static members of the prototype class. I've gotten pretty far with the Compiler Tree API (com.sun.source.tree): I can print out the Tree data type while substituting the new class name for the old one. But there are problems that seem pretty hard.
If I get a Tree.Kind.IDENTIFIER in the tree, how can I find out what actual class it references? I need to replace all occurrences of the MySuperClassPrototype identifier with the MySuperClass identifier, and then print out the whole tree.
Is it feasible?
Similarly, I need to filter out the @MyAnnotation annotation, which again is represented by a Tree.Kind.IDENTIFIER or Tree.Kind.MEMBER_SELECT.
How can I find out actual annotation class that is referenced by this identifier?
Another problem is printing out the tree. If I use the toString method I get a decent result, but constructors are printed as methods named "<init>" instead of methods with the same name as their class, so I need to manually print every kind of Tree node.
You can see the code I've come up with here.
Yes, it is possible, and I know at least two ways.
The first, "traditional" way is to write an Ant task/Maven plugin/plain command-line Java utility that scans a given file path and, for each class found, calls something like Class.forName(className).getAnnotation(MyAnnotation.class). If the result is not null, discover the class using reflection and do what you need; a sketch of this follows below.
The other way is a little more difficult but more powerful.
You can implement your own Processor (one that implements javax.annotation.processing.Processor or, even better, extends javax.annotation.processing.AbstractProcessor).
Your processor just has to be placed on the compiler classpath and it will run automatically when the compiler runs. You can even configure your IDE (e.g. Eclipse) to run your processor. It is a kind of extension to the Java compiler. So, every time Eclipse builds your project, it runs the processor and creates all the new classes according to the annotations you have added.
Please take a look at this project as a reference.
Eight years and not yet answered. Because of that, I will try to answer it to your satisfaction.
I will furthermore concentrate on the static part of the question.
TL;DR:
You will not find copy and paste code in this answer.
Is it feasible?
Yes, absolutely.
How can I find out actual annotation class that is referenced by this identifier?
You will have to use the RoundEnvironment within an Annotation Processor to get the TypeElement.
Static Metaprogramming
Static metaprogramming (which you asked for) is metaprogramming done at compile time. By contrast, dynamic metaprogramming is metaprogramming done at run time. And metaprogramming itself is the design of programs that handle other programs as data.
Phew, a lot to take in. If you are interested in this topic, a more or less good source is Wikipedia.
Your target is to generate a class at compile time. At run time, this would be done with something like cglib. But since you chose static (and for all the right reasons), I will not explain that here.
The concept you are looking for is the annotation processor. The link goes to Baeldung, where they do exactly what you are looking for, only with the builder pattern in mind. You will be happy to hear that this scenario is highly encouraged and easy to do with the annotation processor API. It even allows you to generate code which again is passed to the same or another annotation processor, without you doing anything.
Before jumping right in, try googling "Java annotation processing". There are a lot of good sources out there that will help you, too many to list here. Just note that coding in an annotation processor is different from coding normally. Not a huge difference, but the classes you are working on are not yet created. So keep this in mind and don't get discouraged!
Using the Annotation Processor
Your basic annotation processor would look something like this:
@SupportedAnnotationTypes("package.of.MyAnnotation")
@SupportedSourceVersion(SourceVersion.RELEASE_8)
@AutoService(Processor.class)
public class BuilderProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        // First let's find all annotated elements
        Set<? extends Element> annotatedElements =
            roundEnv.getElementsAnnotatedWith(MyAnnotation.class);
        // Handle all the annotated classes
        return false;
    }
}
The @AutoService annotation is used to dynamically register your annotation processor. It comes from an external dependency (Google's AutoService), just so you don't wonder why this code won't compile.
In the "handle all the annotated classes" part, you have the annotated elements (which are the annotated classes). You now have to verify that they are classes and not interfaces or other annotations, because @Target(ElementType.TYPE) matches any type, including interfaces and annotations. Furthermore, you would want to verify that everything you require is present, or print an error to the compiler using the Messager.
If you print an error here (for example), compilation stops and the error will be shown in most modern IDEs. The Messager can be reached by calling processingEnv.getMessager().
Afterwards you can generate a new class and hand it to the compiler as a .java file. This can be done using the Filer; a sketch follows below.
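For example, inside process() the write-out could look roughly like this (the class name and body are placeholders; JavaFileObject and Diagnostic come from javax.tools, Writer and IOException from java.io):

// Sketch: emit the generated class as a new compilation unit via the Filer.
try {
    JavaFileObject file = processingEnv.getFiler()
            .createSourceFile("package.of.MySuperClass");
    try (Writer writer = file.openWriter()) {
        writer.write("package package.of;\n\n"
                + "class MySuperClass {\n"
                + "    public void specialAddedMethod() { /*...*/ }\n"
                + "}\n");
    }
} catch (IOException e) {
    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR, e.getMessage());
}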
An answer on Stack Overflow really does no justice to this topic. I highly recommend looking at the Baeldung example and uncovering things from there. This API has been around since Java 6, but it is still not that widely used. I encourage you, the reader, to try it out for yourself :)
Take a look at https://github.com/rzwitserloot/lombok/; it adds methods as you described, such as:
@Getter, which adds getter methods based on fields
@Setter, which adds setter methods based on fields
@ToString, which adds a toString() method based on the fields
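For example, applied at class level (a sketch; the class name is illustrative, and the getters, setters and toString() are generated by Lombok during compilation):

import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

// No hand-written boilerplate: Lombok generates the accessors and toString().
@Getter
@Setter
@ToString
public class SomeDto {
    private String name;
    private int count;
}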
Suppose I have a lot of classes which are constructed using Java reflection (for some reason). Now I need to post-inject values into fields which are annotated with @PostInject.
public class SomeClass {

    @PostInject
    private final String someString = null;

    public void someMethod() {
        // here, someString has a value.
    }
}
My question is: what is a fast way to set a field using reflection?
Remember, I need to do this very often on a lot of classes; that's why performance is relevant.
What I would do by intuition is shown by this pseudo-code:
get all fields of the class: clazz.getDeclaredFields();
check which are annotated with @PostInject: eachField.getAnnotation(PostInject.class);
make these fields accessible: eachAnnotatedField.setAccessible(true);
set them to a certain value: eachAnnotatedField.set(instance, someValue);
I'm afraid that getting all fields is the slowest step.
Can I somehow get a field directly, when I know it from the beginning?
NOTE: I can't just let the classes implement some interface which would allow setting the fields using a method. I need POJOs.
NOTE2: Why do I want post-construction field injection? From the point of view of an API user, it must be possible to use final fields. Furthermore, when the types and number of fields are not known to the API a priori, it is impossible to achieve field initialization using an interface.
NOTE2b: From the point of view of the user, the final contract is not broken. It stays final: first a field gets initialized, then it can't be changed. By the way, there are a lot of APIs which use this concept; one of them is JAXB (part of the JDK).
How about doing steps 1 to 3 just after you construct the object, and saving the set of annotated fields that you obtain, either in the object itself or in a separate map of class to set-of-annotated-fields?
Then, when you need to update the injected fields of an object, retrieve the set from the object or the separate map and perform step 4.
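A sketch of that suggestion, assuming the @PostInject annotation from the question (PostInjector and its cache are illustrative names, not an existing API):

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative helper: the annotated fields are looked up once per class
// and cached, so repeated injections only pay for Field.set.
public final class PostInjector {

    private static final Map<Class<?>, List<Field>> CACHE =
        new ConcurrentHashMap<Class<?>, List<Field>>();

    private PostInjector() {}

    public static void inject(Object target, Object value)
            throws IllegalAccessException {
        for (Field field : annotatedFields(target.getClass())) {
            // works for final instance fields once setAccessible(true) was called
            field.set(target, value);
        }
    }

    private static List<Field> annotatedFields(Class<?> clazz) {
        List<Field> fields = CACHE.get(clazz);
        if (fields == null) {
            fields = new ArrayList<Field>();
            for (Field field : clazz.getDeclaredFields()) {
                if (field.isAnnotationPresent(PostInject.class)) {
                    field.setAccessible(true);
                    fields.add(field);
                }
            }
            CACHE.put(clazz, fields);
        }
        return fields;
    }
}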
Don't know if it's any good, but this project looks like it would do what you want. Quote:
A set of reflection utilities and miscellaneous utilities related to working with classes and their fields with no dependencies which is compatible with java 1.5 and generics. The utilities cache reflection data for high performance operation but uses weak/soft caching to avoid holding open ClassLoaders and causing the caches to exist in memory permanently. The ability to override the caching mechanism with your own is supported.
Another option, as you say you know the few fields concerned from the beginning, is to ask only for those fields or methods.
Example: see getDeclaredMethod or getDeclaredField in java.lang.Class.
You can exploit existing frameworks that allow injecting dependencies on object construction. For example, Spring allows doing that with AspectJ weaving. The general idea is that you define bean dependencies at the Spring level and just mark target classes in order to advise their object creation. The actual dependency-resolution logic is injected directly into the class byte code (it's possible to use either compile-time or load-time weaving).
The fastest way to do anything with reflection is to cache the actual Reflection API objects whenever possible. For example, I very recently made a yet-another-dynamic-POJO-manipulator, which I believe is one of those things everyone ends up writing at some point, and it enables me to do this:
Object o = ...
BeanPropertyController c = BeanPropertyController.of(o);
for (String propertyName : c.getPropertyNames()) {
    if (c.access(propertyName) == null
            && c.typeOf(propertyName).equals(String.class)) {
        c.mutate(propertyName, "");
    }
}
The way it works is that it basically has that one controller object which lazy-loads all the properties of the bean (note: some magic involved) and then reuses them as long as the controller object is alive. All I can say is that just by caching the Method objects themselves I managed to turn that thing into a damn fast thing, and I'm quite proud of it, even considering releasing it assuming I can sort out copyrights etc.