Suppose I have a lot of classes which are constructed using Java reflection (for some reason). Now I need to post-inject values into fields that are
annotated with @PostInject.
public class SomeClass {
    @PostInject
    private final String someString = null;

    public void someMethod() {
        // here, someString has a value.
    }
}
My question is: what is a fast way to set a field using reflection?
Remember, I need to do this very often on a lot of classes, that's
why performance is relevant.
What I would do by intuition is shown by this pseudo-code:
1. get all fields of the class (getDeclaredFields(), since getFields() returns only public fields)
   clazz.getDeclaredFields();
2. check which are annotated with @PostInject
   eachField.getAnnotation(PostInject.class);
3. make these fields accessible
   eachAnnotatedField.setAccessible(true);
4. set them to a certain value (on the instance, not on the class)
   eachAnnotatedField.set(instance, someValue);
I'm afraid that getting all fields is the slowest part. Can I somehow get a field directly, when I know it from the beginning?
NOTE: I can't just let the classes implement some interface which would allow setting the fields through a method. I need POJOs.
NOTE2: Why I want post-field injection: from the point of view of an API user, it must be possible to use final fields. Furthermore, when the types and number of fields are not known to the API a priori, it is impossible to achieve field initialization through an interface.
NOTE2b: From the point of view of the user, the final contract is not broken. The field stays final: first it gets initialized, then it can't be changed. By the way, there are a lot of APIs which use this concept; one of them is JAXB (part of the JDK).
How about doing steps 1 to 3 just after you construct the object, and saving the set of annotated fields that you obtain, either in the object itself or in a separate map of class to set-of-annotated-fields?
Then, when you need to update the injected fields in an object, retrieve the set from the object or from the separate map and perform step 4.
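For illustration, here is a minimal sketch of that caching approach (assuming @PostInject has runtime retention; the names are illustrative, not from any library):

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PostInjector {
    // Class -> its @PostInject fields, scanned and made accessible only once.
    private final Map<Class<?>, List<Field>> cache = new ConcurrentHashMap<>();

    public void inject(Object target, Object value) throws IllegalAccessException {
        List<Field> fields = cache.computeIfAbsent(target.getClass(), clazz -> {
            List<Field> annotated = new ArrayList<>();
            for (Field field : clazz.getDeclaredFields()) {
                if (field.getAnnotation(PostInject.class) != null) {
                    field.setAccessible(true); // steps 1-3, once per class
                    annotated.add(field);
                }
            }
            return annotated;
        });
        for (Field field : fields) {
            field.set(target, value); // step 4, repeated per instance
        }
    }
}

After the first call for a given class, each injection costs only the cached Field.set calls.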
Don't know if it's any good, but this project looks like it would do what you want. Quote:
    A set of reflection utilities and miscellaneous utilities related to working with classes and their fields, with no dependencies, which is compatible with Java 1.5 and generics. The utilities cache reflection data for high-performance operation but use weak/soft caching to avoid holding open ClassLoaders and causing the caches to exist in memory permanently. The ability to override the caching mechanism with your own is supported.
Another option, since you say you know the few fields concerned from the beginning, is to ask only for those fields or methods.
Example: see Class.getDeclaredMethod and Class.getDeclaredField in java.lang.Class.
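For illustration, a minimal sketch of such a direct lookup (illustrative names; the field name is assumed to be known up front):

import java.lang.reflect.Field;

public class DirectLookup {
    static void inject(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        // Fetch exactly the field we need instead of scanning all of them.
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true); // needed for private fields
        field.set(target, value);
    }
}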
You can exploit existing frameworks that allow injecting dependencies on object construction. For example, Spring can do that with AspectJ weaving. The general idea is that you define bean dependencies at the Spring level and just mark target classes in order to advise their object creation. The actual dependency-resolution logic is injected directly into the class byte-code (it's possible to use either compile-time or load-time weaving).
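For illustration, a rough sketch of what the target class might look like with Spring's @Configurable (this assumes spring-aspects is on the classpath and compile- or load-time weaving is configured; SomeService is a hypothetical bean):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Configurable;

interface SomeService { } // hypothetical dependency, defined as a Spring bean

// The woven aspect advises object creation, so the field is injected even
// when the instance is created with `new` or via reflection, outside the
// container.
@Configurable
public class SomeClass {
    @Autowired
    private SomeService someService;
}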
The fastest way to do anything with reflection is to cache the actual Reflection API objects whenever possible. For example, I very recently made a yet-another-dynamic-POJO-manipulator, which I believe is one of those things everyone ends up writing at some point. It enables me to do this:
Object o = ...
BeanPropertyController c = BeanPropertyController.of(o);
for (String propertyName : c.getPropertyNames()) {
    if (c.access(propertyName) == null &&
            c.typeOf(propertyName).equals(String.class)) {
        c.mutate(propertyName, "");
    }
}
The way it works is that it basically has that one controller object which lazy-loads all the properties of the bean (note: some magic involved) and then reuses them as long as the actual controller object is alive. All I can say is that just by saving the Method objects themselves I managed to turn that thing into a damn fast thing, and I'm quite proud of it, and even considering releasing it assuming I can sort out copyrights etc.
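For illustration, a stripped-down sketch of that Method-caching idea (hypothetical names, not the actual library):

import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class PropertyCache {
    // Property name -> resolved getter; looked up once, reused afterwards.
    private final Map<String, Method> getters = new HashMap<>();

    public Object access(Object bean, String property) throws ReflectiveOperationException {
        Method getter = getters.get(property);
        if (getter == null) {
            String name = "get" + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            getter = bean.getClass().getMethod(name);
            getters.put(property, getter);
        }
        return getter.invoke(bean);
    }
}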
Related
I have a class that can potentially assign values to 50+ variables. I don't want to write getters for all of these fields. I would rather have some way to report which fields have had a value assigned to them, and what that value is.
I had originally made these private, and I know that reflection basically breaks private. Additionally, Securecoding.org states this about reflection:
In particular, reflection must not be used to provide access to classes, methods, and fields unless these items are already accessible without the use of reflection. For example, the use of reflection to access or modify fields is not allowed unless those fields are already accessible and modifiable by other means, such as through getter and setter methods.
My main concern is mucking up my code by declaring dozens of instance variables (and possibly getters). Later in this project, I will have two more large sets of instance variables that need to be declared as well. I know that I can reduce the use of getters with some clever maps and enums, but that still means parsing dozens of null values. Could anyone suggest another way?
I know only 4 ways to access a field of a class:
1. directly (unless the field is private)
2. using a method, e.g. a getter
3. using a constructor
4. using reflection
Ways 1 and 4 are outside this discussion. Constructor usage is not convenient here because of the huge number of fields. So, methods are the remaining possibility.
It is up to you whether to use the bean convention or, for example, the builder pattern, but if you need this class for persistence or for serialization into XML, JSON, etc., you need at least getters.
Now, if you just want to validate the instance after its creation, you can declare an interface Validatable that declares a method validate() and call it when your object should be ready (a minimal sketch follows). You do, however, have to implement and maintain this method for each class.
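A minimal sketch of that idea:

public interface Validatable {
    void validate(); // throws if the instance is not ready for use
}

class Person implements Validatable {
    private String name;

    @Override
    public void validate() {
        if (name == null || name.isEmpty()) {
            throw new IllegalStateException("name must be set");
        }
    }
}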
An alternative way is to use one of the available validation frameworks. In this case your validation can be done using annotations. You should remember, however, that behind the scenes such frameworks use reflection.
Here are some links for further reading:
http://commons.apache.org/proper/commons-validator/
http://java-source.net/open-source/validation
http://docs.oracle.com/javaee/6/tutorial/doc/gircz.html
TL;DR
Can I use Java serialization/deserialization (the Serializable interface with the ObjectOutputStream and ObjectInputStream classes, probably adding readObject and writeObject to the classes implementing Serializable) as a valid implementation of the Prototype pattern, or not?
Note
This question is not meant to discuss whether using a copy constructor is better than serialization/deserialization.
I'm aware of the Prototype Pattern concept (from Wikipedia, emphasis mine):
The prototype pattern is a creational design pattern in software development. It is used when the type of objects to create is determined by a prototypical instance, which is cloned to produce new objects. This pattern is used to:
avoid subclasses of an object creator in the client application, like the abstract factory pattern does.
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword) when it is prohibitively expensive for a given application.
And from this Q/A: Examples of GoF Design Patterns in Java's core libraries, BalusC explains that prototype pattern in Java is implemented by Object#clone only if the class implements Cloneable interface (marker interface similar to Serializable to serialize/deserialize objects). The problem using this approach is noted in blog posts/related Q/As like these:
Copy Constructor versus Cloning
Java: recommended solution for deep cloning/copying an instance
So, another alternative is using a copy constructor to clone your objects (the DIY way), but this fails to implement the prototype pattern for the text I emphasized above:
avoid the inherent cost of creating a new object in the standard way (e.g., using the 'new' keyword)
AFAIK the only way to create an object without invoking its constructor is by deserialization, as noted in the example of the accepted answer of this question: How are constructors called during serialization and deserialization?
So, I'm just asking if using object deserialization through ObjectOutputStream (and knowing what you're doing, marking necessary fields as transient and understanding all the implications of this process) or a similar approach would be a proper implementation of Prototype Pattern.
Note: I don't think unmarshalling XML documents is a right implementation of this pattern, because it invokes the class constructor. Probably this also happens when unmarshalling JSON content as well.
People would advise using the object constructor, and I wouldn't mind that option when working with simple objects. This question is more oriented to deep copying complex objects, where I may have 5 levels of objects to clone. For example:
//fields is an abbreviation for primitive type and String type fields
//that can vary between 1 and 20 (or more) declared fields in the class
//and all of them will be filled during application execution
class CustomerType {
    //fields...
}

class Customer {
    CustomerType customerType;
    //fields
}

class Product {
    //fields
}

class Order {
    List<Product> productList;
    Customer customer;
    //fields
}

class InvoiceStatus {
    //fields
}

class Invoice {
    List<Order> orderList;
    InvoiceStatus invoiceStatus;
    //fields
}

//class to communicate invoice data for external systems
class InvoiceOutboundMessage {
    List<Invoice> invoice;
    //fields
}
Let's say I want/need to copy an instance of InvoiceOutboundMessage. I don't think a copy constructor would apply in this case. IMO having a lot of copy constructors doesn't seem like a good design here.
Using Java object serialization directly is not quite the Prototype pattern, but serialization can be used to implement the pattern.
The Prototype pattern puts the responsibility of copying on the object to be copied. If you use serialization directly, the client needs to provide the deserialization and serialization code. If you own, or plan to write, all of the classes that are to be copied, it is easy to move the responsibility to those classes:
define a Prototype interface which extends Serializable and adds an instance method copy
define a concrete class PrototypeUtility with a static method copy that implements the serialization and deserialization in one place
define an abstract class AbstractPrototype that implements Prototype. Make its copy method delegate to PrototypeUtility.copy.
A class which needs to be a Prototype can either implement Prototype itself and use PrototypeUtility to do the work, or can just extend AbstractPrototype. By doing so it also advertises that it is safely Serializable.
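For illustration, a hedged sketch of that structure (plain java.io serialization; error handling simplified):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public interface Prototype extends Serializable {
    Prototype copy();
}

class PrototypeUtility {
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T copy(T original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original); // serialize the whole object graph
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (T) in.readObject(); // read back a deep copy
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("copy failed", e);
        }
    }
}

abstract class AbstractPrototype implements Prototype {
    @Override
    public Prototype copy() {
        return PrototypeUtility.copy(this); // delegate serialization to one place
    }
}

Note that ObjectOutputStream preserves duplicate references inside the copied graph, so shared objects stay shared in the copy.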
If you don't own the classes whose instances are to be copied, you can't follow the Prototype pattern exactly, because you can't move the responsibility for copying to those classes. However, if those classes implement Serializable, you can still get the job done by using serialization directly.
Regarding copy constructors: those are a fine way to copy Java objects whose classes you know, but they don't meet the Prototype pattern's requirement that the client should not need to know the class of the object instance it is copying. A client which doesn't know an instance's class, but wants to use its copy constructor, would have to use reflection to find a constructor whose only argument has the same class it belongs to. That's ugly, and the client couldn't be sure that the constructor it found was a copy constructor. Implementing an interface addresses those issues cleanly.
Wikipedia's comment that the Prototype pattern avoids the cost of creating a new object seems misguided to me. (I see nothing about that in the Gang of Four description.) Wikipedia's example of an object that is expensive to create is an object which lists the occurrences of a word in a text, which of course are expensive to find. But it would be foolish to design your program so that the only way to get an instance of WordOccurrences was to actually analyze a text, especially if you then needed to copy that instance for some reason. Just give it a constructor with parameters that describe the entire state of the instance and assigns them to its fields, or a copy constructor.
So unless you're working with a third-party library that hides its reasonable constructors, forget about that performance canard. The important points of Prototype are that
it allows the client to copy an object instance without knowing its class, and
it accomplishes that goal without creating a hierarchy of factories, as meeting the same goal with the AbstractFactory pattern would.
I'm puzzled by this part of your requirements:
    Note: I don't think unmarshalling XML documents is a right implementation of this pattern because it invokes the class constructor. Probably this also happens when unmarshalling JSON content as well.
I understand that you might not want to implement a copy constructor, but you will always have a regular constructor. If this constructor is invoked by a library, then what does it matter? Furthermore, object creation in Java is cheap. I've used Jackson for marshalling/unmarshalling Java objects with great success. It is performant and has a number of awesome features that might be very helpful in your case. You could implement a deep copier as follows:
import com.fasterxml.jackson.databind.ObjectMapper;

public class MyCloner {
    private ObjectMapper cloner = new ObjectMapper(); // with getter and setter

    @SuppressWarnings("unchecked")
    public <T> T clone(T toClone) throws java.io.IOException {
        String stringCopy = cloner.writeValueAsString(toClone);
        T deepClone = (T) cloner.readValue(stringCopy, toClone.getClass());
        return deepClone;
    }
}
Note that Jackson will work automatically with Beans (getter + setter pairs, no-arg constructor). For classes that break that pattern it needs additional configuration. One nice thing about this configuration is that it won't require you to edit your existing classes, so you can clone using JSON without any other part of your code knowing that JSON is being used.
Another reason I like this approach vs. serialization is that it is more human-debuggable (just look at the string to see what the data is). Additionally, there are tons of tools out there for working with JSON:
online JSON formatters
viewing JSON as an HTML-based webpage
whereas the tooling for Java serialization isn't great.
One drawback to this approach is that, by default, duplicate references in the original object will become distinct objects in the copy. Here is an example:
import com.fasterxml.jackson.databind.ObjectMapper;

public class CloneTest {
    public static class MyObject { }

    public static class MyObjectContainer {
        MyObject refA;
        MyObject refB;
        // Getters and setters omitted
    }

    public static void runTest() throws java.io.IOException {
        MyCloner cloner = new MyCloner();
        cloner.setCloner(new ObjectMapper());
        MyObjectContainer container = new MyObjectContainer();
        MyObject duplicateReference = new MyObject();
        container.setRefA(duplicateReference);
        container.setRefB(duplicateReference);
        MyObjectContainer cloned = cloner.clone(container);
        System.out.println(cloned.getRefA() == cloned.getRefB()); // Will print false
        System.out.println(container.getRefA() == container.getRefB()); // Will print true
    }
}
Given that there are several approaches to this problem, each with its own pros and cons, I would claim there isn't a 'proper' way to implement the prototype pattern in Java. The right approach depends heavily on the environment you find yourself coding in. If you have constructors which do heavy computation (and can't circumvent them), then I suppose you don't have much choice but to use deserialization. Otherwise, I would prefer the JSON/XML approach. If external libraries weren't allowed and I could modify my beans, then I'd use Dave's approach.
Your question is really interesting, Luiggi (I voted for it because the idea is great); it's a pity you don't say what you are really concerned about. So I'll try to answer what I know and let you choose what you find arguable:
Advantages:
In terms of memory use, you will get very good memory consumption, since serialization stores your objects in binary format (and not in text like JSON or, worse, XML). You may have to choose a strategy for keeping your "pattern" objects in memory as long as you need them, persisting them with a "least used, first persisted" or "first used, first persisted" strategy.
Coding it is pretty direct. There are some rules to respect, but if you don't have many complex structures, it remains maintainable.
No need for external libraries; this is quite an advantage in institutions with strict security/legal rules (validations for each library to be used in a program).
This holds if you don't need to maintain your objects between versions of the program / versions of the JVM. You can profit from each JVM update, as speed is a real concern for Java programs and is closely related to I/O operations (JMX, memory reads/writes, NIO, etc.). So there is a good chance that new versions will have optimized I/O, memory usage, and serialization algorithms, and you will find yourself writing/reading faster with no code change.
Disadvantages:
You lose all your prototypes if you change any object in the tree; serialization works only with the same object definition.
You need to deserialize an object to see what is inside it, as opposed to a prototype that is "self-documenting" if you take it from a Spring/Guice configuration file; the binary objects saved to disk are pretty opaque.
If you're planning to build a reusable library, you're imposing a pretty strict pattern on your library's users (implementing Serializable on each object, or using transient for fields that are not serializable). In addition, these constraints cannot be checked by the compiler; you have to run the program to see if something is wrong (which might not be visible immediately if an object in the tree is null during the tests). Naturally, I'm comparing it to other prototyping technologies (Guice, for example, had the main feature of being compile-time checked; Spring added that lately too).
I think that's all that comes to my mind for now; I'll add a comment if any new aspect comes up :)
Naturally, I don't know how fast writing an object as bytes is compared to invoking a constructor; the answer should come from mass write/read tests. But the question is worth thinking about.
There are cases where creating a new object using a copy constructor is different from creating a new object "in a standard way". One example is explained in the Wikipedia link in your question. In that example, creating a new WordOccurrences using the constructor WordOccurrences(text, word) requires heavyweight computation. If we use the copy constructor WordOccurrences(wordOccurrences) instead, we can immediately get the result of that computation (in the Wikipedia example the clone method is used, but the principle is the same).
I'm attempting to write a framework to handle an interface with an external library and its API. As part of that, I need to populate a header field that exists with the same name and type in each of many (70ish) possible message classes. Unfortunately, instead of having each message class derive from a common base class that would contain the header field, each one is entirely separate.
As a toy example:
public class A
{
    public Header header;
    public Integer aData;
}

public class B
{
    public Header header;
    public Long bData;
}
If they had designed them sanely, with A and B deriving from some base class containing the header, I could just do:
public boolean sendMessage(BaseType b)
{
    b.header = populateHeader();
    stuffNecessaryToSendMessage();
    return true;
}
But as it stands, Object is the only common class. The various options I've thought of would be:
A separate method for each type. This would work, and be fast, but the code duplication would be depressingly wasteful.
I could subclass each of the types and have them implement a common Interface. While this would work, creating 70+ subclasses and then modifying the code to use them instead of the original messaging classes is a bridge too far.
Reflection. Workable, but I'd expect it to be too slow (performance is a concern here)
Given these, the separate method for each seems like my best bet, but I'd love to have a better option.
I'd suggest the following. Create a set of interfaces you'd like to have. For example:
public interface HeaderHolder {
    public void setHeader(Header header);
    public Header getHeader();
}
You'd like your classes to implement them, i.e. you'd like your class B to be defined as
    class B implements HeaderHolder {...}
Unfortunately it is not. No problem!
Create a facade:
public class InterfaceWrapper {
    public <T> T wrap(Object obj, Class<T> api) {...}
}
You can implement it at this stage using a dynamic proxy. Yes, a dynamic proxy uses reflection, but forget about that for now.
Once you are done you can use your InterfaceWrapper as follows:
    B b = new B();
    new InterfaceWrapper().wrap(b, HeaderHolder.class).setHeader(myHeader); // myHeader being a Header instance
As you can see, now you can set headers on any class you want (if it has the appropriate property). Once you are done, you can check your performance. If, and only if, the use of reflection in the dynamic proxy is a bottleneck, change the implementation to code generation (e.g. based on a custom annotation, package name, etc.). There are a lot of tools that can help you do this, or alternatively you can implement such logic yourself. The point is that you can always change the implementation of InterfaceWrapper without changing other code.
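For illustration, a minimal reflection-based implementation of wrap using java.lang.reflect.Proxy might look like this (it matches methods by name and parameter types; a sketch, not production code):

import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class InterfaceWrapper {
    @SuppressWarnings("unchecked")
    public <T> T wrap(final Object obj, Class<T> api) {
        return (T) Proxy.newProxyInstance(
                api.getClassLoader(),
                new Class<?>[] { api },
                (proxy, method, args) -> {
                    // Find the same-signature method on the wrapped object's
                    // class and delegate the call to it.
                    Method target = obj.getClass()
                            .getMethod(method.getName(), method.getParameterTypes());
                    return target.invoke(obj, args);
                });
    }
}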
But avoid premature optimization. Reflection works very efficiently these days; Sun/Oracle worked hard to achieve this. For example, they create classes on the fly and cache them to make reflection faster. So, taking the full flow into consideration, the reflective call probably does not take too much time.
How about dynamically generating those 70+ subclasses at build time of your project? That way you won't need to maintain 70+ source files, while keeping the benefits of the approach from your second bullet.
The only library I know of that can do this is Dozer. It does use reflection, but the good news is that it will be easier to test whether it's slow than to write your own reflection code only to discover that it's slow.
By default, dozer will call the same getter/setters on two objects even if they are completely different. You can configure it in much more complex ways though. For example, you can also tell it to access the fields directly. You can give it a custom converter to convert a Map to a List, things like that.
You can just take one populated instance, or perhaps even your own BaseType, and say dozer.map(baseType, SubType.class);
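For illustration, a hedged sketch of that call (assuming the classic org.dozer packaging; BaseType and SubType stand in for your own base type and one of the 70+ message classes):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public class HeaderCopier {
    // Reuse a single Mapper instance; creating one is expensive.
    private final Mapper dozer = new DozerBeanMapper();

    public SubType copyInto(BaseType baseType) {
        return dozer.map(baseType, SubType.class); // copies matching properties
    }
}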
Usually, in my own projects I use getters and setters for any field access, and I kept doing the same at my job. Some time ago, the tech lead of our project asked me why I was doing that, and why it is better than just using the fields themselves (with the option of declaring them protected if they need to be accessed by subclasses). I couldn't come up with a clear answer.
So, are there any reasons to use getters and setters inside a class for the class's own fields, or is it better to use the fields directly?
The most obvious answer is side effects:
private Integer cost; // Integer rather than int, so it can be null until calculated

int getCost()
{
    if (cost == null) {
        calculateCost();
    }
    return cost;
}
If you need the cost, use getCost(). If you want to see if cost has been calculated, use cost.
If there is any business logic around those values (or there is the potential for such logic), then there is a benefit to using getters and setters even for internal calls.
For example, your setter might do validation on its inputs, and throw an exception rather than store an invalid value. Having all your code use that setter rather than simply setting values directly means that the error is caught at the time it is made rather than a long time later when that value is used. A similar case for a getter is when there is a logical default value, which should be used in case of a null. By using a getter, you can safely write local methods without needing continuous null checks or default options.
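For instance, a minimal sketch of both cases (a hypothetical Account class):

public class Account {
    private Integer limit;

    public void setLimit(Integer limit) {
        // Fail at assignment time, not when the value is later used.
        if (limit != null && limit < 0) {
            throw new IllegalArgumentException("limit must be non-negative");
        }
        this.limit = limit;
    }

    public Integer getLimit() {
        return limit != null ? limit : 0; // logical default when unset
    }
}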
That said, if there's no business logic in those methods, and no side effects caused by them, then it's mostly a stylistic thing. It is essentially the responsibility of the class to be internally consistent, and as long as it remains so then it's mostly personal/professional preference whether you access the variables directly or through wrapping methods.
You want to declare the getters and setters public, and the fields private. This means external classes (not subclasses) that want to modify the variables all do so through the setters, and read them through the getters. The benefit is that if you want to control how, or under what conditions, the fields are read or set, or want to add information or even print debug output, you only have to put it in the getters and setters.
There's a really good explanation of the benefits on stackoverflow actually:
In Java, difference between default, public, protected, and private
Of course, only make methods when they're actually needed, and similarly, only public when needed by external classes.
Hope that helps the defense!
This is part of the general question of why you use getters and setters. Many developers use them without thought, as a matter of practice. Personally, I only put in getters/setters if I need to.
I would suggest you do what is clearest/simplest to you.
In general, if I can easily add a getter/setter later should I need it, I won't add it. If it would be difficult to add later (or you have an immediate use for them), I would include them.
Some of us are web developers, so we resort to creating JavaBeans, and JavaBeans have their own specification. The specification clearly states:
The class must have a public default constructor (no-argument).
The class properties must be accessible using get, set, is (used for boolean properties instead of get) and other methods.
The class should be serializable.
The reason being, JavaBeans were designed for Reusability where JavaBeans could travel through any Java technologies (e.g. Servlets, JSPs, RMI, Web Services, etc.).
That's my 2 cents' worth on why we have getters/setters. I mostly create JavaBeans.
Some people think that they should always encapsulate all fields by using setters/getters.
Others think that this practice should not be used at all.
If your class does not have any logic for the fields and is just used as a holder, you can skip the methods and declare your fields as public. This concept is also called a Data Transfer Object (or Messenger). But as a rule you should use the final modifier for such fields to make your class immutable:
public class TwoTuple<A, B> {
    public final A first;
    public final B second;

    public TwoTuple(A a, B b) { first = a; second = b; }
}
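Usage is then read-only by construction:

public class TwoTupleDemo {
    public static void main(String[] args) {
        TwoTuple<String, Integer> pair = new TwoTuple<>("answer", 42);
        System.out.println(pair.first + " = " + pair.second);
        // pair.first = "question"; // would not compile: the field is final
    }
}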
However, you must (or it is strongly recommended to) use setters/getters:
in web applications there are sometimes requirements to use setters/getters. See POJO/JavaBean objects.
if your class is going to be used in a concurrent environment. See Java Concurrency in Practice, Section 3.2:
"Whether another thread actually does something with a published reference doesn't really matter, because the risk of misuse is still present.[7] Once an object escapes, you have to assume that another class or thread may, maliciously or carelessly, misuse it. This is a compelling reason to use encapsulation: it makes it practical to analyze programs for correctness and harder to violate design constraints accidentally"
if you want to add some extra logic when you set/get values, you must use setters/getters. Just read about encapsulation and its advantages.
My own opinion: always declare fields as private final, and only relax these properties later if needed.
I have a custom INIFile class that I've written that reads/writes INI files containing fields under a header. I have several classes that I want to serialize using this class, but I'm kind of confused as to the best way to go about it. I've considered two possible approaches.
Method 1: Define an Interface like ObjectPersistent enforcing two methods like so:
public interface ObjectPersistent
{
    public void save(INIFile ini);
    public void load(INIFile ini);
}
Each class would then be responsible for using the INIFile class to output all properties out to the file.
Method 2: Expose all properties of the classes needing serialization via getters/setters so that saving can be handling in one centralized place like so:
public void savePlayer(Player p)
{
    INIFile i = new INIFile(p.getName() + ".ini");
    i.put("general", "name", p.getName());
    i.put("stats", "str", p.getSTR());
    // and so on
}
The best part of method 1 is that not all properties need to be exposed, so encapsulation is held firm. What's bad about method 1 is that saving isn't technically something that the player would "do". It also ties me down to flat files via the ini object passed into the method, so switching to a relational database later on would be a huge pain.
The best part of method 2 is that all I/O is centralized into one location, and the actual saving process is completely hidden from you. It could be saving to a flat file or database. What's bad about method 2 is that I have to completely expose the classes internal members so that the centralized serializer can get all the data from the class.
I want to keep this as simple as possible. I prefer to do this manually without use of a framework. I'm also definitely not interested in using the built in serialization provided in Java. Is there something I'm missing here? Any suggestions on what pattern would be best suited for this, I would be grateful. Thanks.
Since you don't want (for some reason) to use Java serialization, you can use XML serialization. The simplest way is via XStream:
XStream is a simple library to serialize objects to XML and back again.
If you are really sure you don't want to use any serialization framework, you can of course use reflection. The important points there are (a sketch combining them follows the list):
getClass().getDeclaredFields() returns all fields of the class - both public and private
field.setAccessible(true) - makes a private (or protected) field accessible via reflection
Modifier.isTransient(field.getModifiers()) tells you whether the field has been marked with the transient keyword - i.e. not eligible for serialization.
nested object structures may be represented by a dot notation - team.coach.name, for example.
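For illustration, a hedged sketch combining those points, using the INIFile class from the question (assuming the put(section, key, value) signature shown earlier; error handling simplified):

import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class IniReflector {
    public void save(Object obj, INIFile ini) throws IllegalAccessException {
        String section = obj.getClass().getSimpleName().toLowerCase();
        for (Field field : obj.getClass().getDeclaredFields()) {
            if (Modifier.isTransient(field.getModifiers())) {
                continue; // transient fields are not serialized
            }
            field.setAccessible(true); // reach private fields too
            ini.put(section, field.getName(), String.valueOf(field.get(obj)));
        }
    }
}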
All serialization libraries use reflection (or introspection) to achieve their goals.
I would choose Method 1.
It might not be the most object oriented way, but in my experience it is simpler, less error-prone and easier to maintain than Method 2.
If you are concerned about providing multiple implementations for your own serialization, you can use interfaces for the save and load methods.
public interface ObjectSerializer
{
    public void writeInt(String key, int value);
    ...
}

public interface ObjectPersistent
{
    public void save(ObjectSerializer serializer);
    public void load(ObjectDeserializer deserializer);
}
You can improve these ObjectSerializer/Deserializer interfaces to have enough methods and parameters to cover both flat file and database cases.
This is a job for the Visitor pattern.
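For illustration, a hedged sketch of how Visitor could map onto this problem (hypothetical names; the serializer visits each persistable object, so I/O stays centralized while each class controls what it exposes):

interface PersistenceVisitor {
    void visit(Player player);
    // one visit method per persistable type
}

interface Persistable {
    void accept(PersistenceVisitor visitor);
}

class Player implements Persistable {
    String name;

    @Override
    public void accept(PersistenceVisitor visitor) {
        visitor.visit(this);
    }
}

class IniPersistenceVisitor implements PersistenceVisitor {
    private final INIFile ini;

    IniPersistenceVisitor(INIFile ini) { this.ini = ini; }

    @Override
    public void visit(Player player) {
        ini.put("general", "name", player.name);
    }
}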