How should I object-model class blueprints and concrete classes? (Java)

This is not a question about what is a class or what is an object.
I am trying to identify a design pattern, for reuse.
I have a class blueprint, which consists of a Map keyed by field name whose values hold the properties of each field. The map's values describe the fields of a particular class.
class FieldDescriptor {
    public FieldDescriptor(String name, int length, boolean isKey) {
        ....
    }
    ...
}
class ConcreteClass {
    public static final Map<String, FieldDescriptor> fields;

    static {
        Map<String, FieldDescriptor> myFields = new HashMap<String, FieldDescriptor>();
        myFields.put("PERSON_CODE", new FieldDescriptor("PERSON_CODE", 10, true));
        myFields.put("FUN_FUN_FUN", new FieldDescriptor("FUN_FUN_FUN", 6, false));
        myFields.put("JEDI_POWER_RATING", new FieldDescriptor("JEDI_POWER_RATING", 9000, true));
        fields = Collections.unmodifiableMap(myFields);
    }

    private String personCode;
    private String funFunFun;
    private String jediPowerRating;

    public void setPersonCode(String personCode) {
        this.personCode = transformField(fields.get("PERSON_CODE"), personCode);
    }
    ...
}
The whole reason for the madness is the transformField call in the setters; it is central to why I created the map.
However I would like to abstract this away from my class as I would like to build more classes this way and be able to refer to the map generically or via an interface.
I feel strongly that the Map should be encapsulated in a separate class! But then, how would instantiation of the ConcreteClass occur?
Can anyone identify a suitable design pattern?

I am not sure I understand your question. But if my understanding is correct, I would probably leverage reflection and an instance of the object, rather than introducing a custom class called FieldDescriptor. Then again, I do not know your complete use case, so I might be wrong.
So this is my solution briefly:
Each class will have to have a static field called defaultInstance, of the same type as the class itself. If I were using a framework like Spring, I would try to leverage a framework callback method to populate the defaultInstance (to be concise, if the lifecycle of the object is managed). The idea is to have an external component responsible for providing each class with its defaultInstance (dynamic injection?).
When the class needs access to a value stored in the default instance, it can use the Reflection API, or a wrapper like Apache BeanUtils, to get individual field names and values.
I see that you have a boolean field called isKey. If you need this information at runtime, you can use a custom annotation to mark certain fields as keys and use isAnnotationPresent to implement your branch logic.
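A minimal sketch of that annotation idea; the Key annotation and the Person class with its fields are hypothetical stand-ins for the original isKey flag:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class KeyAnnotationDemo {
    // Marker annotation replacing the boolean isKey flag.
    // RUNTIME retention is required so reflection can see it.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Key {}

    static class Person {
        @Key private String personCode;   // a key field
        private String funFunFun;         // not a key field
    }

    public static void main(String[] args) {
        for (Field f : Person.class.getDeclaredFields()) {
            System.out.println(f.getName() + " isKey=" + f.isAnnotationPresent(Key.class));
        }
    }
}
```

The branch logic then becomes a simple isAnnotationPresent check instead of a lookup in a hand-maintained map.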
So at the end of it all, you just need an attribute called defaultInstance in each class. Have a single component that is responsible for populating this object (to make it configurable, you can store the information in a property file or a DB like SQLite). Use dynamic injection or AOP if you can (so that it's non-intrusive), and use Apache BeanUtils or the Reflection API directly to get the information (even this logic should be abstracted into a separate component).

It looks like the only reason you want all the extra complexity of field definitions is so you can relate your fields with their associated column attributes in the database table. You should not have to write this yourself - use a persistence framework like Spring or Hibernate to do the job for you. They use reflection internally, and help keep your data transfer objects (DTOs) clean and easy to maintain.

Related

Usage of sling model

Which one of the following is a better way of defining a Sling model, and why?
@Model(adaptables = Resource.class)
public interface MyModel {
    @Inject
    String getPropertyName();
}
OR
@Model(adaptables = Resource.class)
public class MyModel {
    @Inject
    private String propertyName;
}
Can you tell me a defined use case for using an interface as a model when all the methods are to be overridden in all the implementation classes?
Use an interface when you access values of the ValueMap without any need to provide an additional view of the data. Class-based models are used when you need to apply transformations to the data or add additional data (via OSGi services etc.).
It strongly depends on the usage. In the case where you add the annotation to the getter you could also go with an interface instead of a class.
When you want to get a data attribute and manipulate it e.g. shorten a string or something, then it makes sense to inject it in a variable and then use a getter to return the shortened string.

Framework to populate common field in unrelated classes

I'm attempting to write a framework to handle an interface with an external library and its API. As part of that, I need to populate a header field that exists with the same name and type in each of many (70ish) possible message classes. Unfortunately, instead of having each message class derive from a common base class that would contain the header field, each one is entirely separate.
As a toy example:
public class A
{
    public Header header;
    public Integer aData;
}

public class B
{
    public Header header;
    public Long bData;
}
If they had designed them sanely where A and B derived from some base class containing the header, I could just do:
public boolean sendMessage(BaseType b)
{
    b.header = populateHeader();
    stuffNecessaryToSendMessage();
    return true;
}
But as it stands, Object is the only common class. The various options I've thought of would be:
A separate method for each type. This would work, and be fast, but the code duplication would be depressingly wasteful.
I could subclass each of the types and have them implement a common Interface. While this would work, creating 70+ subclasses and then modifying the code to use them instead of the original messaging classes is a bridge too far.
Reflection. Workable, but I'd expect it to be too slow (performance is a concern here)
Given these, the separate method for each seems like my best bet, but I'd love to have a better option.
I'd suggest you the following. Create a set of interfaces you'd like to have. For example
public interface HeaderHolder {
    public void setHeader(Header header);
    public Header getHeader();
}
You'd like your classes to implement them, i.e. you'd like your class B to be defined as
class B implements HeaderHolder {...}
Unfortunately it is not. No problem!
Create facade:
public class InterfaceWrapper {
    public <T> T wrap(Object obj, Class<T> api) {...}
}
You can implement it at this phase using a dynamic proxy. Yes, dynamic proxies use reflection, but forget about that for now.
Once you are done you can use your InterfaceWrapper as following:
B b = new B();
new InterfaceWrapper().wrap(b, HeaderHolder.class).setHeader(new Header("my header"));
As you can see, you can now set headers on any class you want (provided it has the appropriate property). Once you are done, you can check your performance. If, and only if, the use of reflection in the dynamic proxy is a bottleneck, change the implementation to code generation (e.g. based on a custom annotation, the package name, etc.). There are a lot of tools that can help you do this, or alternatively you can implement such logic yourself. The point is that you can always change the implementation of InterfaceWrapper without changing other code.
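For reference, a minimal sketch of how wrap could be implemented with java.lang.reflect.Proxy, assuming getter/setter names map onto same-named public fields of the target. The Header, HeaderHolder, and B classes here are stand-ins mirroring the toy example above:

```java
import java.lang.reflect.Proxy;

public class WrapDemo {
    static class Header {
        final String value;
        Header(String v) { value = v; }
    }

    interface HeaderHolder {
        void setHeader(Header h);
        Header getHeader();
    }

    // One of the original message classes, left untouched
    static class B {
        public Header header;
        public Long bData;
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(Object target, Class<T> api) {
        return (T) Proxy.newProxyInstance(api.getClassLoader(), new Class<?>[] { api },
            (proxy, method, args) -> {
                // Map getXxx/setXxx calls to a public field named xxx on the target
                String name = method.getName();
                String field = Character.toLowerCase(name.charAt(3)) + name.substring(4);
                if (name.startsWith("set")) {
                    target.getClass().getField(field).set(target, args[0]);
                    return null;
                }
                return target.getClass().getField(field).get(target);
            });
    }

    public static void main(String[] args) {
        B b = new B();
        wrap(b, HeaderHolder.class).setHeader(new Header("my header"));
        System.out.println(b.header.value);
    }
}
```

A production version would also need to handle Object methods (toString, equals) and cache Field lookups, but this shows the core idea: the caller sees only the HeaderHolder interface.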
But avoid premature optimization. Reflection works very efficiently these days; Sun/Oracle worked hard to achieve this. For example, the JVM creates classes on the fly and caches them to make reflection faster. So, taking the full flow into consideration, the reflective call probably does not take much time.
How about dynamically generating those 70+ subclasses at build time? That way you won't need to maintain 70+ source files, while keeping the benefits of the approach from your second bullet.
The only library I know of that can do this is Dozer. It does use reflection, but the good news is that it'll be easier to test whether it's slow than to write your own reflection code only to discover that it's slow.
By default, dozer will call the same getter/setters on two objects even if they are completely different. You can configure it in much more complex ways though. For example, you can also tell it to access the fields directly. You can give it a custom converter to convert a Map to a List, things like that.
You can just take one populated instance, or perhaps even your own BaseType, and say dozer.map(baseType, SubType.class);

Java - Designing a validator, class hierarchy

I'm working on designing a validator for certain objects (fields of those objects). These objects are enclosed in one, bigger object - container.
Example: a Car as the container, consisting of Wheels, an Engine, and a Body.
Let's say I need to validate that the wheels have the correct diameter, the engine has the correct capacity, the body has a certain length, etc.
Theoretically I think I should validate everything before construction of a container (car).
What is the best way to achieve this? Do I make an abstract validator class with a validate() method and implement it in every enclosed class? And what about the container: do I just not include it in the validation process at all? Thanks for the help.
I'd suggest you not to put the validation logic inside the classes you're going to validate.
I find it better to keep those classes as mere value objects, and create a parallel hierarchy of validators, roughly one for each entity to be validated. Alternatively, you could create a single validator that can validate all the entities; however, that solution is less scalable and could lead you to violate the open-closed principle when you have to add a new entity (e.g. you want to deal with the rear-view mirrors of the car as well).
Assuming you choose the one entity : one validator approach, the validator of the container will first validate the components inside the container and then validate if they fit together.
Please consider also the possibility of using validator frameworks such as Apache Commons Validator, that can save you from writing boilerplate code. However, since I don't know what kind of complex validation you have to perform, I don't know if it fits your needs.
Furthermore, I don't think you should be worried about validating everything before it is constructed. Just construct it and validate afterwards: then, if it violates the validation rules, you can discard it (i.e. not persist it anywhere).
Piggybacking off of gd1's answer, I agree. One such way would be to have a ValidatorAdapter for each of your value objects, so it would look like this:
public class GreenCarValidator {
    private final Car car;

    public GreenCarValidator(Car car) {
        this.car = car; // save reference
    }

    public boolean isValid() {
        return car.getColor().equals("green");
    }
}

public class RedCarValidator {
    private final Car car;

    public RedCarValidator(Car car) {
        this.car = car; // save reference
    }

    public boolean isValid() {
        // you could compose more validators here, one for each property in the car object, as needed
        return car.getColor().equals("red");
    }
}
Now you can have many types of validators for a single type of object, dynamic and configurable at runtime. Should you put the isValid() method inside the classes themselves, as gd1 suggests you not do, you would lose this flexibility.
You could create a ValidatablePart interface with a validate method, have all parts implement this interface, and then have the container validate all enclosed parts as they are being added to the container, or perhaps when calling the container's build method (or whatever method is supposed to construct it).
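A minimal sketch of that idea, where the container rejects invalid parts at add time. The Wheel class and its valid diameter range are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CarBuilderDemo {
    interface ValidatablePart {
        boolean validate();
    }

    static class Wheel implements ValidatablePart {
        final int diameter;
        Wheel(int diameter) { this.diameter = diameter; }

        // Assumed rule for the sketch: street wheels are 14-22 inches
        public boolean validate() { return diameter >= 14 && diameter <= 22; }
    }

    static class Car {
        private final List<ValidatablePart> parts = new ArrayList<>();

        // Each part is validated as it is added, so an invalid
        // part can never end up inside the container.
        void addPart(ValidatablePart part) {
            if (!part.validate()) {
                throw new IllegalArgumentException("invalid part: " + part);
            }
            parts.add(part);
        }
    }

    public static void main(String[] args) {
        Car car = new Car();
        car.addPart(new Wheel(16));     // accepted
        try {
            car.addPart(new Wheel(40)); // rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected oversized wheel");
        }
    }
}
```

Validating cross-part constraints ("do the parts fit together?") would still belong in the container's build step, after all parts are present.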
Your Container class could follow the Template Method Design Pattern.

Enforce mapping of concrete classes of a common interface in a static map

Let's say I have an enum class, ConfigElement, which has members like GENERAL_CONFIG("General Configuration") and TRANSIT_TIMES("Transit times").
Each of these config elements' individual classes implements a common interface, ConfigElementsOp. For example:
public class TransitTimesOp implements ConfigElementsOp {
    // implementation of interface methods
}
These individual classes define a certain behaviour which is particular to them.
Now, the controller of the application just gets the particular ConfigElement, and then with the help of a factory, finds out the class which has the corresponding behaviour and uses it accordingly.
Currently, my factory is just a static map between the ConfigElement and its behaviour class, like
public static Map<ConfigElement, ConfigElementsOp> ConfigElementBehaviourMap =
        new HashMap<ConfigElement, ConfigElementsOp>();

ConfigElementBehaviourMap.put(ConfigElement.TRANSIT_TIMES, new TransitTimesOp());
...
I have two concerns with this:
Is this the correct design for the factory? It seems messy to me, as the addition of any new element and behaviour would require changes in multiple places, and forgetting to include it in this static map would be silently ignored by the compiler.
Let's say we go with this design for the factory (the static map): is there any way of enforcing that any new class defined for a config element makes an entry into this map, so that any such miss would be a compile-time error?
Usage can be described in the following way: various controllers will require different behavioural maps of this enum. So, let's say the UI controller will have one map which states how to display a particular ConfigElement, while the serializer will have another map at its disposal, mapping each ConfigElement to its particular serializer. Particular controllers, when at work, will get the corresponding behaviour for a ConfigElement from their map and use it.
Thanks in advance.
You can enhance the enums with a class parameter in addition to the existing string parameter, and retrieve the class directly from the enum. This is the way I would implement it.
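One way to read that suggestion, sketched with hypothetical stand-in behaviour classes: give each constant its behaviour as a constructor parameter, so a constant cannot be added without supplying its op, which also addresses the compile-time-enforcement concern for the default behaviour map:

```java
public class ConfigDemo {
    interface ConfigElementsOp {
        String perform();
    }

    // Hypothetical stand-ins for the real behaviour classes
    static class GeneralConfigOp implements ConfigElementsOp {
        public String perform() { return "general"; }
    }
    static class TransitTimesOp implements ConfigElementsOp {
        public String perform() { return "transit"; }
    }

    // Each constant carries its behaviour; omitting the second
    // constructor argument is a compile-time error, so there is
    // no separate map to forget to update.
    enum ConfigElement {
        GENERAL_CONFIG("General Configuration", new GeneralConfigOp()),
        TRANSIT_TIMES("Transit times", new TransitTimesOp());

        private final String label;
        private final ConfigElementsOp op;

        ConfigElement(String label, ConfigElementsOp op) {
            this.label = label;
            this.op = op;
        }

        ConfigElementsOp op() { return op; }
    }

    public static void main(String[] args) {
        System.out.println(ConfigElement.TRANSIT_TIMES.op().perform());
    }
}
```

For the per-controller maps described above (UI, serializer, ...), this only covers one default behaviour; the additional maps would still be external, which is where EnumMap comes in.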
First of all, since the key is an enum, you should probably use EnumMap, which is tailored for this case.
The static map will probably work at the cost of introducing a strong dependency between the class containing the map and the ConfigElementOp implementations.
What is the big picture, how are you going to use your Map?

Manual object serialization in Java

I have a custom INIFile class that I've written that reads/writes INI files containing fields under headers. I have several classes that I want to serialize using this class, but I'm kind of confused as to the best way to go about doing it. I've considered two possible approaches.
Method 1: Define an Interface like ObjectPersistent enforcing two methods like so:
public interface ObjectPersistent
{
    public void save(INIFile ini);
    public void load(INIFile ini);
}
Each class would then be responsible for using the INIFile class to output all properties out to the file.
Method 2: Expose all properties of the classes needing serialization via getters/setters so that saving can be handling in one centralized place like so:
public void savePlayer(Player p)
{
    INIFile i = new INIFile(p.getName() + ".ini");
    i.put("general", "name", p.getName());
    i.put("stats", "str", p.getSTR());
    // and so on
}
The best part of method 1 is that not all properties need to be exposed, so encapsulation is held firm. What's bad about method 1 is that saving isn't technically something that the player would "do". It also ties me down to flat files via the ini object passed into the method, so switching to a relational database later on would be a huge pain.
The best part of method 2 is that all I/O is centralized into one location, and the actual saving process is completely hidden from you. It could be saving to a flat file or database. What's bad about method 2 is that I have to completely expose the classes internal members so that the centralized serializer can get all the data from the class.
I want to keep this as simple as possible. I prefer to do this manually without use of a framework. I'm also definitely not interested in using the built in serialization provided in Java. Is there something I'm missing here? Any suggestions on what pattern would be best suited for this, I would be grateful. Thanks.
Since you don't want (for some reason) to use Java serialization, you can use XML serialization. The simplest way is via XStream:
XStream is a simple library to serialize objects to XML and back again.
If you are really sure you don't want to use any serialization framework, you can of course use reflection. Important points there are:
getClass().getDeclaredFields() returns all fields of the class - both public and private
field.setAccessible(true) - makes a private (or protected) field accessible via reflection
Modifier.isTransient(field.getModifiers()) tells you whether the field has been marked with the transient keyword - i.e. not eligible for serialization.
nested object structures may be represented by a dot notation - team.coach.name, for example.
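Putting those points together, a minimal sketch of reflective field dumping; the Player class and its fields are hypothetical, and nested objects and dot notation are left out:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReflectionDump {
    static class Player {
        private String name = "Bob";
        private transient int cachedScore = 42; // transient: skipped
        private int str = 17;
    }

    // Collect every non-transient declared field into a map,
    // the way an INI writer might before emitting key=value lines.
    static Map<String, Object> toMap(Object obj) throws IllegalAccessException {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Field f : obj.getClass().getDeclaredFields()) {
            if (Modifier.isTransient(f.getModifiers())) continue;
            f.setAccessible(true); // read private fields too
            out.put(f.getName(), f.get(obj));
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toMap(new Player()));
    }
}
```

Loading is the mirror image: look up each declared field by name and call f.set(obj, value) after parsing the stored string back to the field's type.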
All serialization libraries are using reflection (or introspection) to achieve their goals.
I would choose Method 1.
It might not be the most object oriented way, but in my experience it is simpler, less error-prone and easier to maintain than Method 2.
If you are concerned about providing multiple implementations of your own serialization, you can use interfaces for the save and load methods.
public interface ObjectSerializer
{
    public void writeInt(String key, int value);
    ...
}

public interface ObjectPersistent
{
    public void save(ObjectSerializer serializer);
    public void load(ObjectDeserializer deserializer);
}
You can improve these ObjectSerializer/Deserializer interfaces to have enough methods and parameters to cover both flat file and database cases.
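A minimal sketch of that idea, with a hypothetical string-backed IniObjectSerializer standing in for a real INI or database writer; the Player class knows what to save, but not where it goes:

```java
public class IniSerializerDemo {
    interface ObjectSerializer {
        void writeString(String key, String value);
        void writeInt(String key, int value);
    }

    interface ObjectPersistent {
        void save(ObjectSerializer serializer);
    }

    // One concrete backend; a DatabaseObjectSerializer could be
    // swapped in without touching Player at all.
    static class IniObjectSerializer implements ObjectSerializer {
        final StringBuilder out = new StringBuilder();
        public void writeString(String key, String value) {
            out.append(key).append('=').append(value).append('\n');
        }
        public void writeInt(String key, int value) {
            out.append(key).append('=').append(value).append('\n');
        }
    }

    static class Player implements ObjectPersistent {
        private final String name;
        private final int str;
        Player(String name, int str) { this.name = name; this.str = str; }

        // Encapsulation holds: fields stay private, only save() exposes them
        public void save(ObjectSerializer s) {
            s.writeString("name", name);
            s.writeInt("str", str);
        }
    }

    public static void main(String[] args) {
        IniObjectSerializer s = new IniObjectSerializer();
        new Player("Bob", 17).save(s);
        System.out.print(s.out);
    }
}
```

This double dispatch (the object accepts a serializer and calls back into it) is exactly why the last answer calls it a job for the Visitor pattern.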
This is a job for the Visitor pattern.
