I have an abstract class Example and concrete subclasses to go along with it. I used a discriminator to pull data out of the database, like so:
<resultMap id="ExampleResultMap" class="Example">
<discriminator column="stateCode" javaType="java.lang.String">
<subMap value="AL" resultMap="AlabamaStateResultMap"/>
<subMap value="AR" resultMap="ArkansasStateResultMap"/>
[...]
</discriminator>
</resultMap>
<resultMap extends="ExampleResultMap"
id="AlabamaStateResultMap"
class="AlabamaState"/>
<resultMap extends="ExampleResultMap"
id="ArkansasStateResultMap"
class="ArkansasState"/>
[...]
Thus I have an AlabamaState object (a subclass of the abstract Example object) with no attributes of any kind. This is contrived, but the gist is that I don't have any attribute that uniquely identifies the object's type--and there's no reason I would, if not for this case.
(Note: The classes aren't empty, they're behavioral, so refactoring them out of existence isn't an option.)
How do I save it back to the database?
Ideally there would be a Discriminator for ParameterMaps, but there doesn't seem to be one.
As far as I can tell, there are a number of undesirable solutions, among them:
Give up and add a "getType()" method to all my subclasses that returns a static string ("AL" in this case). (Note that I tried pretty hard to avoid needing this anywhere in my code, so having to add it now feels like a defeat for my object-oriented design.)
Make a "DB" object that's exactly like my big, complex object but happens to also have an extra string saying "Oh, btw, my TYPE is AL."
Extract all 20 attributes I want to persist into a HashMap before inserting the object.
Some other craziness like using the toString() or something to help me out.
Likely I'll go with the first option, but it seems rather ridiculous, doesn't it? If iBatis can create the object, shouldn't it be able to persist it? What I really need is a discriminator for inserts.
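For concreteness, the first option might look like the following sketch (getType() and its wiring to the stateCode column are my own illustration, not an existing iBatis API):
public abstract class Example {
    // Hypothetical discriminator accessor; every concrete subclass
    // returns the same static string the result map keys on.
    public abstract String getType();
}

public class AlabamaState extends Example {
    @Override
    public String getType() {
        return "AL"; // duplicated from the discriminator's subMap value
    }
}
An insert statement's parameter map would then bind the stateCode column to the type property.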
Am I out of luck, or am I just overlooking something obvious?
If you have no attributes belonging to your subclasses, you should consider removing those subclasses and adding an enum to your former base class, since the only purpose your subclasses serve is to differentiate the type of your objects (if I understood you correctly). Using an enum for this is easier to extend and more elegant in client code (since you can switch on the enum instead of using blocks of instanceof expressions).
If you have specialized implementations of certain operations in your subclasses, you can move them to the enum as well, and have your base class delegate to the implementation on the enum.
EDIT
Here is an example:
public interface GreetingStrategy {
    String sayHello();
}

enum UserType implements GreetingStrategy {
    ADMIN {
        @Override
        public String sayHello() {
            return "hello from admin";
        }
    },
    GUEST {
        @Override
        public String sayHello() {
            return "hello from guest";
        }
    };
}
class User {
    private final GreetingStrategy greetingStrategy;

    public User(GreetingStrategy greetingStrategy) {
        this.greetingStrategy = greetingStrategy;
    }

    public String sayHello() {
        return greetingStrategy.sayHello();
    }
}
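For example (hypothetical usage of the classes above):
User admin = new User(UserType.ADMIN);
User guest = new User(UserType.GUEST);
System.out.println(admin.sayHello()); // prints "hello from admin"
System.out.println(guest.sayHello()); // prints "hello from guest"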
I have a class PagedResult. The class is there to help me produce JSON output with different objects in a paged format. E is the type of object wrapped in the list. It all works fine, but one thing still bothers me: I would like the list of objects not to always get the same name. I would like to adapt the name to the corresponding objects.
Class PagedResult:
public class PagedResult<E> {
Long totalItems;
Integer totalPages;
Integer currentPage;
List<E> elements;
[... Getter & Setter ...]
}
The actual JSON Output with an Object like MyPojo looks like this:
{
"totalItems": 2,
"totalPages": 1,
"currentPage": 1,
"elements": [
{
"myPojoAttr1": "hello",
"myPojoAttr2": "there"
},
{
"myPojoAttr1": "hello",
"myPojoAttr2": "folks"
}
]
}
So for each response, no matter which objects it contains, the array is named "elements". I don't want that generic name in my JSON response, given the changing objects in the PagedResult class. When I get a response with objects like MyPojo, the name of the JSON array should be "myPojos", and for a response with objects like MyWin, "myWins".
I tried a lot with @JsonProperty, but I can't find a way to make this "object-array-name" generic as well. Can someone assist me with this problem, please? Thanks in advance.
No. You can't do that. Generic types have parameters for types, not for identifiers. AFAIK, nothing in the Java language allows you to treat a Java identifier as a parameter when producing a type. (Certainly, nothing you could use in this context!)
Alternatives:
Don't do it. (Take a good hard look at your reasons for wanting the JSON attribute name to vary. What does it actually achieve? Is it worth the effort?)
Don't use a generic type. Define a different class for each kind of "paged result". (Clunky. Not recommended.)
Use a map, and populate it with a different map key for the elements attribute of each kind of "paged result". (The disadvantage is that you lose static type checking, and take a small performance and storage penalty. But these are unlikely to be significant; a sketch follows at the end of this answer.)
Write a custom mapper to serialize and deserialize the PagedResult as per your requirements.
For what it is worth, identifiers as parameters are the kind of thing you could do with a macro pre-processor. The Java language doesn't have standard support for that kind of thing.
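To illustrate the map option above, here is a minimal sketch (the constructor shape is my own assumption; Jackson's @JsonAnyGetter serializes each map entry as a top-level JSON field):
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonAnyGetter;

public class PagedResult { // no type parameter; the map replaces the typed field
    private final Map<String, Object> data = new LinkedHashMap<>();

    public PagedResult(long totalItems, int totalPages, int currentPage,
                       String elementsName, List<?> elements) {
        data.put("totalItems", totalItems);
        data.put("totalPages", totalPages);
        data.put("currentPage", currentPage);
        data.put(elementsName, elements); // the caller picks "myPojos", "myWins", ...
    }

    @JsonAnyGetter // Jackson writes each entry as a top-level JSON field
    public Map<String, Object> getData() {
        return data;
    }
}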
Yes, it's possible, using custom serializers. But even with a custom serializer you still have a problem: generics are erased at compile time, so we need to somehow get the type at runtime.
Here is an example that just checks the type of the first element in the elements list. Definitely not the cleanest way to do it, but you don't have to adjust your PagedResult class.
import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.JsonSerializer;
import com.fasterxml.jackson.databind.SerializerProvider;

public class PagedResultSerializer extends JsonSerializer<PagedResult<Object>> {
    @Override
    public void serialize(PagedResult<Object> value, JsonGenerator gen, SerializerProvider provider) throws IOException {
        gen.writeStartObject();
        gen.writeNumberField("totalItems", value.getTotalItems());
        // Your other attributes
        if (!value.getElements().isEmpty()) {
            Object firstElement = value.getElements().get(0);
            String elementsFieldName;
            if (firstElement instanceof MyPojo) {
                elementsFieldName = "myPojos";
            } else if (firstElement instanceof MyWin) {
                elementsFieldName = "myWins";
            } else {
                throw new IllegalArgumentException("Unknown type");
            }
            provider.defaultSerializeField(elementsFieldName, value.getElements(), gen);
        }
        gen.writeEndObject();
    }
}
Now you just need to tell Jackson to use this serializer instead of the default one.
@JsonSerialize(using = PagedResultSerializer.class)
public class PagedResult<T> {
    // Your code
}
Improvements: add a Class<T> elementsType attribute to your PagedResult and use that attribute in your serializer instead of checking the first element in the list.
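That improvement could look roughly like this (a sketch; fieldNameFor is a hypothetical lookup you would still have to provide, e.g. backed by a Map<Class<?>, String>):
@JsonSerialize(using = PagedResultSerializer.class)
public class PagedResult<T> {
    private Class<T> elementsType; // set wherever the result is assembled
    // ... the existing fields ...

    public Class<T> getElementsType() {
        return elementsType;
    }
}

// Inside the serializer, instead of inspecting the first element:
// String elementsFieldName = fieldNameFor(value.getElementsType());
Unlike the first-element check, this also produces the right field name when the list is empty.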
Another approach: use inheritance.
Have an abstract base class PagedResult that contains all the common fields, then subclass it distinctly into PagedResultWithElements, PagedResultWithMyPojo, and so on. The subclasses contain just the "type"-specific list.
As a drawback: you get some code duplication. But on the other side, you get quite more control over what happens without doing overly complicated (de)serialization based on custom code.
So, when you know the different flavors of "element types", and we are talking about, say, 3 to at most 5 different classes, using inheritance might be a viable solution.
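A sketch of that inheritance approach (the class names are taken from the suggestion above; the field-level @JsonProperty fixes each array name):
import java.util.List;
import com.fasterxml.jackson.annotation.JsonProperty;

public abstract class PagedResult {
    Long totalItems;
    Integer totalPages;
    Integer currentPage;
    // getters and setters for the common fields
}

public class PagedResultWithMyPojo extends PagedResult {
    @JsonProperty("myPojos") // fixes the JSON array name for this flavor
    private List<MyPojo> myPojos;
}

public class PagedResultWithMyWin extends PagedResult {
    @JsonProperty("myWins")
    private List<MyWin> myWins;
}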
I am going through dozens of tutorials, which prove to be of very little help, because production code is not an animal, bird, or human, nor a weapon of the cutting or shooting kind; it is much more complex to reason about.
So returning to reality, scenario:
Service 1 is exchanging messages with service 2 through Kafka; messages are serialized/deserialized with Jackson, and the model class is shared between the services as a jar.
Now the plague part, the culmination of evil:
@JsonTypeInfo(
    use = Id.NAME,
    property = "type",
    visible = true
)
@JsonSubTypes({@Type(
    value = InternalTextContent.class,
    name = "text"
), @Type(
    value = InternalImageContent.class,
    name = "image"
), @Type(
    value = InternalAudioContent.class,
    name = "audio"
), @Type(
    value = InternalCustomContent.class,
    name = "custom"
)})
public abstract class InternalContent {
    @JsonIgnore
    private ContentType type;

    public InternalContent() {
    }
}
Obviously when the time will come to work with this content we will have something like:
message.getInternalContent
which results in a sea of switch statements, if conditions, instanceof checks and, wait for it... downcasting everywhere.
And this is just one property the wrapping object contains. Clearly I cannot add polymorphic behaviour to InternalContent, because, hellooo, it is within a jar.
What went wrong here? Is it even wrong?
How do I add polymorphic behaviour? To add a new mitigating layer, I still need instanceof in some factory to create a new family of polymorphic objects that can be edited to add the desired behavior. I'm not even sure that would be better; it just smells, and it makes me want to shoot the advocates who throw out blind statements like "instanceof with downcasting is a code smell", torturing people like me who genuinely care, which makes me wonder if they ever worked on a real project. I deliberately added system environment details to understand how to model not just the code but the interaction between systems. What are the possible redesign options to achieve the "by the book" solution?
So far I can think of this: sharing a domain model is a sin. But then, if I use different self-contained classes to represent the same things for serialization/deserialization, I gain flexibility but lose the contract and increase unpredictability. Which is what technically happens with HTTP contracts.
Should I send different types of messages with different structures along the wire instead of trying to fit common parts and subtypes for uncommon in a single message type?
To throw more sand at OO, I consider Pivotal among the best of the best, and yet:
https://github.com/spring-projects/spring-security/blob/master/core/src/main/java/org/springframework/security/authentication/dao/AbstractUserDetailsAuthenticationProvider.java
public boolean supports(Class<?> authentication) {
return (UsernamePasswordAuthenticationToken.class
.isAssignableFrom(authentication));
}
AuthenticationManager has a list of AuthenticationProviders like this one and selects the correct provider based on the method above. Does this violate polymorphism? Sometimes it all just feels like hype...
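For context, the provider selection inside Spring's ProviderManager boils down to a loop like this (a simplified sketch, not the exact Spring source):
public Authentication authenticate(Authentication authentication) {
    for (AuthenticationProvider provider : getProviders()) {
        if (!provider.supports(authentication.getClass())) {
            continue; // skip providers that can't handle this token type
        }
        Authentication result = provider.authenticate(authentication);
        if (result != null) {
            return result; // first provider that produces a result wins
        }
    }
    throw new ProviderNotFoundException("no provider supports " + authentication.getClass());
}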
Use the visitor pattern.
Example (I'll limit to two subclasses, but you should get the idea):
interface InternalContentVisitor<T> {
    T visitText(InternalTextContent c);
    T visitImage(InternalImageContent c);
}

public abstract class InternalContent {
    public abstract <T> T accept(InternalContentVisitor<T> visitor);
    // ...
}
public class InternalTextContent extends InternalContent {
    @Override
    public <T> T accept(InternalContentVisitor<T> visitor) {
        return visitor.visitText(this);
    }
}

public class InternalImageContent extends InternalContent {
    @Override
    public <T> T accept(InternalContentVisitor<T> visitor) {
        return visitor.visitImage(this);
    }
}
This code is completely generic, and can be shared by any application using the classes.
So now, if you want to polymorphically do something in project1 with an InternalContent, all you need to do is to create a visitor. This visitor is out of the InternalContent classes, and can thus contain code that is specific to project1. Suppose for example that project1 has a class Copier that can be used to create a Copy of a text or of an image, you can use
InternalContent content = ...; // you don't know the actual type
Copier copier = new Copier();
Copy copy = content.accept(new InternalContentVisitor<Copy>() {
    @Override
    public Copy visitText(InternalTextContent c) {
        return copier.copyText(c.getText());
    }

    @Override
    public Copy visitImage(InternalImageContent c) {
        return copier.copyImage(c.getImage());
    }
});
So, as you can see, there is no need for a switch case. Everything is still done in a polymorphic way, even though the InternalContent class and its subclasses have no dependency at all on the Copier class that only exists in project1.
And if a new InternalSoundContent class appears, all you have to do is to add a visitSound() method in the visitor interface, and implement it in all the implementations of this interface.
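For instance, extending the example above with the hypothetical InternalSoundContent:
interface InternalContentVisitor<T> {
    T visitText(InternalTextContent c);
    T visitImage(InternalImageContent c);
    T visitSound(InternalSoundContent c); // the new case
}

public class InternalSoundContent extends InternalContent {
    @Override
    public <T> T accept(InternalContentVisitor<T> visitor) {
        return visitor.visitSound(this);
    }
}
The compiler then forces every existing visitor implementation to handle the new case, instead of leaving a forgotten branch in some switch.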
Currently, I am trying to design some things with OO principles in mind. So let's say that, before processing user input, I need to validate it. According to OO, a separate Validator class would be the correct way. This would look as follows:
public class Validator {
    public void validate(String input) throws ValidationException {
        if (input.equals("")) throw new ValidationException("Input was empty");
    }
}
Then, my processing class, which got the validator object earlier via dependency injection, would call validator.validate(input).
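A minimal sketch of that wiring (InputProcessor is a hypothetical name):
public class InputProcessor {
    private final Validator validator;

    public InputProcessor(Validator validator) { // injected, so tests can pass a mock
        this.validator = validator;
    }

    public void process(String input) throws ValidationException {
        validator.validate(input); // reject bad input before doing any work
        // ... actual processing ...
    }
}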
Good points about this design are that:
My processing class can get a mock for the validator via DI which makes testing easier
The Validator class can be tested independently
However, my doubts are about the design of the Validator. According to OO, it misses some kind of state. With this design, it is a util class, and the validate method could be static. And I have read many times that (static) util classes are bad OO design. So, how can this be done in a more OO way while keeping the two advantages I mentioned?
PS: Maybe OO is simply a bad solution for this kind of problem. However, I would like to see what the OO solution would look like and form my own opinion.
The validator in your example doesn't have a state (and doesn't need any), but another validator could require one (say, one configured with a format):
Example:
import java.util.regex.Pattern;

public class RegExValidator {
    private final Pattern pattern;

    public RegExValidator(String re) {
        pattern = Pattern.compile(re);
    }

    public void validate(String input) throws ValidationException {
        if (!pattern.matcher(input).matches()) {
            throw new ValidationException("Invalid syntax [" + input + "]");
        }
    }
}
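Hypothetical usage:
RegExValidator digitsOnly = new RegExValidator("\\d+");
digitsOnly.validate("12345"); // passes
digitsOnly.validate("12a45"); // throws ValidationException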
Concentrating on the OOP aspect of your question (rather than the question of whether an exception is the correct way to handle your validation):
Why have a single validator?
interface Validator<T> {
void validate(T toValidate) throws ValidationException;
}
would enable you to write classes that can validate any class T and be very testable. Your validator would look like this:
class EmptyStringValidator implements Validator<String> {
    public void validate(String toValidate) throws ValidationException {
        if (toValidate == null || toValidate.isEmpty()) throw new ValidationException("empty!!!");
    }
}
and you could test it very easily.
In fact, if you're using Java 8, this would be a functional interface, so a single utility class could host several validators:
class ValidationUtil {
    public static void emptyString(String val) throws ValidationException {
        // same code as above
        if (val == null || val.isEmpty()) throw new ValidationException("empty!!!");
    }
}
and ValidationUtil::emptyString would implement Validator<String>.
You would combine several validators with a composite pattern.
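Such a composite could be sketched like this (assuming the Validator<T> interface above):
import java.util.List;

class CompositeValidator<T> implements Validator<T> {
    private final List<Validator<T>> validators;

    CompositeValidator(List<Validator<T>> validators) {
        this.validators = validators;
    }

    @Override
    public void validate(T toValidate) throws ValidationException {
        for (Validator<T> validator : validators) {
            validator.validate(toValidate); // the first failure aborts the chain
        }
    }
}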
You could also have a validator with a state if that's what you need...
class ListIsSortedValidator implements Validator<Integer> {
    private int lastInt = Integer.MIN_VALUE;

    public void validate(Integer val) throws ValidationException {
        if (val < lastInt) throw new ValidationException("not sorted");
        lastInt = val;
    }
}
You could use that, for instance, to validate a list:
List<Integer> list = createList();
Validator<Integer> validator = new ListIsSortedValidator();
for (Integer value : list) {
    validator.validate(value); // a lambda won't do here, since validate throws a checked exception
}
It depends on the circumstances of course, but I think your instinct is correct. This design could be more Object-Oriented.
It is not just that Validator has no state, which is a purely mechanical indicator that it is likely not a correct abstraction; the name itself tells us something, too. Usually Validator (or even EmptyStringValidator) is not part of the problem domain. It is always a bad sign when you have to create something purely technical (although sometimes it is the lesser of two evils).
I assume you are not writing a web-framework, you are trying to write an application that has some domain. For example it has user registration. Then, RegistrationForm is part of the problem domain. Users know about the "registration form", you can talk about it and they will know what you mean.
In this case, an Object-Oriented solution for validation would be that this object is responsible for the validation of itself during the "submitting" of itself.
public final class RegistrationForm extends Form {
    ...
    @Override
    public void submit() {
        // Do validation here
        // Set input fields to error if there are problems
        // If everything ok do logic
    }
}
I know this is not the solution normally seen or even supported by web frameworks, but it is how an Object-Oriented solution would look.
The two important points to always keep in mind are:
Don't "get" data from objects, ask them to do something instead. This is as applicable to UI code as anything else.
OO makes sense when the objects focus on meaningful things, i.e. the problem domain. Avoid over-representing technical (unimportant) objects, like Validator (if that's not your application's domain).
This is one of those topics I don't even know how to search for on Google (I tried already; most of the results were for C#), so here I go:
I'm messing around with our huge application, trying to get a brand new DAO/Entity/Service/DTO... euh... thing to work. I've been left more or less on my own, and, again more or less, I'm getting to understand some of the hows and maybe one or two of the whys.
The thing is that I got it all, the way "up", from the DB to the Service:
I got a DAO class which executes a query stored on an Entity class. After executing it, it returns the Entity with the values.
The service receives the Entity and, somehow, transforms the Entity into a DTO and returns it to wherever it is needed.
My problem is with the "somehow" part. The code goes like this:
DTOClass dto = ClassTransformerFromEntityToDTO.INSTANCE.apply(entityQueryResult);
I went into ClassTransformerFromEntityToDTO and found this:
public enum ClassTransformerFromEntityToDTO implements Function<EntityClass, DTOClass> {
    INSTANCE;

    @Override
    public DTOClass apply(EntityClass entityInstance) {
        /* Code to transform the Entity to DTO and then return it */
    }
}
The interface that this... thing implements is this:
package com.google.common.base;

import com.google.common.annotations.GwtCompatible;
import javax.annotation.Nullable;

@GwtCompatible
public interface Function<F, T> {
    @Nullable
    T apply(@Nullable F paramF);

    boolean equals(@Nullable Object paramObject);
}
I'm in the classic "everyone who was there at the beginning of the project has fled" situation, and no one knows why this is here or what it is (the wisest one told me that maybe it had something to do with Spring), so I have two main questions (which can more or less be answered together):
1) What's this? What's the point of using an enum with a function to make a conversion?
2) What's the point of this? Why can't I just make a class with a single function and forget about this wizardry?
I'm not sure there's much to answer here... I'm adding an answer to illustrate my thoughts with some code I've seen, but what you have is horrible. I've actually seen similar stuff. My guess is that that code actually predates Spring. It's used as some sort of singleton.
I have seen code like this, which is worse:
public interface DTO {
    Object find(Object args);
}
public class ConcreteDTO1 implements DTO {
...
}
public class ConcreteDTO2 implements DTO {
...
}
public enum DTOType {
    CONCRETE_DTO1(new ConcreteDTO1(someArgs)),
    CONCRETE_DTO2(new ConcreteDTO2(someOtherArgs));

    private final DTO dto;

    DTOType(DTO dto) {
        this.dto = dto;
    }

    public DTO dto() {
        return dto;
    }
}
and then the DTOs are basically accessed through the Enum Type:
DTOType.CONCRETE_DTO1.dto().find(args);
So everyone trying to get hold of a DTO accesses it through the enum. With Spring, you don't need any of that. The IoC container is meant to avoid this kind of nonsense, which is why my guess is that it predates Spring, from some ancient version of the app before Spring was there. But it could be that someone was wired to do such things regardless of whether Spring was already in the app or not.
For the kind of stuff you're trying to do, you're better off with the Visitor pattern. Here's an example from a different answer: passing different type of objects dynamically on same method
It's me. From the future.
Turns out that this construct is a proposed singleton implementation, at least in "Effective Java, 2nd edition".
So, yeah, Ulise's guess was well oriented.
We're trying to figure out a robust way of persisting enums using JPA. The common approach of using @Enumerated is not desirable, because it's too easy to break the mappings when refactoring. Each enum should have a separate database value that can be different from the enum name/order, so that you can safely change the name or internal ordering (e.g. the ordinal values) of the enum without breaking anything. E.g. this blog post has an example of how to achieve this, but we feel the suggested solution adds too much clutter to the code. We'd like to achieve a similar result by using the new AttributeConverter mechanism introduced in JPA 2.1. We have an interface that each enum should implement that defines a method for getting the value that is used to store the enum in the database. Example:
public interface PersistableEnum {
String getDatabaseValue();
}
...
public enum SomeEnum implements PersistableEnum {
FOO("foo"), BAR("bar");
private String databaseValue;
private SomeEnum(String databaseValue) {
this.databaseValue = databaseValue;
}
public String getDatabaseValue() {
return databaseValue;
}
}
We also have a base converter that has the logic for converting enums to Strings and vice versa, and separate concrete converter classes for each enum type (AFAIK, a fully generic enum converter is not possible to implement; this is also noted in this SO answer). The concrete converters then simply call the base class that does the conversion, like this:
public abstract class EnumConverter<E extends PersistableEnum> {
protected String toDatabaseValue(E value) {
// Do the conversion...
}
protected E toEntityAttribute(Class<E> enumClass, String value) {
// Do the conversion...
}
}
...
@Converter(autoApply = true)
public class SomeEnumConverter extends EnumConverter<SomeEnum>
implements AttributeConverter<SomeEnum, String> {
public String convertToDatabaseColumn(SomeEnum attribute) {
return toDatabaseValue(attribute);
}
public SomeEnum convertToEntityAttribute(String dbData) {
return toEntityAttribute(SomeEnum.class, dbData);
}
}
However, while this approach works very nicely in a technical sense, there's still a pretty nasty pitfall: Whenever someone creates a new enum class whose values need to be stored to the database, that person also needs to remember to make the new enum implement the PersistableEnum interface and write a converter class for it. Without this, the enum will get persisted without a problem, but the conversion will default to using @Enumerated(EnumType.ORDINAL), which is exactly what we want to avoid. How could we prevent this? Is there a way to make JPA (in our case, Hibernate) NOT default to any mapping, but e.g. throw an exception if no @Enumerated is defined on a field and no converter can be found for the type? Or could we create a "catch all" converter that is called for all enums that don't have their own specific converter class and always throw an exception from there? Or do we just have to suck it up and try to remember the additional steps each time?
You want to ensure that all enums are instances of PersistableEnum.
You need to set a default entity listener (an entity listener whose callbacks apply to all entities in the persistence unit).
In the default entity listener class, implement a @PrePersist method and make sure all the enums are instances of PersistableEnum.
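A sketch of such a listener (the reflection-based check and the class name are my own illustration; PersistableEnum is the interface from the question):
import java.lang.reflect.Field;
import javax.persistence.PrePersist;

public class PersistableEnumCheckListener {

    @PrePersist
    public void checkEnums(Object entity) {
        for (Field field : entity.getClass().getDeclaredFields()) {
            if (field.getType().isEnum()
                    && !PersistableEnum.class.isAssignableFrom(field.getType())) {
                throw new IllegalStateException("Enum field '" + field.getName()
                        + "' in " + entity.getClass().getName()
                        + " does not implement PersistableEnum");
            }
        }
    }
}
The listener would be registered as a default entity listener in orm.xml (under persistence-unit-defaults), so it runs for every entity without per-class annotations. Note that this catches the problem at persist time, not at deployment time.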