I'm trying to code domain objects that can create themselves from other objects that implement the same interface. I'm also coding them so they can transform themselves into other implementations, basically simple domain transfer objects. I'm using Jackson to automatically convert between implementations, to reduce the error-prone boilerplate of manual object conversion.
It's probably easier to show with an example:
//base class
public abstract class DO<T> {
    public abstract T toDTO();
    public abstract DO<T> fromDTO(T t);
}
//concrete implementation
public class MyDO extends DO<MyDTO> implements MyDOInterface {
    public MyDO fromDTO(MyDTO r) {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.convertValue(r, MyDO.class);
    }
    public MyDTO toDTO() {
        ObjectMapper mapper = new ObjectMapper();
        return mapper.convertValue(this, MyDTO.class);
    }
    //getters and setters from MyDOInterface
}
Now this works fine when creating DTOs, but it's a bit of a pain the other way around. To create my domain objects I'm having to do this:
MyDO myDO = new MyDO().fromDTO(aDTOInstance);
Which creates an empty object in order to call fromDTO(...) on it.
I've got a feeling I'm missing something simple that would either allow me to pass the DTO in a constructor or a static method to avoid this, or even a factory method in DO itself, but I can't work out what it is. Can anyone see a way of making this work?
Maybe have a look at the @JsonCreator annotation: it allows you to mark constructors and (static) factory methods to be used, specifically a so-called "delegating" creator.
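For illustration, here is a minimal sketch of what such a delegating creator could look like on MyDO (the name property and its accessors are assumptions, not from the original question):

import com.fasterxml.jackson.annotation.JsonCreator;

public class MyDO implements MyDOInterface {
    private String name; // hypothetical property shared with MyDTO

    // Delegating creator: a single un-annotated parameter tells Jackson to
    // bind the whole incoming value to MyDTO first and then call this factory,
    // so no empty MyDO instance is ever needed.
    @JsonCreator
    public static MyDO fromDTO(MyDTO dto) {
        MyDO result = new MyDO();
        result.name = dto.getName();
        return result;
    }

    public String getName() {
        return name;
    }
}

With the creator in place, mapper.convertValue(aDTOInstance, MyDO.class) should build the domain object directly instead of requiring the empty-instance workaround.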
Another possibility when serializing would be @JsonValue, which allows certain conversions during the serialization process.
I don't know if these help with the specific problem, since you are doing more conversion than reading/writing of JSON, but they seem related.
Related
I'm using Kotlin with Apache Beam, and I have a set of DTOs that reference each other and all serialize fine with any encoder using Kotlinx Serialization. When I try to use them with Beam, I run into issues because it expects all objects, type parameters, and nested objects to implement the Java Serializable interface. The problem is that I'm not in control of all the object types, because some come from third-party libraries.
I've implemented my own CustomCoder<T> type that uses Kotlinx Serialization, but then my custom coder itself isn't serializable, particularly because the Companion object serializer generated by the Kotlinx Serialization plugin doesn't serialize. Since that is compile-time generated code I don't really have control over it, and I can't flag it as @Transient. I tried implementing Externalizable on the coder, and it fails as soon as I pass a type argument for T that doesn't implement Serializable or that has a nested type argument that doesn't.
Also, Kotlinx Serialization is nice because it doesn't use reflection. A lot of my current headaches would disappear if I could just swap out the serialization mechanism somehow and not have to rely on standard Java serialization at all, or somehow implement Externalizable in a way that just calls out to my own serialization mechanism and ignores the type parameter. Are there any solutions? I don't care how hacky it is, even if the solution involves messing with the Gradle build config to override something. I'm just not sure how to go about it, so any pointers would be a great help!
Alternatively, if I abandon Kotlinx Serialization, are there any simple solutions to make arbitrarily complex data types serialize with Java, even using reflection, without a lot of custom manual work to handle encoding and decoding? I feel like maybe I'm just missing something obvious. This is my first project with Apache Beam, but so far Google has been little help.
Maybe I'm late, but I recently developed an annotation processor called beanknife that supports generating DTOs from any class. You configure it with annotations, but you don't need to change the original class: the library supports configuration on a separate class. Of course, you can choose which properties you want and which you don't, and you can add new properties through static methods in the config class. Its most powerful feature is that it can automatically convert an object property to its DTO version. For example:
class Pojo1 {
    String a;
    Pojo2 b; // circular reference to Pojo2
}
class Pojo2 {
    Pojo1 a;
    List<Pojo1> b;
    Map<String, List<Pojo1>>[] c; // Map needs a key type; String assumed here
}
// remove the circular reference in the DTO
@ViewOf(value = Pojo1.class, includePattern = ".*", excludes = {Pojo1Meta.b})
class ConfigureOfPojo1 {}
// use the DTO version without the circular reference to replace Pojo1
@ViewOf(value = Pojo2.class, includePattern = ".*")
class ConfigureOfPojo2 {
    // convert b to the DTO version
    @OverrideViewProperty(Pojo2Meta.b)
    private List<Pojo1View> b;
    // convert c to the DTO version
    @OverrideViewProperty(Pojo2Meta.c)
    private Map<String, List<Pojo1View>>[] c;
}
will generate
// meta class: you can use it to reference property names in a safe way.
class Pojo1Meta {
    public static final String a = "a";
    public static final String b = "b";
}
// generated DTO class. The real one is more complicated; there are many other methods.
class Pojo1View {
    private String a;
    public static Pojo1View read(Pojo1 source) { ... }
    ... getters and setters ...
}
class Pojo2Meta {
    public static final String a = "a";
    public static final String b = "b";
    public static final String c = "c";
}
class Pojo2View {
    private String a;
    private List<Pojo1View> b;
    private Map<String, List<Pojo1View>>[] c;
    public static Pojo2View read(Pojo2 source) { ... }
    ... getters and setters ...
}
The interesting thing here is that you can safely reference classes that don't exist yet in the source. Although the compiler may complain, everything will be fine after compilation, because all the extra classes are generated automatically just before compiling.
A better approach may be to compile step by step: first add the @ViewOf annotations and compile, so that all the classes needed later are generated; then complete the configuration and compile again. The advantage of this is that the IDE will not show syntax errors and can make better use of its auto-complete.
With support for using generated DTOs in the configuration class, you can define a DTO without circular references, just like in the example. Furthermore, you can define another DTO for Pojo2, remove all properties that reference Pojo1, and use it to replace property b in Pojo1.
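Assuming the generated classes expose the static read methods sketched above, usage might look like this (a sketch based on the example, not on the library's actual documentation):

Pojo2 pojo = new Pojo2();
// copies the selected properties; nested Pojo1 values become Pojo1View
Pojo2View view = Pojo2View.read(pojo);
List<Pojo1View> converted = view.getB();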
When I create a user-defined class "Asset":
public class Asset {
    private UUID id;
    private String name;
    // getters and setters omitted
}
And return an object of this class as a response:
@GetMapping("/testSerialization")
public Asset testSerialization() {
    return new Asset();
}
This controller works successfully.
But when the same controller returns a geometry type, the request fails:
import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;

// Does not work
@GetMapping("/testSerialization")
public Point testSerialization() {
    GeometryFactory geometryFactory = new GeometryFactory();
    Point point = geometryFactory.createPoint(new Coordinate(1, 2));
    return point;
}
I know that I have to register serializers and deserializers with Jackson, either manually or using a library like jackson-datatype-jts, to enable Jackson to work with geometry classes.
My question is: why do I have to do this explicitly for geometry types, whereas my custom classes work without touching any configuration?
Jackson works well without any extra configuration for all regular POJO classes. Problems appear when a POJO class is not regular: for example, when it has no getters, setters, or no-arg constructor.
In your case, two or more classes hold circular references to each other. When the default serializer tries to serialize all properties, it falls into infinite recursion because of that. In such cases you need to provide a custom serializer that handles this properly.
This is why you need to provide custom serializers and deserializers for the com.vividsolutions.jts.geom package.
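As an illustration (a minimal sketch, not the actual jackson-datatype-jts implementation), a custom serializer for Point that writes only the coordinates might look like this:

import java.io.IOException;

import com.fasterxml.jackson.core.JsonGenerator;
import com.fasterxml.jackson.databind.SerializerProvider;
import com.fasterxml.jackson.databind.ser.std.StdSerializer;
import com.vividsolutions.jts.geom.Point;

public class PointSerializer extends StdSerializer<Point> {

    public PointSerializer() {
        super(Point.class);
    }

    @Override
    public void serialize(Point point, JsonGenerator gen, SerializerProvider provider)
            throws IOException {
        // Write just the coordinates instead of letting Jackson walk the whole
        // object graph (GeometryFactory, PrecisionModel, ...), which is what recurses.
        gen.writeStartObject();
        gen.writeNumberField("x", point.getX());
        gen.writeNumberField("y", point.getY());
        gen.writeEndObject();
    }
}

It can then be registered on the ObjectMapper with something like new SimpleModule().addSerializer(new PointSerializer()), which is essentially what jackson-datatype-jts does for the whole geometry hierarchy.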
We're trying to figure out a robust way of persisting enums using JPA. The common approach of using @Enumerated is not desirable, because it's too easy to break the mappings when refactoring. Each enum should have a separate database value that can differ from the enum name/order, so that you can safely change the name or internal ordering (e.g. the ordinal values) of the enum without breaking anything. E.g. this blog post has an example of how to achieve this, but we feel the suggested solution adds too much clutter to the code. We'd like to achieve a similar result using the new AttributeConverter mechanism introduced in JPA 2.1. We have an interface that each enum should implement that defines a method for getting the value that is used to store the enum in the database. Example:
public interface PersistableEnum {
    String getDatabaseValue();
}
...
public enum SomeEnum implements PersistableEnum {
    FOO("foo"), BAR("bar");

    private final String databaseValue;

    private SomeEnum(String databaseValue) {
        this.databaseValue = databaseValue;
    }

    @Override
    public String getDatabaseValue() {
        return databaseValue;
    }
}
We also have a base converter that has the logic for converting enums to Strings and vice versa, and separate concrete converter classes for each enum type (AFAIK, a fully generic enum converter is not possible to implement; this is also noted in this SO answer). The concrete converters then simply call the base class that does the conversion, like this:
public abstract class EnumConverter<E extends PersistableEnum> {
    protected String toDatabaseValue(E value) {
        return value == null ? null : value.getDatabaseValue();
    }
    protected E toEntityAttribute(Class<E> enumClass, String value) {
        if (value == null) { return null; }
        // find the constant whose database value matches
        for (E constant : enumClass.getEnumConstants()) {
            if (constant.getDatabaseValue().equals(value)) {
                return constant;
            }
        }
        throw new IllegalArgumentException("No enum constant of "
                + enumClass.getSimpleName() + " for database value: " + value);
    }
}
...
@Converter(autoApply = true)
public class SomeEnumConverter extends EnumConverter<SomeEnum>
        implements AttributeConverter<SomeEnum, String> {
    @Override
    public String convertToDatabaseColumn(SomeEnum attribute) {
        return toDatabaseValue(attribute);
    }
    @Override
    public SomeEnum convertToEntityAttribute(String dbData) {
        return toEntityAttribute(SomeEnum.class, dbData);
    }
}
However, while this approach works very nicely in a technical sense, there's still a pretty nasty pitfall: whenever someone creates a new enum class whose values need to be stored in the database, that person also needs to remember to make the new enum implement the PersistableEnum interface and to write a converter class for it. Without this, the enum will get persisted without complaint, but the mapping will default to @Enumerated(EnumType.ORDINAL), which is exactly what we want to avoid. How could we prevent this? Is there a way to make JPA (in our case, Hibernate) NOT default to any mapping, but e.g. throw an exception if no @Enumerated is defined on a field and no converter can be found for the type? Or could we create a "catch-all" converter that is called for all enums that don't have their own specific converter class, and always throw an exception from there? Or do we just have to suck it up and try to remember the additional steps each time?
You want to ensure that all Enums are instances of PersistableEnum.
You need to set a Default Entity Listener (an entity listener whose callbacks apply to all entities in the persistence unit).
In the default entity listener class, implement a @PrePersist method and make sure all the enums are instances of PersistableEnum.
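A minimal sketch of such a listener, using reflection over the entity's fields (the class name and error message are illustrative, not from the original answer):

import java.lang.reflect.Field;

import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

public class PersistableEnumListener {

    // Runs for every entity once registered as a default listener.
    @PrePersist
    @PreUpdate
    public void validateEnums(Object entity) {
        for (Field field : entity.getClass().getDeclaredFields()) {
            if (field.getType().isEnum()
                    && !PersistableEnum.class.isAssignableFrom(field.getType())) {
                throw new IllegalStateException(
                        entity.getClass().getName() + "." + field.getName()
                        + ": enum type does not implement PersistableEnum");
            }
        }
    }
}

Note that default entity listeners cannot be declared with annotations; they are registered in orm.xml under the persistence-unit-defaults element.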
I want to use the canDeserialize method because, at deserialization time, I want to get the type's class so I can apply custom deserialization, as in the following example:
public T deserialize(byte[] bytes) throws SerializationException {
    boolean isAccount = this.objectMapper.canDeserialize(??????);
    T t = null;
    if (isAccount)
        t = (T) this.objectMapper.readValue(bytes, Account.class);
    else
        t = (T) this.objectMapper.readValue(bytes, 0, bytes.length, new TypeReference<Object>(){});
    return t;
}
In this case the Account class has the @JsonDeserialize annotation for custom deserialization.
To directly answer your question, this is how you use the canDeserialize method:
final ObjectMapper mapper = new ObjectMapper();
mapper.canDeserialize(mapper.constructType(Bean.class));
Where Bean is the name of your Java class to be checked.
But wait, you are trying to solve the wrong problem. You are struggling with the logic for your method because it has not been designed properly. You are really asking too much of the Java runtime (and the Jackson library) by trying to make them infer all the required information about the type to be instantiated (based on the parameterized return type). To solve this, you should include the class representing the type to be deserialized as a parameter to the method, greatly simplifying the logic:
public <T> T deserialize(byte[] bytes, Class<T> clazz)
        throws IOException, JsonProcessingException {
    return new ObjectMapper().readValue(bytes, clazz);
}
At this point you have probably realized that the method above provides no additional functionality over just calling ObjectMapper.readValue directly, so ... just do that! No need to define custom methods, just use ObjectMapper and you are good to go. Keep in mind that you do not need to do anything explicit to trigger custom deserialization of classes. The Jackson runtime automatically detects when a class has a custom deserializer and invokes it.
I want to convert a JSON string into a Java object, but the class of this object contains abstract fields, which Jackson can't instantiate, so it doesn't produce the object. What is the easiest way to tell it about some default implementation of an abstract class, like
setDefault(AbstractAnimal.class, Cat.class);
or to decide on the implementation class based on a JSON attribute name, e.g. for the JSON object:
{
...
cat: {...}
...
}
I would just write:
setImpl("cat", Cat.class);
I know it's possible in Jackson to embed class information inside the JSON, but I don't want to complicate the JSON format I use. I want to decide which class to use just by setting a default implementation class, or by the attribute name ('cat'), like in the XStream library, where you write:
xStream.alias("cat", Cat.class);
Is there a way to do so, especially in one line, or does it require some more code?
There are multiple ways; before version 1.8, the simplest way is probably to do:
@JsonDeserialize(as=Cat.class)
public abstract class AbstractAnimal { ... }
As for deciding based on an attribute, that is best done using @JsonTypeInfo, which handles automatic embedding (when writing) and use of type information.
There are multiple kinds of type info (class name, logical type name), as well as inclusion mechanisms (as-included-property, as-wrapper-array, as-wrapper-object). This page: https://github.com/FasterXML/jackson-docs/wiki/JacksonPolymorphicDeserialization explains some of the concepts.
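For instance, a sketch using the current com.fasterxml annotations: a logical type name with wrapper-object inclusion matches the { "cat": { ... } } shape from the question (the Cat subtype and its field are assumptions for illustration):

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

// WRAPPER_OBJECT wraps each animal as { "cat": { ... } }
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.WRAPPER_OBJECT)
@JsonSubTypes({
    @JsonSubTypes.Type(value = Cat.class, name = "cat")
})
abstract class AbstractAnimal {
}

class Cat extends AbstractAnimal {
    public String name;
}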
A full-fledged answer with a very clear example can be found here: https://stackoverflow.com/a/30386694/584947
Jackson refers to this as Polymorphic Deserialization.
It definitely helped me with my issue. I had an abstract class that I was saving in a database and needed to unmarshal it to a concrete instance of a class (understandably).
It will show you how to properly annotate the parent abstract class and how to teach Jackson to pick among the available sub-class candidates at run-time when unmarshalling.
If you want to pollute neither your JSON with extra fields nor your classes with annotations, you can write a very simple module and deserializer that uses the default subclass you want. It is more than one line due to some boilerplate code, but it is still relatively simple.
import java.io.IOException;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;

class AnimalDeserializer extends StdDeserializer<Animal> {
    public AnimalDeserializer() {
        super(Animal.class);
    }

    @Override
    public Animal deserialize(JsonParser jsonParser, DeserializationContext context) throws IOException {
        // fall back to the chosen default subclass
        return jsonParser.readValueAs(Cat.class);
    }
}

class AnimalModule extends SimpleModule {
    {
        addDeserializer(Animal.class, new AnimalDeserializer());
    }
}
Then register this module for the ObjectMapper and that's it (Zoo is the container class that has an Animal field).
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.registerModule(new AnimalModule());
return objectMapper.readValue(json, Zoo.class);
The problem can be solved with the @JsonDeserialize annotation on the abstract class.
Refer to Jackson Exceptions: Problems and Solutions for more info.