Returning entities in a REST API with Spring - java

Creating a RESTful API for a web application in Spring is pretty easy.
Let's say we have a Movie entity with a name, a year, a list of genres and a list of actors. In order to return a list of all movies in JSON format, we just create a method in some controller that queries the database and returns the list as the body of a ResponseEntity. Spring magically serializes it, and all works great :)
But what if, in some cases, I want the list of actors in a movie to be serialized, and in other cases not? And what if, in some other case, alongside the fields of the Movie class, I need to add extra properties to each movie in the list, whose values are generated dynamically?
My current solution is to use @JsonIgnore on some fields, or to create a MovieResponse class with the same fields as Movie plus the additional fields that are needed, and to convert from Movie to MovieResponse each time.
Is there a better way to do this?

The point of the @JsonIgnore annotation is to tell the JSON mapper Spring uses to render the response (Jackson, by default) to leave certain fields out when serializing, and to ignore them when reading incoming JSON.
This can give you some flexibility in terms of what data you expose to the client in certain cases.
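For illustration, a minimal sketch of how that looks on the Movie entity from the question (field and type names are assumptions):
import com.fasterxml.jackson.annotation.JsonIgnore;
import java.util.List;

public class Movie {
    private String name;
    private int year;
    private List<String> genres;

    @JsonIgnore // never written to (or read from) the JSON payload
    private List<Actor> actors;

    // getters and setters omitted
}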
Downside to @JsonIgnore:
However, there are some downsides to using this annotation that I've recently encountered in my own projects. They apply mainly to the PUT method, and to cases where the object your controller deserializes the request into is the same object used to store that data in the database.
The PUT method implies that you're either creating a new collection on the server or replacing an existing collection wholesale with the one you're sending.
Example of replacing a collection on the server:
Imagine that you're making a PUT request to your server, and the RequestBody contains a serialized Movie entity -- but this Movie entity contains no actors, because you've omitted them! Later on down the road, you implement a new feature that allows your users to edit and correct spelling errors in the Movie description, and you use PUT to send the Movie entity back to the server and update the database.
But let's say that -- because it's been so long since you added @JsonIgnore to your objects -- you've forgotten that certain fields are optional. On the client side you forget to include the collection of actors, and now your user accidentally overwrites Movie A, which had actors B, C, and D, with a Movie A that has no actors whatsoever!
Why is @JsonIgnore opt-in?
It stands to reason that the intention behind forcing you to explicitly opt out of serializing certain fields is precisely so that these types of data integrity issues are avoided. In a world where you're not using @JsonIgnore, you're guaranteed that your data can never be replaced with partial data unless you explicitly set that partial data yourself. With @JsonIgnore, you remove those safeguards.
With that said, @JsonIgnore is very valuable, and I use it myself in precisely this manner to reduce the size of the payload sent to the client. However, I'm beginning to rethink this strategy and instead opt for one where I use POJO classes in a separate layer for sending data to the frontend, distinct from the classes I use to interact with the database.
Possible better setup?
The ideal setup -- from my experience dealing with this particular problem -- is to use constructor injection for your entity objects instead of setters. Force yourself to pass in every parameter at instantiation time so that your entities are never partially filled. If you try to partially fill them, the compiler stops you from doing something you may regret.
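A minimal sketch of what that could look like, assuming the Movie entity from the question (the protected no-arg constructor is only there because JPA requires one):
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Movie {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private int year;

    protected Movie() {
        // required by JPA; not for application code
    }

    // No setters: every field must be supplied up front, so a Movie
    // can never exist in a partially-filled state.
    public Movie(String name, int year) {
        this.name = name;
        this.year = year;
    }
}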
For sending data to the client side, where you may want to omit certain pieces of data, you could use a separate, disconnected POJO, or a JSONObject from org.json.
When sending data from the client to the server, your frontend entity objects receive the data from the model/database layer, partially or fully, since you don't really care whether the frontend gets partial data. But when storing the data in the datastore, you would first fetch the already-stored object from the datastore, update its properties, and then store it back. In other words, if the actors were missing, it wouldn't matter, because the object you fetched from the datastore already has the actors assigned to its properties. Thus, you only replace the fields that you explicitly intend to replace, as the sketch below shows.
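A hedged sketch of that fetch-update-save flow inside a controller (the DTO name MovieUpdateRequest is hypothetical, and the era-appropriate CrudRepository.findOne() is assumed):
@RequestMapping(value = "/movies/{id}", method = RequestMethod.PUT)
public ResponseEntity<Void> updateMovie(@PathVariable Long id, @RequestBody MovieUpdateRequest incoming) {
    // Fetch the fully-populated entity first...
    Movie stored = movieRepository.findOne(id);
    if (stored == null) {
        return ResponseEntity.notFound().build();
    }
    // ...then overwrite only the fields the client is allowed to edit.
    // The actors collection stays exactly as persisted.
    stored.setDescription(incoming.getDescription());
    movieRepository.save(stored);
    return ResponseEntity.noContent().build();
}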
While there would be more maintenance overhead and complexity in this setup, you'd gain a powerful advantage: the Java compiler would have your back! It won't let you -- or even a hapless colleague -- do anything in the code that might compromise the data in the datastore. If you attempt to create an entity on the fly in your model layer, you'll be forced to use the constructor and provide all of the data. If you don't have all of the data and cannot instantiate the object, then you'll either need to pass empty values (which should raise a red flag) or fetch that data from the datastore first.

I ran into this problem and really wanted to keep using @JsonIgnore, but also keep using the entities/POJOs in the JSON calls.
After a lot of digging I came up with the solution of automatically retrieving the ignored fields from the database on every call of the object mapper.
Of course there are some requirements for this solution -- for example, you have to use a repository -- but in my case it works just the way I need it to.
For this to work you need to make sure the ObjectMapper in MappingJackson2HttpMessageConverter is intercepted and the fields marked with @JsonIgnore are filled. Therefore we need our own MappingJackson2HttpMessageConverter bean:
@Configuration
public class MvcConfig extends WebMvcConfigurerAdapter {

    @Override
    public void extendMessageConverters(List<HttpMessageConverter<?>> converters) {
        for (HttpMessageConverter converter : converters) {
            if (converter instanceof MappingJackson2HttpMessageConverter) {
                ((MappingJackson2HttpMessageConverter) converter).setObjectMapper(objectMapper());
            }
        }
    }

    @Bean
    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new FillIgnoredFieldsObjectMapper();
        Jackson2ObjectMapperBuilder.json().configure(objectMapper);
        return objectMapper;
    }
}
Each JSON request is then converted into an object by our own ObjectMapper, which fills the ignored fields by retrieving them from the repository:
/**
 * Created by Sander Agricola on 18-3-2015.
 *
 * When fields or setters are marked as @JsonIgnore, the field is not read from the JSON and thus left empty in the
 * object. When the object is a persisted entity, it might get stored without these fields, overwriting the properties
 * which were set in previous calls.
 *
 * To overcome this, entities with ignored fields are detected. The same object is then retrieved from the
 * repository and all ignored fields are copied from the database object to the new object.
 */
@Component
public class FillIgnoredFieldsObjectMapper extends ObjectMapper {
    final static Logger logger = LoggerFactory.getLogger(FillIgnoredFieldsObjectMapper.class);

    @Autowired
    ListableBeanFactory listableBeanFactory;

    @Override
    protected Object _readValue(DeserializationConfig cfg, JsonParser jp, JavaType valueType) throws IOException, JsonParseException, JsonMappingException {
        Object result = super._readValue(cfg, jp, valueType);
        fillIgnoredFields(result);
        return result;
    }

    @Override
    protected Object _readMapAndClose(JsonParser jp, JavaType valueType) throws IOException, JsonParseException, JsonMappingException {
        Object result = super._readMapAndClose(jp, valueType);
        fillIgnoredFields(result);
        return result;
    }
    /**
     * Find all ignored fields in the object, and fill them with the value as it is in the database
     *
     * @param resultObject Object as it was deserialized from the JSON values
     */
    public void fillIgnoredFields(Object resultObject) {
        Class<?> c = resultObject.getClass();
        if (!objectIsPersistedEntity(c)) {
            return;
        }
        List<Field> ignoredFields = findIgnoredFields(c);
        if (ignoredFields.isEmpty()) {
            return;
        }
        Field idField = findIdField(c);
        if (idField == null || getValue(resultObject, idField) == null) {
            return;
        }
        CrudRepository repository = findRepositoryForClass(c);
        if (repository == null) {
            return;
        }
        // All lights are green: fill the ignored fields with the persisted values
        fillIgnoredFields(resultObject, ignoredFields, idField, repository);
    }
    /**
     * Fill the ignored fields with the persisted values
     *
     * @param object        Object as it was deserialized from the JSON values
     * @param ignoredFields List with fields which are marked as @JsonIgnore
     * @param idField       The id field of the entity
     * @param repository    The repository for the entity
     */
    private void fillIgnoredFields(Object object, List<Field> ignoredFields, Field idField, CrudRepository repository) {
        logger.debug("Object {} contains fields with @JsonIgnore annotations, retrieving their value from database", object.getClass().getName());
        try {
            Object storedObject = getStoredObject(getValue(object, idField), repository);
            if (storedObject == null) {
                return;
            }
            for (Field field : ignoredFields) {
                field.setAccessible(true);
                field.set(object, getValue(storedObject, field));
            }
        } catch (IllegalAccessException e) {
            logger.error("Unable to fill ignored fields", e);
        }
    }
    /**
     * Get the persisted object from the database.
     *
     * @param id         The id of the object (most of the time an int or string)
     * @param repository The repository for the entity
     * @return The object as it is in the database
     * @throws IllegalAccessException
     */
    @SuppressWarnings("unchecked")
    private Object getStoredObject(Object id, CrudRepository repository) throws IllegalAccessException {
        return repository.findOne((Serializable) id);
    }
    /**
     * Get the value of a field for an object
     *
     * @param object Object with values
     * @param field  The field we want to retrieve
     * @return The value of the field in the object
     */
    private Object getValue(Object object, Field field) {
        try {
            field.setAccessible(true);
            return field.get(object);
        } catch (IllegalAccessException e) {
            logger.error("Unable to access field value", e);
            return null;
        }
    }
    /**
     * Test if the object is a persisted entity
     *
     * @param c The class of the object
     * @return true when it has an @Entity annotation
     */
    private boolean objectIsPersistedEntity(Class<?> c) {
        return c.isAnnotationPresent(Entity.class);
    }

    /**
     * Find the right repository for the class. Needed to retrieve the persisted object from the database
     *
     * @param c The class of the object
     * @return The (Crud)Repository for the class.
     */
    private CrudRepository findRepositoryForClass(Class<?> c) {
        return (CrudRepository) new Repositories(listableBeanFactory).getRepositoryFor(c);
    }
    /**
     * Find the Id field of the object; the Id field is the field with the @Id annotation
     *
     * @param c The class of the object
     * @return the id field
     */
    private Field findIdField(Class<?> c) {
        for (Field field : c.getDeclaredFields()) {
            if (field.isAnnotationPresent(Id.class)) {
                return field;
            }
        }
        return null;
    }

    /**
     * Find a list of all fields which are ignored by Jackson.
     * In some cases the field itself is not ignored, but the setter is. In this case the field is also returned.
     *
     * @param c The class of the object
     * @return List with ignored fields
     */
    private List<Field> findIgnoredFields(Class<?> c) {
        List<Field> ignoredFields = new ArrayList<>();
        for (Field field : c.getDeclaredFields()) {
            // Test if the field is ignored, or the setter is ignored.
            // When the field is ignored it might be overridden by the setter (by adding @JsonProperty to the setter)
            if (fieldIsIgnored(field) ? setterDoesNotOverrideIgnore(field) : setterIsIgnored(field)) {
                ignoredFields.add(field);
            }
        }
        return ignoredFields;
    }
    /**
     * @param field The field to test
     * @return true when the field is ignored by Jackson
     */
    private boolean fieldIsIgnored(Field field) {
        return field.isAnnotationPresent(JsonIgnore.class);
    }

    /**
     * @param field The field to test
     * @return true when the setter is ignored by Jackson
     */
    private boolean setterIsIgnored(Field field) {
        return annotationPresentAtSetter(field, JsonIgnore.class);
    }

    /**
     * @param field The field to test
     * @return true when the setter is NOT ignored by Jackson, overriding the property of the field.
     */
    private boolean setterDoesNotOverrideIgnore(Field field) {
        return !annotationPresentAtSetter(field, JsonProperty.class);
    }

    /**
     * Test if an annotation is present at the setter of a field.
     *
     * @param field      The field whose setter we want to test
     * @param annotation The annotation we are looking for
     * @return true when the annotation is present
     */
    private boolean annotationPresentAtSetter(Field field, Class<? extends Annotation> annotation) {
        try {
            Method setter = getSetterForField(field);
            return setter.isAnnotationPresent(annotation);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    /**
     * Get the setter for the field. The setter is found based on the name with "set" in front of it.
     * The type of the field must be the only parameter of the method
     *
     * @param field The field whose setter we want
     * @return Setter for the field
     * @throws NoSuchMethodException
     */
    private Method getSetterForField(Field field) throws NoSuchMethodException {
        Class<?> c = field.getDeclaringClass();
        return c.getDeclaredMethod(getSetterName(field.getName()), field.getType());
    }

    /**
     * Build the setter name for a fieldName.
     * The setter name is the name of the field with "set" in front of it. The first character of the field
     * is set to uppercase.
     *
     * @param fieldName The name of the field
     * @return The name of the setter
     */
    private String getSetterName(String fieldName) {
        return String.format("set%C%s", fieldName.charAt(0), fieldName.substring(1));
    }
}
Maybe not the cleanest solution in all cases, but in my case it does the trick just the way I want it to.

Where exactly is a model object created in Spring MVC?

After going through some tutorials and initial reading of the docs.spring.org reference, I understood that it is created in the controller, as a POJO class written by the developer.
But while reading this I came across the paragraph below:
An @ModelAttribute on a method argument indicates the argument should be retrieved from the model. If not present in the model, the argument should be instantiated first and then added to the model. Once present in the model, the argument's fields should be populated from all request parameters that have matching names. This is known as data binding in Spring MVC, a very useful mechanism that saves you from having to parse each form field individually.
@RequestMapping(value = "/owners/{ownerId}/pets/{petId}/edit", method = RequestMethod.POST)
public String processSubmit(@ModelAttribute Pet pet) {
}
Spring Documentation
In the paragraph, the most disturbing line is:
"If not present in the model ... "
How can the data already be in the model? (We have not created a model yet -- it is supposed to be created by us.)
Also, I have seen a few controller methods accepting the Model type as an argument. What does that mean? Is the Model created somewhere? If so, who creates it for us?
If not present in the model, the argument should be instantiated first and then added to the model.
The paragraph describes the following piece of code:
if (mavContainer.containsAttribute(name)) {
    attribute = mavContainer.getModel().get(name);
} else {
    // Create attribute instance
    try {
        attribute = createAttribute(name, parameter, binderFactory, webRequest);
    }
    catch (BindException ex) {
        ...
    }
}
...
mavContainer.addAllAttributes(attribute);
(taken from ModelAttributeMethodProcessor#resolveArgument)
For every request, Spring initialises a ModelAndViewContainer instance which records model and view-related decisions made by HandlerMethodArgumentResolvers and HandlerMethodReturnValueHandlers during the course of invocation of a controller method.
A newly-created ModelAndViewContainer object is initially populated with flash attributes (if any):
ModelAndViewContainer mavContainer = new ModelAndViewContainer();
mavContainer.addAllAttributes(RequestContextUtils.getInputFlashMap(request));
It means that the argument won't be initialised if it already exists in the model.
To prove it, let's move to a practical example.
The Pet class:
public class Pet {

    private String petId;
    private String ownerId;
    private String hiddenField;

    public Pet() {
        System.out.println("A new Pet instance was created!");
    }

    // setters and toString omitted
}
The PetController class:
@RestController
public class PetController {

    @GetMapping(value = "/internal")
    public void invokeInternal(@ModelAttribute Pet pet) {
        System.out.println(pet);
    }

    @PostMapping(value = "/owners/{ownerId}/pets/{petId}/edit")
    public RedirectView editPet(@ModelAttribute Pet pet, RedirectAttributes attributes) {
        System.out.println(pet);
        pet.setHiddenField("XXX");
        attributes.addFlashAttribute("pet", pet);
        return new RedirectView("/internal");
    }
}
Let's make a POST request to the URI /owners/123/pets/456/edit and see the results:
A new Pet instance was created!
Pet[456,123,null]
Pet[456,123,XXX]
A new Pet instance was created!
Spring created a ModelAndViewContainer and didn't find anything to fill the instance with (it's a request from a client; there weren't any redirects). Since the model is empty, Spring had to create a new Pet object by invoking the default constructor which printed the line.
Pet[456,123,null]
Once present in the model, the argument's fields should be populated from all request parameters that have matching names.
We printed the given Pet to make sure all the fields petId and ownerId had been bound correctly.
Pet[456,123,XXX]
We set hiddenField to check our theory and redirected to the method invokeInternal, which also expects a @ModelAttribute. As we can see, the second method received the instance (with its own hidden value) which was created for the first method.
To answer the question, I found a few snippets of code with the help of @andrew's answer, which show that a ModelMap instance [a model object] is created well before our controller/handler is called for a specific URL:
public class ModelAndViewContainer {

    private boolean ignoreDefaultModelOnRedirect = false;

    @Nullable
    private Object view;

    private final ModelMap defaultModel = new BindingAwareModelMap();

    ...
}
If we look at the snippet above (taken from the spring-webmvc-5.0.8 jar), the BindingAwareModelMap model object is created well before.
For a better understanding, here are the class-level comments of BindingAwareModelMap:
/**
 * Subclass of {@link org.springframework.ui.ExtendedModelMap} that automatically removes
 * a {@link org.springframework.validation.BindingResult} object if the corresponding
 * target attribute gets replaced through regular {@link Map} operations.
 *
 * <p>This is the class exposed to handler methods by Spring MVC, typically consumed through
 * a declaration of the {@link org.springframework.ui.Model} interface. There is no need to
 * build it within user code; a plain {@link org.springframework.ui.ModelMap} or even just
 * a regular {@link Map} with String keys will be good enough to return a user model.
 */
@SuppressWarnings("serial")
public class BindingAwareModelMap extends ExtendedModelMap {
    ...
}

How to generate an example POJO from Swagger ApiModelProperty annotations?

We are creating a REST API which is documented using Swagger's @ApiModelProperty annotations. I am writing end-to-end tests for the API, and I need to generate the JSON body for some of the requests. Assume I need to post the following JSON to an endpoint:
{ "name": "dan", "age": "33" }
So far I have created a separate class containing all the necessary properties, which can be serialized to JSON using Jackson:
@JsonIgnoreProperties(ignoreUnknown = true)
public class MyPostRequest {

    private String name;
    private String age;

    // getters and fluid setters omitted...

    public static MyPostRequest getExample() {
        return new MyPostRequest().setName("dan").setAge("33");
    }
}
However, we noticed that we already have a very similar class in the codebase which defines the model that the API accepts. In this model class, the example values for each property are already defined in @ApiModelProperty:
@ApiModel(value = "MyAPIModel")
public class MyAPIModel extends AbstractModel {

    @ApiModelProperty(required = true, example = "dan")
    private String name;

    @ApiModelProperty(required = true, example = "33")
    private String age;
}
Is there a simple way to generate an instance of MyAPIModel filled with the example values for each property? Note: I need to be able to modify single properties in my end-to-end test before converting to JSON in order to test different edge cases. Therefore it is not sufficient to generate the example JSON directly.
Essentially, can I write a static method getExample() on MyAPIModel (or even better on the base class AbstractModel) which returns an example instance of MyAPIModel as specified in the Swagger annotations?
This does not seem to be possible as of the time of this answer. The closest possibilities I found are:
io.swagger.converter.ModelConverters: The read() method creates Model objects, but the example member of those models is null. The examples are present in String form in the properties member (taken directly from the ApiModelProperty annotations).
io.swagger.codegen.examples.ExampleGenerator: The resolveModelToExample() method takes the output of ModelConverters.read() and generates a Map representing the object with its properties (while also parsing non-string properties such as nested models). This method is used for serializing to JSON. Unfortunately, resolveModelToExample() is private. If it were publicly accessible, code to generate a model default for an annotated Swagger API model class might look like this:
protected <T extends AbstractModel> T getModelExample(Class<T> clazz) {
    // Get the swagger model instance including the properties list with examples
    Map<String, Model> models = ModelConverters.getInstance().read(clazz);
    // Parse non-string example values into proper objects, and compile a map of
    // properties representing an example object
    ExampleGenerator eg = new ExampleGenerator(models);
    Object resolved = eg.resolveModelToExample(clazz.getSimpleName(), null, new HashSet<String>());
    if (!(resolved instanceof Map<?, ?>)) {
        // Model is not an instance of io.swagger.models.ModelImpl, so no example can be resolved
        return null;
    }
    T result = clazz.newInstance();
    BeanUtils.populate(result, (Map<?, ?>) resolved);
    return result;
}
Since in our case all we need are String, boolean and int properties, there is at least the possibility to parse the annotations ourselves in a crazy hackish manner:
protected <T extends MyModelBaseClass> T getModelExample(Class<T> clazz) {
    try {
        T result = clazz.newInstance();
        for (Field field : clazz.getDeclaredFields()) {
            if (field.isAnnotationPresent(ApiModelProperty.class)) {
                String exampleValue = field.getAnnotation(ApiModelProperty.class).example();
                if (exampleValue != null) {
                    boolean accessible = field.isAccessible();
                    field.setAccessible(true);
                    setField(result, field, exampleValue);
                    field.setAccessible(accessible);
                }
            }
        }
        return result;
    } catch (InstantiationException | IllegalAccessException e) {
        throw new IllegalArgumentException("Could not create model example", e);
    }
}

private <T extends MyModelBaseClass> void setField(T model, Field field, String value) throws IllegalArgumentException, IllegalAccessException {
    Class<?> type = field.getType();
    LOGGER.info(type.toString());
    if (String.class.equals(type)) {
        field.set(model, value);
    } else if (Boolean.TYPE.equals(type) || Boolean.class.equals(type)) {
        field.set(model, Boolean.parseBoolean(value));
    } else if (Integer.TYPE.equals(type) || Integer.class.equals(type)) {
        field.set(model, Integer.parseInt(value));
    }
}
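Assuming setters exist on the model, an end-to-end test could then use the helper above to tweak a single property before serializing with Jackson:
// Build the example, mutate one property for the edge case under test,
// and serialize it as the request body.
MyAPIModel example = getModelExample(MyAPIModel.class);
example.setAge("-1");
String requestBody = new ObjectMapper().writeValueAsString(example);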
I might open an issue / PR on GitHub later to propose adding this functionality to Swagger. I am very surprised that nobody else seems to have requested this feature, given that our use case -- sending exemplary model instances to the API in tests -- should be common.

Postgres NoSQL and hibernate [duplicate]

I have a table with a column of type JSON in my PostgreSQL DB (9.2). I am having a hard time mapping this column to a JPA2 entity field type.
I tried using String, but when I save the entity I get an exception that it can't convert character varying to JSON.
What is the correct value type to use when dealing with a JSON column?
@Entity
public class MyEntity {

    private String jsonPayload; // this maps to a json column

    public MyEntity() {
    }
}
A simple workaround would be to define a text column.
If you're interested, here are a few code snippets to get the Hibernate custom user type in place. First extend the PostgreSQL dialect to tell it about the json type, thanks to Craig Ringer for the JAVA_OBJECT pointer:
import org.hibernate.dialect.PostgreSQL9Dialect;

import java.sql.Types;

/**
 * Wrap default PostgreSQL9Dialect with 'json' type.
 *
 * @author timfulmer
 */
public class JsonPostgreSQLDialect extends PostgreSQL9Dialect {

    public JsonPostgreSQLDialect() {
        super();
        this.registerColumnType(Types.JAVA_OBJECT, "json");
    }
}
Next implement org.hibernate.usertype.UserType. The implementation below maps String values to the json database type, and vice-versa. Remember Strings are immutable in Java. A more complex implementation could be used to map custom Java beans to JSON stored in the database as well.
package foo;

import org.hibernate.HibernateException;
import org.hibernate.engine.spi.SessionImplementor;
import org.hibernate.usertype.UserType;

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

/**
 * @author timfulmer
 */
public class StringJsonUserType implements UserType {

    /**
     * Return the SQL type codes for the columns mapped by this type. The
     * codes are defined on <tt>java.sql.Types</tt>.
     *
     * @return int[] the typecodes
     * @see java.sql.Types
     */
    @Override
    public int[] sqlTypes() {
        return new int[] { Types.JAVA_OBJECT };
    }
    /**
     * The class returned by <tt>nullSafeGet()</tt>.
     *
     * @return Class
     */
    @Override
    public Class returnedClass() {
        return String.class;
    }

    /**
     * Compare two instances of the class mapped by this type for persistence "equality".
     * Equality of the persistent state.
     *
     * @param x
     * @param y
     * @return boolean
     */
    @Override
    public boolean equals(Object x, Object y) throws HibernateException {
        if (x == null) {
            return y == null;
        }
        return x.equals(y);
    }

    /**
     * Get a hashcode for the instance, consistent with persistence "equality"
     */
    @Override
    public int hashCode(Object x) throws HibernateException {
        return x.hashCode();
    }
    /**
     * Retrieve an instance of the mapped class from a JDBC resultset. Implementors
     * should handle the possibility of null values.
     *
     * @param rs      a JDBC result set
     * @param names   the column names
     * @param session
     * @param owner   the containing entity
     * @return Object
     * @throws org.hibernate.HibernateException
     * @throws java.sql.SQLException
     */
    @Override
    public Object nullSafeGet(ResultSet rs, String[] names, SessionImplementor session, Object owner) throws HibernateException, SQLException {
        if (rs.getString(names[0]) == null) {
            return null;
        }
        return rs.getString(names[0]);
    }

    /**
     * Write an instance of the mapped class to a prepared statement. Implementors
     * should handle the possibility of null values. A multi-column type should be written
     * to parameters starting from <tt>index</tt>.
     *
     * @param st      a JDBC prepared statement
     * @param value   the object to write
     * @param index   statement parameter index
     * @param session
     * @throws org.hibernate.HibernateException
     * @throws java.sql.SQLException
     */
    @Override
    public void nullSafeSet(PreparedStatement st, Object value, int index, SessionImplementor session) throws HibernateException, SQLException {
        if (value == null) {
            st.setNull(index, Types.OTHER);
            return;
        }
        st.setObject(index, value, Types.OTHER);
    }
    /**
     * Return a deep copy of the persistent state, stopping at entities and at
     * collections. It is not necessary to copy immutable objects, or null
     * values, in which case it is safe to simply return the argument.
     *
     * @param value the object to be cloned, which may be null
     * @return Object a copy
     */
    @Override
    public Object deepCopy(Object value) throws HibernateException {
        return value;
    }

    /**
     * Are objects of this type mutable?
     *
     * @return boolean
     */
    @Override
    public boolean isMutable() {
        return true;
    }

    /**
     * Transform the object into its cacheable representation. At the very least this
     * method should perform a deep copy if the type is mutable. That may not be enough
     * for some implementations, however; for example, associations must be cached as
     * identifier values. (optional operation)
     *
     * @param value the object to be cached
     * @return a cacheable representation of the object
     * @throws org.hibernate.HibernateException
     */
    @Override
    public Serializable disassemble(Object value) throws HibernateException {
        return (String) this.deepCopy(value);
    }

    /**
     * Reconstruct an object from the cacheable representation. At the very least this
     * method should perform a deep copy if the type is mutable. (optional operation)
     *
     * @param cached the object to be cached
     * @param owner  the owner of the cached object
     * @return a reconstructed object from the cacheable representation
     * @throws org.hibernate.HibernateException
     */
    @Override
    public Object assemble(Serializable cached, Object owner) throws HibernateException {
        return this.deepCopy(cached);
    }

    /**
     * During merge, replace the existing (target) value in the entity we are merging to
     * with a new (original) value from the detached entity we are merging. For immutable
     * objects, or null values, it is safe to simply return the first parameter. For
     * mutable objects, it is safe to return a copy of the first parameter. For objects
     * with component values, it might make sense to recursively replace component values.
     *
     * @param original the value from the detached entity being merged
     * @param target   the value in the managed entity
     * @return the value to be merged
     */
    @Override
    public Object replace(Object original, Object target, Object owner) throws HibernateException {
        return original;
    }
}
Now all that's left is annotating the entities. Put something like this at the entity's class declaration:
@TypeDefs({@TypeDef(name = "StringJsonObject", typeClass = StringJsonUserType.class)})
Then annotate the property:
@Type(type = "StringJsonObject")
public String getBar() {
    return bar;
}
Hibernate will take care of creating the column with json type for you, and handle the mapping back and forth. Inject additional libraries into the user type implementation for more advanced mapping.
Here's a quick sample GitHub project if anyone wants to play around with it:
https://github.com/timfulmer/hibernate-postgres-jsontype
See PgJDBC bug #265.
PostgreSQL is excessively, annoyingly strict about data type conversions. It won't implicitly cast text even to text-like values such as xml and json.
The strictly correct way to solve this problem is to write a custom Hibernate mapping type that uses the JDBC setObject method. This can be a fair bit of hassle, so you might just want to make PostgreSQL less strict by creating a weaker cast.
As noted by @markdsievers in the comments and this blog post, the original solution in this answer bypasses JSON validation. So it's not really what you want. It's safer to write:
CREATE OR REPLACE FUNCTION json_intext(text) RETURNS json AS $$
SELECT json_in($1::cstring);
$$ LANGUAGE SQL IMMUTABLE;
CREATE CAST (text AS json) WITH FUNCTION json_intext(text) AS IMPLICIT;
AS IMPLICIT tells PostgreSQL it can convert without being explicitly told to, allowing things like this to work:
regress=# CREATE TABLE jsontext(x json);
CREATE TABLE
regress=# PREPARE test(text) AS INSERT INTO jsontext(x) VALUES ($1);
PREPARE
regress=# EXECUTE test('{}')
INSERT 0 1
Thanks to @markdsievers for pointing out the issue.
Maven dependency
The first thing you need to do is to set up the following Hibernate Types Maven dependency in your project pom.xml configuration file:
<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>${hibernate-types.version}</version>
</dependency>
Domain model
Now, you need to declare the JsonType on either class level or in a package-info.java package-level descriptor, like this:
#TypeDef(name = "json", typeClass = JsonType.class)
And, the entity mapping will look like this:
#Type(type = "json")
#Column(columnDefinition = "jsonb")
private Location location;
If you're using Hibernate 5 or later, then the JSON type is registered automatically by the PostgreSQL92Dialect.
Otherwise, you need to register it yourself:
public class PostgreSQLDialect extends PostgreSQL91Dialect {

    public PostgreSQLDialect() {
        super();
        this.registerColumnType(Types.JAVA_OBJECT, "jsonb");
    }
}
In case someone is interested, you can use the JPA 2.1 @Convert / @Converter functionality with Hibernate. You would have to use the pgjdbc-ng JDBC driver, though. This way you don't have to use any proprietary extensions, dialects or custom types per field.
@javax.persistence.Converter
public static class MyCustomConverter implements AttributeConverter<MuCustomClass, String> {

    @Override
    @NotNull
    public String convertToDatabaseColumn(@NotNull MuCustomClass myCustomObject) {
        ...
    }

    @Override
    @NotNull
    public MuCustomClass convertToEntityAttribute(@NotNull String databaseDataAsJSONString) {
        ...
    }
}

...

@Convert(converter = MyCustomConverter.class)
private MyCustomClass attribute;
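For reference, a minimal Jackson-based implementation of such a converter might look like this (a sketch; MuCustomClass stands in for your own type, and error handling is kept deliberately simple):
import java.io.IOException;
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;
import com.fasterxml.jackson.databind.ObjectMapper;

@Converter
public class MyCustomConverter implements AttributeConverter<MuCustomClass, String> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public String convertToDatabaseColumn(MuCustomClass attribute) {
        try {
            // Serialize the attribute to its JSON string representation
            return MAPPER.writeValueAsString(attribute);
        } catch (IOException e) {
            throw new IllegalArgumentException("Could not serialize to JSON", e);
        }
    }

    @Override
    public MuCustomClass convertToEntityAttribute(String dbData) {
        try {
            // Parse the JSON string back into the Java object
            return MAPPER.readValue(dbData, MuCustomClass.class);
        } catch (IOException e) {
            throw new IllegalArgumentException("Could not deserialize JSON", e);
        }
    }
}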
I tried many methods I found on the Internet; most of them don't work, and some are too complex. The one below works for me, and is much simpler if you don't have strict requirements for PostgreSQL type validation.
Set the PostgreSQL JDBC string type to unspecified in the connection string:
<connection-url>
    jdbc:postgresql://localhost/test?stringtype=unspecified
</connection-url>
I had a similar problem with Postgres (javax.persistence.PersistenceException: org.hibernate.MappingException: No Dialect mapping for JDBC type: 1111) when executing native queries (via EntityManager) that retrieved json fields in the projection although the Entity class has been annotated with TypeDefs.
The same query translated in HQL was executed without any problem.
To solve this I had to modify JsonPostgreSQLDialect this way:
public class JsonPostgreSQLDialect extends PostgreSQL9Dialect {

    public JsonPostgreSQLDialect() {
        super();
        this.registerColumnType(Types.JAVA_OBJECT, "json");
        this.registerHibernateType(Types.OTHER, "myCustomType.StringJsonUserType");
    }
}
Where myCustomType.StringJsonUserType is the name of the class implementing the json type (from Tim Fulmer's answer above).
There is an easier way to do this which doesn't involve creating a function, by using WITH INOUT:
CREATE TABLE jsontext(x json);

INSERT INTO jsontext VALUES ($${"a":1}$$::text);
ERROR:  column "x" is of type json but expression is of type text
LINE 1: INSERT INTO jsontext VALUES ($${"a":1}$$::text);

CREATE CAST (text AS json) WITH INOUT AS ASSIGNMENT;

INSERT INTO jsontext VALUES ($${"a":1}$$::text);
INSERT 0 1
I was running into this and didn't want to enable stuff via the connection string, or allow implicit conversions. At first I tried to use @Type, but because I'm using a custom converter to serialize/deserialize a Map to/from JSON, I couldn't apply a @Type annotation. It turns out I just needed to specify columnDefinition = "json" in my @Column annotation:
@Convert(converter = HashMapConverter.class)
@Column(name = "extra_fields", columnDefinition = "json")
private Map<String, String> extraFields;
None of the above solutions worked for me. Finally I made use of native queries to insert the data.
Step 1: Create an abstract class AbstractEntity which implements Persistable, annotated with @MappedSuperclass (part of javax.persistence).
Step 2: In this class, create your sequence generator, because you cannot generate a sequence with native queries: @Id @GeneratedValue @Column private Long seqid;
Don't forget: your entity class should extend your abstract class (this lets the sequence work on inserts; it may work for date columns as well -- I am not sure, check for yourself).
Step 3: In the repository interface, write the native query, casting the string parameter to json:
value = "INSERT INTO table(column1, json_column) VALUES (:value1, cast(:jsonString as json))", nativeQuery = true
Step 4: This converts your Java string to json when inserting into the database, and the sequence is incremented on each insertion as well.
I got a casting error when I tried converters, and I personally avoided hibernate-types-52 in my project.
Please upvote my answer if it works for you.
I ran into this issue when I migrated my project from MySQL 8.0.21 to Postgres 13. My project uses Spring Boot with the Hibernate Types dependency, version 2.7.1. In my case the solution was a small change referenced from the Hibernate Types documentation page, and after that it worked.
I encountered the column "roles" is of type json but expression is of type character varying exception with the following entity with Postgres:
@Entity
@TypeDefs(@TypeDef(name = "json", typeClass = JsonBinaryType.class))
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
@EqualsAndHashCode(of = "extId")
public class ManualTaskUser {

    @Id
    private String extId;

    @Type(type = "json")
    @Column(columnDefinition = "json")
    private Set<Role> roles;
}
It should be mentioned that Role is an enum and not a POJO.
In the generated SQL I could see that the Set was correctly serialized like this: ["SYSTEM","JOURNEY","ADMIN","OBJECTION","DEVOPS","ASSESSMENT"].
Changing the typeClass in the TypeDef annotation from JsonStringType to JsonBinaryType solved the problem! Thanks to Joseph Waweru for the hint!

How to convert custom annotations to UIMA CAS structures and serialize them to XMI

I am having a problem converting custom annotated documents to UIMA CASes and then serializing them to XMI in order to view the annotations through the UIMA annotation viewer GUI.
I am using uimaFIT to construct my components because it is easier to control, test and debug. The pipeline is constructed from 3 components:
CollectionReader component reading files with raw text.
Annotator component for converting annotations from the custom documents to UIMA annotations
CasConsumer component which serializes the CASes to XMI
My pipeline works and outputs XMI files at the end, but without the annotations. I do not understand very clearly how the CAS objects get passed between the components. The annotator logic consists of making RESTful calls to certain endpoints, and I am trying to convert the annotation models using the client SDK provided by the service. The conversion logic part of the Annotator component looks like this:
public class CustomDocumentToUimaCasConverter implements UimaCasConverter {

    private TypeSystemDescription tsd;
    private AnnotatedDocument startDocument;
    private ArrayFS annotationFeatureStructures;
    private int featureStructureArrayCapacity;

    public AnnotatedDocument getStartDocument() {
        return startDocument;
    }

    public CustomDocumentToUimaCasConverter(AnnotatedDocument startDocument) {
        try {
            this.tsd = TypeSystemDescriptionFactory.createTypeSystemDescription();
        } catch (ResourceInitializationException e) {
            LOG.error("Error when creating default type system", e);
        }
        this.startDocument = startDocument;
    }

    public TypeSystemDescription getTypeSystemDescription() {
        return this.tsd;
    }
    @Override
    public void convertAnnotations(CAS cas) {
        Map<String, List<Annotation>> entities = this.startDocument.entities;
        int featureStructureArrayIndex = 0;
        inferCasTypeSystem(entities.keySet());
        try {
            /*
             * This is a hack allowing the CAS object to have an updated type system.
             * We are creating a new CAS by passing the new TypeSystemDescription which actually
             * should have been updated by an internal call of typeSystemInit(cas.getTypeSystem())
             * originally part of the CasInitializer interface that is now deprecated and the CollectionReader
             * is calling it internally in its implementation. The problem consists in the fact that now
             * the typeSystemInit method of the CasInitializer_ImplBase has an empty implementation and
             * nothing changes!
             */
            LOG.info("Creating new CAS with updated typesystem...");
            cas = CasCreationUtils.createCas(tsd, null, null);
        } catch (ResourceInitializationException e) {
            LOG.info("Error creating new CAS!", e);
        }
        TypeSystem typeSystem = cas.getTypeSystem();
        this.featureStructureArrayCapacity = entities.size();
        this.annotationFeatureStructures = cas.createArrayFS(featureStructureArrayCapacity);
        for (Map.Entry<String, List<Annotation>> entityEntry : entities.entrySet()) {
            String annotationName = entityEntry.getKey();
            annotationName = UIMA_ANNOTATION_TYPES_PACKAGE + removeDashes(annotationName);
            Type type = typeSystem.getType(annotationName);
            List<Annotation> annotations = entityEntry.getValue();
            LOG.info("Get Type -> " + type);
            for (Annotation ann : annotations) {
                AnnotationFS afs = cas.createAnnotation(type, (int) ann.startOffset, (int) ann.endOffset);
                cas.addFsToIndexes(afs);
                if (featureStructureArrayIndex + 1 == featureStructureArrayCapacity) {
                    resizeArrayFS(featureStructureArrayCapacity * 2, annotationFeatureStructures, cas);
                }
                annotationFeatureStructures.set(featureStructureArrayIndex++, afs);
            }
        }
        cas.removeFsFromIndexes(annotationFeatureStructures);
        cas.addFsToIndexes(annotationFeatureStructures);
    }
    @Override
    public void inferCasTypeSystem(Iterable<String> originalTypes) {
        for (String typeName : originalTypes) {
            // UIMA annotations are not allowed to contain dashes
            typeName = removeDashes(typeName);
            tsd.addType(UIMA_ANNOTATION_TYPES_PACKAGE + typeName,
                    "Automatically generated type for " + typeName, "uima.tcas.Annotation");
            LOG.info("Inserted new type -> " + typeName);
        }
    }

    /**
     * Removes dashes from UIMA annotation names because they are not allowed to contain dashes.
     *
     * @param typeName the annotation name of the current annotation of the source document
     * @return the transformed annotation name suited for the UIMA typesystem
     */
    private String removeDashes(String typeName) {
        if (typeName.contains("-")) {
            typeName = typeName.replaceAll("-", "_");
        }
        return typeName;
    }

    @Override
    public void setSourceDocumentText(CAS cas) {
        cas.setSofaDataString(startDocument.text, "text/plain");
    }

    private void resizeArrayFS(int newCapacity, ArrayFS originalArray, CAS cas) {
        ArrayFS biggerArrayFS = cas.createArrayFS(newCapacity);
        biggerArrayFS.copyFromArray(originalArray.toArray(), 0, 0, originalArray.size());
        this.annotationFeatureStructures = biggerArrayFS;
        this.featureStructureArrayCapacity = annotationFeatureStructures.size();
    }
}
If someone has dealt with annotation conversions to UIMA types, I would appreciate some help.
I think your understanding of CASes and Annotations may be wrong:
From
* This is a hack allowing the CAS object to have an updated type system.
and
LOG.info("Creating new CAS with updated typesystem...");
cas = CasCreationUtils.createCas(tsd, null, null);
I gather that you are trying to create a new CAS in your Annotator's process() method (I assume that the code you posted is executed there). Unless you are implementing a CAS multiplier, this is not the way to do it. Typically, the CollectionReader ingests raw data and creates a CAS for you in its getNext() method. This CAS is passed down the whole UIMA pipeline, and all you need to do is add UIMA annotations to it.
For each annotation that you want to add, the type system should be known to UIMA. If you use JCasGen and the code it generates, this should not be a problem. Make sure that your types can be autodetected, as described here: http://uima.apache.org/d/uimafit-current/tools.uimafit.book.html#d5e531
This allows you to instantiate annotations as Java objects instead of using low-level FS calls. The following snippet adds an annotation over the whole document text. It should be trivial to add logic that iterates over the tokens in the text and their ingested (non-UIMA) annotations (using your web service).
@Override
public void process(JCas aJCas) throws AnalysisEngineProcessException {
    String text = aJCas.getDocumentText();
    SomeAnnotation a = new SomeAnnotation(aJCas);
    // set the annotation properties; for each property,
    // JCasGen should have generated a setter
    a.setSomePropertyValue(someValue);
    // set the span and add the annotation to the indexes
    a.setBegin(0);
    a.setEnd(text.length());
    a.addToIndexes(aJCas);
}
In order to avoid messing around with start and end String indexes, I suggest you use some Token annotation (from DKPro Core, for example: https://dkpro.github.io/dkpro-core/) that you can use as an anchor point for your custom annotations.
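For completeness, this is roughly how such a pipeline is wired up with uimaFIT; the reader and annotator class names are placeholders for your own components, and the XmiWriter shown is the DKPro Core one:
import org.apache.uima.analysis_engine.AnalysisEngineDescription;
import org.apache.uima.collection.CollectionReaderDescription;
import org.apache.uima.fit.factory.AnalysisEngineFactory;
import org.apache.uima.fit.factory.CollectionReaderFactory;
import org.apache.uima.fit.pipeline.SimplePipeline;
import de.tudarmstadt.ukp.dkpro.core.io.xmi.XmiWriter;

// Sketch: the reader creates one CAS per document; that very same CAS
// instance is then passed through the annotator and the consumer.
CollectionReaderDescription reader = CollectionReaderFactory.createReaderDescription(
        MyRawTextReader.class);                       // your component 1
AnalysisEngineDescription annotator = AnalysisEngineFactory.createEngineDescription(
        MyConvertingAnnotator.class);                 // your component 2
AnalysisEngineDescription xmiWriter = AnalysisEngineFactory.createEngineDescription(
        XmiWriter.class,
        XmiWriter.PARAM_TARGET_LOCATION, "output/");  // your component 3
SimplePipeline.runPipeline(reader, annotator, xmiWriter);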

Separate database model from Network model

I'm using greenDAO and Volley. So I have the following problem: when I make a network request, I need to parse the response with GSON, so I have one model to represent the entities retrieved from the server and another model to represent the greenDAO objects. Is there any way to have only one class per model that serves both as the GSON model and the ORM class?
class Product:
@SerializedName("id")
private String id;

@SerializedName("pictures")
private List<Picture> pictures;

// get & set
class PersistentProduct:
private Long id;
private List<PersistencePicture> pictures;

/** To-many relationship, resolved on first access (and after reset). Changes to to-many relations are not persisted; make changes to the target entity. */
public List<PersistencePicture> getPictures() {
    if (pictures == null) {
        if (daoSession == null) {
            throw new DaoException("Entity is detached from DAO context");
        }
        PersistencePictureDao targetDao = daoSession.getPersistencePictureDao();
        List<PersistencePicture> picturesNew = targetDao._queryPersistenceProduct_Pictures(id);
        synchronized (this) {
            if (pictures == null) {
                pictures = picturesNew;
            }
        }
    }
    return pictures;
}
First I thought of making an interface, but when you retrieve the data from a DAO, the DAO returns the class and not the interface, so I don't think it can be done this way. The only solution I found is to make a "ProductUtils" class that converts from a PersistentProduct to a Product and vice versa.
The most elegant way would be to implement a small extension for greendao, so that you can specify the serialized name during schema-creation.
For Example:
de.greenrobot.daogenerator.Property.java:
// in PropertyBuilder append these lines
public PropertyBuilder setSerializedName(String sname) {
    // Check the sname for correctness (i.e. not empty, not containing illegal characters)
    property.serializedName = sname;
    return this;
}

// in Property append these lines
private String serializedName = null;

public boolean isSerialized() {
    return serializedName != null;
}
In entity.ftl add this line after line 24 (after package ${entity.javaPackage};):
<#if property.serializedName??>
import com.google.gson.annotations.SerializedName;
</#if>
And after line 55 (after <#list entity.properties as property>):
<#if property.serializedName??>
@SerializedName("${property.serializedName}")
</#if>
Afterwards you should be able to use your generated greenDAO entity with Volley, with the following restrictions:
1. If you get a Product over the network, nothing is changed in the db yet. You have to call insertOrReplace().
2. If you get a Product from the db and send it via the network, some undesired fields might be serialized (i.e. myDao and daoSession).
3. If you get a Product via the network and call insertOrReplace(), the "network" Product will be persisted and an already existing Product will be replaced by it, BUT the referenced entities won't get updated or persisted unless insertOrReplace() is called for each of them!
4. If you get a Product via the network and call insertOrReplace() for every referenced entity, toMany-entities that were referenced by the db-Product are still referenced by the updated Product, although they are not listed in the updated Product. You have to call resetPictures() and getPictures() to get the correct list, which will contain all toMany-entities referenced by either the original Product stored in the DB or the updated Product from the network.
Update addressing 2.
To prevent daoSession and myDao from being serialized, you can use the following ExclusionStrategy:
private static class TransientExclusionStrategy implements ExclusionStrategy {

    public boolean shouldSkipClass(Class<?> clazz) {
        return (clazz.getModifiers() & java.lang.reflect.Modifier.TRANSIENT) != 0;
    }

    public boolean shouldSkipField(FieldAttributes f) {
        return f.hasModifier(java.lang.reflect.Modifier.TRANSIENT);
    }
}
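To apply it, register the strategy on the Gson instance you hand to Volley (standard GsonBuilder API; the greenDAO-generated daoSession and myDao fields are transient, so they get skipped):
Gson gson = new GsonBuilder()
        .setExclusionStrategies(new TransientExclusionStrategy())
        .create();
String json = gson.toJson(product); // myDao and daoSession no longer serialized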
Update addressing 1.,3. and 4.
As a fast solution you can add the following method in the KEEP-SECTIONS of your entity:
public void merge(DaoSession s) {
    s.insertOrReplace(this);
    // do this for all toMany-relations accordingly
    for (Picture p : getPictures()) {
        s.insertOrReplace(p);
    }
    resetPictures();
}
This will result in the original entity being updated and attached to the session and DAO. Also, every Picture that is referenced by the network Product will be persisted or updated. Pictures referenced by the original entity but not by the network entity remain untouched and get merged into the list.
This is far from perfect, but it shows where to go and what to do. The next step would be to do everything that is done in merge() inside one transaction, and then to integrate different merge methods into dao.ftl.
NOTE
The code given in this answer is neither complete nor tested and is meant as a hint on how to solve this. As pointed out above this solution still has some restrictions, that have to be dealt with.
