Is there a way to disable enhancement for Play Framework model classes?
The reason I'm asking is that I want to serialize the model objects to JSON, but as soon as the JSON serializer touches any uninitialized model, that model gets loaded, causing an extra database hit, and the resulting JSON is bloated with unnecessary model objects. I tried excluding models.* in application.conf, and I also tried a ServerConfigStartup with serverConfig.addClass(Model.class) or serverConfig.addPackage("models"), but neither worked for me.
Ebean requires that model classes be enhanced. You can't use a model class with Ebean if it's not enhanced.
So your options are: don't use Ebean, or don't serialise your model objects to JSON. The latter is considered best practice. Tying your REST API data objects to your database models is not a good idea, for exactly the reason you are experiencing now: the two models are usually conceptually different. The database model has references to other models, while the JSON model doesn't. So use different classes to represent the different models.
There is another option: use Jackson annotations like @JsonIgnore to ignore these properties. But really, that's a slippery slope. As your codebase evolves it becomes next to impossible to reason about what your JSON will look like as you use more of these annotations on your classes, and maintaining the models while ensuring you don't break your public REST API becomes a nightmare.
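For illustration, a minimal sketch of the separate-classes approach. The Store, Order and StoreJson names are made up, not from the question; in the real application Store would be an Ebean-enhanced entity:
import java.util.List;

// Database-facing model (Ebean-enhanced in the real app): holds references to other models.
public class Store {
    public String name;
    public List<Order> orders;   // lazy-loaded relation in the real entity
}

class Order {
    public String id;
}

// Separate JSON view: only what the REST API should expose, no lazy relations.
class StoreJson {
    public String name;

    static StoreJson from(Store store) {
        StoreJson json = new StoreJson();
        json.name = store.name;
        return json;
    }
}
The controller then builds StoreJson instances and serializes those, so the serializer never touches the lazy orders collection.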
I found a way around loading the models that are not needed during the JSON serialization.
First I had to modify the way Jackson serializes the model objects by using this:
import com.fasterxml.jackson.annotation.JsonAutoDetect.Visibility;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper objectMapper = new ObjectMapper();
objectMapper.setVisibility(PropertyAccessor.ALL, Visibility.NONE);
objectMapper.setVisibility(PropertyAccessor.FIELD, Visibility.PUBLIC_ONLY);
This forces Jackson to read the fields directly, but your model fields have to be public. It is still not enough, though, because uninitialized model sets will still be loaded during serialization.
To get around this, I null out the sets before serialization using reflection, as follows:
// Null out any set that Ebean has not loaded yet, so Jackson won't trigger lazy loading.
// field.get()/field.set() throw IllegalAccessException, so the enclosing method must handle it.
for (Field field : model.getClass().getFields()) {
    Object fieldObject = field.get(model);
    if (fieldObject instanceof BeanSet) {
        BeanSet<?> beanSet = (BeanSet<?>) fieldObject;
        // isReference() checks whether the set has been loaded or not
        if (beanSet.isReference()) {
            field.set(model, null);
        }
    }
}
Now your model is safe to pass to the Jackson serializer without uninitialized model objects being loaded during serialization.
String jsonString = objectMapper.writeValueAsString(model);
JsonNode jsonNode = Json.parse(jsonString);
Using @JsonIgnore would also work, but as James mentioned it is not reliable in the long run, because you may later need the model objects that you are ignoring.
Related
I've just started to learn about serialization/deserialization and I'm a bit confused about which type is used where and when... let me explain:
I have an object containing many fields some of which are full of "useless" info.
Now, when I log the contents of such object for debugging purposes I would like the output to be in a nice json format.
So in the toString() method of this object I do the following:
import com.fasterxml.jackson.databind.ObjectMapper;
...
...
@Override
public String toString() {
    ObjectMapper objectMapper = new ObjectMapper();
    String s = "";
    try {
        s = objectMapper.writeValueAsString(this);
    } catch (Exception e) {
        // ignored: this is only used for logging
    }
    return s;
}
but this also logs all the useless fields.
So I've looked around and found the @JsonIgnore annotation from com.fasterxml.jackson.annotation.JsonIgnore, which I can put on top of the useless fields so that they are not logged.
But from what I've understood, serialization is the process of transforming a Java object into a byte stream so that it can be written to a file, saved in a session, or sent across the internet. So my noob question is: is it possible that using the @JsonIgnore annotation on certain fields will result in those fields not being saved into a session (I use a Hazelcast map), not being sent in the HTTP responses I send, or not being written to a file if I ever decide to do that?
If the answer to the previous question is NO, is that because those kinds of actions (saving in a session, writing to a file, sending as an HTTP response) use different types of serialization than objectMapper.writeValueAsString(this), so they don't conflict?
In your case, you're using Jackson's ObjectMapper to convert your object to a string representation (in JSON format). The #JsonIgnore annotation is part of Jackson's annotations and will prevent fields annotated with it from being included in the JSON representation of your object.
However, this only affects the string representation created by the ObjectMapper, not other forms of serialization/deserialization. If you want to persist the object in a specific way, you may need to use a different form of serialization (such as binary serialization) or create a custom representation that excludes the fields you don't want to save.
So to answer your questions:
No, using @JsonIgnore will not affect the object saved in a session or sent as an HTTP response.
Yes, that's correct. Different forms of serialization/deserialization may handle fields differently, even if they are part of the same object.
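To make the scope concrete, here is a small hedged sketch (the Session class and its fields are made up): Jackson honours @JsonIgnore, while Java's built-in binary serialization, which a session store might use, does not.
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class JsonIgnoreScopeDemo {

    // Made-up class, only to illustrate the point
    static class Session implements Serializable {
        public String user = "alice";

        @JsonIgnore                     // hidden from Jackson only
        public String internalToken = "secret";
    }

    public static void main(String[] args) throws Exception {
        Session session = new Session();

        // Jackson: the @JsonIgnore field is omitted from the JSON string
        String json = new ObjectMapper().writeValueAsString(session);
        System.out.println(json); // {"user":"alice"}

        // Java binary serialization (what a session store or file dump might use):
        // @JsonIgnore has no effect here, internalToken is still written
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(session);
        }
        System.out.println("binary form: " + bytes.size() + " bytes, including internalToken");
    }
}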
I am receiving massive JSON objects from a service, and so far I've been creating POJOs to match the JSON that comes in.
However, this is getting far too tedious, as with every different service I hit I have to build 15-20 new model classes to represent that service.
In short, what I'm looking for is a way to get a value I need from a nested object in the JSON as below (sorry for the format):
random1 {
    random2 {
        arrayOfRandoms
    }
    random3 {
        random4 {
            random5 {
                someValueIWant
            }
        }
    }
}
So in this case I want random5's someValueIWant object. I want to get it without creating the models for random1/3/4/5 as I've been doing this whole time.
I should mention that I use Jackson's ObjectMapper to turn the JSON into Java objects.
Hope this makes sense.
You could experiment with this online pojo generator:
http://www.jsonschema2pojo.org/
It will generate Java classes from plain JSON (or a JSON schema) and even add Jackson annotations.
Make sure you check "Allow additional properties".
It requires valid JSON as input, so don't forget double quotes around field names and values.
If you find yourself doing that often, there are even scriptable versions and Maven plugins.
I have implemented a REST API with Spring MVC + Jackson + Hibernate.
All I needed to do was retrieve objects from the database and return them as a list; the conversion to JSON is implicit.
But there is one problem: what if I want to add some more information to those objects before the return/response? For example, I am returning a list of "store" objects, but I want to add the name of the person who is attending right now.
Java does not have dynamic types (which is how I solve this problem in C#). So how do we solve this problem in Java?
I thought about this and have come up with a few not-so-elegant solutions.
1. Use the factory pattern: define another class which contains the name of that person.
2. Convert the store objects to JSON objects (ObjectNode from Jackson), put a new attribute into the JSON objects, and return the JSON objects.
3. Use reflection to inject a new property into the store object and return the objects; maybe the Spring MVC conversion will generate the JSON correctly?
Option 1 looks bad; it will end up with a lot of boilerplate classes which aren't really useful. Option 2 looks OK, but is this the best we can do with Spring MVC?
option 1
Actually your JSON domain is different from your core domain. I would decouple them and create a separate domain for your JSON objects, as this is a separate concern and you don't want to mix it. This, however, might require a lot of 1-to-1 mapping. This is your option 1, with boilerplate. There are frameworks that help you with the boilerplate (such as Dozer or MapStruct), but you will always have a performance penalty with frameworks that use generic reflection.
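As a rough, hedged sketch of what the framework-assisted mapping could look like with MapStruct (the Store/StoreDto classes and their field are made up; MapStruct generates the mapper implementation at build time):
import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

// Hypothetical core entity and its JSON-domain counterpart
class Store {
    public String name;
}

class StoreDto {
    public String name;
}

// MapStruct generates an implementation of this interface during compilation
@Mapper
interface StoreMapper {
    StoreMapper INSTANCE = Mappers.getMapper(StoreMapper.class);

    StoreDto toDto(Store store);
}
Usage is then simply StoreDto dto = StoreMapper.INSTANCE.toDto(store);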
option 2, 3
If you really insist on hacking it in because it's only a few exceptions and not a common pattern, I would certainly not alter the JSON nodes or use reflection (your options 2 and 3). That is certainly not the way to do it in Java.
option 4 [hack]
What you could do is extend your core domain with new types that contain the extra information and in a post-processing step replace the old objects with the new domain objects:
UnaryOperator<Store> toJsonStores = domainStore -> toJsonStore(domainStore);
list.replaceAll(toJsonStores);
where JSONStore extends the domain Store and toJsonStore maps the domain Store to the JSONStore object by adding the person's name.
That way you preserve type safety and keep the codebase comprehensible. But if you have to do it in more than a few exceptional cases, you should change strategy.
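A minimal sketch of this post-processing step, with made-up Store/JSONStore fields and a hypothetical attendant lookup:
import java.util.List;
import java.util.function.UnaryOperator;

// Core domain class (hypothetical)
class Store {
    public String id;
}

// JSON-facing extension carrying the extra, presentation-only field
class JSONStore extends Store {
    public String attendantName;
}

class StoreEnricher {

    static JSONStore toJsonStore(Store store) {
        JSONStore jsonStore = new JSONStore();
        jsonStore.id = store.id;
        jsonStore.attendantName = lookUpAttendant(store); // hypothetical lookup
        return jsonStore;
    }

    static void enrich(List<Store> stores) {
        // Replace each Store with its JSONStore counterpart in place
        UnaryOperator<Store> toJsonStores = StoreEnricher::toJsonStore;
        stores.replaceAll(toJsonStores);
    }

    private static String lookUpAttendant(Store store) {
        return "Alice"; // placeholder for the real lookup
    }
}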
Are you looking for a REST service that returns a list of objects containing not just one type, but many types of objects? If so, have you tried making the return type of that service method List<Object>?
I recommend creating an abstract class BaseRestResponse that will be extended by all the items in the list that you want your REST service method to return.
Then make the return type List<BaseRestResponse>.
BaseRestResponse should have all the common properties, and the customized object can have the property name as you said.
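A small hedged sketch of that idea (the StoreRestResponse name, its fields and the service interface are made up):
import java.util.List;

// Common base type for everything the REST method can return
abstract class BaseRestResponse {
    public String id;
}

// A store item extending the base with the extra field mentioned in the question
class StoreRestResponse extends BaseRestResponse {
    public String attendantName;
}

// The service method then simply returns the common type
interface StoreService {
    List<BaseRestResponse> getStores();
}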
Let's say we have the following JSON:
{
    "id": "imgsId1",
    "type": "Fruits",
    "name": "Tropical",
    "image": {
        "url": "images/img1.jpg",
        "width": 300,
        "height": 300
    },
    "thumbnail": {
        "url": "images/thumbnails/img11.jpg",
        "width": 50,
        "height": 50
    }
}
And in the Java class, we have all fields matching the above JSON.
Each time, the list of fields to be deserialized depends on the customer who sends the information.
For example, for customer 1 we only want to read back the following values (and skip the other properties even if they are provided in the JSON):
String[] propertiesToFilter1 = {"type","image.url"};
For example, for customer 2 we want to read back the following values (and skip the other properties even if they are provided in the JSON):
String[] propertiesToFilter2 = {"type","image.url", "image.width"};
When deserializing JSON using Jackson, is it possible to provide the above array, which specifies which fields need to be deserialized?
ImageInfo obj1 = (ImageInfo)objectMapper.readValue(jsonStr, ImageInfo.class);
Update:
While researching on the net, I saw that one of the options could be using:
FilterProvider filterProvider = new SimpleFilterProvider().addFilter("filterName1",
SimpleBeanPropertyFilter.serializeAllExcept(propertiesToFilter1));
objectMapper.setFilters(filterProvider);
But I think this is only good if we want to keep reusing the same "filterName1" for multiple customers.
In this scenario it's a little bit different, because we customize the list of fields each customer can update. So each customer has a different list of JSON fields they can update in different classes.
If we start defining different filter names for each customer, it will be a long list, and the lookup will have a performance impact.
So I was looking for a solution where I can check the list of fields allowed to be processed at runtime, when constructing the object back with the objectMapper.readValue() method.
Update 2 (Apr 25 2016):
Going through other Jackson questions, saw a similar question here,
Jackson Dynamic filtering of properties during deserialization
Using the approach listed below of creating a custom "static ObjectMapper", the issue is that we run the Reflection API multiple times.
The first time, the Jackson parser populates all fields using the Reflection API when deserializing the JSON to a Java object.
The second time, since we can't take all the fields that were populated by the Jackson parser when populating the data into another object, we again need to run through the Reflection API to populate that other object.
This could result in a lot of overhead.
Using the approach defined in the link provided above, I think using "BeanDeserializerModifier" seems to be the best approach. Now the question is: since we are also using a factory-based approach to initialize the ObjectMapper, we don't want to hard-code all the arrays for the different customers.
I wanted to check if it's possible to provide the String[] array with the list of properties to be considered at runtime to the "BeanDeserializerModifier"?
Something similar to:
String[] propertiesToFilter2 = {"type","image.url", "image.width"};
BeanDeserializerModifier curBeanDeserializerModifier = getBeanDeserializerModifierInstance();
curBeanDeserializerModifier.setPropertiesToConsider(propertiesToFilter2);
Thanks
Use @JsonIgnoreProperties with its configuration parameters:
http://www.programcreek.com/java-api-examples/index.php?api=com.fasterxml.jackson.annotation.JsonIgnoreProperties
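For reference, a small hedged sketch of the annotation (the class layout and ignored property names are made up, and this is static configuration; varying it per customer at runtime would still need something like mix-ins or filters):
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

// Skip "name" and "thumbnail" during (de)serialization and tolerate unknown properties
@JsonIgnoreProperties(value = {"name", "thumbnail"}, ignoreUnknown = true)
class ImageInfo {
    public String id;
    public String type;
    public Image image;
}

class Image {
    public String url;
    public int width;
    public int height;
}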
I am not sure if there is a possibility to configure the deserialization dynamically with annotations.
I would suggest creating a class with a static ObjectMapper. In this class you can create different implementations of the deserialization. The business logic of your application should then decide which implementation is used for which customer. Inside your different implementations you can configure the ObjectMapper just as you would with annotations.
A second solution could be to deserialize the full JSON for every customer and let the business logic decide which fields/objects of the POJO are used. This also needs an implementation in your application.
The benefit of implementing the configuration in the business logic is that you will have cleaner code and one place where the configuration is done for every customer.
static ObjectMapper information
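As a hedged illustration of the first suggestion, one way to pre-configure different mappers per customer is with Jackson mix-ins. The holder class and customer names below are made up; ImageInfo refers to the target class from the question:
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;

// Holder of pre-configured mappers, one per customer profile
final class CustomerMappers {

    // Mix-in applied to ImageInfo for customer 1: ignore the fields they may not set
    @JsonIgnoreProperties(value = {"name", "thumbnail"}, ignoreUnknown = true)
    private interface Customer1ImageInfoMixIn {}

    static final ObjectMapper CUSTOMER_1 = new ObjectMapper()
            .addMixIn(ImageInfo.class, Customer1ImageInfoMixIn.class);

    static final ObjectMapper DEFAULT = new ObjectMapper();

    private CustomerMappers() {}
}
The business logic then picks the mapper for the current customer, e.g. CustomerMappers.CUSTOMER_1.readValue(jsonStr, ImageInfo.class).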
I am currently working on a Vert.x application and wanted to use the provided Mongo API for data storage. I currently have a rather clunky abstraction on top of the stock JsonObject classes, where all get and set methods are replaced with things like:
this.backingObject.get(KEY_FOR_THIS_PROPERTY);
This is all well and good for now, but it won't scale particularly well. It also seems dirty, specifically when using nested arrays or objects. For example, if I want to fill fields only when the actual data is known, I have to check if the array exists, and if it doesn't, create it and store it in the object. Only then can I add an element to the list. For example:
if (this.backingObject.getJsonArray(KEY_LIST) == null) {
    this.backingObject.put(KEY_LIST, new JsonArray());
}
this.backingObject.getJsonArray(KEY_LIST).add(p.getBackingObject());
I have thought about potential solutions, but I don't particularly like any of them. Namely, I could use Gson or some similar library with annotation support to handle loading the object for the purposes of manipulating the data in my code, and then use the serialize and deserialize functions of both Gson and Vert.x to convert between the formats (Vert.x to load the data -> JSON string -> Gson to parse the JSON into POJOs -> make changes -> serialize to a JSON string -> parse with Vert.x and save), but that's a really gross and inefficient workflow. I could also probably come up with some sort of abstract wrapper that extends/implements the Vert.x JSON library but passes all the functionality through to Gson, but that also seems like a lot of work.
Is there any good way to achieve more friendly and maintainable serialization with Vert.x?
I just submitted a patch to Vert.x that defines two new convenience functions for converting between JsonObject and Java object instances without the inefficiency of going through an intermediate JSON string representation. This will be in version 3.4.
// Create a JsonObject from the fields of a Java object.
// Faster than calling `new JsonObject(Json.encode(obj))`.
public static JsonObject mapFrom(Object obj)
// Instantiate a Java object from a JsonObject.
// Faster than calling `Json.decodeValue(Json.encode(jsonObject), type)`.
public <T> T mapTo(Class<T> type)
Internally this uses ObjectMapper#convertValue(...), see Tim Putnam's answer for caveats of this approach. The code is here.
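A quick usage sketch of those two helpers (the MyPojo class is made up; this assumes Vert.x 3.4+, where the methods exist):
import io.vertx.core.json.JsonObject;

public class MapFromToDemo {

    // Hypothetical data class; it needs a default constructor and public fields
    // (or getters/setters) for the Jackson-based mapping to work.
    public static class MyPojo {
        public String name;
        public int count;
    }

    public static void main(String[] args) {
        MyPojo pojo = new MyPojo();
        pojo.name = "example";
        pojo.count = 3;

        // POJO -> JsonObject, without an intermediate JSON string
        JsonObject json = JsonObject.mapFrom(pojo);

        // JsonObject -> POJO
        MyPojo back = json.mapTo(MyPojo.class);

        System.out.println(json.encode() + " / " + back.name);
    }
}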
I believe Jackson's ObjectMapper.convertValue(..) functions don't convert via String, and Vert.x is using Jackson for managing JsonObject anyway.
JsonObject just has an underlying map representing the values, accessible via JsonObject.getMap(), and a Jackson serializer/deserializer on the public ObjectMapper instance in io.vertx.core.json.Json.
To switch between JsonObject and a data model expressed in Pojos serializable with Jackson, you can do:
JsonObject myVertxMsg = ...
MyPojo pojo = Json.mapper.convertValue ( myVertxMsg.getMap(), MyPojo.class );
I would guess this is more efficient than going via a String (but it's just a guess), and I hate the idea of altering the data class just to suit the environment, so it depends on the context: form vs performance.
To convert from Pojo to JsonObject, convert to a map with Jackson and then use the constructor on JsonObject:
JsonObject myobj = new JsonObject ( Json.mapper.convertValue ( pojo, Map.class ));
If you have implied nested JsonObject or JsonArray objects in your definition, they will be instantiated as Maps and Lists by default. JsonObject will internally re-wrap these when you access fields of those types (e.g. with getJsonArray(..)).
Because JsonObject is freeform and you're converting to a static type, you may get some unwanted UnrecognizedPropertyExceptions to deal with. It may be useful to create your own ObjectMapper, add the Vert.x JsonObjectSerializer and JsonArraySerializer, and then make configuration changes to suit (such as disabling DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES in Jackson).
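A minimal hedged sketch of that last point, configuring a dedicated mapper that tolerates unknown properties (registering the Vert.x serializers is omitted here; MyPojo and myVertxMsg are placeholders):
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

// Dedicated, lenient mapper: unknown fields in the free-form JsonObject are ignored
ObjectMapper lenientMapper = new ObjectMapper()
        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

MyPojo pojo = lenientMapper.convertValue(myVertxMsg.getMap(), MyPojo.class);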
Not sure if I've understood you correctly, but it sounds like you're trying to find a simple way of converting POJOs to JsonObject?
So, we have lots of POJOs that we send over the EventBus as JsonObjects.
I've found the easiest way is to use the Vert.x Json class, which has loads of helper methods to convert to/from JSON strings:
JsonObject jsonObject = new JsonObject(Json.encode(myPojo));
Sometimes you need to add some custom (de)serializers, but we always stick with Jackson - that is what Vert.x uses, so they work out of the box.
What we actually do, is provide an interface like the following:
public interface JsonObjectSerializable {
    JsonObject toJson();
}
And all our pojos that need to be sent over the EventBus have to implement this interface.
Then our EventBus sending code looks something like (simplified):
public <T extends JsonObjectSerializable> Response<T> dispatch(T eventPayload);
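A minimal sketch of what one of these POJOs might look like (the OrderCreated name and its fields are invented for illustration):
import io.vertx.core.json.Json;
import io.vertx.core.json.JsonObject;

// Hypothetical event payload implementing the interface above
public class OrderCreated implements JsonObjectSerializable {

    public String orderId;
    public double total;

    @Override
    public JsonObject toJson() {
        // Delegate to Jackson via the Vert.x Json helper
        return new JsonObject(Json.encode(this));
    }
}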
Also, as we generally don't unit test Pojos, adding this interface encourages the developers to unit test their conversion.
Hope this helps,
Will
Try this:
io.vertx.core.json.Json.mapper.convertValue(json.getMap(), cls)
I think that using Gson as you described is the best possible solution at the current time.
While I agree that if a protocol layer were included in Vert.x it would indeed be first prize, using Gson keeps your server internals pretty organised and is unlikely to be the performance bottleneck.
When, and only when, this strategy becomes the performance bottleneck have you reached the point where you need to engineer a better solution. Anything before that is premature optimisation.
My two cents.
You can try:
JsonObject.mapFrom(object)