spring data mongodb calling save twice leads to duplicate key exception - java

I am trying to save an entity with a Spring Data MongoDB repository. I have an EventListener that cascades saves.
The problem is that I need to save the entity first to get its internal id, then perform further state mutations and save it again afterwards.
@Test
void testUpdate() {
FooDto fooDto = getResource("/json/foo.json", new TypeReference<FooDto>() {
});
Foo foo = fooMapper.fromDTO(fooDto);
foo = fooService.save(foo);
log.info("Saved foo: " + foo);
foo.setState(FooState.Bar);
foo = fooService.save(foo);
log.info("Updated foo: " + foo);
}
I have an index on a child collection of foo. The save will not update the children but tries to insert them again, which leads to org.springframework.dao.DuplicateKeyException.
Why does it not update the existing children but instead try to insert them again?
Related:
Spring Data MongoRepository save causing Duplicate Key error
Edit: versions:
MongoDB 4
Spring Boot 2.3.3.RELEASE
Edit more details:
Repository:
public interface FooRepository extends MongoRepository<Foo, String>
Entity:
@Document
public class Foo {
@Id
private String id;
private FooState state;
@DBRef
@Cascade
private Collection<Bar> bars = new ArrayList<>();
...
}
CascadeMongoEventListener:
//from https://mflash.dev/blog/2019/07/08/persisting-documents-with-mongorepository/#unit-tests-for-the-accountrepository
public class CascadeMongoEventListener extends AbstractMongoEventListener<Object> {
@Autowired
private MongoOperations mongoOperations;
@Override
public void onBeforeConvert(final BeforeConvertEvent<Object> event) {
final Object source = event.getSource();
ReflectionUtils
.doWithFields(source.getClass(), new CascadeSaveCallback(source, mongoOperations));
}
private static class CascadeSaveCallback implements ReflectionUtils.FieldCallback {
private final Object source;
private final MongoOperations mongoOperations;
public CascadeSaveCallback(Object source, MongoOperations mongoOperations) {
this.source = source;
this.mongoOperations = mongoOperations;
}
@Override
public void doWith(final Field field)
throws IllegalArgumentException, IllegalAccessException {
ReflectionUtils.makeAccessible(field);
if (field.isAnnotationPresent(DBRef.class) && field.isAnnotationPresent(Cascade.class)) {
final Object fieldValue = field.get(source);
if (Objects.nonNull(fieldValue)) {
final var callback = new IdentifierCallback();
final CascadeType cascadeType = field.getAnnotation(Cascade.class).value();
if (cascadeType.equals(CascadeType.PERSIST) || cascadeType.equals(CascadeType.ALL)) {
if (fieldValue instanceof Collection<?>) {
((Collection<?>) fieldValue).forEach(mongoOperations::save);
} else {
ReflectionUtils.doWithFields(fieldValue.getClass(), callback);
mongoOperations.save(fieldValue);
}
}
}
}
}
}
private static class IdentifierCallback implements ReflectionUtils.FieldCallback {
private boolean idFound;
@Override
public void doWith(final Field field) throws IllegalArgumentException {
ReflectionUtils.makeAccessible(field);
if (field.isAnnotationPresent(Id.class)) {
idFound = true;
}
}
public boolean isIdFound() {
return idFound;
}
}
}
Edit: expected behaviour
From the docs in org.springframework.data.mongodb.core.MongoOperations#save(T):
Save the object to the collection for the entity type of the object to
save. This will perform an insert if the object is not already
present, that is an 'upsert'.
Edit - new insights:
It might be related to the index on the Bar child collection (DBRef and Cascade lead to mongoOperations::save being called from the EventListener).
I created another similar test with another entity and it worked.
The index on the child "Bar" entity (which is held as collection in parent "Foo" entity):
@CompoundIndex(unique = true, name = "fooId_name", def = "{'fooId': 1, 'name': 1}")
Update: I think I found the problem. Since I am using custom serialization/deserialization in my Converter (Document.parse()), the id field is not mapped properly. This results in the id being null, which leads to an insert instead of an update.
I will write an answer once I have resolved this properly.
public class MongoResultConversion {
@Component
@ReadingConverter
public static class ToResultConverter implements Converter<Document, Bar> {
private final ObjectMapper mapper;
@Autowired
public ToResultConverter(ObjectMapper mapper) {
this.mapper = mapper;
}
public Bar convert(Document source) {
String json = toJson(source);
try {
return mapper.readValue(json, new TypeReference<Bar>() {
});
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
protected String toJson(Document source) {
return source.toJson();
}
}
@Component
@WritingConverter
public static class ToDocumentConverter implements Converter<Bar, Document> {
private final ObjectMapper mapper;
@Autowired
public ToDocumentConverter(ObjectMapper mapper) {
this.mapper = mapper;
}
public Document convert(Bar source) {
String json = toJson(source);
return Document.parse(json);
}
protected String toJson(Bar source) {
try {
return mapper.writeValueAsString(source);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
}
}

As stated in my last edit, the problem was with the custom serialization/deserialization and Mongo document conversion. This resulted in the id being null, so an insert was done instead of an upsert.
The following code is my implementation of a custom converter that maps the ObjectId:
public class MongoBarConversion {
@Component
@ReadingConverter
public static class ToBarConverter implements Converter<Document, Bar> {
private final ObjectMapper mapper;
@Autowired
public ToBarConverter(ObjectMapper mapper) {
this.mapper = mapper;
}
public Bar convert(Document source) {
JsonNode json = toJson(source);
setObjectId(source, json);
return mapper.convertValue(json, new TypeReference<Bar>() {
});
}
protected void setObjectId(Document source, JsonNode jsonNode) {
ObjectNode modifiableObject = (ObjectNode) jsonNode;
String objectId = getObjectId(source);
modifiableObject.put(ID_FIELD, objectId);
}
protected String getObjectId(Document source) {
String objectIdLiteral = null;
ObjectId objectId = source.getObjectId("_id");
if (objectId != null) {
objectIdLiteral = objectId.toString();
}
return objectIdLiteral;
}
protected JsonNode toJson(Document source) {
JsonNode node = null;
try {
String json = source.toJson();
node = mapper.readValue(json, JsonNode.class);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
return node;
}
}
@Component
@WritingConverter
public static class ToDocumentConverter implements Converter<Bar, Document> {
private final ObjectMapper mapper;
@Autowired
public ToDocumentConverter(ObjectMapper mapper) {
this.mapper = mapper;
}
public Document convert(Bar source) {
try {
JsonNode jsonNode = toJson(source);
setObjectId(source, jsonNode);
String json = mapper.writeValueAsString(jsonNode);
return Document.parse(json);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
protected void setObjectId(Bar source, JsonNode jsonNode) throws JsonProcessingException {
ObjectNode modifiableObject = (ObjectNode) jsonNode;
JsonNode objectIdJson = getObjectId(source);
modifiableObject.set("_id", objectIdJson);
modifiableObject.remove(ID_FIELD);
}
protected JsonNode getObjectId(Bar source) throws JsonProcessingException {
ObjectNode _id = null;
String id = source.getId();
if (id != null) {
_id = JsonNodeFactory.instance.objectNode();
_id.put("$oid", id);
}
return _id;
}
protected JsonNode toJson(Bar source) {
return mapper.convertValue(source, JsonNode.class);
}
}
}
So to conclude: two subsequent saves should (and will) definitely lead to an upsert if the id is non null. The bug was in my code.

All MongoDB drivers include functionality to generate ids on the client side. If you only save to get the id, research how to use client-side id generation and remove the first save entirely.
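For example (a sketch, not the original code; the setId/setState accessors are assumed and the id is assumed to be a String holding a MongoDB ObjectId), the id could be assigned up front so only one save is needed:
import org.bson.types.ObjectId;

Foo foo = fooMapper.fromDTO(fooDto);
foo.setId(new ObjectId().toHexString()); // client-side id generation, no round trip needed
foo.setState(FooState.Bar);              // mutate before the single save
foo = fooService.save(foo);              // one save, no second insert attempt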

I believe you are facing this issue because you save a second time without fetching from the db. You are changing the object returned by the save, not the object saved in the db. Try retrieving the existing foo with a method like findById, then perform the next steps and save it.
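A minimal sketch of that suggestion (repository and accessor names assumed; findById returns an Optional in Spring Data 2.x):
Foo saved = fooService.save(foo);
Foo reloaded = fooRepository.findById(saved.getId())
        .orElseThrow(() -> new IllegalStateException("Foo not found: " + saved.getId()));
reloaded.setState(FooState.Bar);
fooService.save(reloaded);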

Related

Consume a Kafka message using a different model object than the one used at the producer

My application is a Kafka consumer which receives a big fat custom message from the producer.
We use Jackson to serialize and deserialize the messages.
A dummy of my consumer is here.
public class LittleCuteConsumer {
@KafkaListener(topics = "${kafka.bigfat.topic}", containerFactory = "littleCuteConsumerFactory")
public void receive(BigFatMessage message) {
// do cute stuff
}
}
And the message that's been transferred
@JsonIgnoreProperties(ignoreUnknown = true)
public class BigFatMessage {
private String fieldOne;
private String fieldTwo;
...
private String fieldTen;
private CustomeFieldOne cf1;
...
private CustomeFieldTen cf10;
// setters and getters
}
Here is the object I want to deserialize the original message to.
@JsonIgnoreProperties(ignoreUnknown = true)
public class ThinMessage {
private String fieldOne;
private String fieldTwo;
// setters and getters
}
Original deserializer
public class BigFatDeserializer implements Deserializer<BigFatMessage> {
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
// Default implementation of configure method
}
@Override
public BigFatMessage deserialize(String topic, byte[] data) {
ObjectMapper mapper = new ObjectMapper();
BigFatMessage biggie = null;
try {
biggie = mapper.readValue(data, BigFatMessage.class);
} catch (Exception e) {
// blame others
}
return biggie;
}
@Override
public void close() {
// Default implementation of close method
}
}
As we can see here, the message contains a lot of fields and dependent objects which are actually useless for my consumer, and I don't want to define all the dependent classes in my consumer as well.
Hence, I need a way to receive the message using a simpler model class and deserialize it while ignoring the unnecessary fields from the original message!
How I'm trying to deserialize
public class ThinDeserializer implements Deserializer<ThinMessage> {
@Override
public void configure(Map<String, ?> configs, boolean isKey) {
// Default implementation of configure method
}
@Override
public ThinMessage deserialize(String topic, byte[] data) {
ObjectMapper mapper = new ObjectMapper();
ThinMessage cutie = null;
try {
cutie = mapper.readValue(data, ThinMessage.class);
} catch (Exception e) {
// blame others
}
return cutie;
}
@Override
public void close() {
// Default implementation of close method
}
}
And I get the below Jackson error:
com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of com.myapp.ThinMessage (no Creators, like default construct, exist): cannot deserialize from Object value (no delegate- or property-based Creator)\n
Accompanied by below Kafka exception.
org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method could not be invoked with the incoming message\n
org.springframework.messaging.handler.annotation.support.MethodArgumentNotValidException: Could not resolve method parameter at index 0
Try to change
public class ThinMessage {
private String fieldOne;
private String fieldTwo;
}
to
@JsonIgnoreProperties(ignoreUnknown = true)
public class ThinMessage {
private String fieldOne;
private String fieldTwo;
public ThinMessage() {
}
public String getFieldOne() {
return fieldOne;
}
public void setFieldOne(String fieldOne) {
this.fieldOne = fieldOne;
}
public String getFieldTwo() {
return fieldTwo;
}
public void setFieldTwo(String fieldTwo) {
this.fieldTwo = fieldTwo;
}
}
and set
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
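Combining both suggestions inside the deserializer itself might look like this (a sketch, not the original code; the mapper is created once and FAIL_ON_UNKNOWN_PROPERTIES is switched off so the extra BigFatMessage fields are simply dropped):
import java.util.Map;
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Deserializer;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ThinDeserializer implements Deserializer<ThinMessage> {
    private final ObjectMapper mapper = new ObjectMapper()
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public ThinMessage deserialize(String topic, byte[] data) {
        try {
            return mapper.readValue(data, ThinMessage.class);
        } catch (Exception e) {
            throw new SerializationException("Could not deserialize ThinMessage", e);
        }
    }

    @Override
    public void close() {
        // nothing to close
    }
}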
Check this link: https://docs.spring.io/spring-kafka/docs/2.3.x/reference/html/#json
You have two options: remove the type info on the producer or ignore the type info on the consumer.
@Bean
public DefaultKafkaProducerFactory pf(KafkaProperties properties) {
Map<String, Object> props = properties.buildProducerProperties();
return new DefaultKafkaProducerFactory(props,
new JsonSerializer<MyKeyType>()
.forKeys()
.noTypeInfo(),
new JsonSerializer<MyValueType>()
.noTypeInfo());
}
@Bean
public DefaultKafkaConsumerFactory cf(KafkaProperties properties) {
Map<String, Object> props = properties.buildConsumerProperties();
return new DefaultKafkaConsumerFactory(props,
new JsonDeserializer<>(MyKeyType.class)
.forKeys()
.ignoreTypeHeaders(),
new JsonDeserializer<>(MyValueType.class)
.ignoreTypeHeaders());
}
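Alternatively (a sketch, assuming spring-kafka 2.3+, where org.springframework.kafka.support.serializer.JsonDeserializer exposes these property constants), the consumer can be configured entirely through properties so that type headers are ignored and the value always binds to ThinMessage:
@Bean
public DefaultKafkaConsumerFactory<String, ThinMessage> thinConsumerFactory(KafkaProperties properties) {
    Map<String, Object> props = properties.buildConsumerProperties();
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(JsonDeserializer.USE_TYPE_INFO_HEADERS, false);                    // ignore producer type headers
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, ThinMessage.class.getName()); // fallback target type
    props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.myapp");
    return new DefaultKafkaConsumerFactory<>(props);
}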

Spring Boot not able to update sharded collection on azure cosmos db(MongoDb)

I have a collection "documentDev" present in the database with the sharding key 'dNumber'.
Sample Document :
{
"_id" : "12831221wadaee23",
"dNumber" : "115",
"processed": false
}
If I try to update this document through any query tool using a command like -
db.documentDev.update({
"_id" : ObjectId("12831221wadaee23"),
"dNumber" : "115"
}, {
$set: { "processed": true }
}, {
multi: false, upsert: false
})
It updates the document properly.
But if I use Spring Boot's MongoRepository method like
DocumentRepo.save(Object)
it throws an exception
Caused by: com.mongodb.MongoCommandException: Command failed with error 61: 'query in command must target a single shard key' on server by3prdddc01-docdb-3.documents.azure.com:10255. The full response is { "_t" : "OKMongoResponse", "ok" : 0, "code" : 61, "errmsg" : "query in command must target a single shard key", "$err" : "query in command must target a single shard key" }
This is my DocumentObject:
@Document(collection = "documentDev")
public class DocumentDev
{
@Id
private String id;
private String dNumber;
private String fileName;
private boolean processed;
}
This is my Repository Class -
@Repository
public interface DocumentRepo extends MongoRepository<DocumentDev,
String> { }
and the value I am trying to update:
{
"_id" : "12831221wadaee23",
"dNumber" : "115",
"processed": true
}
the function I am trying to execute:
@Autowired
DocumentRepo docRepo;
docRepo.save(doc); // Fails to execute
Note: I have sharding enabled on the dNumber field, and I am able to update successfully using native queries in a NoSQL tool.
I was also able to execute the repository save operation on a non-sharded collection.
Update: I am able to update the document by creating a native query using MongoTemplate - my query looks like this -
public DocumentDev updateProcessedFlag(DocumentDev request) {
Query query = new Query();
query.addCriteria(Criteria.where("_id").is(request.getId()));
query.addCriteria(Criteria.where("dNumber").is(request.getDNumber()));
Update update = new Update();
update.set("processed", request.isProcessed());
mongoTemplate.updateFirst(query, update, request.getClass());
return request;
}
But this is not a generic solution, as any other field might need updating and my document may have other fields as well.
I had the same issue and solved it with the following hack:
@Configuration
public class ReactiveMongoConfig {
@Bean
public ReactiveMongoTemplate reactiveMongoTemplate(ReactiveMongoDatabaseFactory reactiveMongoDatabaseFactory,
MongoConverter converter, MyService service) {
return new ReactiveMongoTemplate(reactiveMongoDatabaseFactory, converter) {
@Override
protected Mono<UpdateResult> doUpdate(String collectionName, Query query, UpdateDefinition update,
Class<?> entityClass, boolean upsert, boolean multi) {
query.addCriteria(new Criteria("shardKey").is(service.getShardKey()));
return super.doUpdate(collectionName, query, update, entityClass, upsert, multi);
}
};
}
}
It would be nice to have an annotation @ShardKey to mark a document field as the shard key and have it added to the query automatically.
Following the custom repository approach, I got an error because Spring expects a Cosmos entity to be available in the custom implementation {EntityName}CustomRepositoryImpl, so I renamed the implementation. I also added code for:
The case when the entity has inherited fields
The shard key, which is not always the id; we should add it along with the id: { "shardkeyName": "shardValue" }
Adding a generated ObjectId to the entity for new documents
public class DocumentRepositoryImpl<T> implements CosmosRepositoryCustom<T> {
@Autowired
protected MongoTemplate mongoTemplate;
@Override
public T customSave(T entity) {
WriteResult writeResult = mongoTemplate.upsert(createQuery(entity), createUpdate(entity), entity.getClass());
setIdForEntity(entity,writeResult);
return entity;
}
@Override
public T customSave(T entity, String collectionName) {
WriteResult writeResult = mongoTemplate.upsert(createQuery(entity), createUpdate(entity), collectionName);
setIdForEntity(entity,writeResult);
return entity;
}
@Override
public void customSave(List<T> entities) {
if(CollectionUtils.isNotEmpty(entities)){
entities.forEach(entity -> customSave(entity));
}
}
public <T> Update createUpdate(T entity){
Update update = new Update();
for (Field field : getAllFields(entity)) {
try {
field.setAccessible(true);
if (field.get(entity) != null) {
update.set(field.getName(), field.get(entity));
}
} catch (IllegalArgumentException | IllegalAccessException e) {
LOGGER.error("Error creating update for entity",e);
}
}
return update;
}
public <T> Query createQuery(T entity) {
Criteria criteria = new Criteria();
for (Field field : getAllFields(entity)) {
try {
field.setAccessible(true);
if (field.get(entity) != null) {
if (field.getName().equals("id")) {
Query query = new Query(Criteria.where("id").is(field.get(entity)));
query.addCriteria(new Criteria(SHARD_KEY_NAME).is(SHARD_KEY_VALUE));
return query;
}
criteria.and(field.getName()).is(field.get(entity));
}
} catch (IllegalArgumentException | IllegalAccessException e) {
LOGGER.error("Error creating query for entity",e);
}
}
return new Query(criteria);
}
private <T> List<Field> getAllFields(T entity) {
List<Field> fields = new ArrayList<>();
fields.addAll(Arrays.asList(entity.getClass().getDeclaredFields()));
Class<?> c = entity.getClass().getSuperclass();
if(!c.equals(Object.class)){
fields.addAll(Arrays.asList(c.getDeclaredFields()));
}
return fields;
}
public <T> void setIdForEntity(T entity,WriteResult writeResult){
if(null != writeResult && null != writeResult.getUpsertedId()){
Object upsertId = writeResult.getUpsertedId();
entity.setId(upsertId.toString());
}
}
}
I am using spring-boot-starter-mongodb:1.5.1 with spring-data-mongodb:1.9.11
I am hacking around this by creating a custom repository:
public interface CosmosCustomRepository<T> {
void customSave(T entity);
void customSave(T entity, String collectionName);
}
the implement for this repository:
public class CosmosCustomRepositoryImpl<T> implements CosmosCustomRepository<T> {
@Autowired
private MongoTemplate mongoTemplate;
@Override
public void customSave(T entity) {
mongoTemplate.upsert(createQuery(entity), createUpdate(entity), entity.getClass());
}
@Override
public void customSave(T entity, String collectionName) {
mongoTemplate.upsert(createQuery(entity), createUpdate(entity), collectionName);
}
private Update createUpdate(T entity) {
Update update = new Update();
for (Field field : entity.getClass().getDeclaredFields()) {
try {
field.setAccessible(true);
if (field.get(entity) != null) {
update.set(field.getName(), field.get(entity));
}
} catch (IllegalArgumentException | IllegalAccessException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
return update;
}
private Query createQuery(T entity) {
Criteria criteria = new Criteria();
for (Field field : entity.getClass().getDeclaredFields()) {
try {
field.setAccessible(true);
if (field.get(entity) != null) {
if (field.getName().equals("id")) {
return new Query(Criteria.where("id").is(field.get(entity)));
}
criteria.and(field.getName()).is(field.get(entity));
}
} catch (IllegalArgumentException | IllegalAccessException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
return new Query(criteria);
}
}
Your DocumentRepo will then extend this new custom repository:
@Repository
public interface DocumentRepo extends MongoRepository<DocumentDev, String>, CosmosCustomRepository<DocumentDev> { }
To save a new document, just use the new customSave:
@Autowired
DocumentRepo docRepo;
docRepo.customSave(doc);
A more recent and simpler approach I figured out is to use the @Sharded(shardKey = {"shardKey1", "shardKey2", ...}) Spring annotation like so:
@Document(collection = "documentDev")
@Sharded(shardKey = {"dNumber"})
public class DocumentDev {
@Id private String id;
private String dNumber;
private String fileName;
private boolean processed;
}
This is going to be picked up by MongoRepository automatically.
And here is the doc: https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#sharding
Enjoy coding!

Jackson - how to get view-dependant CsvSchema?

I am trying to convert my POJO into 2 different CSV representations.
My POJO:
@NoArgsConstructor
@AllArgsConstructor
public static class Example {
@JsonView(View.Public.class)
private String a;
@JsonView(View.Public.class)
private String b;
@JsonView(View.Internal.class)
private String c;
@JsonView(View.Internal.class)
private String d;
private String d;
public static final class View {
interface Public {}
interface Internal extends Public {}
}
}
The Public view exposes fields a and b, and the Internal view exposes all fields.
The problem is that if I construct the ObjectWriter with .writerWithSchemaFor(Example.class), all my fields end up in the schema; applying .withView only blanks out the excluded fields instead of removing their columns.
This means that I must construct the schema manually.
Tests:
@Test
public void testJson() throws JsonProcessingException {
final ObjectMapper mapper = new ObjectMapper();
final Example example = new Example("1", "2", "3", "4");
final String result = mapper.writerWithView(Example.View.Public.class).writeValueAsString(example);
System.out.println(result); // {"a":"1","b":"2"}
}
@Test
public void testCsv() throws JsonProcessingException {
final CsvMapper mapper = new CsvMapper();
final Example example = new Example("1", "2", "3", "4");
final String result = mapper.writerWithSchemaFor(Example.class).withView(Example.View.Public.class).writeValueAsString(example);
System.out.println(result); // 1,2,,
}
@Test
public void testCsvWithCustomSchema() throws JsonProcessingException {
final CsvMapper mapper = new CsvMapper();
CsvSchema schema = CsvSchema.builder()
.addColumn("a")
.addColumn("b")
.build();
final Example example = new Example("1", "2", "3", "4");
final String result = mapper.writer().with(schema).withView(Example.View.Public.class).writeValueAsString(example);
System.out.println(result); // 1,2
}
The testCsv test outputs 4 columns, 2 of which are left blank because they are excluded by the view. The testCsvWithCustomSchema test has only the fields I want.
Is there a way to get a CsvSchema that matches my @JsonView without having to construct it myself?
Here is a solution I did with reflection. I am not really happy with it since it is still "manually" building the schema.
This solution is also bad because it ignores mapper configuration like MapperFeature.DEFAULT_VIEW_INCLUSION.
This seems like something that should already be available from the library.
@AllArgsConstructor
public class GenericPojoCsvSchemaBuilder {
public CsvSchema build(final Class<?> type) {
return build(type, null);
}
public CsvSchema build(final Class<?> type, final Class<?> view) {
return build(CsvSchema.builder(), type, view);
}
public CsvSchema build(final CsvSchema.Builder builder, final Class<?> type) {
return build(builder, type, null);
}
public CsvSchema build(final CsvSchema.Builder builder, final Class<?> type, final Class<?> view) {
final JsonPropertyOrder propertyOrder = type.getAnnotation(JsonPropertyOrder.class);
final List<Field> fieldsForView;
// DO NOT use Arrays.asList because it uses an internal fixed length implementation which cannot use .removeAll (throws UnsupportedOperationException)
final List<Field> unorderedFields = Arrays.stream(type.getDeclaredFields()).collect(Collectors.toList());
if (propertyOrder != null && propertyOrder.value().length > 0) {
final List<Field> orderedFields = Arrays.stream(propertyOrder.value()).map(s -> {
try {
return type.getDeclaredField(s);
} catch (final NoSuchFieldException e) {
throw new IllegalArgumentException(e);
}
}).collect(Collectors.toList());
if (propertyOrder.value().length < type.getDeclaredFields().length) {
unorderedFields.removeAll(orderedFields);
orderedFields.addAll(unorderedFields);
}
fieldsForView = getJsonViewFields(orderedFields, view);
} else {
fieldsForView = getJsonViewFields(unorderedFields, view);
}
final JsonIgnoreFieldFilter ignoreFieldFilter = new JsonIgnoreFieldFilter(type.getDeclaredAnnotation(JsonIgnoreProperties.class));
fieldsForView.forEach(field -> {
if (ignoreFieldFilter.matches(field)) {
builder.addColumn(field.getName());
}
});
return builder.build();
}
private List<Field> getJsonViewFields(final List<Field> fields, final Class<?> view) {
if (view == null) {
return fields;
}
return fields.stream()
.filter(field -> {
final JsonView jsonView = field.getAnnotation(JsonView.class);
return jsonView != null && Arrays.stream(jsonView.value()).anyMatch(candidate -> candidate.isAssignableFrom(view));
})
.collect(Collectors.toList());
}
private class JsonIgnoreFieldFilter implements ReflectionUtils.FieldFilter {
private final List<String> fieldNames;
public JsonIgnoreFieldFilter(final JsonIgnoreProperties jsonIgnoreProperties) {
if (jsonIgnoreProperties != null) {
fieldNames = Arrays.asList(jsonIgnoreProperties.value());
} else {
fieldNames = null;
}
}
@Override
public boolean matches(final Field field) {
if (fieldNames != null && fieldNames.contains(field.getName())) {
return false;
}
final JsonIgnore jsonIgnore = field.getDeclaredAnnotation(JsonIgnore.class);
return jsonIgnore == null || !jsonIgnore.value();
}
}
}
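Usage would then look roughly like this (a sketch; the schema is built per view and combined with a view-aware writer so excluded columns are dropped entirely):
CsvMapper mapper = new CsvMapper();
CsvSchema publicSchema = new GenericPojoCsvSchemaBuilder()
        .build(Example.class, Example.View.Public.class)
        .withHeader();                                    // optional header row
String csv = mapper.writer(publicSchema)
        .withView(Example.View.Public.class)
        .writeValueAsString(new Example("1", "2", "3", "4"));
System.out.println(csv); // a,b
                         // 1,2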

Handle Polymorphic with StdDeserializer Jackson 2.5

I have three classes which inherit from a super class (SensorData).
@JsonDeserialize(using = SensorDataDeserializer.class)
public abstract class SensorData {
}
public class HumiditySensorData extends SensorData {
}
public class LuminositySensorData extends SensorData {
}
public class TemperatureSensorData extends SensorData {
}
I want to convert a JSON input into one of these classes depending on a parameter. I'm trying to use the Jackson StdDeserializer, and I created a custom deserializer:
@Component
public class SensorDataDeserializer extends StdDeserializer<SensorData> {
private static final long serialVersionUID = 3625068688939160875L;
@Autowired
private SensorManager sensorManager;
private static final String discriminator = "name";
public SensorDataDeserializer() {
super(SensorData.class);
SpringBeanProvider.getInstance().autowireBean(this);
}
@Override
public SensorData deserialize(JsonParser parser,
DeserializationContext context) throws IOException,
JsonProcessingException {
ObjectMapper mapper = (ObjectMapper) parser.getCodec();
ObjectNode root = (ObjectNode) mapper.readTree(parser);
ObjectNode sensor = (ObjectNode) root.get("data");
String type = root.get(discriminator).asText();
Class<? extends SensorData> clazz = this.sensorManager
.getCachedSensorsMap().get(type).sensorDataClass();
if (clazz == null) {
// TODO should throw exception
return null;
}
return mapper.readValue(sensor.traverse(), clazz);
}
}
My problem is that when I determine the correct concrete class to map to, the mapper calls the custom StdDeserializer again. So I need a way to break the cycle once I have the correct type. The stack trace is the following:
java.lang.NullPointerException
at com.hp.psiot.mapping.SensorDataDeserializer.deserialize(SensorDataDeserializer.java:38)
at com.hp.psiot.mapping.SensorDataDeserializer.deserialize(SensorDataDeserializer.java:1)
at com.fasterxml.jackson.databind.ObjectMapper._readValue(ObjectMapper.java:3532)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:1868)
at com.hp.psiot.mapping.SensorDataDeserializer.deserialize(SensorDataDeserializer.java:47)
at com.hp.psiot.mapping.SensorDataDeserializer.deserialize(SensorDataDeserializer.java:1)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3560)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2660)
at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.readJavaType(AbstractJackson2HttpMessageConverter.java:205)
at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.read(AbstractJackson2HttpMessageConverter.java:200)
at org.springframework.web.servlet.mvc.method.annotation.AbstractMessageConverterMethodArgumentResolver.readWithMessageConverters (AbstractMessageConverterMethodArgumentResolver.java:138)
at org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor.readWithMessageConverters(RequestResponseBodyMethodProcessor.java:184)
at org.springframework.web.servlet.mvc.method.annotation.RequestResponseBodyMethodProcessor.resolveArgument(RequestResponseBodyMethodProcessor.java:105)
An example of input
{
"name":"temperature",
"data": {
"value":20
}
}
I only include the stack trace to show that the mapper calls the deserializer again. The reason for the NullPointerException is that the second time the ObjectMapper is called, the input is
"value":20
so an exception is thrown because we don't have the information to determine the type, and the deserializer doesn't check whether the input is correct.
I want to avoid using JsonSubTypes and JsonTypeInfo if possible.
Thanks in advance!
Partial solution
In my case the SensorData is wrapped in another class (ServiceData):
class ServiceData {
@JsonDeserialize(using = SensorDataDeserializer.class)
List<SensorData> sensors;
}
So I got rid of the JsonDeserializer on the SensorData class and put it on the field, which avoids the cycle. The solution isn't the best, but in my case it helps. In the case where the class isn't wrapped in another one, we still have the same problem.
Note that if you have a Collection and you annotate that field with JsonDeserialize, you have to handle the whole collection. Here is the modification in my case:
@Component
public class SensorDataDeserializer extends StdDeserializer<List<SensorData>> {
private static final long serialVersionUID = 3625068688939160875L;
@Autowired
private SensorManager sensorManager;
private static final String discriminator = "name";
public SensorDataDeserializer() {
super(SensorData.class);
SpringBeanProvider.getInstance().autowireBean(this);
}
@Override
public List<SensorData> deserialize(JsonParser parser,
DeserializationContext context) throws IOException,
JsonProcessingException {
try {
ObjectMapper mapper = (ObjectMapper) parser.getCodec();
ArrayNode root = (ArrayNode) mapper.readTree(parser);
int size = root.size();
List<SensorData> sensors = new ArrayList<SensorData>();
for (int i = 0; i < size; ++i) {
ObjectNode sensorHead = (ObjectNode) root.get(i);
ObjectNode sensorData = (ObjectNode) sensorHead.get("data");
String tag = sensorHead.get(discriminator).asText();
Class<? extends SensorData> clazz = this.sensorManager
.getCachedSensorsMap().get(tag).sensorDataClass();
if (clazz == null) {
throw new InvalidJson("unbound sensor");
}
SensorData parsed = mapper.readValue(sensorData.traverse(),
clazz);
if (parsed == null) {
throw new InvalidJson("unbound sensor");
}
sensors.add(parsed);
}
return sensors;
} catch (Throwable e) {
throw new InvalidJson("invalid data");
}
}
}
Hope it helps someone :)
Why don't you just use @JsonTypeInfo? Polymorphic handling is the specific use case for it.
In this case, you would want to use something like:
@JsonTypeInfo(use=Id.NAME, include=As.PROPERTY, property="name")
@JsonSubTypes({ HumiditySensorData.class, ... }) // or register via mapper
public abstract class SensorData { ... }
@JsonTypeName("temperature")
public class TemperatureSensorData extends SensorData {
public TemperatureSensorData(@JsonProperty("data") JsonNode data) {
// extract pieces out
}
}
which would handle resolution from 'name' into the sub-type and bind the contents of 'data' as a JsonNode (or, if you prefer, you can use a Map or Object or whatever type matches).
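With the sub-types registered, binding the sample payload from the question back to the abstract type is then a one-liner (a sketch; it assumes the custom @JsonDeserialize annotation has been removed from SensorData so the type-info mechanism can take over):
ObjectMapper mapper = new ObjectMapper();
String json = "{\"name\":\"temperature\",\"data\":{\"value\":20}}";
SensorData data = mapper.readValue(json, SensorData.class); // resolved to TemperatureSensorData via "name"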

jackson: ignore getter, but not with @JsonView

I'm looking for a way to serialize transient information only in some cases:
@JsonInclude(Include.NON_NULL)
@Entity
public class User {
public static interface AdminView {}
... id, email and others ...
@Transient
private transient Details details;
@JsonIgnore // Goal: ignore all the time, except next line
@JsonView(AdminView.class) // Goal: don't ignore in AdminView
public Details getDetails() {
if (details == null) {
details = ... compute Details ...
}
return details;
}
}
public class UserDetailsAction {
private static final ObjectWriter writer = new ObjectMapper();
private static final ObjectWriter writerAdmin = writer
.writerWithView(User.AdminView.class);
public String getUserAsJson(User user) {
return writer.writeValueAsString(user);
}
public String getUserAsJsonForAdmin(User user) {
return writerAdmin.writeValueAsString(user);
}
}
If I call getUserAsJson I expect to see id, email and the other fields, but not details. This works fine. But I see the same for getUserAsJsonForAdmin, also without details. If I remove the @JsonIgnore annotation, I do see details in both calls.
What am I doing wrong, and is there a good way to go? Thanks!
You may find dynamic Jackson filtering slightly more elegant for your use case. Here is an example of filtering POJO fields based on a custom annotation, sharing one object mapper instance:
public class JacksonFilter {
static private boolean shouldIncludeAllFields;
@Retention(RetentionPolicy.RUNTIME)
public static @interface Admin {}
@JsonFilter("admin-filter")
public static class User {
public final String email;
@Admin
public final String details;
public User(String email, String details) {
this.email = email;
this.details = details;
}
}
public static class AdminPropertyFilter extends SimpleBeanPropertyFilter {
@Override
protected boolean include(BeanPropertyWriter writer) {
// deprecated since 2.3
return true;
}
@Override
protected boolean include(PropertyWriter writer) {
if (writer instanceof BeanPropertyWriter) {
return shouldIncludeAllFields || ((BeanPropertyWriter) writer).getAnnotation(Admin.class) == null;
}
return true;
}
}
public static void main(String[] args) throws JsonProcessingException {
User user = new User("email", "secret");
ObjectMapper mapper = new ObjectMapper();
mapper.setFilters(new SimpleFilterProvider().addFilter("admin-filter", new AdminPropertyFilter()));
System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(user));
shouldIncludeAllFields = true;
System.out.println(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(user));
}
}
Output:
{
"email" : "email"
}
{
"email" : "email",
"details" : "secret"
}
It looks like Jackson has a horrible concept for a very cool feature like @JsonView. The only way I discovered to solve my problem is:
@JsonInclude(Include.NON_NULL)
@Entity
public class User {
public static interface BasicView {}
public static interface AdminView {}
... id and others ...
@JsonView({BasicView.class, AdminView.class}) // And this for EVERY field
@Column
private String email;
@Transient
private transient Details details;
@JsonView(AdminView.class)
public Details getDetails() {
if (details == null) {
details = ... compute Details ...
}
return details;
}
}
public class UserDetailsAction {
private static final ObjectWriter writer = new ObjectMapper()
.disable(MapperFeature.DEFAULT_VIEW_INCLUSION)
.writerWithView(User.BasicView.class);
private static final ObjectWriter writerAdmin = new ObjectMapper()
.disable(MapperFeature.DEFAULT_VIEW_INCLUSION)
.writerWithView(User.AdminView.class);
public String getUserAsJson(User user) {
return writer.writeValueAsString(user);
}
public String getUserAsJsonForAdmin(User user) {
return writerAdmin.writeValueAsString(user);
}
}
Maybe it helps someone. But I hope to find a better solution, which is why I haven't accepted my own answer.
EDIT: because an interface can extend (multiple) interfaces, I can use:
public static interface AdminView extends BasicView {}
and just
@JsonView(BasicView.class)
instead of
@JsonView({BasicView.class, AdminView.class})
