I use Dozer Mapper to map my objects between the DAO layer (MongoDB is used) and the business logic layer. The object structures are actually identical.
Document class (semantically closer to the business logic layer)
public class Document {
private String id;
private String schema;
private Map<String, Object> attributes = new HashMap<>();
}
DocumentEntity class (semantically closer to the DAO layer)
public class DocumentEntity {
@Id
private String id;
@Field("schema")
private String schema;
@Field("attributes")
private Map<String, Object> attributes = new HashMap<>();
}
Here's an example of dozer-bean-mappings.xml
<mapping>
<class-a>DocumentEntity</class-a>
<class-b>Document</class-b>
</mapping>
And here's the method that converts one object into another before saving it to MongoDB
DocumentEntity toEntity(Document document) {
DocumentEntity entity = new DocumentEntity();
mapper.map(document, entity);
return entity;
}
As you can see, one of the fields, attributes, is a Map<String, Object>. Everything worked fine until I tried to map a complex type "inside" the Object. I needed a list of objects, each with two fields, as a value of this Map.
At the top level there's a REST API that saves objects to MongoDB. But because of Dozer's incorrect mapping, the types become invalid.
JSON input
{
"schema":"sch_1",
"attributes": {
"objUID": "obj_1",
"nestedObjects": [
{
"objUID": "obj_1_1",
"objSchema": "sch_1_1"
},
{
"objUID": "obj_1_2",
"objSchema": "sch_1_2"
}
]
}
}
And this is what was saved after mapping:
{
"schema":"sch_1",
"attributes": {
"objUID": "obj_1",
"nestedObjects": [
"{objUID=obj_1_1, objSchema=sch_1_1}",
"{objUID=obj_1_2, objSchema=sch_1_2}"
]
}
}
So, instead of getting a list of objects, I just get a list of strings.
How should I configure Dozer to get the correct object mapping?
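One possible workaround, sketched here as an assumption rather than a verified fix: Dozer supports copy-by-reference field mappings (also expressible in dozer-bean-mappings.xml via the copy-by-reference attribute), which make it copy the attributes map reference unchanged instead of converting each unknown nested value, the step that degrades nested objects to their toString() form. Using the Dozer 5.x Java API:
import org.dozer.DozerBeanMapper;
import org.dozer.loader.api.BeanMappingBuilder;
import org.dozer.loader.api.FieldsMappingOptions;

public class MapperConfig {
    public static DozerBeanMapper buildMapper() {
        DozerBeanMapper mapper = new DozerBeanMapper();
        mapper.addMapping(new BeanMappingBuilder() {
            @Override
            protected void configure() {
                // Copy the whole map by reference instead of converting
                // each value; nested lists and maps keep their types.
                mapping(DocumentEntity.class, Document.class)
                        .fields("attributes", "attributes",
                                FieldsMappingOptions.copyByReference());
            }
        });
        return mapper;
    }
}
Note that copying by reference means the entity and the business object share the same map instance, which is usually acceptable for a transient conversion like toEntity().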
I'm using a Converter class to store my entity field (of type Map<String, Object>) in a MySQL DB as JSON text:
@Entity
@Table(name = "some_table")
public class SomeEntity {
...
@Column(name = "meta_data", nullable = false)
@Convert(converter = MetaDataConverter.class)
Map<String, Object> metaData = null;
...
}
Here is the MetaDataConverter class:
@Converter
public class MetaDataConverter implements AttributeConverter<Map<String, Object>, String> {
private final ObjectMapper objectMapper = new ObjectMapper();
@Override
public String convertToDatabaseColumn(Map<String, Object> metadata) {
try {
return objectMapper.writeValueAsString(metadata);
} catch (Exception e) {
...
}
}
@Override
public Map<String, Object> convertToEntityAttribute(String dbData) {
try {
return objectMapper.readValue(dbData, Map.class);
} catch (Exception e) {
...
}
}
}
Here is the my service class:
@Service
@RequiredArgsConstructor
@Transactional
public class MetaDataService {
private final JpaRepository<MetaDataEntity, String> metaDataRepository;
public MetaDataEntity updateOnlyMetadata(String someParameter, Map<String, Object> newMetaData) {
MetaDataEntity metaDataEntity = metaDataRepository.findBySomeParameter(someParameter);
metaDataEntity.setMetaData(newMetaData);
return metaDataRepository.save(metaDataEntity);
}
}
For creation it works fine, but it doesn't work when updating the converted field. If I try to update only the metaData field, the corresponding column in the database is not updated. However, metaData is updated when other entity fields are updated along with it.
I've already seen similar questions (JPA not updating column with Converter class and Data lost because of JPA AttributeConverter?), but I have not found the answer there. Is there something like a standard or best practice for such a case?
FYI: For the CRUD operations I'm using the Spring Data JpaRepository class and its methods.
Implement proper equals and hashCode methods on your map value objects. JPA can then use those methods to detect that your Map is dirty.
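For illustration, a minimal sketch of such a value object (MetaDataValue and its fields are hypothetical names standing in for whatever your map actually holds):
import java.util.Objects;

// With equals()/hashCode() overridden, Hibernate's dirty check can
// detect that the converted attribute changed and issue an UPDATE.
public class MetaDataValue {
    private final String key;
    private final String value;

    public MetaDataValue(String key, String value) {
        this.key = key;
        this.value = value;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof MetaDataValue)) return false;
        MetaDataValue other = (MetaDataValue) o;
        return Objects.equals(key, other.key)
                && Objects.equals(value, other.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(key, value);
    }
}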
I had a similar problem. I figured out that Hibernate uses the equals method to determine whether an attribute is dirty and then makes an update.
You have two choices: implement the equals method correctly for the entity, including the converted attribute,
or
instead of using Map, try to use HashMap as the attribute type.
HashMap has an already-implemented equals method, and Hibernate will use it.
I am creating a new endpoint in Spring Boot that will return simple stats on users, generated from an aggregate query in a MongoDB database. However, I get a PropertyReferenceException. I have read multiple Stack Overflow questions about it, but didn't find one that solved this problem.
We have a Mongo data schema like this:
{
"_id" : ObjectId("5d795993288c3831c8dffe60"),
"user" : "000001",
"name" : "test",
"attributes" : {
"brand" : "Chrome",
"language" : "English" }
}
The database is filled with multiple users, and using Spring Boot we want to aggregate the user stats per brand. There could be any number of attributes in the attributes object.
Here is the aggregation we are doing
Aggregation agg = newAggregation(
group("attributes.brand").count().as("number"),
project("number").and("type").previousOperation()
);
AggregationResults<Stats> groupResults
= mongoTemplate.aggregate(agg, Profile.class, Stats.class);
return groupResults.getMappedResults();
Which produces this mongo query which works:
> db.collection.aggregate([
{ "$group" : { "_id" : "$attributes.brand" , "number" : { "$sum" : 1}}} ,
{ "$project" : { "number" : 1 , "_id" : 0 , "type" : "$_id"}} ])
{ "number" : 4, "type" : "Chrome" }
{ "number" : 2, "type" : "Firefox" }
However when running a simple integration test we get this error:
org.springframework.data.mapping.PropertyReferenceException: No property brand found for type String! Traversed path: Profile.attributes.
From what I understand, since attributes is a Map<String, String>, there might be a schema problem. At the same time, I can't modify the Profile object.
Is there something I am missing in the aggregation, or anything I could change in my Stats object?
For reference, here are the data models we're using to work with JSON and Jackson.
The Stats data model:
@Document
public class Stats {
@JsonProperty
private String type;
@JsonProperty
private int number;
public Stats() {}
/* ... */
}
The Profile data model:
@Document
public class Profile {
@NotNull
@JsonProperty
private String user;
@NotNull
@JsonProperty
private String name;
@JsonProperty
private Map<String, String> attributes = new HashMap<>();
public Profile() {}
/* ... */
}
I found a solution, which was a combination of two problems:
The PropertyReferenceException was indeed caused because attributes is a Map<String, String>, which means there is no schema for Mongo.
The error message No property brand found for type String! Traversed path: Profile.attributes. means that the Map object doesn't have a brand property in it.
In order to fix that without touching my original Profile class, I had to create a new custom class which would map the attributes to an attributes object having the properties I want to aggregate on, like:
public class StatsAttributes {
@JsonProperty
private String brand;
@JsonProperty
private String language;
public StatsAttributes() {}
/* ... */
}
Then I created a custom StatsProfile which would leverage my StatsAttributes and would be similar to the original Profile object, without modifying it.
@Document
public class StatsProfile {
@JsonProperty
private String user;
@JsonProperty
private StatsAttributes attributes;
public StatsProfile() {}
/* ... */
}
With that, the PropertyReferenceException disappeared when using my new class StatsProfile in the aggregation:
AggregationResults<Stats> groupResults
= mongoTemplate.aggregate(agg, StatsProfile.class, Stats.class);
However, I would not get any results. It seems the query would not find any documents in the database. That's when I realized that the production Mongo documents had the field "_class": "com.company.dao.model.Profile", which was tied to the Profile object.
After some research, I found that for the new StatsProfile to work it would need a @TypeAlias("Profile") annotation. I also needed to specify a collection name, which leads to:
@Document(collection = "profile")
@TypeAlias("Profile")
public class StatsProfile {
/* ... */
}
And with all that, finally it worked!
I suppose that's not the prettiest solution. I wish I didn't need to create a new Profile object and could just treat the attributes as StatsAttributes.class somehow in the mongoTemplate query. If anyone knows how, please share 🙏
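One untested idea: the aggregate method has an overload that takes a collection name instead of a typed input class. Since the untyped variant uses field names as-is rather than validating them against the domain model, it might avoid both the PropertyReferenceException and the extra StatsProfile class ("profile" as the collection name is an assumption here):
// Hedged sketch: aggregate against the collection name, skipping the
// typed Profile input class and its property-path validation.
Aggregation agg = newAggregation(
        group("attributes.brand").count().as("number"),
        project("number").and("type").previousOperation());

AggregationResults<Stats> groupResults =
        mongoTemplate.aggregate(agg, "profile", Stats.class);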
I have a Spring JPA entity with numeric properties which should be serialized as string values that are the result of a lookup in a code-list.
@Entity
class Test {
String name;
int status;
}
Should be serialized by looking up the numeric value in a code-list like so:
{ "name" : "mytest" , "status" : "APPROVED" }
The code-list is implemented as another entity and can be accessed using a Spring Boot JPA repository.
The solution I am looking for must be scalable in that
the code-list cannot be loaded from the database again for each serialization or new object
the serialization code must be generic so that it can be re-used in other entities.
That is, other entities also have numeric properties and their own corresponding code-lists and repositories.
I understand one could either use a custom Jackson serializer, or implement the lookup as part of the entity. However, neither seems to satisfy the conditions above. A custom Jackson serializer can only have the repository autowired if I share it, or the map, through a static field. The static field makes it hard to re-use the serializer implementation for other entities.
Implementing the lookup as part of the entity, say as a custom getter, would make the code hard to re-use, especially since the map for the code-list lookup must be shared across instances.
Update: A third strategy would be to add JPA relationships to the code-list in my entities and to expose the value in a custom getter. For deserialization, rather inefficient lookups would be required, though.
This works, but the static map prevents re-using the code.
@Component
public class NewApprovalSerializer extends JsonSerializer<Integer>
{
@Autowired
ApprovalStatusRefRepository repo;
static Map<Integer, String> map = new HashMap<Integer, String>();
@PostConstruct
public void init() {
for (TGoApprovalStatusRef as : repo.findAll()) {
Integer key = Integer.valueOf(as.getApprovalStatusId());
String val = as.getNameTx();
map.put(key, val);
}
}
public NewApprovalSerializer() {
SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(this);
}
@Override
public void serialize(Integer value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
gen.writeObject(map.get(value));
}
}
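For context, here is how such a serializer is typically attached to the entity property, a sketch assuming Jackson's @JsonSerialize annotation:
import com.fasterxml.jackson.databind.annotation.JsonSerialize;

@Entity
class Test {
    String name;

    // Jackson delegates serialization of this field to the custom serializer.
    @JsonSerialize(using = NewApprovalSerializer.class)
    int status;
}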
I could change the entity like this, but again I have a static map, and the code is even harder to re-use in another entity.
@Entity
class Test {
String name;
@JsonIgnore
int status;
static Map<Integer, String> map = new HashMap<Integer, String>();
public Test() {
... init static map
}
public String getStatus() {
return map.get(this.status);
}
}
What is the standard way to implement a lookup of values upon serialization (and vice versa in deserialization)?
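For what it's worth, one way to avoid the static field, sketched under the assumption of Spring Boot's Jackson auto-configuration: register Spring's SpringHandlerInstantiator so that Jackson obtains serializer instances from the application context, which makes a plain @Autowired repository work inside each serializer:
import org.springframework.boot.autoconfigure.jackson.Jackson2ObjectMapperBuilderCustomizer;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.json.SpringHandlerInstantiator;

@Configuration
public class JacksonConfig {

    // Lets Jackson create (de)serializers through the Spring context,
    // so their @Autowired fields are injected per instance.
    @Bean
    public SpringHandlerInstantiator handlerInstantiator(ApplicationContext context) {
        return new SpringHandlerInstantiator(context.getAutowireCapableBeanFactory());
    }

    @Bean
    public Jackson2ObjectMapperBuilderCustomizer customizer(SpringHandlerInstantiator hi) {
        return builder -> builder.handlerInstantiator(hi);
    }
}
With this in place, each entity's serializer could hold its own non-static map and repository, making the pattern reusable across entities.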
I'm on the Spring Boot 1.4.x branch with Spring Data MongoDB.
I want to extend a POJO from HashMap to give it the ability to save new properties dynamically.
I know I could create a Map<String, Object> properties field in the Entry class to save my dynamic values inside it, but I don't want an inner structure. My goal is to have all fields at the root of the Entry class, so it serializes like this:
{
"id":"12334234234",
"dynamicField1": "dynamicValue1",
"dynamicField2": "dynamicValue2"
}
So I created this Entry class:
@Document
public class Entry extends HashMap<String, Object> {
@Id
private String id;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
}
And the repository like this:
public interface EntryRepository extends MongoRepository<Entry, String> {
}
When I launch my app, I get this error:
Error creating bean with name 'entryRepository': Invocation of init method failed; nested exception is org.springframework.data.mapping.model.MappingException: Could not lookup mapping metadata for domain class java.util.HashMap!
Any idea?
TL;DR
Do not use Java collection/map types as a base class for your entities.
Repositories are not the right tool for your requirement.
Use DBObject with MongoTemplate if you need dynamic top-level properties.
Explanation
Spring Data Repositories are repositories in the DDD sense, acting as a persistence gateway for your well-defined aggregates. They inspect domain classes to derive the appropriate queries. Spring Data excludes collection and map types from entity analysis, and that's why extending your entity from a Map fails.
Repository query methods for dynamic properties are possible, but it's not the primary use case. You would have to use SpEL queries to express your query:
public interface EntryRepository extends MongoRepository<Entry, String> {
@Query("{ ?0 : ?1 }")
Entry findByDynamicField(String field, Object value);
}
This method does not give you any type safety regarding the predicate value, and is only an ugly alias for a proper, individual query.
Rather, use DBObject with MongoTemplate and its query methods directly:
List<DBObject> result = template.find(new Query(Criteria.where("your_dynamic_field")
.is(theQueryValue)), DBObject.class);
DBObject is a Map that gives you full access to properties without enforcing a pre-defined structure. You can create, read, update, and delete DBObject instances via the Template API.
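For instance, a minimal sketch of writing such a free-form document (BasicDBObject comes from the MongoDB driver; the collection name "entries" is an assumption):
// Build a schema-less document and persist it via MongoTemplate.
DBObject entry = new BasicDBObject();
entry.put("dynamicField1", "dynamicValue1");
entry.put("dynamicField2", "dynamicValue2");
template.save(entry, "entries");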
A last thing
You can declare dynamic properties on a nested level using a Map, if your aggregate root declares some static properties:
@Document
public class Data {
@Id
private String id;
private Map<String, Object> details;
}
Here we can achieve this using JSONObject.
The entity will be like this
@Document
public class Data {
@Id
private String id;
private JSONObject details;
//getters and setters
}
The POJO will be like this
public class DataDTO {
private String id;
private JSONObject details;
//getters and setters
}
In the service:
Data formData = new Data();
JSONObject details = dataDTO.getDetails();
details.put("dynamicField1", "dynamicValue1");
details.put("dynamicField2", "dynamicValue2");
formData.setDetails(details);
mongoTemplate.save(formData);
I have done this as per my business needs; refer to this code and adapt it to yours. Is this helpful?
I have a document that can have dynamic key names:
{
    "_id" : ObjectId("51a29f6413dc992c24e0283e"),
    "envinfo" : {
        "appName" : "MyJavaApp",
        "environment" : {
            "cpuCount" : 12,
            "heapMaxBytes" : 5724766208,
            "osVersion" : "6.2",
            "arch" : "amd64",
            "javaVendor" : "Sun Microsystems Inc.",
            "pid" : 44996,
            "javaVersion" : "1.6.0_38",
            "heapInitialBytes" : 402507520
        }
    }
}
Here envinfo's keys are not known in advance.
What is the best way to create an entity class in Spring Data MongoDB which will map this document?
This is one way of doing it. There may be other, better ways.
Create a map of attributes and store the map in Mongo:
public class Env {
@Id
private ObjectId id;
@Field
private Envinfo envinfo;
public static class Envinfo {
@Field
private String appName;
@Field
private Map<String, String> attributes;
}
}
If you know some keys in advance, you may add those as fields in Envinfo and keep them out of the attributes map.
Here is what I'd do:
class EnvDocument {
@Id
private String id; // getter and setter omitted
@Field(value = "envinfo")
private BasicDBObject infos;
public Map getInfos() {
// some documents don't have any infos; in this case return null
if (null != infos)
return infos.toMap();
return null;
}
public void setInfos(Map infos) {
this.infos = new BasicDBObject(infos);
}
}
This way, getInfos() returns a Map<String, Object> you can explore by String key when needed, and which can contain nested Maps.
Regarding dependencies, it is better not to expose the BasicDBObject field directly, so this class can be used through its interface in code that does not include any MongoDB library.
Note that if there are some frequently accessed fields in envinfo, it would be better to declare them as fields in your class to have direct accessors, and so not spend too much time browsing the map again and again.
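To illustrate, a brief usage sketch (mongoTemplate and the example id are assumed):
// Load the document and browse the dynamic keys of envinfo.
EnvDocument doc = mongoTemplate.findById("51a29f6413dc992c24e0283e", EnvDocument.class);
Map infos = doc.getInfos();
if (infos != null) {
    Map environment = (Map) infos.get("environment"); // nested map
    Object cpuCount = environment.get("cpuCount");    // 12 in the example above
}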