I'm using a Converter class to store my entity field (of type Map<String, Object>) in a MySQL database as JSON text:
@Entity
@Table(name = "some_table")
public class SomeEntity {
    ...
    @Column(name = "meta_data", nullable = false)
    @Convert(converter = MetaDataConverter.class)
    Map<String, Object> metaData = null;
    ...
}
Here is the MetaDataConverter class:
@Converter
public class MetaDataConverter implements AttributeConverter<Map<String, Object>, String> {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String convertToDatabaseColumn(Map<String, Object> metadata) {
        try {
            return objectMapper.writeValueAsString(metadata);
        } catch (Exception e) {
            ...
        }
    }

    @Override
    public Map<String, Object> convertToEntityAttribute(String dbData) {
        try {
            return objectMapper.readValue(dbData, Map.class);
        } catch (Exception e) {
            ...
        }
    }
}
Here is my service class:
@Service
@RequiredArgsConstructor
@Transactional
public class MetaDataService {

    private final JpaRepository<MetaDataEntity, String> metaDataRepository;

    public MetaDataEntity updateOnlyMetadata(String someParameter, Map<String, Object> newMetaData) {
        MetaDataEntity metaDataEntity = metaDataRepository.findBySomeParameter(someParameter);
        metaDataEntity.setMetaData(newMetaData);
        return metaDataRepository.save(metaDataEntity);
    }
}
Creation works fine, but updating the converted field does not. If I try to update only the metaData field, the corresponding column in the database is not updated. However, metaData is updated when other entity fields change along with it.
I've already seen similar questions (JPA not updating column with Converter class and Data lost because of JPA AttributeConverter?), but I have not found the answer there. Is there something like a standard or best practice for such a case?
FYI: for the CRUD operations I'm using the Spring Data JpaRepository interface and its methods.
Implement proper equals and hashCode methods on your map value objects. JPA can then use those methods to identify that your Map is dirty.
I had a similar problem. I figured out that Hibernate uses the equals method to determine whether an attribute is dirty and then issues an update.
You have two choices: implement the equals method correctly for the entity, including the converted attribute,
or
instead of declaring the attribute as Map, declare it as HashMap.
HashMap already comes with a content-based equals implementation (inherited from AbstractMap), and Hibernate will use it.
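A minimal sketch of that second option, based on the entity from the question (copying the incoming data into a fresh HashMap on update is an extra assumption on my part, so the comparison runs against a new instance):

@Column(name = "meta_data", nullable = false)
@Convert(converter = MetaDataConverter.class)
HashMap<String, Object> metaData = null;

// In the service, hand Hibernate a brand-new map instance on update;
// AbstractMap.equals() then decides whether the attribute is dirty.
metaDataEntity.setMetaData(new HashMap<>(newMetaData));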
I have a Spring JPA entity with numeric properties that should be serialized as string values obtained by a lookup in a code-list.
@Entity
class Test {
    String name;
    int status;
}
It should be serialized by looking up the numeric value in a code-list, like so:
{ "name" : "mytest" , "status" : "APPROVED" }
The code-list is implemented as another entity and can be accessed using a spring-boot JPA repository.
The solution I am looking for must be scalable, in that:
the code-list cannot be loaded from the database again for each serialization or new object;
the serialization code must be generic, so that it can be re-used in other entities. That is, other entities also have numeric properties and their own corresponding code-lists and repositories.
I understand one could either use a custom Jackson serializer or implement the lookup as part of the entity. However, neither seems to satisfy the conditions above. A custom Jackson serializer can only have the repository autowired if I share it, or the lookup map, through a static field; the static field makes it hard to re-use the serializer implementation for other entities.
Implementing the lookup as part of the entity, say as a custom getter, would also make the code hard to re-use, especially since the map for the code-list lookup must be shared across instances.
Update: A third strategy would be to add JPA relationships to the code-list in my entities and expose the value in a custom getter, as sketched below. For deserialization, rather inefficient lookups would be required, though.
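A rough sketch of that third strategy, reusing the code-list entity from the serializer below (the join column name is an assumption):

@Entity
class Test {

    String name;

    // The numeric code is replaced by a relationship to the code-list row.
    @JsonIgnore
    @ManyToOne(fetch = FetchType.EAGER)
    @JoinColumn(name = "status")
    private TGoApprovalStatusRef statusRef;

    // Serialized as "status"; the lookup is delegated to the relationship.
    public String getStatus() {
        return statusRef == null ? null : statusRef.getNameTx();
    }
}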
This works, but the static map prevents re-using the code.
@Component
public class NewApprovalSerializer extends JsonSerializer<Integer> {

    @Autowired
    ApprovalStatusRefRepository repo;

    static Map<Integer, String> map = new HashMap<>();

    @PostConstruct
    public void init() {
        for (TGoApprovalStatusRef as : repo.findAll()) {
            Integer key = Integer.valueOf(as.getApprovalStatusId());
            String val = as.getNameTx();
            map.put(key, val);
        }
    }

    public NewApprovalSerializer() {
        SpringBeanAutowiringSupport.processInjectionBasedOnCurrentContext(this);
    }

    @Override
    public void serialize(Integer value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        gen.writeObject(map.get(value));
    }
}
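A variation I have been considering (a sketch, names assumed, not something I have verified): let Spring instantiate Jackson handlers through SpringHandlerInstantiator from spring-web, so the serializer can be a regular bean with the repository injected and no static field:

@Configuration
public class JacksonConfig {

    // Lets Jackson obtain @JsonSerialize handlers from the Spring context,
    // so serializers can get repositories injected without static state.
    @Bean
    public HandlerInstantiator handlerInstantiator(ApplicationContext context) {
        return new SpringHandlerInstantiator(context.getAutowireCapableBeanFactory());
    }

    @Bean
    public Jackson2ObjectMapperBuilder objectMapperBuilder(HandlerInstantiator handlerInstantiator) {
        return new Jackson2ObjectMapperBuilder().handlerInstantiator(handlerInstantiator);
    }
}

With that in place, the serializer could keep an instance-level map and be reused per code-list, but I am not sure this is the standard approach.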
I could change the entity like this, but again I have a static map, and the code is even harder to re-use in another entity.
@Entity
class Test {

    String name;

    @JsonIgnore
    int status;

    static Map<Integer, String> map = new HashMap<>();

    public Test() {
        // ... init static map
    }

    public String getStatus() {
        return map.get(this.status);
    }
}
What is the standard way to implement a lookup of values upon serialization (and vice versa upon deserialization)?
I use Dozer Mapper to map my objects between the DAO layer (MongoDB is used) and the business logic layer. The objects' structure is actually identical.
Document class (semantically closer to the business logic layer):
public class Document {
    private String id;
    private String schema;
    private Map<String, Object> attributes = new HashMap<>();
}
DocumentEntity class (semantically closer to the DAO layer):
public class DocumentEntity {

    @Id
    private String id;

    @Field("schema")
    private String schema;

    @Field("attributes")
    private Map<String, Object> attributes = new HashMap<>();
}
Here's an example of dozer-bean-mappings.xml:
<mapping>
    <class-a>DocumentEntity</class-a>
    <class-b>Document</class-b>
</mapping>
And here's the method that converts one object into the other before saving it to MongoDB:
DocumentEntity toEntity(Document document) {
    DocumentEntity entity = new DocumentEntity();
    mapper.map(document, entity);
    return entity;
}
As you can see, the attributes field is a Map<String, Object>. Everything worked fine until I tried to map a complex type "inside" the Object. I needed a list of objects with two fields each as a value of this Map.
At the top level there's a REST API that saves objects to MongoDB. But because of Dozer's incorrect mapping, the types become invalid.
JSON input:
{
    "schema": "sch_1",
    "attributes": {
        "objUID": "obj_1",
        "nestedObjects": [
            {
                "objUID": "obj_1_1",
                "objSchema": "sch_1_1"
            },
            {
                "objUID": "obj_1_2",
                "objSchema": "sch_1_2"
            }
        ]
    }
}
And this is what was saved after the mapping:
{
    "schema": "sch_1",
    "attributes": {
        "objUID": "obj_1",
        "nestedObjects": [
            "{objUID=obj_1_1, objSchema=sch_1_1}",
            "{objUID=obj_1_2, objSchema=sch_1_2}"
        ]
    }
}
So, instead of getting a list of objects, I just get a list of strings.
How should I configure Dozer to get the correct object mapping?
I'm on the Spring Boot 1.4.x branch with Spring Data MongoDB.
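Update: one workaround I'm considering (the class name is illustrative) is to take the attributes field out of Dozer's per-entry mapping and deep-copy it with Jackson, which keeps nested lists and maps intact:

// Registered per field in dozer-bean-mappings.xml via
// <field custom-converter="..."><a>attributes</a><b>attributes</b></field>
public class AttributesConverter extends DozerConverter<Map, Map> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public AttributesConverter() {
        super(Map.class, Map.class);
    }

    @Override
    public Map convertTo(Map source, Map destination) {
        // Rebuild the whole structure as plain maps and lists instead of
        // letting Dozer call toString() on the nested values.
        return source == null ? null
                : MAPPER.convertValue(source, new TypeReference<Map<String, Object>>() {});
    }

    @Override
    public Map convertFrom(Map source, Map destination) {
        return convertTo(source, destination);
    }
}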
I want to extend a POJO from HashMap to give it the ability to save new properties dynamically.
I know I could create a Map<String, Object> properties field in the Entry class to store my dynamic values, but I don't want an inner structure. My goal is to have all fields at the root of the Entry class, so it serializes like this:
{
    "id": "12334234234",
    "dynamicField1": "dynamicValue1",
    "dynamicField2": "dynamicValue2"
}
So I created this Entry class:
@Document
public class Entry extends HashMap<String, Object> {

    @Id
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
And the repository like this:
public interface EntryRepository extends MongoRepository<Entry, String> {
}
When I launch my app I have this error:
Error creating bean with name 'entryRepository': Invocation of init method failed; nested exception is org.springframework.data.mapping.model.MappingException: Could not lookup mapping metadata for domain class java.util.HashMap!
Any idea?
TL;DR
Do not use Java collection/map types as a base class for your entities.
Repositories are not the right tool for your requirement.
Use DBObject with MongoTemplate if you need dynamic top-level properties.
Explanation
Spring Data repositories are repositories in the DDD sense, acting as persistence gateways for your well-defined aggregates. They inspect domain classes to derive the appropriate queries. Spring Data excludes collection and map types from entity analysis, which is why extending your entity from a Map fails.
Repository query methods for dynamic properties are possible, but they're not the primary use case. You would have to use SpEL queries to express your query:
public interface EntryRepository extends MongoRepository<Entry, String> {

    @Query("{ ?0 : ?1 }")
    Entry findByDynamicField(String field, Object value);
}
This method does not give you any type safety regarding the predicate value and is only an ugly alias for a proper, individual query.
Rather, use DBObject with MongoTemplate and its query methods directly:
List<DBObject> result = template.find(new Query(Criteria.where("your_dynamic_field")
.is(theQueryValue)), DBObject.class);
DBObject is a Map that gives you full access to properties without enforcing a pre-defined structure. You can create, read, update and delete DBObject instances via the Template API.
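For example, a write with dynamic top-level fields might look like this (the collection name is an assumption):

// Schema-less write: any top-level keys are allowed.
DBObject entry = new BasicDBObject();
entry.put("dynamicField1", "dynamicValue1");
entry.put("dynamicField2", "dynamicValue2");
template.save(entry, "entries");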
A last thing
You can declare dynamic properties on a nested level using a Map, if your aggregate root declares some static properties:
@Document
public class Data {

    @Id
    private String id;

    private Map<String, Object> details;
}
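Usage could then look like this (getters and setters on Data assumed):

Data data = new Data();
Map<String, Object> details = new HashMap<>();
details.put("dynamicField1", "dynamicValue1");
details.put("dynamicField2", "dynamicValue2");
data.setDetails(details);
mongoTemplate.save(data);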
Here we can achieve this using JSONObject.
The entity will look like this:
@Document
public class Data {

    @Id
    private String id;

    private JSONObject details;

    // getters and setters
}
The POJO will look like this:
public class DataDTO {

    private String id;

    private JSONObject details;

    // getters and setters
}
In the service:
Data formData = new Data();
JSONObject details = dataDTO.getDetails();
details.put("dynamicField1", "dynamicValue1");
details.put("dynamicField2", "dynamicValue2");
formData.setDetails(details);
mongoTemplate.save(formData);
I have done this as per my business case; refer to this code and adapt it to yours. Is this helpful?
I have a Hibernate entity with a relationship mapped with fetch = FetchType.LAZY, like:
....
private ConsumerEntity consumerEntity;

@ManyToOne(fetch = FetchType.LAZY)
@JoinColumn(name = "orderId", insertable = false, updatable = false)
public ConsumerEntity getConsumerEntity() {
    return this.consumerEntity;
}
....
I want to transfer the entity object to a HashMap<String, Object>. I do that with Introspector, currently ignoring the child entities and copying only the non-entity members into the map:
protected Map<String, Object> transBean2Map(Object beanObj) {
    Map<String, Object> map = new HashMap<String, Object>();
    try {
        BeanInfo beanInfo = Introspector.getBeanInfo(beanObj.getClass());
        PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
        for (PropertyDescriptor property : propertyDescriptors) {
            String key = property.getName();
            // Skip the synthetic "class" property and any child entities.
            if (!key.equals("class") && !key.endsWith("Entity")) {
                Method getter = property.getReadMethod();
                Object value = getter.invoke(beanObj);
                map.put(key, value);
            }
        }
    } catch (Exception e) {
        Logger.getAnonymousLogger().log(Level.SEVERE, "transBean2Map Error " + e);
    }
    return map;
}
I want to put every child entity into the map as an embedded map, ONLY IF it was already fetched (maybe by explicitly invoking its getter, or incidentally loaded by some other method; giving bonus information when it doesn't cost much is always a good idea, right?).
And NO, I don't want to make everything fetchType.EAGER. I just want to detect whether the child entities are already loaded, then transfer and embed them into the parent map; otherwise, do nothing (don't query the database to fetch them).
Embedding won't take much effort, maybe just some recursion. So what I need is to know whether or not the child entities are already loaded in the parent entity, just like consumerEntity in the example above.
Is there any way I can do that?
Hibernate provides some tools for this; here is what you can try:
if (HibernateProxy.class.isInstance(entity.getConsumerEntity())) {
    HibernateProxy proxy = HibernateProxy.class.cast(entity.getConsumerEntity());
    if (proxy.getHibernateLazyInitializer().isUninitialized()) {
        // getConsumerEntity() IS NOT initialized
    } else {
        // getConsumerEntity() IS initialized
    }
}
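Alternatively, a shorter sketch using the org.hibernate.Hibernate utility class, which performs the same check and also covers uninitialized collections:

if (Hibernate.isInitialized(entity.getConsumerEntity())) {
    // Safe to read without triggering a query: embed it as a nested map.
    map.put("consumerEntity", transBean2Map(entity.getConsumerEntity()));
}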
I'm using spring-data-elasticsearch, and at first everything works fine.
@Document(type = "products", indexName = "empty")
public class Product {
    ...
}

public interface ProductRepository extends ElasticsearchRepository<Product, String> {
    ...
}
In my model I can search for products.
@Autowired
private ProductRepository repository;
...
repository.findByIdentifier("xxx").getCategory();
So, my problem is: I have the same Elasticsearch type in different indices, and I want to use the same document class for all queries. I could handle multiple connections via a pool, but I have no idea how to implement this.
I would like to have something like this:
ProductRepository customerRepo = ElasticsearchPool.getRepoByCustomer("abc", ProductRepository.class);
customerRepo.findByIdentifier("xxx").getCategory();
Is it possible to create a repository at runtime, with a different index?
Thanks a lot
Marcel
Yes, it's possible with Spring, but you should use ElasticsearchTemplate instead of a repository.
For example: I have two products, and they are stored in different indices.
@Document(indexName = "product-a", type = "product")
public class ProductA {

    @Id
    private String id;

    private String name;
    private int value;

    // Getters and setters
}

@Document(indexName = "product-b", type = "product")
public class ProductB {

    @Id
    private String id;

    private String name;

    // Getters and setters
}
Suppose they have the same type, so they have the same fields. But that's not necessary; two products could have totally different fields.
I have two repositories:
public interface ProductARepository extends ElasticsearchRepository<ProductA, String> {
}

public interface ProductBRepository extends ElasticsearchRepository<ProductB, String> {
}
They aren't necessary either; they're only here for testing. The point is that ProductA is stored in the "product-a" index and ProductB is stored in the "product-b" index.
How do you query two (ten, a dozen) indices with the same type?
Just build a custom repository like this:
@Repository
public class CustomProductRepositoryImpl {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public List<ProductA> findProductByName(String name) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchPhrasePrefixQuery("name", name);
        // You can query as many indices as you want
        IndicesQueryBuilder builder = QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");
        SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder).build();
        return elasticsearchTemplate.query(searchQuery, response -> {
            SearchHits hits = response.getHits();
            List<ProductA> result = new ArrayList<>();
            Arrays.stream(hits.getHits()).forEach(h -> {
                Map<String, Object> source = h.getSource();
                // Get only the id, just for the test
                ProductA productA = new ProductA()
                        .setId(String.valueOf(source.getOrDefault("id", null)));
                result.add(productA);
            });
            return result;
        });
    }
}
You can search as many indices as you want, and you can transparently inject this behavior into ProductARepository by adding custom behavior to single repositories.
A second solution is to use index aliases, but you would have to create a custom model or a custom repository too.
We can use the withIndices method to switch the index if needed:
NativeSearchQueryBuilder nativeSearchQueryBuilder = nativeSearchQueryBuilderConfig.getNativeSearchQueryBuilder();
// Assign the index explicitly.
nativeSearchQueryBuilder.withIndices("product-a");
// Then add query as usual.
nativeSearchQueryBuilder.withQuery(allQueries);
The @Document annotation on the entity only defines the mapping; to query against a specific index, we still need to use the method above.
@Document(indexName = "product-a", type = "_doc")
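Executing the built query might then look like this (a sketch against the pre-4.x ElasticsearchTemplate API):

SearchQuery searchQuery = nativeSearchQueryBuilder.build();
List<ProductA> products = elasticsearchTemplate.queryForList(searchQuery, ProductA.class);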