Hibernate Validator for proxied collection - java

I was trying to use the Hibernate Validator @Size constraint on a @OneToMany collection which is lazily initialized. If I create the parent entity with the children already added to this collection, the validation is applied on trying to persist.
But if I simply find the parent entity and then call getChildren(), the validation is not applied at all. I even tried putting the annotation on the getter. I am using @Size(max = 1), but Hibernate doesn't throw any exception even when there is more than one child; even an EAGER fetch does not help.
For now I have had to put the validation logic in the getter myself, but obviously this is not a clean way. Kindly let me know if someone has faced this issue before and whether there is an elegant way of doing this.

Event-based validation is triggered on persist, update and remove. These are the events JPA defines for Bean Validation to occur (refer to the JSR-317 and JSR-338 specifications for more details). There is no validation when loading entities or associations from the database; the assumption is that persisted data has already been validated. If you need to validate in your scenario, you do indeed need to validate manually.
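For example, a minimal sketch of manual validation after loading (Parent stands in for your entity class; note that with the default JPA-aware TraversableResolver, constraints on uninitialized lazy properties are skipped, which is what the TraversableResolver discussion below addresses):
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validation;
import javax.validation.Validator;

public class ManualValidation {

    private final Validator validator =
            Validation.buildDefaultValidatorFactory().getValidator();

    // Validate a loaded entity; @Size on the children collection is
    // checked once the collection has been initialized.
    public void validateAfterLoad(Parent parent) {
        parent.getChildren().size(); // force initialization while the session is open
        Set<ConstraintViolation<Parent>> violations = validator.validate(parent);
        if (!violations.isEmpty()) {
            throw new ConstraintViolationException(violations);
        }
    }
}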

Hibernate Validator provides two TraversableResolvers out of the box, which are enabled automatically depending on your environment. The first is DefaultTraversableResolver, which always returns true for isReachable() and isCascadable(). The second is JPATraversableResolver, which gets enabled when Hibernate Validator is used in combination with JPA 2.
Either create your own implementation of TraversableResolver, or use DefaultTraversableResolver and configure Hibernate Validator with it:
import java.lang.annotation.ElementType;

import javax.validation.Path;
import javax.validation.TraversableResolver;

public class MyTraversableResolver implements TraversableResolver {

    @Override
    public boolean isReachable(
            Object traversableObject,
            Path.Node traversableProperty,
            Class<?> rootBeanType,
            Path pathToTraversableObject,
            ElementType elementType) {
        return true;
    }

    @Override
    public boolean isCascadable(
            Object traversableObject,
            Path.Node traversableProperty,
            Class<?> rootBeanType,
            Path pathToTraversableObject,
            ElementType elementType) {
        return true;
    }
}
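A resolver like this can be registered when bootstrapping the ValidatorFactory. A minimal sketch using the standard Bean Validation bootstrap API:
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

public class ValidatorBootstrap {

    // Build a Validator that uses the permissive resolver above, so lazy
    // properties are traversed (and therefore loaded) during validation.
    public static Validator buildValidator() {
        ValidatorFactory factory = Validation.byDefaultProvider()
                .configure()
                .traversableResolver(new MyTraversableResolver())
                .buildValidatorFactory();
        return factory.getValidator();
    }
}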

Related

touch equivalent for Hibernate entity

I'd like to implement a repository method void touch(MyEntity myEntity) which forces an SQL UPDATE of the entity's columns to their current values. (The reason behind this is an ON UPDATE trigger which needs to be invoked at some point of the execution.) The ideal use case is:
void serviceMethod(Long myEntityId) {
    MyEntity myEntity = myEntityRepository.findOne(myEntityId);
    ...
    myEntityRepository.touch(myEntity);
    ...
}
There are already similar questions on SO which don't work for me: Force update in Hibernate (my entity is detached), Implementing “touch” on JPA entity? (doing some harmless change works but is not general and has a bad impact on code readability), and Hibernate Idempotent Update (a similar example).
I am aware of the session interceptor method findDirty and also of CustomEntityDirtinessStrategy, both described in this article by Vlad Mihalcea. However, it seems that to use findDirty I would have to override the session interceptor, which is not possible from within a repository method, since the interceptor is a final field assigned to the session at session creation. And CustomEntityDirtinessStrategy comes from the SessionFactory, which is global. I rather need a one-shot solution to temporarily consider one concrete entity of one concrete class dirty.
The best working solution so far is to set an invalid entity snapshot (an array of nulls) into the persistence context, so that the subsequent logic in flush() evaluates the entity as differing from its snapshot and forces an update. This works:
@Override
@Transactional
public void touch(final T entity) {
    SessionImpl session = (SessionImpl) em.getDelegate();
    session.update(entity);
    StatefulPersistenceContext pctx = (StatefulPersistenceContext) session.getPersistenceContext();
    Serializable id = session.getIdentifier(entity);
    EntityPersister persister = session.getEntityPersister(null, entity);
    EntityKey entityKey = session.generateEntityKey(id, persister);
    int length = persister.getPropertyNames().length;
    Field entitySnapshotsByKeyField = FieldUtils.getField(pctx.getClass(), "entitySnapshotsByKey", true);
    @SuppressWarnings("unchecked")
    Map<EntityKey, Object> entitySnapshotsByKey = (Map<EntityKey, Object>) ReflectionUtils.getField(entitySnapshotsByKeyField, pctx);
    // Replace the snapshot with an array of nulls so flush() sees the entity as dirty.
    entitySnapshotsByKey.put(entityKey, new Object[length]);
    session.flush();
    em.refresh(entity);
}
The advice in Force update in Hibernate didn't work for me, because session.evict(entity) removes the entitySnapshotsByKey entry entirely, which causes the subsequent org.hibernate.event.internal.DefaultFlushEntityEventListener#getDatabaseSnapshot to load a fresh entity from the db. The question is 9 years old and I'm not sure whether it's applicable to the current version of Hibernate (mine is 5.2.17).
I am not satisfied with such a hacky solution, though. Is there a more straightforward way, or something simpler I could do?
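One partial alternative, in case the entity can be versioned: plain JPA can force an UPDATE (of the version column only) without reflection hacks, and a row-level ON UPDATE trigger would still fire. A sketch, assuming MyEntity carries a @Version attribute:
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

// Sketch: lock the managed, versioned entity with OPTIMISTIC_FORCE_INCREMENT
// so that flush/commit issues an UPDATE bumping the version column.
public void touch(EntityManager em, Long myEntityId) {
    MyEntity myEntity = em.find(MyEntity.class, myEntityId);
    em.lock(myEntity, LockModeType.OPTIMISTIC_FORCE_INCREMENT);
}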

@Transactional annotation, Spring Boot 2.0 and Hibernate LazyInitializationException

I have the following question. From what I understand, the @Transactional annotation is supposed to keep the session alive, thus enabling lazy fetching of child entities without the need to perform a specific join query.
I have the following scenario where I do not understand why I'm still getting a LazyInitializationException.
My app runs a resolver in order to provide the various controller services with a resolved object so that it can be used directly.
Said resolver intercepts a header from the request and, using its value, queries the db in order to fetch the object. Now, the object in question is quite simple in its doings, albeit it has a list of two sub-entities.
In order to perform the resolving action I'm using an extra service where I basically wrap some JpaRepository methods. The complete code is below:
@Service
public class AppClientServiceImpl implements AppClientService {

    private static final Logger LOGGER = LoggerFactory.getLogger(AppClientServiceImpl.class.getCanonicalName());

    private final AppClientRepository repository;

    @Autowired
    public AppClientServiceImpl(AppClientRepository repository) {
        this.repository = repository;
    }

    @Override
    @Transactional(readOnly = true)
    public AppClient getByAppClientId(final String appClientId) {
        LOGGER.debug("Attempting to retrieve appClient with id:: {}", appClientId);
        return repository.findByAppClientId(appClientId);
    }

    @Override
    @Transactional
    public void saveAndFlush(final AppClient appClient) {
        LOGGER.debug("Attempting to save/update appClient:: {}", appClient);
        repository.saveAndFlush(appClient);
    }
}
As you can see, both methods are annotated with @Transactional, meaning that they should keep the session alive in the context of that method.
Now, my main questions are the following:
1) Using the debugger, I see that even at that level, in getByAppClientId, the lazily loaded list containing the sub-entities has been resolved just fine.
2) On the resolver itself, where the object has been received from the delegating method, the list fails to be evaluated due to a LazyInitializationException.
3) Finally, in the controller service method, which is also marked as @Transactional, the same as above occurs, meaning that it eventually fails to do its job (since it performs a get on the list that has failed to initialize).
Based on all the above, I would like to know the best approach to handling this. For one, I do not want to use an EAGER fetch type, and I would also like to avoid using fetch queries. Marking my resolver as @Transactional, thus keeping the session open there as well, is also out of the question.
I thought that since @Transactional keeps the session open, the final service method would be able to obtain the list of sub-entities. This seems not to be the case.
Based on all the above, it seems that I need a way for the final service method that gets called (which needs the list at hand) to fetch it somehow.
What would be the best approach to handle this? I've read quite a few posts here, but I cannot make out which is the most accepted method as of Spring Boot 2.0 and Hibernate 5.
Update:
It seems that annotating the sub-entities with the following:
@Fetch(FetchMode.SELECT)
@LazyCollection(LazyCollectionOption.TRUE)
resolves the problem, but I still don't know whether this is the best approach.
You initialize the collection by debugging. The debugger usually represents collections in a special way, using the collection methods that trigger initialization, so that might be the reason why it seems to work fine during debugging. I suppose the resolver runs outside the scope of getByAppClientId? At that point the session is closed, which is why you see the exception.
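If eager fetching and fetch queries are out, one common pattern is to initialize the collection explicitly inside the transactional service method before returning. A sketch (getSubEntities is an illustrative getter; Hibernate.initialize is org.hibernate.Hibernate):
@Override
@Transactional(readOnly = true)
public AppClient getByAppClientId(final String appClientId) {
    AppClient appClient = repository.findByAppClientId(appClientId);
    // Force-load the lazy collection while the session is still open, so
    // callers outside the transaction can read it without an exception.
    Hibernate.initialize(appClient.getSubEntities());
    return appClient;
}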
I created Blaze-Persistence Entity Views for exactly that use case. You essentially define DTOs for JPA entities as interfaces and apply them to a query. It supports mapping nested DTOs, collections etc., essentially everything you'd expect, and on top of that it will improve your query performance, as it generates queries that fetch just the data you actually require for the DTOs.
The entity views for your example could look like this:
@EntityView(AppClient.class)
interface AppClientDto {
    String getName();
}
Querying could look like this:
List<AppClientDto> dtos = entityViewManager.applySetting(
        EntityViewSetting.create(AppClientDto.class),
        criteriaBuilderFactory.create(em, AppClient.class)
).getResultList();

Bean validation: @ElementCollection and @Version conflict and validation fails

I am facing a very strange issue at the moment.
I have an entity that contains a property that is an element collection.
@ElementCollection(targetClass = Integer.class, fetch = FetchType.EAGER)
@CollectionTable(name = "campaign_publisher", joinColumns = @JoinColumn(name = "campaign_id"))
@Column(name = "publisher_id")
...
@NotEmpty(message = "campaign.publishers.missing")
public Set<Integer> getPublishers() {
    return this.publishers;
}

public Campaign setPublishers(Set<Integer> publisherId) {
    this.publishers = publisherId;
    return this;
}
This all works fine. The values are validated and saved correctly.
I also want this entity to support optimistic concurrency, so I applied a @Version annotation as well.
@Version
private Long etag = 0L;
...
public Long getEtag() {
    return etag;
}

public void setEtag(Long etag) {
    this.etag = etag;
}
By adding the @Version annotation, the @NotEmpty validation on my set of publishers always returns invalid.
To try and diagnose this I have tried the following:
Creating a custom validator at the entity level so I could inspect the values in the entity. I found that the Set of values had been replaced with an empty PersistentSet, which is causing the validation to always fail.
I created some unit tests for the entity using a validator retrieved from the ValidatorFactory, and this validator seems to work as expected.
I have also tried changing the @ElementCollection to a many-to-many relationship and to a bidirectional one-to-many, but the issue persists.
Right now I am out of ideas. The only thing I have found that works correctly is disabling the Hibernate validation and manually calling the validator just before I save my data.
So my questions are:
Has anyone encountered this issue before?
Any advice on what I could try next?
Thank you all for reading!
Short answer: set the initial value of etag to null.
// this should do the trick
@Version
private Long etag = null;
Longer one: when you add optimistic locking via the @Version annotation on a field with a default value, you make Hibernate/Spring Data think that the entity is not a new one (even though the id is null). So on the initial save, instead of persisting the entity, the underlying libraries try to do a merge. Merging a transient entity forces Hibernate to copy the properties one by one from the source entity (the one you are persisting) to the target one (which is auto-created by Hibernate with all properties set to their default values, i.e. nulls). And here comes the problem: Hibernate will only copy the values of associations of FROM_PARENT type, in other words only associations held on the entity side. But in your case the association is TO_PARENT (a foreign key from child to parent), so Hibernate will try to postpone persisting the association until after the main entity is saved, and that save will not work because the entity does not pass the @NotEmpty validation.
First, I would suggest removing the default value initialization from your @Version property. This property is maintained by Hibernate and should be initialized by it.
Second: are you sure that you are validating the fully constructed entity? I.e., you construct something, then do something else, and by the time of the actual persist/flush cycle your entity is in an invalid state.
To clarify this, since you are on the Spring side, I would suggest introducing service-level validation at your DAO layer, i.e. forcing bean validation during the initial call to the DAO rather than bean validation of the entity during flush (yup, Hibernate batches lots of things, and the actual validation happens only during the flush cycle).
To achieve this, mark your DAO @Validated and have your method arguments validated: FancyEntity store(@Valid @NotNull FancyEntity fancyEntity) { em.persist(fancyEntity); em.flush(); return fancyEntity; }
By doing this you can be sure that you are storing a valid entity: the validation happens before the store method is called. This will reveal where your entity becomes invalid: in your service layer, or in a badly behaving Hibernate layer.
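Spelled out, the suggested DAO could look like the sketch below. FancyEntity is a placeholder, and method-level validation assumes Spring's MethodValidationPostProcessor is registered:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.validation.annotation.Validated;

@Repository
@Validated
public class FancyEntityDao {

    @PersistenceContext
    private EntityManager em;

    // @Valid triggers bean validation on the argument before the method
    // body runs, i.e. before any persist/flush can happen.
    @Transactional
    public FancyEntity store(@Valid @NotNull FancyEntity fancyEntity) {
        em.persist(fancyEntity);
        em.flush();
        return fancyEntity;
    }
}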
I noticed that you use mixed access: methods and fields. In this case you can try to put @Version on the method:
@Version
public Long getEtag() {
    return etag;
}
not on the field.

Changing entity schema name before SessionFactory initialization

During migration from Hibernate 3 to Hibernate 4 I ran into a problem.
I use Spring and Hibernate in my project, and during startup of my application I sometimes want to change the schema of my entity classes. With Hibernate 3 and Spring I did this by overriding the postProcessAnnotationConfiguration method of the LocalSessionFactoryBean class, like this:
@SuppressWarnings("unchecked")
@Override
protected void postProcessAnnotationConfiguration(AnnotationConfiguration config) {
    Iterator<Table> it = config.getTableMappings();
    while (it.hasNext()) {
        Table table = it.next();
        table.setSchema(schemaConfigurator.getSchemaName(table.getSchema()));
    }
}
This worked perfectly for me. But in the hibernate4.LocalSessionFactoryBean class all the postProcess methods were removed. Some people suggest using the ServiceRegistryBuilder class, but I want to use Spring XML configuration for my session factory, and with ServiceRegistryBuilder I don't know how to achieve that. So maybe someone can suggest a solution to my problem.
Looking at the source code helped to find a solution. The LocalSessionFactoryBean class has a method called buildSessionFactory (newSessionFactory in the previous version). With the previous version of Hibernate (version 3), some operations were performed before this method was called. You can see them in the official docs:
// Tell Hibernate to eagerly compile the mappings that we registered,
// for availability of the mapping information in further processing.
postProcessMappings(config);
config.buildMappings();
As I understand it (I may be wrong), this buildMappings method parses all classes that are specified as mapped classes or placed in packagesToScan and creates a Table representation for each of them. After this, the postProcessConfiguration method is called.
With Hibernate 4 we don't have such postProcess methods, but we can override the buildSessionFactory method like this:
@Override
protected SessionFactory buildSessionFactory(LocalSessionFactoryBuilder sfb) {
    sfb.buildMappings();
    // For my task we need this
    Iterator<Table> iterator = getConfiguration().getTableMappings();
    while (iterator.hasNext()) {
        Table table = iterator.next();
        if (table.getSchema() != null && !table.getSchema().isEmpty()) {
            table.setSchema(schemaConfigurator.getSchemaName(table.getSchema()));
        }
    }
    return super.buildSessionFactory(sfb);
}
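The override lives in your own subclass of LocalSessionFactoryBean, which you then wire in the existing Spring XML in place of the stock class. A sketch (class name and properties are illustrative):
<bean id="sessionFactory" class="my.project.SchemaOverridingSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="packagesToScan" value="my.project.domain" />
</bean>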

Prevent Dozer from triggering Hibernate lazy loading

I am using Spring transactions, so the transaction is still active when the POJO-to-DTO conversion occurs.
I would like to prevent Dozer from triggering lazy loading, so that hidden SQL queries never occur: all fetching has to be done explicitly via HQL (to get the best control over performance).
Is this a good practice (I can't find it documented anywhere)?
How can it be done safely?
I tried this before the DTO conversion:
PlatformTransactionManager tm = (PlatformTransactionManager) SingletonFactoryProvider.getSingletonFactory().getSingleton("transactionManager");
tm.commit(tm.getTransaction(new DefaultTransactionDefinition()));
I don't know what happens to the transaction, but the Hibernate session doesn't get closed, and the lazy loading still occurs.
I tried this:
SessionFactory sf = (SessionFactory) SingletonFactoryProvider.getSingletonFactory().getSingleton("sessionFactory");
sf.getCurrentSession().clear();
sf.getCurrentSession().close();
And it prevents lazy loading, but is it good practice to manipulate the session directly in the application layer (which is called the "facade" in my project)? What negative side effects should I fear? (I've already seen that tests involving POJO -> DTO conversions can no longer be launched through the AbstractTransactionnalDatasource Spring test classes, because these classes try to trigger a rollback on a transaction which is no longer linked to an active session.)
I've also tried setting the propagation to NOT_SUPPORTED or REQUIRES_NEW, but it reuses the current Hibernate session and doesn't prevent lazy loading.
The only generic solution I have found for managing this (after looking into Custom Converters, Event Listeners & Proxy Resolvers) is to implement a custom field mapper. I found this functionality tucked away in the Dozer API (I don't believe it is documented in the User Guide).
A simple example is as follows:
public class MyCustomFieldMapper implements CustomFieldMapper {

    public boolean mapField(Object source, Object destination, Object sourceFieldValue,
            ClassMap classMap, FieldMap fieldMapping) {

        // Check if the field is a Hibernate collection proxy
        if (!(sourceFieldValue instanceof AbstractPersistentCollection)) {
            // Allow Dozer to map as normal
            return false;
        }

        // Check if the field is already initialized
        if (((AbstractPersistentCollection) sourceFieldValue).wasInitialized()) {
            // Allow Dozer to map as normal
            return false;
        }

        // Tell Dozer the field is handled; the destination field is left
        // unset, so non-initialized collections end up as null in the DTO.
        return true;
    }
}
This will return any non-initialized PersistentSet objects as null. I do this so that when they are passed to the client I can differentiate between a null (non-loaded) collection and an empty collection. This allows me to define generic behaviour in the client to either use the pre-loaded set or make another service call to retrieve the set (if required). Additionally, if you decide to eagerly load any collections within the service layer, they will be mapped as usual.
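On the client side, the null-vs-empty distinction can then drive a generic fallback along these lines (names are hypothetical):
// Hypothetical client-side fallback: null means "not loaded", so fetch on
// demand; an empty set means "loaded, but genuinely empty".
Set<OrderDto> orders = customerDto.getOrders();
if (orders == null) {
    orders = orderService.findOrdersForCustomer(customerDto.getId());
}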
I inject the custom field mapper using spring:
<bean id="dozerMapper" class="org.dozer.DozerBeanMapper" lazy-init="false">
<property name="mappingFiles">
...
</property>
<property name="customFieldMapper" ref="dozerCustomFieldMapper" />
</bean>
<bean id="dozerCustomFieldMapper" class="my.project.MyCustomFieldMapper" />
I hope this helps anyone searching for a solution for this, as I failed to find any examples when searching the Internet.
A variation on the popular version above makes sure to catch PersistentBags, PersistentSets, you name it...
public class LazyLoadSensitiveMapper implements CustomFieldMapper {

    public boolean mapField(Object source, Object destination, Object sourceFieldValue,
            ClassMap classMap, FieldMap fieldMapping) {

        // Check if the field is derived from a persistent collection
        if (!(sourceFieldValue instanceof AbstractPersistentCollection)) {
            // Allow Dozer to map as normal
            return false;
        }

        // If the field is already initialized, Dozer will continue mapping
        if (((AbstractPersistentCollection) sourceFieldValue).wasInitialized()) {
            return false;
        }

        return true;
    }
}
I didn't get the above to work (probably different versions). However, this works fine:
public class HibernateInitializedFieldMapper implements CustomFieldMapper {

    public boolean mapField(Object source, Object destination, Object sourceFieldValue,
            ClassMap classMap, FieldMap fieldMapping) {
        // If the field is initialized, Dozer will continue mapping;
        // otherwise report it as handled so it stays null.
        return !Hibernate.isInitialized(sourceFieldValue);
    }
}
Have you considered disabling lazy loading altogether?
It doesn't really seem to jibe with the pattern you state you would like to use:
I would like to prevent Dozer from triggering lazy loading, so that hidden SQL queries never occur: all fetching has to be done explicitly via HQL (to get the best control over performance).
This suggests you never want to use lazy loading.
Dozer and the Hibernate-backed beans you pass to it are blissfully ignorant of each other; all Dozer knows is that it is accessing properties in the bean, and the Hibernate-backed bean responds to calls to get() a lazy-loaded collection just as it would if you were accessing those properties yourself.
Any tricks to make Dozer aware of the Hibernate proxies in your beans, or vice versa, would IMO break down the layers of your app.
If you don't want any "hidden SQL queries" fired at unexpected times, simply disable lazy loading.
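Disabling it is just a mapping change, e.g.:
// Sketch: make the association eager so no uninitialized proxy ever
// reaches Dozer (field and mappedBy names are illustrative).
@OneToMany(mappedBy = "parent", fetch = FetchType.EAGER)
private Set<Child> children;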
A short version of this mapper would be:
return sourceFieldValue instanceof AbstractPersistentCollection
        && !((AbstractPersistentCollection) sourceFieldValue).wasInitialized();
Using a CustomFieldMapper may not be a good idea, as it is invoked for every field of your source class, while our concern is only the lazy association mapping (the child object list). So we can instead return null from the getter of the entity object:
public Set<ChildObject> getChild() {
    if (Hibernate.isInitialized(child)) {
        return child;
    } else {
        return null;
    }
}
