How to test LazyInitializationExceptions? - java

I have some code which (in production):
In one thread, primes a cache with data from the DB.
In another thread, grabs the data from the cache and starts iterating its properties.
This threw a LazyInitializationException.
While I know how to fix the problem, I want to get a test around this. However, I can't figure out how to recreate the exception in the correct part of the test.
I have to prime the DB with some test data, therefore my test is annotated with @Transactional. Failing to do so causes the set-up to fail with... you guessed it... a LazyInitializationException.
Here's my current test:
@Transactional
public class UpdateCachedMarketPricingActionTest extends AbstractIntegrationTest {

    @Autowired
    private UpdateCachedMarketPricingAction action;

    @Autowired
    private PopulateMarketCachesTask populateMarketCachesTask;

    @Test @SneakyThrows
    public void updatesCachedValues() {
        // Populate the cache from a different thread, as this is how it happens in real life
        Thread updater = new Thread(new Runnable() {
            @Override
            public void run() {
                populateMarketCachesTask.populateCaches();
            }
        });
        updater.start();
        updater.join();

        updateMessage = {...} // omitted
        action.processInstrumentUpdate(updateMessage);
    }
}
So, I'm priming my cache in a separate thread, to try to get it outside of the current @Transactional scope. Additionally, I'm also calling entityManager.detach(entity) inside the cache primer, to try to ensure that the entities that exist within the cache can't lazy-load their collections.
However, the test passes... no exception is thrown.
How can I forcibly get an entity into a state such that the next time I try to iterate its collections, it throws a LazyInitializationException?

You need to ensure that the transactions for each operation are committed independently of each other. Annotating your test method or test class with @Transactional leaves the current test transaction open and then rolls it back after execution of the entire test.
So one option is to do something like the following:
@Autowired
private PlatformTransactionManager transactionManager;

@Test
public void example() {
    new TransactionTemplate(transactionManager).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // add your code here...
        }
    });
}
You could invoke your first operation in its own callback, and then invoke the second operation in a different callback. Then, when you access Hibernate or JPA entities after the callbacks, the entities will no longer be attached to the current unit of work (e.g., Hibernate Session). Consequently, accessing a lazy collection or field at that point would result in a LazyInitializationException.
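For example, one way the test above might be adapted along those lines (a sketch: the @Transactional annotation is removed from the test class, JUnit's fail() is assumed to be statically imported, and updateMessage is built exactly as in the question):

@Autowired
private PlatformTransactionManager transactionManager;

@Test
public void throwsLazyInitializationExceptionForDetachedCacheEntries() {
    // First operation: prime the cache in its own, committed transaction
    new TransactionTemplate(transactionManager).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            populateMarketCachesTask.populateCaches();
        }
    });

    // Second operation: runs outside that unit of work, so the cached entities are detached
    // and touching a lazy collection should now fail
    try {
        action.processInstrumentUpdate(updateMessage);
        fail("expected a LazyInitializationException");
    } catch (LazyInitializationException expected) {
        // this is the production failure we wanted to reproduce
    }
}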
Regards,
Sam
p.s. please note that this technique will naturally leave changes committed to your database. So if you need to clean up that modified state, consider doing so manually in an @AfterTransaction method.

Related

How can I tell if current session is dirty?

I want to publish an event if and only if there were changes to the DB. I'm running under @Transactional in a Spring context, and I came up with this check:
Session session = entityManager.unwrap(Session.class);
session.isDirty();
That seems to fail for new (Transient) objects:
@Transactional
public Entity save(Entity newEntity) {
    Entity entity = entityRepository.save(newEntity);
    Session session = entityManager.unwrap(Session.class);
    session.isDirty(); // <-- returns `false` ):
    return entity;
}
Based on the answer here https://stackoverflow.com/a/5268617/672689 I would expect it to work and return true.
What am I missing?
UPDATE
Considering @fladdimir's answer: although this function is called in a transaction context, I did add @Transactional (from org.springframework.transaction.annotation) on the function, but I still encounter the same behaviour. isDirty() is still returning false.
Moreover, as expected, the new entity doesn't show up in the DB while the program is held on a breakpoint at the line of session.isDirty().
UPDATE_2
I also tried changing the session flush modes before calling the repository save, also without any effect:
session.setFlushMode(FlushModeType.COMMIT);
session.setHibernateFlushMode(FlushMode.MANUAL);
First of all, Session.isDirty() has a different meaning than what I understood. It tells whether the current session is holding in-memory changes that haven't yet been sent to the DB, whereas I thought it told whether the transaction had issued any modifying statements. When saving a new entity, even within a transaction, the insert must be sent to the DB in order to obtain the new entity's id, so isDirty() will always be false after it.
So I ended up creating a class that extends SessionImpl and holds the change status for the session, updating it on persist and merge calls (the functions Hibernate is using).
So this is the class I wrote:
import org.hibernate.HibernateException;
import org.hibernate.internal.SessionCreationOptions;
import org.hibernate.internal.SessionFactoryImpl;
import org.hibernate.internal.SessionImpl;

public class CustomSession extends SessionImpl {

    private boolean changed;

    public CustomSession(SessionFactoryImpl factory, SessionCreationOptions options) {
        super(factory, options);
        changed = false;
    }

    @Override
    public void persist(Object object) throws HibernateException {
        super.persist(object);
        changed = true;
    }

    @Override
    public void flush() throws HibernateException {
        changed = changed || isDirty();
        super.flush();
    }

    public boolean isChanged() {
        return changed || isDirty();
    }
}
In order to use it I had to:
extend SessionFactoryImpl.SessionBuilderImpl to override the openSession function and return my CustomSession
extend SessionFactoryImpl to override the withOptions function to return the extended SessionFactoryImpl.SessionBuilderImpl
extend AbstractDelegatingSessionFactoryBuilderImplementor to override the build function to return the extended SessionFactoryImpl
implement SessionFactoryBuilderFactory and its getSessionFactoryBuilder to return the extended AbstractDelegatingSessionFactoryBuilderImplementor
add an org.hibernate.boot.spi.SessionFactoryBuilderFactory file under META-INF/services whose content is the full class name of my SessionFactoryBuilderFactory implementation, so that it is picked up via the service loader (a sketch of this wiring follows below)
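As a rough illustration of those last two steps, here is what the SPI entry point might look like (CustomSessionFactoryBuilder stands in for the extended builder described above and is purely hypothetical; only the factory class and the service-file wiring are sketched):

import org.hibernate.boot.SessionFactoryBuilder;
import org.hibernate.boot.spi.MetadataImplementor;
import org.hibernate.boot.spi.SessionFactoryBuilderFactory;
import org.hibernate.boot.spi.SessionFactoryBuilderImplementor;

public class CustomSessionFactoryBuilderFactory implements SessionFactoryBuilderFactory {

    @Override
    public SessionFactoryBuilder getSessionFactoryBuilder(MetadataImplementor metadata,
                                                          SessionFactoryBuilderImplementor defaultBuilder) {
        // return the extended builder wrapping the default one, so that build()
        // eventually produces the factory whose sessions are CustomSession instances
        return new CustomSessionFactoryBuilder(metadata, defaultBuilder); // hypothetical class from the steps above
    }
}

The META-INF/services/org.hibernate.boot.spi.SessionFactoryBuilderFactory file then contains a single line with the fully qualified name of CustomSessionFactoryBuilderFactory.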
UPDATE
There was a bug with capturing the "merge" calls (as tremendous7 commented), so I ended up capturing the isDirty state before any flush, and also checking it once more in isChanged().
The following is a different approach you might be able to leverage to track dirtiness.
Though architecturally different from your sample code, it may be closer to your actual goal (I want to publish an event if and only if there were changes to the DB).
Maybe you could use an Interceptor listener to let the entity manager do the heavy lifting and just TELL you what's dirty. Then you only have to react to it, instead of prodding it to sort out what's dirty in the first place.
Take a look at this article: https://www.baeldung.com/hibernate-entity-lifecycle
It has a lot of test cases that check for dirtiness of objects being saved in various contexts. It relies on a piece of code called the DirtyDataInspector, which effectively listens for any items flagged dirty on flush and simply remembers them (i.e. keeps them in a list), so the unit tests can assert that the things that SHOULD have been dirty were actually flushed as dirty.
The dirty data inspector code is on their GitHub. Here's the direct link for ease of access.
Here is the code where the interceptor is applied to the factory so it can take effect. You might need to wire this up in your injection framework accordingly.
The Interceptor it is based on has a TON of lifecycle methods you can probably exploit to get exactly the behavior of "do this only if a dirty save actually occurred".
You can see the full docs of it here.
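In the same spirit, a minimal sketch of such an interceptor (the class and field names below are assumptions, not the Baeldung code verbatim; it targets the Hibernate 5.x Interceptor SPI):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class DirtyEntityInterceptor extends EmptyInterceptor {

    private final List<Object> dirtyEntities = new ArrayList<>();

    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        // Hibernate calls this for every entity it flushes as dirty
        dirtyEntities.add(entity);
        return false; // state was only observed, not modified
    }

    public List<Object> getDirtyEntities() {
        return dirtyEntities;
    }
}

The interceptor still has to be registered with the session factory (for example via the hibernate.session_factory.interceptor property, or programmatically when the SessionFactory is built); your event-publishing code can then simply check whether getDirtyEntities() is non-empty after the flush.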
We do not know your complete setup, but as @Christian Beikov suggested in the comment, is it possible that the insertion was already flushed before you called isDirty()?
This would happen if you called repository.save(newEntity) without a running transaction, since SimpleJpaRepository's save method is itself annotated with @Transactional:
@Transactional
@Override
public <S extends T> S save(S entity) {
    ...
}
This will wrap the call in a new transaction if none is already active, and flush the insertion to the DB at the end of the transaction just before the method returns.
You might choose to annotate the method where you call save and isDirty with @Transactional, so that the transaction is created when your method is called and propagated to the repository call. This way the transaction will not be committed when save returns, and the session will still be dirty.
(edit, just for completeness: when using an identity ID generation strategy, the insertion of a newly created entity is flushed during the repository's save call in order to generate the ID, before the running transaction is committed)
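For example, with a mapping like the following (a generic sketch, not code from the question), the INSERT must be sent to the database during save() just to obtain the generated id, regardless of when the transaction commits:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class ExampleEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY) // id is assigned by the DB, forcing an early INSERT
    private Long id;

    // with a SEQUENCE strategy the id can be fetched without inserting,
    // so the INSERT itself can be deferred until flush/commit
}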

What happens when a LockType.READ method is called by a LockType.WRITE method

What happens when a LockType.WRITE method in a singleton, container-managed session bean (which is LockType.READ at class level) calls another method within the same bean that is LockType.READ?
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class EmployeBean implements Employee {

    @Lock(LockType.WRITE)
    public Employee update() {
        //update
    }

    public void calculate() {
        //calculate and set
    }
}
With the above bean, is it correct to have an implementation like this? What happens when update() is being executed and, at the same time, some other service calls calculate()? Will the service wait until update() finishes, or will it execute calculate() in parallel? I believe that if it does run in parallel, there is a high chance of corrupting the data or ending up with a data mismatch.
The calculate method could be made private and used only from a WRITE-protected method, as sketched below. That way it is ensured that there cannot be a mismatch because of concurrent requests.
I wanted to know the impact and the correct approach for handling concurrent requests in cases like the above.
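A minimal sketch of that restructuring (method bodies are placeholders; only the locking layout matters):

import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER)
@Lock(LockType.READ)
public class EmployeBean implements Employee {

    @Lock(LockType.WRITE)
    public Employee update() {
        // mutate shared state...
        calculate(); // runs under the WRITE lock already held by update()
        return null; // placeholder
    }

    // no longer a business method, so it can only ever run under the WRITE lock above
    private void calculate() {
        // calculate and set
    }
}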

Alternative to sharing hibernate session between threads

I am working with Spring 3.0.7 / Hibernate 3.5.0.
I have some code that works with a Hibernate entity. It works OK. The problem occurs when I try to write an integration test for this code. Below is a scheme of how the code is laid out with regard to transactions, just to give an idea of what the issue is about.
class EntityDAOImpl<T> implements EntityDAO<T> {

    public T save(T entity) {
        getHibernateTemplate().saveOrUpdate(entity);
        return entity;
    }
}

class EntityManagerImpl implements EntityManager {

    //EntityDAO dao;

    @Transactional
    public Entity createEntity() {
        Entity entity = dao.createNew();
        //set up entity
        dao.save(entity);
        return entity;
    }

    public Entity getFirstEntity() {
        return dao.getFirst();
    }
}
The code is run across two threads.
Thread 1:
//EntityManager entityManager
entityManager.createEntity();
//entity was saved and committed into the DB
Thread thread2 = new Thread();
thread2.start();
//...
thread2.join();
...
Thread 2:
//since the entity was committed, the second thread has no problem reading and working with it here
Entity entity = entityDao.findFirst();
Now I also have an integration test for this code. And here is where the problem lies. Transactions for this test are rolled back, since I don't want to see any changes in the database after it is done.
@TransactionConfiguration(transactionManager="transactionManagerHibernate", defaultRollback=true)
public class SomeTest {

    @Test
    @Transactional
    public void test() throws Exception {
        //the same piece of code as described above is run here,
        //but since entityManager.createEntity() doesn't close the transaction,
        //Thread2 will never find this entity when calling entityDao.findFirst()!!!!!
    }
}
I understand that sharing a Hibernate session between threads is not a good idea.
What approach would you recommend for handling this situation?
My first thought was to try to simplify and avoid threading, but threading helps me spare memory; otherwise I would have to hold huge chunks of data in memory.

Spring Transaction Annotations - get active transaction

In my Spring application I have service-layer methods marked as @Transactional(propagation=Propagation.REQUIRED) and am using <tx:annotation-driven />. Normally the default behavior of automatically committing the transaction when the method completes works like a charm. But in this particular case, I need to commit shortly before the end of the method - yes, even if the parts that come after that point throw an exception.
Is there a way inside such a method to get access to the current transaction? I tried this:
TransactionDefinition td = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_MANDATORY); // make sure we're talking about the same transaction already provided by the annotation
TransactionStatus status = transactionManager.getTransaction(td);
// perform various JDBC operations
transactionManager.commit(status);
methodThatNeedsToBeCalledAfterCommit();
But looking through my logs, I only see "AbstractPlatformTransactionManager.processCommit(752) | Initiating transaction commit" occurring once, and from the timestamps this appears to be after methodThatNeedsToBeCalledAfterCommit(), which would be the normal behavior for #Transactional methods.
Is there a way to actually force a commit inside such a method?
I don't think so. Moreover, Spring will try to commit again at the end of your method.
So two commits: bad.
You should rethink the organization of your methods.
Maybe divide the existing one into two methods: one with @Transactional, the other with your remaining lines (see the sketch below).
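Roughly, that split could look like this (class and method names are hypothetical; the @Transactional method needs to live behind the Spring proxy, e.g. on another bean, for the annotation to apply):

@Service
public class PricingWriter {

    @Transactional
    public void writeChanges() {
        // perform the JDBC operations here; the commit happens when this method returns
    }
}

@Service
public class PricingService {

    @Autowired
    private PricingWriter pricingWriter;

    public void updatePricing() {
        pricingWriter.writeChanges();           // committed here
        methodThatNeedsToBeCalledAfterCommit(); // can fail without rolling back the writes above
    }

    private void methodThatNeedsToBeCalledAfterCommit() {
        // remaining lines from the original method
    }
}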
This is probably because the default transaction propagation is PROPAGATION_REQUIRED, and so it will commit only when the entire transaction is completed, which is the outer method for you. You can try PROPAGATION_REQUIRES_NEW:
td.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
Another alternative would be to use TransactionTemplate
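For example, a sketch using the transactionManager from the question (propagation is set to REQUIRES_NEW here so the inner work commits independently of the surrounding @Transactional method; note that the new transaction will not see the outer transaction's uncommitted changes):

TransactionTemplate template = new TransactionTemplate(transactionManager);
template.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
template.execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // JDBC operations that must be committed before the method ends
    }
});
methodThatNeedsToBeCalledAfterCommit(); // runs after that inner commit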
Try this: call these methods from the code where you want to apply the transaction.

private DefaultTransactionDefinition transdefinition = new DefaultTransactionDefinition();

@Autowired
private PlatformTransactionManager manager; // PlatformTransactionManager is an interface, so inject it rather than instantiating it

private TransactionStatus status;

public void beginTransaction() {
    transdefinition.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRED);
    status = manager.getTransaction(transdefinition);
}

public void commitTransaction() {
    if (!status.isCompleted()) {
        manager.commit(status);
    }
}

public void rollbackTransaction() {
    if (!status.isCompleted()) {
        manager.rollback(status);
    }
}

REQUIRES_NEW within REQUIRES_NEW within REQUIRES_NEW ... on and on

JBoss 4.x
EJB 3.0
I've seen code like the following (greatly abbreviated):
@Stateless
@TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
public class EJB1 implements IEJB1 {

    @EJB
    private IEJB1 self;

    @EJB
    private IEJB2 ejb2;

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod1() {
        return someMethod2();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod2() {
        return self.someMethod3();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public boolean someMethod3() {
        return ejb2.someMethod1();
    }
}
And say EJB2 is almost an exact copy of EJB1 (same three methods), and EJB2.someMethod3() calls into EJB3.someMethod1(), which then finally in EJB3.someMethod3() writes to the DB.
This is a contrived example, but have seen similar code to the above in our codebase. The code actually works just fine.
However, it feels like terrible practice, and I'm concerned about the @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) on every method, even those that don't actually perform any DB writes. Does this actually create a new transaction every single time for every method call, with the result of:
new transaction
-new transaction
--new transaction
---new transaction
...(many more)
-------new transaction (DB write)
And then unwraps at that point? Would this ever be a cause for performance concern? Additional thoughts?
Does this actually create a new transaction every single time for every method call?
No, it doesn't. A new transaction will be created only when calling a method through an EJB reference from another bean. Invoking method2 from method1 within the same bean won't spawn a new transaction.
See also here and here. The latter is an exceptionally good article explaining transaction management in EJB.
Edit:
Thanks @korifey for pointing out that method2 actually calls method3 on a bean reference, thus resulting in a new transaction.
It really creates a new JTA transaction in every EJB, and this can have a serious performance impact on read-only methods (which perform only SELECTs, not updates). Use TransactionAttributeType.SUPPORTS for read-only methods.
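For instance, a read-only finder might be declared like this (a sketch; the entity and the injected EntityManager are assumptions):

@TransactionAttribute(TransactionAttributeType.SUPPORTS)
public Employee findEmployee(long id) {
    // joins a caller's transaction if one exists, but never forces a new one
    return entityManager.find(Employee.class, id);
}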
