Transaction rollback in OSGi - java

I have an OSGi bundle in which I declare a service and configure a JPA context and a transaction for it with Blueprint:
<bean id="MyServiceImpl" class="com.test.impl.MyServiceImpl">
    <jpa:context property="em" unitname="mypu" />
    <tx:transaction method="*" value="Required" />
</bean>
<service id="MyService" ref="MyServiceImpl" interface="com.test.api.MyService" />
In this service I have two methods, each of which writes data to the database, something like the following:
public void createParent() throws MyException {
    Parent parent = new Parent();
    // ... set parent fields
    em.persist(parent);
    createChild();
    // Checks that could throw MyException
}

public void createChild() throws MyException {
    Child child = new Child();
    // ... set child fields
    em.persist(child);
    // Checks that could throw MyException
}
My problems are the following:
1. If I throw a runtime exception in the createParent method between em.persist(parent); and createChild(); the transaction rolls back (as I would expect) and parent is not persisted in the DB. However, if at the same point I throw MyException (which is a checked exception) the transaction commits and parent is persisted. I saw on the Aries mailing list that a declared (checked) exception in a blueprint declarative transaction does not trigger a rollback. Is there a way to configure this behavior and specify that I want my exception to roll back the transaction when thrown?
2. If I throw a runtime exception in the createChild method (after em.persist(child);) child is not persisted in the database, but parent is persisted, as if the two methods were running in two different transactions. Why is that? Shouldn't createChild join the transaction started by createParent?
3. If I throw a runtime exception in the createParent method after the call to createChild I get the same behavior as in point 2 (i.e. parent is persisted and child is not), which confuses me even more, since even if I assume that createChild starts a new transaction, that transaction should not get rolled back when an exception is thrown in createParent.
4. If in points 2 and 3 above the two methods are in different services, then everything works as expected, i.e. a runtime exception thrown in either method rolls back the whole transaction.
Can someone please explain the above behavior?

After getting some help from the Aries mailing list, it turns out the problem was in the datasource configuration, not in the blueprint configuration. Although I was using MysqlXADataSource as the driver class, the datasource service was registered as a javax.sql.DataSource instead of a javax.sql.XADataSource, which is what was messing up my transactions.
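For reference, a minimal sketch (in plain Java, with placeholder URL, credentials and service properties) of registering the MySQL driver under javax.sql.XADataSource rather than javax.sql.DataSource:

import java.util.Hashtable;
import javax.sql.XADataSource;
import org.osgi.framework.BundleContext;
import com.mysql.jdbc.jdbc2.optional.MysqlXADataSource;

public class XaDataSourceRegistrar {

    public void register(BundleContext context) {
        // Connector/J 5.x XA-capable data source
        MysqlXADataSource ds = new MysqlXADataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/mydb"); // placeholder
        ds.setUser("user");                            // placeholder
        ds.setPassword("secret");                      // placeholder

        Hashtable<String, Object> props = new Hashtable<>();
        props.put("osgi.jndi.service.name", "jdbc/mypu"); // placeholder name

        // The crucial part: publish it as javax.sql.XADataSource,
        // not javax.sql.DataSource, so it can be enlisted in JTA transactions.
        context.registerService(XADataSource.class, ds, props);
    }
}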

1: A couple of years ago I asked the same question. While in Spring you can specify which exceptions should cause a rollback and which should not, in blueprint you cannot. After a while I found the book "Clean Code" and read the chapter on error handling, and it was an eye-opener. I will not try to repeat what the book says; I think after reading it you will have some useful basic ideas for forming your own opinion on whether this is the right behaviour.
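For comparison, the Spring behaviour mentioned above is configured per exception type; a sketch with Spring's @Transactional (the blueprint tx:transaction element has no equivalent, as far as I know):

import org.springframework.transaction.annotation.Transactional;

public class MyServiceSpringImpl {

    // By default only unchecked exceptions trigger a rollback;
    // rollbackFor widens the rule to the checked MyException.
    @Transactional(rollbackFor = MyException.class)
    public void createParent() throws MyException {
        // ...
    }
}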
2: There can be two options:
You throw the exception before the persist call and catch it in the parent. The child function is only wrapped with the intercepting logic if it is called from outside, so there is no rollback handling when the call from the child goes back to the parent, in whatever way. Just think of class wrapping: unless you use bytecode manipulation or runtime class inheritance (rather than plain Java proxy classes), you cannot wrap a class in a way that intercepts calls between methods of the same object. Aries with ASM probably tries to do the trick (if ASM 4 is present), but personally I do not like this kind of trick.
You found a bug.
3: That confuses me, too :). Are you sure you do not throw the exception after persisting in the parent but before calling the child? Perhaps ASM is present, and if so, jta-blueprint has a bug... Debugging would be necessary to find out what happens.
4: Nice to hear that it can work somehow :)

Related

LazyInitializationException in unit tests under Spring-Data/Spring-Boot

My unit tests are seeing org.hibernate.LazyInitializationException: could not initialize proxy [org.openapitools.entity.MenuItem#5] - no Session. I'm not sure why they expect a session in a unit test. I'm trying to write to an in-memory h2 database for the unit tests of my Controller classes that implement the RESTful APIs. I'm not using any mock objects for the test, because I want to test the actual database transactions. This worked fine when I was using Spring-Boot version 1.x, but broke when I moved to version 2. (I'm not sure if that's what caused the tests to break, since I made lots of other changes. My point is that my code has passed these tests already.)
My Repositories extend JPARepository, so I'm using a standard Hibernate interface.
There are many answers to this question on StackOverflow, but very few describe a solution that I could use with Spring-Data.
Addendum: Here's a look at the unit test:
@Test
public void testDeleteOption() throws ResponseException {
    MenuItemDto menuItemDto = createPizzaMenuItem();
    ResponseEntity<CreatedResponse> responseEntity
            = adminApiController.addMenuItem(menuItemDto);
    final CreatedResponse body = responseEntity.getBody();
    assertNotNull(body);
    Integer id = body.getId();
    MenuItem item = menuItemApiController.getMenuItemTestOnly(id);
    // Hibernate.initialize(item); // attempted fix blows up
    List<String> nameList = new LinkedList<>();
    for (MenuItemOption option : item.getAllowedOptions()) { // blows up here
        nameList.add(option.getName());
    }
    assertThat(nameList, hasItems("pepperoni", "olives", "onions"));
    // ... (more code)
}
My test application.properties has these settings
spring.datasource.url=jdbc:h2:mem:pizzaChallenge;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.username=pizza
spring.datasource.password=pizza
spring.jpa.show-sql=true
This is not standard Hibernate, but Spring Data. You have to understand that Hibernate uses lazy loading to avoid loading the whole object graph from the database. If you close the session or connection to the database, e.g. by ending a transaction, Hibernate can't lazy load anymore, and apparently your code tries to access state that needs lazy loading.
You can use @EntityGraph on your repository to specify that an association should be fetched, or you can avoid accessing state that isn't initialized outside of a transaction. Maybe you just need to enlarge the transaction scope by putting @Transactional on the method that calls the repository and accesses the state, so that lazy loading works.
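A rough sketch of both suggestions, using the entity and association names from the test above (the repository and service names are made up):

import java.util.List;
import java.util.stream.Collectors;
import org.springframework.data.jpa.repository.EntityGraph;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

interface MenuItemRepository extends JpaRepository<MenuItem, Integer> {

    // Option 1: fetch the association together with the query, so no open
    // session is needed later to read getAllowedOptions()
    @EntityGraph(attributePaths = "allowedOptions")
    MenuItem findWithOptionsById(Integer id);
}

@Service
class MenuItemService {

    private final MenuItemRepository repository;

    MenuItemService(MenuItemRepository repository) {
        this.repository = repository;
    }

    // Option 2: widen the transaction so lazy loading still has a session
    @Transactional(readOnly = true)
    public List<String> optionNames(Integer id) {
        return repository.findById(id)
                .orElseThrow(IllegalArgumentException::new)
                .getAllowedOptions().stream()
                .map(MenuItemOption::getName)
                .collect(Collectors.toList());
    }
}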
I found a way around this. I'm not sure if this is the best approach, so if anyone has any better ideas, I'd appreciate hearing from them.
Here's what I did. First of all, before reading a value from the lazy-loaded entity, I call Hibernate.initialize(item);
This throws the same exception. But now I can add a property to the test version of application.properties that says
spring.jpa.properties.hibernate.enable_lazy_load_no_trans=true
Now the initialize method will work.
P.S. I haven't been able to find a good reference for Spring properties like this one. If anyone knows where I can see the available properties, I'd love to hear about it. The folks at Spring don't do a very good job of documenting these properties. Even when they mention a specific property, they don't provide a link that might explain it more thoroughly.

JPA correct way to handle detached entity state in case of exceptions/rollback

I have this class and I thought of three ways to handle detached entity state in case of persistence exceptions (which are handled elsewhere):
@ManagedBean
@ViewScoped
public class EntityBean implements Serializable
{
    @EJB
    private PersistenceService service;

    private Document entity;

    public void update()
    {
        // HANDLING 1. ignore errors
        service.transact(em ->
        {
            entity = em.merge(entity);
            // some other code that modifies [entity] properties:
            // entity.setCode(...);
            // entity.setResponsible(...);
            // entity.setSecurityLevel(...);
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 2. ensure entity is untouched before flush is ok
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResponsible(...);
            // managed.setSecurityLevel(...);
            em.flush(); // an exception may be thrown here (rollback)
                        // forcing method exit without [entity] being reassigned.
            entity = managed;
        }); // an exception may be thrown on method return (rollback),
            // but [entity] has already been reassigned with a "dirty" one.

        //------------------------------------------------------------------

        // HANDLING 3. ensure entity is untouched before whole transaction is ok
        AtomicReference<Document> reference = new AtomicReference<>();
        service.transact(em ->
        {
            Document managed = em.merge(entity);
            // some other code that modifies [managed] properties:
            // managed.setCode(...);
            // managed.setResponsible(...);
            // managed.setSecurityLevel(...);
            reference.set(managed);
        }); // an exception may be thrown on method return (rollback),
            // and [entity] is safe, it's not been reassigned yet.
        entity = reference.get();
    }

    // ...
}
PersistenceService#transact(Consumer<EntityManager> consumer) can throw unchecked exceptions.
The goal is to maintain the state of the entity aligned with the state of the database, even in case of exceptions (prevent entity to become "dirty" after transaction fail).
Method 1. is obviously naive and doesn't guarantee coherence.
Method 2. asserts that nothing can go wrong after flushing.
Method 3. prevents the entity reassignment if there's an exception anywhere in the transaction.
Questions:
Is method 3. really safer than method 2.?
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Is there a standard way to handle this common problem?
Thank you
Note that I'm already able to roll back the transaction and close the EntityManager (PersistenceService#transact will do it gracefully), but I need to solve the problem that the database state and the business objects get out of sync. Usually this is not a problem; in my case it is, because exceptions are usually generated by the BeanValidator (on the JPA side, not the JSF side, for computed values that depend on user input) and I want the user to correct the values and try again, without losing the values entered before.
Side note: I'm using Hibernate 5.2.1
This is the PersistenceService (CMT):
@Stateless
@Local
public class PersistenceService implements Serializable
{
    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void transact(Consumer<EntityManager> consumer)
    {
        consumer.accept(em);
    }
}
@DraganBozanovic
That's it! Great explanation for points 1. and 2.
I'd just love you to elaborate a little more on point 3. and give me some advice on a real-world use case.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
When you have to modify a single entity, the transactional method would just take the detached entity as parameter and return the updated entity, easy.
public Document updateDocument(Document doc)
{
    Document managed = em.merge(doc);
    // managed.setXxx(...);
    // managed.setYyy(...);
    return managed;
}
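On the caller side (e.g. in the JSF bean above) that reduces to a one-liner; documentService here is just a placeholder for whatever transactional EJB/service hosts updateDocument:

public void update()
{
    // Reassigned only if the transactional call returns normally,
    // i.e. after a successful commit; on rollback [entity] keeps its old value.
    entity = documentService.updateDocument(entity);
}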
But when you need to modify more than one entity in a single transaction, the method can become a real pain:
public LinkTicketResult linkTicket(Node node, Ticket ticket)
{
    LinkTicketResult result = new LinkTicketResult();

    Node managedNode = em.merge(node);
    result.setNode(managedNode);
    // modify managedNode

    Ticket managedTicket = em.merge(ticket);
    result.setTicket(managedTicket);
    // modify managedTicket

    Remark managedRemark = createRemark(...);
    result.setRemark(managedRemark);

    return result;
}
In this case, my pain:
I have to create a dedicated transactional method (maybe a dedicated @EJB too)
That method will be called only once (it will have just one caller) - it is a "one-shot", non-reusable public method. Ugly.
I have to create the dummy class LinkTicketResult
That class will be instantiated only once, in that method - it is "one-shot" too
The method could have many parameters (or another dummy class, LinkTicketParameters)
JSF controller actions, in most cases, will just call an EJB method, extract the updated entities from the returned container and reassign them to local fields
My code will be steadily polluted with "one-shotters", too many for my taste.
Probably I'm not seeing something big that's just in front of me, I'll be very grateful if you can point me in the right direction.
Is method 3. really safer than method 2.?
Yes. Not only is it safer (see point 2), but it is conceptually more correct, as you change transaction-dependent state only when you proved that the related transaction has succeeded.
Are there cases where an exception is thrown between flush [excluded] and commit [included]?
Yes. For example:
LockMode.OPTIMISTIC:
"Optimistically assume that transaction will not experience contention for entities. The entity version will be verified near the transaction end."
It would be neither performant nor practically useful to check optimistic lock violations during each flush operation within a single transaction (see the sketch after these examples).
Deferred integrity constraints (enforced at commit time in the db). Not used often, but they are an illustrative example for this case.
Later maintenance and refactoring. You or somebody else may later introduce additional changes after the last explicit call to flush.
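A small illustration of the optimistic-lock case, assuming Document has a @Version field: the version check runs at commit time, so an OptimisticLockException can surface after the last explicit flush has already succeeded.

import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class DocumentRenamer {

    // Assumed to run inside an active transaction
    public void rename(EntityManager em, Integer id, String newCode) {
        // Optimistic read lock: the version is verified near the transaction end
        Document doc = em.find(Document.class, id, LockModeType.OPTIMISTIC);
        doc.setCode(newCode);
        em.flush(); // may succeed...
        // ...while the deferred version check at commit time can still fail
        // with an OptimisticLockException, i.e. after the last flush.
    }
}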
Is there a standard way to handle this common problem?
Yes, I would say that your third approach is the standard one: Use the results of a complete and successful transaction.
However, I would definitely not use AtomicReference or similar cumbersome constructs. Java EE, Spring and other frameworks and application containers support declaring transactional methods via annotations: Simply use the result returned from a transactional method.
Not sure if this is entirely to the point, but there is only one way to recover after exceptions: roll back and close the EM. From https://docs.jboss.org/hibernate/entitymanager/3.6/reference/en/html/transactions.html#transactions-basics-issues:
An exception thrown by the Entity Manager means you have to rollback your database transaction and close the EntityManager immediately (discussed later in more detail). If your EntityManager is bound to the application, you have to stop the application. Rolling back the database transaction doesn't put your business objects back into the state they were at the start of the transaction. This means the database state and the business objects do get out of sync. Usually this is not a problem, because exceptions are not recoverable and you have to start over your unit of work after rollback anyway.
-- EDIT--
Also see http://piotrnowicki.com/2013/03/jpa-and-cmt-why-catching-persistence-exception-is-not-enough/
ps: downvote is not mine.

Right exception to throw for the lack of a system property

Say I have a system property MY_PROP:
java -DMY_PROP="My value"
This property is necessary for my system to work.
What is the right exception to throw if this property is not set?
@PostConstruct
private void init() {
    myProp = System.getProperty("MY_PROP");
    if (myProp == null) {
        throw new ????
    }
    // ...
}
Somehow IllegalArgumentException does not feel right. Maybe IllegalStateException, MissingResourceException, TypeNotPresentException? What is the standard practice for this scenario?
There is none. I would throw an IllegalStateException, because you are missing the parameter: it means configuration validation has failed and your application is in an invalid state. In other words, you should never have been able to call init() at all.
If the value of the parameter were present but invalid, I would throw an IllegalArgumentException (see the sketch after the definitions below).
If you are writing a validator, you should decide between a RuntimeException and a checked one, for example javax.naming.ConfigurationException or a configuration exception class of your own. Your API will then be able to handle such exceptions and react properly.
Definitions:
IllegalStateException - Signals that a method has been invoked at an illegal or inappropriate time. In other words, the Java environment or Java application is not in an appropriate state for the requested operation.
IllegalArgumentException - Thrown to indicate that a method has been passed an illegal or inappropriate argument.
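Applied to the init() method from the question, the missing-vs-invalid distinction could look roughly like this (the blank-value check is only an example of an "invalid" value):

@PostConstruct
private void init() {
    myProp = System.getProperty("MY_PROP");
    if (myProp == null) {
        // Missing configuration: the application is not in a usable state
        throw new IllegalStateException("System property MY_PROP is not set");
    }
    if (myProp.trim().isEmpty()) {
        // Present but invalid value
        throw new IllegalArgumentException("MY_PROP must not be blank");
    }
    // ...
}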
I will only add to Vash's answer for the Spring Framework: if you're using the Spring Framework and you want to be consistent with how most of the components in Spring do it, then I would say you should use IllegalStateException (or your own derivation).
In Spring, most components that do a @PostConstruct or @Override void afterPropertiesSet() throw IllegalStateException using the util org.springframework.util.Assert.state(..).
You can see this done in Spring AMQP as one example.
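For illustration, a small sketch in that style (class name invented; Assert.state throws an IllegalStateException with the given message when the condition is false):

import org.springframework.beans.factory.InitializingBean;
import org.springframework.util.Assert;

public class MyPropConfig implements InitializingBean {

    private String myProp;

    @Override
    public void afterPropertiesSet() {
        myProp = System.getProperty("MY_PROP");
        // Fails fast with IllegalStateException if the property is missing
        Assert.state(myProp != null, "System property MY_PROP must be set");
    }
}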
That being said, I have actually filed bugs against Spring MVC where they used IllegalArgumentException instead of a custom, more explicit derived class. With static nested classes it's very easy to create a custom exception without creating another Java file.
Because a system property is not always defined, the standard practice is to use a default value when you can't find the property.
I just checked some standard code in Java 7 (Apache Tomcat, java.lang, java.awt, ...); they always use a default fallback when the property is null.
So maybe your problem is somewhere else?
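For completeness, the two-argument overload of System.getProperty covers exactly that fallback case:

// Falls back to "defaultValue" when -DMY_PROP=... was not supplied
String myProp = System.getProperty("MY_PROP", "defaultValue");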
Why don't you take this parameter as a required argument of your jar? Then you can use IllegalArgumentException.

Junit exception testing with spring transactions and rollbacks

So I have an interesting problem that I will need some help with. I know a bunch of questions have been asked about rollbacks in transactions using JUnit, but I believe my problem is slightly different. To give people a better understanding of the problem, let me start from the beginning.
I have implemented a UserManagementService with its respective DAO for a user management system. There is a general method called createUser(User obj) that is used to create a unique user. Now, there is a constraint that email addresses must be unique, so if we invoke this method with an email address that has already been used, we throw a custom exception called UserManagementException with its respective error message. All this works fine; however, the problem I am having is with the unit test. Oh, before I forget, let me mention the software stack I am using: Java, Spring, Hibernate.
I have my unit test class annotated with the Transactional annotation for each method that actually hits the DB. These methods also have the @Rollback annotation so that all inserts, updates and deletions are rolled back at the end of each test invocation. The problem I am facing is that I would like to test the unique-user constraint scenario. By calling createUser(obj) a second time with a user object with the same email address, I want to ensure that UserManagementException is thrown. However, since the test is transactional, whenever the exception is thrown the transaction is rolled back before the unit test completes, and hence the test fails. Below is the test case.
@Test
@Rollback
@Transactional
public void testUniqueCreateConsoleUser() {
    boolean success;
    ConsoleUser newUser;

    // first one
    userManagementDao.createConsoleUser(user);

    // second one: this should throw a UserManagementException
    try {
        // now try and insert a new user with the same email
        newUser = new ConsoleUser("Queen", "Kong", "king.kong@blah.com", "kingkong",
                "Universal Studios", "America/Los_Angeles", false, null);
        userManagementDao.createConsoleUser(newUser);
        // if this passed, this is a problem: console users should have unique email addresses
        success = false;
    } catch (UserManagementException e) {
        success = true;
    }
    Assert.assertTrue(success);
}
The weird thing is that when I run it through the debugger, the Assert.assertTrue() method is invoked correctly, but the test ultimately fails.
Another thing I tried was to add a property to the @Transactional annotation: @Transactional(noRollbackFor = UserManagementException.class), in the hope that if the exception was thrown, the rollback wouldn't be invoked then but only at the end of the test. I may be approaching this the wrong way, so any ideas or best practices around this sort of testing would be greatly appreciated.
Note: below is a snippet from the stack trace.
org.springframework.transaction.UnexpectedRollbackException: Transaction rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:695)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:321)
at org.springframework.transaction.aspectj.AbstractTransactionAspect.ajc$afterReturning$org_springframework_transaction_aspectj_AbstractTransactionAspect
It's hard to tell from your example, but you seem to be testing against your actual DAO implementation. Rather than have unit test data hitting your actual database, mock your DAO with either a mock implementation or a mocking framework. You can then manipulate the data returned programmatically and contort it into whatever validation scenarios you want.
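A minimal sketch of that idea with Mockito, assuming a UserManagementDao interface, constructor injection into the service, and a String-message constructor on UserManagementException (all assumptions about code not shown in the question):

import static org.mockito.Mockito.*;

import org.junit.Test;

public class UserManagementServiceMockTest {

    @Test(expected = UserManagementException.class)
    public void createUserRejectsDuplicateEmail() throws Exception {
        // Stub the DAO so the duplicate-email path is exercised without a real DB
        UserManagementDao mockDao = mock(UserManagementDao.class);
        doThrow(new UserManagementException("duplicate email")) // assumed constructor
                .when(mockDao).createConsoleUser(any(ConsoleUser.class));

        UserManagementService service = new UserManagementService(mockDao); // assumed wiring

        ConsoleUser newUser = new ConsoleUser("Queen", "Kong", "king.kong@blah.com",
                "kingkong", "Universal Studios", "America/Los_Angeles", false, null);
        service.createUser(newUser); // expected to propagate UserManagementException
    }
}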
If you can confirm that an extra rollback happens (for example, when Spring does the insert and sees that it fails, does it already roll the transaction back?), then you should catch the rollback, or configure Spring not to roll the transaction back.
That is, the rollback which Spring performs is clearly conflicting with the expected rollback in your unit test. This rollback then confuses the rollback annotation, causing an unexpected exception in the "unit-test / Spring ether".
THE SIMPLE SOLUTION: don't enable the automated rollbacks for this test. Tests don't always have to be perfectly elegant.
Rather than inserting a user and then inserting another user with the same email address, I suggest first loading an existing user from the database and then attempting to insert another with the same email address as the one that was retrieved. You then simply need to do:
@Test(expected = UserManagementException.class)
public void insert_duplicate_user() throws Exception {
    // Read user from database
    final ConsoleUser user = dao.load(...);

    // Create new user with same email address.
    final ConsoleUser newUser = new ConsoleUser(...);
    newUser.setEmail(user.getEmail());

    // Write
    dao.createConsoleUser(newUser);

    /*
     * If you get here, there is a problem with your DAO logic
     * and a new user (with the same email) was created.
     * So, we need to clean that up.
     */

    // Delete new user
    dao.deleteUser(newUser);
}
This test will fail unless a UserManagementException is thrown.

How to be transactional without losing encapsulation?

I have code that saves a bean and updates another bean in a DB via Hibernate. This must be done in the same transaction, because if something goes wrong (e.g. an exception is thrown), both operations must be rolled back.
public class BeanDao extends ManagedSession {

    public Integer save(Bean bean) {
        Session session = null;
        try {
            session = createNewSessionAndTransaction();
            Integer idBean = (Integer) session.save(bean); // SAVE
            doOtherAction(bean);                           // UPDATE
            commitTransaction(session);
            return idBean;
        } catch (RuntimeException re) {
            log.error("save failed", re);
            if (session != null) {
                rollbackTransaction(session);
            }
            throw re;
        }
    }

    private void doOtherAction(Bean bean) {
        Integer idOtherBean = bean.getIdOtherBean();
        OtherBeanDao otherBeanDao = new OtherBeanDao();
        OtherBean otherBean = otherBeanDao.findById(idOtherBean);
        // ... (doing operations)
        otherBeanDao.attachDirty(otherBean);
    }
}
The problem is: if session.save(bean) throws an error, I then get an AssertionFailure, because doOtherAction (which is used in other parts of the project) uses the session after an exception has been thrown.
The first thing I thought of was to extract the code of doOtherAction, but then I would have duplicated code, which does not seem like the best practice.
What is the best way to refactor this?
It's a common practice to manage transactions at one level above DAOs, in services or other business logic classes. That way you can, based on the business/service logic, in one case do two DAO operations in one transaction and, in another case, do them in separate transactions.
I'm a huge fan of declarative transaction management, if you can spare the time to get it working (a piece of cake with an application server such as GlassFish or JBoss, and easy with Spring). If you annotate your business method with @TransactionAttribute(REQUIRED) (it can even be set as the default) and it calls the two DAO methods, you will get exactly what you want: everything gets committed at once or rolled back on an exception, as sketched below.
This solution is about as loosely coupled as it gets.
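A sketch of what that can look like with CMT, assuming the two DAOs are injectable and use the container-managed persistence context instead of opening their own sessions (class and method names are illustrative):

import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class BeanService {

    @EJB
    private BeanDao beanDao;           // assumed to use the container-managed EntityManager
    @EJB
    private OtherBeanDao otherBeanDao; // instead of opening their own sessions

    // Both operations run in one container-managed transaction:
    // a runtime exception from either call rolls back both.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public Integer saveAndUpdate(Bean bean) {
        Integer idBean = beanDao.save(bean);
        OtherBean otherBean = otherBeanDao.findById(bean.getIdOtherBean());
        // ... (doing operations)
        otherBeanDao.attachDirty(otherBean);
        return idBean;
    }
}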
The others are correct in that they take into account what is currently common practice.
But that doesn't really help you with your current practice.
What you should do is create two new DAO methods, such as createGlobalSession and commitGlobalSession.
These do the same thing as your current create and commit routines.
The difference is that they set a "global" session variable (most likely best done with a ThreadLocal). Then you change the current routines so that they check whether this global session already exists: if your create detects the global session, it simply returns it; if your commit detects the global session, it does nothing.
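A hedged sketch of that idea inside the ManagedSession base class; the abstract helpers below stand in for whatever your existing open/commit/rollback code does:

import org.hibernate.Session;

public abstract class ManagedSession {

    // One shared session per thread while a "global" unit of work is open
    private static final ThreadLocal<Session> GLOBAL_SESSION = new ThreadLocal<>();

    // Stand-ins for the existing open/commit/rollback logic (assumed to exist)
    protected abstract Session openSessionAndBeginTransaction();
    protected abstract void doCommit(Session session);
    protected abstract void doRollback(Session session);

    public void createGlobalSession() {
        GLOBAL_SESSION.set(openSessionAndBeginTransaction());
    }

    protected Session createNewSessionAndTransaction() {
        // Join the global unit of work if one is open, otherwise behave as before
        Session global = GLOBAL_SESSION.get();
        return (global != null) ? global : openSessionAndBeginTransaction();
    }

    protected void commitTransaction(Session session) {
        // Defer the real commit to commitGlobalSession() while a global session is open
        if (GLOBAL_SESSION.get() == null) {
            doCommit(session);
        }
    }

    public void commitGlobalSession() {
        Session global = GLOBAL_SESSION.get();
        if (global != null) {
            GLOBAL_SESSION.remove();
            doCommit(global);
        }
    }

    public void rollbackGlobalSession() {
        // No-op after a successful commitGlobalSession(); rolls back otherwise
        Session global = GLOBAL_SESSION.get();
        if (global != null) {
            GLOBAL_SESSION.remove();
            doRollback(global);
        }
    }
}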
Now when you want to use it you do this:
try {
    dao.createGlobalSession();
    beanA.save();
    beanB.save();
    dao.commitGlobalSession();
} finally {
    dao.rollbackGlobalSession();
}
Make sure you wrap the process in a try block so that you can reset your global session if there's an error.
While the other techniques are considered best practice, and ideally you could one day evolve to something like that, this will get you over the hump with little more than three new methods and changes to two existing methods. After that, the rest of your code stays the same.
