I'm using GlassFish v2ur1 (it's the one our company uses and I cannot upgrade it at this point). I have an EJB (ejbA) which gets called periodically from a timer. In the call, I read a file, create an entity bean for each line, and persist the entity bean to a db (PostgreSQL v9.2). After calling entityManager.persist(entityBean), an HTTP call is made to a servlet, passing the entityBean's ID, which in turn calls into another EJB (ejbB). ejbB sends a JMS message to a third EJB, ejbC. This is a production system and I must make the HTTP call; it processes the data further. ejbC is in the same enterprise application as ejbA, but uses a different EntityManager. ejbC receives the id, reads the record from the db, modifies the record, and persists it.
The problem I'm having is that the entity bean's data isn't stored in the db until the transaction from the timer call completes (see the flow below; I understand that's the way EJBs work). When ejbC is called, it fails to find the record in the db with the id it receives. I've tried a couple of approaches to get the data stored in the db so ejbC can find it:
1) I tried setting the flush mode to COMMIT when persisting the entityBean in ejbA:
- em.setFlushMode(FlushModeType.COMMIT)
- instantiate entity bean
- em.persist(entityBean)
- em.flush()
However, the results are the same: by the time ejbC is called, no record is in the db.
2) I created ejbD and added a storeRecord method (which persists entityBean) in it with TransactionAttributeType.REQUIRES_NEW. This is supposed to suspend ejbA's transaction, start ejbD's transaction, commit it, and resume ejbA's transaction. Again, the results here are the same: by the time ejbC is called, no record is in the db. I'm also seeing a problem with this solution: the ejbA call just stops when I call the storeRecord method. No exceptions are thrown, but the EJB doesn't process any more lines from the file even though there are more. It seems to abort the EJB call and roll back the transaction with no indication. I'm not sure whether this is a GlassFish v2ur1 bug.
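For reference, this is roughly what that attempt looked like (a minimal sketch; the bean name EjbD and the entity type MyEntity are mine, not from the actual code):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EjbD {

    @PersistenceContext
    private EntityManager em;

    // REQUIRES_NEW suspends the caller's (ejbA's) transaction, runs this
    // method in a fresh transaction, and commits that transaction when
    // the method returns.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void storeRecord(MyEntity entityBean) {
        em.persist(entityBean);
    }
}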
How can I ensure that the data is stored in the db in ejbA so that when ejbC is called, it can find the record? BTW, there are other things going on in ejbA which I don't necessarily want to commit; I'd like to persist only the entityBeans I'm trying to store in the db.
ejbA
    ejbTimer called (txn starts)
    read file contents
    for each line
        create entity bean
        persist entity bean to db
        make HTTP call to ejbB, passing id
            <see ejbC>
    return (txn ends)

ejbB
    Processes data based on id
    Looks up JMS queue for ejbC
    Passes ejbC the id

ejbC
    ejb method called (txn starts)
    read record based on received id
    modify record and persist
    return (txn ends)
Under the "read committed" transaction isolation level, no other transaction can see changes made by an uncommitted transaction. You can specify a lower isolation level, but this has no effect on PostgreSQL: its "most chaotic" behaviour is read committed, so you simply can't do it with PostgreSQL. Nor should you:
ejbA should not call ejbB via HTTP. Servlets should only be used to service remote client requests, not to provide internal services. ejbA should invoke ejbB directly. If the method in ejbB is annotated TransactionAttributeType.MANDATORY or TransactionAttributeType.REQUIRED, ejbB will see the entity created by ejbA because it runs under the same transaction.
In ejbB, an explicit persist is unnecessary: simply load the entity with the EntityManager and make the change; modifications to a managed entity are flushed automatically at commit.
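A minimal sketch of that arrangement, assuming a local business interface and an invented MyEntity with a processed flag:

import javax.ejb.Local;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Local
public interface EjbBLocal {
    void process(Long id);
}

@Stateless
public class EjbB implements EjbBLocal {

    @PersistenceContext
    private EntityManager em;

    // REQUIRED joins the caller's transaction, so the entity persisted
    // (but not yet committed) by ejbA is visible here.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void process(Long id) {
        MyEntity entity = em.find(MyEntity.class, id);
        entity.setProcessed(true); // managed entity: no persist() needed;
                                   // the change is flushed at commit
    }
}

// Inside ejbA, replace the HTTP call with a direct injection:
//
//     @EJB
//     private EjbBLocal ejbB;
//     ...
//     em.persist(entity);
//     ejbB.process(entity.getId()); // runs in ejbA's transaction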
If you are completely at the mercy of this HTTP mechanism, you could use bean-managed transactions, but this is a terrible way of doing things:
read file contents
for each line
    start transaction
    create entity bean
    persist entity bean to db
    commit transaction
    make HTTP call
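In EJB terms, that pseudo code would look roughly like this (again, only a sketch, and still not recommended; MyEntity and makeHttpCall are placeholders):

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class EjbA {

    @Resource
    private UserTransaction utx;

    @PersistenceContext
    private EntityManager em;

    public void onTimer(List<String> lines) throws Exception {
        for (String line : lines) {
            utx.begin();
            MyEntity entity = new MyEntity(line);
            em.persist(entity);
            utx.commit(); // the row is now visible to other transactions
            makeHttpCall(entity.getId()); // ejbC can find the record
        }
    }

    private void makeHttpCall(Long id) { /* unchanged HTTP mechanism */ }
}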
There are two things I did to solve this. 1) I added remote methods to ejbB which perform the same functionality as the HTTP call, so the call to ejbB happens within the same transaction.
The root of the problem, which Glenn Lane pointed out, is that the transaction continues in the call from ejbA to ejbB, but it ends when ejbB sends the JMS message to ejbC: the transaction doesn't extend to the call to ejbC. By the time the id gets to ejbC, ejbC is in a new transaction, one which cannot see the data persisted (but not yet committed) by ejbA.
2) I stored the entity bean in the db from ejbA in a special state. The record is committed when the timer call to ejbA returns (and therefore the txn commits). When ejbA is called again by the timer, it looks for records in the db in this special state. It then calls ejbB, and ejbB sends a JMS message. When ejbC gets the id, it finds the record in the db (as it was committed in a previous txn), changes its state, and continues processing.
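Schematically, the timer method ends up with two phases (a sketch; the status value and the readFileLines and makeHttpCall helpers are invented for illustration):

public void onTimer() {
    // Phase 2: records committed by a previous timer run are now
    // visible, so it is safe to kick off the downstream processing.
    @SuppressWarnings("unchecked")
    List<MyEntity> pending = em
            .createQuery("SELECT e FROM MyEntity e WHERE e.status = :s")
            .setParameter("s", "PENDING")
            .getResultList();
    for (MyEntity e : pending) {
        makeHttpCall(e.getId()); // ejbB -> JMS -> ejbC finds the record
    }

    // Phase 1: persist new lines in the special state; they are
    // committed when this timer transaction ends.
    for (String line : readFileLines()) {
        MyEntity e = new MyEntity(line);
        e.setStatus("PENDING");
        em.persist(e);
    }
}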
Related
In the above code I have used Hibernate with MySQL, and the Hibernate session is managed by SpringSessionContext (that's why I am using sessionFactory.getCurrentSession() inside the transactional boundary).
The image below (the DAO layer) shows a straightforward use case, but the exception does not cause a rollback. I have called this method from a simple service layer (i.e. the service layer calls the DAO layer for CRUD operations).
I have learned about Spring's proxy mechanism for transaction management. The class in the image below is an implementation of the DAO interface, so Spring will create a proxy bean using a JDK dynamic proxy, and the method is called from the service layer (a non-transactional class). My expectation was that the data should not be persisted and the exception should trigger a rollback, but the data was persisted to the db.
Hibernate persists dirty objects only after the whole transaction process has completed. You should examine the flow from the first input method to the last. The persist operation is not executed when the save function is called: the change is stored in a buffer (the session's action queue) and processed after the transaction completes. Are there any transactions or try-catch blocks in your flow?
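To make the timing concrete, here is a small plain-Hibernate sketch (outside Spring's proxy, to isolate the behaviour):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

session.save(order);  // no INSERT is issued yet; the change is queued
                      // in the session's action queue
session.flush();      // the INSERT is sent to MySQL now, but the row is
                      // still invisible to other transactions
tx.commit();          // only here does the row become visible; a rollback
                      // before this point discards the queued work
session.close();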
My application is on Spring MVC 4.2 with a PostgreSQL database. In my application, we consume an API written in Spring Boot, which has its own database (MySQL).
@Transactional(rollbackFor = Exception.class)
public void updateOrder(Order order) {
    // This insert is part of my application
    update(order); // STEP-1

    // This is not part of our application;
    // it happens in the API written in Spring Boot.
    Integer transactionId = updateOrderWorkflow(order); // STEP-2

    // updateOrderWithTransactionId is part of my application;
    // it updates the order with the transaction id
    updateOrderWithTransactionId(order, transactionId); // STEP-3
}
If STEP-3 fails, then I have to roll back the changes made in the consumed API. For rolling back, I have written a compensation/rollback method which reverts the workflow to its old status.
Now the problem scenario:
If one process (PROCESS_1) is working through the above updateOrder() method and reaches STEP-3, and before it fails at STEP-3 another process (PROCESS_2) enters updateOrder() and performs STEP-2, then PROCESS_1 fails at STEP-3 and calls the compensation/rollback method, but PROCESS_2 completes STEP-3 successfully.
This creates data inconsistency. How to handle this situation?
It sounds like the problem is that updateOrderWorkflow in STEP-2 exposes the changes made by PROCESS_1's transaction before they have been committed.
What I would do is:
Change updateOrderWorkflow in step 2 so that it doesn't show uncommitted changes. Any changes it makes have to be made in a temporary space of some kind, associated with the transaction ID.
Add an API endpoint to this same API, to commit or roll back the changes of a transaction. If committed, the changes in the temporary space are made visible globally. If rolled back, the changes are discarded.
Use the new commit API endpoint in your updateOrder method, and the rollback API in your rollback handler.
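Concretely, updateOrder would then look something like this; commitWorkflow and rollbackWorkflow stand in for clients of the two new (hypothetical) endpoints:

@Transactional(rollbackFor = Exception.class)
public void updateOrder(Order order) throws Exception {
    update(order);                                  // STEP-1 (local db)
    Integer txId = updateOrderWorkflow(order);      // STEP-2: staged in the
                                                    // API's temporary space
    try {
        updateOrderWithTransactionId(order, txId);  // STEP-3 (local db)
        commitWorkflow(txId);    // hypothetical endpoint: publish the
                                 // staged changes globally
    } catch (Exception e) {
        rollbackWorkflow(txId);  // hypothetical endpoint: discard staging
        throw e;                 // Spring rolls back the local transaction
    }
}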
I am using Spring Data JPA to store and read entities across different applications. Below are the steps that are executed:
Application 1 creates an entity via a JPA repository and stores it in the db with the saveAndFlush() method
It then sends an event to a queue saying the entity has been created
This event is read by another application (let's say Application 2), which then tries to read the entity and process it
Following is the example method used to store the object:
@Transactional
public Entity createEntity(final Entity entity) {
    return entityRepository.saveAndFlush(entity);
}
As per the documentation, the @Transactional annotation should make sure the object is persisted once method execution finishes. However, when Application 2 receives the event and tries to look up the entity (by id), it is not found. I am using MariaDB and Spring Data JPA 1.9.4.
Do we need to do anything else to force a hard commit after the saveAndFlush call?
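The likely culprit is that the event is sent while the transaction is still open: saveAndFlush() flushes the INSERT, but the commit only happens when the @Transactional method returns. A hedged sketch of one common remedy, assuming Spring 4.x APIs and an invented eventPublisher, is to defer the queue publication until after the commit:

import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@Transactional
public Entity createEntity(final Entity entity) {
    final Entity saved = entityRepository.saveAndFlush(entity);
    // Defer the event until the surrounding transaction has committed,
    // so Application 2 can never receive it before the row is visible.
    TransactionSynchronizationManager.registerSynchronization(
            new TransactionSynchronizationAdapter() {
                @Override
                public void afterCommit() {
                    // hypothetical publisher; replace with your queue send
                    eventPublisher.entityCreated(saved.getId());
                }
            });
    return saved;
}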
I have a Java SE(!) scenario with JMS and JPA where I might need distributed transactions as well as "regular" JDBC transactions. I have to listen to a queue which sends service requests, persist a log entry on receipt, process the request, and update the log entry after the request has been processed. The message shall only be acknowledged if the request has been processed successfully.
The first idea was to only use JTA (provided by Bitronix). But there I face two problems:
no log will be persisted if the request can't be processed
the request won't be processed if the log can't be updated (unlikely but yet possible)
So the other idea is to create and update the log with regular JDBC transactions. Only the EntityManager(s) for the request transaction(s) would join the user transaction, while the EntityManagers for creating and updating the log would commit directly.
Is it possible to "mix" JTA and JPA on a single persistence unit? Or do we already have patterns for those kinds of JMS and JDBC transactions?
I actually solved my problem with a slightly different approach. Instead of "mixing" JTA and JDBC transactions, I used suspend and resume to work with different user transactions.
The task is still the same: I start a (JTA) user transaction that contains some JMS and JDBC transactions (receiving a message, performing some database operations). And in the middle of that workflow, I want to write a message log but that logging shall not be rolled back when the "outer" transaction fails.
So the solution is, in pseudo code:
transactionManager.begin();
doSomeJdbcStuff();
Transaction main = transactionManager.suspend();

// do the logging
transactionManager.begin(); // <- now a new transaction is created and active!
doSomeLogging();
transactionManager.commit();

// continue
transactionManager.resume(main);
doSomeMoreJdbcStuff();
transactionManager.commit();
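Mapped onto the real javax.transaction API (Bitronix provides a TransactionManager), the sketch becomes the following; error handling is omitted and the doSome* methods are placeholders:

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

void process(TransactionManager tm) throws Exception {
    tm.begin();
    doSomeJdbcStuff();

    // Park the main transaction; the log gets its own transaction, so a
    // later rollback of the outer work cannot undo the log entry.
    Transaction main = tm.suspend();
    try {
        tm.begin();
        doSomeLogging();
        tm.commit();
    } finally {
        tm.resume(main); // reactivate the main transaction
    }

    doSomeMoreJdbcStuff();
    tm.commit();
}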
Using EJB 3.0, WebLogic 11g, JDBC
I am invoking a method which runs remotely in another deployed EAR. The method in the remote deployment is invoked, but it is annotated with
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
The problem is that all the database logic I perform before the remote method is invoked won't commit until the remote method finishes.
What I want is to commit first, so that the "before" logic takes effect, and then continue normally once I'm back from the remote call.
Any idea?
Some code to explain:
@CallByReference
@Stateless(mappedName = "ejb/OperatorProccessBean")
@Local({ OperatorProccessBeanLocal.class })
@Remote({ OperatorProccessBeanRemote.class })
public class OperatorProccessBean implements OperatorProccessBeanLocal,
OperatorProccessBeanRemote
{
...
SBNDispatchBeanRemote SBNDispatchBean = (SBNDispatchBeanRemote) context.lookup("ejb/SBNDispatchBean#com.mirs.sbn.dispatch.SBNDispatchBeanRemote");
if (SBNDispatchBean == null)
{
logger.error(TAG + " SBNDispatchBean is null");
}
else
{
//until here I want all my data to be commited without waiting for the upcoming remote method to finish
SBNDispatchBean.updateSubscriberInBlockingList(...);
}
...
}
Now the method updateSubscriberInBlockingList() is annotated with
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
I want the data to be committed before that method is invoked.
Thanks in advance,
ray.
Now the method updateSubscriberInBlockingList() is annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
I want the data to be committed before that method is invoked.
Given that you are using container-managed transactions, it is not possible. The rationale is that when the container is already performing a transaction, starting a new one results in the original being suspended. When the new transaction has committed, the original transaction is resumed.
This behavior is not configurable, for the EJB container and the JTA transaction manager are expected to adhere to the behavior specified in the JTA specification, which is derived from the X/Open DTP transaction model. In the X/Open DTP model, if a new transaction is started while another is in progress, the current one is suspended and resumed at a later point in time. I have only seen nested transactions or suspended transactions supported in the various transaction processing models; none that I know of (I haven't studied all) allows committing the current transaction and then starting a new one.
If you want to have the work committed, you must have the existing transaction context terminated completely, so that the existing transaction will commit, and then start the new transaction.
Put the "before remote call" logic in a separate bean method annotated with REQUIRES_NEW as well. You will thus have three transactions :
one for the main method (but which won't do anything until th remote call is done);
one for the logic before the remote call;
one for the remote call.
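A sketch of that arrangement (doBeforeRemoteCall and the field wiring are mine; calling through SessionContext.getBusinessObject() ensures the container applies the transaction attribute, which a plain this call would bypass):

import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class OperatorProccessBean implements OperatorProccessBeanLocal,
                                             OperatorProccessBeanRemote {

    @Resource
    private SessionContext ctx;

    private SBNDispatchBeanRemote SBNDispatchBean; // from the JNDI lookup
                                                   // shown in the question

    public void process() {
        // Txn 2: call through the container proxy (not plain this.*) so
        // that REQUIRES_NEW is honoured; it commits before the remote call.
        ctx.getBusinessObject(OperatorProccessBeanLocal.class)
           .doBeforeRemoteCall();

        // Txn 3: the remote method itself runs under REQUIRES_NEW.
        SBNDispatchBean.updateSubscriberInBlockingList(/* ... */);
    } // Txn 1: the outer transaction, which now does hardly any work

    // doBeforeRemoteCall() must also appear on OperatorProccessBeanLocal.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void doBeforeRemoteCall() {
        // the "before remote call" database logic goes here
    }
}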