I have a Java SE(!) scenario with JMS and JPA where I might need distributed transactions as well as "regular" JDBC transactions. I have to listen to a queue which sends service requests, persist a log entry on receipt, process the request, and update the log entry after the request has been processed. The message shall only be acknowledged if the request has been processed successfully.
The first idea was to only use JTA (provided by Bitronix). But there I face two problems:
no log will be persisted if the request can't be processed
the request won't be processed if the log can't be updated (unlikely but yet possible)
So the other idea is to create and update the log with regular JDBC transactions. Only the entity manager(s) for the request transaction(s) would join the user transactions, while the entity managers for creating and updating the log would commit directly.
Is it possible to "mix" JTA and JPA on a single persistence unit? Or do we already have patterns for those kinds of JMS and JDBC transactions?
I actually solved my problem with a slightly different approach. Instead of "mixing" JTA and JDBC transactions, I used suspend and resume to work with different user transactions.
The task is still the same: I start a (JTA) user transaction that contains some JMS and JDBC transactions (receiving a message, performing some database operations). And in the middle of that workflow, I want to write a message log but that logging shall not be rolled back when the "outer" transaction fails.
So the solution is, in pseudo code:
transactionManager.begin();
doSomeJdbcStuff();

// detach the outer transaction from this thread; it stays alive
Transaction main = transactionManager.suspend();

// do the logging in its own, independent transaction
transactionManager.begin(); // <- now a new transaction is created and active!
doSomeLogging();
transactionManager.commit(); // the log entry is committed even if the outer transaction later rolls back

// continue with the outer transaction
transactionManager.resume(main);
doSomeMoreJdbcStuff();
transactionManager.commit();
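A slightly fuller sketch of the same pattern, with exception handling so the suspended transaction is always resumed (assuming the javax.transaction.TransactionManager obtained from Bitronix; doSomeJdbcStuff, doSomeLogging and doSomeMoreJdbcStuff are placeholders):

import javax.transaction.Transaction;
import javax.transaction.TransactionManager;
import bitronix.tm.TransactionManagerServices;

TransactionManager tm = TransactionManagerServices.getTransactionManager(); // Bitronix

tm.begin();
try {
    doSomeJdbcStuff();

    Transaction main = tm.suspend();
    try {
        tm.begin();                 // independent transaction for the log
        doSomeLogging();
        tm.commit();
    } catch (Exception logFailure) {
        tm.rollback();              // only the logging is rolled back
    } finally {
        tm.resume(main);            // always re-associate the outer transaction
    }

    doSomeMoreJdbcStuff();
    tm.commit();
} catch (Exception e) {
    tm.rollback();                  // the outer work is undone; the committed log entry remains
}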
I am learning distributed transaction rollback with Spring Boot.
I am using Spring Boot 2.2 with JPA and an H2 database.
In my example, I have three microservices which are running on different ports, each with its own H2 database.
MicroserviceA --- http://localhost:2222/savePersonBasicDetails
MicroserviceB --- http://localhost:3333/savePersonAddress
MicroserviceC --- http://localhost:4444/savePersonHobbies
From the MicroserviceA, I will get Person_Id, which I will send to the remaining two microservices along with their respective data. If any of the microservices fails, then I want to roll back the complete transaction.
Example:
public void save(PersonVO personVO) {
    Integer personId = microserviceA.savePersonBasicDetails(personVO);
    microserviceB.savePersonAddress(personId, personVO);
    // If it fails in microserviceC, then the complete transaction should be rolled back.
    microserviceC.savePersonHobbies(personId, personVO);
}
I tried with @Transactional(rollbackFor = Exception.class) on the save() method, but the transaction is not rolling back.
Please suggest.
You are mixing up terms. A distributed transaction is a concept associated with RDBMSs, not with web services. There is a standard for transactions across web services, WS-Transaction, relevant to SOAP web services, but this standard is mostly unused.
The usual term used in the context of web services is transaction compensation, and you can search for it. A very common pattern for compensation is the Try-Confirm/Cancel (TCC) pattern; there are also other approaches.
If you insist on using distributed transactions check out this link:
https://www.atomikos.com/Blog/TransactionalRESTMicroservicesWithAtomikos
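To illustrate the compensation idea with the poster's own example: a minimal sketch in which each successful remote call registers an undo action, and a later failure runs the undo actions in reverse order. The deletePersonBasicDetails and deletePersonAddress compensation endpoints are hypothetical; the microserviceA/B/C clients are the ones from the question.

import java.util.ArrayDeque;
import java.util.Deque;

public void save(PersonVO personVO) {
    Deque<Runnable> compensations = new ArrayDeque<>();
    try {
        Integer personId = microserviceA.savePersonBasicDetails(personVO);
        compensations.push(() -> microserviceA.deletePersonBasicDetails(personId)); // hypothetical undo call

        microserviceB.savePersonAddress(personId, personVO);
        compensations.push(() -> microserviceB.deletePersonAddress(personId));      // hypothetical undo call

        microserviceC.savePersonHobbies(personId, personVO);
    } catch (RuntimeException e) {
        // "roll back" the steps that already succeeded by running their compensations
        compensations.forEach(Runnable::run);
        throw e;
    }
}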
I have applied Spring transactions on the service layer of my application. There is one method which performs the following two operations:
1) sends a message to SQS, and
2) logs that entry in the DB.
So, if an exception occurs while adding the log to the DB, will operation (1) be rolled back? Or does Spring apply transactions to non-DB operations at all?
Rollback in case of an exception is applied to anything that is managed by that transaction. Sending a message to SQS is not managed by the database transaction, therefore it will not be rolled back.
To achieve this you would need to hook into the rollback and perform the rollback equivalent of sending the message to SQS.
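One way to get such a hook, sketched under the assumption of Spring 5.3+ (where TransactionSynchronization has default methods): register a synchronization inside the transactional method and react to a rollback. compensateSqsSend() is a hypothetical placeholder for whatever compensating SQS call fits your setup; in many setups it is simpler to defer the send itself to afterCommit instead.

import org.springframework.transaction.support.TransactionSynchronization;
import org.springframework.transaction.support.TransactionSynchronizationManager;

// inside the @Transactional service method, right after sending the SQS message
TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
    @Override
    public void afterCompletion(int status) {
        if (status == TransactionSynchronization.STATUS_ROLLED_BACK) {
            compensateSqsSend(); // hypothetical "undo" of the SQS send
        }
    }
});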
I'm using GlassFish v2ur1 (it's the one our company uses and I cannot upgrade it at this point). I have an EJB (ejbA) which gets called periodically from a timer. In the call, I'm reading a file, creating an entity bean for each line, and persisting the entity bean to a db (PostgreSQL v9.2). After calling entitymanager.persist(entityBean), an HTTP call is made to a servlet, passing the entityBean's ID, which in turn calls into another EJB (ejbB). ejbB sends a JMS message to another EJB, ejbC. This is a production system and I must make the HTTP call; it processes the data further. ejbC is in the same enterprise application as ejbA, but uses a different EntityManager. ejbC receives the id, reads the record from the db, modifies the record, and persists it.
The problem I'm having is the entity bean's data isn't stored in the db until the transaction from the timer call completes (see below) (I understand that's the way EJBs work). When ejbB is called, it fails to find the record in the db with the id it receives. I've tried a couple of approaches to get the data stored into the db so ejbC can find it:
1) I tried setting the flush mode to COMMIT when persisting the entityBean in ejbA:
- em.setFlushMode(FlushModeType.COMMIT)
- instantiate entity bean
- em.persist(entityBean)
- em.flush()
However, the results are the same, by the time ejbC is called, no record is in the db.
2) I created ejbD and added a storeRecord method (which persists entityBean) in it with TransactionAttributeType.REQUIRES_NEW. This is supposed to suspend ejbA's transaction, start ejbD's transaction, commit it, and resume ejbA's transaction. Again, the results here are the same, by the time ejbC is called, no record is in the db. I'm also seeing a problem with this solution where the ejbA call just stops when I call the storeRecord method. No exceptions are thrown, but I don't see the EJB processing any more lines from the file even though there are more lines. It seems to abort the EJB call and roll back the transaction with no indications. Not sure if this is a GlassFish v2ur1 bug or not.
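For reference, a sketch of the approach-2 helper bean as described, using Java EE 5 style annotations (MyEntity stands in for the actual entity class):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EjbD {

    @PersistenceContext
    private EntityManager em;

    // runs in its own transaction: the caller's transaction is suspended,
    // and the row is committed when this method returns
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void storeRecord(MyEntity entityBean) {
        em.persist(entityBean);
    }
}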
How can I ensure that the data is stored into the db in ejbA so when ejbC is called, it can find the record in the db? BTW, there are other things going on in ejbA which I don't necessarily want to commit. I'd like to only persist the entityBeans I'm trying to store into the db.
ejbA
ejbTimer called (txn starts)
read file contents
for each line
create entity bean
persist entity bean to db
make HTTP call to ejbB, passing id
<see ejbC>
return (txn ends)
ejbB
Processes data based on id
Looks up JMS queue for ejbC
Passes ejbC the id
ejbC
ejb method called (txn starts)
read record based on received id
modify record and persist
return (txn ends)
When using a transaction isolation of "read-committed", no other transaction can see changes made by an uncommitted transaction. You can specify a lower transaction isolation, but this will have no effect on PostgreSQL: its "most chaotic" behaviour is read-committed, so you simply can't do it with PostgreSQL. And nor should you:
ejbA should not call ejbB via HTTP. Servlets should only be used to service remote client requests, not to provide internal services. ejbA should connect and invoke ejbB directly. If the method in ejbB is annotated TransactionAttributeType.MANDATORY or TransactionAttributeType.REQUIRED, ejbB will see the entity created by ejbA because it is under the same transaction.
In ejbB, persisting is unnecessary: simply load the entity with an EntityManager and make the change.
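A sketch of what that direct call could look like (the entity type, field, and method names are assumptions; REQUIRED is the default transaction attribute and is shown only for clarity):

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class EjbB {

    @PersistenceContext
    private EntityManager em;

    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void process(long id) {
        // same transaction (and persistence context) as ejbA, so the
        // not-yet-committed entity is visible here
        MyEntity entity = em.find(MyEntity.class, id);
        entity.setStatus("PROCESSED"); // managed entity: no persist() needed, flushed at commit
    }
}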
If you are completely at the mercy of this HTTP mechanism, you could use bean-managed transactions, but this is a terrible way of doing things:
read file contents
for each line
start transaction
create entity bean
persist entity bean to db
commit transaction
make HTTP call
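A rough sketch of that bean-managed-transaction variant, assuming a UserTransaction injected into ejbA (the entity type, the line parser, and the HTTP helper are placeholders):

import java.util.List;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class EjbA {

    @Resource
    private UserTransaction utx;

    @PersistenceContext
    private EntityManager em;

    public void processFile(List<String> lines) throws Exception {
        for (String line : lines) {
            utx.begin();
            MyEntity entityBean = parseLine(line);    // placeholder parser
            em.persist(entityBean);
            utx.commit();                             // row is now visible to other transactions

            callEjbBOverHttp(entityBean.getId());     // placeholder for the existing HTTP call
        }
    }
}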
There are two things I did to solve this. 1) I added remote methods to ejbB which perform the same functionality as the HTTP call. This way the call to ejbB is within the same transaction.
The root of the problem, which Glenn Lane pointed out, is that the transaction continues in the call from ejbA to ejbB, but it ends when ejbB sends the JMS message to ejbC: the transaction doesn't extend to the call to ejbC. This means that when the id gets to ejbC, ejbC is in a new transaction, one which cannot see the data persisted to the db by ejbA.
2) I stored the entity bean to the db in ejbA in a special state. The entity bean is stored when the timer call to ejbA returns (and therefore the txn commits). When ejbA is called again by the timer, it looks for records in the db in this special state and then calls ejbB. ejbB sends a JMS message. When ejbC gets the id, it finds the record in the db (as it was committed in a previous txn), changes its state, and continues processing.
I'm struggling with JTA, two-phase commit, and JMS and JDBC transactions. The idea is (in short) to
receive a message on a queue
perform some database operations
acknowledge the message, when the db operations have been successful
So I get the XAQueueConnectionFactory, create the XAQueueSession, create a receiver from the session, and set a message listener.
Inside the listener, in the onMessage method, I begin my user transaction, do the JDBC stuff, and commit the transaction, or do a rollback if something went wrong. Now I expected (aka "hoped") that the message would be acknowledged when the user transaction commits.
But that doesn't happen, the messages are still on the queue and get redelivered again and again.
What am I missing? I double-checked the session and the acknowledge mode really is "SESSION_TRANSACTED" and getTransacted returns true.
I don't have a Java EE container, no Spring, no message-driven beans. I use the standalone JTA implementation Bitronix.
You don't really need XA for this. Just follow your algorithm: receive the message, perform the DB operations, then acknowledge the message... Literally, that's the solution. (And instead of a transacted session, you would probably just choose explicit CLIENT_ACKNOWLEDGE.) If your application fails while performing the DB operations, don't ack the JMS message and it will be redelivered. If your app fails after the DB txn and before the ack, then the message will be redelivered -- but you can detect this (the redelivered flag will be set to true on the message), and you can decide whether to reprocess the message or not, based on the state of the database.
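A minimal sketch of that approach with a synchronous consumer (assumes the usual javax.jms imports and an already-configured, non-XA connectionFactory and queue; doDatabaseWork is a placeholder for the plain JDBC work):

Connection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
MessageConsumer consumer = session.createConsumer(queue);
connection.start();

Message message = consumer.receive();
try {
    if (message.getJMSRedelivered()) {
        // possibly already processed: check the database state before redoing the work
    }
    doDatabaseWork(message);   // plain JDBC, committed on its own
    message.acknowledge();     // ack only after the DB work succeeded
} catch (Exception e) {
    // no ack: ask the session to redeliver unacknowledged messages
    session.recover();
}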
When you say that inside the listener, you begin your user transaction, this seems to hint that you are using Bean Managed Transaction (BMT). Is there a good reason for doing so?
If you used Container Managed Transaction (CMT), what you want would come for free.
As far as I remember, it is not possible with BMT, since the UserTransaction will not be able to participate in the transaction created for the message. But you might want to double-check with the Java EE spec.
Edit:
Sorry, I realized too late that you are not using a Java EE container.
Are you sure that the user transaction that you start inside the listener is part of the transaction started for the message? It seems that you start an independent transaction for the db work.
If you use no container, who provides the JMS implementation, i.e. XAQueueConnectionFactory etc?
I think with XA you shouldn't use a transacted session.
I'm entering the world of JTA due to the need for distributed transactions, and I'm uncertain about the differences between javax.jms.ConnectionFactory and javax.jms.XAConnectionFactory, or more accurately, how it can be that javax.jms.ConnectionFactory performed what I expected only javax.jms.XAConnectionFactory could do for me.
The details: I'm using Atomikos essentials as my transaction manager and my app is running on Apache Tomcat 6.
I'm running a small POC with a dummy app where I have my JMS provider (OpenMQ) registered as a JNDI resource.
<Resource name="jms/myConnectionFactory" auth="Container"
type="com.atomikos.jms.AtomikosConnectionFactoryBean"
factory="com.atomikos.tomcat.EnhancedTomcatAtomikosBeanFactory"
uniqueResourceName="jms/myConnectionFactory"
xaConnectionFactoryClassName="com.sun.messaging.XAConnectionFactory"
maxPoolSize="3"/>
And the strange issue is that in my code I do this:
Context ctx = new InitialContext();
ConnectionFactory queueConnectionFactory =
(ConnectionFactory)ctx.lookup("java:comp/env/jms/myQueueFactory");
javax.jms.Connection connection = queueConnectionFactory.createConnection();
Session session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
And later in the code I use this session in a UserTransaction and it performs flawlessly with two MessageProducers with either Commit or Rollback.
What I don't understand is how can it be that I'm using javax.jms.XAConnectionFactory.createConnection() method and I get a Session which does the job? What's javax.jms.XAConnectionFactory role?
I'll also add that I've looked at the source code of both classes (and javax.jms.BasicConnectionFactory) and I verified that the XA class does not override createConnection.
The core of the difference between ConnectionFactory and XAConnectionFactory is that the XAConnectionFactory creates XAConnections which create XASessions. XASessions represent the real difference because (to quote from the JMS JavaDocs):
The XASession interface extends the capability of Session by adding access to a JMS provider's support for the Java Transaction API (JTA) (optional). This support takes the form of a javax.transaction.xa.XAResource object.
In other words, the XASession is what gives the XA instances their transactional awareness. However, this specific implementation is optional, even for a fully compliant JMS provider. From the same JavaDoc:
An XAResource provides some fairly sophisticated facilities for interleaving work on multiple transactions, recovering a list of transactions in progress, and so on. A JTA aware JMS provider must fully implement this functionality. This could be done by using the services of a database that supports XA, or a JMS provider may choose to implement this functionality from scratch.
A client of the application server is given what it thinks is a regular JMS Session. Behind the scenes, the application server controls the transaction management of the underlying XASession.
In other words, the provider may require that you specify an XA or non-XA JMS resource, or, as would seem to be in your case, the provider may perform all the JTA plumbing transparently with what appears to be a regular JMS Session.
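For illustration, a rough sketch of what that plumbing looks like when done by hand: the XASession exposes an XAResource that a JTA transaction manager enlists in the global transaction (types come from javax.jms, javax.transaction and javax.transaction.xa; the xaConnectionFactory and transactionManager variables are assumed to be set up elsewhere).

XAConnection xaConnection = xaConnectionFactory.createXAConnection();
XASession xaSession = xaConnection.createXASession();
XAResource xaResource = xaSession.getXAResource();

transactionManager.begin();
Transaction tx = transactionManager.getTransaction();
tx.enlistResource(xaResource);            // JMS work now joins the global transaction

Session session = xaSession.getSession(); // plain Session view for producing/consuming
// ... send or receive messages, do XA-enlisted JDBC work ...

transactionManager.commit();              // two-phase commit across all enlisted resources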
Actually, none of the example code you provided would exercise XA functionality. If all that is required is that your messages are under syncpoint, then you can get by with 1-phase commit (1PC). However if you want, for example, that JMS messages and DB updates occur in a single coordinated unit of work then you would use 2-phase commit (2PC) which is XA. Coordinating across two message producers on the same transport provider does not require XA 2PC.
If you were using 2PC, then in addition to COMMIT and ROLLBACK you would be calling BEGIN somewhere in the code. The lack of that verb in your example is why I said it looks like you are not doing 2PC. The BEGIN call would communicate with the transaction manager to establish a transaction context across the participating resource managers. Then the COMMIT would cause the messages and the DB updates to finalize in one unit of work. The interesting thing here is that if you have only one participating resource manager, some transports will silently optimize you back down to 1PC. In that case it looks as though you are doing 2PC but are really getting 1PC. Since there is only one resource manager there is no loss of reliability in this optimization.
On the other hand, if you are doing 1PC you won't see any difference between the two types of connection factory. It would exhibit exactly the behavior you describe.
The last case to consider is that you use ConnectionFactory and try to call BEGIN. Since the non-XA connection factory cannot participate in a coordinated XA transaction, this call should fail.
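To make the BEGIN verb concrete, a sketch of what a genuine 2PC unit of work could look like with a JTA UserTransaction (the JNDI lookup name, the producer/session setup, and the XA-enlisted xaDataSource are assumptions for this environment):

UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

utx.begin();                                            // the "BEGIN" verb: establish the transaction context
try {
    producer.send(session.createTextMessage("hello"));  // XA-enlisted JMS work
    try (java.sql.Connection db = xaDataSource.getConnection();
         java.sql.PreparedStatement ps = db.prepareStatement("INSERT INTO log(msg) VALUES (?)")) {
        ps.setString(1, "hello");
        ps.executeUpdate();                             // XA-enlisted JDBC work
    }
    utx.commit();                                       // 2PC across the JMS and JDBC resource managers
} catch (Exception e) {
    utx.rollback();                                     // both the message and the insert are undone
    throw e;
}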