Reactor and database transaction - Java

I'm working on a Java project, using Spring Boot as the container, and since the connection to the Oracle database is not reactive, we decided to use Reactor to parallelize operations such as inserts or updates that do not depend on one another (no foreign key relations among them).
Sometimes during the execution of the ParallelFlux, it complains about losing the connection:
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 1001ms.
or it simply does not find data that was persisted before the ParallelFlux call but not yet committed.
This behavior is random, using the same data as input.
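Roughly, the pattern looks like this (a simplified sketch, not the real code; class and method names are illustrative):

import java.util.List;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

// Simplified, illustrative sketch only. The outer method runs inside a Spring-managed JDBC
// transaction; the ParallelFlux then calls another @Transactional method from Reactor worker
// threads, and each of those calls may acquire its own Hikari connection.
@Service
public class BatchService {

    // hypothetical collaborator whose methods are themselves annotated with @Transactional
    public interface RowDao {
        void saveHeader(long headerId);
        void insertRow(long rowId);
    }

    private final RowDao rowDao;

    public BatchService(RowDao rowDao) {
        this.rowDao = rowDao;
    }

    @Transactional
    public void process(long headerId, List<Long> rowIds) {
        rowDao.saveHeader(headerId); // written in the outer transaction, not yet committed

        Flux.fromIterable(rowIds)
            .parallel()
            .runOn(Schedulers.parallel())   // execution moves off the calling thread
            .doOnNext(rowDao::insertRow)    // independent inserts, no FK relations between them
            .sequential()
            .blockLast();                   // wait for all rails to complete
    }
}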
What I'm wondering is whether the database transaction is thread safe and whether the ParallelFlux is handling it correctly.
Can anyone explain how the database transaction is passed by Spring Boot to Reactor and how the latter handles it?
EDIT:
I did some more tests. I monitored the connections on the DB and noticed that the original transaction, the one where data is saved before the ParallelFlux block, is suspended, and that for each call of the method inside the ParallelFlux block a new connection is opened, waiting for the original one to end. All this ended up as a lock on Oracle, which is the reason for the org.springframework.jdbc.CannotGetJdbcConnectionException: connection saturation.
Knowing that, I changed the @Transactional annotation of the method called by the ParallelFlux to:
@Transactional(propagation = Propagation.REQUIRED)
Doing this results in an org.springframework.jdbc.CannotGetJdbcConnectionException at the first call of the method, as expected.
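For what it's worth, whether a Spring-managed transaction is visible on the worker threads can be checked with TransactionSynchronizationManager; a small diagnostic sketch (illustrative only, not my actual code):

import org.springframework.transaction.support.TransactionSynchronizationManager;

import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

// Diagnostic sketch: logs whether a Spring transaction is bound to the current thread.
// Called from inside a @Transactional method it prints true on the caller's thread,
// while the ParallelFlux rails typically print false because they run on other threads.
public class TxProbe {

    public static void logTxState(String where) {
        System.out.printf("%s | thread=%s | txActive=%s | txName=%s%n",
                where,
                Thread.currentThread().getName(),
                TransactionSynchronizationManager.isActualTransactionActive(),
                TransactionSynchronizationManager.getCurrentTransactionName());
    }

    public static void probe() {
        logTxState("before parallel block");
        Flux.range(1, 4)
            .parallel()
            .runOn(Schedulers.parallel())
            .doOnNext(i -> logTxState("inside rail " + i))
            .sequential()
            .blockLast();
    }
}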
Change of Tags
Someone asked to remove "reactor" from this question's tag list, but that is not correct: I'm using Reactor, not spring-webflux, and this part runs in the backend, not inside a REST service.
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <type>jar</type>
</dependency>

Related

How to handle data inconsistency

My application is on Spring MVC 4.2 with a Postgres database. In my application, we are consuming an API written in Spring Boot that has its own database (MySQL).
@Transactional(rollbackFor = Exception.class)
public void updateOrder(Order order) {
    // This insert is part of my application
    update(order); // **STEP - 1**

    // This is not part of our application;
    // it happens in the API written in Spring Boot.
    Integer transactionId = updateOrderWorkflow(order); // **STEP - 2**

    // updateOrderWithTransactionId is part of my application:
    // it updates the order with the transaction Id
    updateOrderWithTransactionId(order, transactionId); // **STEP - 3**
}
If STEP-3 fails, then I have to roll back the changes made in the consumed API. For rolling back I have written a compensation/rollback method, which rolls back to the old workflow status.
Now the problem scenario:
If one process (PROCESS_1) is working on the above updateOrder() method and reaches STEP-3, but before it fails in STEP-3, another process (PROCESS_2) enters the updateOrder() method and performs STEP-2. Now PROCESS_1 fails in STEP-3 and calls the compensation/rollback method, but PROCESS_2 completes STEP-3 successfully.
This creates data inconsistency. How to handle this situation?
It sounds like the problem is that updateOrderWorkflow in step 2 exposes the changes made by the transaction of PROCESS_1 before it has been committed.
What I would do is:
Change updateOrderWorkflow in step 2 so that it doesn't show uncommitted changes. Any changes it makes have to be made in a temporary space of some kind, associated with the transaction ID.
Add an API endpoint to this same API, to commit or roll back the changes of a transaction. If committed, the changes in the temporary space are made visible globally. If rolled back, the changes are discarded.
Use the new commit API endpoint in your updateOrder method, and the rollback API in your rollback handler.
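A rough sketch of how the calling side could then look (WorkflowClient, commitWorkflow and rollbackWorkflow are hypothetical names for the proposed endpoints, just to illustrate the flow):

import org.springframework.transaction.annotation.Transactional;

public class OrderService {

    // Hypothetical wrapper around the external API; commitWorkflow/rollbackWorkflow stand
    // for the proposed commit/rollback endpoints, which would have to be added to that API.
    public interface WorkflowClient {
        Integer updateOrderWorkflow(Order order);
        void commitWorkflow(Integer transactionId);
        void rollbackWorkflow(Integer transactionId);
    }

    public interface OrderRepository {
        void update(Order order);
        void updateOrderWithTransactionId(Order order, Integer transactionId);
    }

    public static class Order { /* fields omitted */ }

    private final WorkflowClient workflowClient;
    private final OrderRepository orderRepository;

    public OrderService(WorkflowClient workflowClient, OrderRepository orderRepository) {
        this.workflowClient = workflowClient;
        this.orderRepository = orderRepository;
    }

    @Transactional(rollbackFor = Exception.class)
    public void updateOrder(Order order) {
        orderRepository.update(order);                                      // STEP - 1 (local tx)
        Integer transactionId = workflowClient.updateOrderWorkflow(order);  // STEP - 2: remote changes
                                                                            // stay in the temporary space
        try {
            orderRepository.updateOrderWithTransactionId(order, transactionId); // STEP - 3 (local tx)
            workflowClient.commitWorkflow(transactionId);   // make the remote changes visible
        } catch (RuntimeException e) {
            workflowClient.rollbackWorkflow(transactionId); // discard the remote temporary changes
            throw e;                                        // local transaction rolls back as well
        }
    }
}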

neo4j transaction not rolling back

I am using Neo4j 2.1.7 with Java.
try (Transaction transaction = this.graphDatabaseService.beginTx())
{
    Node user = this.graphDatabaseService.createNode();
    user.setProperty("userId", userId);
    transaction.failure();
}
So I am getting the GraphDatabaseService object, creating a new transaction, and marking it for rollback. According to their javadocs:
void failure()
Marks this transaction as failed, which means that it will unconditionally be rolled back when close() is called. Once this method has been invoked, it doesn't matter if success() is invoked afterwards -- the transaction will still be rolled back.
But I see that the node gets created no matter what. I tried throwing an exception. I also tried not calling transaction.success() at all. Still I see that the changes get committed and not rolled back. I am not sure of this behaviour and would like an explanation. Thanks.
If you must know, I am trying to build a commit() function with nested transactions such that if any operation fails within the inner transactions, the parent transaction must fail too. However, in the process I found that no matter what I do, the transactions get committed.
Update 1:
The embedded version of Neo4j works fine. The REST version is causing this trouble. I am using this package for REST:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-rest-graphdb</artifactId>
    <version>2.0.1</version>
</dependency>
There are no transactions over REST, at least not for that old version.
There are only transactions over HTTP with the new Cypher endpoint.
That library is discontinued; I recommend that you use e.g. the JDBC driver or the new implementation that comes with Spring Data REST.
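For example, with the JDBC driver a rollback is just a plain JDBC transaction. A sketch, assuming the 2.x Neo4j JDBC driver is on the classpath (the connection URL and credentials are placeholders; check the documentation of the driver version you actually use):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: an explicit JDBC transaction against Neo4j that is rolled back, so the CREATE
// is discarded. The URL below is an assumption for the 2.x Neo4j JDBC driver.
public class Neo4jJdbcRollbackExample {

    public static void main(String[] args) throws Exception {
        try (Connection connection =
                DriverManager.getConnection("jdbc:neo4j://localhost:7474/", "neo4j", "password")) {
            connection.setAutoCommit(false); // start an explicit transaction

            try (Statement statement = connection.createStatement()) {
                statement.executeUpdate("CREATE (u:User {userId: 'someUser'})");
            }

            connection.rollback(); // nothing is persisted
        }
    }
}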

Mixing JTA and JDBC transactions

I have a Java SE(!) scenario with JMS and JPA where I might need distributed transactions as well as "regular" JDBC transactions. I have to listen to a queue which sends service requests, persist a log entry on receiving, process the request, and update the log after the request has been processed. The message shall only be acknowledged if the request has been processed successfully.
The first idea was to only use JTA (provided by Bitronix). But there I face two problems:
no log will be persisted if the request can't be processed
the request won't be processed if the log can't be updated (unlikely but yet possible)
So the other idea is to create and update the log with regular JDBC transactions. Only the entity manager(s) for the request transaction(s) would join the user transactions, and the entity managers for creating and updating the log would commit directly.
Is it possible to "mix" JTA and JPA on a single persistence unit? Or are there existing patterns for these kinds of JMS and JDBC transactions?
I actually solved my problem with a slightly different approach. Instead of "mixing" JTA and JDBC transactions, I used suspend and resume to work with different user transactions.
The task is still the same: I start a (JTA) user transaction that contains some JMS and JDBC work (receiving a message, performing some database operations). In the middle of that workflow I want to write a message log, but that logging shall not be rolled back when the "outer" transaction fails.
So the solution is, in pseudo code:
transactionManager.begin();
doSomeJdbcStuff();

// suspend the main transaction; suspend() returns it so it can be resumed later
Transaction main = transactionManager.suspend();

transactionManager.begin();  // <- now a new, independent transaction is created and active!
doSomeLogging();
transactionManager.commit(); // the log is committed regardless of what happens to the main transaction

// continue with the main transaction
transactionManager.resume(main);
doSomeMoreJdbcStuff();
transactionManager.commit();

Play 2.0 Attempting to obtain a connection from a pool that has already been shutdown

I have Akka actors running in a Play 2 application. A list of POJOs is retrieved from the database and passed along in a message to the actors. When an actor starts processing these objects, it throws this exception. I guess it tries to read data from the DB because of Ebean's lazy loading. This happens when running in test cases; I haven't tested it in a normal application environment.
Attempting to obtain a connection from a pool that has already been shutdown
at com.avaje.ebeaninternal.server.transaction.TransactionManager.createQueryTransaction(TransactionManager.java:356)
at com.avaje.ebeaninternal.server.core.DefaultServer.createQueryTransaction(DefaultServer.java:2021)
at com.avaje.ebeaninternal.server.core.OrmQueryRequest.initTransIfRequired(OrmQueryRequest.java:241)
at com.avaje.ebeaninternal.server.core.DefaultServer.findList(DefaultServer.java:1468)
at com.avaje.ebeaninternal.server.core.DefaultBeanLoader.loadBean(DefaultBeanLoader.java:360)
at com.avaje.ebeaninternal.server.core.DefaultServer.loadBean(DefaultServer.java:526)
at com.avaje.ebeaninternal.server.loadcontext.DLoadBeanContext.loadBean(DLoadBeanContext.java:143)
at com.avaje.ebean.bean.EntityBeanIntercept.loadBean(EntityBeanIntercept.java:548)
at com.avaje.ebean.bean.EntityBeanIntercept.preGetter(EntityBeanIntercept.java:638)
at models.MemberInfo._ebean_get_type(MemberInfo.java:4)
at models.MemberInfo.getType(MemberInfo.java:232)
at actors.MessageWorker.doSendToIOS(MessageWorker.java:161)
at actors.MessageWorker.onReceive(MessageWorker.java:97)
at akka.actor.UntypedActor$$anonfun$receive$1.apply(UntypedActor.scala:154)
at akka.actor.UntypedActor$$anonfun$receive$1.apply(UntypedActor.scala:153)
at akka.actor.Actor$class.apply(Actor.scala:311)
at akka.actor.UntypedActor.apply(UntypedActor.scala:93)
at akka.actor.ActorCell.invoke(ActorCell.scala:619)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:196)
at akka.dispatch.Mailbox.run(Mailbox.scala:178)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:505)
at akka.jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:259)
at akka.jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
at akka.jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
at akka.jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
Although I'm not sure if it's relevant for you, I'll tell my story. I had the same error message coming up when running my test-cases, without using actors.
First note that when a Play application is stopped, its data sources are closed.
Since many of my test cases require a running Application in scope, I was using the WithApplication helper around each test case. The problem in my case was that my DB-access object was a singleton (a Scala object) that initialized its DataSource only once. Since that object was never re-instantiated between test cases, the closed DataSource remained there, resulting in the mentioned error.
The solution in my case was to make sure the DataSource was re-created between test cases.
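In Java terms the anti-pattern and the fix look roughly like this (a sketch only, not the actual Play/Ebean code; DataSourceProvider is a hypothetical indirection for "ask the currently running application for its pool"):

import javax.sql.DataSource;

// Sketch of the underlying issue: a pool cached once in a static field outlives the test's
// application, so later tests still see the pool that the stopped application already closed.
public class MemberDao {

    // Anti-pattern: initialized once per JVM and never refreshed between test cases.
    // private static final DataSource DATA_SOURCE = lookupDataSourceSomehow();

    private final DataSourceProvider provider;

    public MemberDao(DataSourceProvider provider) {
        this.provider = provider;
    }

    public DataSource dataSource() {
        // Fix: resolve the pool from the currently running application on every use
        // (or re-create the DAO per test case), so a fresh pool is picked up after a restart.
        return provider.current();
    }

    // Hypothetical indirection standing in for the application's datasource lookup.
    public interface DataSourceProvider {
        DataSource current();
    }
}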

How do I do nested transactions in hibernate using only one connection?

Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class that does some business logic, called DoBusinessLogicSevice.
The DoBusinessLogicSevice opens a new transaction and, using a couple of collaborators, does loads of stuff.
Problem to solve: Regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want the audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
TransactionDefinition transactionDefinition =
        new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);

new TransactionTemplate(platformTransactionManager, transactionDefinition)
        .execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // do stuff
            }
        });
Everything works fine. When a part of DoBusinessLogicSevice throws an unexpected exception, its transaction is rolled back, but the AuditHttpCommunicationInterceptor still manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new DB connection, so for every DoBusinessLogicSevice call I need two DB connections.
Basically, I want to know how to make TransactionTemplate "suspend" the current transaction and reuse its connection for the new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by Spring), but I cannot do that because I cannot access all the HTTP fields there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform, and I don't believe every database platform is capable of handling nested transactions.
I really don't see what the big deal is with taking a connection from the pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
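For reference, this is roughly what the NESTED variant looks like with a TransactionTemplate (a sketch; with a plain-JDBC DataSourceTransactionManager it maps to a savepoint on the same connection, but note that the nested changes are still discarded if the outer transaction later rolls back, which is exactly why a separate connection is the safer choice for the audit):

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionDefinition;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.DefaultTransactionDefinition;
import org.springframework.transaction.support.TransactionCallbackWithoutResult;
import org.springframework.transaction.support.TransactionTemplate;

// Sketch: PROPAGATION_NESTED reuses the caller's connection and is backed by a JDBC savepoint,
// so the inner scope can be rolled back without rolling back the outer transaction. It needs a
// transaction manager and driver that support savepoints; Hibernate-backed setups may reject it.
public class NestedAuditWriter {

    private final PlatformTransactionManager transactionManager;

    public NestedAuditWriter(PlatformTransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    public void writeAudit(final Runnable auditAction) {
        TransactionDefinition definition =
                new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_NESTED);

        new TransactionTemplate(transactionManager, definition)
                .execute(new TransactionCallbackWithoutResult() {
                    @Override
                    protected void doInTransactionWithoutResult(TransactionStatus status) {
                        auditAction.run(); // audit work inside the savepoint-backed nested scope
                    }
                });
    }
}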
