neo4j transaction not rolling back - java

I am using Neo4j 2.1.7 with Java.
try (Transaction transaction = this.graphDatabaseService.beginTx())
{
    Node user = this.graphDatabaseService.createNode();
    user.setProperty("userId", userId);
    transaction.failure();
}
So I am getting the GraphDatabaseService object, creating a new transaction, and marking it for rollback. According to their javadocs:
void failure()
Marks this transaction as failed, which means that it will
unconditionally be rolled back when close() is called. Once this
method has been invoked, it doesn't matter if success() is invoked
afterwards -- the transaction will still be rolled back.
But I see that the node gets created no matter what. I tried throwing an exception. I also tried not calling transaction.success() at all. Still I see that the changes get committed and not rolled back. I am not sure of this behaviour and would like an explanation. Thanks.
If you must know, I am trying to build a commit() function with nested transactions such that if any operation fails within an inner transaction, the parent transaction must fail too. However, in the process I found that no matter what I do, the transactions get committed.
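For reference, this is roughly the nesting I am after with the embedded API (a minimal sketch; my understanding is that in embedded Neo4j 2.x an inner beginTx() returns a nested placebo transaction, so marking it as failed should doom the outer transaction too):
try (Transaction outer = graphDatabaseService.beginTx()) {
    try (Transaction inner = graphDatabaseService.beginTx()) {
        Node user = graphDatabaseService.createNode();
        user.setProperty("userId", userId);
        inner.failure(); // should mark the whole unit for rollback
    }
    outer.success(); // ignored once the inner transaction has been marked as failed
}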
Update 1:
The embedded version of Neo4j works fine. The REST version is causing this trouble. I am using this package for REST:
<dependency>
    <groupId>org.neo4j</groupId>
    <artifactId>neo4j-rest-graphdb</artifactId>
    <version>2.0.1</version>
</dependency>

No transactions over REST, at least not for that old version.
There are only transactions over HTTP with the new Cypher Endpoint.
That library is discontinued; I recommend that you use e.g. the JDBC driver or the new implementation that comes with Spring Data REST.
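If you go the transactional-Cypher-over-HTTP route, a minimal sketch with the Neo4j JDBC driver looks roughly like this; the JDBC URL and the {1} parameter placeholder follow the older neo4j-jdbc driver and are assumptions that may differ in your driver version:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// explicit transaction over the Cypher HTTP endpoint via JDBC
try (Connection con = DriverManager.getConnection("jdbc:neo4j://localhost:7474/")) {
    con.setAutoCommit(false); // start an explicit transaction
    try (PreparedStatement ps = con.prepareStatement("CREATE (u:User {userId: {1}})")) {
        ps.setString(1, userId);
        ps.executeUpdate();
    }
    con.rollback(); // nothing is persisted; con.commit() would persist the node instead
}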

Related

How to handle data inconsistency

My application is on Spring MVC 4.2 with a Postgres database. In this application, we consume an API written in Spring Boot that has its own database (MySQL).
@Transactional(rollbackFor = Exception.class)
public void updateOrder(Order order) {
    // This update is part of my application
    update(order); // STEP 1
    // This is not part of our application;
    // it happens in the API written in Spring Boot.
    Integer transactionId = updateOrderWorkflow(order); // STEP 2
    // updateOrderWithTransactionId is part of my application;
    // it updates the order with the transaction id
    updateOrderWithTransactionId(order, transactionId); // STEP 3
}
If STEP 3 fails, then I have to roll back the changes made in the consumed API. For rolling back I have written a compensation/rollback method, which restores the old workflow status.
Now the problem scenario:
One process (PROCESS_1) is working on the above updateOrder() method and reaches STEP 3. Before PROCESS_1 fails in STEP 3, another process (PROCESS_2) enters updateOrder() and performs STEP 2. Now PROCESS_1 fails in STEP 3 and calls the compensation/rollback method, but PROCESS_2 completes STEP 3 successfully.
This creates data inconsistency. How to handle this situation?
It sounds like the problem is that updateOrderWorkflow in step 2 exposes the changes made by the transaction of PROCESS_1 before it has been committed.
What I would do is:
Change updateOrderWorkflow in step 2 so that it doesn't show uncommitted changes. Any changes it makes have to be made in a temporary space of some kind, associated with the transaction ID.
Add an API endpoint to this same API, to commit or roll back the changes of a transaction. If committed, the changes in the temporary space are made visible globally. If rolled back, the changes are discarded.
Use the new commit API endpoint in your updateOrder method, and the rollback API in your rollback handler.
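A hedged sketch of what the caller side could then look like (workflowClient and its updateOrderWorkflow/commitTransaction/rollbackTransaction methods are hypothetical names for the new endpoints, not an existing API):
@Transactional(rollbackFor = Exception.class)
public void updateOrder(Order order) {
    update(order);                                                      // STEP 1
    Integer transactionId = workflowClient.updateOrderWorkflow(order);  // STEP 2, staged in the temporary space
    try {
        updateOrderWithTransactionId(order, transactionId);             // STEP 3
        workflowClient.commitTransaction(transactionId);                // make the staged remote changes visible
    } catch (RuntimeException e) {
        workflowClient.rollbackTransaction(transactionId);              // discard the staged remote changes
        throw e;
    }
}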

Reactor and database transaction

I'm working on a Java project, using Spring Boot as the container, and to compensate for the fact that the connection to the Oracle database is not reactive, we decided to use Reactor to parallelize operations such as inserts or updates that do not depend on one another (no foreign key relations among them).
Sometimes during the execution of the ParallelFlux, it complains about losing the connection:
org.springframework.jdbc.CannotGetJdbcConnectionException: Failed to obtain JDBC Connection; nested exception is java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 1001ms.
or it simply does not find data that was persisted before the ParallelFlux call but not yet committed.
This behavior is random, using the same data as input.
What I'm wondering is whether the database transaction is thread safe and whether the ParallelFlux is treating it correctly.
Can anyone explain to me how the database transaction is passed by Spring Boot to Reactor and how the latter handles it?
EDIT:
I did further tests. I monitored the connections on the DB and noticed that the original transaction, the one where data is saved before the ParallelFlux block, is suspended, and for each call of the method inside the ParallelFlux block a new connection is opened, waiting for the original one to end. All this ends up as a lock on Oracle, which is the reason for the org.springframework.jdbc.CannotGetJdbcConnectionException: the pool becomes saturated.
Knowing that, I changed the @Transactional annotation of the method called by the ParallelFlux to:
@Transactional(propagation = Propagation.REQUIRED)
Doing this results in an org.springframework.jdbc.CannotGetJdbcConnectionException at the first call of the method, as expected.
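For context, Spring binds the transaction and its JDBC connection to a ThreadLocal, so it does not follow work onto Reactor's worker threads; each parallel rail ends up outside the original transaction. A minimal sketch of giving every rail its own programmatic transaction instead (transactionTemplate and repository are assumed beans, not taken from the original code):
// give every parallel rail its own, explicitly managed transaction
Flux.fromIterable(orders)
    .parallel()
    .runOn(Schedulers.boundedElastic()) // Schedulers.elastic() on older Reactor versions
    .map(order -> transactionTemplate.execute(status -> repository.save(order)))
    .sequential()
    .blockLast(); // block only after the surrounding transaction has committed its own data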
Change of Tags
Someone asked to remove "reactor" from the tag list of this question, but that is not correct: I'm using Reactor, not spring-webflux, and this part is in the backend, not inside a REST service.
<dependency>
    <groupId>io.projectreactor</groupId>
    <artifactId>reactor-core</artifactId>
    <type>jar</type>
</dependency>

How do I do nested transactions in hibernate using only one connection?

Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class that does some business logic, called DoBusinessLogicSevice.
The DoBusinessLogicSevice opens a new transaction and using couple of collaborators does loads of stuff.
Problem to solve: Regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
TransactionDefinition transactionDefinition = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
new TransactionTemplate(platformTransactionManager, transactionDefinition).execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // do stuff
    }
});
Everything works fine. When a part of DoBusinessLogicSevice throws unexpected exception its transaction is rolled back, but the AuditHttpCommunicationInterceptor manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new db connection. So for every DoBusinessLogicSevice call I need 2 db connections.
Basically, I want to know the solution to the problem: how to make TransactionTemplate "suspend" the current transaction and reuse its connection for the new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by spring) but I cannot do that because I cannot access all http fields in there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform, and I don't believe every database platform is capable of handling nested transactions.
I really don't see what the big deal is with taking a connection from the pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
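For completeness, NESTED propagation in Spring is implemented with a JDBC savepoint on the same connection, so with a plain DataSourceTransactionManager a sketch looks like this (hedged: it will not work if the writes go through Hibernate's session, which is exactly the limitation mentioned in the update above):
// NESTED reuses the outer transaction's connection via a JDBC savepoint
DefaultTransactionDefinition nested = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_NESTED);
new TransactionTemplate(platformTransactionManager, nested).execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // audit writes here; rolling this back only rolls back to the savepoint,
        // while the outer transaction (and its connection) keeps going
    }
});
Note that a NESTED transaction still commits or rolls back with its parent, so on its own it does not give you an always-persisted audit; it only saves the second connection.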

Manual Rollback of transactions in Seam

This is a similar problem to Forcing a transaction to rollback on validation error
The scenario is this:
A user edits a page; the transaction is set to MANUAL, so it is only committed to the database if we call flush. Now the user wants to cancel the changes. Easy, as you haven't flushed yet.
Now consider this scenario: the user edits a page with lots of ajax on it. Some of these ajax callbacks require database queries (e.g. a RichFaces suggestion box, etc.). Some validation is done as well that requires database lookups. The problem is that Hibernate will automatically issue a flush when you run a query. So the user doesn't press the save button (which would flush the transaction); he presses the cancel button. What do you do now?
If you don't do anything, the changes will be written to the database, which is not what the user expects.
You can throw an exception that is annotated with
@ApplicationException(rollback=true)
That would roll back the transaction. You can then redirect to another page. However, here I've come across another problem: on some pages you redirect to, you get a lazy initialisation exception. I've specified
<exception class="com.mycomp.BookingCancelException">
    <end-conversation before-redirect="true"/>
    <redirect view-id="/secure/Bookings.xhtml">
        <message severity="INFO">#{messages['cancel.rollback']}</message>
    </redirect>
</exception>
in pages.xml, so the conversation should end before the redirect happens. A new conversation should start (with a new transaction), but that doesn't seem to happen in all cases. Why?
I've read somewhere else that you can simply use
Transaction.instance().rollback();
This would be preferable as you don't have to go via exceptions (the redirect always takes a long time when Seam handles exceptions), but the problem is that the transaction isn't actually rolled back. I couldn't figure out why. If I check the status of the transaction, it says that it is not in the rollback state.
How would you best handle cancel requests? The pure MANUAL flush doesn't work in this case. You could work with detached entities, but the page contains several linked entities, so this gets messy.
Update: I've now discovered that throwing the ApplicationException doesn't roll back the transaction in all cases. So I am rather confused now.
Update 2: Of course rolling back transactions will not work when you have a page where you use ajax to update values. Each transaction only covers one request. So if you do e.g. 5 edits with ajax requests, rolling back a transaction will only roll back the changes from the last ajax request and not from the earlier 4.
So the solution is really to use the flush mode MANUAL.
There are a few things that will cause a flush even if you specify MANUAL.
a query in an ajax request can trigger a flush - use setFlushMode(FlushMode.COMMIT) on the query to avoid this (see the sketch after this list).
Persisting an entity can trigger a flush depending on the ID generation strategy used (e.g. if you use IDENTITY). You can work around this by using cascades. If you need to create entities during the edit which don't have any real relationship with the main entity you are editing, just add them to a list and persist all entities in that list when you do a save.
When you start a nested conversation or another bean joins the conversation, the flush mode on that session is set back to AUTO unless you specify @Begin(join=true,flushMode=FlushModeType.MANUAL)
You might want to specify MANUAL as the default mode in components.xml:
<core:manager concurrent-request-timeout="10000"
    conversation-id-parameter="cid" conversation-timeout="600000" default-flush-mode="MANUAL"/>
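A minimal sketch of that first point, assuming a Hibernate Session is at hand and an entity called Booking (just an illustrative name); with JPA the equivalent is Query.setFlushMode(FlushModeType.COMMIT):
// keep an ajax lookup from flushing pending changes while in MANUAL mode
List<Booking> suggestions = session
    .createQuery("from Booking b where b.name like :name")
    .setParameter("name", prefix + "%")
    .setFlushMode(FlushMode.COMMIT) // don't flush dirty entities before running this query
    .list();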
Have you tried
@Begin(flushMode=MANUAL)
someRandomValidationMethodHere(){ ... }
or setting
<core:manager conversation-timeout="120000" default-flush-mode="manual" />
in components.xml?
It might be helpful to know that you can tell the ajax validation components not to flush, using the attribute bypassUpdates. Here's an article explaining it.
You can manage the transaction yourself and commit or roll back depending on which button was pressed.
protected void doGet(HttpServletRequest request,
        HttpServletResponse response, String buttonPressed)
        throws ServletException, IOException {
    try {
        // Begin unit of work
        HibernateUtil.getSessionFactory()
                .getCurrentSession().beginTransaction();
    } catch (Exception ex) {
        HibernateUtil.getSessionFactory()
                .getCurrentSession().getTransaction().rollback();
        throw new ServletException(ex);
    } finally {
        if ("SAVE".equals(buttonPressed)) {
            // Process request and render page...
            // End unit of work: the save button commits the changes
            HibernateUtil.getSessionFactory()
                    .getCurrentSession().getTransaction().commit();
        } else {
            // Any other button (e.g. cancel) rolls the changes back
            HibernateUtil.getSessionFactory()
                    .getCurrentSession().getTransaction().rollback();
        }
    }
}

Commit while Open new Transaction within Transaction

Using EJB 3.0, WebLogic 11g, JDBC
I am invoking a method which runs remotely in another deployed EAR.
The method in the remote deployment is invoked, but it is annotated with
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
The problem is that all the database work I do before the remote method is invoked won't commit until the remote method has finished.
What I am willing to do is commit, so the "before" logic takes effect, and then continue normally when I get back from the remote call.
Any idea?
Some code to explain:
@CallByReference
@Stateless(mappedName = "ejb/OperatorProccessBean")
@Local({ OperatorProccessBeanLocal.class })
@Remote({ OperatorProccessBeanRemote.class })
public class OperatorProccessBean implements OperatorProccessBeanLocal,
        OperatorProccessBeanRemote
{
    ...
    SBNDispatchBeanRemote SBNDispatchBean = (SBNDispatchBeanRemote) context.lookup("ejb/SBNDispatchBean#com.mirs.sbn.dispatch.SBNDispatchBeanRemote");
    if (SBNDispatchBean == null)
    {
        logger.error(TAG + " SBNDispatchBean is null");
    }
    else
    {
        // until here I want all my data to be committed without waiting for the upcoming remote method to finish
        SBNDispatchBean.updateSubscriberInBlockingList(...);
    }
    ...
}
Now the method updateSubscriberInBlockingList() is annotated with
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
I want the data to be committed before that method is invoked.
Thanks in advance,
ray.
Now the method updateSubscriberInBlockingList() is annotated with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
I want the data to be committed before that method is invoked.
Given that you are using container-managed transactions, it is not possible. The rationale behind this is that when the container is already performing a transaction, starting a new transaction results in the original being suspended. When the new transaction has committed, the original transaction is resumed.
This behavior is not configurable, for the EJB container and the JTA transaction manager are expected to adhere to the behavior specified in the JTA specification, which is derived from the X/Open DTP transaction model. In the X/Open DTP model, if a new transaction is started while another is in progress, the current one is suspended and resumed at a later point in time. It should be noted that no transaction model I know of (I haven't studied them all) allows committing the current transaction and starting a new one; I have only seen nested transactions or suspended transactions being supported in the various transaction processing models.
If you want to have the work committed, you must have the existing transaction context terminated completely, so that the existing transaction will commit, and then start the new transaction.
Put the "before remote call" logic in a separate bean method annotated with REQUIRES_NEW as well. You will thus have three transactions :
one for the main method (but which won't do anything until th remote call is done);
one for the logic before the remote call;
one for the remote call.
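A minimal sketch of that arrangement under container-managed transactions (PreparationBean, PreparationBeanLocal and prepareData are hypothetical names used only for illustration):
@Stateless
public class PreparationBean implements PreparationBeanLocal
{
    // transaction 2: starts and commits on its own, before the remote call happens
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void prepareData()
    {
        // the "before remote call" database work goes here
    }
}
In OperatorProccessBean, which keeps transaction 1, the sequence then becomes:
preparationBean.prepareData();                        // commits when it returns
SBNDispatchBean.updateSubscriberInBlockingList(...);  // transaction 3 (REQUIRES_NEW on the remote side)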
