Is it possible to test the transactionality of a process? - java

I would like to be able to verify if each unit of work is done in its own transaction, or as part of a single global transaction.
I have a method (defined using Spring and Hibernate) which is of the form:
private void updateUser() {
    updateSomething();
    updateSomethingElse();
}
This is called from two places, the website when a user logs in and a batch job which runs daily. For the web server context, it will run with a transaction created by the web server. For the batch job, it must have one transaction for each user, so that if something fails during this method, the transaction is rolled back. So we have two methods:
@Transactional(propagation=Propagation.REQUIRES_NEW)
public void updateUserCreateNewTransaction() {
    updateUser();
}

@Transactional(propagation=Propagation.REQUIRED)
public void updateUserWithExistingTransaction() {
    updateUser();
}
updateUserCreateNewTransaction() is called from the batch job, and updateUserWithExistingTransaction() from the web server context.
This works. However, it is very important that this behaviour (of the batch) not be changed, so I wish to create a test that tests this behaviour. If possible, I would like to do this without changing the code.
So some of the options open to me are:
1. Count the transactions opened in the database during the run of the batch job.
2. Change the data in some subtle way so that at least one user update fails in the updateSomethingElse() method, and check that the updateSomething() for that user has not taken place.
3. Code review.
Option 1 is very database dependent, and how do I guarantee that Hibernate won't create a transaction anyway? Option 2 seems better, but is very complex to set up. Option 3 is not really practical, because we would need to do one for every release.
So, does anyone have a method which would enable me to test this code, preferably through a system test or integration test?

I would try to set up a test in a unit test harness using an in-memory HSQLDB and EasyMock (or some other mocking framework).
You could then have the updateSomething() method really write to the HSQLDB but use the mocking framework to mock the updateSomethingElse() method and throw a RuntimeException from it. When that is done, you can query the HSQLDB to verify that the updateSomething() changes were rolled back.
It will require some plumbing to set up the HSQLDB and transaction manager, but once that is done you have a test without external dependencies that can be re-run whenever you like.
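A minimal sketch of such a test (JUnit 4 with Spring's test support; the UserService and UserDao beans, the context file, and the assertion helper are hypothetical, and the collaborator behind updateSomethingElse() is assumed to be replaced in the test context by a mock that throws):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.fail;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:hsqldb-test-context.xml") // hypothetical context wiring HSQLDB + mocks
public class BatchUserUpdateTransactionTest {

    @Autowired
    private UserService userService; // hypothetical bean exposing updateUserCreateNewTransaction()

    @Autowired
    private UserDao userDao; // hypothetical read-only access used for assertions

    @Test
    public void firstUpdateIsRolledBackWhenSecondUpdateFails() {
        // The mocked collaborator behind updateSomethingElse() throws a RuntimeException.
        try {
            userService.updateUserCreateNewTransaction();
            fail("expected the simulated failure to propagate");
        } catch (RuntimeException expected) {
            // simulated failure
        }
        // If the batch really opens one transaction per user, the updateSomething()
        // write must have been rolled back together with the failure.
        assertFalse(userDao.somethingWasUpdatedFor("test-user")); // hypothetical assertion helper
    }
}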

Another thing you can do is configure logging output for Hibernate's transactions:
http://docs.jboss.org/hibernate/core/3.3/reference/en/html/session-configuration.html#configuration-logging
If you set the log4j category org.hibernate.transaction to TRACE level, it should log everything Hibernate does transaction-wise during a unit test.
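For example, in a log4j 1.x properties configuration (the extra org.hibernate.SQL category is just a commonly paired setting, not required):

# Trace Hibernate's transaction handling
log4j.logger.org.hibernate.transaction=TRACE
# Optionally also log the SQL statements being issued
log4j.logger.org.hibernate.SQL=DEBUG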

Related

JPA Repositories and blocking I/O

I'm having a problem where I need to perform several slow HTTP requests on a separate thread after having written to the database using a JpaRepository. The problem is that doActualJob() blocks while waiting for a series of futures to resolve. This seems to prevent the underlying Hibernate session from closing, causing the application to run out of connections shortly after.
How do I write this function so the database connection isn't kept open while doing the blocking I/O? Is it even possible using JpaRepositories, or do I need to use a lower level API like EntityManager/SessionFactory?
@Service
class SomeJobRunner {

    private final SomeJobRepository mSomeJobRepository; // extends JpaRepository

    @Autowired
    public SomeJobRunner(final SomeJobRepository someJobRepository) {
        mSomeJobRepository = someJobRepository;
    }

    @Async
    public void doSlowJob(final long someJobId) {
        SomeJob someJob = mSomeJobRepository.findOne(someJobId);
        someJob.setJobStarted(Instant.now());
        mSomeJobRepository.saveAndFlush(someJob);

        doActualJob(); // synchronous job doing several requests using Unirest in series

        someJob = mSomeJobRepository.findOne(someJobId);
        someJob.setJobEnded(Instant.now());
        mSomeJobRepository.saveAndFlush(someJob);
    }
}
Well, non-blocking database I/O is not possible in the Java/JDBC world in a standard way. To put it simply, your Spring Data repository will eventually use a JPA ORM implementation (the likes of Hibernate), which in turn uses JDBC to interact with the database, and JDBC is blocking by nature. There is work being done on this by Oracle (the Asynchronous Database Access API) to provide an API similar to JDBC but non-blocking, which they intend to propose as a standard. There is also an exciting parallel effort by the Spring team, namely R2DBC (Reactive Relational Database Connectivity). They have actually integrated it with Spring Data as well (link), so that may help you integrate it into your solution. A good tutorial by Spring on this can be found here.
EDIT: As of 2022, Hibernate has a reactive option as well.
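For a rough flavour of the Spring Data R2DBC side mentioned above, a sketch only (the reactive repository and service are hypothetical, and the rest of the application would need to move to a reactive stack for this to pay off):

import java.time.Instant;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.stereotype.Service;
import reactor.core.publisher.Mono;

// Hypothetical reactive counterpart of the blocking SomeJobRepository.
interface SomeJobReactiveRepository extends ReactiveCrudRepository<SomeJob, Long> {}

@Service
class ReactiveJobStarter {

    private final SomeJobReactiveRepository repository;

    ReactiveJobStarter(SomeJobReactiveRepository repository) {
        this.repository = repository;
    }

    // Marks the job as started; the connection is only used while the reactive
    // pipeline executes, not while other (blocking) work is going on.
    Mono<SomeJob> markStarted(long someJobId) {
        return repository.findById(someJobId)
                .map(job -> {
                    job.setJobStarted(Instant.now());
                    return job;
                })
                .flatMap(repository::save);
    }
}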
I would suggest writing to the database using a separate JTA transaction. To do so, define a method like:
@Transactional(Transactional.TxType.REQUIRES_NEW)
public void saveJobStart(final long someJobId) {
    SomeJob someJob = mSomeJobRepository.findOne(someJobId);
    someJob.setJobStarted(Instant.now());
    mSomeJobRepository.saveAndFlush(someJob);
}
Of course it is not quite the same. If doActualJob() fails, in your original code the database won't persist the start date; in my proposal it will. To compensate, you need to remove the start date in a catch block in doSlowJob, within a new transaction, and then rethrow the exception.
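A minimal sketch of how that could fit together (the JobStateWriter bean is hypothetical; the REQUIRES_NEW methods are placed on a separate bean so they are invoked through the Spring transactional proxy rather than by self-invocation):

import java.time.Instant;
import javax.transaction.Transactional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
class SomeJobRunner {

    @Autowired
    private JobStateWriter mJobStateWriter;

    @Async
    public void doSlowJob(final long someJobId) {
        mJobStateWriter.saveJobStart(someJobId);      // commits immediately, connection released
        try {
            doActualJob();                            // long blocking HTTP work, outside any transaction
        } catch (RuntimeException e) {
            mJobStateWriter.clearJobStart(someJobId); // compensate in another new transaction
            throw e;
        }
        mJobStateWriter.saveJobEnd(someJobId);
    }

    private void doActualJob() { /* Unirest calls in series */ }
}

@Service
class JobStateWriter {

    @Autowired
    private SomeJobRepository mSomeJobRepository;

    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void saveJobStart(final long someJobId) {
        SomeJob someJob = mSomeJobRepository.findOne(someJobId);
        someJob.setJobStarted(Instant.now());
        mSomeJobRepository.saveAndFlush(someJob);
    }

    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void clearJobStart(final long someJobId) {
        SomeJob someJob = mSomeJobRepository.findOne(someJobId);
        someJob.setJobStarted(null); // remove the start date again after a failure
        mSomeJobRepository.saveAndFlush(someJob);
    }

    @Transactional(Transactional.TxType.REQUIRES_NEW)
    public void saveJobEnd(final long someJobId) {
        SomeJob someJob = mSomeJobRepository.findOne(someJobId);
        someJob.setJobEnded(Instant.now());
        mSomeJobRepository.saveAndFlush(someJob);
    }
}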

DropwizardAppRule ClassRule does not release connections after test completion

I have some integration tests for a RESTful application written in Java using Dropwizard. The test suite runs fine until it eventually hangs and I get an exception from C3P0PooledConnectionPoolManager: java.sql.SQLNonTransientConnectionException: Too many connections
Using C3P0Registry.getPooledDataSources() I identified that the connections are not being cleaned up after each test, but I at first misdiagnosed the problem as not closing my Jersey response entities, as detailed here: https://jersey.github.io/documentation/latest/client.html#d0e5255
Many of the tests are checking for just a status code, so it made sense to me that this would be happening (In the link it states: "If you don't read the entity, then you need to close the response manually by response.close()"). However, after fixing this problem and ensuring that each entity was closed, I'm still getting persistent connections between tests.
I'm using DropwizardAppRule as a @ClassRule, and after it is created at the beginning of each test run I can close the client associated with the rule at the end, but the connections remain open. My C3P0 connection pool gains 3 connections per test class that is run, and I can't figure out a way to stop it growing with each new class that is added.
ClassRule snippet:
@ClassRule
public static final DropwizardAppRule<MicroServiceCoreConfiguration> RULE =
    new DropwizardAppRule<>(App.class, ResourceHelpers.resourceFilePath("./config.yml"));
Will update with any information that is requested!

How to make Spring Cloud Contract reset WireMock before or after each test

We are writing a Spring Boot application and use the Cloud Contract WireMock support to stub a backing service. Our test class is annotated like so:
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@AutoConfigureWireMock(port = 0)
public class Tests...
This works fine except for one thing: we found out that Spring Cloud does not seem to reset WireMock between tests, in particular it does not delete stubs, so tests are not isolated properly. Of course, you can accomplish this yourself with a @Before method containing a reset(), but we wonder whether this is intentional. Is there an option that we have overlooked, or an additional annotation one has to use?
After all, it is not possible to define stubs in a @BeforeClass method (stubs that would be gone if a reset were always performed), so we wonder what speaks against doing it out of the box?
Configure the Spring Boot property:
wiremock:
  reset-mappings-after-each-test: true
ref: https://github.com/spring-cloud/spring-cloud-contract/commit/67119e62f6b30da56b06aade87ec3ba61de7fd24
I ended up injecting WireMockServer and running wireMockServer.resetAll() in a @BeforeEach method.
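A minimal sketch of that approach with JUnit 5, assuming (as in the answer above) that @AutoConfigureWireMock exposes the WireMockServer as a bean in the test context:

import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.BeforeEach;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.wiremock.AutoConfigureWireMock;

@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureWireMock(port = 0)
class StubResetTest {

    @Autowired
    private WireMockServer wireMockServer;

    @BeforeEach
    void resetStubs() {
        // Removes all stub mappings and clears the request log before each test.
        wireMockServer.resetAll();
    }
}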
The WireMock server can be reset at any time, removing all stub mappings and deleting the request log. If you’re using either of the JUnit rules this will happen automatically at the start of every test case. However you can do it yourself via a call to WireMock.reset() in Java or sending a POST request with an empty body to http://<host>:<port>/__admin/reset.
To reset just the stub mappings leaving the request log intact send a DELETE to http://<host>:<port>/__admin/mappings.
Hope this is useful.

How can I use Hibernate/JPA to tell the DB who the user is before inserts/updates/deletes?

Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the begin transaction or some other way to execute SQL or a stored procedure to tell the db what user is executing the inserts/updates/deletes that are about to happen in the transaction before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions
to be reused by multiple application users. Users authenticate
themselves to a middle-tier application, which uses a single identity
to log in to the database and maintains all the user connections. In
this model, application users are users who are authenticated to the
middle tier of an application, but who are not known to the
database.....in these situations, the application typically connects
as a single database user and all actions are taken as that user.
Because all user sessions are created as the same user, this security
model makes it very difficult to achieve data separation for each
user. These applications can use the CLIENT_IDENTIFIER attribute to
preserve the real application user identity through to the database.
From the Spring/JPA side of things, see section 8.2 at the link below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in
certain ways that aren't easily supported using standard connection
properties. One example would be to set certain session properties in
the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter
explains how to use a ConnectionPreparer to accomplish this. The
example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        String webAppUser = ...; // from the Spring Security context or wherever
        CallableStatement cs = connection.prepareCall(
            "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass
{
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, it seems to me that nothing in section 8.2 (unlike 8.1) is Oracle-specific other than the statement executed, and the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL. Postgres, for example, has the following, so I don't see why anyone using Postgres couldn't take the same approach:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
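For illustration, a hypothetical Postgres-flavoured variant of the same idea (a sketch only, not from the original answer; SET ROLE is just one option, and the lookupWebAppUser() helper is a placeholder):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Component
@Aspect
public class PostgresRoleConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        String webAppUser = lookupWebAppUser();
        try (Statement stmt = connection.createStatement())
        {
            // SET ROLE takes an identifier, not a bind parameter, so the value must be
            // validated/quoted before it is interpolated into the statement.
            stmt.execute("SET ROLE \"" + webAppUser.replace("\"", "\"\"") + "\"");
        }
        return connection;
    }

    private String lookupWebAppUser()
    {
        // Placeholder: in a real application this would come from the Spring Security
        // context or an equivalent source.
        return "app_user";
    }
}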
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and just get the login ID of the user from the security principal (e.g. the Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
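A minimal sketch of what such an audited entity could look like (Envers plus the two timestamp annotations; the entity and field names are illustrative only):

import java.util.Date;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.hibernate.annotations.CreationTimestamp;
import org.hibernate.annotations.UpdateTimestamp;
import org.hibernate.envers.Audited;

@Entity
@Audited // Envers records every insert/update/delete in a <table>_AUD revision table
public class AuditedUser {

    @Id
    @GeneratedValue
    private Long id;

    private String updatedBy; // set from the security principal before saving

    @CreationTimestamp
    private Date createdAt; // populated by Hibernate on insert

    @UpdateTimestamp
    private Date updatedAt; // refreshed by Hibernate on every update

    // getters/setters omitted
}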
I think what you are looking for is a transactional event listener:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the @EventListener annotation. If you need to bind it to the transaction use @TransactionalEventListener. When you do so, the listener will be bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // publish the event after the insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since
we can’t honor the required semantics. It is however possible to
override that behaviour by setting the fallbackExecution attribute of
the annotation to true.
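A sketch of that override, reusing the CreationEvent type from the example above (the listener class itself is hypothetical):

import org.springframework.stereotype.Component;
import org.springframework.transaction.event.TransactionalEventListener;

@Component
public class CreationEventListener {

    // With fallbackExecution = true the listener also runs when the event is
    // published outside of any transaction; otherwise it runs after commit.
    @TransactionalEventListener(fallbackExecution = true)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // react to the creation here
    }
}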

How do I do nested transactions in hibernate using only one connection?

Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class, called DoBusinessLogicSevice, that does some business logic.
The DoBusinessLogicSevice opens a new transaction and, using a couple of collaborators, does loads of stuff.
Problem to solve: Regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want the audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
TransactionDefinition transactionDefinition = new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
new TransactionTemplate(platformTransactionManager, transactionDefinition).execute(new TransactionCallbackWithoutResult() {
    @Override
    protected void doInTransactionWithoutResult(TransactionStatus status) {
        // do stuff
    }
});
Everything works fine. When a part of DoBusinessLogicSevice throws unexpected exception its transaction is rolled back, but the AuditHttpCommunicationInterceptor manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new db connection. So for every DoBusinessLogicSevice call I need 2 db connections.
Basically, I want to know how to make TransactionTemplate "suspend" the current transaction and reuse its connection for the new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by Spring), but I cannot do that because I cannot access all the HTTP fields there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform, and I don't believe every database platform is capable of handling nested transactions.
I really don't see what the big deal is with taking a connection from the pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
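For reference, a sketch of what NESTED propagation looks like in Spring (an illustration only: it relies on JDBC savepoints, so it generally needs a DataSourceTransactionManager rather than a Hibernate/JPA transaction manager, and nested work is still discarded if the outer transaction rolls back, so on its own it would not make the audit survive a business-logic rollback):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AuditService {

    // Joins the caller's transaction but sets a savepoint first; if this method fails,
    // only the work since the savepoint is rolled back, not the whole outer transaction.
    @Transactional(propagation = Propagation.NESTED)
    public void writeAudit(String message) {
        // persist the audit record here
    }
}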
