I am writing an integration test in JUnit for a Message Driven Pojo (MDP):
@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
}
where repo is a Spring Data repository.
My unit test:
@Test
public void test() {
    // send message
    sendJMSMessage();
    // verify DB state
    Entity e = repo.findOne("ID");
    assertThat(e, is(notNullValue()));
}
Now, the thing is that the processMessage() method is executed in a different thread than the test() method, so I figured I needed to somehow wait for processMessage() to complete before verifying the state of the DB. The best solution I could find was based on a CountDownLatch, so now the methods look like this:
@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
    latch.countDown();
}
and the test:
@Test
public void test() {
    // set the CountDownLatch
    CountDownLatch latch = new CountDownLatch(1);
    JMSProcessor.setLatch(latch);
    // send message
    sendJMSMessage();
    try {
        latch.await();
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    // verify DB state
    Entity e = repo.findOne("ID");
    assertThat(e, is(notNullValue()));
}
So I was very proud of myself, and then I ran the test and it failed: repo.findOne("ID") returned null. As a first reaction I set a breakpoint at that line and proceeded with debugging. During the debugging session, repo.findOne("ID") actually returned the entity inserted by the @JmsListener-annotated method.
After scratching my head for a while, here's my current theory: since the Spring Data repository is accessed from two different threads, it gets two different EntityManager instances, and therefore the two threads run in different transactions. Even though there is some synchronization via the CountDownLatch, the transaction bound to the thread executing the @JmsListener-annotated method has not committed yet when the JUnit @Test method starts a new transaction and tries to retrieve the entity.
So my questions are:
1. Is there a way for one thread to wait for the commit of another?
2. Can two threads share one transaction in such a synchronized context (i.e., the two threads would not access the EntityManager simultaneously)?
3. Is my testing approach nonsense, and is there a better way of doing this?
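For reference, one way to tackle the first question is to release the latch from an after-commit callback instead of from the listener body, so the test thread wakes up only after the insert is committed. A minimal sketch, assuming the listener really runs inside a Spring-managed transaction (TransactionSynchronizationAdapter is used for compatibility with older Spring versions; since Spring 5.3 the TransactionSynchronization interface has default methods and can be implemented directly):

import org.springframework.transaction.support.TransactionSynchronizationAdapter;
import org.springframework.transaction.support.TransactionSynchronizationManager;

@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
    // count down only once the insert is visible to other transactions
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            latch.countDown();
        }
    });
}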
Related
I'm working on an application which uses Kafka to consume messages from multiple topics, persisting data as it goes.
To that end I use a @Service class with a couple of methods annotated with @KafkaListener. Consider this:
@Transactional
@KafkaListener(topics = MyFirstMessage.TOPIC, autoStartup = "false", containerFactory = "myFirstKafkaListenerContainerFactory")
public void handleMyFirstMessage(ConsumerRecord<String, MyFirstMessage> record, Acknowledgment acknowledgment) throws Exception {
    MyFirstMessage message = consume(record, acknowledgment);
    try {
        doHandle(record.key(), message);
    } catch (Exception e) {
        TransactionInterceptor.currentTransactionStatus().setRollbackOnly();
    } finally {
        acknowledgment.acknowledge();
    }
}

@Transactional
@KafkaListener(topics = MySecondMessage.TOPIC, autoStartup = "false", containerFactory = "mySecondKafkaListenerContainerFactory")
public void handleMySecondMessage(ConsumerRecord<String, MySecondMessage> record, Acknowledgment acknowledgment) throws Exception {
    MySecondMessage message = consume(record, acknowledgment);
    try {
        doHandle(record.key(), message);
    } catch (Exception e) {
        TransactionInterceptor.currentTransactionStatus().setRollbackOnly();
    } finally {
        acknowledgment.acknowledge();
    }
}
Please disregard the stuff about setRollbackOnly, it's not relevant to this question.
What IS relevant is that the doHandle() methods in each listener perform inserts in a table, which occasionally fail because autogenerated keys turn out to be non-unique once the final commit is done.
What happens is that each doHandle() method increments the key column in its own little transaction, and only one of them will "win" that race. The other will fail during commit with a unique-constraint violation.
What is the best practice to handle this? How do I "synchronize" the transactions so they execute like pearls on a string instead of all at once?
I'm thinking of using some kind of semaphore or lock to serialize things, but that smells like a solution with many pitfalls. If there were a general pattern or framework to help with this problem, I would be much more comfortable implementing it.
See the documentation.
Using @Transactional for the DB and a KafkaTransactionManager in the listener container is similar to using a ChainedKafkaTransactionManager (configured with both TMs) in the container. The DB transaction is committed, followed by the Kafka one, when the listener exits normally.
When the listener throws an exception, both transactions are rolled back in the same order.
The setRollbackOnly is definitely relevant to this question, since you are not rolling back the Kafka transaction when you do that.
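For illustration, here is a rough configuration sketch of the chained arrangement described above (bean names and generic types are illustrative, not from the original post; note that ChainedKafkaTransactionManager is deprecated in recent spring-kafka versions in favor of the container's native transaction support):

@Bean
public ChainedKafkaTransactionManager<Object, Object> chainedTm(
        KafkaTransactionManager<Object, Object> kafkaTm,
        DataSourceTransactionManager dbTm) {
    // with this ordering the DB transaction commits first, then Kafka;
    // on failure both are rolled back in the same order
    return new ChainedKafkaTransactionManager<>(kafkaTm, dbTm);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> myFirstKafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        ChainedKafkaTransactionManager<Object, Object> chainedTm) {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // the container starts/ends the chained transaction around each listener call
    factory.getContainerProperties().setTransactionManager(chainedTm);
    return factory;
}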
@Transactional
public void save(String myIds)
{
    synchronized (this)
    {
        List<mydata> data = getDataToSaveOrUpdate(myIds); // returns the new data list and updates old data
        repository.saveAll(data);
        logger.info("request processed");
    }
    logger.debug("exiting the method");
}
In this method, if I send two identical requests 0.5 seconds apart, the second request's getDataToSaveOrUpdate() starts reading from the repository before the first request's saveAll() has finished its job.
Note: one thing I noticed is that it works properly once I remove @Transactional.
Maybe what you need is LockModeType.PESSIMISTIC_WRITE.
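A minimal sketch of that suggestion with Spring Data JPA (the repository and method names are illustrative): the @Lock annotation makes the query issue a SELECT ... FOR UPDATE, so a second transaction touching the same rows blocks until the first one commits.

public interface MyDataRepository extends JpaRepository<mydata, Long> {

    // pessimistic write lock: held until the surrounding transaction commits
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select d from mydata d where d.id in :ids")
    List<mydata> findAllForUpdate(@Param("ids") List<Long> ids);
}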
Processing of the second request starts as soon as the first thread exits the synchronized block, but the transaction might not be committed by then; the transaction is only committed after the whole method execution completes.
One possible solution is to add the synchronized keyword to the method itself:
@Transactional
public synchronized void save(String myIds) {
    List<mydata> data = getDataToSaveOrUpdate(myIds); // returns the new data list and updates old data
    repository.saveAll(data);
    logger.info("request processed");
    logger.debug("exiting the method");
}
One needs to be very careful when using the synchronized keyword. I don't know your exact need; maybe it is a valid usage for your scenario.
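Note that even with a synchronized method, the @Transactional proxy commits only after the method returns, so the lock is still released before the commit. One way around that is to drive the transaction programmatically inside the lock; a sketch, assuming a TransactionTemplate is injected and @Transactional is removed from the method:

public synchronized void save(String myIds) {
    transactionTemplate.execute(status -> {
        List<mydata> data = getDataToSaveOrUpdate(myIds); // returns the new data list and updates old data
        repository.saveAll(data);
        return null;
    });
    // by the time the lock is released, the commit has already happened
    logger.info("request processed");
}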
There is a method foo() in a controller which has to wait until another method bar() is triggered before continuing execution.
@GetMapping("/foo")
public void foo() {
    doSomething();
    // wait until method bar() is triggered
    doAnotherSomething();
}

@GetMapping("/bar")
public void bar() {
    // make foo() continue executing after being called
}
My solution is to save a status flag in a database/cache; while foo() is waiting, its thread loops, checking whether the status has changed.
However, this solution blocks the request thread for seconds.
Is there any way to make the foo() method run asynchronously, so it won't block thread execution?
This question is too broad. Yes, you can use DeferredResult to finish a web request later. But doAnotherSomething() should actually do its work asynchronously; otherwise you still end up using a thread, just not one from the app server's pool. That would be a waste, since you could simply increase the app server's pool size and be done with it. "Offloading" work from it to another pool is a wild goose chase.
You achieve truly asynchronous execution when you wait on more than one action in a single thread. For example, with asynchronous file or socket channels you can read from multiple files/sockets at once. If you're using a database, the database driver must support asynchronous execution.
Here's an example of how to use the MongoDB async driver:
@GetMapping("/foo")
public DeferredResult<ResponseEntity<?>> foo() {
    DeferredResult<ResponseEntity<?>> res = new DeferredResult<>();
    doSomething();
    doAnotherSomething(res);
    return res;
}

void doAnotherSomething(DeferredResult<ResponseEntity<?>> res) {
    collection.find().first(new SingleResultCallback<Document>() {
        @Override
        public void onResult(final Document document, final Throwable t) {
            // process (document)
            res.setResult(ResponseEntity.ok("OK")); // finish the request
        }
    });
}
You can use a CountDownLatch to wait until the dependent method is executed. For the sake of simplicity, I have used a static field. Make sure both methods have access to the same CountDownLatch object. A ThreadLocal<CountDownLatch> could also be considered for this use case.
private static CountDownLatch latch = new CountDownLatch(1);

@GetMapping("/foo")
public void foo() throws InterruptedException {
    doSomething();
    // wait until method bar() is triggered
    latch.await();
    doAnotherSomething();
}

@GetMapping("/bar")
public void bar() {
    // make foo() continue executing after being called
    latch.countDown();
}
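For completeness, this idea can be combined with the DeferredResult suggestion from the other answer so that no servlet thread blocks while waiting; a sketch (the pending field and its single-waiter handling are illustrative simplifications):

private final AtomicReference<DeferredResult<String>> pending = new AtomicReference<>();

@GetMapping("/foo")
public DeferredResult<String> foo() {
    doSomething();
    DeferredResult<String> result = new DeferredResult<>();
    pending.set(result); // supports a single waiter, purely for illustration
    return result;       // the servlet thread is released here, nothing blocks
}

@GetMapping("/bar")
public void bar() {
    DeferredResult<String> result = pending.getAndSet(null);
    if (result != null) {
        doAnotherSomething();     // the work foo() was waiting on
        result.setResult("done"); // completes the parked /foo request
    }
}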
I have a method that is going to call a stored function. I want it to do its work asynchronously. This is what I have, but it seems like doWork() is never started, because when I call getDao.deleteAll(), the stored function does not run.
@Transactional
public void delete()
{
    final Session session = (Session) entityManager.getDelegate();
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    executorService.execute(new Runnable()
    {
        @Override
        public void run()
        {
            LOGGER.warn("starting");
            session.doWork(new Work()
            {
                @Override
                public void execute(Connection connection) throws SQLException
                {
                    try
                    {
                        CallableStatement purgeArchived = connection.prepareCall("{call deleteAll()}");
                        purgeArchived.execute();
                    }
                    catch (SQLException exception)
                    {
                        LOGGER.warn("Failed to purge archive points. Reason: " + exception);
                    }
                }
            });
            LOGGER.warn("stopping");
        }
    });
    executorService.shutdown();
}
I see the logger has logged "starting", but it never got to "stopping". Why is this happening?
Be aware that @Transactional is moot when you have a separate thread, as transactions are typically thread-bound.
You will need to get a new EntityManager from the factory inside run().
Also consider @Async, which is much cleaner. Again, be aware of transactionality with @Async:
@Async and @Transactional: not working
As a general rule of thumb, if you want to make some work async, treat it as a single unit of work and a separate transaction.
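A minimal sketch of the @Async route (the service name is illustrative; assumes @EnableAsync on a configuration class). The method lives on its own bean so the @Async and @Transactional proxies actually apply, and it gets its own EntityManager and transaction on the worker thread:

@Service
public class PurgeService {

    @PersistenceContext
    private EntityManager entityManager;

    @Async
    @Transactional
    public void purgeArchived() {
        // JPA 2.1 stored-procedure call, executed in this thread's own transaction
        entityManager.createStoredProcedureQuery("deleteAll").execute();
    }
}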
How can I use a transaction manager (such as Bitronix, JBoss TS, or Atomikos) in Java SE (not Java EE or Spring) to support the following use case?
Let's assume we have the following class:
public class Dao {
    public void updateDatabase(DB db) {
        // connect to db
        // run a SQL statement
    }
}
and we create a Java Runnable from that, like the following:
public class MyRunnable implements Runnable {
    Dao dao;
    DB db;

    public MyRunnable(Dao dao, DB db) {
        this.dao = dao;
        this.db = db;
    }

    @Override
    public void run() {
        dao.updateDatabase(db);
    }
}
Now in our Service layer, we have another class:
public class Service {
    public void updateDatabases() {
        // BEGIN TRANSACTION
        ExecutorService es = Executors.newFixedThreadPool(10);
        ExecutorCompletionService<Void> ecs = new ExecutorCompletionService<>(es);
        List<Future<Void>> futures = new ArrayList<>();
        Dao dao = new Dao();
        futures.add(ecs.submit(new MyRunnable(dao, new DB("db1")), null));
        futures.add(ecs.submit(new MyRunnable(dao, new DB("db2")), null));
        futures.add(ecs.submit(new MyRunnable(dao, new DB("db3")), null));
        for (int i = 0; i < futures.size(); ++i) {
            try {
                ecs.take().get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        }
        // END TRANSACTION
    }
}
And the client can be a Servlet or any other multi-threaded environment:
public class MyServlet extends HttpServlet {
    @Override
    protected void service(final HttpServletRequest request, final HttpServletResponse response) throws IOException {
        Service service = new Service();
        service.updateDatabases();
    }
}
What would be the correct code for the BEGIN TRANSACTION and END TRANSACTION parts? Is this even feasible? If not, what needs to change? The requirement is to keep the updateDatabases() method concurrent (since it will access multiple databases at the same time) and transactional.
It seems like this can be done with Atomikos using SubTxThread:
// first start a tx
TransactionManager tm = ...
tm.begin();
Waiter waiter = new Waiter();
// the code that calls the first EIS; defined by you
SubTxCode code1 = ...
// the associated thread
SubTxThread thread1 = new SubTxThread(waiter, code1);
// the code that calls the second EIS; defined by you
SubTxCode code2 = ...
// the associated thread
SubTxThread thread2 = new SubTxThread(waiter, code2);
// start each thread
thread1.start();
thread2.start();
// wait for completion of all calls
waiter.waitForAll();
// check result
if (waiter.getAbortCount() == 0) {
    // no failures -> commit tx
    tm.commit();
} else {
    tm.rollback();
}
The XA specification mandates that all XA calls be executed in the same thread context. To elaborate on the reason: the commit could otherwise be called before any of the transaction branches are even created in your threads.
If you are just interested in how to execute those three calls in an XA transaction in JBoss TS:
First make sure your -ds.xml specifies your datasource as an <xa-datasource>. Then:
InitialContext ctx = new InitialContext(parms);
UserTransaction ut = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
ut.begin();
//Some Transactional Code
ut.commit();
Keep in mind that with the code above you would not be able to use the ExecutorService to parallelize the calls.
Side note: I don't know a lot about it, but JTS/OTS claims to allow multiple threads to share one transaction. I think it does this by propagating transaction context, similar to WS-Coordination/WS-Transaction, and it is supported by JBossTS. Could be a red herring, but if you're not under a time crunch it might be worth researching.
How about you:
1. BEGIN TRANSACTION: connect to all 3 databases in your Service,
2. pass the Connection objects (instead of the db objects) to MyRunnable,
3. END TRANSACTION: invoke commit and close on all 3 connections in your Service.
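A rough sketch of that suggestion with plain JDBC (the URLs and the Connection-based Dao method are hypothetical). Note this is best-effort only, not a true XA transaction: a failure between the individual commit calls can still leave the databases inconsistent, which is exactly what a JTA transaction manager protects against:

public void updateDatabases() {
    List<Connection> connections = new ArrayList<>();
    ExecutorService es = Executors.newFixedThreadPool(3);
    try {
        // BEGIN TRANSACTION: open all connections with auto-commit off
        for (String url : Arrays.asList("jdbc:db1", "jdbc:db2", "jdbc:db3")) {
            Connection c = DriverManager.getConnection(url);
            c.setAutoCommit(false);
            connections.add(c);
        }
        List<Future<Object>> futures = new ArrayList<>();
        for (Connection c : connections) {
            futures.add(es.submit(() -> {
                new Dao().updateDatabase(c); // hypothetical Connection-based overload
                return null;
            }));
        }
        for (Future<Object> f : futures) {
            f.get(); // surface any worker failure before committing anything
        }
        // END TRANSACTION: commit everything only if all workers succeeded
        for (Connection c : connections) {
            c.commit();
        }
    } catch (Exception e) {
        for (Connection c : connections) {
            try { c.rollback(); } catch (Exception ignored) { }
        }
        throw new RuntimeException(e);
    } finally {
        for (Connection c : connections) {
            try { c.close(); } catch (Exception ignored) { }
        }
        es.shutdown();
    }
}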