The current situation
I have a project using spring-boot, spring-cloud, and postgresql as a microservice system.
There are 2 services, say SA and SB, which operate on 2 RDBMS databases respectively, say DA and DB.
Now there is an operation that contains 2 sub-steps:
An HTTP client makes a request to service SA to save a record RA into DA.
Then SA sends a request to service SB to save a record RB into DB.
As a whole, the 2 sub-steps should either both commit or both roll back.
Analysis
If both operations were moved into a single service, Spring's distributed transaction support with JTA (based on the 2PC protocol) could solve it.
But here the 2 operations live in 2 services, which communicate via HTTP REST. Maybe MQ + compensation could solve this, but I am not sure whether there is a better approach.
The questions are
In this case, does JTA (based on the 2PC protocol) still work?
If not, what is the preferred solution?
Possible solutions I can think of:
Refactor the code to move the 2 operations into a single service.
Implement an MQ + compensation architecture to support this.
Maybe this project is helpful for you: https://github.com/apache/servicecomb-pack
Apache ServiceComb Pack is an eventual data consistency solution for microservice applications. ServiceComb Pack currently provides TCC and Saga distributed transaction coordination solutions, using Alpha as the transaction coordinator and Omega as the transaction agent.
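For a rough idea of what the Omega side tends to look like, here is a sketch only: the @SagaStart and @Compensable annotations come from the project's documentation, while the package names and the RecordA type are assumptions, so verify them against your ServiceComb Pack version.
// Sketch of a Saga participant written against the ServiceComb Pack Omega annotations.
// ASSUMPTIONS: the annotation packages below and the RecordA type are taken from the project
// docs / invented for illustration; check them against the version you actually use.
import org.apache.servicecomb.pack.omega.context.annotations.SagaStart;
import org.apache.servicecomb.pack.omega.transaction.annotations.Compensable;
import org.springframework.stereotype.Service;

@Service
public class RecordSagaService {

    // Starts the distributed saga; the sub-transactions below report to the Alpha coordinator.
    @SagaStart
    public void createRecords(RecordA ra) {
        saveRecordA(ra);          // local write into DA (service SA)
        callServiceBToSaveRb(ra); // REST call that triggers the write of RB into DB (service SB)
    }

    // If a later step fails, the coordinator invokes the named compensation method.
    @Compensable(compensationMethod = "deleteRecordA")
    public void saveRecordA(RecordA ra) {
        // insert RA into DA, e.g. via a Spring Data repository
    }

    public void deleteRecordA(RecordA ra) {
        // undo the insert of RA
    }

    private void callServiceBToSaveRb(RecordA ra) {
        // SB would annotate its own save method with @Compensable as well
    }
}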
Related
I have a requirement to update a database using events in a Spring Boot microservice. Each microservice has its own persistence layer, and the microservices communicate with each other using REST APIs.
Scenario:
I have two microservices: a Vendor microservice with a vendor DB and an Order microservice with an order DB. When a request is received by the vendor microservice, it should update the vendor DB and also add an order in the order DB, and all of this should be done in one transaction.
I cannot use a REST API for calling the vendor service to update the order. If any transaction fails, everything should be rolled back. How can I achieve this using events or something similar?
You can use a message queue like Kafka or RabbitMQ, because the issue you are facing is the classic two-phase-commit problem. You can use the transactional outbox pattern, which is widely used in projects that consist of microservices.
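A minimal sketch of the transactional outbox idea, assuming hypothetical Vendor/OrderRequest types and Spring Data repositories: the vendor update and an "order requested" outbox row are written in one local transaction, and a separate relay (a scheduled poller or a CDC tool such as Debezium) later publishes the outbox rows to Kafka or RabbitMQ.
// Transactional outbox sketch: the vendor row and the outbox row commit atomically in the
// vendor service's own database; a relay process publishes outbox rows to the broker later.
// Vendor, OrderRequest, OutboxEvent and both repositories are hypothetical names for illustration.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class VendorService {

    private final VendorRepository vendorRepository;
    private final OutboxEventRepository outboxRepository;

    public VendorService(VendorRepository vendorRepository, OutboxEventRepository outboxRepository) {
        this.vendorRepository = vendorRepository;
        this.outboxRepository = outboxRepository;
    }

    @Transactional // one local DB transaction: both rows commit, or both roll back
    public void updateVendorAndRequestOrder(Vendor vendor, OrderRequest request) {
        vendorRepository.save(vendor);
        // Instead of calling the order service directly, record the intent in an outbox table;
        // the order service reacts to the published event and writes the order in its own DB.
        outboxRepository.save(OutboxEvent.of("ORDER_REQUESTED", request));
    }
}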
Spring already provides a consistent programming model for transactions. You can use
@Transactional
to achieve this.
You can read more about the @Transactional annotation in the official documentation:
https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/transaction/annotation/Transactional.html
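For completeness, a minimal @Transactional sketch (OrderService and OrderRepository are made-up names); note that out of the box this only covers the resources of a single service and datasource, not a transaction spanning two microservices.
// Local Spring transaction: everything inside the method commits or rolls back together.
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    private final OrderRepository orderRepository; // hypothetical Spring Data repository

    public OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    @Transactional
    public void placeOrder(Order order) {
        orderRepository.save(order);
        // any further writes here join the same transaction; a runtime exception rolls it all back
    }
}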
I have two microservices. One is built with Spring Boot and the other uses plain JDBC APIs to perform database operations. I have to perform a database operation using these two services.
(i.e.
Insert using jdbc service,
Insert using spring-boot service,
Insert using jdbc service).
These operations should follow the ACID properties.
I have tried the saga pattern using the Axon framework and it works fine, but I want to do it using the 2PC protocol. I have tried 2PC in the jdbc service, but it only works for transactions that happen within that service. I also used the Atomikos framework in the Spring service, and it likewise works for that service only.
Is there any way to coordinate javax.transaction and springframework.transaction?
For the scenario described in the question, you should try using Oracle Transaction Manager for Microservices (MicroTx). It is a free product that comes with a transaction manager and client library for microservices written in Java and node.js. With this, you can create XA (2PC) transactions involving multiple microservices. Oracle MicroTx - https://www.oracle.com/database/transaction-manager-for-microservices
I am working on a project that makes a REST call to another service to save data in the DB. The data is very important, so we can't afford to lose anything.
If there is a problem in the network, the message will be lost, which must not happen. I have already looked into Spring Retry and saw that it is designed to handle temporary network glitches, which is not what I need.
I need a way to put the REST calls into some kind of queue (like ActiveMQ) and preserve the order (this is very important because I receive Save, Delete and Update REST calls).
Any ideas? Thanks.
If a standalone installation of ActiveMQ is not desirable, you can use an embedded in-memory broker.
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
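As a sketch, assuming the classic ActiveMQ 5.x client: connecting over the vm:// transport starts an in-process broker on first use, and broker.persistent=false keeps it purely in memory.
// Embedded in-process ActiveMQ broker via the vm:// transport (ActiveMQ 5.x "classic" client).
// With broker.persistent=false everything is in memory, so queued messages die with the JVM.
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerExample {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("rest.calls");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("SAVE /records/42")); // payload is illustrative
        } finally {
            connection.close();
        }
    }
}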
I am not sure whether you can configure the in-memory broker to use KahaDB. Of course, the scope of persistence will be limited to your application process, i.e. messages in the queue will not be available if the application is restarted. This is probably the most important reason why in-memory or hand-rolled approaches are not good enough.
If you must reinvent the wheel, then have a look at this topic, which talks about using Executors and BlockingQueue to implement your own pseudo-MQ.
Producer/Consumer threads using a Queue.
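If you really do roll your own, a bare-bones sketch with a BlockingQueue and a single consumer thread could look like this; one consumer is what preserves the Save/Delete/Update ordering, and the RestCall type and the empty send() method are placeholders.
// Minimal in-process "pseudo-MQ": one consumer thread drains a FIFO queue, so calls go out
// strictly in the order they were enqueued. No persistence: a JVM crash loses queued calls.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class RestCallQueue {

    public static final class RestCall {          // placeholder payload
        final String method;
        final String path;
        RestCall(String method, String path) {
            this.method = method;
            this.path = path;
        }
    }

    private final BlockingQueue<RestCall> queue = new LinkedBlockingQueue<RestCall>();
    private final ExecutorService consumer = Executors.newSingleThreadExecutor();

    public void start() {
        consumer.submit(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        send(queue.take());        // blocks until a call is available
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        });
    }

    public void enqueue(RestCall call) {
        queue.add(call);
    }

    private void send(RestCall call) {
        // perform the actual HTTP request here, with retries/error handling as needed
    }
}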
On a side note, a retry mechanism is not something provided by the MQ broker; it is the client that implements it, be it ActiveMQ's bundled client library or other messaging libraries such as Camel.
You can also review your current tech stack to see if any of the existing components have JMS capabilities. For example, Oracle Database bundles an MQ called Oracle AQ.
Have your service keep its own internal queue of jobs, and only move on to the next REST call once the previous one returns a success code.
There are many better ways to do this but the limitation will come down to what your company will let you do.
I was asked the following question in an interview and couldn't answer it.
How do you include a JDBC operation, a web service call and a JMS operation in one single transaction? That means if one of them fails, all of them have to be rolled back.
I have heard about the two-phase commit protocol and Oracle XA in the case of database transactions involving multiple databases, but I am not sure whether the same can be used here.
The critical factor is that the web services you connect to have been built using a web services framework that supports transactions. JBoss Narayana is one such framework. Once the web services endpoint you are connecting to runs on such a framework, it's just a matter of configuring Spring to use the appropriate client.
In the case of Narayana, see the Spring config for transactions with web services at http://bgshinhung.blogspot.ca/2012/10/integrating-spring-framework-jetty-and.html.
You are never going to be able to do this in a completely bomb-proof way, because the systems are separate. A failure in one stage of the system (for example, the power on your server gets cut between the SQL commit and the JMS commit) will leave the SQL commit in place.
The only way to resolve that would be to keep some record of partial commits somewhere and scan it on startup to fix any resulting problems; but then, what happens if you have a failure while processing or maintaining that list?
Essentially, the solution is to do your own implementation of the multiple-stage commit and rollback process, wrapping the three operations you need to make. If any of the operations fails, you need to reverse (preferably using an internal transaction mechanism; if not, then by issuing reversing commands) any that have been done so far.
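In outline, such a hand-rolled wrapper could look like the sketch below; the Step hooks are placeholders, and real code would also have to persist its progress somewhere to survive a crash between steps, which is exactly the weakness described above.
// Hand-rolled "run the operations, reverse whatever succeeded on failure" wrapper.
// Compensation is best effort only: a crash between a step and its record still leaves a gap.
import java.util.ArrayDeque;
import java.util.Deque;

public class ManualCompensation {

    public interface Step {
        void execute() throws Exception;
        void compensate(); // reversing command, e.g. delete the row that was inserted
    }

    public void runAll(Step... steps) throws Exception {
        Deque<Step> completed = new ArrayDeque<Step>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (Exception failure) {
            // Undo, in reverse order, whatever has already been done.
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw failure;
        }
    }
}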
There are a lot of corner cases and potential ways for a system like this to fail, though, so really the first approach should be to consider whether you can redesign the system so that you don't need to do this at all!
It may be a trick question and the right answer is "it cannot be done".
But I would try to pseudo-code something like this:
try {
    jdbc.startTransaction();
    Savepoint saveJdbc = jdbc.setSavepoint();
    JMS.startTransaction();
    Savepoint saveJMS = JMS.setSavepoint();

    jdbc.doSomeStuff();
    JMS.doSomeStuff();

    if (webService.doSomeStuff() == fail) { throw new Exception(); }

    // only commit once the web service call has succeeded
    jdbc.commit();
    JMS.commit();
}
catch (Exception e) {
    jdbc.rollback(saveJdbc);
    JMS.rollback(saveJMS);
}
You prepare one service that has rollback. You prepare a second service that has rollback. You try the web service, and if the web service fails, you roll back the two that support rollback.
Maybe there is also a way to implement rollback for your web service.
We had a similar situation: a web service pushes the data, and we have to read the XML stream and persist it to the DB (Oracle). The implementation we followed is:
The web service sends a SOAP message that contains the XML stream data.
All request SOAP messages are pushed to JMS.
The respective listener reads the stream and persists the data into temporary tables.
If the request is processed successfully, the data is moved from the temp tables to the actual tables.
If there is any error, roll back.
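Step 4 boils down to a single local Oracle transaction. A plain JDBC sketch of "promote the rows from the temp table to the actual table, or roll back" (the table and column names are invented for the example):
// Move rows from the staging table to the real table in one local DB transaction.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class TempTablePromoter {

    private final DataSource dataSource;

    public TempTablePromoter(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void promote(String requestId) throws SQLException {
        Connection con = dataSource.getConnection();
        try {
            con.setAutoCommit(false);
            PreparedStatement copy = con.prepareStatement(
                    "INSERT INTO orders SELECT * FROM orders_temp WHERE request_id = ?");
            PreparedStatement purge = con.prepareStatement(
                    "DELETE FROM orders_temp WHERE request_id = ?");
            copy.setString(1, requestId);
            copy.executeUpdate();
            purge.setString(1, requestId);
            purge.executeUpdate();
            con.commit();   // both statements take effect together...
        } catch (SQLException e) {
            con.rollback(); // ...or neither does
            throw e;
        } finally {
            con.close();
        }
    }
}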
Hope the above points help.
To my mind, it looks like the interviewer wanted to understand your ability to think in terms of enterprise-wide distribution. A few points:
JDBC is used for database connectivity.
A web service is probably a mechanism to send a control command to a server from any client.
JMS is mainly used for alerts about what is happening in the system.
My guess is that your interviewer had a typical scenario in mind that they wished to address:
Data is on one tier (a cluster, or a machine).
Clients may be of any kind: mobile, app, iOS, Objective-C, browser pay, etc.
JMS is configured to listen to topics, or that is what he wishes he could do.
Now probably the best approach is to write a JMS subscriber which decides what to do in the onMessage() method. As an example, suppose a web service initiates a payment request from a client. This triggers a JMS publisher to tell a DAO to make the necessary internal connection to the database, and when the transaction is in progress and when it finishes, a message is published to the subscriber. You will have fine-grained control of every step, as each would be configured to be published through JMS. Though this is difficult to achieve, this could be your interviewer's expected approach. (This is only my guess, please note.)
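A bare-bones JMS subscriber along those lines might look like this; only the javax.jms callback is standard, while PaymentDao and its method are made up for the example.
// JMS subscriber deciding what to do in onMessage(); the DAO call is a placeholder.
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class PaymentEventSubscriber implements MessageListener {

    private final PaymentDao paymentDao; // hypothetical DAO doing the JDBC work

    public PaymentEventSubscriber(PaymentDao paymentDao) {
        this.paymentDao = paymentDao;
    }

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String payload = ((TextMessage) message).getText();
                // e.g. a payment request published by the web service layer
                paymentDao.recordPayment(payload);
            }
        } catch (Exception e) {
            // in a real system: log, send to a dead-letter queue, or rethrow so the broker redelivers
            throw new RuntimeException(e);
        }
    }
}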
Background
I have a Spring client application that provisions a service to two servers using RMI. In the client I save an entity to the database (easy) and make RMI calls to the two servers with details of the entity. I am using Spring 3.0.2 on the servers, and the client is a simple Spring MVC site.
Requirements
My requirement is that if either of the RMI calls to the servers fails, the whole transaction rolls back: the entity is not saved on the client, and if the other RMI call was successful, it too is rolled back.
I am relatively new to distributed transactions, but I guess I want an XA-like transaction spanning RMI calls.
I did find a nice link on the subject here, but it does not cover the pattern for making two remote method calls to different servers. I would love to hear more about the subject in terms of recommended reading, and also any pointers on how to achieve this using Spring. Is using a transaction manager for this possible?
Thank you.
Here is how this situation could, in theory, be handled. First you need a JTA distributed transaction manager on each node. One acts as the master, the others as slaves. The master coordinates the commit/rollback of the distributed transaction with the slaves. Standalone JTA implementations exist, e.g. JOTM.
Vanilla RMI does not support propagating context information such as the transaction ID of the operation. But I think RMI has hooks so that it can be extended to support that. You can have a look at Carol.
You will need to use XAResource to wrap the participants in the transaction so that they can be enlisted in the distributed transaction. The master will need to send commit/rollback messages to the slaves, which will need to use XATerminator to act accordingly.
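Very roughly, enlisting a participant on the master side uses the standard JTA API. In the sketch below, RemoteCallXAResource stands for a hypothetical javax.transaction.xa.XAResource implementation that forwards prepare/commit/rollback to a slave node over RMI.
// Enlisting custom participants in the current JTA transaction (standard javax.transaction API).
// RemoteCallXAResource is hypothetical and would implement javax.transaction.xa.XAResource.
import javax.transaction.Transaction;
import javax.transaction.TransactionManager;

public class DistributedOperation {

    private final TransactionManager transactionManager; // e.g. obtained from JOTM

    public DistributedOperation(TransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    public void callSlaves(RemoteCallXAResource slaveA, RemoteCallXAResource slaveB) throws Exception {
        transactionManager.begin();
        Transaction tx = transactionManager.getTransaction();
        tx.enlistResource(slaveA); // both participants now take part in the 2PC
        tx.enlistResource(slaveB);
        try {
            // ... perform the remote work on both slaves ...
        } catch (Exception e) {
            transactionManager.rollback(); // undo on both participants
            throw e;
        }
        transactionManager.commit(); // drives prepare, then commit, on both XAResources
    }
}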
The JTA spec only covers the distributed transaction manager; logging of the operations in a transaction log needs to be done by the servers. Libraries exist for transaction log management, e.g. HOWL.
I don't think Spring can be used, even with a distributed transaction manager, to do that easily. I once tried to use RMI with a distributed transaction controlled from a standalone client and several slaves; here is a blog post about it. It was rather complicated.
You can get all of that for free if you use a Java EE application server with IIOP, since IIOP supports distributed transaction propagation. The client can be an application client container, and you can control the transactions with UserTransaction. That's actually one of the rare cases where I think using an application server is really justified.
But that said, distributed transactions are complicated things, which can lead to heuristic failures, timeouts if one node dies, and complicated recovery procedures.
My last piece of advice would then be: try to find a design which does not involve distributed transactions if possible. That will make your life a lot easier.
You can maybe draw inspiration from the BPEL compensation mechanism. There may be other design approaches for error handling and robustness which can avoid the use of distributed transactions.
As far as I know, Spring per se doesn't manage distributed transactions. It may use JtaTransactionManager, which in turn delegates to a Java EE server's transaction coordinator. So, as far as I understand, this kind of transaction is available only across data sources registered in the application container.
You may try to write your own XAResource implementation (not sure if it's the best way, but still) and register it in the application container, but Spring won't help you much with that.