How to handle Database failures in Microservices? - java

I have 3 microservices, for example, A, B, C. Service A does some tasks and updates its database accordingly. The same goes for the other two services.
Suppose service C could not insert into its database because of some error, but services A and B updated their databases successfully, and this has led to inconsistency in the data.
How should I correctly handle this scenario if:
I have one common database for all the services?
Separate databases associated with each service?
Thank you for your answers!

For separate databases you might want to google the saga architecture pattern. It helps you manage a transaction across different microservices, each having its own database. It would take a lot of space to describe it here, so I think the best advice I can give you is to refer you to this article: SAGA Pattern for database per service architecture.

First up, in a microservices architecture you should pursue separate databases, or at the very least separate schemas. Sharing a database across microservices, as pointed out in the comments, would be a microservice anti-pattern.
You can consider a couple of approaches here:
Each microservice updates its own database and informs the others that an update has taken place. This enables each microservice to align its own database (eventually consistent).
A better approach, if you need coordination, is to create a fourth coordinating microservice whose job is to orchestrate the other three microservices. Research the saga pattern. This is especially useful if you need transactional coordination (i.e. all services must update their databases or none of them). If you think you need transactional coordination, think again very carefully - in many (most?) situations eventually consistent is good enough. If you really need transactional behaviour then you should research the saga and routing slip patterns, which include compensation in the event of a failure (see the sketch after these approaches).
If you need a unified view of the three separate databases, then consider another microservice whose job is to create the view (projection) that you need. Let each microservice do the one thing it is good at and that only; if you start to mix concerns in your microservices - well, again, it would be an anti-pattern.
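For illustration, here is a minimal orchestration sketch in plain Java. It is only a sketch under assumptions: the interfaces and class names are invented, and a real saga implementation would also persist its progress so it can resume after a crash.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    interface SagaStep {
        void execute() throws Exception; // forward action, e.g. an HTTP call to service A
        void compensate();               // undo action, run if a later step fails
    }

    class SagaOrchestrator {
        // Runs the steps in order; on failure, compensates completed steps in reverse.
        boolean run(List<SagaStep> steps) {
            Deque<SagaStep> completed = new ArrayDeque<>();
            for (SagaStep step : steps) {
                try {
                    step.execute();
                    completed.push(step); // remember it for a possible rollback
                } catch (Exception e) {
                    // e.g. service C's insert failed: undo what A and B already did
                    while (!completed.isEmpty()) {
                        completed.pop().compensate();
                    }
                    return false; // saga aborted, databases consistent again
                }
            }
            return true; // all three services updated
        }
    }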
A good method of enabling microservice communication is to use a message bus such as RabbitMQ or Azure Service Bus, and there are many other options, including Spring's own messaging integrations.
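As a hedged illustration of the "inform the others" approach, publishing such a notification with Spring AMQP might look like this (the exchange and routing key names are invented for the example):

    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Service;

    @Service
    public class OrderEventsPublisher {

        private final RabbitTemplate rabbitTemplate;

        public OrderEventsPublisher(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        // Called after service A commits its local transaction, so the other
        // services can bring their own databases in line (eventual consistency).
        public void publishOrderUpdated(String orderId) {
            rabbitTemplate.convertAndSend("orders.exchange", "order.updated", orderId);
        }
    }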
Given your questions, I would spend some more time researching microservice architectures and the right tools for your project before embarking on a microservices project. A lot of work has been done to ease the complexity of microservices and you would be wise to research the most suitable tool set for you. Nevertheless it will add quite a lot of complexity at first, but if done right, as the project grows it can pay dividends.

You can use RabbitMQ for message exchange among microservices. RabbitMQ will hold all the information about the database update, so even if a microservice dies before the database update, when the microservice comes up again it can look into RabbitMQ, see what it missed, and perform the database update after recovering from the failure.
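A sketch of the consuming side with Spring AMQP (the queue name is invented; with the listener container's default acknowledge mode the message is only acknowledged after the listener returns, so a crash mid-update leads to redelivery on restart):

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.stereotype.Component;

    @Component
    public class DatabaseUpdateListener {

        @RabbitListener(queues = "service-c.updates") // declared as a durable queue
        public void onUpdate(String updatePayload) {
            // Apply the update to this service's database. If this throws, or the
            // process dies before acknowledging, RabbitMQ keeps the message and
            // redelivers it, so the update is not lost.
            applyToDatabase(updatePayload);
        }

        private void applyToDatabase(String payload) {
            // persistence logic omitted in this sketch
        }
    }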

Related

Spring event lifecycle

To understand whether Spring events fit the task I'm working on, I need to understand how they work. Where are they stored?
As far as I can guess, they are stored in the Spring application context and disappear if the application crashes. Is my guess correct?
Spring events are intended for use when calling methods directly would create too much coupling. If you need to track events for auditing or replay purposes, you have to save the events yourself. Based on your comments, there are many ways to achieve this, depending on the topology and purpose of the application (list not complete):
Model entities that represent the events and store them in a repository (sketched after this list)
Incorporate a message broker such as Kafka that supports message persistence
Install an in-memory cache such as Hazelcast
Use a cloud message service such as AWS SQS
Lastly, please make sure that you carefully evaluate which option suits your needs best. Options 2 to 4 all introduce heavy complexity, and distributed applications can bring sorrow and misery to your life. Go for the simplest option if you can and only resort to the other options if absolutely necessary.
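To illustrate option 1, here is a minimal sketch (all type names are hypothetical, not Spring classes) of a listener that writes each event out as it is published. Spring dispatches events in-memory via the ApplicationContext, so anything needed for audit or replay must be persisted explicitly like this:

    import java.time.Instant;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;

    // Hypothetical event and storage types for illustration.
    record OrderCreatedEvent(String orderId) {}
    record StoredEvent(String type, String payload, Instant occurredAt) {}
    interface StoredEventRepository { void save(StoredEvent event); }

    @Component
    class EventAuditListener {

        private final StoredEventRepository repository;

        EventAuditListener(StoredEventRepository repository) {
            this.repository = repository;
        }

        // Once an in-memory event has been handled it is gone, so persist it
        // here if it must survive a crash.
        @EventListener
        public void on(OrderCreatedEvent event) {
            repository.save(new StoredEvent("OrderCreated", event.orderId(), Instant.now()));
        }
    }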

Question about separating service into microservices

Currently we have core service that contains functionality for User and Admin user.
We want to separate the user and admin functionality into different microservices, thereby decreasing the load on each and separating the codebases (although some code will repeat).
These microservices will use the same DB.
What do you think, is it a good idea to separate the microservices?
What are the pros and cons? Are there any best practices for this?
If two microservices share the same database, they have lots of dependencies regarding database schema, database structure, availability, deployment, etc. Thus they do not achieve one of the core requirements of a microservice, namely that each microservice is truly independent. So they are not two microservices but a single complex one.
The shared/repeated code is a further indication that splitting the service into two isn't the best idea.
I'm further surprised that you expect benefits regarding load by splitting it into a user and an admin service. Typically, admin-related load is very small compared to user-related load. Thus I would expect that 99% of today's load would still go to the user service after the split. If so, you wouldn't achieve the initial goal.
Overall, I think it's a bad idea. I don't see any advantage at all. If excessive load is the main problem, solve it by running multiple instances of the current service.

DDD and messages for Aggregates communication

I'm writing an exercise application using DDD principles in Java with Spring Boot and MongoDB. According to DDD, communication between aggregates occurs only through messaging. At this point I'm not distributing the application - all aggregates reside in the same application - so I'm simply using Spring's messaging capabilities to exchange messages.
Each aggregate corresponds to exactly one Mongo document. Each command or operation triggered by an event is guarded by a @Transactional annotation to ensure that the db transaction and the event are processed atomically.
I was wondering where I should store the events. Can I store them within the Mongo document? Actually, since Mongo transactions span single documents, isn't this the only option?
The next step is to set up a periodic task that will read all recent events and publish them to simulate off-thread communication. At that point, a good place would probably be a separate document collection storing the events.
P.S. I'm not taking event sourcing into consideration for the moment, as it seems to be more advanced.
Thank you!
I was wondering where I should store the events.
The usual line of thinking is that distributed transactions suck; therefore if you can possibly manage it you want to store the events with the aggregate state. In the RDBMS world your events live in a table that is in the same database schema as your aggregate data -- see Udi Dahan, 2014.
If it helps, you can think of this "outbox" of event messages as being another entity within your aggregate.
After this save is successful, you then work on the problem of copying the information to the different places it needs to be, paying careful attention to the failure modes. It's purely a "plumbing" problem at this point, which is to say that it is usually treated as an application and infrastructure concern, rather than as a domain concern.
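As a rough illustration of keeping the outbox inside the Mongo document (class and field names are invented; Spring Data MongoDB assumed), the pending events live in the same document as the aggregate state, so one document write stores both atomically:

    import java.util.ArrayList;
    import java.util.List;
    import org.springframework.data.annotation.Id;
    import org.springframework.data.mongodb.core.mapping.Document;

    @Document("orders")
    public class OrderAggregate {

        @Id
        private String id;
        private String status;

        // The "outbox": events recorded but not yet published. Embedded in the
        // same document, so saving the aggregate persists state + events in
        // one atomic single-document write.
        private List<String> pendingEvents = new ArrayList<>();

        public void confirm() {
            this.status = "CONFIRMED";
            this.pendingEvents.add("OrderConfirmed:" + id);
        }

        // Called by the periodic relay once it has published the events.
        public List<String> drainEvents() {
            List<String> drained = new ArrayList<>(pendingEvents);
            pendingEvents.clear();
            return drained;
        }
    }

The periodic task you describe would then query for documents with a non-empty pendingEvents list, publish those events, and save the drained aggregate back.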

Shared entity/table design for microservices

We are in the middle of breaking a big monolithic e-commerce application into microservices. (We plan to use Java, Spring, and Hibernate.) We have the concepts of fulfillment items and persistent items in our monolithic application. Our plan is mostly to break up the fulfillment item CRUD operations and persistent item CRUD operations into two separate APIs. But there are some common entities/tables that both APIs will end up needing. What is the best way to handle this scenario?
Currently one of the options on the table is to have one microservice own the entity/table and have a READ ONLY object reference in the other microservice. Are there any drawbacks to this?
It depends a lot on your deployment strategy. If you are going to bundle/package both APIs into one, then it's fine for both to share the same entities (in fact, you should not duplicate entities). I would prefer having all the entities and repositories/DAOs in one common bundle/package that just exposes various APIs for CRUD operations (without any other business logic). My other components would then consume these APIs and hold the business logic.
There really isn't much of a drawback except in situations where a microservice cannot operate under eventual consistency. And even in those cases, you can always add a dependency so that your non-common microservice knows how to query the common microservice for relevant updates if necessary, although that's less than ideal.
You will likely have to introduce some form of mediator mechanism for your use case, though. Something like a JMS broker is an ideal choice: it would allow one microservice to inform other interested microservices that something occurred, so that each can handle the event in its own way.
For example, a CustomerMessage could be raised that contains the customer's id, name, address, and perhaps credit limit; one microservice may only be concerned with the id and name while another may also be interested in the address and credit limit (sketched below).
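A hedged sketch with Spring JMS (the destination name and message shape are invented; for fan-out to several services the destination would be a topic so each service gets its own copy):

    import java.io.Serializable;
    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.stereotype.Component;

    // Carries everything any consumer might need; each consumer reads only
    // the fields it cares about.
    record CustomerMessage(long id, String name, String address, double creditLimit)
            implements Serializable {}

    @Component
    class CustomerPublisher {

        private final JmsTemplate jmsTemplate;

        CustomerPublisher(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        void customerChanged(CustomerMessage message) {
            jmsTemplate.convertAndSend("customer.events", message);
        }
    }

    @Component
    class FulfillmentListener {

        // This service only needs id and name; another listener elsewhere
        // might read the address and credit limit instead.
        @JmsListener(destination = "customer.events")
        void on(CustomerMessage msg) {
            System.out.println("Sync fulfillment view: " + msg.id() + " " + msg.name());
        }
    }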

Spring RESTful service application architecture

Currently we are building web services applications with Spring, Hibernate, MySQL and Tomcat. We are not using a real application server / SOA architecture. Regarding the persistence layer: today we are using Hibernate with MySQL, but after one year we may end up with MongoDB and Morphia.
The idea here is to create an architecture for the system that is independent of the concrete database engine or persistence layer, and to get maximum benefits from it.
Let me explain - https://s3.amazonaws.com/creately-published/gtp2dsmt1. We have two cases here:
Scenario one:
We have one database that is replicated (not at the beginning) and different applications. Each application is one WAR that has its own controllers, application context and servlet XML. The domain and persistence layer is imported as a Maven lib - there is one version of it that is included in each application.
Pros:
Small applications that are easy to maintain
Distributed solution - each application can be moved to its own Tomcat instance or a different machine, for example
Cons:
Possible problems when using the Hibernate session and syncing it between different applications. I don't know whether that is possible at all with this implementation.
Scenario two - one application that has internal logic to split and organize the different services - News and User.
Pros:
One persistence layer - the full feature set of Hibernate
A more J2EE look, with options to extend to the next level - integrate EJB and move to an application server
Cons:
One huge WAR application - more effort to maintain
Not as distributable as in the first scenario
I like the first scenario more, but I'm worried about Hibernate's behavior in that case and whether I keep all the benefits I can get from it.
I'll be very thankful for your opinion on that case.
Cheers
Possible problems when using the Hibernate session and syncing it between different applications. I don't know whether that is possible at all with this implementation.
There are a couple of solutions that solve this exact problem:
Terracotta
Take a look at Hibernate Distributed Cache Tutorial
Also there is a bit older slide share Scaling Hibernate with Terracotta that delivers the point in pictures
Infinispan
Take a look at Using Infinispan as JPA-Hibernate Second Level Cache Provider
Going with the first solution (distributed) may be the right way to go.
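For illustration, enabling Hibernate's second-level cache with the Infinispan region factory might look like this (property names taken from the hibernate-infinispan module of that era; verify them against your versions):

    import java.util.Properties;
    import org.hibernate.cfg.Configuration;

    public class CacheConfig {

        public static Configuration configure() {
            Properties props = new Properties();
            // Turn on the shared second-level cache and query cache.
            props.put("hibernate.cache.use_second_level_cache", "true");
            props.put("hibernate.cache.use_query_cache", "true");
            // Delegate cache regions to Infinispan, which handles the
            // cross-application synchronization that scenario one worries about.
            props.put("hibernate.cache.region.factory_class",
                      "org.hibernate.cache.infinispan.InfinispanRegionFactory");
            return new Configuration().addProperties(props);
        }
    }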
It all depends on what the business problem is
Of course distributed is cool and fault tolerant and, and,.. but RAM and disks are getting cheaper and cheaper, so "scaling up" (and having a couple of hot replicas) is actually NOT all that bad => these are props to the "second" approach you described.
But let's say you go with approach #1. If you do that, you would benefit from switching to NoSQL in the future, since you would now have replica sets / sharding, etc., and actually several nodes to support the concept.
But.. is 100% consistency a must-have? (E.g. does the product have to do with money?) How big are you planning to become => are you ready to maintain hundreds of servers? Do you have complex aggregate queries that need to run faster than xteen hours?
These are the questions that, in addition to your understanding of the business, should help you land on #1 or #2.
So, this is a very late answer, but finally I'm ready to give it. I'll put some details here about the further development of the REST service application.
Finally I landed on solution #1 from tolitius's great answer, with the option to migrate to solution #2 at a later stage.
This is the application architecture - I'll add graphics later.
Persistence layer - holds the domain model and all database operations. Generated from the database model with Spring Roo, with a generated repository and service layer for easy migration later.
Business layer - all the business logic necessary for the operations lives here. This layer depends on the persistence layer.
Presentation layer - validation, and controllers calling the business layer.
All of this runs on Tomcat without application server extras. In a later phase it can be moved to an application server to fully implement the Service Locator pattern.
Infrastructure - geo-located servers with a geo load balancer, a MySQL replication ring between all of them, and one backup server in case of failure.
My idea was to build a more modern system architecture, but from my experience with Java technology this is a "normal risk" situation.
With more experience come more beautiful solutions :) Looking forward to it!
