I'm using JHipster in a new project that needs different customizations for each client.
Every microservice (registry office, accounting, etc.) must be customizable to meet individual customer needs.
What is the best way to manage an extensible microservice with different customizations for each client?
Thanks!
From my point of view, you should first ask yourself: why do you want a shared microservice stub that all the other services have to use as a basis? There are good reasons I am aware of, but one big strength of a microservice architecture is that each service can use the technology best suited to its business needs.
Getting back to your question, you could create a Git repository with a basic JHipster setup that contains only those application parts all microservices should have in common. Then you can create a fork of this repository for each microservice.
Another approach: instead of having a single base project, you could create small modules for the features all microservices share, e.g. a common logging mechanism, as sketched below. All microservice projects can then use these modules as dependencies.
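For concreteness, here is a minimal sketch of such a shared module, assuming it is published as its own jar; the artifact and class names (a common-logging module with a RequestLogger) are hypothetical, not from the question:

```java
// Hypothetical content of a shared "common-logging" module, published as a jar.
// Each microservice adds it as a dependency instead of re-implementing logging.
import java.util.logging.Logger;

public final class RequestLogger {

    private static final Logger LOG = Logger.getLogger(RequestLogger.class.getName());

    private RequestLogger() {
        // utility class, no instances
    }

    // One shared convention for logging handled requests across all services.
    public static void logRequest(String serviceName, String path, long durationMs) {
        LOG.info(() -> String.format("[%s] %s took %d ms", serviceName, path, durationMs));
    }
}
```

A change to the logging convention is then a new version of this one artifact, rather than an edit in every fork.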
Related
I'm new to Java and Spring, so please help me out.
We have two microservices, Service1 and Service2. A new client integration was done in Service2 which gives me location details, and I need these location details in Service1. My initial thought was to add Service2 as a dependency in Service1's pom.xml so that I can use the new client's methods (instead of copying the code associated with the geo client, such as the configuration, client, and service code, into Service1).
For an older task, classes/APIs from Service2 had to be used in Service1. Instead of adding a dependency in pom.xml, my teammates created similar classes (for the API calls) in Service1. Is it advisable to copy code from one microservice to another instead of editing pom.xml? Are there any disadvantages to adding the dependency in pom.xml?
There are trade-offs to both approaches. Adding a dependency in the pom.xml file makes it easier to reuse existing code and maintain consistency between the two microservices, but it also increases the coupling between the two services and makes it more difficult to deploy and manage each service independently.
On the other hand, copying the code from one microservice to another increases the autonomy of each service, as each service can be deployed and managed independently, but it also makes it harder to maintain consistency between the two services and can result in duplicated code.
It is important to weigh the trade-offs and make a decision based on the specific requirements and constraints of your project. Some factors to consider include the size of the code being reused, the frequency of changes to the code, the level of coupling between the services, and the need for independent deployment and management of each service.
In general, it is recommended to minimize the coupling between microservices and to use techniques like API-based or event-based communication to keep the services loosely coupled, as in the sketch below. This allows for greater flexibility and scalability, as well as better separation of concerns and better management of complexity.
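As a hedged illustration of the API-based option: Service1 can call Service2 over HTTP instead of depending on its code. The /locations/{id} endpoint and the LocationDto type are assumptions made up for this sketch, not taken from the question:

```java
import org.springframework.web.client.RestTemplate;

// Sketch only: Service1 talks to Service2 over HTTP, so the only coupling
// is the HTTP contract, not Service2's classes. Endpoint and DTO are assumed.
class LocationDto {
    public String city;
    public double latitude;
    public double longitude;
}

class LocationClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String service2BaseUrl;

    LocationClient(String service2BaseUrl) {
        this.service2BaseUrl = service2BaseUrl;
    }

    LocationDto fetchLocation(String locationId) {
        // Deserializes Service2's JSON response into Service1's own local DTO.
        return restTemplate.getForObject(
                service2BaseUrl + "/locations/" + locationId, LocationDto.class);
    }
}
```

Service1 keeps its own small copy of the DTO shape; that duplicates a few fields, but neither service's build depends on the other.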
I know this question has already been asked, but I have a different scenario, with the following structure: the front-end (UI) always calls a GraphQL API Gateway microservice, which redirects each request to a different microservice depending on the functionality or data required. I have simplified this to 3 microservices here, but we have more than 30.
When I started working on this project, I noticed that the GraphQL API Gateway microservice repeats many DTOs that are already defined in the other microservices. As a consequence, this gateway is full of Java classes that serve as DTOs (with the same fields and datatypes, and no modifications); basically, developers have been copying a DTO from the user microservice and pasting it into the GraphQL API Gateway microservice. The same pattern was followed for the DTOs of all 30 microservices, and DTOs are also copy-pasted between microservices as needed. This may not sound like an issue, but whenever a microservice requires a change to a DTO's structure, we need to change that DTO in all the other microservices as well.
Some approaches that my team and I evaluated:
Create a common library exclusively for DTOs that could be reused by many projects (a minimal sketch follows below). The only concern is that this adds a dependency and somewhat breaks the "microservices" idea. It also does not fully solve our issue, since we would still need to keep this dependency up to date in each microservice.
Use some sort of utility, such as JSON Schema, to generate the required DTOs on demand.
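For illustration only, the common-library option could look like the minimal sketch below; the artifact name (common-dtos) and the DTO's fields are hypothetical:

```java
// Hypothetical class inside a tiny, dependency-free "common-dtos" artifact.
// Both the GraphQL gateway and the user microservice depend on this jar,
// so the DTO is defined exactly once.
public class UserDto {

    private Long id;
    private String name;
    private String email;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
```

A structural change then happens in one place, although, as noted above, every consumer still has to pick up the new version of the artifact.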
This is something I have been discussing with the team, and most of them would like to go with the JSON Schema approach. However, I would like to know if there is something more efficient from your point of view. I looked at approaches like exposing domain objects (https://www.baeldung.com/java-microservices-share-dto), but I do not think that applies to the particular case described in this post.
Any suggestions or feedback would be highly appreciated. Thanks in advance.
I'm testing a microservice architecture using Spring Boot and RabbitMQ.
I now have two small services:
UserRegistrationService (registers a user in the database)
GetUserInfo (returns a user from the same database)
I chose to have all the user-specific services use the same database.
Both services use the JPA entity "User". (This may not be the smartest way of going about it.)
Is there a smart way of handling this dependency, i.e. two services depending on the same entity?
Should I move the entity (User) into a separate project and publish it to an artifact repository?
Yes, but you should go one step further and decouple the message representation from the database representation. Define an API artifact that contains just plain DTOs for the vocabulary objects in each service's API, and implement the message-driven POJOs against these DTOs, using whatever backend objects are relevant. (If you're using Spring Integration, you can register a Spring converter to map back and forth automatically.) A minimal sketch follows.
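This sketch assumes hypothetical UserDto/UserEntity names; the DTO lives in the shared API artifact, while the entity stays private to the owning service:

```java
import org.springframework.core.convert.converter.Converter;

// Lives in the shared API artifact: a plain DTO with no JPA dependency.
class UserDto {
    final long id;
    final String email;
    UserDto(long id, String email) { this.id = id; this.email = email; }
}

// Stays inside the service that owns the database (JPA annotations omitted).
class UserEntity {
    long id;
    String email;
}

// Registered with Spring's conversion service to map between the two
// representations, so message handlers only ever see the DTO.
class UserEntityToDtoConverter implements Converter<UserEntity, UserDto> {
    @Override
    public UserDto convert(UserEntity entity) {
        return new UserDto(entity.id, entity.email);
    }
}
```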
Yes. For better reusability and easier maintenance, you may want to publish the common components as separate jar artifact(s) and reference them as dependencies in each microservice.
A sample project structure could look something like the sketch below.
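(The original answer's structure is not reproduced here; the following is a hypothetical layout along those lines, with made-up module names.)

```
myapp/
├── pom.xml                  <- parent POM (packaging "pom")
├── myapp-common/            <- shared entities/DTOs, published as a jar
│   └── pom.xml
├── myapp-user-service/      <- microservice, depends on myapp-common
│   └── pom.xml
└── myapp-order-service/     <- microservice, depends on myapp-common
    └── pom.xml
```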
So I created a new Maven POM-based webapp using IntelliJ 11.
Now I want to separate out the various layers of the application, so currently I have:
/myproj
/myproj-common (Maven module)
/myproj-services (Maven module)
/src/main/webapp (Spring MVC application)
So I am using the following:
Spring MVC
Hibernate
So I will create a DAO for each entity, and then a service layer that uses the DAOs and wraps other business logic, etc. (roughly as in the sketch below).
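For concreteness, a minimal sketch of that layering might look like this; all names are illustrative, not from the project:

```java
import java.util.Optional;

// Illustrative entity; in the real project this would be a Hibernate entity.
class Customer {
    Long id;
    String name;
}

// One DAO per entity; implementations would use Hibernate underneath.
interface CustomerDao {
    Optional<Customer> findById(Long id);
    void save(Customer customer);
}

// The service layer uses the DAO and wraps the business logic.
class CustomerService {

    private final CustomerDao customerDao;

    CustomerService(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    Customer rename(Long id, String newName) {
        Customer customer = customerDao.findById(id)
                .orElseThrow(() -> new IllegalArgumentException("no such customer: " + id));
        customer.name = newName;          // business rule lives here, not in the DAO
        customerDao.save(customer);
        return customer;
    }
}
```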
How should I set up my Maven modules properly without making it too complicated?
Should I create a separate interface module that my other modules will use?
Looking for some practical advice.
I'm using Maven to build this as well.
I tried this before, and moving things into separate modules can be a little tricky, so I'm looking for some guidance on how to do this.
Update
Should I have a separate module for my entities also?
The simplest way is to use a single Maven module and do the separation at the package level.
If you need more, I can recommend this setup:
myproj-services - entity classes and service interfaces
myproj-services-impl - DAO and service implementations
myproj-ui - your Spring MVC classes
The UI depends only on myproj-services, and myproj-services-impl depends only on myproj-services. A hypothetical parent POM for this layout is sketched below.
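For concreteness, a minimal parent pom.xml along those lines might look like this (the coordinates are illustrative, not from the question):

```xml
<!-- Hypothetical parent POM; each child module has its own pom.xml
     declaring only the dependencies described above. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>myproj</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <modules>
        <module>myproj-services</module>      <!-- entities + service interfaces -->
        <module>myproj-services-impl</module> <!-- DAO and service implementations -->
        <module>myproj-ui</module>            <!-- Spring MVC webapp -->
    </modules>
</project>
```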
To reply to your update: IMO yes.
To follow DDD, you should have a model module that contains your entities and DAOs, and a service module for your services.
One step further is splitting the service module into a service-api module (service interfaces) and a service-lib module (implementations). This also entails that you don't pass entities from your service module to your web modules, but TOs instead (or views, if you prefer).
Another related tip: if you're afraid your service classes will get too big (and hence difficult to read, maintain, and test), consider splitting them up into business objects (BOs). So, instead of having a UserService containing all the code, you have a UserServiceFacade which delegates to MakeUserBO, FindUserBO, ... . These BOs are responsible for one task (or more, if related) and can easily be reused by other services or other BOs. BOs are short and to the point, and therefore easy to read and maintain. It is also easier to mock specific BOs while testing others; a minimal sketch follows.
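Here is a minimal, hypothetical sketch of that facade/BO split; the class names come from the example above, everything else is made up for illustration:

```java
// Illustrative domain type.
class User {
    final long id;
    final String name;
    User(long id, String name) { this.id = id; this.name = name; }
}

// Each BO does one focused task and can be reused or mocked on its own.
class MakeUserBO {
    User make(String name) {
        // creation logic would go here (persistence omitted in this sketch)
        return new User(System.nanoTime(), name);
    }
}

class FindUserBO {
    User find(long id) {
        // lookup logic would go here (stubbed for the sketch)
        return new User(id, "example");
    }
}

// The facade only delegates, so it stays thin and trivial to read.
public class UserServiceFacade {

    private final MakeUserBO makeUserBO = new MakeUserBO();
    private final FindUserBO findUserBO = new FindUserBO();

    public User createUser(String name) { return makeUserBO.make(name); }
    public User findUser(long id)       { return findUserBO.find(id); }
}
```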
We have several backend modules exposing REST APIs, and one of the modules needs to call the other modules through their APIs and get an immediate response.
One solution is to call the REST APIs directly from this 'top' module. The problem is that this creates coupling and does not natively support scaling or failover.
A kind of bus (JMS, an ESB) would decouple the modules by removing the need for each module to know the others' endpoints; they only 'talk' to the bus.
What would you use to enable fast responses through the bus (another constraint: no multicast, as this could be deployed in the cloud)?
Also, is it reasonable to still rely on the REST APIs, or would a JMS listener be better?
I have thought about JMS, Camel, and ESBs. Do you know of companies using such an architecture?
PS: a module could be a Java WAR running on a Tomcat instance, for example.
If your top module "knows" to call the other modules, then yes, you have coupling, which could be undesirable. If instead your top module is directed to the other modules through links, forms, and/or redirects in the responses from the middle module, then you have the same amount of coupling that a JMS solution would give you.
When you need scalability and failover (and not before), add a caching reverse proxy such as an F5 or Varnish. This will be more scalable and resilient than any JMS-based solution.
Update
In situations where you want to aggregate and/or transform the responses from the other modules, you're simply creating a composed service. The top module calls the middle module, which makes one or more calls to the backend modules, composes the results, and sends the appropriate response (a minimal sketch follows). Using an HTTP cache between each hop (i.e. Top -> Varnish -> Middle -> Varnish -> Backend) is a much easier and more efficient way to cache the data than a bespoke JMS-based solution.
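A hedged sketch of such a composed service, with made-up URLs and plain String payloads just to show the shape:

```java
import org.springframework.web.client.RestTemplate;

// The middle module composes two backend responses. Because every hop is a
// plain cacheable HTTP GET, a reverse proxy like Varnish can be inserted in
// front of either backend without changing this code.
public class OrderCompositionService {

    private final RestTemplate restTemplate = new RestTemplate();

    public String composedResponse(String orderId) {
        String order = restTemplate.getForObject(
                "http://backend-orders/orders/" + orderId, String.class);
        String customer = restTemplate.getForObject(
                "http://backend-customers/customers/for-order/" + orderId, String.class);

        // Compose the results into one response for the top module.
        return "{\"order\":" + order + ",\"customer\":" + customer + "}";
    }
}
```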