Adding a microservice as a dependency in pom.xml - Java

I'm new to Java and Spring, please help me out.
We have two microservices, Service1 and Service2. A new client integration was done in Service2 which gives me location details, and I need these location details in Service1. My initial thought was to add Service2 as a dependency in Service1's pom.xml so that I can use the new client's methods (instead of copying the code associated with the geo client into Service1: the configuration, client, and service code).
For an older task, classes/APIs from Service2 had to be used in Service1. Instead of adding the dependency in pom.xml, my teammates created similar classes (for the API calls) in Service1. Is it advisable to copy code from one microservice to another instead of editing pom.xml? Are there any disadvantages to adding the dependency in pom.xml?

There are trade-offs to both approaches. Adding a dependency in the pom.xml file makes it easier to reuse existing code and maintain consistency between the two microservices, but it also increases the coupling between the two services: Service1 now compiles against Service2's code, inherits its transitive dependencies, and must track its release cycle, which makes it more difficult to deploy and manage each service independently.
On the other hand, copying the code from one microservice to another increases the autonomy of each service, as each service can be deployed and managed independently, but it also makes it harder to maintain consistency between the two services and can result in duplicated code.
It is important to weigh the trade-offs and make a decision based on the specific requirements and constraints of your project. Some factors to consider include the size of the code being reused, the frequency of changes to the code, the level of coupling between the services, and the need for independent deployment and management of each service.
In general, it is recommended to try to minimize the coupling between microservices, and to use techniques like API-based communication or event-based communication to keep the services loosely coupled. This allows for greater flexibility and scalability, as well as better separation of concerns and better management of complexity.
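For example, here is a minimal sketch of API-based communication, assuming Service2 exposes a hypothetical REST endpoint GET /locations/{id} (the URL, DTO fields, and class names are illustrative, not Service2's actual API):

    import org.springframework.web.client.RestTemplate;

    public class LocationClient {

        private final RestTemplate restTemplate = new RestTemplate();

        // Base URL of Service2; in practice this would come from configuration
        // or service discovery rather than being hard-coded.
        private final String service2BaseUrl = "http://service2:8080";

        // Simple DTO mirroring the fields Service2 is assumed to return.
        public static class LocationDetails {
            public String city;
            public double latitude;
            public double longitude;
        }

        // Service1 fetches location details over Service2's API instead of
        // depending on Service2's classes via pom.xml.
        public LocationDetails getLocation(String id) {
            return restTemplate.getForObject(
                    service2BaseUrl + "/locations/{id}", LocationDetails.class, id);
        }
    }

This keeps the only contract between the services at the API level, so each one can still be built and deployed independently.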

Related

JHipster: Extending a Microservice Solution

I'm using JHipster in a new project that needs different customizations for each client.
Every microservice (registry office, accounting, etc.) must be customizable according to individual customer needs.
What is the best way to manage an extendable microservice with different customizations for each client?
Thanks!
From my point of view, you should ask yourself why you want a shared microservice stub that all the other services have to use as a basis. There are good reasons I'm aware of, but one big strength of a microservice architecture is that each service can use the technology best suited to its business needs.
Getting back to your question, you could create a Git repository with a basic JHipster setup that contains only those application parts all microservices should have in common. You can then create a fork of this repository for each microservice.
Another approach: instead of a single base project, you could create small modules for the features all microservices should have in common, e.g. a common logging mechanism. All microservice projects can then use these modules as dependencies (see the sketch below).
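As a minimal sketch of such a shared module (the module and class names are purely illustrative), a common-logging module could expose a small utility that every service adds as a regular Maven dependency:

    // Lives in a hypothetical shared module, e.g. "common-logging".
    package com.example.common.logging;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public final class RequestLogger {

        private static final Logger log = LoggerFactory.getLogger(RequestLogger.class);

        private RequestLogger() {
        }

        // Logs incoming requests in one format shared by all microservices.
        public static void logRequest(String service, String path, long durationMs) {
            log.info("service={} path={} durationMs={}", service, path, durationMs);
        }
    }

Each microservice then declares the module in its pom.xml like any other library, so the shared code is versioned once instead of being copied around.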

Microservices - Stubbing/Mocking

I am developing a product using microservices and am running into a bit of an issue. In order to do any work, I need to have all 9 services running in my local development environment. I am using Cloud Foundry to run the applications, but when running locally I just run the Spring Boot JARs themselves. Is there any way to set up a more lightweight environment so that I don't need everything running? Ideally, only the service I am currently working on would need to be real.
I believe this is a matter of your testing strategy. If you have a lot of microservices in your system, it is not wise to always perform end-to-end testing at development time; it costs you productivity, and the setup is usually complex (as you observed).
You should really think about what it is you want to test. Within one service, it is usually good to decouple the core logic from the integration points with other services. Ideally, you should be able to write simple unit tests for your core logic. If you want to test the integration points with other services, use a mocking library (a quick Google search turns up this promising article: http://spring.io/blog/2007/01/15/unit-testing-with-stubs-and-mocks/).
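As a minimal sketch of that idea using Mockito (the LocationClient/LocationService names are illustrative, and Mockito is just one of several mocking libraries):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    class LocationServiceTest {

        // The integration point with the other service, kept behind an interface.
        interface LocationClient {
            String findCity(String id);
        }

        // The core logic under test; it never talks to the network directly.
        static class LocationService {
            private final LocationClient client;
            LocationService(LocationClient client) { this.client = client; }
            String describe(String id) { return "City: " + client.findCity(id); }
        }

        @Test
        void describeUsesTheStubbedClient() {
            LocationClient client = mock(LocationClient.class);
            when(client.findCity("42")).thenReturn("Berlin");

            assertEquals("City: Berlin", new LocationService(client).describe("42"));
            verify(client).findCity("42");
        }
    }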
If you don't have one already, I would highly recommend setting up a separate staging area with all microservices running. Perform all your end-to-end testing there before deploying to production.
This post from Martin Fowler has a more comprehensive take on microservice testing strategy:
https://martinfowler.com/articles/microservice-testing
It boils down to the testing technique you use. Here is my recent answer on another topic that you may find useful: https://stackoverflow.com/a/44486519/2328781.
In general, I think WireMock is a good choice for the following reasons (a minimal usage sketch follows the list):
It has out-of-the-box support in Spring Boot.
It has out-of-the-box support in Spring Cloud Contract, which makes it possible to use a very powerful technique called Consumer-Driven Contracts.
It has a recording feature. Set up WireMock as a proxy and make requests through it; it will generate stubs for you automatically based on your requests and responses.
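Here is a minimal sketch of stubbing a dependent service with WireMock (the port, path, and payload are illustrative):

    import static com.github.tomakehurst.wiremock.client.WireMock.*;

    import com.github.tomakehurst.wiremock.WireMockServer;

    public class LocationStub {

        public static void main(String[] args) {
            // Stand-in for the real location service on a local port.
            WireMockServer server = new WireMockServer(8089);
            server.start();

            // Any GET to /locations/42 now returns a canned JSON response,
            // so the service under development can run against this stub.
            server.stubFor(get(urlEqualTo("/locations/42"))
                    .willReturn(aResponse()
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"city\":\"Berlin\",\"lat\":52.5,\"lon\":13.4}")));
        }
    }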
There are multiple tools out there that let you create mocked versions of your microservices.
When I encountered this exact problem myself, I decided to create my own tool, tailored for microservice testing. The goal is to never have to run all microservices at once, only the one you are working on.
You can read more about the tool and how to use it to mock microservices here: https://mocki.io/mock-api-microservices. If you only want to run them locally, that is possible using the open-source CLI tool.
It can be solved if your microservices allow passing metadata along with requests.
A good microservice architecture should use central service discovery, and every service should be able to accept a metadata map along with the request payload. Known fields of this map can be interpreted and modified by a service and then passed on to the next service.
The most popular use of per-request metadata is request tracing (i.e., collecting the tree of nodes used to process a request, with timings for every node), but it can also be used to tell the entire system which nodes to use.
Thus the plan is (a header-propagation sketch follows the list):
register your local node in the dev environment's service discovery
send a request to the entry node of your system along with metadata telling everyone to use your local service instance instead of the default one
the metadata will propagate, your local node will be called by the dev environment, and the local node will then pass its processed results back to the dev environment
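As referenced above, here is a rough sketch of carrying such routing metadata as an HTTP header using a Spring RestTemplate interceptor (the header name X-Route-Override is illustrative, not a standard):

    import java.io.IOException;

    import org.springframework.http.HttpRequest;
    import org.springframework.http.client.ClientHttpRequestExecution;
    import org.springframework.http.client.ClientHttpRequestInterceptor;
    import org.springframework.http.client.ClientHttpResponse;

    public class RoutingMetadataInterceptor implements ClientHttpRequestInterceptor {

        private final String routeOverride;

        public RoutingMetadataInterceptor(String routeOverride) {
            this.routeOverride = routeOverride;
        }

        // Attach the metadata to every outgoing call so downstream services
        // can route to your local instance instead of the default one.
        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                ClientHttpRequestExecution execution) throws IOException {
            request.getHeaders().add("X-Route-Override", routeOverride);
            return execution.execute(request, body);
        }
    }

Register it with restTemplate.getInterceptors().add(new RoutingMetadataInterceptor("my-local-node")); each service must also copy the header onto its own outgoing requests for the metadata to propagate.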
Alternatively:
use code generation for inter-service communication to reduce the risk of failures caused by mistakes in hand-written RPC code
resort to integration tests, mocking all client APIs for the microservice under development
fully automate deployment of your system to your local machine. You may need to run nodes with reduced memory (which is generally OK, as memory is usually consumed only under load) or buy more RAM.
One approach would be to use/deploy an app that maps paths/URLs to JSON response files. I personally haven't used it, but I believe http://wiremock.org/ might help you.
For Java microservices, you could try stubby4j, which mocks the JSON responses of other microservices using a Stubby server. If you find that mocking is not enough to cover all the features of your microservices, set up a local Docker environment to deploy the dependent microservices.

Multi-module application architecture

I would like to design an application with a core and multiple modules on top of the core. The core would be responsible for receiving messages from the network, parsing the incoming messages, and distributing them to the registered modules.
There are multiple types of messages, and some modules may be interested in receiving only certain types. The modules can execute in parallel, or they can execute sequentially (e.g., when modules are interdependent with a well-defined order of execution).
Also, it would be great if the modules could be deployed/undeployed even while the core is up.
This is completely new to me; I'm used to writing modular applications, but with the parts wired statically.
Which direction (i.e., framework, pattern...) should I take for such a design? I don't know if it's relevant to my question, but I will be using Java.
Thanks
You have a very good approach at the architecture level. But it will only pay off if your application layers/tiers run on separate instances, so that you can shut down one module/server while the other parts keep running. The point is: will you run the modules on separate instances?
Secondly, I would suggest building the application's core architecture on web services, either REST or SOAP, as that naturally gives you a Service-Oriented Architecture (SOA). You get a producer-consumer relationship and can run the parts on separate instances; while deploying/undeploying one part, the remaining services can keep serving other client instances.
Using web services will also give you a global information-exchange mechanism that can communicate with several application views/front ends.
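Independent of the transport you choose, a rough sketch of the core/module contract the question describes could look like this (all names are illustrative); modules can be registered and unregistered while the core is running:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class Core {

        // Contract every pluggable module implements.
        public interface Module {
            void onMessage(String type, String payload);
        }

        // Message type -> modules registered for that type.
        private final Map<String, List<Module>> subscribers = new ConcurrentHashMap<>();

        public void register(String type, Module module) {
            subscribers.computeIfAbsent(type, t -> new CopyOnWriteArrayList<>()).add(module);
        }

        public void unregister(String type, Module module) {
            List<Module> modules = subscribers.get(type);
            if (modules != null) {
                modules.remove(module);
            }
        }

        // Called after the core has received and parsed a network message.
        public void dispatch(String type, String payload) {
            subscribers.getOrDefault(type, List.of())
                    .forEach(m -> m.onMessage(type, payload));
        }
    }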

Bus and REST services with fast response for decoupling?

Having several backend modules exposing REST APIs, one of the modules needs to call the other modules through their APIs and get an immediate response.
One solution is to call the REST APIs directly from this 'top' module. The problem is that this creates coupling and does not natively support scaling or failover.
A bus of some kind (JMS, an ESB) makes it possible to decouple the modules by removing the need for endpoints known to the modules; they only 'talk' to the bus.
What would you use to enable fast responses through the bus (another constraint: you can't rely on multicast, as the system could be deployed in the cloud)?
Also, is it reasonable to still rely on the REST API, or would a JMS listener be better?
I have thought about JMS, Camel, and ESBs. Do you know of companies using such an architecture?
P.S.: a module could be, for example, a Java WAR running on a Tomcat instance.
If your top module "knows" to call the other modules, then yes, you have coupling, which could be undesirable. If instead your top module is directed to the other modules through links, forms, and/or redirects in the responses from the middle module, then you have the same amount of coupling that a JMS solution would give you.
When you need scalability and failover (not before), add a caching reverse proxy such as an F5 or Varnish. This will be more scalable and resilient than any JMS-based solution.
Update
In situations where you want to aggregate and/or transform the responses from the other modules, you are simply creating a composed service. The top module calls the middle module, which makes one or more calls to the backend modules, composes the results, and sends the appropriate response. Using an HTTP cache between each hop (i.e. Top -> Varnish -> Middle -> Varnish -> Backend) is a much easier and more efficient way to cache the data, compared to a bespoke JMS-based solution.
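A minimal sketch of such a composed service in the middle module (the URLs, service names, and response shapes are illustrative); each hop can sit behind an HTTP cache transparently:

    import org.springframework.web.client.RestTemplate;

    public class CompositionService {

        private final RestTemplate rest = new RestTemplate();

        // Calls two backend modules over plain REST and composes the result.
        public String orderSummary(String orderId, String customerId) {
            String order = rest.getForObject(
                    "http://orders/orders/{id}", String.class, orderId);
            String customer = rest.getForObject(
                    "http://customers/customers/{id}", String.class, customerId);
            return "{\"order\":" + order + ",\"customer\":" + customer + "}";
        }
    }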

Designing an N-tier Java EE web application with views, web services, and a scheduler using Spring

I have a start-up web application using Spring and Hibernate which currently has three layers: view, service, and DAO. The domain objects are also segregated into their own package.
To this I want to add a web service and a scheduler. In which layers should I add these classes? Or should I create new packages for them? What are the best practices for n-tier web applications?
Please share your thoughts and experiences.
To web and scheduler packages?
There's no "right" answer to this question, and without any idea of your package layout beyond what's shown, it's difficult to be more specific.
As long as it makes sense in context and is consistent, it really doesn't matter a whole lot. You may also find that your existing structure changes after you identify and refactor functionality across the original and new features.
A few thoughts:
A package is not a tier. A tier (or layer) is a logical abstraction for a collection of related functionality; a package is a physical grouping tool for compilation units. It may be that all the classes used to implement a logical tier reside in the same source package, but there is no requirement that this be the case.
It seems like the web service would fit nicely in the service package, or maybe in a subpackage of service called web.
The scheduler may also belong somewhere in the service package (particularly if other components are meant to interface with the scheduler via a service API). If not, the next most appropriate thing would be to give it its own package called scheduler.
As for best practices, just do what 1) works and 2) makes sense. "n-tier web applications" is a topic so broad that there aren't really any specific answers that apply in all possible cases.
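For orientation, one package layout consistent with the thoughts above (purely illustrative):

    com.example.app.view                 // controllers / UI
    com.example.app.service              // business services
    com.example.app.service.web          // web service endpoints
    com.example.app.service.scheduler    // scheduled jobs behind a service API
    com.example.app.dao                  // data access
    com.example.app.domain               // domain objects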
