I have several backend modules, each exposing a REST API, and one of the modules needs to call the other modules through their APIs and get an immediate response.
One solution is to call the REST APIs directly from this 'top' module. The problem is that this creates coupling and does not natively support scaling or failover.
A bus of some kind (JMS, ESB) would decouple the modules by removing the need for each module to know the others' endpoints; they only 'talk' to the bus.
What would you use to get fast responses through the bus (another constraint: multicast is unavailable, since the system could be deployed in the cloud)?
Also, is it reasonable to still rely on the REST API, or would a JMS listener be better?
I have thought about JMS, Camel, and ESBs. Do you know of companies using such an architecture?
PS: a module could be a Java WAR running on a Tomcat instance, for example.
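To illustrate what I mean by a fast response through the bus: a minimal request-reply sketch over JMS, assuming an ActiveMQ broker (the broker URL and queue name are placeholders):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ActiveMQConnectionFactory;

// Request-reply over JMS: send a request, then block on a temporary queue
// for the answer. No multicast involved; any broker reachable over TCP works.
public class JmsRequestReply {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker:61616"); // placeholder URL
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Destination requests = session.createQueue("module.requests"); // placeholder
            Destination replies = session.createTemporaryQueue();

            TextMessage request = session.createTextMessage("{\"op\":\"status\"}");
            request.setJMSReplyTo(replies); // tells the other module where to answer

            session.createProducer(requests).send(request);

            // Block until the other module replies, or give up after 5 seconds
            Message reply = session.createConsumer(replies).receive(5000);
            System.out.println(reply == null ? "timeout" : ((TextMessage) reply).getText());
        } finally {
            connection.close();
        }
    }
}
```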
If your top module "knows" to call the other modules, then yes, you have coupling, which could be undesirable. If instead your top module is directed to the other modules through links, forms and/or redirects in the responses from the middle module, then you have the same amount of coupling that a JMS solution would give you.
When you need scalability and failover (not before), add a caching reverse proxy such as an F5 or Varnish. This will be more scalable and resilient than any JMS-based solution.
Update
In situations where you want to aggregate and/or transform the responses from the other modules, you're simply creating a composed service. The top module calls the middle module, which makes one or more calls to the backend modules, composes the results and sends the appropriate response. Using an HTTP cache between each hop (i.e. top -> Varnish -> middle -> Varnish -> backend) is a much easier and more efficient way to cache the data than a bespoke JMS-based solution.
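For completeness, making a hop cacheable is just a matter of standard HTTP cache headers; a minimal JAX-RS sketch, with an invented resource and payload:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Response;

// Sketch: a backend module marks its response as cacheable so that a
// reverse proxy (e.g. Varnish) between the hops can serve repeat requests.
@Path("/inventory") // hypothetical resource
public class InventoryResource {

    @GET
    public Response list() {
        CacheControl cc = new CacheControl();
        cc.setMaxAge(60); // proxies may serve this response for 60 seconds

        return Response.ok("{\"items\":[]}") // placeholder payload
                       .cacheControl(cc)
                       .build();
    }
}
```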
In Spring, what is the idiomatic way to integrate an existing legacy service (that must run in its own thread)?
For slight clarification, this is a service that receives messages via UDP from an embedded device, transforms them into POJOs and pushes them into a (local, in-memory) queue. Ideally, I'd like to encapsulate this as a Spring component and have some declarative way of indicating "this component provides messages of this type" and allowing registration of other components as listeners (1:1 would be enough) without reinventing any wheels.
(Answering my own question) Using ApplicationEvent wiring seems to be sufficiently declarative for my purposes, so I'm just going with that.
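For anyone finding this later, the wiring is small; a minimal sketch, with the event and component names invented:

```java
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;

// Invented event type carrying a decoded device message.
public class DeviceMessageEvent extends ApplicationEvent {
    private final String payload;

    public DeviceMessageEvent(Object source, String payload) {
        super(source);
        this.payload = payload;
    }

    public String getPayload() { return payload; }
}

// The legacy receiver publishes an event for each decoded datagram.
@Component
class UdpReceiver {
    private final ApplicationEventPublisher publisher;

    UdpReceiver(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    void onDatagram(byte[] data) { // called from the receiver's own thread
        publisher.publishEvent(new DeviceMessageEvent(this, new String(data)));
    }
}

// Any other component registers declaratively as a listener.
@Component
class MessageConsumer {
    @EventListener
    public void handle(DeviceMessageEvent event) {
        System.out.println("received: " + event.getPayload());
    }
}
```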
Best architecture for implementing a web service that takes requests from one side, saves and enhances them, and then calls another service with new parameters.
Is there any specific design pattern for this?
There's not a lot to go on, but from what you've said it sounds like a job for "pipes and filters" (see the sketch after the questions below)!
To get a more precise answer, you might want to ask yourself some more detailed questions:
- Do you need to do any validation or transformation of the incoming message?
- Will you want to handle all requests the same way, or are there different types?
- Are the external services likely to change, and if so, will they do this frequently?
- What do you want to do if the final web service call fails (should you roll back the database record)?
- How do you want to report failures/responses - do you need to report these back?
- Do you need a mechanism to track the progress of a particular request?
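If it does turn out to be pipes and filters, the overall shape is simple; a minimal sketch with invented filter steps:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Pipes and filters: each step takes a message and returns a (possibly
// transformed) message; the pipeline just chains them in order.
public class Pipeline {

    private final List<UnaryOperator<String>> filters;

    public Pipeline(List<UnaryOperator<String>> filters) {
        this.filters = filters;
    }

    public String run(String message) {
        for (UnaryOperator<String> filter : filters) {
            message = filter.apply(message);
        }
        return message;
    }

    public static void main(String[] args) {
        Pipeline pipeline = new Pipeline(List.of(
                msg -> msg.trim(),                      // validate/normalize
                msg -> msg.toUpperCase(),               // enhance
                msg -> "{\"payload\":\"" + msg + "\"}"  // transform for the next service
        ));
        System.out.println(pipeline.run("  hello "));
    }
}
```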
Since you are looking for a design pattern, I think you might want to compare the pros and cons of using microservices orchestration vs choreography in the context of your project.
If the calling system does not need an immediate response, I would suggest an event-driven approach, if that's feasible. Instead of REST services, you would have a message broker, and your services would subscribe to certain events. This hides your consumers behind the message broker, which makes your system less coupled.
This can be implemented via Spring Cloud Stream, where you will have a source (a microservice producing events), a processor (a microservice that performs intermediate transformations), and a sink (a microservice that receives the final result for further processing).
Another option is Camel. It has basically all the integration patterns built in, so it should not be a problem to implement the solution based either on REST APIs or on events.
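For instance, a Camel route for the receive -> save -> enhance -> forward flow could look roughly like this (endpoint URIs and bean names are placeholders):

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of the flow as a Camel route: receive the request over HTTP,
// persist it, enrich it, then call the downstream service.
public class EnrichAndForwardRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://0.0.0.0:8080/requests")   // receive the request
            .to("bean:requestStore?method=save")      // persist it
            .to("bean:enricher?method=enhance")       // add the new parameters
            .to("http://downstream-service/api");     // call the next service
    }
}
```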
I'm building a REST API using Jackson.
As many standard APIs do, this is an interface between a front-end and various resources (databases and processing engines on different environments).
GUI -> REST API -> Databases, HDFS, Hive etc.
What is a way to shield these resources from overloading?
What would be a good design to limit the number of calls that my API does to these services but yet still "handle" the calls from the front end?
You can follow the approaches below to shield these resources from overloading:
1) You can put an in-memory cache over the service layer that interacts with the database resources. This will reduce the number of calls that reach them.
2) You can throttle your API calls, so you can limit the number of calls from a particular user.
Reference - https://adayinthelifeof.nl/2014/05/28/throttle-your-api-calls-ratelimitbundle/
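For illustration, a naive per-user throttle can be as small as the sketch below (the limit and window values are arbitrary; in production an API gateway or a library such as the one referenced above would normally handle this):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Naive fixed-window rate limiter: at most LIMIT calls per user per window.
// A sketch only; real deployments usually delegate this to a gateway.
public class RateLimiter {
    private static final int LIMIT = 100;          // calls per window
    private static final long WINDOW_MS = 60_000;  // one minute

    private static class Counter {
        long windowStart;
        int count;
    }

    private final Map<String, Counter> counters = new ConcurrentHashMap<>();

    public synchronized boolean allow(String userId) {
        long now = System.currentTimeMillis();
        Counter c = counters.computeIfAbsent(userId, k -> new Counter());
        if (now - c.windowStart >= WINDOW_MS) { // new window: reset the count
            c.windowStart = now;
            c.count = 0;
        }
        return ++c.count <= LIMIT; // false -> reject the call (e.g. HTTP 429)
    }
}
```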
I would like to design an application with a core and multiple modules on top of the core. The core would have the responsibility to receive messages from the network, to parse the incoming messages and to distribute the messages to registered modules.
There are multiple types of messages, some modules may be interested to receive only some of the types. The modules can execute in parallel, or they can execute sequentially (e.g. when modules are interdependent with a well defined order of execution).
Also, it would be great if the modules could be deployed/undeployed even if the core is up.
This is completely new to me; I used to write modular applications, but with the parts wired together statically.
Which direction (i.e. framework, pattern...) should I take for such a design? I don't know if it's relevant to my question, but I should mention that I'll use Java.
Thanks
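To make the intent concrete, here is a rough sketch of the kind of dispatching I have in mind (all names invented):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Sketch of a core that lets modules register for message types and
// dispatches incoming messages to the interested ones.
public class Core {
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

    // Modules call this at deploy time (and could deregister at undeploy).
    public void register(String messageType, Consumer<String> module) {
        listeners.computeIfAbsent(messageType, t -> new CopyOnWriteArrayList<>())
                 .add(module);
    }

    // Called after the core has parsed an incoming network message.
    public void dispatch(String messageType, String payload) {
        listeners.getOrDefault(messageType, List.of())
                 .forEach(module -> module.accept(payload));
    }
}
```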
You have a very good approach at the architecture level. But it will be beneficial only if your application layers/tiers run on separate instances, so that you can shut down one module/server while the other parts keep running. The point is: will you run the modules on separate instances?
Secondly, I would suggest building the application's core architecture on web services, either REST or SOAP, as that automatically achieves what you have in mind by following a Service-Oriented Architecture (SOA). That gives you a producer-consumer relationship, and you can run each part on a separate instance. While deploying/undeploying one part, the remaining services can keep running to support other client instances.
Using web services will also give you a global information exchange system that is likely to communicate with several application views/front ends.
My (perhaps all-too-simple) understanding of EJB3 is that it's a way of turning a POJO into a Java EE-compliant unit of business logic. It is reusable and can be "plugged in" to different backend architectures spanning multiple projects. It's a step in the direction of true component-driven architectures. If any of these assertions are untrue, please begin by correcting me!!
If I am correct on these items, then I'm wondering how/where/when/if EJB3s snap into an ESB like Apache Camel. With Camel each endpoint typically implements some EIP like WireTap, Filter, or Transformer. I'm wondering which of these EIP/SOA patterns the EJB (specifically EJB3) fits into. Is it a simple Filter? Something else?
I guess at the root of my question is this:
If I'm building a Camel Route, when does it makes sense to make an EJB3 an endpoint, as opposed to some other EIP? What are the use cases for EJB3s in an ESB and when are they preferable over other EIPs?
There is no right or wrong in this case.
EJBs plug very well into Java EE application servers and are built to provide an architecture that encapsulates business logic as Java code inside EJBs and lets the application server handle scaling, throttling, failover, clustering, load balancing etc., as well as exposing the EJBs over communication protocols (web services, or JMS for message-driven beans).
I see no real point in introducing EJBs as business logic containers in Apache Camel, unless you already have a full stack Java EE application that you want Camel to work with.
Camel has a great set of features to connect to "real" POJOs through bean binding.
I would recommend using simple Java beans/POJOs for business logic; you can easily plug them into any other application through Camel's rich set of connectors. There are multiple options for implementing the different Camel EIPs. One common way is with Java code, but XSLT for transformations and Groovy for filters are just as common. I would never use EJBs for simple filters; I would rather use them to invoke complex logic inside a Java EE application server, or typically avoid them altogether (except MDBs) and look at JMS communication with the application server instead.
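To illustrate the bean-binding approach, a route with a plain POJO could look like this (queue names and the bean are invented):

```java
import org.apache.camel.builder.RouteBuilder;

// Plain POJO holding the business logic; no EJB or container API needed.
class PriceCalculator {
    public String addTax(String body) {
        return body + " +tax"; // placeholder logic
    }
}

// Camel binds the POJO method into the route.
public class PojoRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:orders")
            .bean(PriceCalculator.class, "addTax")
            .to("activemq:queue:priced-orders");
    }
}
```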
Basically, an EJB is a service. The idea behind a service is that a consumer can simply use it without having to create it. Additionally, services can often be looked up in a registry.
So my advice is to use the simple bean integration in cases where it is easy to instantiate the bean implementation, and to use services where it is difficult; that way you can encapsulate the initialization inside the component that provides the service.
I am not using EJBs regularly, but I often use OSGi services, which are very similar in concept to EJBs.
In addition to the previous answers, I'd mention that SOA is an approach with specified requirements rather than a concrete technology stack. Make your EJB3 beans or OSGi services operable over the network regardless of operating systems, platforms and languages, at a minimum, and you will have a service-oriented system. So EJBs, OSGi services, or Spring-powered applications do fit SOA when they fulfill its requirements.