I'm building a REST API using Jackson.
As many standard APIs do, this is an interface between a front-end and various resources (databases and processing engines on different environments).
GUI -> REST API -> Databases, HDFS, Hive etc.
What is a way to shield these resources from overloading?
What would be a good design to limit the number of calls that my API makes to these services while still handling the calls from the front end?
You can follow the approaches below to shield these resources from overloading:
1) Put an in-memory cache over the service layer that interacts with the database resources, so repeated requests are served from the cache and the number of calls to the back end is reduced.
2) Throttle your API calls, so you can limit the number of calls from a particular user.
Reference - https://adayinthelifeof.nl/2014/05/28/throttle-your-api-calls-ratelimitbundle/
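A minimal sketch of the throttling idea from point 2: a fixed-window counter per user. The class and method names here are hypothetical illustrations; for production use a dedicated library (e.g. Bucket4j, or the bundle linked above).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Fixed-window rate limiter: at most `limit` calls per user per time window.
// Illustrative sketch only; names are hypothetical, not from a real library.
public class RateLimiter {
    private final int limit;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    private static final class Window {
        long start;
        int count;
    }

    public RateLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Returns true if the call is allowed for this user, false if throttled.
    public synchronized boolean tryAcquire(String userId) {
        long now = System.currentTimeMillis();
        Window w = windows.computeIfAbsent(userId, k -> new Window());
        if (now - w.start >= windowMillis) {   // window expired: reset it
            w.start = now;
            w.count = 0;
        }
        if (w.count < limit) {
            w.count++;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        RateLimiter limiter = new RateLimiter(3, 60_000);
        for (int i = 0; i < 5; i++) {
            System.out.println("call " + i + " allowed: " + limiter.tryAcquire("alice"));
        }
        // First 3 calls within the window are allowed, the rest are rejected.
    }
}
```

The API layer would call `tryAcquire` before forwarding a request downstream and return HTTP 429 when it returns false.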
When introducing parallel processing to an application where multiple save-entity calls are being made, I see a prior dev has chosen to do it via Spring Integration using split().channel(MessageChannels.executor(Executors.newFixedThreadPool(10))).handle("saveClass","saveMethod").aggregate().get() - where this method is mapped to a requestChannel using the #Gateway annotation. My question is that this task seems simpler to do using the parallelStream() and forEach() methods. Does IntegrationFlow provide any benefit in this case?
If you really are doing plain in-memory data processing where Java's Stream API is enough, then indeed you don't need a whole messaging solution like Spring Integration. But if you deal with distributed requirements to process data from different systems, like from HTTP to Apache Kafka or a database, then it is better to use a tool that lets you smoothly connect everything together. Also: nothing stops you from using the Stream API inside a Spring Integration application; in the end, all your code is Java anyway. Please learn more about EIP and why we would need a special framework to implement these messaging-based solutions: https://www.enterpriseintegrationpatterns.com/
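For comparison, here is roughly what the plain-Java alternative from the question looks like. The `Entity` type and `save` method are hypothetical stand-ins for the real `saveClass`/`saveMethod`:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of the parallelStream() alternative to the Spring Integration flow:
// split / handle / aggregate collapses into a single stream pipeline.
// Entity and save(...) are hypothetical stand-ins, not from the question's code.
public class ParallelSave {
    record Entity(int id) {}

    // Thread-safe collector standing in for the persistence layer.
    static final List<Integer> savedIds = new CopyOnWriteArrayList<>();

    static void save(Entity e) {
        // Stand-in for the real save call; runs on common ForkJoinPool workers.
        savedIds.add(e.id());
    }

    public static void main(String[] args) {
        List<Entity> batch = List.of(new Entity(1), new Entity(2), new Entity(3));
        batch.parallelStream().forEach(ParallelSave::save);
        System.out.println("saved " + savedIds.size() + " entities");
    }
}
```

Note that `parallelStream()` uses the shared common ForkJoinPool rather than a dedicated 10-thread executor, which is one concrete difference from the Spring Integration version.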
Hi, I am using Spring 4's AsyncRestTemplate to make 10k REST API calls to a web service. I have a method that creates the request object and a method that calls the web service. I am using the ListenableFuture classes, and the two methods to create and call are enclosed in another method where the response is handled in the future. Any useful links for such a task would be greatly helpful.
First, set up your testing environment.
Then benchmark what you have.
Then adjust your code and compare (repeat as necessary).
Whatever you do, there is a cost associated with it. You need to be sure that your costs are measured and understood, every step of the way.
A simple Tomcat application might outperform a Spring application or be equivalent depending on what aspects of Spring's inversion of control are being leveraged. Using a Future might be fast or slow, depending on what it is being compared to. Using non-NIO might be faster or slower, depending on the implementation and the data being processed.
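To make the "benchmark, adjust, compare" loop concrete, here is a minimal timing harness. For serious measurement use JMH; this sketch only illustrates the discipline of warming up before timing and averaging over repetitions:

```java
// Minimal measure-first harness: warm up, then average over several runs.
// Illustrative only; a proper microbenchmark should use JMH to avoid
// JIT and dead-code pitfalls.
public class Bench {
    static double averageMillis(Runnable task, int warmup, int reps) {
        for (int i = 0; i < warmup; i++) task.run();   // warm up JIT and caches
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) task.run();
        return (System.nanoTime() - start) / 1e6 / reps;
    }

    public static void main(String[] args) {
        double ms = averageMillis(() -> {
            long sum = 0;
            for (int i = 0; i < 100_000; i++) sum += i;
        }, 5, 20);
        System.out.printf("avg %.3f ms per run%n", ms);
    }
}
```

The same harness can wrap a batch of AsyncRestTemplate calls versus a synchronous loop, so the comparison is between measured numbers rather than assumptions.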
I have an existing web forms project which consists of 3 different projects: UI layer (Web project), Business Logic Layer and Database Project. I have already written the data access methods which connect to the database and return data to the business logic layer.
Now we need to provide a REST API, and I was thinking of using OData along with REST. But all the examples I have seen use Entity Framework, and I just cannot use Entity Framework because our data access layer returns data to the business layer, which processes that data, adds some logic, and then presents it to the UI layer.
Can I still use OData? If yes, will I need to manually create fresh methods for each of the complex OData queries? How will OData access my BL layer?
You can do this (I have just done similar myself) but it is very hard work.
To me, OData always felt like a way of exposing Entity Framework through web services, so if you try to implement it without Entity Framework you will end up spending a lot of time parsing queries for your data access layer.
If you do decide to go down this route, maybe consider only implementing part of the OData spec - work out which parts you actually want to be able to use - as it is huge and the task is daunting.
These are only from my experiences though and you may have a better data access layer API setup than I had when I started which could make things significantly easier.
EDIT to answer last question:
Will you need to create fresh methods manually for each complex OData query? This really depends on how your data will be exposed and how your data access layer is set up.
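To give a feel for the query-parsing work involved, here is a tiny, hypothetical handler for just `$skip` and `$top`, translated into an operation on a list already returned by the business layer. A real OData implementation also covers `$filter`, `$orderby`, `$expand` and much more, which is why implementing only part of the spec is advised:

```java
import java.util.List;
import java.util.stream.IntStream;

// Hand-rolled support for a tiny slice of OData ($skip and $top only),
// applied to results from an existing business layer. Hypothetical sketch
// of the parsing work you take on without Entity Framework.
public class ODataSlice {
    static List<Integer> apply(List<Integer> all, String query) {
        int skip = 0, top = all.size();
        for (String part : query.split("&")) {
            String[] kv = part.split("=", 2);
            if (kv.length != 2) continue;
            if (kv[0].equals("$skip")) skip = Integer.parseInt(kv[1]);
            if (kv[0].equals("$top"))  top  = Integer.parseInt(kv[1]);
        }
        int from = Math.min(skip, all.size());
        int to   = Math.min(from + top, all.size());
        return all.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> data = IntStream.rangeClosed(1, 10).boxed().toList();
        System.out.println(apply(data, "$skip=2&$top=3"));  // [3, 4, 5]
    }
}
```

Every additional OData option means more parsing and mapping code like this between the URL and your business-layer methods.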
Having several backend modules exposing a REST API, one of the modules needs to call the other modules through their APIs and get an immediate response.
A solution is to call the REST APIs directly from this 'top' module. The problem is that this creates coupling and does not natively support scaling or failover.
A kind of bus (JMS, ESB) allows you to decouple the modules by removing the need for each module to know the others' endpoints. They only 'talk' to the bus.
What would you use to enable fast response through the bus (another constraint is you don't have multicast as it could be deployed in the cloud)?
Also, is it reasonable to still rely on the REST API, or would a JMS listener be better?
I have thought about JMS, Camel and ESBs. Do you know of companies using such an architecture?
ps: a module could be a java war running on a tomcat instance for example.
If your top module "knows" to call the other modules, then yes you have a coupling, which could be undesirable. If instead your top module is directed to the other modules through links, forms and/or redirects from the responses from the middle module, then you have the same amount of coupling that a JMS solution would give you.
When you need scalability and failover (not before), add a caching reverse proxy such as an F5 or Varnish. This will be more scalable and resilient than any JMS based solution.
Update
In situations where you want to aggregate and/or transform the responses from the other modules, you're simply creating a composed service. The top module calls the middle module, which makes one or more calls to the back-end modules, composes the results and sends the appropriate response. Using a HTTP cache in between each hop (i.e. Top -> Varnish -> Middle -> Varnish -> Backend) is a much easier and more efficient way to cache the data, compared to a bespoke JMS based solution.
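The "HTTP cache in between each hop" approach only requires each module to emit correct caching headers; the proxy (Varnish, F5) in front of it does the rest. A minimal sketch using the JDK's built-in HttpServer, with hypothetical path and values:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Sketch: a module makes its responses cacheable by a reverse proxy simply by
// setting Cache-Control. Path, body and max-age are hypothetical examples.
public class CacheableModule {
    static String fetchCacheControl() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "{\"ok\":true}".getBytes();
            // Tell intermediaries the response may be cached for 60 seconds.
            exchange.getResponseHeaders().add("Cache-Control", "public, max-age=60");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:" + port + "/status").openConnection();
            return conn.getHeaderField("Cache-Control");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Cache-Control: " + fetchCacheControl());
    }
}
```

With a header like this in place, a Varnish instance between Top and Middle (and between Middle and Backend) can serve repeated reads without touching the module at all.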
I'm working on an application that acts as an event service bus for integrating various legacy components. The application uses a data store to audit all events and requests sent between systems, as well as to store metadata about bus-subscribing endpoints, etc. I want to use CouchDB as the data store since it already has many of my application's requirements built in (REST API, replication, versioned metadata documents, etc.). Here's what my app stack looks like:
[spring-integration filters/routers/service activators]
[service layer]
[dao layer]
[database]
With the database being CouchDB, I guess the DAO layer would be either the Ektorp Java library or a simple REST client. Here's my question, though: isn't building a DAO layer with Ektorp kind of redundant? I mean, why not just use a RestTemplate in the service layer that talks to the views and design documents in CouchDB, and save me some coding effort?
Am I missing something?
Thanks,
I don't know if you have tried it yet, but LightCouch in many ways acts like a REST template. Besides handling document conversion to your domain objects, and design docs / views, you can use it as a client to CouchDB anywhere in the application, such as in a DAO or service layer.
If you roll your own, you will have to implement the JSON parsing / mapping of view results and so on.
Besides efficient view result parsing / object mapping which might be tedious to develop yourself, Ektorp will also help you with view design document management through annotations.
Ektorp has many more features that I think you will appreciate when you dive deeper into CouchDB.
If your app will only perform simple gets of individual documents, then RestTemplate might be enough. Otherwise I don't think you will save time doing it yourself.
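To illustrate the view-result parsing you would be writing yourself with a raw REST client: CouchDB view queries return a JSON envelope of `rows` with `id`, `key` and `value` fields. The sample below is a hypothetical view result, and the naive regex extraction is exactly the kind of fragile mapping code that Ektorp or LightCouch handles for you:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hand-rolled extraction of document ids from a CouchDB view result.
// Deliberately naive (regex, not a real JSON parser) to show what a
// "just use RestTemplate" DAO ends up reimplementing.
public class ViewResultParsing {
    static List<String> documentIds(String viewJson) {
        List<String> ids = new ArrayList<>();
        Matcher m = Pattern.compile("\"id\"\\s*:\\s*\"([^\"]+)\"").matcher(viewJson);
        while (m.find()) ids.add(m.group(1));
        return ids;
    }

    public static void main(String[] args) {
        // Hypothetical sample of CouchDB's view-result shape.
        String sample = "{\"total_rows\":2,\"offset\":0,\"rows\":["
                + "{\"id\":\"doc1\",\"key\":\"a\",\"value\":1},"
                + "{\"id\":\"doc2\",\"key\":\"b\",\"value\":2}]}";
        System.out.println(documentIds(sample));  // [doc1, doc2]
    }
}
```

Multiply this by every view, every document type and every update-conflict case, and the library approach usually pays for itself.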