Using RestTemplate for external API calls - performance - java

I am a junior dev and have worked with Spring Boot on different projects, mainly for creating server/client setups where the client sends some data to a server, and the server then calls external APIs to reflect changes on the client's side.
In the future I will be involved in creating a backend for an administration portal. On this portal, there might be up to ~100 users making changes at the same time. With the aforementioned server/client systems, updates were infrequent, so performance was not a priority; however, if 100 users are on this administration portal at the same time, the user experience may be poor.
Structure of what I am building:
Client (browser) CRUD -> my backend -> 1-3 external API calls to locally hosted services.
And this is where my knowledge of Spring's RestTemplate is lacking. If 10 users make a request at roughly the same time, the RestController will try to execute the corresponding code for the 10 requests at the same time. But since the code uses RestTemplate to make the external API calls, will each RestController "thread" have to wait for its turn because RestTemplate is synchronous? That would mean the last RestController request might have to wait for possibly 10-30 RestTemplate calls to the external API, each of which takes roughly 30 ms.
Is there a better way to handle this, so that each request to the RestController can contact the external API without having to wait for another thread to free up RestTemplate?
I might be talking gibberish; I find it very hard to google specific questions about the structure of Spring/RestController/RestTemplate and whether they are asynchronous or synchronous.
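For context, here is a minimal sketch of the setup I am describing, assuming a standard Spring MVC application with one shared RestTemplate bean (the class names, timeouts, endpoint, and internal-service URL are placeholders I made up):

```java
import java.time.Duration;

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@Configuration
class HttpClientConfig {

    // One shared RestTemplate bean, safe to use from many request threads at once.
    @Bean
    RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder
                .setConnectTimeout(Duration.ofSeconds(2))
                .setReadTimeout(Duration.ofSeconds(5))
                .build();
    }
}

@RestController
class PortalController {

    private final RestTemplate restTemplate;

    PortalController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Each incoming request is handled on its own servlet thread; that thread
    // blocks for the ~30 ms external call and is then free again.
    @GetMapping("/items/{id}")
    String item(@PathVariable String id) {
        // Placeholder URL for one of the locally hosted services.
        return restTemplate.getForObject(
                "http://internal-service.local/items/{id}", String.class, id);
    }
}
```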

Related

Microservice project between a monolithic API and another backend

I would like to know if it's possible to create a Spring Boot microservice between an old Java 1.8 monolithic API and a Spring Boot backend (React for the front end, but it doesn't matter).
Here is the idea:
RestController inside the monolithic API ---> Microservice (Spring Boot) ---> Back API (Spring Boot)
For the use case:
Click on a button in API A
Bind the data to the RestController of API B
Send the same data to API C
I don't think it's possible through a RestController because of cross-origin restrictions, but it would be great to find a solution.
What do you think?
TL;DR Assuming these are all synchronous remoting calls, I think this should not pose too many problems, apart from maybe latency (if that's an issue) and possibly authentication.
The RestController in your Monolith A can call the REST API implemented by your Microservice B as long as it can reach that endpoint, and knows how to map/aggregate the data for it. The Microservice B can in turn call your Back API C.
I assume the calls will all be blocking, meaning each thread processing a request will be paused until a response is received. This means that the call to A will have to wait until B and C are all done with their processing and have sent their responses. This can add up (especially if these are all network hops to different servers). If this is a temporary set up to apply the strangler pattern to part of the monolith then the latency might not be an issue for the period in which calls are still routed through the monolith.
Cross-origin resource sharing (CORS) is only a concern when retrieving content from a browser window, as far as I know. In the described situation this should not be an issue: any client calling Monolith A will not be aware of the components behind it. If one or more of the three components are not under your control, or not managed/authenticated in the same way, then you might run into some authentication challenges. For instance, the Microservice might require a JWT which the Monolith might not yet provide. That would mean some tinkering to get the components to become friends in this respect.
Strangler pattern
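As a rough sketch, the forwarding call inside Monolith A could look something like this (the endpoint, the Microservice B URL, and the token handling are placeholders, and the exact client API depends on the Spring version available in the monolith):

```java
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Inside Monolith A: forwards the bound data to Microservice B, which in turn
// calls Back API C. The call blocks until B (and therefore C) has answered.
@RestController
public class ForwardingController {

    private final RestTemplate restTemplate = new RestTemplate();

    @PostMapping("/forward")
    public ResponseEntity<String> forward(@RequestBody String payload) {
        HttpHeaders headers = new HttpHeaders();
        // If B requires a JWT, it has to be obtained and propagated here
        // (placeholder token).
        headers.set("Authorization", "Bearer <jwt-for-microservice-b>");

        HttpEntity<String> request = new HttpEntity<>(payload, headers);
        // Placeholder URL for Microservice B.
        return restTemplate.exchange(
                "http://microservice-b/api/data", HttpMethod.POST, request, String.class);
    }
}
```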

How to work with an asynchronous process from an external client?

Please give me some advice about the best pattern for solving this task. My task is this:
The user makes a request to the Camunda process engine through their own REST controller.
The BPMN schema on the backend side consists of a chain of several asynchronous services.
The data is ready for the response to the user only when the final service in the BPMN chain produces it.
Each chain runs for no more than 10-15 seconds, and the number of user sessions is less than 500 per hour.
How should I organize the work of the REST controller? Is it acceptable to make the controller wait for the result within the same call? Where is the bottleneck?
Can you use some server push technology? If it were just a couple of seconds, I'd say go for waiting in the REST controller.
At 15 seconds, and thinking about scalability, I'd follow some kind of asynchronous pattern with the client too:
The client sends a request to do something.
The controller delegates the work to some external process and returns OK to the client.
The process ends and a response is ready.
If the other side is a browser, use some kind of server push technology to notify it. If it is an application, use some kind of RPC, polling, or any other inter-process mechanism to communicate.
Note that depending on the hosting technology, there are different limits on concurrent connections. Check Spring Boot - Limit on number of connections created for tomcat.
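A minimal sketch of that flow with a polling endpoint (the endpoint names, the in-memory result map, and runProcessChain are stand-ins; for browser clients, server push via SSE or WebSockets would replace the polling side):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TaskController {

    // In-memory result store just for illustration; a real setup would query
    // the process engine or a database for the final result.
    private final Map<String, String> results = new ConcurrentHashMap<>();

    // 1. The client sends a request; the controller kicks off the long-running
    //    chain and returns 202 Accepted with a task id immediately.
    @PostMapping("/tasks")
    public ResponseEntity<String> startTask(@RequestBody String input) {
        String taskId = UUID.randomUUID().toString();
        CompletableFuture
                .supplyAsync(() -> runProcessChain(input))      // e.g. start the BPMN process
                .thenAccept(result -> results.put(taskId, result));
        return ResponseEntity.accepted().body(taskId);
    }

    // 2. The client polls (or is notified via server push) until the final
    //    service in the chain has produced the data.
    @GetMapping("/tasks/{id}")
    public ResponseEntity<String> getResult(@PathVariable String id) {
        String result = results.get(id);
        if (result == null) {
            return ResponseEntity.status(HttpStatus.ACCEPTED).body("PENDING");
        }
        return ResponseEntity.ok(result);
    }

    // Placeholder for the 10-15 second chain of asynchronous services.
    private String runProcessChain(String input) {
        return "done: " + input;
    }
}
```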

Self-Updating Simulator for a Web-Service Suite

I am presently working on an application which has an external dependency on microservices; there are around 25 microservices, administered via a Eureka instance, and every microservice has around 3-4 controllers.
This is an external dependency for me and blocks my work if it goes down; I am also unaware of the code and logic of these microservices.
Currently, I am looking for a solution that can act as a simulator for these services in their absence: some application which can intercept and log all the requests and responses to/from the external services, and which, in their absence, can match a request against the last logged response and return that response.
You should check Mockito or any other mock framework.
Just record and serialize the result, e.g. with XStream, respond with the deserialized XStream result, and modify it slightly to your needs.
This is the quickest solution for mocking remote services.
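A rough sketch of the record-and-replay idea with XStream (the class name, request keying, and storage layout are made up; newer XStream versions also need the deserialized types to be allow-listed, as shown):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import com.thoughtworks.xstream.XStream;

// Records the last response per request key while the real services are up and
// replays it when they are down.
public class RecordingStub {

    private final XStream xstream = new XStream();
    private final Path storageDir = Paths.get("recorded-responses");

    public RecordingStub() {
        // Newer XStream versions block unknown types on deserialization by
        // default; the package pattern here is a placeholder.
        xstream.allowTypesByWildcard(new String[] {"com.example.**"});
    }

    // Called while the real services are reachable: serialize the response.
    public void record(String requestKey, Object response) throws IOException {
        Files.createDirectories(storageDir);
        Files.write(storageDir.resolve(requestKey + ".xml"),
                xstream.toXML(response).getBytes(StandardCharsets.UTF_8));
    }

    // Called while the real services are down: replay the last recorded response.
    public Object replay(String requestKey) throws IOException {
        Path file = storageDir.resolve(requestKey + ".xml");
        String xml = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        return xstream.fromXML(xml);
    }
}
```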

How to handle thousands of requests in a REST-based web service at a time?

Is making a REST-based web service (POST) asynchronous the best way to handle thousands of requests at one time (keeping in mind that I have only a single server instance serving the requests)?
Edited:
Jersey is wrongly tagged.
For example: I have a REST-based web service which is supposed to be consumed by 100 thousand clients within a very short span of time (~60 seconds). I understand that if I were allowed to deploy multiple instances of the server, then I could use a load balancer to handle all the incoming requests and delegate them accordingly. But I am restricted to a single instance. What design could I opt for within this restriction?
I could think of making the request asynchronous (so that the service does not respond to the client immediately), in order to free the server from this load and let it handle the requests at its own pace.
For now we can ignore memory limitations.
Please let me know if this clarifies your question.
The term asynchronous can have different meanings in different places. For web application code, it could refer to a non-blocking I/O server such as Node or Netty/Akka, which is a way for HTTP requests to time-multiplex on the same worker threads. If you're writing callbacks or using async or future constructs, it is probably non-blocking I/O, which people sometimes refer to as asynchronous.
However, I could have a REST API running on Node which implements non-blocking I/O, while the API or the overall architecture is still fully synchronous. For example, let's say I have an API endpoint POST /photos, which takes in a photo, creates image thumbnails, stores the URLs of the photo in a SQL DB, and then stores the images in S3. The REST API could still block from the initial POST until after the image is processed and stored.
A second way is for the server to accept the photo-processing job and return immediately. The server could then store the photo in an in-memory or network-based queue to be processed later by some other worker thread. In fact, I could implement this async architecture even with a blocking server like some good old Java 7 and Jetty.
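A minimal sketch of that second approach, assuming a plain in-memory queue and a small fixed worker pool (class and method names are illustrative; a network-based queue would replace the LinkedBlockingQueue in a real setup):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// "Accept the job and return immediately": the request handler only enqueues
// the work, and a small worker pool drains the queue at its own pace.
public class PhotoJobQueue {

    private static final int WORKER_COUNT = 4;

    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(100_000);
    private final ExecutorService workers = Executors.newFixedThreadPool(WORKER_COUNT);

    public PhotoJobQueue() {
        for (int i = 0; i < WORKER_COUNT; i++) {
            workers.submit(this::drainQueue);
        }
    }

    // Called from the (blocking) POST /photos handler: returns as soon as the
    // job is queued, so the HTTP thread is free again almost immediately.
    public boolean submit(byte[] photo) {
        return queue.offer(photo);   // false -> queue full, respond with 503/429
    }

    private void drainQueue() {
        try {
            while (true) {
                byte[] photo = queue.take();
                process(photo);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void process(byte[] photo) {
        // Create thumbnails, store URLs in the DB, upload the images to S3, etc.
    }
}
```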

Java applications architecture and data exchange

We are developing several Java applications. Very simple descriptions:
Frontend applications - web apps that interact with users
Middleware app - provides some functionality for the frontend apps
Transport app - communicates with external systems
These applications communicate with each other through XML over HTTP.
A scenario from real life looks like this: a user performs some action in the frontend app, the frontend app calls the middleware app, and the middleware app usually calls the transport app (which usually calls some other external system). The frontend app can also call the transport app directly; it depends on the flow, business logic, etc.
As you can see, there are plenty of HTTP calls: the frontend app makes an HTTP call to the middleware app, the middleware app makes an HTTP call to the transport app, the transport app asks some other system and sends the response back to the middleware app, and so on.
My question is: is this really a good architecture? It looks like too much overhead to me. There should be some better solution for transporting data between the apps, especially since they are running on one server.
In 99% of cases the data is simple XML, created through XStream.
Could JMS be an appropriate solution for that?
Thank you
I agree with you that although it will most certainly work OK, the approach with HTTP calls between layers is probably a bit heavy-handed.
JMS would be a very good match if the calls between the different layers are asynchronous and essentially fire-and-forget (you fire a message and aren't immediately interested in the outcome of the work the destination has to do when it receives your message). Although there are people who do request-reply with JMS, I don't feel it's the most natural and elegant usage of a message-oriented system.
If what you're doing is synchronous (you call a backend and wait for it to respond to your request), I'd go with normal (stateless) session beans; their creation and management has been simplified a lot in EE6.
Another advantage of EJBs is that you don't incur the overhead of the different XML serializations and deserializations that are needed in the scenario you describe.
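For the asynchronous case, a fire-and-forget hand-off could look roughly like this (JMS 1.1 API as available in EE6; the bean name and the JNDI resource names are placeholders):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Fire-and-forget hand-off from the middleware app to the transport app:
// the message is sent and the caller does not wait for any reply.
@Stateless
public class TransportSender {

    @Resource(lookup = "jms/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/TransportQueue")
    private Queue transportQueue;

    public void send(String xmlPayload) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(transportQueue);
            producer.send(session.createTextMessage(xmlPayload));
        } finally {
            connection.close();
        }
    }
}
```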
