Spring Boot HTTP client for multiple servers WITHOUT a load balancer - java

I'm running Spring Boot services in AWS ECS using CloudMap.
Using Java 11 and Spring Boot 2.2.1.RELEASE
S(1) and S(2) are exact copies of a CPU-intensive service, and C calls them as part of servicing multiple parallel requests.
C is not resource bound, so I don't want to create more instances of it.
Calls are HTTP/REST, made using com.konghq:unirest-java:jar:3.6.00, which in turn uses httpcomponents:httpclient:jar:4.5.11.
Here's a little diagram:
Multiple Parallel Requests ----> C (10.1.12.25) ---------> S(1) (10.1.178.143)
                                                 \
                                                  \------> S(2) (10.1.118.82)
Using CloudMap as the service directory, when I dig <service-name>, it returns both IPs in the answer:
;; ANSWER SECTION:
<service-name>. 60 IN A 10.1.178.143
<service-name>. 60 IN A 10.1.118.82
Because C is only one instance, S(1) is receiving 100% of the requests from C. This makes me think C is somehow using only one of the IP addresses registered for <service-name>.
Is it possible to make C use BOTH IP addresses to invoke <service-name> without using a load balancer? Maybe by configuring something in Unirest and/or HttpClient?
Thanks in advance.
P.S.: This is my first question, so please be kind if not the right tag, etc. ;-)

You can use Route 53 with those IP addresses and choose a round-robin routing policy.
However, if the requests run in parallel and you don't want to overload any specific instance, a load balancer is needed in the long run.
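For the client-side route the question hints at, one hedged sketch (assuming Unirest 3.x lets you pass a custom Apache HttpClient via Unirest.config().httpClient(...); the exact overload may vary between 3.x releases) is to register a DnsResolver that shuffles the A records CloudMap returns, so new connections are not always opened against the first address:

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.apache.http.conn.DnsResolver;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.SystemDefaultDnsResolver;

import kong.unirest.Unirest;

public class ShufflingDnsConfig {

    public static void configure() {
        // Resolve all A records for the CloudMap name and shuffle them, so new
        // connections are not always opened against the first address returned.
        DnsResolver shufflingResolver = new DnsResolver() {
            private final SystemDefaultDnsResolver delegate = new SystemDefaultDnsResolver();

            @Override
            public InetAddress[] resolve(String host) throws UnknownHostException {
                List<InetAddress> addresses = Arrays.asList(delegate.resolve(host));
                Collections.shuffle(addresses);
                return addresses.toArray(new InetAddress[0]);
            }
        };

        CloseableHttpClient apacheClient = HttpClients.custom()
                .setDnsResolver(shufflingResolver)
                // Keep pooled connections short-lived, otherwise reused connections
                // keep going to whichever instance they were opened against.
                .setConnectionTimeToLive(30, TimeUnit.SECONDS)
                .build();

        // Hand the customised Apache client to Unirest.
        Unirest.config().httpClient(apacheClient);
    }
}

Note that HttpClient pools connections, so this only spreads newly created connections; a short connection TTL (as above) is what lets the traffic re-balance over time.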

Related

Spring Boot + k8s - autodiscovery solution

Let's say I have the following architecture based on Java Spring Boot + Kubernetes:
N pods with a similar purpose (let's say: order-executors) - a GROUP of pods
other pods with other business implementations
I want to create a solution where:
One (central) pod can communicate with all pods or a specific GROUP of pods to get some information about the state of these pods (REST or any other way)
Some pods can of course be replicated, e.g. x5, and the central pod should communicate with all replicas
Is this possible with any technology? If every order-executor pod sits behind a k8s Service, is there a way to communicate with all replicas of that pod to get some info about all of them? The central pod only has the service URL, so it doesn't know which replica pod is on the other side.
Is there any solution to auto-discover every pod in the cluster and communicate with them without changing any configuration? Spring Cloud? Eureka? Consul?
P.S.
etcd and RabbitMQ are also deployed in my architecture. Maybe they can be used as part of the solution?
You can use a "headless Service", one with clusterIP: none. The result of that is when you do an DNS lookup for the normal service DNS name, instead of a single A result with the magic proxy mesh IP, you'll get separate A responses for every pod that matches the selector and is ready (i.e. the IPs that would normally be the backend). You can also fetch these from the Kubernetes API via the Endpoints type (or EndpointSlices if you somehow need to support groups with thousands, but for just 5 it would be Endpoints) using the Kubernetes Java client library. In either case, then you have a list of IPs and the rest is up to you to figure out :)
I'm not familiar with Java, but the concept is something I've done before. There are a few approaches you can take. One of them is using Kubernetes events.
Your application listens to Kubernetes events (using a websocket). Whenever a pod with a specific label or label set is created, removed, enters the terminating state, etc., you will get updates about its state, including the pod IPs.
You can then do whatever you like in your application without having to contact the Kubernetes API yourself.
You can even use a sidecar container which listens to those events and writes them to a file. Using a shared volume, your application can read that file and use its contents.
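A sketch of the consuming side of that sidecar idea (the file path and the one-IP-per-line format are assumptions, not a standard) could be as simple as re-reading the shared file:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class PeerFileReader {

    // Hypothetical location on the shared volume, one pod IP per line,
    // kept up to date by the sidecar container.
    private static final Path PEERS_FILE = Path.of("/shared/peers.txt");

    public static List<String> currentPeers() throws IOException {
        return Files.readAllLines(PEERS_FILE);
    }
}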

Microservice project between a monolithic API and another backend

I would like to know if it's possible to create a Spring Boot microservice between an old Java 1.8 monolithic API and a Spring Boot backend (React for the frontend, but it doesn't matter).
Here is the idea:
RestController inside the monolithic API ---> Microservice (Spring Boot) ---> Back API (Spring Boot)
For the use case:
Click a button in API A
Bind the data to the RestController of API B
Send the same data to API C
I don't think it's possible through a RestController due to Cross-Origin restrictions, but it would be great to find a solution.
What do you think?
TL;DR Assuming these are all synchronous remoting calls, I think this should not pose too many problems, apart from maybe latency (if that's an issue) and possibly authentication.
The RestController in your Monolith A can call the REST API implemented by your Microservice B as long as it can reach that endpoint, and knows how to map/aggregate the data for it. The Microservice B can in turn call your Back API C.
I assume the calls will all be blocking, meaning each thread processing a request is paused until a response is received. This means that the call to A will have to wait until B and C are both done with their processing and have sent their responses. This can add up (especially if these are all network hops to different servers). If this is a temporary setup to apply the strangler pattern to part of the monolith, then the latency might not be an issue for the period in which calls are still routed through the monolith.
Cross-origin resource sharing (CORS) is only a concern when retrieving content from a browser window, as far as I know. In the described situation this should not be an issue. Any client calling Monolith A will not be aware of the components behind it. If one or more of the three components are not under your control, or not managed/authenticated in the same way, then you might run into some authentication challenges. For instance, the Microservice might require a JWT token which the Monolith might not yet provide. This would mean some tinkering to get the components to become friends in this respect.
Strangler pattern
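As a minimal illustration of the A -> B hop described above (the endpoint paths, the microservice URL, and the use of a plain String payload are made up), the monolith's RestController could simply relay the request with RestTemplate:

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class OrderForwardingController {

    private final RestTemplate restTemplate = new RestTemplate();

    // Hypothetical endpoint in Monolith A that simply relays the payload to
    // Microservice B; B would do the same towards Back API C.
    @PostMapping("/api/orders")
    public String forwardOrder(@RequestBody String orderJson) {
        return restTemplate.postForObject(
                "http://microservice-b/orders", orderJson, String.class);
    }
}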

Spring Boot + Tomcat performance: server-sent events vs WebSocket vs old-school REST

Suppose we have a Spring Boot application built with Tomcat, and a web client that runs in the browser. We need to show some info about processes running on the server side.
As we know, there are three ways to implement such functionality:
1. websocket;
2. server-sent-events;
3. a REST service (GetMapping) on the server side and a timer on the browser side which polls that service with GET requests every second.
Let's suppose that we have a lot of clients, and that our own code has equal time and memory consumption in all three versions. Question: which way is cheapest in terms of CPU, memory and, maybe, available connections?
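For orientation only (this is not a benchmark of the three options), option 2 in Spring MVC on Tomcat is typically wired up with SseEmitter; the endpoint and payload below are a made-up sketch:

import java.io.IOException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class ProgressController {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Each client holds one long-lived HTTP connection; the servlet thread is
    // released immediately and events are pushed from the scheduler thread.
    @GetMapping("/progress")
    public SseEmitter progress() {
        SseEmitter emitter = new SseEmitter(60_000L);
        ScheduledFuture<?> task = scheduler.scheduleAtFixedRate(() -> {
            try {
                emitter.send("still running...");
            } catch (IOException e) {
                emitter.completeWithError(e);
            }
        }, 0, 1, TimeUnit.SECONDS);
        // Stop pushing once the client disconnects or the emitter times out.
        emitter.onCompletion(() -> task.cancel(true));
        emitter.onTimeout(() -> task.cancel(true));
        return emitter;
    }
}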

ECS Fargate Routing to 20+ Containers from ALB

I'm running 20+ Java Elastic Beanstalk instances that are all using Classic Load Balancers and the works (autoscaling, security groups, etc.). I'm trying to figure out how to improve this setup, move it to ECS, and reduce overall resource consumption.
What I'm trying to figure out is whether it's possible to have Fargate route 20+ different host and path match conditions to 20+ different containers in a single service.
I have a POC in place where my containers spin up on an ECS EC2 instance instead of Fargate, and I can see everything working when I visit the Route 53 CNAME I'm testing with.
I have traffic coming into a main ALB that redirects HTTP traffic -> HTTPS. After that, the traffic is filtered through rules that route it based on host and path conditions.
First rule: host is hello.world.com + path is /java, then forward to helloworld-target-group
Second rule: host is new.world.com + path is /java, then forward to welcomeworld-target-group
and so on
I'm reading that a single ECS service can only have one ALB with a max of five target groups, and my initial thinking was to have the 20+ target groups on a single service with one ALB and 20+ containers.
Now I'm thinking that, as a possible workaround, I could have 5 different services, making use of the 5-target-group-per-service limit.
The containers will all have the same Docker image. The only difference will be the environment variables on the containers. (I need each container to have different environment variables so that Java knows which DB to use.)
Has anyone looked at a similar problem or know of a better solution?
Edit: I might be wrong, or AWS may have updated the routing rules, but as of now the answer I've landed on is not to use Fargate to route traffic to 20+ containers. Instead, use an ECS EC2 instance and set up 20+ target groups that all route to separate ports on the EC2 instance. This way you can route traffic to 20+ containers from a single instance, which is pretty cool.
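On the application side, the per-container environment variables mentioned above can be read in Spring the usual way; the variable and field names here are invented for illustration:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class DatabaseSettings {

    // Resolved from the container's environment, e.g. a DB_URL variable set per
    // ECS task definition, so every container built from the same image targets
    // its own database.
    @Value("${DB_URL}")
    private String dbUrl;

    @Value("${DB_USERNAME}")
    private String dbUsername;

    public String getDbUrl() {
        return dbUrl;
    }

    public String getDbUsername() {
        return dbUsername;
    }
}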

AWS RDS - How to read from a read replica inside of a Java application?

I am new to AWS.
I have a MySQL RDS instance and I just created 2 read replicas. My application is written in Java, and up until now I have connected to the single AWS instance using JDBC, but how do I now distribute the work across the 3 servers?
You can set up an internal Elastic Load Balancer to round robin requests to the slaves. Then configure two connections in your code: one that points directly to the master for writes and one that points to the ELB endpoint for reads.
Or if you're adventurous, you could set up your own internal load balancer using Nginx, HAProxy, or something similar. In either case, your LB will listen on port 3306.
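A rough sketch of that two-connection setup in Spring Boot (host names, credentials, and database names are placeholders) with one DataSource per endpoint:

import javax.sql.DataSource;

import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class RdsDataSourceConfig {

    // Writes always go straight to the master endpoint.
    @Bean
    @Primary
    public DataSource writeDataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:mysql://my-master.example.rds.amazonaws.com:3306/app")
                .username("app")
                .password("secret")
                .build();
    }

    // Reads go through the internal load balancer (or DNS record) that
    // round-robins across the read replicas. Inject this one with a
    // @Qualifier("readDataSource") since two DataSource beans exist.
    @Bean
    public DataSource readDataSource() {
        return DataSourceBuilder.create()
                .url("jdbc:mysql://my-read-endpoint.internal:3306/app")
                .username("app")
                .password("secret")
                .build();
    }
}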
AWS suggests setting up Route 53. Here is the official article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/requests-rds-read-replicas/
In case you have the option to use Spring Boot and spring-cloud-aws-jdbc,
you can take a look at the working example and explanation in this post.
