How to make Eureka really work on Cloud Foundry (cf)? - java

In Cloud Foundry, all apps are accessible via Cloud Foundry load balancers.
Each load balancer has a URL under which to call the app.
Even though there is a hidden way to address a single instance (the X-CF-APP-INSTANCE header),
calling an app instance directly is not intended.
Eureka is highly available and partition tolerant, but it lacks consistency (CAP theorem).
To overcome the staleness of registry data, Netflix uses client-side load balancing (Ribbon).
(see: https://github.com/Netflix/eureka/wiki/Eureka-2.0-Architecture-Overview)
Because an app in Cloud Foundry is called via its load balancer, it registers itself with Eureka under the
address of the load balancer. As stated before, the app instance does not even have its own address.
To make this more visible, let's say there are two instances of an app A (a1 and a2) that register themselves with Eureka.
Eureka now has two entries for app A, but both with the same address assigned.
Now when Ribbon steps in to overcome Eureka's consistency problem, it is very likely that a retry is directed to the same instance as the first try.
Failover using Ribbon therefore does not work with Eureka on Cloud Foundry.
The fact that all instances of an app are assigned the same address in Eureka makes things complicated in many situations.
Even Eureka's own replication we could only solve with a workaround; Turbine needs to be fed in push mode, etc.
We are thinking about enhancing Eureka so that it sets the X-CF-APP-INSTANCE header.
Before doing that, we wanted to know whether someone knows an easier way to make Eureka really work on Cloud Foundry?
This question is related to: Deploying Eureka/ribbon code to Cloud Foundry

I think Eureka, and any other discovery service, has no place in a PaaS like Cloud Foundry. Although it sounds appealing to enhance Eureka to support the X-CF-APP-INSTANCE header, you would also need to enhance the client part (Ribbon) to take advantage of that information and add that header to each request.
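To sketch what that client-side enhancement could look like (a hedged illustration, not an existing Ribbon feature): an interceptor that pins each request to one instance via the X-CF-APP-INSTANCE header, assuming the target app's GUID and the chosen instance index are known, e.g. from the instance's Eureka metadata:

    // Hypothetical sketch: pin an HTTP request to one CF app instance.
    // The Gorouter routes a request carrying "X-CF-APP-INSTANCE: <guid>:<index>"
    // to exactly that instance instead of round-robining across all instances.
    import org.springframework.http.HttpRequest;
    import org.springframework.http.client.ClientHttpRequestExecution;
    import org.springframework.http.client.ClientHttpRequestInterceptor;
    import org.springframework.http.client.ClientHttpResponse;

    import java.io.IOException;

    public class CfAppInstanceInterceptor implements ClientHttpRequestInterceptor {

        private final String appGuid;       // CF GUID of the target app (assumed known)
        private final String instanceIndex; // index chosen by the load balancer

        public CfAppInstanceInterceptor(String appGuid, String instanceIndex) {
            this.appGuid = appGuid;
            this.instanceIndex = instanceIndex;
        }

        @Override
        public ClientHttpResponse intercept(HttpRequest request, byte[] body,
                ClientHttpRequestExecution execution) throws IOException {
            request.getHeaders().add("X-CF-APP-INSTANCE", appGuid + ":" + instanceIndex);
            return execution.execute(request, body);
        }
    }

Registered on a RestTemplate, something like this would let a retry target a different index than the first attempt.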
Well, it's 9 months later, maybe you have done that in the meantime? Or you follow an alternative solution path?
Anyway, there's an additional app-to-app integration option in the meantime: container-to-container networking. And even here, the Cloud Foundry dev team decided to provide their own discovery mechanism.

Related

Best way for inter-cluster communication between microservices on Kubernetes?

I am new to microservices and want to understand the best way to implement the behaviour below in microservices deployed on Kubernetes:
There are 2 different K8s clusters. Microservice B is deployed on both the clusters.
Now if a Microservice A calls Microservice B and B’s pods are not available in cluster 1, then the call should go to B of cluster 2.
I could imagine implementing this functionality with Netflix OSS, but I am not using it here.
Also, setting inter-cluster communication aside for a second, how should I communicate between microservices?
One way that I know is to create Kubernetes Service of type NodePort for each microservice and use the IP and the nodePort in the calling microservice.
Question: what if someone deletes the target microservice's K8s Service? A new nodePort will be randomly assigned by K8s when the Service is recreated, and then I will have to go back to my calling microservice and change the nodePort of the target microservice. How can I decouple from the nodePort?
I researched kube-dns, but it seems it only works within a cluster.
I have very limited knowledge of Istio and Kubernetes Ingress. Does either of these provide what I am looking for?
Sorry for a long question. Any sort of guidance will be very helpful.
You can expose your application using Services; there are several kinds of Service you can use:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record
For internal communication you can use the ClusterIP service type, and you can address your applications via the service DNS name instead of an IP.
E.g.: a service called my-app-1 can be reached internally using the DNS name http://my-app-1 or the FQDN http://my-app-1.<namespace>.svc.cluster.local.
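As a minimal illustration (service and namespace names assumed), calling another service by its cluster DNS name from Java needs nothing special on the client side:

    // Minimal sketch: Microservice A calls Microservice B through the
    // Service DNS name, so no pod IP or nodePort is involved.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ClusterDnsCall {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            // The short name http://my-app-1 resolves within the same
            // namespace; the FQDN works from any namespace.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://my-app-1.default.svc.cluster.local/health"))
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        }
    }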
For external communication, you can use NodePort or LoadBalancer.
NodePort is good when you have few nodes and know the IPs of all of them. And yes, per the Service docs you can specify a specific port number:
If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that’s inside the range configured for NodePort use.
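That also answers the deleted-Service concern from the question: if the nodePort is pinned explicitly, recreating the Service restores the same port. A hedged sketch using the fabric8 kubernetes-client (an assumed dependency; names and port are illustrative):

    // Sketch: create a NodePort Service with an explicitly pinned port.
    // The port must lie in the cluster's NodePort range (30000-32767 by default).
    import io.fabric8.kubernetes.api.model.IntOrString;
    import io.fabric8.kubernetes.api.model.Service;
    import io.fabric8.kubernetes.api.model.ServiceBuilder;
    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientBuilder;

    public class PinnedNodePort {
        public static void main(String[] args) {
            Service service = new ServiceBuilder()
                    .withNewMetadata().withName("my-app-1").endMetadata()
                    .withNewSpec()
                        .withType("NodePort")
                        .addToSelector("app", "my-app-1")
                        .addNewPort()
                            .withPort(80)
                            .withTargetPort(new IntOrString(8080))
                            .withNodePort(30080) // pinned: survives delete/recreate
                        .endPort()
                    .endSpec()
                    .build();

            try (KubernetesClient client = new KubernetesClientBuilder().build()) {
                client.services().inNamespace("default").createOrReplace(service);
            }
        }
    }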
LoadBalancer gives you more flexibility, because you don't need to know all the node IPs; you just need to know the service IP and port. But LoadBalancer is only supported by cloud providers; if you want to implement it in a bare-metal cluster, I recommend you take a look at MetalLB.
Finally, there is another option: Ingress. In my view it is the best way to expose HTTP applications externally, because you can create rules by path and host, and it gives you much more flexibility than Services. But only HTTP/HTTPS is supported; if you need TCP, go with Services.
I'd recommend you take a look at these links to understand in depth how Services and Ingress work:
Kubernetes Services
Kubernetes Ingress
NGINX Ingress
Your design is pretty close to the Istio multicluster example.
By following the steps in the Istio multicluster lab you'll get two clusters with one Istio control plane that balance the traffic between two ClusterIP Services located in two independent Kubernetes clusters.
The lab's configuration watches the traffic load, but by rewriting the controller pod code you can configure it to switch the traffic to the second cluster if the first cluster's Service has no endpoints (i.e., all pods of the given type are not in the Ready state).
It's just an example; you can change the istiowatcher.go code to implement any logic you want.
There is a more advanced solution using Admiral as an Istio multicluster management automation tool.
Admiral provides automatic configuration for an Istio mesh spanning multiple clusters to work as a single mesh based on a unique service identifier that associates workloads running on multiple clusters to a service. It also provides automatic provisioning and syncing of Istio configuration across clusters. This removes the burden on developers and mesh operators, which helps scale beyond a few clusters.
This solution solves these key requirements for modern Kubernetes infrastructure:
Creation of service DNS entries decoupled from the namespace, as described in Features of multi-mesh deployments.
Service discovery across many clusters.
Supporting active-active & HA/DR deployments. We also had to support these crucial resiliency patterns with services being deployed in globally unique namespaces across discrete clusters.
This solution may become very useful at full scale.
Use Ingress for inter-cluster communication, and use a ClusterIP-type Service for intra-cluster communication between two microservices.

Spring Cloud Eureka with Non-JVM Language (PHP) / Discovering Services Using Eureka REST Endpoints

I'm using Spring Eureka as the discovery server in my application, which is implemented using a microservices architecture. The services are mostly written in PHP; they register themselves on start-up using Eureka REST endpoints, each one sends a heartbeat every 30 seconds, and everything works well.
Now, imagine service A wants to talk to service B. How does the discovery happen?
Currently I think service A should send a GET request to http://localhost:8761/eureka/apps/service-B endpoint, retrieve the list of current instances of service B and choose between them. Is it the right approach?
What about load-balancing? Should I implement that in my services to ask for a different instance every time? Or choose between them randomly?
Any help would be greatly appreciated.
Update: Take a look at this library.
There is an easy way to do this with Spring Cloud Netflix Sidecar: http://cloud.spring.io/spring-cloud-static/Camden.SR7/#_polyglot_support_with_sidecar
If you want to continue implementing this yourself, you have several options. With client-side load balancing you could retrieve all instances from Eureka and then choose one randomly on the consuming side. If you want server-side load balancing you will need an extra component, like Zuul, and let it do the load balancing and routing. Zuul uses the Eureka configuration, so it is easy to integrate.
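For the do-it-yourself route, here is a hedged Java sketch of the client-side option (a PHP client would follow the same shape): fetch the instance list from Eureka's REST API and pick one at random. Jackson is assumed for JSON parsing, and the sketch assumes more than one registered instance, so "instance" comes back as a JSON array:

    // Query /eureka/apps/SERVICE-B and pick a random UP instance.
    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class EurekaLookup {
        public static void main(String[] args) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8761/eureka/apps/SERVICE-B"))
                    .header("Accept", "application/json") // Eureka defaults to XML
                    .build();
            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Collect the homePageUrl of every instance whose status is UP.
            JsonNode instances = new ObjectMapper().readTree(body)
                    .path("application").path("instance");
            List<String> urls = new ArrayList<>();
            for (JsonNode instance : instances) {
                if ("UP".equals(instance.path("status").asText())) {
                    urls.add(instance.path("homePageUrl").asText());
                }
            }

            // Naive client-side load balancing: choose randomly per call.
            String target = urls.get(new Random().nextInt(urls.size()));
            System.out.println("Calling instance at " + target);
        }
    }

In practice you would cache the instance list and refresh it periodically rather than hit the registry on every call.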

Design considerations for J2EE webapp on Tomcat in Amazon Web Services

My project is looking to deploy a new J2EE application to Amazon's cloud. Elastic Beanstalk supports Tomcat apps, which seems perfect. Are there any particular design considerations to keep in mind when writing such an app that might differ from just a standalone Tomcat on a server?
For example, I understand that the server is meant to scale automatically. Is this like a cluster? Our application framework tends to like to stick state in the HttpSession, is that a problem? Or when it says it scales automatically, does that just mean memory and CPU?
Automatic scaling on AWS is done by adding more servers, not adding more CPU/RAM. You can add more CPU/RAM manually, but it requires shutting down the server for a minute to make the change, and then configuring any software running on the server to take advantage of the added RAM, so that's not how automatic scaling is done.
Elastic Beanstalk is basically a management interface for Amazon EC2 servers, Elastic Load Balancers and Auto Scaling Groups. It sets all that up for you and provides a convenient way of deploying new versions of your application easily. Elastic Beanstalk will create EC2 servers behind an Elastic Load Balancer and use an Auto Scaling configuration to add more servers as your application load increases. It handles adding the servers to the load balancer when they are ready to receive traffic, and removing them from the load balancer and deleting the extra servers when they are no longer needed.
For your Java application running on Tomcat you have a few options to handle horizontal scaling well. You can enable sticky sessions on the Load Balancer so that all requests from a specific user will go to the same server, thus keeping the HttpSession tied to the user. The main problem with this is that if a server is removed from the pool you may lose some HttpSessions and cause any users that were "stuck" to that server to be logged out of your application. The solution to this is to configure your Tomcat instances to store sessions in a shared location. There are Tomcat session store implementations out there that work with AWS services like ElastiCache (Redis) and DynamoDB. I would recommend using one of those, probably the Redis implementation if you aren't already familiar with DynamoDB.
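If the application happens to use Spring, one hedged illustration of the shared-session approach is Spring Session backed by ElastiCache (Redis); the container's HttpSession is then stored in Redis, so any instance can serve any request:

    // Sketch, assuming Spring Boot with spring-session-data-redis on the
    // classpath; the Redis (ElastiCache) endpoint comes from the
    // spring.redis.* properties and is auto-configured.
    import org.springframework.context.annotation.Configuration;
    import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

    @Configuration
    @EnableRedisHttpSession
    public class SessionConfig {
        // No further code needed: the annotation replaces the container's
        // session storage with a Redis-backed implementation.
    }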
Another consideration for moving a Java application to AWS is that you cannot use any tools or libraries that rely on multi-cast. You may not be using multi-cast for anything, but in my experience every Java app I've had to migrate to AWS relied on multi-cast for clustering and I had to modify it to use a different clustering method.
Also, for a successful migration to AWS I suggest you read up a bit on VPCs, private IP versus public IP, and Security Groups. A solid understanding of those topics is key to setting up your network so that your web servers can communicate with your DB and cache servers in a secure and performant manner.

Inter-app communication within application server without MQ

I'm looking at exposing separate services inside an application server, and all services need to authenticate with the same API key.
Rather than each request authenticating with the DB individually, I was hoping I could write the authentication service and configuration once, do some caching of the available API keys, and expose that auth service to the other services on the app server (TC, Glassfish, etc). I don't think HTTP loopback is a good choice, so I was looking at Spring Integration, JavaEE, RMI, etc.
There's lots of info available, but after reading through some documentation and projects it's still not clear to me whether this is something Spring Integration can support. It looks like Spring assumes you're in-app or MQ-based (external MQ or embedded MQ). I'm also not sure if this is something inherently available in EJB implementations with JBoss or Glassfish... It seems like it might be, though.
While MQs seem possible, they seem like overkill for my purpose. I really just need to pass a bean to my authentication service on the same box and respond with a bean/boolean on whether the key was approved or not.
Anyone have some guidance on accomplishing something like this? (or maybe why I'm making the wrong decision?)
You can do it via plain TCP/IP or RMI.
But I see no problem with following microservice architecture principles and using Spring Integration's REST capability.
Any network access can always be restricted via firewalls and HTTP proxies.
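For the plain-RMI route, a minimal sketch (interface and names are made up for illustration): the auth service exports a remote interface, and other apps on the same box look it up and get a boolean back:

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    interface AuthService extends Remote {
        boolean isApiKeyValid(String apiKey) throws RemoteException;
    }

    public class AuthServer implements AuthService {
        @Override
        public boolean isApiKeyValid(String apiKey) {
            // A real implementation would check the cached key set here.
            return "valid-key".equals(apiKey);
        }

        public static void main(String[] args) throws Exception {
            AuthService stub =
                    (AuthService) UnicastRemoteObject.exportObject(new AuthServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("AuthService", stub);
            // Clients on the same box would call:
            //   AuthService auth = (AuthService) LocateRegistry
            //           .getRegistry("localhost", 1099).lookup("AuthService");
            //   boolean ok = auth.isApiKeyValid(key);
        }
    }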

How to run multiple Tomcats against the same database with load balancing

Please suggest different ways of achieving load balancing on the database while more than one Tomcat instance is accessing the same database.
Thanks.
This is a detailed example of using multiple Tomcat instances and an Apache-based load-balancing setup.
Note: if you have hardware that can do the load balancing, to me that is even preferable (place it instead of Apache).
In short it works like this:
A request comes from some client to the Apache web server/hardware load balancer
the web server determines to which node it wants to redirect the request for further processing
the web server calls Tomcat and Tomcat gets the request
Tomcat processes the request and sends the response back.
Regarding the database:
- Tomcat itself has nothing to do with your database; it's your application that talks to the DB, not Tomcat.
Regardless of your application layer, you can establish a cluster of database servers (for example, google Oracle RAC, but that's an entirely different story).
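As one hedged illustration of driver-level balancing (host names are made up; requires MySQL Connector/J on the classpath): some JDBC drivers can spread connections across several database nodes themselves, so every Tomcat instance can use the same URL:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class BalancedJdbcExample {
        public static void main(String[] args) throws Exception {
            // Connector/J's loadbalance scheme distributes connections
            // across the listed hosts transparently to the application.
            String url = "jdbc:mysql:loadbalance://db-node-1:3306,db-node-2:3306/appdb";
            try (Connection conn = DriverManager.getConnection(url, "app", "secret")) {
                System.out.println("Connected via " + conn.getMetaData().getURL());
            }
        }
    }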
In general, when implementing application-layer load balancing, please note that the common state of the application must be replicated.
The technique called "sticky sessions" partially handles the issue, but in general you should be aware of it.
Hope this helps
