How to perform a failover with Netflix/Eureka? - java

I use Eureka for service discovery and as a load balancer, and it works fine when I have two instances of a service "A". However, when I stop one of these instances, Eureka doesn't recognize that the instance is down, and I keep getting an error page every time the load balancer tries to use the dead instance.
I have set enableSelfPreservation to false to prevent this, but it still takes Eureka 3 to 5 minutes to unregister the dead instance. I want high availability for my service, and I want the failover to happen within seconds. Is this possible with Eureka? If not, how can I make sure that only the live instances are used when the others are down?
I am using Spring Boot; here is my configuration for the Eureka server:
server:
  port: 8761
eureka:
  instance:
    hostname: localhost
  client:
    registerWithEureka: false
    fetchRegistry: false
    serviceUrl:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/
  server:
    enableSelfPreservation: false

You should add a Ribbon configuration to your application.yml. It is also recommended to set the Hystrix isolation level to THREAD, with a timeout configured.
Note: this configuration belongs on the client side (which usually means your gateway server), since Ribbon (and Spring Cloud in general) performs client-side load balancing.
Here's an example that I use:
hystrix:
  command:
    default:
      execution:
        isolation:
          strategy: THREAD
          thread:
            timeoutInMilliseconds: 40000 # time out after this many milliseconds
ribbon:
  ConnectTimeout: 5000 # try to connect to the endpoint for 5 seconds
  ReadTimeout: 5000    # wait up to 5 seconds for a response after a successful connection
  # Max number of retries on the same server (excluding the first try)
  MaxAutoRetries: 1
  # Max number of next servers to retry (excluding the first server)
  MaxAutoRetriesNextServer: 2
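The retries above only mask a dead instance for a single request; the stale registry entry itself still lingers. If you also want the dead instance to drop out of the client's view faster, the usual knobs are the Eureka lease and refresh intervals. A minimal sketch with illustrative values only (intervals this low are generally discouraged outside of development), for the application.yml of the gateway and of the service instances:
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 5        # each service instance sends a heartbeat every 5s instead of the default 30s
    leaseExpirationDurationInSeconds: 10    # the lease may expire 10s after the last heartbeat
                                            # (actual eviction also depends on the server's eviction task interval)
  client:
    registryFetchIntervalSeconds: 5         # the gateway refreshes its local registry copy every 5s
ribbon:
  ServerListRefreshInterval: 5000           # Ribbon re-reads the refreshed registry every 5s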

Related

Not able to run two instances of the Authentication service with Zuul

I am working with Eureka/Zuul and Spring Boot microservices.
To map multiple instances of the same application to the Zuul gateway, I am using the serviceId attribute.
Here is the application.yml of my Zuul project:
server:
  port: 8093
  servlet:
    context-path: /apigateway
spring:
  application:
    name: zuul-proxy
zuul:
  sensitiveHeaders: Cookie,Set-Cookie
  host:
    socketTimeoutMillis: 60000
  routes:
    authenticator-oauth:
      path: /oauth/**
      url: http://localhost:8092/authenticator/oauth
    sample-resource-server:
      path: /sample/**
      serviceId: sample
      stripPrefix: false
sample:
  ribbon:
    NIWSServerListClassName: com.netflix.loadbalancer.ConfigurationBasedServerList
    listOfServers: http://localhost:8096,http://localhost:8097
    ConnectTimeout: 60000
    ReadTimeout: 60000
    MaxTotalHttpConnections: 500
    MaxConnectionsPerHost: 100
authenticator:
  ribbon:
    NIWSServerListClassName: com.netflix.loadbalancer.ConfigurationBasedServerList
    listOfServers: http://localhost:8092, http://localhost:8091
    ConnectTimeout: 1000
    ReadTimeout: 5000
    MaxTotalHttpConnections: 500
    MaxConnectionsPerHost: 100
eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8094/eureka/
  instance:
    preferIpAddress: true
I am able to run my authenticator application.
I am also able to run my sample application on both ports (8096 and 8097) with load balancing.
As you can see, I have mapped the authenticator service using a URL, and it works fine:
url: http://localhost:8092/authenticator/oauth
But when I map it with a serviceId, as in the snippet below, Zuul is not able to redirect to the actual authentication service URL:
authenticator-oauth:
  path: /authenticator/**
  serviceId: authenticator
  stripPrefix: false
In both cases I am hitting the same URL:
http://localhost:8093/apigateway/oauth/token
Why does the authentication service (the oauth/token endpoint) behave differently compared to a normal application?
Any help will be appreciated.
In order to register multiple instances of the same microservice, you don't have to change anything in the Zuul API gateway.
Add a property in the microservice itself for which you are creating multiple instances:
server:
  port: ${PORT:0}
eureka:
  instance:
    instance-id: ${spring.application.name}:${spring.application.instance_id:${random.value}}
Every time you start the same service, it will start on a unique port.
It will register itself with the Eureka discovery service.
Load balancing will be done automatically, since Ribbon comes built in with Zuul.
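One more observation on the failing serviceId mapping in the question: its path is /authenticator/**, which never matches the request /apigateway/oauth/token, so Zuul has no route to apply there. Below is a hedged sketch of a serviceId-based route that matches the URL actually being requested; it assumes the authentication service registers in Eureka under the name authenticator and serves /oauth/** at its root (note that, unlike the url-based route, Zuul will not re-add a /authenticator context path when forwarding):
zuul:
  routes:
    authenticator-oauth:
      path: /oauth/**            # matches /apigateway/oauth/token after the gateway's context path
      serviceId: authenticator   # the spring.application.name the service registers with in Eureka
      stripPrefix: false         # keep the /oauth/... prefix when forwarding to the service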

Unable to deregister service (Spring Boot app) from Consul

Unable to deregister the service from Consul.
The Consul documentation says that the service will be deregistered automatically, but in my case it does not work as described.
https://www.consul.io/docs/agent/basics.html
Referring to the Consul lifecycle documentation, it says:
To prevent an accumulation of dead nodes (nodes in either failed or left states), Consul will automatically remove dead nodes out of the catalog. This process is called reaping. This is currently done on a configurable interval of 72 hours (changing the reap interval is not recommended due to its consequences during outage situations). Reaping is similar to leaving, causing all associated services to be deregistered.
This is my bootstrap.yml file
server:
  port: 8089
spring:
  application:
    name: ***-service
  cloud:
    consul:
      host: consul-ui
      port: 8500
      discovery:
        deregister: true
        instance-id: ${spring.application.name}:${random.value}
        enabled: true
        register: true
        health-check-interval: 20s
        prefer-ip-address: true
      config:
        enabled: true
        prefix: configuration
        defaultContext: shared
        format: YAML
        data-key: data
        watch:
          enabled: true
endpoints:
  shutdown:
    enabled: true
After deleting the service with the purge command, it still shows up in the Consul UI, which means it has not been deregistered from Consul.
You need to configure this on the Consul side, as your app does not seem to exit gracefully.
Check out the Consul check property deregister_critical_service_after:
In Consul 0.7 and later, checks that are associated with a service may also contain an optional deregister_critical_service_after field, which is a timeout in the same Go time format as interval and ttl. If a check is in the critical state for more than this configured value, then its associated service (and all of its associated checks) will automatically be deregistered. The minimum timeout is 1 minute, and the process that reaps critical services runs every 30 seconds, so it may take slightly longer than the configured timeout to trigger the deregistration. This should generally be configured with a timeout that's much, much longer than any expected recoverable outage for the given service.
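If the service is registered through Spring Cloud Consul, this check timeout can be set from the discovery properties rather than in Consul's own configuration files. A sketch, assuming a Spring Cloud Consul version that exposes health-check-critical-timeout (which maps to DeregisterCriticalServiceAfter on the registered check):
spring:
  cloud:
    consul:
      discovery:
        health-check-critical-timeout: 3m   # deregister the service if its check stays critical for 3 minutes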
That documentation is about Consul nodes, not service nodes.
How exactly are you terminating the application?

ZuulException Forwarding error, ClientException null

I am facing the ZuulException below due to a SocketTimeoutException (Read timed out). I am trying to put my OAuth2 server behind the Zuul proxy.
Please see the log trace here, the gateway's application.yml entries here, and the application dependencies here. I am not using Hystrix or Eureka explicitly.
This issue is intermittent; sometimes it works and sometimes it doesn't. Has anyone faced this before?
Everything works well except the API gateway.
Try defining the properties below. It seems that you're using Zuul with Eureka; in that case, RibbonRoutingFilter is used instead of SimpleHostRoutingFilter, so you need to define ReadTimeout and ConnectTimeout for Ribbon instead of the zuul.host properties.
ribbon:
  ReadTimeout: 10000
  ConnectTimeout: 10000
oauth2:
  ribbon:
    ReadTimeout: 10000
    ConnectTimeout: 10000
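When Zuul routes through Ribbon it also wraps the call in a Hystrix command, whose timeout can trip before the Ribbon timeouts above are reached. A sketch of a matching Hystrix timeout (the value is illustrative; it should be at least ConnectTimeout plus ReadTimeout, multiplied by the number of attempts if retries are enabled):
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 25000   # comfortably above the Ribbon timeouts above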

Eureka detect service status

Context
We are using Spring Cloud Netflix with Eureka for service discovery and Zuul for proxying and load balancing the services.
The microservices are implemented in NodeJS and are registered with Eureka using the NPM module eureka-js-client, plus a custom layer in between that handles the configuration and other things that are generic to all microservices.
Problem
The problem is that Eureka does not recognize when a service goes down. This is an issue because our development infrastructure uses auto-deployment, which redeploys and restarts the microservices on different ports each time without restarting Eureka (and Zuul).
As a result, after a while we have ten or more instances of one microservice, of which only one is up, but all of them are reported as UP.
Solution Approach
I tried lowering the heartbeatInterval on the client, but that does not help.
I tried lowering the renewalThresholdUpdateIntervalMs on the server, but that does not help either.
Many more frustrating, unhelpful property experiments followed…
Question
How do I configure Eureka to evict instances, or to mark them as DOWN, when they have not sent a heartbeat within a reasonable time (not 30 minutes or so)?
Code Snippets
The server itself does not contain any code worth mentioning (just a few annotations to start the Eureka server using the Spring Cloud starter).
The configuration of the Eureka server (I have removed all the non-working attempts):
server:
  port: 8761
spring:
  cloud:
    client:
      hostname: localhost
eureka:
  instance:
    address: 127.0.0.1
    hostname: ${spring.cloud.client.hostname}
The client configuration that is sent to the server (using eureka-js-client):
{
  instance : {
    instanceId : `${CONFIG.instance.address}:${CONFIG.instance.name}:${CONFIG.instance.port}`,
    app : CONFIG.instance.name,
    hostName : CONFIG.instance.host,
    ipAddr : CONFIG.instance.address,
    port : {
      '$' : CONFIG.instance.port,
      '@enabled' : true
    },
    homePageUrl : `http://${CONFIG.instance.host}:${CONFIG.instance.port}/`,
    statusPageUrl : `http://${CONFIG.instance.host}:${CONFIG.instance.port}/info`,
    healthCheckUrl : `http://${CONFIG.instance.host}:${CONFIG.instance.port}/health`,
    vipAddress : CONFIG.instance.name,
    secureVipAddress : CONFIG.instance.name,
    dataCenterInfo : {
      '@class' : 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
      name : 'MyOwn'
    }
  },
  eureka : {
    host : CONFIG.eureka.host,
    port : CONFIG.eureka.port,
    servicePath : CONFIG.eureka.servicePath || '/eureka/apps/',
    healthCheckInterval : 5000
  }
}
after a while we have ten or more instances of one microservice, of which only one is up, but all of them are reported as UP
Eureka has a 'self-preservation' mode: if fewer than 85% of the registered instances are sending heartbeats, it stops evicting any instances at all. You should see a warning about this on the Eureka dashboard.
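If you accept the trade-off, self-preservation can be switched off on the Eureka server and the eviction task run more often, so that instances that stop sending heartbeats are removed much sooner. A sketch for the server's application.yml (illustrative values; disabling self-preservation is discouraged for production):
eureka:
  server:
    enableSelfPreservation: false        # keep evicting even when many heartbeats go missing
    evictionIntervalTimerInMs: 10000     # run the eviction task every 10s (default 60s)
Note that how quickly a lease actually expires also depends on the lease duration each client registers with, so the eureka-js-client side may need tuning as well.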

Register multiple Instances of a Spring Boot Eureka Client from a single host

UPDATE
The README in this repo has been updated to demonstrate the solution in the accepted answer.
I'm working with a simple example of a Spring Boot Eureka service registration and discovery based on this guide.
If I start up one client instance, it registers properly, and it can see itself through the DiscoveryClient. If I start up a second instance with a different name, it works as well.
But if I start up two instances with the same name, the dashboard only shows 1 instance running, and the DiscoveryClient only shows the second instance.
When I kill the 2nd instance, the 1st one is visible again through the dashboard and the discovery client.
Here are some more details about the steps I'm taking and what I'm seeing:
Eureka Server
Start the server
cd eureka-server
mvn spring-boot:run
Visit the Eureka dashboard at http://localhost:8761
Note that there are no 'Instances' yet registered
Eureka Client
Start up a client
cd eureka-client
mvn spring-boot:run
Visit the client directly at http://localhost:8080/
The /whoami endpoint will show the client's self-knowledge of its application name and port
{
"springApplicationName":"eureka-client",
"serverPort":"8080"
}
The /instances endpoint will take up to a minute to update, but should eventually show all the instances of eureka-client that have been registered with the Eureka Discovery Client.
[
{
"host":"hostname",
"port":8080,
"serviceId":"EUREKA-CLIENT",
"uri":"http://hostname:8080",
"secure":false
}
]
You can also visit the Eureka dashboard again now and see it listed there.
Spin up another client with a different name
You can see that another client will be registered by doing the following:
cd eureka-client
mvn spring-boot:run -Dspring.application.name=foo -Dserver.port=8081
The /whoami endpoint will show the name foo and the port 8081.
In a minute or so, the /instances endpoint will show the information about this foo instance too.
On the Eureka dashboard, two clients will now be registered.
Spin up another client with the same name
Now try spinning up another instance of eureka-client by only overriding the port parameter:
cd eureka-client
mvn spring-boot:run -Dserver.port=8082
The /whoami endpoint for http://localhost:8082 shows what we expect.
In a minute or so, the /instances endpoint now shows the instance running on port 8082 also, but for some reason, it doesn't show the instance running on port 8080.
And if we check the /instances endpoint on http://localhost:8080, we also now only see the instance running on 8082 (even though, clearly, the one on 8080 is running, since that's the one we're asking).
The Eureka dashboard only shows 1 instance of eureka-client running.
What's going on here?
Let's try killing the instance running on 8082 and see what happens.
When we query /instances on 8080, it still only shows the instance on 8082.
But a minute later, that goes away and we just see the instance on 8080 again.
The question is, why don't we see both instances of eureka-client when they are both running?
For local deployments, try configuring the {namespace}.instanceId property in eureka-client.properties (or eureka.instance.metadataMap.instanceId in the corresponding YAML file for a Spring Cloud based setup). This is deeply rooted in the way the Eureka server builds its application lists and compares InstanceInfo objects in PeerAwareInstanceRegistryImpl: when no more specific data (e.g. instance metadata) is available, it tries to derive the id from the hostname.
I wouldn't recommend this for AWS deployments though, because messing around with instanceId will make it hard to figure out which machine hosts a particular service. On the other hand, I doubt that you'll host two identical services on one machine, right?
You can get all instances to show up in the admin portal by setting a unique eureka.instance.hostname in your Eureka configuration file.
The hostname is used as the key for storing the InstanceInfo in com.netflix.discovery.shared.Application (since no UniqueIdentifier is set), so you have to use unique hostnames. If you test Ribbon in this scenario, you will see that the load is not balanced.
The following application.yml is an example:
server:
  port: ${PORT:0}
info:
  component: example.server
logging:
  level:
    com.netflix.discovery: 'OFF'
    org.springframework.cloud: 'DEBUG'
eureka:
  instance:
    leaseRenewalIntervalInSeconds: 1
    leaseExpirationDurationInSeconds: 1
    metadataMap:
      instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
    instanceId: ${spring.application.name}:${spring.application.instance_id:${random.value}}
This was caused by a bug in an earlier version of Eureka; you can find more information at https://github.com/codecentric/spring-boot-admin/issues/134
