I have two Spring Boot microservices: one is the Eureka server and the other is the Gateway.
I can't find the right configuration to make the Gateway register with the Eureka server.
This is the eureka.yml with the K8s configuration:
Eureka.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: eureka-cm
data:
  eureka_service_address: http://eureka-0.eureka:8761/eureka
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  ports:
    - port: 8761
      name: eureka
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
        - name: eureka
          image: myrepo/eureka1.0:eureka
          imagePullPolicy: Always
          ports:
            - containerPort: 8761
          env:
            - name: EUREKA_SERVER_ADDRESS
              valueFrom:
                configMapKeyRef:
                  name: eureka-cm
                  key: eureka_service_address
---
apiVersion: v1
kind: Service
metadata:
  name: eureka-lb
  labels:
    app: eureka
spec:
  selector:
    app: eureka
  type: NodePort
  ports:
    - port: 80
      targetPort: 8761
Eureka.application.yml
spring:
  application:
    name: eureka
server:
  port: 8761
eureka:
  instance:
    hostname: "${HOSTNAME}.eureka"
  client:
    register-with-eureka: false
    fetch-registry: false
    serviceUrl:
      defaultZone: ${EUREKA_SERVER_ADDRESS}
Gateway.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-gateway-app
  labels:
    app: cloud-gateway-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloud-gateway-app
  template:
    metadata:
      labels:
        app: cloud-gateway-app
    spec:
      containers:
        - name: cloud-gateway-app
          image: myrepo/gateway1.0:gateway
          imagePullPolicy: Always
          ports:
            - containerPort: 9191
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-gateway-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 9191
      protocol: TCP
  selector:
    app: cloud-gateway-app
Gateway.application.yml
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://eureka-0.eureka.default.svc.cluster.local:8761/eureka
This is the error I got when I check the logs of the Gateway's pod:
error on POST request for "http://eureka-0.eureka.default.svc.cluster.local:8761/eureka/apps/API-GATEWAY": eureka-0.eureka.default.svc.cluster.local; nested exception is java.net.UnknownHostException: eureka-0.eureka.default.svc.cluster.local
Following the documentation, I've tried to set the defaultZone property of the Gateway.application.properties file following this pattern:
172-17-0-3.default.pod.cluster.local:8761/eureka
But this way too, the Gateway can't register with the Eureka server.
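To rule out DNS as the cause of the UnknownHostException, a throwaway pod can try to resolve the name from inside the cluster. A minimal sketch (the pod name dns-test is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: dns-test   # throwaway debug pod; delete it afterwards
spec:
  containers:
    - name: dns-test
      image: busybox
      command: ['nslookup', 'eureka-0.eureka.default.svc.cluster.local']
  restartPolicy: Never

kubectl logs dns-test then shows whether the FQDN resolves; if only the short form eureka-0.eureka works, the cluster's DNS suffix may not be cluster.local.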
I resolved it by modifying the Gateway.application.yml in this way:
eureka:
  instance:
    preferIpAddress: true
    hostname: eureka-0
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://eureka-0.eureka:8761/eureka/
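As a follow-up, the URL no longer needs to be hard-coded: the same eureka-cm ConfigMap the server already uses can feed the Gateway through Spring's relaxed binding of eureka.client.serviceUrl.defaultZone. A sketch of the extra env entry for the Gateway container:

# added under the cloud-gateway-app container in Gateway.yml
env:
  - name: EUREKA_CLIENT_SERVICEURL_DEFAULTZONE
    valueFrom:
      configMapKeyRef:
        name: eureka-cm
        key: eureka_service_address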
EDIT:
I'm now running into problems registering other microservices with the Eureka server.
I've tried increasing the replicas of the Eureka server and making each microservice register with a dedicated replica, but so far this is not working.
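For reference, the usual pattern with multiple replicas is to make them peer-aware and give every client the full peer list, rather than pinning each microservice to a dedicated replica. A sketch for replicas: 2, assuming the pod names from the StatefulSet above:

# Eureka server application.yml for a two-replica, peer-aware setup (sketch)
eureka:
  client:
    register-with-eureka: true   # each replica registers with its peer
    fetch-registry: true
    serviceUrl:
      defaultZone: http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/

Clients would then use the same comma-separated defaultZone so they can fail over between replicas.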
Related
I have a Cassandra cluster and a Spring Boot application in my Kubernetes cluster. They are in the same (default) namespace. The Spring Boot application needs to connect to Cassandra but is unable to do so. During the connection attempt, the application receives the exception below:
Suppressed: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: cassandra/10.111.117.185:32532
Caused by: java.net.ConnectException: Connection refused
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
    at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
    at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:710)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: io.netty.channel.StacklessClosedChannelException: null
Cassandra yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  type: NodePort
  ports:
    - port: 9042
      targetPort: 9042
      protocol: TCP
      nodePort: 32532
  selector:
    app: cassandra
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra
          image: cassandra:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra-0.cassandra.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "K8SCassandra"
            - name: CASSANDRA_DC
              value: "DC1-K8SCassandra"
            - name: CASSANDRA_RACK
              value: "Rack1-K8SCassandra"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - name: cassandra-data
              mountPath: /cassandra_data
  volumeClaimTemplates:
    - metadata:
        name: cassandra-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: standard
        resources:
          requests:
            storage: 1Gi
Spring boot application yaml:
apiVersion: v1
kind: Service
metadata:
  name: service-cassandraapp
  labels:
    app: cassandraapp
spec:
  selector:
    app: cassandraapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 32588
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-cassandraapp
  labels:
    app: cassandraapp
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: cassandraapp
  template:
    metadata:
      labels:
        app: cassandraapp
    spec:
      containers:
        - name: cassandraapp
          image: ek/cassandraapp:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: CONFIG_CASSANDRA_HOST
              value: "cassandra"
            - name: CONFIG_CASSANDRA_PORT
              value: "32532"
Spring boot application.properties:
spring.data.cassandra.local-datacenter=datacenter1
spring.data.cassandra.keyspace-name=testkeyspace
spring.data.cassandra.port=${CONFIG_CASSANDRA_PORT}
spring.data.cassandra.contact-points=${CONFIG_CASSANDRA_HOST}
spring.data.cassandra.username=cassandra
spring.data.cassandra.password=cassandra
spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS
When I check the Cassandra pods, they are in the Running state, but the Spring Boot application's connection is refused. Any help would be greatly appreciated.
I solved the problem. I assumed that spring.data.cassandra.schema-action=CREATE_IF_NOT_EXISTS would create the keyspace, but it doesn't: schema-action only creates tables and user-defined types inside an existing keyspace. So I created the keyspace manually.
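For the manual step, a one-off Job can run cqlsh against the Service. This is only a sketch: the credentials mirror application.properties, and the replication settings are assumptions, so adjust them to your cluster. Note that the updated ConfigMap below also targets port 9042, the Service's in-cluster port, rather than the NodePort 32532, which is only reachable via the nodes' own addresses:

apiVersion: batch/v1
kind: Job
metadata:
  name: create-keyspace   # illustrative name
spec:
  template:
    spec:
      containers:
        - name: cqlsh
          image: cassandra:latest   # reuses the cluster image for its cqlsh binary
          command:
            - cqlsh
            - cassandra.default.svc.cluster.local
            - "9042"
            - -u
            - cassandra   # credentials as in application.properties; drop -u/-p if auth is disabled
            - -p
            - cassandra
            - -e
            - >-
              CREATE KEYSPACE IF NOT EXISTS testkeyspace
              WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};
      restartPolicy: Never

The Job may need a retry or two while the Cassandra cluster finishes starting; the default backoffLimit covers that.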
My updated springboot application yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-cassandra-cassandraapp
data:
  cassandra-host: "cassandra.default.svc.cluster.local"
  cassandra-port: "9042"
  cassandra-keyspace: "testkeyspace"
  cassandra-datacenter: "datacenter1"
---
apiVersion: v1
kind: Service
metadata:
  name: service-cassandraapp
  labels:
    app: cassandraapp
spec:
  selector:
    app: cassandraapp
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-cassandraapp
  labels:
    app: cassandraapp
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: cassandraapp
  template:
    metadata:
      labels:
        app: cassandraapp
    spec:
      containers:
        - name: cassandraapp
          image: ek/cassandraapp:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              memory: "1Gi"
              cpu: "1000m"
            requests:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: CONFIG_CASSANDRA_HOST
              valueFrom:
                configMapKeyRef:
                  name: configmap-cassandra-cassandraapp
                  key: cassandra-host
            - name: CONFIG_CASSANDRA_PORT
              valueFrom:
                configMapKeyRef:
                  name: configmap-cassandra-cassandraapp
                  key: cassandra-port
            - name: CONFIG_CASSANDRA_KEYSPACE
              valueFrom:
                configMapKeyRef:
                  name: configmap-cassandra-cassandraapp
                  key: cassandra-keyspace
            - name: CONFIG_CASSANDRA_DATACENTER
              valueFrom:
                configMapKeyRef:
                  name: configmap-cassandra-cassandraapp
                  key: cassandra-datacenter
Springboot application.properties
spring.data.cassandra.local-datacenter=${CONFIG_CASSANDRA_DATACENTER}
spring.data.cassandra.keyspace-name=${CONFIG_CASSANDRA_KEYSPACE}
spring.data.cassandra.port=${CONFIG_CASSANDRA_PORT}
spring.data.cassandra.contact-points=${CONFIG_CASSANDRA_HOST}
And lastly, I also tried the Bitnami Helm chart for Cassandra, which I can recommend as well: https://bitnami.com/stack/cassandra/helm
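If you go that route, a minimal values file might look like the sketch below; the parameter names (replicaCount, dbUser, persistence) are from memory and may differ across chart versions, so check the chart's README:

# values.yaml sketch for bitnami/cassandra (parameter names are assumptions)
replicaCount: 3
dbUser:
  user: cassandra
  password: cassandra
persistence:
  size: 1Gi

Installed with helm repo add bitnami https://charts.bitnami.com/bitnami followed by helm install cassandra bitnami/cassandra -f values.yaml.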
I'm trying to configure the Eureka port with Spring Cloud: a Eureka server plus a config server (which is also a Eureka client). The Eureka service is successfully deployed and has an IP address. My goal is to get the instance listed under the Eureka server.
I'm getting the error below:
2022-05-24 12:26:10.914 ERROR 21280 --- [freshExecutor-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://localhost:8761/eureka/}
2022-05-24 12:03:09.673 WARN 21280 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
This is my application.yml file from the test-service
spring:
  datasource:
    url: jdbc:oracle:thin:@//[ip address]/[address]
    username: *******
    password: *******
    driver-class-name: oracle.jdbc.OracleDriver
  profiles:
    active: '@activatedProperties@'
  jpa:
    database-platform: org.hibernate.dialect.Oracle12cDialect
    hibernate:
      use-new-id-generator-mappings: false
      ddl-auto: update
  application:
    name: test-service
eureka:
  instance:
    preferIpAddress: 'true'
  client:
    fetchRegistry: 'true'
    registerWithEureka: 'true'
    enabled: 'true'
    service-url:
      defaultZone: http://[username]:[password]@localhost:8761/eureka
server:
  port: 8080
debug: true
This is my eureka-service application.properties file:
server.port=8761
eureka.server.enable-self-preservation = false
eureka.client.register-with-eureka=false
eureka.client.fetch-registry=false
logging.level.com.netflix.eureka=OFF
logging.level.com.netflix.discovery=OFF
spring.security.user.name=******
spring.security.user.password=*******
eureka.instance.preferIpAddress=true
I was trying out Spring Boot microservice deployment on a Kubernetes cluster using a Helm chart, but I noticed a strange issue: my Spring Boot application starts and then shuts down immediately afterwards.
Here are the logs:
Started JhooqK8sApplication in 3.431 seconds (JVM running for 4.149)
2020-06-25 20:57:24.460 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2020-06-25 20:57:24.469 INFO 1 --- [extShutdownHook] o.e.jetty.server.AbstractConnector : Stopped ServerConnector@548a102f{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2020-06-25 20:57:24.470 INFO 1 --- [extShutdownHook] org.eclipse.jetty.server.session : node0 Stopped scavenging
2020-06-25 20:57:24.474 INFO 1 --- [extShutdownHook] o.e.j.s.h.ContextHandler.application : Destroying Spring FrameworkServlet 'dispatcherServlet'
2020-06-25 20:57:24.493 INFO 1 --- [extShutdownHook] o.e.jetty.server.handler.ContextHandler : Stopped o.s.b.w.e.j.JettyEmbeddedWebAppContext@56528192{application,/,[file:///tmp/jetty-docbase.4637295322181051129.8080/],UNAVAILABLE}
Spring Boot Version : 2.2.7.RELEASE
Docker Hub Public image for spring boot : rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty
One strange thing I noticed: when I use kubectl commands manually to create the deployment and service, the Spring Boot deployment works perfectly fine.
vagrant@kmaster:~$ kubectl create deployment demo --image=rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty
vagrant@kmaster:~$ kubectl expose deployment demo --type=LoadBalancer --name=demo-service --external-ip=1.1.1.1 --port=8080
(I followed this guide for deploying Spring Boot on Kubernetes: Deploy spring boot on kubernetes cluster.)
I am just wondering: is there something wrong with Spring Boot, or with my Helm setup?
Here are my Helm templates:
---
# Source: springboot/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: RELEASE-NAME-springboot
  labels:
    helm.sh/chart: springboot-0.1.0
    app.kubernetes.io/name: springboot
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
---
# Source: springboot/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: RELEASE-NAME-springboot
  labels:
    helm.sh/chart: springboot-0.1.0
    app.kubernetes.io/name: springboot
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: springboot
    app.kubernetes.io/instance: RELEASE-NAME
---
# Source: springboot/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-springboot
  labels:
    helm.sh/chart: springboot-0.1.0
    app.kubernetes.io/name: springboot
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: springboot
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: springboot
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: RELEASE-NAME-springboot
      securityContext:
        {}
      containers:
        - name: springboot
          securityContext:
            {}
          image: "rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {}
---
# Source: springboot/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "RELEASE-NAME-springboot-test-connection"
  labels:
    helm.sh/chart: springboot-0.1.0
    app.kubernetes.io/name: springboot
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['RELEASE-NAME-springboot:80']
  restartPolicy: Never
2020-06-25 20:57:24.469 INFO 1 --- [extShutdownHook] o.e.jetty.server.AbstractConnector : Stopped ServerConnector@548a102f{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}

ports:
  - name: http
    containerPort: 80
It appears the liveness probe (configured to contact the port named http) is killing your Pod, since your container appears to be listening on :8080 but you've told Kubernetes that it's listening on :80.
Since a kubectl-created deployment carries no such port specificity, Kubernetes doesn't set up a liveness probe, which is why that path works fine.
You can usually configure the spring application via an environment variable if you want to test that theory:
containers:
  - name: springboot
    env:
      - name: SERVER_PORT
        value: '80'
      # and its friend, which is the one that
      # you should be using for liveness and readiness
      - name: MANAGEMENT_SERVER_PORT
        value: '8080'
    securityContext:
      {}
    image: "rahulwagh17/kubernetes:jhooq-k8s-springboot-jetty"
    imagePullPolicy: IfNotPresent
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
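Building on that, the probes could then point at the management port instead of the service port. A sketch assuming Spring Boot Actuator is on the classpath (its default health endpoint is /actuator/health on Boot 2.x):

livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080   # the MANAGEMENT_SERVER_PORT set above
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 10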
I have a Eureka server up and running fine and am trying to register my config client application with it, but I'm unable to register the instance with the Eureka server.
Config Client Configuration in application.yml
eureka:
  client:
    fetch-registry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/
Eureka server configuration - application.yml
server:
  port: ${PORT:8761}
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
Eureka server configuration - bootstrap.yml
spring:
  application:
    name: eureka
  cloud:
    config:
      uri: ${CONFIG_SERVER_URL:http://localhost:9898}
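For reference, my expectation is that a minimal client configuration like the sketch below should be enough to register (register-with-eureka already defaults to true, and the application name config-client is a placeholder):

spring:
  application:
    name: config-client   # without a name, the instance shows up as UNKNOWN
eureka:
  client:
    register-with-eureka: true
    fetch-registry: true
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/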
I have a few microservices:
APIGateway: common gateway for all requests, with Zuul proxy
ConfigService: acts as the config server for properties files
RegistryService: service registry with Eureka server
HomePageService: a service registered with eureka and config-service
ProductService: a service registered with eureka and config-service
When I run locally in the order RegistryService -> ConfigService, then all the other services (APIGateway, HomePageService, ProductService), everything works fine.
Now I've created docker images (with the config provided), run them in docker containers, and pushed them to GCR.
I've created a Google Cloud account (free for 1 year) and can see the images in the repo.
All fine, but how can I deploy those images on GKE? I've run them individually, but the services aren't linked. What is the right way to deploy these services?
kubectl run service-registry --image=gcr.io/salesstock/service-registry:v1 --port=7002
kubectl expose deployment service-registry --name=service-registry --type=LoadBalancer --port=7002 --target-port=7002
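For reference, the same pair of commands expressed declaratively (a sketch using the image and port above) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-registry
  template:
    metadata:
      labels:
        app: service-registry
    spec:
      containers:
        - name: service-registry
          image: gcr.io/salesstock/service-registry:v1
          ports:
            - containerPort: 7002
---
apiVersion: v1
kind: Service
metadata:
  name: service-registry
spec:
  type: LoadBalancer
  selector:
    app: service-registry
  ports:
    - port: 7002
      targetPort: 7002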
I tried a few things and have shared some code snippets below.
RegistryService properties:
spring:
  profiles:
    active: dev
  application:
    name: registry-service
server:
  port: 7002
eureka:
  instance:
    hostname: localhost
    port: 7002
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
#======docker======
---
spring:
  profiles: docker
eureka:
  instance:
    hostname: 192.168.99.100
    port: 7002
  client:
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
APIGateway properties:
spring:
  profiles:
    active: dev
  application:
    name: api-gateway
  cloud:
    config:
      fail-fast: true
      discovery:
        enabled: true
        service-id: config-service
      # uri: http://localhost:8888
server:
  port: 7001
eureka:
  instance:
    hostname: localhost
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
#======docker======
---
spring:
  profiles: docker
  cloud:
    config:
      fail-fast: true
      discovery:
        enabled: true
        service-id: config-service
      # uri: http://192.168.99.100:8888
eureka:
  instance:
    hostname: 192.168.99.100
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
ConfigService properties:
spring:
  profiles:
    active: dev
  application:
    name: config-service
  cloud:
    config:
      server:
        git:
          uri: git url
          search-paths: ConfigFiles
server:
  port: 8888
management:
  security:
    enabled: false
eureka:
  instance:
    hostname: service-registry
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
#======docker======
---
spring:
  profiles: docker
  cloud:
    config:
      server:
        git:
          uri: git repo
          search-paths: ConfigFiles
eureka:
  instance:
    hostname: 192.168.99.100
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
HomePageService properties:
spring:
  profiles:
    active: dev
  application:
    name: homepage-service
  cloud:
    config:
      fail-fast: true
      discovery:
        enabled: true
        service-id: config-service
      # uri: http://localhost:8888
server:
  port: 7003
#for dynamic port
#server:
#  port: 0
feign:
  client:
    config:
      default:
        connectTimeout: 160000000
        readTimeout: 160000000
management:
  security:
    enabled: false
## endpoints:
##   web:
##     exposure:
##       include: *
eureka:
  instance:
    hostname: localhost
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
#======docker======
---
spring:
  profiles: docker
  cloud:
    config:
      fail-fast: true
      discovery:
        enabled: true
        service-id: config-service
      # uri: http://192.168.99.100:8888
eureka:
  instance:
    hostname: 192.168.99.100
    port: 7002
  client:
    register-with-eureka: true
    fetch-registry: true
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${eureka.instance.port}/eureka/
A sample Dockerfile:
FROM openjdk:8
EXPOSE 7003
ADD /target/homepage-service.jar homepage-service.jar
ENTRYPOINT ["java","-Dspring.profiles.active=docker", "-jar", "homepage-service.jar"]
You deploy Docker containers by defining the container in the podspec of the k8s resource. You are already doing this with the kubectl run commands.
You then expose the pods using Services; this is what you are doing with kubectl expose (the default type is ClusterIP, which is perfect for inter-pod communication).
The reason this is not working for you is the way each container is trying to reach the other containers: localhost won't work, and the IP you are using (192.168.x.x) likely isn't reachable either.
Instead, configure each container to target the FQDN of the corresponding service.
Example:
RegistryService->ConfigService
Expose the ConfigService using a ClusterIP service (if you use kubectl expose, the name will be ConfigService).
Configure the RegistryService to look for the ConfigService using "ConfigService.default.svc.cluster.local"
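A sketch of that wiring, with illustrative names (Service names are conventionally lowercase, e.g. config-service), might look like:

apiVersion: v1
kind: Service
metadata:
  name: config-service   # in-cluster DNS: config-service.default.svc.cluster.local
spec:
  selector:
    app: config-service
  ports:
    - port: 8888
      targetPort: 8888
---
# fragment of a client Deployment's container spec
env:
  - name: SPRING_CLOUD_CONFIG_URI   # relaxed-binding form of spring.cloud.config.uri
    value: http://config-service.default.svc.cluster.local:8888

The same idea applies to the Eureka URL: point eureka.client.serviceUrl.defaultZone at the registry Service's FQDN instead of localhost or a node IP.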