I have an OpenShift namespace (SomeNamespace), and in that namespace I have several pods.
I have a route associated with that namespace (SomeRoute).
One of the pods runs my Spring application, which has REST controllers.
I want to send a message to one of those REST controllers. How can I do that?
I have a route URL: https://some.namespace.company.name. What should I find next?
I tried sending requests to https://some.namespace.company.name/rest/api/route, but it didn't work. I guess I must somehow specify the pod in my URL so the route will redirect requests to a concrete pod, but I don't know how to do that.
You don't need to specify the pod in the route.
The chain goes like this:
- the Route exposes a given port of a Service,
- the Service selects the pods to route traffic to via its .spec.selector field.
You need to check your Service and Route definitions.
Example service and route (including only the related parts of the resources):
Service
spec:
  ports:
  - name: 8080-tcp
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    <label-key>: <label-value>
Where <label-key> and <label-value> are any label key-value combination that selects your pod running the HTTP application.
Route
spec:
  port:
    targetPort: 8080-tcp  # the port name from the Service
  to:
    kind: Service
    name: <service name>
When your app exposes some endpoint on :8080/rest/api, you can invoke it with <route-url>/rest/api
You can try it out with an example application (one I found randomly on GitHub) to verify everything works correctly on your cluster:
create a new app using an s2i build from a GitHub repository: oc new-app registry.redhat.io/openjdk/openjdk-11-rhel7~https://github.com/redhat-cop/spring-rest
wait until the s2i build is done and the pod is started
expose the service via route: oc expose svc/spring-rest
grab the route url: oc get route spring-rest -o jsonpath='{.spec.host}'
invoke the api: curl -k <route-url>/v1/greeting
response should be something like: {"id":3,"content":"Hello, World!"}
Routes are an OpenShift-specific way of exposing a Service outside the cluster.
But, if you are developing an app that will be deployed onto OpenShift and Kubernetes, then you should use Kubernetes Ingress objects.
Using Ingress means that your app’s manifests are more portable between different Kubernetes clusters.
From the official Kubernetes docs:
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
So, if you want to reach your REST controllers:
From within the k8s cluster: create a k8s Service to expose an application running on a set of Pods as a network service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: your-namespace
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
This specification creates a new Service object named "my-service", which targets TCP port 8080 on any Pod with the app=MyApp label.
You can reach the REST controller using this URL:
http://my-service
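From a pod in another namespace, you would use the fully qualified service name instead (a sketch, assuming the default cluster.local cluster domain and that your controller is mapped under /rest/api):
curl http://my-service.your-namespace.svc.cluster.local/rest/api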
Externally: create an Ingress resource to configure externally-reachable URLs (the k8s Service 'my-service' must already exist):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  namespace: your-namespace
spec:
  rules:
  - host: "foo.bar.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: my-service
            port:
              number: 80
You can reach the REST controller using this URL:
http://foo.bar.com
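If DNS for foo.bar.com does not resolve yet, you can still test the Ingress by supplying the Host header yourself (a sketch; <ingress-ip> stands for the external address shown by kubectl get ingress):
curl -H "Host: foo.bar.com" http://<ingress-ip>/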
Related
I have deployed an application successfully using a Helm chart, but I can't work out which URL I should use to access it. Here is the NodePort service created by Helm for this web app:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-springboot-demoweb NodePort 10.101.86.143 <none> 8080:31384/TCP 11m
xxxxxxx#xxxxxxxx5 charts % kubectl describe svc
Name: demo-springboot-demoweb
Namespace: springboot-demoweb
Labels: app=springboot-demoweb
app.kubernetes.io/managed-by=Helm
chart=springboot-demoweb-0.1.0
heritage=Helm
release=demo
Annotations: meta.helm.sh/release-name: demo
meta.helm.sh/release-namespace: springboot-demoweb
Selector: app=springboot-demoweb,release=demo
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.86.143
IPs: 10.101.86.143
Port: nginx 8080/TCP
TargetPort: 8080/TCP
NodePort: nginx 31384/TCP
Endpoints: 172.17.0.15:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You have deployed an application that is exposed using a Service of kind NodePort.
That means that every node of the cluster exposes the application on the same coordinated port number.
So you need the IP of one of the nodes to access the application.
You can use kubectl get nodes -o wide to get the nodes and IP addresses. If it is a local cluster it will be shown as INTERNAL IP.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 155m v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.15.0-37-generic docker://20.10.12
Use one of the IPs together with your NodePort which is 31384. In my example it would be: http://192.168.49.2:31384
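If you prefer not to read the tables by hand, the same URL can be assembled with jsonpath (a sketch, assuming the service and namespace names from the question; on minikube, minikube service demo-springboot-demoweb --url prints it directly):
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
NODE_PORT=$(kubectl get svc demo-springboot-demoweb -n springboot-demoweb -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$NODE_IP:$NODE_PORT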
I'm trying to access an external API from my service (site-api) in Kubernetes. This external API can only be accessed via an IP whitelist, so I gave it the IP of my ingress (EXTERNAL IP), found in the Kubernetes dashboard (Discovery and Load Balancing / Ingresses), but I still get access denied. Is the IP I have to provide correct? Are some settings missing in my ingress YAML?
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: site-name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-body-size: "15m"
spec:
  tls:
  - hosts:
    - site-name
    secretName: aks-ingress-tls
  rules:
  - host: site-name
    http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: site-web
          servicePort: 443
      - path: /message/?(.*)
        backend:
          serviceName: site-message
          servicePort: 3001
      - path: /api/?(.*)
        backend:
          serviceName: site-api
          servicePort: 8443
• The IP shown in the dashboard under Discovery and Load Balancing / Ingresses is not the IP address to whitelist. Instead, expose the site-api deployment with a Service of type LoadBalancer; the external IP assigned to that Service is the address that needs to be whitelisted.
• So, first create a Service object that exposes the site-api deployment, and list it to get concise information about it.
• Then describe the Service to get detailed information, including the external IP it was assigned.
• Also take note of the endpoint IP addresses and their ports in the displayed information. You can then access the site-api service using that external IP address and port.
• The Kubernetes commands below execute these steps:
kubectl expose deployment <site-api> --type=LoadBalancer --name=my-service   # create a Service object exposing the deployment
kubectl get services my-service        # display information about the Service
kubectl describe services my-service   # display detailed information about the Service
kubectl get pods --output=wide         # get endpoint information
curl http://<external-ip>:<port>       # access site-api through the external IP
Please refer to the link below for more information:
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
I'm trying to get two dockerized applications to speak to each other on a given port according to my docker-compose.yml file.
They are able to speak to each other as app1:61001 and app2:61002, but my friend tells me they should be able to communicate on port 80 (e.g. app2:80), and that ports 61001 and 61002 should only be the accessible ports exposed out of the swarm.
The applications themselves are set to server.port=80.
Any idea how I can get it working as my friend suggests?
Here is the docker-compose file I'm using:
docker compose
version: "3.5"
services:
app1:
image: docker.artifactory.gr.gr.com/app1:latest
ports:
- "61001:80"
deploy:
replicas: 2
networks:
- custom-network
app2:
image: docker.artifactory.gr.gr.com/app2:latest
ports:
- "61002:80"
deploy:
replicas: 2
networks:
- custom-network
networks:
custom-network:
First, check whether your services expose port 80, using the docker-compose ps command.
If they do, just remove the following code from both of your services:
ports:
  - "61002:80"
If not, remove
ports:
  - "61002:80"
and add
expose:
  - "80"
Then, in your app, to call one service from the other, just call appN:80.
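A minimal sketch of how the second case might look for one service (keeping the image and network names from the question; expose is mostly documentation here, since containers on the same network can already reach each other's internal ports):
app2:
  image: docker.artifactory.gr.gr.com/app2:latest
  expose:
    - "80"
  networks:
    - custom-network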
I hope I understood your request and that this helps.
app1 and app2 are part of the same network, which you named custom-network.
This means that the internal port used by the containers (the one on the right, 80) is visible from both applications!
If you have to call app2 from app1, you simply have to name the container:
hostname: app2 # do the same for the other container
container_name: app2
Then, from app1, you can call the application simply using "app2:80/yourpath".
The published ports are what is visible from OUTSIDE the network.
In addition:
You can check the connectivity by connecting into the app1 container with an interactive shell (https://gist.github.com/mitchwongho/11266726) and then executing
ping app2
You will see that app2 has an internal IP and is reachable.
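A sketch of that check in one line, assuming the running container is literally named app1 (adjust to the actual name shown by docker ps):
docker exec -it app1 ping -c 3 app2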
So I have created a Java Spring Boot application which uses Keycloak for authenticating its users.
When I run Keycloak from docker-compose, I can successfully authenticate when running my application as a standalone jar file or when debugging. But when I put my Spring Boot application in a docker container inside docker-compose, I cannot authenticate users anymore.
My error log from the Spring Boot docker container:
springBootApp | 2019-12-19 13:16:41.498 ERROR 1 --- [nio-8081-exec-2] o.k.a.rotation.JWKPublicKeyLocator : Error when sending request to retrieve realm keys
springBootApp |
springBootApp | org.keycloak.adapters.HttpClientAdapterException: IO error
I thought that the problem was with the network, but all containers are running in the same virtual network. They are also in the same docker-compose file.
This is my Keycloak part:
keycloak:
  image: jboss/keycloak
  ports:
    - 18080:8080
  volumes:
    - ../keycloak:/opt/jboss/keycloak/imports
  command:
    - "-b 0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/realm-export.json"
  environment:
    - KEYCLOAK_USER=admin
    - KEYCLOAK_PASSWORD=admin
My Spring Boot app:
mySpringBootApp:
  image: mySpringBootApp:master-1
  environment:
    - SPRING_PROFILES_ACTIVE=developmentTest
  depends_on:
    - jaeger
    - keycloak
    - db
  ports:
    - "8081:8081"
When I try curl localhost:18080 from my host, I get a response.
When I try curl from the springBootApp container, I get connection refused. So I assume that even though they are in the same network, they don't see each other.
You have to keep in mind that your docker container is isolated from the host it is running on. localhost on your computer is different from localhost inside the docker container.
You are using docker-compose, and both services are in the same docker-compose.yaml configuration. This means you can use the service name of a service to reach it from within another service in the same docker-compose file.
In your case the service you want to access is called keycloak, and you have mapped its ports as 18080:8080, meaning that from your computer localhost:18080 reaches port 8080 of this particular container.
In order to access this container (or service, in a docker-compose context) you need to replace localhost with the name of your service.
In your case, to curl the keycloak container from the mySpringBootApp container, you replace localhost with the service name and use the container port (8080), not the host-mapped port. So, long story short: curl keycloak:8080
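The same substitution has to happen in the Spring Boot configuration, not just in curl. A sketch, assuming the Keycloak Spring Boot adapter's standard keycloak.auth-server-url property and a hypothetical profile file name (adjust to wherever your app configures the Keycloak URL):
# application-developmentTest.properties (hypothetical file name)
keycloak.auth-server-url=http://keycloak:8080/auth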
I would define a network for your services in docker-compose, like:
services:
  app:
    image: some-image
    networks:
      - my-network-name
networks:
  my-network-name:
    name: my-global-net
Then you can be sure that the services are in the same network and can speak to each other via their service names.
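To confirm which containers actually landed on the network (using the network name from the snippet above):
docker network inspect my-global-net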
We are trying to implement two pods, one with MongoDB and another with a Java application, and the Java application needs to be bound to MongoDB. How can we bind the DB and the app when they are running in two different pods and on different subnets?
You may want to use a Service for your mongo pod. You need to add a label, e.g. name: mongo, to the pod and create a Service:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 27017
  selector:
    name: mongo
Then mongo will be accessible from the Java application pod at the mongo:27017 address.
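On the application side that address goes into the MongoDB connection settings. A sketch for Spring Boot, assuming Spring Data MongoDB and a hypothetical database name mydb:
# application.properties
spring.data.mongodb.uri=mongodb://mongo:27017/mydb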
For a quick experiment you may use kubectl expose pod _MONGO_POD_NAME_ --port=27017 --name=mongo
This tutorial may be handy as well.