We are trying to implement two pods, one with MongoDB and another one with a Java application, and the Java application needs to connect to MongoDB. How can we bind the DB and the app when they are running in two different pods and on different subnets?
You may want to use a Service for your mongo pod. Add a label, e.g. name: mongo, to the pod and create a Service:
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
    - port: 27017
  selector:
    name: mongo
Then mongo will be accessible from the Java application pod at the address mongo:27017.
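For example, if the Java application uses Spring Data MongoDB, a minimal application.properties entry could point at the Service name (the database name mydb is just a placeholder):
# connect via the Kubernetes Service name; "mydb" is a placeholder database name
spring.data.mongodb.uri=mongodb://mongo:27017/mydb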
For a quick experiment you may use kubectl expose pod _MONGO_POD_NAME_ --port=27017 --name=mongo
This tutorial may be handy as well.
Related
I have deployed an application successfully using a Helm chart, but I am unable to understand which URL I should use to access it. Here is the NodePort service created by Helm for this web app:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-springboot-demoweb NodePort 10.101.86.143 <none> 8080:31384/TCP 11m
xxxxxxx#xxxxxxxx5 charts % kubectl describe svc
Name: demo-springboot-demoweb
Namespace: springboot-demoweb
Labels: app=springboot-demoweb
app.kubernetes.io/managed-by=Helm
chart=springboot-demoweb-0.1.0
heritage=Helm
release=demo
Annotations: meta.helm.sh/release-name: demo
meta.helm.sh/release-namespace: springboot-demoweb
Selector: app=springboot-demoweb,release=demo
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.101.86.143
IPs: 10.101.86.143
Port: nginx 8080/TCP
TargetPort: 8080/TCP
NodePort: nginx 31384/TCP
Endpoints: 172.17.0.15:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
You have deployed an application that is exposed using a Service of kind NodePort.
That means that every node of the cluster exposes the application on the same port: the NodePort, here 31384.
So you need the IP address of one of the nodes to access the application.
You can use kubectl get nodes -o wide to list the nodes and their IP addresses. For a local cluster it is shown under INTERNAL-IP.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 155m v1.23.3 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.15.0-37-generic docker://20.10.12
Use one of the IPs together with your NodePort which is 31384. In my example it would be: http://192.168.49.2:31384
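You can quickly verify it with curl, using the IP and NodePort from this example:
curl http://192.168.49.2:31384/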
I have an openshift namespace (SomeNamespace), in that namespace I have several pods.
I have a route associated with that namespace (SomeRoute).
In one of the pods I have my Spring application. It has REST controllers.
I want to send message to that REST controller, how can I do it?
I have a route URL: https://some.namespace.company.name. What should I find next?
I tried to send requests to https://some.namespace.company.name/rest/api/route but it didn't work. I guess I must somehow specify the pod in my URL so the route will redirect requests to a concrete pod, but I don't know how to do that.
You don't need to specify the pod in the route.
The chain goes like this:
Route exposes a given port of a Service
Service selects some pod to route the traffic to by its .spec.selector field
You need to check your Service and Route definitions.
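For instance, you can dump both definitions with oc (the resource names are placeholders):
oc get svc <service-name> -o yaml
oc get route <route-name> -o yaml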
Example service and route (including only the related parts of the resources):
Service
spec:
  ports:
    - name: 8080-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    <label-key>: <label-value>
where label-key and label-value are any label key-value combination that selects the pod running your HTTP application.
Route
spec:
  port:
    targetPort: 8080-tcp   # <port name of the service>
  to:
    kind: Service
    name: <service name>
When your app exposes some endpoint on :8080/rest/api, you can invoke it with <route-url>/rest/api
You can try it out with an example application (one I found randomly on GitHub) to verify everything works correctly on your cluster:
create a new app using s2i build from github repository: oc new-app registry.redhat.io/openjdk/openjdk-11-rhel7~https://github.com/redhat-cop/spring-rest
wait until the s2i build is done and the pod is started
expose the service via route: oc expose svc/spring-rest
grab the route url: oc get route spring-rest -o jsonpath='{.spec.host}'
invoke the api: curl -k <route-url>/v1/greeting
response should be something like: {"id":3,"content":"Hello, World!"}
Routes are an OpenShift-specific way of exposing a Service outside the cluster.
But, if you are developing an app that will be deployed onto OpenShift and Kubernetes, then you should use Kubernetes Ingress objects.
Using Ingress means that your app’s manifests are more portable between different Kubernetes clusters.
From the official Kubernetes docs:
An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
Traffic routing is controlled by rules defined on the Ingress resource.
So, if you want to reach your REST controllers:
From within the k8s cluster: create a k8s Service to expose an application running on a set of Pods as a network service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: your-namespace
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
This specification creates a new Service object named "my-service", which targets TCP port 8080 on any Pod with the app=MyApp label.
You can reach the REST controller using this URL:
http://my-service
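From a pod in a different namespace you would use the fully qualified service DNS name instead, for example (the /rest/api path is just taken from the question):
curl http://my-service.your-namespace.svc.cluster.local/rest/api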
Externally: create an Ingress resource to configure externally-reachable URLs (the k8s Service 'my-service' must already exist):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  namespace: your-namespace
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: my-service
                port:
                  number: 80
You can reach the REST controller using this URL:
http://foo.bar.com
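If DNS for foo.bar.com is not set up yet, you can still test the Ingress by sending the Host header to the ingress controller's external IP (the IP placeholder below is an assumption):
curl -H "Host: foo.bar.com" http://<ingress-controller-external-ip>/rest/api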
I'm trying to access an external API from my service (site-api) in Kubernetes. This external API can only be accessed through an IP whitelist, so I gave them the EXTERNAL IP of my ingress, which is shown in the Kubernetes dashboard (Discovery and Load Balancing / Ingresses), but I still get access denied. Is the IP I have to provide correct? Is something missing in the ingress YAML?
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: site-name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/proxy-body-size: "15m"
spec:
  tls:
    - hosts:
        - site-name
      secretName: aks-ingress-tls
  rules:
    - host: site-name
      http:
        paths:
          - path: /?(.*)
            backend:
              serviceName: site-web
              servicePort: 443
          - path: /message/?(.*)
            backend:
              serviceName: site-message
              servicePort: 3001
          - path: /api/?(.*)
            backend:
              serviceName: site-api
              servicePort: 8443
• The IP shown in the dashboard under Discovery and Load Balancing / Ingresses is not the IP address that has to be whitelisted. Instead, a Service object that exposes the site-api deployment will provide the external IP that needs to be whitelisted.
• So, first create a Service object of type LoadBalancer that exposes the site-api deployment, and display concise information about the Service that was created.
• Once you have that concise information, get the detailed description of the Service, which shows the external IP.
• That external IP exposed by the Service is the address that needs to be whitelisted. Also take note of the endpoint IP addresses and their ports, which reach site-api through its ports and node ports. You can then access the site-api service using the external IP address and the port shown in the output.
• Here are the Kubernetes commands for the steps above:
kubectl expose deployment <site-api> --type=LoadBalancer --name=my-service   # create a Service object exposing the deployment
kubectl get services my-service                                              # display information about the Service
kubectl describe services my-service                                         # display detailed information about the Service
kubectl get pods --output=wide                                               # get endpoint information
curl http://<external-ip>:<port>                                             # access site-api through the external IP
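The output of kubectl get services my-service will look roughly like this (all values below are placeholders); the address under EXTERNAL-IP is the one to whitelist:
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
my-service   LoadBalancer   10.0.171.239   20.30.40.50     8080:32566/TCP   54s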
Please refer the below link for more information: -
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/
I'm trying to get two dockerized applications to speak to each other on a given port according to my docker-compose.yml file.
They are able to speak to each other on app1:61001 and app2:61002, but my friend tells me that they should be able to communicate on port 80, e.g. app2:80, and that ports 61001 and 61002 should only be the accessible ports exposed out of the swarm.
The applications themselves are set to server.port=80
Any idea how I can get it working as my friend suggests?
Here is the docker-compose file I'm using:
docker compose
version: "3.5"
services:
  app1:
    image: docker.artifactory.gr.gr.com/app1:latest
    ports:
      - "61001:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
  app2:
    image: docker.artifactory.gr.gr.com/app2:latest
    ports:
      - "61002:80"
    deploy:
      replicas: 2
    networks:
      - custom-network
networks:
  custom-network:
First, check whether your services already expose port 80, using the docker-compose ps command.
If they do, just remove the following code from both of your services:
ports:
  - "61002:80"
If not, remove
ports:
  - "61002:80"
and add
expose:
  - "80"
Then, in your application code, to call one service just call appN:80.
I hope I understood your request and that this helps.
App1 and App2 are part of the same network, which you named custom-network.
This means that the internal port used by the containers (the one on the right of the mapping, 80) is reachable from both applications!
If you have to call APP 2 from APP 1, you simply have to name the container:
hostname: app2         # do the same for the other container
container_name: app2
Then, from app1, you can call the application simply using "app2:80/yourpath".
The ports you publish are visible OUTSIDE the network.
In addition:
You can check the connectivity by connecting to the app1 container with an interactive shell (https://gist.github.com/mitchwongho/11266726) and then executing
ping app2
You will see that app2 has an internal IP and is reachable.
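Similarly, assuming curl is available inside the app1 image and app2 serves something under /yourpath, you could check HTTP connectivity directly:
docker exec -it <app1-container-id> curl http://app2:80/yourpath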
I'm new to Docker and I'm trying to connect my Spring Boot app, running in my boot-example Docker container, to a MySQL server running in my mymysql Docker container on port 6603, both on the same physical machine.
The fact is: if I connect my spring-boot app to my mymysql docker container in order to communicate with the database, I get no errors and everything works fine.
When I move my Spring Boot application into my boot-example container and try to communicate (through Hibernate) with my mymysql container, I get this error:
2018-02-05 09:58:38.912 ERROR 1 --- [ main] o.a.tomcat.jdbc.pool.ConnectionPool : Unable to create initial connections of pool.
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_111]
My spring boot application.properties are:
server.port=8083
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:mysql://localhost:6603/mydockerdb
spring.datasource.username=root
spring.datasource.password=mypassword
It works fine until I run my Spring Boot app in a Docker container on port 8082 (after the Docker image has been built correctly):
docker run -it -p 8082:8083 boot-example
You cannot use localhost inside the container; there it refers to the container itself. Hence, you will always get the connection refused error.
You can do one of the following:
Add your host machine's IP in the application.properties file of your Spring Boot application. (Not recommended, since it breaks Docker's portability.)
In case you want to use localhost, use --net=host while starting the container. (Not recommended for production, since no logical network layer exists.)
Use --links for container communication with a DNS name. (Deprecated/legacy.)
Create a compose file and call your DB from the Spring Boot app by its service name, since they will be on the same network and tightly integrated with each other. (Recommended; see the sketch below.)
PS: Whenever you need to integrate multiple containers, go for docker-compose version 3+. Use docker run/build to understand the fundamentals and to perform dry/test runs.
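A minimal docker-compose.yml sketch for this case could look like the following (the boot-example image, the port mapping, the password, and the mydockerdb database name are taken from the question; the MySQL image tag is illustrative):
version: "3"
services:
  app:
    image: boot-example            # the Spring Boot image from the question
    ports:
      - "8082:8083"                # host 8082 -> container 8083 (server.port=8083)
    depends_on:
      - mymysql
  mymysql:
    image: mysql:5.7               # illustrative MySQL image/tag
    environment:
      MYSQL_ROOT_PASSWORD: mypassword
      MYSQL_DATABASE: mydockerdb
The datasource URL would then use the service name instead of localhost, e.g. spring.datasource.url=jdbc:mysql://mymysql:3306/mydockerdb (inside the compose network MySQL listens on its default port 3306).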
As @vivekyad4v suggested, the easiest way to achieve what you want is to use docker-compose, which has better integration for container-to-container communication.
Docker-compose is a tool for managing one or more Docker containers. It uses a single configuration file called docker-compose.yml.
For more information about docker-compose, please take a look at the documentation and the compose file reference.
In my experience, it is good practice to follow SRP (the single responsibility principle), thus creating one container for your database and one for your application. They communicate over the network you specify in your configuration.
The following example of docker-compose.yml might help you:
version: '2'
networks:
  # your network name
  somename:
    driver: bridge
services:
  # PHP server
  php:
    image: dalten/php5.6-apache
    ports:
      - 80:80
    volumes:
      - .application_path:/some/application/path
    # your container network name defined at the beginning
    networks:
      - somename
  # MySQL server for backend
  mysql:
    image: dalten/mysql:dev
    ports:
      - 3306:3306
    # The /var/lib/mysql volume MUST be specified to achieve data persistence over container restart
    volumes:
      - ./mysql_data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: backend
    # your container network name defined at the beginning
    networks:
      - somename
Note: communication between containers inside the network is achieved by calling the service name from inside a container.
The connection parameters to the MySQL container from PHP would, in this example, be:
hostname: mysql
port: 3306
database: backend
user: root
password: root
As per the suggestions above, docker-compose is one way to do it, but if you don't want to go with compose/swarm mode:
Simply create your own network using docker network create myNet
Run your containers on the created network with --network myNet
Change your spring.datasource.url to jdbc:mysql://mymysql:6603/mydockerdb
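Concretely, the network steps could look like this (the container and image names follow the question; the docker network connect step assumes the mymysql container is already running):
# create a user-defined bridge network
docker network create myNet
# attach the existing MySQL container to it
docker network connect myNet mymysql
# run the Spring Boot container on the same network
docker run -it --network myNet -p 8082:8083 boot-example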
By using the Docker daemon's DNS resolution, containers can discover each other by name and hence can communicate.
[DNS resolution is not supported on the default bridge network; a user-defined bridge network does support it.]
For more information: https://docs.docker.com/engine/userguide/networking/