jQuery call REST API not working in Docker/Kubernetes - java

This is a Java Spring Boot REST API application that uses an index.html page to present the UI to the user.
When index.html is displayed, JavaScript/jQuery logic makes a REST API call (coded as below) to the backend service in the Java controller class to get two randomly generated numbers:
$.ajax({
    url: "http://localhost:8080/multiplications/random"
The program works fine when I run it as a Spring Boot app in Eclipse!
However, it stopped working after I used the .jar file to build a Docker image and deployed it using Kubernetes/minikube (I'm new to Docker/Kubernetes).
Here's the Dockerfile that builds the image from the .jar:
FROM openjdk:latest
ADD target/social-multiplication-v3-0.3.0-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-jar","app.jar"]
EXPOSE 8080
Here's the deployment.yaml file:
---
kind: Service
apiVersion: v1
metadata:
  name: multiplicationservice
spec:
  selector:
    app: multiplication
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 8080
      nodePort: 30001
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mdeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multiplication
  template:
    metadata:
      labels:
        app: multiplication
    spec:
      containers:
        - name: multiplication
          image: huxianjun/multiplication
          ports:
            - containerPort: 80
And here is the IP address of the host where the application is deployed in Kubernetes:
$ minikube ip
192.168.99.101
In the end, I can get to the index.html page from the browser via the following URL:
http://192.168.99.101:30001/
The page is displayed as expected. What is NOT working: the following REST API call never happens, so the two numbers are not returned and displayed on the page:
$.ajax({
    url: "http://localhost:8080/multiplications/random"
My guess: is this caused by 'localhost' and port 8080 not aligning with the ports defined in the deployment.yaml file, or by some conflict with 'EXPOSE 8080' in the Dockerfile?

In your case, you are calling $.ajax from your browser, which runs on your host machine; hence those API calls are sent from your local machine, not from within your Docker container.
To solve the problem, update the URL to use http://192.168.99.101:30001/, like this:
$.ajax({
    url: "http://192.168.99.101:30001/multiplications/random"
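Alternatively, a relative URL avoids hardcoding any host or port, since the call then targets whatever origin served the page. A minimal sketch, assuming the controller stays mapped at /multiplications/random on the same origin:

$.ajax({
    // relative URL: resolves against the host/port that served index.html
    url: "/multiplications/random",
    success: function (data) {
        console.log(data); // render the two random factors here
    }
});

This keeps the same code working in Eclipse, Docker, and Kubernetes without edits.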

Try running sudo lsof -i :8080 if you use Linux. It shows which processes are listening on that port. If you don't see port 8080, your application's port isn't visible to your localhost. That's because Docker containers are closed/isolated from external processes and files, and the EXPOSE 8080 instruction in the Dockerfile alone is not enough.
Try docker run -p 8080:8080 YOUR_CREATED_IMAGE_NAME. This publishes the port, mapping the container's localhost:8080 to your localhost:8080.
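You can then confirm the mapping and hit the endpoint from the host; a quick check (the endpoint path comes from the question above):

$ docker ps --format '{{.Names}}: {{.Ports}}'
$ curl -i http://localhost:8080/multiplications/random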

Related

docker can't connect to another docker on same machine [duplicate]

I have two separate docker-compose.yml files in two different folders:
~/front/docker-compose.yml
~/api/docker-compose.yml
How can I make sure that a container in front can send requests to a container in api?
I know that the --default-gateway option can be set using docker run for an individual container, so that a specific IP address can be assigned to it, but that option does not seem to be available when using docker-compose.
Currently I end up doing docker inspect my_api_container_id and looking at the gateway in the output. It works, but the problem is that this IP is randomly attributed, so I can't rely on it.
Another form of this question might thus be:
Can I attribute a fixed IP address to a particular container using docker-compose?
But in the end what I'm looking after is:
How can two different docker-compose projects communicate with each other?
You just need to make sure that the containers you want to talk to each other are on the same network. Networks are a first-class Docker construct, and are not specific to Compose.
# front/docker-compose.yml
version: '2'
services:
  front:
    ...
    networks:
      - some-net
networks:
  some-net:
    driver: bridge
...
# api/docker-compose.yml
version: '2'
services:
  api:
    ...
    networks:
      - front_some-net
networks:
  front_some-net:
    external: true
Note: Your app's network is given a name based on the "project name", which is derived from the name of the directory it lives in; in this case the prefix front_ was added.
They can then talk to each other using the service name. From front you can do ping api and vice versa.
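For example, a quick check (assuming the service names above and that the images ship a ping binary):

$ docker-compose exec front ping -c1 api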
UPDATE: As of compose file version 3.5:
This now works:
version: "3.5"
services:
  proxy:
    image: hello-world
    ports:
      - "80:80"
    networks:
      - proxynet
networks:
  proxynet:
    name: custom_network
docker-compose up -d will join a network called 'custom_network'. If it doesn't exist, it will be created!
root@ubuntu-s-1vcpu-1gb-tor1-01:~# docker-compose up -d
Creating network "custom_network" with the default driver
Creating root_proxy_1 ... done
Now, you can do this:
version: "2"
services:
  web:
    image: hello-world
    networks:
      - my-proxy-net
networks:
  my-proxy-net:
    external:
      name: custom_network
This will create a container that will be on the external network.
I can't find any reference in the docs yet but it works!
Just a small addition to @johnharris85's great answer:
when you are running a docker-compose file, a "default" network is created, so you can just add it to the other compose file as an external network:
# front/docker-compose.yml
version: '2'
services:
  front_service:
    ...

# api/docker-compose.yml
version: '2'
services:
  api_service:
    ...
    networks:
      - front_default
networks:
  front_default:
    external: true
This approach suited me better because I did not own the first docker-compose file but wanted to communicate with it.
All containers from api can join the front default network with the following config:
# api/docker-compose.yml
...
networks:
  default:
    external:
      name: front_default
See the Docker Compose guide: using a pre-existing network (at the bottom of the page).
The information in the previous posts is correct, but it lacks details on how to link containers, which should be connected as "external_links".
I hope this example makes it clearer for you:
Suppose you have app1/docker-compose.yml with two services (svc11 and svc12), and app2/docker-compose.yml with two more services (svc21 and svc22), and suppose you need to connect them in a crossed fashion:
svc11 needs to connect to svc22's container
svc21 needs to connect to svc11's container.
So the configuration should be like this:
this is app1/docker-compose.yml:
version: '2'
services:
  svc11:
    container_name: container11
    [..]
    networks:
      - default # this network
      - app2_default # external network
    external_links:
      - container22:container22
    [..]
  svc12:
    container_name: container12
    [..]
networks:
  default: # this network (app1)
    driver: bridge
  app2_default: # external network (app2)
    external: true
this is app2/docker-compose.yml:
version: '2'
services:
  svc21:
    container_name: container21
    [..]
    networks:
      - default # this network (app2)
      - app1_default # external network (app1)
    external_links:
      - container11:container11
    [..]
  svc22:
    container_name: container22
    [..]
networks:
  default: # this network (app2)
    driver: bridge
  app1_default: # external network (app1)
    external: true
Everybody has explained really well, so I'll add the necessary code with just one simple explanation.
Use a network created outside of docker-compose (an "external" network) with docker-compose version 3.5+.
Further explanation can be found here.
The first docker-compose.yml file should define a network named giveItANamePlease as follows:
networks:
  my-network:
    name: giveItANamePlease
    driver: bridge
The services of the first docker-compose.yml file can use the network as follows:
networks:
  - my-network
In the second docker-compose file, we reference the network by the name used in the first docker-compose file, which in this case is giveItANamePlease:
networks:
  my-proxy-net:
    external:
      name: giveItANamePlease
And now you can use my-proxy-net in the services of the second docker-compose.yml file as follows:
networks:
  - my-proxy-net
Since Compose 1.18 (spec 3.5), you can just override the default network using your own custom name for all Compose YAML files you need. It is as simple as appending the following to them:
networks:
  default:
    name: my-app
The above assumes you have version set to 3.5 (or above if they don't deprecate it in 4+).
Other answers have pointed the same; this is a simplified summary.
I came across a similar problem, and I solved it by making a small change in one of my docker-compose.yml projects.
For instance, we have two APIs, scoring and ner. The scoring API needs to send requests to the ner API to process the input request. In order to do that, they both have to share the same network.
Note: Every compose project has its own network, automatically created when the app starts inside Docker. For example, the ner API's network will be named ner_default and the scoring API's network scoring_default. This solution works for version: '3'.
As in the above scenario, my scoring API wants to communicate with the ner API, so I add the following lines. This means that whenever the container for the ner API is created, it is automatically added to the scoring_default network.
networks:
  default:
    external:
      name: scoring_default
ner/docker-compose.yml
version: '3'
services:
  ner:
    container_name: "ner_api"
    build: .
    ...
networks:
  default:
    external:
      name: scoring_default
scoring/docker-compose.yml
version: '3'
services:
  api:
    build: .
    ...
We can verify that the above containers are now part of the same network, scoring_default, using the command:
docker inspect scoring_default
{
    "Name": "scoring_default",
    ....
    "Containers": {
        "14a6...28bf": {
            "Name": "ner_api",
            "EndpointID": "83b7...d6291",
            "MacAddress": "0....",
            "IPv4Address": "0.0....",
            "IPv6Address": ""
        },
        "7b32...90d1": {
            "Name": "scoring_api",
            "EndpointID": "311...280d",
            "MacAddress": "0.....3",
            "IPv4Address": "1...0",
            "IPv6Address": ""
        },
        ...
    }
}
You can add a .env file in all your projects containing COMPOSE_PROJECT_NAME=somename.
COMPOSE_PROJECT_NAME overrides the prefix used to name resources; as such, all your projects will use somename_default as their network, making it possible for services to communicate with each other as if they were in the same project.
NB: You'll get warnings for "orphaned" containers created from other projects.
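A minimal sketch (somename is a placeholder):

# ~/front/.env and ~/api/.env
COMPOSE_PROJECT_NAME=somename

Both projects then resolve their default network to somename_default, so the front and api services end up side by side.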
So many answers!
First of all, avoid hyphens in entity names such as services and networks. They can cause issues with name resolution.
Example: my-api won't work. myapi or api will work.
What worked for me is:
# api/docker-compose.yml
version: '3'
services:
  api:
    container_name: api
    ...
    ports:
      - 8081:8080
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
and
# front/docker-compose.yml
version: '3'
services:
  front:
    container_name: front
    ...
    ports:
      - 81:80
    networks:
      - mynetwork
networks:
  mynetwork:
    name: mynetwork
NOTE: I added ports to show how services can access each other, and how they are accessible from the host.
IMPORTANT: If you don't specify a network name, docker-compose will craft one for you based on the name of the folder the docker-compose.yml file is in: in this case api_mynetwork and front_mynetwork. That would prevent communication between the containers, since they would be on different networks with very similar names.
Note that the network is defined exactly the same way in both files, so you can start either service first and it will work. No need to specify which one is external; docker-compose takes care of managing that for you.
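To verify that both containers really ended up on the same network, something like this works (a sketch; the output should list both container names):

$ docker network inspect mynetwork --format '{{range .Containers}}{{.Name}} {{end}}'
api front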
From the host
You can access either container using the published ports defined in docker-compose.yml.
You can access the Front container: curl http://localhost:81
You can access the API container: curl http://localhost:8081
From the API container
You can access the Front container using the original port, not the one you published in docker-compose.yml.
Example: curl http://front:80
From the Front container
You can access the API container using the original port, not the one you published in docker-compose.yml.
Example: curl http://api:8080
To use another docker-compose project's network, do the following (to share a network between docker-compose projects):
Run the first docker-compose project with up -d.
Find the network name of the first docker-compose project with docker network ls (it contains the name of the project's root directory).
Then use that name with the structure below in the second docker-compose file.
second docker-compose.yml
version: '3'
services:
  service-on-second-compose: # Define any name you want.
    .
    .
    .
    networks:
      - <the network name that comes from "docker network ls">
networks:
  <the network name that comes from "docker network ls">:
    external: true
I would ensure all containers are docker-compose'd onto the same network by composing them together at the same time, using:
docker compose --file ~/front/docker-compose.yml --file ~/api/docker-compose.yml up -d
If you are:
- trying to communicate between two containers from different docker-compose projects and don't want to use the same network (say, because each would have a PostgreSQL or Redis container on the same port and you would rather not change these ports or share a network)
- developing locally and want to imitate communication between two docker-compose projects
- running two docker-compose projects on localhost
- developing Django apps or Django REST Framework (DRF) APIs and running the app inside a container on some exposed port
- getting Connection refused while trying to communicate between two containers
And you want:
- container api_a to communicate with api_b (or vice versa) without sharing the same "docker network"
(example below)
then you can use the "host" of the second container: the IP of your computer plus the port that is mapped from inside the Docker container. You can obtain the IP of your computer with this script (from: Finding local IP addresses using Python's stdlib):
import socket

def get_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # doesn't even have to be reachable
        s.connect(('10.255.255.255', 1))
        IP = s.getsockname()[0]
    except Exception:
        IP = '127.0.0.1'
    finally:
        s.close()
    return IP
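Usage is then just (a sketch):

if __name__ == '__main__':
    print(get_ip())  # e.g. 192.168.0.12; use this IP in the cross-project URL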
Example:
project_api_a/docker-compose.yml:
networks:
  app-tier:
    driver: bridge

services:
  api:
    container_name: api_a
    image: api_a:latest
    depends_on:
      - postgresql
    networks:
      - app-tier
Inside the api_a container you are running a Django app:
manage.py runserver 0.0.0.0:8000
And the second docker-compose.yml from the other project,
project_api_b/docker-compose.yml:
networks:
  app-tier:
    driver: bridge

services:
  api:
    container_name: api_b
    image: api_b:latest
    depends_on:
      - postgresql
    networks:
      - app-tier
Inside the api_b container you are running a Django app:
manage.py runserver 0.0.0.0:8001
When trying to connect from container api_a to api_b, the URL of the api_b container will be:
http://<get_ip_from_script_above>:8001/
This can be especially valuable if you are using more than two (three or more) docker-compose projects and it's hard to provide a common network for all of them; it's a good workaround and solution.
To connect two docker-compose projects you need a network, and both docker-compose files must be put on that network.
You can create the network with docker network create name-of-network,
or you can simply put a network declaration in the networks option of the docker-compose file; when you run docker-compose up, the network is created automatically.
Put the lines below in both docker-compose files:
networks:
  net-for-alpine:
    name: test-db-net
Note: net-for-alpine is the internal name of the network; it is used inside the docker-compose file and can differ between the files.
test-db-net is the external name of the network and must be the same in the two docker-compose files.
Assume we have docker-compose.db.yml and docker-compose.alpine.yml.
docker-compose.alpine.yml would be:
version: '3.8'
services:
  alpine:
    image: alpine:3.14
    container_name: alpine
    networks:
      - net-for-alpine
    # these two commands keep the alpine container running
    stdin_open: true # docker run -i
    tty: true # docker run -t
networks:
  net-for-alpine:
    name: test-db-net
docker-compose.db.yml would be:
version: '3.8'
services:
  db:
    image: postgres:13.4-alpine
    container_name: psql
    networks:
      - net-for-db
networks:
  net-for-db:
    name: test-db-net
To test the network, go inside the alpine container:
docker exec -it alpine sh
Then you can check the network with the following commands:
# if it returns 0 or you see no error, the network is established
nc -z psql 5432  # psql is the container name, 5432 the Postgres port
or
ping psql
I'm running multiple identical docker-compose.yml files in different directories, using .env files to make slight differences, and Nginx Proxy Manager to communicate with the other services. Here are my files:
Make sure you have created the shared network first:
docker network create nginx-proxy-man
/domain1.com/docker-compose.yml, /domain2.com/docker-compose.yml, ...
version: "3.9"
services:
  webserver:
    build:
      context: ./bin/${PHPVERSION}
    container_name: "${COMPOSE_PROJECT_NAME}-${PHPVERSION}"
    ...
    networks:
      - default  # network outside
      - internal # network internal
  database:
    build:
      context: "./bin/${DATABASE}"
    container_name: "${COMPOSE_PROJECT_NAME}-${DATABASE}"
    ...
    networks:
      - internal # network internal
networks:
  default:
    external: true
    name: nginx-proxy-man
  internal:
    internal: true
.env file (just change COMPOSE_PROJECT_NAME):
COMPOSE_PROJECT_NAME=domain1_com
.
.
.
PHPVERSION=php56
DATABASE=mysql57
webserver.container_name: domain1_com-php56 joins the default network (name: nginx-proxy-man), created beforehand so that Nginx Proxy Manager can reach it from the outside.
Note: container_name must be unique within the same network.
database.container_name: domain1_com-mysql57, which is easier to distinguish.
In the same docker-compose.yml, the services connect to each other via the service name, because they share the domain1_com_internal network. To be more secure, set this network with the option internal: true.
Note: if you don't explicitly specify networks for each service, but just use a common external network for both docker-compose.yml files, then it's likely that domain1_com will use domain2_com's database.
Another option is just to bring up the first module with docker-compose, check the IP associated with the module, and connect the second module to the previous network as external, pointing at that internal IP.
Example:
app1 - new-network created in the service lines, marked as external: true at the bottom
app2 - indicate the "new-network" created when app1 goes up, mark it as external: true at the bottom, and set in the config the IP that app1 has on this network.
With this, they should be able to talk with each other.
*This way is just for local-test purposes, in order not to end up with an overly complex configuration.
**I know it is very much a 'patch' approach, but it works for me and I think it is simple enough for others to take advantage of.
Answer for Docker Compose '3' and up
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file. Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container using Docker's internal network (on the default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
I had a similar case where I was working with separate docker-compose files on a Docker swarm with an overlay network. All I had to do was change the networks parameters like so:
first docker-compose.yaml
version: '3.9'
.
.
.
networks:
  net:
    driver: overlay
    attachable: true
docker-compose -p app up
Since I specified the project name as app using -p, the initial network will be app_net.
Now, in order to run another docker-compose project with multiple services that uses the same network, you need to set it up as follows:
second docker-compose.yaml
version: '3.9'
.
.
.
networks:
  net-ref:
    external: true
    name: app_net
docker stack deploy -c docker-compose.yml mystack
No matter what name you give to the stack, the network will not be affected and will always refer to the existing external network called app_net.
PS: It's important to make sure to check your docker-compose version.
version: '2'
services:
  bot:
    build: .
    volumes:
      - '.:/home/node'
      - /home/node/node_modules
    networks:
      - my-rede
    mem_limit: 100m
    memswap_limit: 100m
    cpu_quota: 25000
    container_name: 236948199393329152_585042339404185600_bot
    command: node index.js
    environment:
      NODE_ENV: production
networks:
  my-rede:
    external:
      name: name_rede_externa
Follow-up to JohnHarris' answer, adding some more details which may be useful to someone. Let's take two docker-compose files and connect them through networks:
1st foldername/docker-compose.yml:
version: '2'
services:
  some-contr:
    container_name: []
    build: .
    ...
    networks:
      - somenet
    ports:
      - "8080:8080"
    expose:
      # Opens port 8080 on the container
      - "8080"
    environment:
      PORT: 8080
    tty: true
networks:
  somenet:
    driver: bridge
2nd docker-compose.yml:
version: '2'
services:
  pushapiserver:
    container_name: [container_name]
    build: .
    command: "tail -f /dev/null"
    volumes:
      - ./:/[work_dir]
    working_dir: /[work dir]
    image: [name of image]
    ports:
      - "8060:8066"
    environment:
      PORT: 8066
    tty: true
    networks:
      - foldername_somenet
networks:
  foldername_somenet:
    external: true
Now you can make API calls between the services (between different containers), e.g.:
http://pushapiserver:8066/send_push, called from code in the 1st docker-compose.yml project.
Two common mistakes (at least I made them a few times):
- Take note of the [foldername] in which your docker-compose.yml file lives. See above: in the 2nd docker-compose.yml I added the folder name to the network, because Docker creates the network as [foldername]_[networkname].
- Port: this one is very common. Note I used 8066 when making the connection, i.e. http://pushapiserver:8066/...; 8066 is the Docker container's port (2nd docker-compose.yml). When talking across compose projects, Docker uses the Docker container port [8066], not the host machine's mapped port [8060].

Error creating a docker-compose connecting a Java and a MySQL container

I am trying to connect the container of my Spring Boot application with the container of a MySQL image using docker-compose. However, when I run docker-compose up, my terminal enters a loop where it starts the Spring application, tries to connect to the MySQL container, fails, and keeps retrying. The error that I get is com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
docker-compose file:
version: '3.8'
services:
  mysqldb:
    image: mysql
    platform: linux/x86_64
    env_file: ./.env
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  app:
    depends_on:
      - mysqldb
    build: .
    restart: always
    env_file: ./.env
    ports:
      - $APP_LOCAL_PORT:$APP_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
.env:
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=12345678
MYSQLDB_DATABASE=dronefeederdb
MYSQLDB_LOCAL_PORT=3306
MYSQLDB_DOCKER_PORT=3306
APP_LOCAL_PORT=8080
APP_DOCKER_PORT=8080
Application.yaml:
server:
port: 8080
spring:
datasource:
username: ${DB_USER}
password: ${DB_PASSWORD}
url: jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}
jpa:
hibernate:
ddl-auto: update
show-sql: true
open-in-view: false
#https://ia-tec-development.medium.com/lombok-e-spring-data-jpa-142398897733
security.user:
name: dronefeeder
password: dronefeeder
#https://www.baeldung.com/spring-boot-security-autoconfiguration
resilience4j.circuitbreaker:
configs:
default:
waitDurationInOpenState: 10s
failureRateThreshold: 10
#instances:
#estudantes:
#baseConfig: default
Dockerfile:
FROM openjdk:11.0-jdk as build-image
WORKDIR /app
COPY . .
RUN ./mvnw clean package -DskipTests
FROM openjdk:11.0-jre
COPY --from=build-image /app/target/*.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/app.jar"]
Repository link:
https://github.com/julia-baptista/dronefeeder/tree/docker-configuration
I believe the issue is your application's use of localhost for the SQL URL in the Application.yaml property file. Since your app runs in a container by itself, it looks at the container's own localhost, while your SQL server is in another container with its own localhost. localhost inside a Docker container does not refer to the host; it refers to the container itself. If you want to access the host machine, this is an excellent answer: From inside of a Docker container, how do I connect to the localhost of the machine?
url: jdbc:mysql://localhost:3306/dronefeederdb
localhost should not be used here; you need to use the SQL container's hostname.
The fastest option is to use host.docker.internal instead of localhost. But it's not the best.
Another quick option is to run the two containers on the same Docker network. Define the network in your compose file the same way as the volumes, then attach each container to it. See Networking in Compose. Then you can point your SQL URL at the SQL container's service name instead of localhost. So this:
url: jdbc:mysql://localhost:3306/dronefeederdb becomes url: jdbc:mysql://mysqldb:3306/dronefeederdb
Neither option is robust, since you're hardcoding the container name in the application property file. A better solution is to have an environment variable in your webApp image that accepts the URL of the SQL server. Then you can provide the SQL location when running the container, or in your compose file (Environment variables in Compose). This way the SQL server can be anywhere.
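A sketch of that idea, relying on Spring Boot's relaxed binding (the SPRING_DATASOURCE_URL environment variable maps onto spring.datasource.url, so the property file needs no change; the URL below is an assumption based on this compose file):

# docker-compose.yml (sketch)
app:
  environment:
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysqldb:3306/dronefeederdb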
Update:
There were a couple of issues in the compose and env files that caused the MySQL container to fail at startup, so the webApp was not able to connect.
Credentials
MYSQL_USER was set to root. MySQL already creates the root user; you cannot create it again. I changed it to foo. See the Environment Variables section in the official Docker image readme for more.
MYSQL_PASSWORD was not set. This is the password for the user your app will use. I set it to pass!123
The app's DB_PASSWORD was set to the root user's. That would have been OK if SQL had started and the app were using the root user, I guess. But I changed it to the non-root user's password, since we're setting DB_USER=foo.
Network was not defined
The two containers need to be on the same "docker network" if they are to talk to each other in Docker on the same machine. There's more to this that is beyond my experience, but in this case they need to be on the same network for app to reach mysqldb by its container name. I created dronefeederNet and added each container to it.
Files:
.env
MYSQLDB_USER=foo
MYSQLDB_PASSWORD=pass!123
MYSQLDB_ROOT_PASSWORD=12345678
MYSQLDB_DATABASE=dronefeederdb
MYSQLDB_LOCAL_PORT=3307
MYSQLDB_DOCKER_PORT=3306
APP_LOCAL_PORT=8081
APP_DOCKER_PORT=8080
docker-compose.yml
version: '3.8'
services:
mysqldb:
image: mysql
platform: linux/x86_64
env_file: ./.env
restart: always
environment:
- MYSQL_USER=$MYSQLDB_USER
- MYSQL_PASSWORD=$MYSQLDB_PASSWORD
- MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
- MYSQL_DATABASE=$MYSQLDB_DATABASE
ports:
- $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
volumes:
- db:/var/lib/mysql
networks:
- dronefeederNet
app:
depends_on:
- mysqldb
build: .
restart: always
env_file: ./.env
ports:
- $APP_LOCAL_PORT:$APP_DOCKER_PORT
environment:
- DB_HOST=mysqldb
- DB_USER=$MYSQLDB_USER
- DB_PASSWORD=$MYSQLDB_PASSWORD
- DB_NAME=$MYSQLDB_DATABASE
- DB_PORT=$MYSQLDB_DOCKER_PORT
stdin_open: true
tty: true
networks:
- dronefeederNet
volumes:
db:
networks:
dronefeederNet:
Give this a try and I hope it runs. I was able to start it up ok.
You need to add a depends_on: clause to the app definition block, so that Docker Compose does not boot the application until the database is up.
Check this documentation: Docker Compose Startup Order
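Note that plain depends_on only waits for the container to start, not for MySQL to accept connections. With a recent Docker Compose (the Compose Spec restored the long depends_on syntax), you can gate startup on a real health check; a sketch, where the mysqladmin probe is an assumption about the image:

services:
  mysqldb:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      retries: 10
  app:
    depends_on:
      mysqldb:
        condition: service_healthy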

Issue with communication between containers after deploying OPA with my application on GKE

From my application side, I have a function named myfunction; via this function we call OPA using its endpoint and the OPA input as function parameters, and it gives the response back through "data" in the "function(context, data)" section. This is how I call the function:
myfunction('http://localhost:8181/v1/data/play/policy', OPAinput, {
    onSuccess : function(context, data) {
        var permit = data.result.permit;
        Log.info('permit ' + permit);
        Log.info("Successfully posted data.");
    },
    onFail : function(context) {
        Log.info("Failed to post data");
    }
});
When I tested this function by running OPA with the application locally, it worked fine. But now that I have deployed OPA with the application as a sidecar container on GKE, the same thing doesn't work. It says:
“Cannot get property "permit" of null at jdk.scripting.nashorn/jdk.nashorn.internal.runtime.ECMAErrors.error(ECMAErrors.java:57) at jdk.scripting.nashorn/jdk.nashorn.internal.runtime.ECMAErrors.typeError(ECMAErrors.java:213………….”
These are the OPA logs:
2020-06-26 15:38:22.000 IST {"level":"info","msg":"Initializing server.","insecure_addr":"","diagnostic-addrs":[],"addrs":[":8181"]}
2020-06-26 16:24:52.000 IST {"msg":"Received request.","req_path":"/v1/data/play/policy","req_id":1,"level":"info","req_method":"POST","client_addr":"127.0.0.1:39530"}
2020-06-26 16:24:52.000 IST {"resp_status":200,"level":"info","req_method":"POST","req_id":1,"client_addr":"127.0.0.1:39530","req_path":"/v1/data/play/policy","resp_bytes":2,"msg":"Sent response.","resp_duration":9.564696}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-site
  namespace: myapp
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      deployment: myapp
      app: myapp
      pod: myapp
  template:
    metadata:
      labels:
        deployment: myapp
        app: myapp
        pod: myapp
    spec:
      containers:
        - name: opa
          image: openpolicyagent/opa:latest
          ports:
            - name: http
              containerPort: 8181
          args:
            - "run"
            - "--ignore=.*" # exclude hidden dirs created by Kubernetes
            - "--server"
            - "/policies"
          volumeMounts:
            - readOnly: true
              mountPath: /policies
              name: example-policy
        - name: myapp
          image: nickchase/myapp:v1
          ports:
            - containerPort: 9763
              protocol: TCP
          volumeMounts:
            - name: myapp-server-conf
              mountPath: /home/myapp/myapp-config-volume/repository/conf/deployment.toml
              subPath: deployment.toml
      serviceAccountName: "myappsvc-account"
      volumes:
        - name: myapp-server-conf
          configMap:
            name: myapp-server-conf
        - name: example-policy
          configMap:
            name: example-policy
Could you please help me to identify this issue :(
When I tested this function by running OPA with the application locally, it worked fine.But now I have deployed OPA with the application as a sidecar container on GKE, and I tried the same thing but it doesn't work. It says that
If it works locally and not in GKE, that means something is different. Since OPA gives back an HTTP 200 response, the OPA container is likely running OK, but the policy, input, or data differs from what you had running locally.
Try enabling the console decision logger via --set=decision_logs.console=true in the OPA args. This will show, in OPA's log output, the input it received as well as the result it sent back. That should help guide the investigation.
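Applied to the Deployment above, that would look roughly like this (a sketch of the opa container's args):

args:
  - "run"
  - "--ignore=.*"
  - "--server"
  - "--set=decision_logs.console=true"
  - "/policies"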
I would also double-check that all of the policies and data have been loaded into OPA the same way as you had locally. Differences in directory paths can affect which *.json/*.yaml files are loaded, and any missing or otherwise different *.rego files would affect the result as well.

How to have the pod created run an application (command and args) and at the same time have a deployment and service referring to it?

Context:
Tech: Java, Docker Toolbox, Minikube.
I have a java web application (already packaged as web-tool.jar) that I want to run while having all the benefits of kubernetes.
In order to instruct kubernetes to take the image locally I use an image tag:
docker build -t despot/web-tool:1.0 .
and then make it available for minikube by:
docker save despot/web-tool:1.0 | (eval $(minikube docker-env) && docker load)
The Dockerfile is:
FROM openjdk:11-jre
ADD target/web-tool-1.0-SNAPSHOT.jar app.jar
EXPOSE 1111
EXPOSE 2222
1. How can I have the pod created, run the java application and at the same time have a deployment and service referring to it?
1.1. Can I have a deployment created that will propagate a command and arguments when creating the pod? (best for me as I ensure creating a deployment and a service prior to creating the pod)
1.2. If 1.1. not feasible, can I kubectl apply some pod configuration with a command and args to the already created deployment/pod/service? (worse solution as additional manual steps)
1.3. If 1.2. not feasible, is it possible to create a deployment/service and attach it to an already running pod (that was started with "kubectl run ... java -jar app.jar reg")?
What I tried is:
a) Have a deployment created (that automatically starts a pod) and exposed (service created):
kubectl create deployment reggo --image=despot/web-tool:1.0
With this, a pod is created with a CrashLoopBackoff state as it doesn't have a foreground process running yet.
b) Tried the following in the hope that the deployment would accept a command and args and propagate them to the pod creation (1.1.):
kubectl create deployment reggo --image=despot/web-tool:1.0 -- java -jar app.jar reg
The same outcome for the pod, as the deployment doesn't accept command and args this way.
c) Tried applying a pod configuration with a command and args after the deployment created the pod, so I ran the command from a), found the id (reggo-858ccdcddd-mswzs) of the pod with (kubectl get pods) and then I executed:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: reggo-858ccdcddd-mswzs
spec:
  containers:
    - name: reggo-858ccdcddd-mswzs
      command: ["java"]
      args: ["-jar", "app.jar", "reg"]
EOF
but I got:
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The Pod "reggo-858ccdcddd-mswzs" is invalid:
* spec.containers[0].image: Required value
* spec.containers: Forbidden: pod updates may not add or remove containers
which lets me think that I can't execute the command by applying the command/args configuration.
Solution (using Arghya's answer):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reggo
spec:
  selector:
    matchLabels:
      app: reggo-label
  template:
    metadata:
      labels:
        app: reggo-label
    spec:
      containers:
        - name: reggo
          image: "despot/web-tool:1.0"
          command: ["java"]
          args: ["-jar", "app.jar", "reg"]
          ports:
            - containerPort: 1111
EOF
and executing:
kubectl expose deployment reggo --type=NodePort --port=1111
You could have the java -jar command as the ENTRYPOINT in the Dockerfile itself, which tells Docker to run the Java application.
FROM openjdk:11-jre
ADD target/web-tool-1.0-SNAPSHOT.jar app.jar
EXPOSE 1111
EXPOSE 2222
ENTRYPOINT ["java", "-jar", "app.jar", "reg"]
Alternatively, the same can be achieved via the command and args sections in a Kubernetes yaml:
containers:
  - name: myapp
    image: myregistry.azurecr.io/myapp:0.1.7
    command: ["java"]
    args: ["-jar", "app.jar", "reg"]
Now, coming to the Forbidden: pod updates may not add or remove containers error: it happens because you are trying to modify an existing pod object's containers section, which is not allowed. Instead, get the entire deployment yaml, open it in an editor, edit it to add the command section, then delete the existing deployment and finally apply the modified deployment yaml to the cluster.
kubectl get deploy reggo -o yaml --export > deployment.yaml
Delete the existing deployment via kubectl delete deploy reggo
Edit deployment.yaml to add the right command
Apply the yaml to the cluster kubectl apply -f deployment.yaml
As already mentioned, the recommended approach is to add an ENTRYPOINT in your Dockerfile with the command you want to use.
However if you want to create a deployment using kubectl command you can run
kubectl run $DEPLOYMENT_NAME --image=despot/web-tool:1.0 --command -- java -jar app.jar reg
Additionally, if you want to expose the deployment using the same command, you can pass:
--expose=true, which will create a service of type ClusterIP, and
--port=$PORT_NUMBER to choose the port on which it is exposed.
To change the service type to NodePort you have to run:
kubectl run $DEPLOYMENT_NAME --image=despot/web-tool:1.0 --expose=true --service-overrides='{ "spec": { "type": "NodePort" } }' --port=$PORT_NUMBER --command -- java -jar app.jar reg

Spring Boot App Crashes when Scaled By Kubernetes

I have a Spring Boot app running with Spring Actuator enabled. I am using the Spring Actuator health endpoint to serve as the readiness and liveness checks. All works fine with a single replica. When I scale out to 2 replicas, both pods crash. They both fail readiness checks and end up in an endless destroy/re-create loop. If I scale back to 1 replica, the cluster recovers and the Spring Boot app becomes available. Any ideas what might be causing this issue?
Here is the deployment config (the context root of the Spring Boot app is /dept):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gl-dept-deployment
  labels:
    app: gl-dept
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: gl-dept
  template:
    metadata:
      labels:
        app: gl-dept
    spec:
      containers:
        - name: gl-dept
          image: zmad5306/gl-dept:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /dept/actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /dept/actuator/health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 10
            successThreshold: 1
            failureThreshold: 5
The curl command hangs. It appears the entire minikube server hangs, and the dashboard quits responding.
So in that case, I would guess the VM backing minikube is sized too small to handle all the items deployed inside it. I haven't played with minikube enough to know how much it carries over from its libmachine underpinnings, but in the case of docker-machine, one can provide --virtualbox-memory=4096 (or set an environment variable: env VIRTUALBOX_MEMORY_SIZE=4096 docker-machine ...). And, of course, one should use the memory settings that correspond to the driver in use by minikube (HyperKit, xhyve, Hyper-V, whatever).
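With minikube itself, the equivalent would be something like the following (a sketch; the VM's memory size is fixed when the machine is created, so the profile has to be recreated):

$ minikube stop
$ minikube delete
$ minikube start --memory=4096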
