My Java backend app creates some files while it is running. When I rebuild it after making changes, those files are deleted and the app has to create them again. How can I persist these files? I tried to create a volume, but it doesn't work.
This is my docker-compose config:
version: '3'
services:
examledb:
container_name: examle-docker-db
image: postgres
volumes:
- examle-docker-db:/var/lib/postgresql/data
ports:
- "5555:5432"
expose:
- "5555"
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_USER=postgres
- POSTGRES_DB=examle
- PGDATA=/var/lib/postgresql/data/pgdata
networks:
- examle-docker-network
restart: unless-stopped
backend:
container_name: examle-docker-backend
build: ./backend
volumes:
- /var/lib/docker/volumes/example_prod_example-backend-volume/_data:/root/projects/example_PROD/backend
ports:
- "8080:8080"
- "8888:8888"
depends_on:
- examledb
networks:
- examle-docker-network
environment:
SPRING_DATASOURCE_URL: jdbc:postgresql://examle-docker-db:5432/examle
restart: unless-stopped
frontend:
container_name: examle-docker-frontend
build: ./frontend
restart: unless-stopped
command: serve -s dist/vu4y-frontend -l 4200
networks:
- examle-docker-network
nginx:
image: nginx:stable
container_name: examle-docker-nginx
ports:
- "80:80"
- "443:443"
volumes:
- ./data/nginx:/etc/nginx/conf.d
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
depends_on:
- frontend
- backend
networks:
- examle-docker-network
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
certbot:
image: certbot/certbot
restart: unless-stopped
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
networks:
- examle-docker-network
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
volumes:
examle-docker-db: { }
networks:
examle-docker-network:
driver: bridge
I also tried to create the volume like this:
volumes:
- example-backend-volume:/root/projects/example_PROD/backend
That doesn't work either.
My docker-compose.yml is located in /root/projects/Example.
Any advice would be very helpful. All the files are created inside the backend folder, in the same directory as src and pom.xml.
I solved the problem. The issue was that I didn't understand how Docker volumes work: I thought I had to mount the host directory that my Docker app runs from, but what matters is the path inside the container where the app actually writes its files (in my case /root/projects/example_PROD/backend); mount the volume there and have the app save its files to that path.
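For anyone hitting the same thing, a minimal sketch of the mapping that ends up working (the volume name is the one from the attempt above, and the container path is specific to my setup; the important part is that the right-hand side is the path inside the container where the app writes its files):
services:
  backend:
    build: ./backend
    volumes:
      # named volume on the left, container path the app writes to on the right
      - example-backend-volume:/root/projects/example_PROD/backend
volumes:
  example-backend-volume: { }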
Related
I am trying to run my Java application (a university project) in debug mode in IntelliJ using a Dockerfile.
I found this tutorial:
https://www.jetbrains.com/help/idea/debug-a-java-application-using-a-dockerfile.html#create-remote-debug-config
I tried to follow each step, but got stuck at step 4:
Select the Docker configuration that runs your app (MyPassApp) and
specify the command to use when running your app in the Custom Command
field. The remote debug configuration will use this custom command
instead of the one defined in the Dockerfile. This command should
contain the -agentlib option to let the debugger attach to the
process:
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -jar JavaPassFromConsole.jar
I don't know what any of those options mean, so I don't know what to put in the Custom Command field.
This is my docker-compose.yml file:
version: '3'
volumes:
  mysql_data: {}
networks:
back:
services:
backend:
build:
context: '.'
dockerfile: 'docker_config/backend/Dockerfile'
depends_on:
- db
links:
- db
ports:
- 8080:8080
- 9990:9990
environment:
CONTACT_USERNAME: <XXX>
CONTACT_PASSWORD: <XXX>
networks:
- back
db:
container_name: db
image: mysql:5.7
restart: always
volumes:
- mysql_data:/var/lib/mysql
command: --lower_case_table_names=1
environment:
MYSQL_DATABASE: <XXX>
MYSQL_ROOT_PASSWORD: root
networks:
- back
ports:
- 3306:3306
- 5005:5005
phpmyadmin:
depends_on:
- db
image: phpmyadmin/phpmyadmin
restart: always
ports:
- 4000:80
environment:
MYSQL_ROOT_PASSWORD: root
networks:
- back
Any help is appreciated!!
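In case it helps with step 4: the Custom Command is just the normal java -jar line for your own application with the -agentlib option put in front, and the debug port (5005 in the tutorial) has to be published on the backend service so IntelliJ can attach to it. A sketch only; app.jar is a placeholder for whatever jar your Dockerfile actually runs, and on Java 9+ you may need address=*:5005 instead of address=5005:
# Custom Command for the remote debug configuration (app.jar is a placeholder)
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005 -jar app.jar

# and in docker-compose.yml, publish the debug port on the backend service
ports:
  - 8080:8080
  - 9990:9990
  - 5005:5005
Note that in the compose file above the db service already publishes host port 5005, so either move that mapping to backend or choose a different host port for the debugger.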
I am trying to make an HTTP request from library to people-service, but nothing I have tried works.
I saw that using the service name should work, so I tried http://library:8080, but it returns the error
java.lang.IllegalArgumentException: unsupported URI
Here is my Dockerfile:
FROM openjdk:11-jdk-slim
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
and my docker-compose:
version: '3.5'
services:
library:
build:
context: ./
ports:
- "8081:8081"
networks:
- library-network
restart: on-failure
mysql-service:
image: mysql:5.7
ports:
- "3306:3306"
networks:
- library-network
environment:
- MYSQL_ROOT_PASSWORD=admin
- MYSQL_USER=admin
- MYSQL_PASSWORD=admin
- MYSQL_DATABASE=bootdb
restart: on-failure
people-service:
build:
context: ./
args:
JAR_FILE: ./target/library.jar
ports:
- "8080:8080"
environment:
- DB_HOST=jdbc:mysql://mysql-service:3306/library
networks:
- library-network
depends_on:
- mysql-service
- library
restart: on-failure
networks:
library-network:
driver: bridge
Your question says:
I am trying to make an HTTP request from library to people-service
From that I assume the request is initiated in library. Then the URL should look like http://people-service:8080.
But from the error message I'd assume your library is not able to make HTTP requests at all, regardless of where they are going, which is a bit strange. Investigate that first, using a simple URL like http://stackoverflow.com (which will return HTTP status 301).
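One quick way to check that is from inside the running library container, assuming curl is available in the image (it is not included in openjdk:11-jdk-slim by default, so you may need to install it or fall back to getent for the DNS part):
# run from the directory that contains docker-compose.yml
docker-compose exec library getent hosts people-service          # does the service name resolve?
docker-compose exec library curl -v http://people-service:8080/  # can we reach it over HTTP?
docker-compose exec library curl -v http://stackoverflow.com     # can we reach anything at all?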
I have three services.
Config server
Eureka server
api-gateway
If I run them individually, everything works fine. Now I am trying to dockerize these services, so I have prepared a Dockerfile for each of them:
FROM java:8
VOLUME /tmp
ADD config-server/build/libs/config-server-0.0.1-SNAPSHOT.jar config-server-0.0.1-SNAPSHOT.jar
CMD ["java", "-jar", "config-server-0.0.1-SNAPSHOT.jar"]
VOLUME /var/lib/config-repo
EXPOSE 10270
FROM java:8
VOLUME /tmp
ADD eureka-server/build/libs/eureka-server-1.0-SNAPSHOT.jar eureka-server-1.0-SNAPSHOT.jar
CMD ["java","-jar","eureka-server-1.0-SNAPSHOT.jar"]
EXPOSE 10210
FROM java:8
VOLUME /tmp
ADD api-gateway/build/libs/api-gateway-0.0.1-SNAPSHOT.jar api-gateway-0.0.1-SNAPSHOT.jar
RUN bash -c 'touch /api-gateway-0.0.1-SNAPSHOT.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/api-gateway-0.0.1-SNAPSHOT.jar"]
And then I have prepared my docker-compose file:
version: '3'
services:
eureka-server:
restart: always
container_name: eureka-server
build:
context: .
dockerfile: eureka-server/Dockerfile_eureka_server
expose:
- 10210
ports:
- 10210:10210
networks:
- servicenet
config-server:
restart: always
container_name: config-server
build:
context: .
dockerfile: config-server/Dockerfile_config_server
expose:
- 10270
ports:
- 10270:10270
healthcheck:
test: ["CMD", "curl", "-f", "http://config-server:10270"]
interval: 30s
timeout: 10s
retries: 5
networks:
- servicenet
api-gateway:
restart: on-failure
container_name: api-gateway
build:
context: .
dockerfile: api-gateway/Dockerfile_api_gateway
expose:
- 10200
ports:
- 10200:10200
networks:
- servicenet
links:
- config-server
- eureka-server
networks:
servicenet:
driver: bridge
But api-gateway starts before config-server has completely started its service. That's why api-gateway comes up on port 8080 and looks for the Eureka server at host localhost and port 8621. Even though those hosts and ports don't exist inside Docker, it keeps looking for the Eureka server there and never fetches its configuration from the config server again. Is there anything wrong with my configuration?
My application.properties file on GitHub looks like this:
server.port=10200
#Eureka configuration
eureka.instance.metadataMap.instanceId=${vcap.application.instance_id:${spring.application.name}:${spring.application.instance_id:${random.value}}}
lombok.equalsAndHashCode.callSuper = call
eureka.instance.instanceId=${spring.application.name}:${spring.application.instance_id:${random.value}}
eureka.client.registryFetchIntervalSeconds=5
eureka.client.serviceUrl.defaultZone=http://eureka-server:10210/eureka
spring.cloud.service-registry.auto-registration.enabled=true
eureka.client.enabled=true
eureka.client.serviceUrl.registerWithEureka=true
lombok.anyConstructor.suppressConstructorProperties = true
#Zuul Configuration
# A prefix that can be added to the beginning of all requests.
#zuul.prefix=/api
# Disable accessing services using service name (i.e. gallery-service).
# They should be only accessed through the path defined below.
zuul.ignored-services=*
# Map paths to services
zuul.routes.gallery-service.path=/gallery/**
zuul.routes.gallery-service.service-id=gallery-manager
zuul.routes.image-service.path=/image/**
zuul.routes.image-service.service-id=image-service
zuul.routes.book-manager.path=/book-manager/**
zuul.routes.book-manager.service-id=book-manager
zuul.routes.auth-service.path=/auth/**
zuul.routes.auth-service.service-id=auth-manager
zuul.routes.remote-library.path=/remote/**
zuul.routes.remote-library.service-id=remote-library
#zuul.routes.auth-service.strip-prefix=false
# Exclude authorization from sensitive headers
zuul.routes.auth-service.sensitive-headers=Cookie,Set-Cookie
NB: If I try this with services other than api-gateway, it works fine. I am using a Zuul proxy for the API gateway service.
A quick fix would be to add a depends_on clause so that api-gateway depends on the config server; that way the api-gateway container won't start until the config-server container is up.
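A minimal sketch of that quick fix, using the service names from the compose file above (keep in mind that a plain depends_on only waits for the config-server container to start, not for the application inside it to finish booting):
api-gateway:
  depends_on:
    - config-server
    - eureka-server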
To make this actually work, we have to wait until the other services are really up. So I added a wait script (wait-for-it.sh) and rewrote the docker-compose file like this:
version: '3'
services:
eureka-server:
restart: always
container_name: eureka-server
build:
context: .
dockerfile: eureka-server/Dockerfile_eureka_server
expose:
- 10210
ports:
- 10210:10210
networks:
- servicenet
config-server:
restart: always
container_name: config-server
build:
context: .
dockerfile: config-server/Dockerfile_config_server
expose:
- 10270
ports:
- 10270:10270
healthcheck:
test: ["CMD", "curl", "-f", "http://config-server:10270"]
interval: 30s
timeout: 10s
retries: 5
networks:
- servicenet
api-gateway:
restart: always
container_name: api-gateway
build:
context: .
dockerfile: api-gateway/Dockerfile_api_gateway
expose:
- 10200
ports:
- 10200:10200
networks:
- servicenet
links:
- config-server
depends_on:
- config-server
command: ["./wait-for-it.sh", "config-server:10270","--","./wait-for-it.sh", "eureka-server:10210", "--", "java","-jar", "api-gateway-0.0.1-SNAPSHOT.jar"]
And I also rewrote the Dockerfile like this (for api-gateway):
FROM java:8
VOLUME /tmp
ADD api-gateway/build/libs/api-gateway-0.0.1-SNAPSHOT.jar api-gateway-0.0.1-SNAPSHOT.jar
ADD wait-for-it.sh wait-for-it.sh
RUN chmod +x wait-for-it.sh
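For reference, the wait-for-it.sh pattern used in the command above is roughly the following; note that it only checks that the TCP port accepts connections, not that the Spring application behind it has finished starting:
# block until host:port accepts TCP connections (optionally with a timeout), then run the given command
./wait-for-it.sh config-server:10270 --timeout=60 -- java -jar api-gateway-0.0.1-SNAPSHOT.jar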
I have Java microservices running in Docker containers that are not able to connect to a MySQL instance hosted locally.
Docker is running in a network with IP addresses like 172.0...
If I run the Java service directly with java -jar, it is able to connect to the MySQL instance running in 10.0..
docker-compose file
version: '2.0'
services:
config-server:
image: test/config-server
container_name: config-server
environment:
- GIT_USERNAME=${GIT_USERNAME}
- GIT_PASSWORD=${GIT_PASSWORD}
ports:
- 8889:8889
entrypoint: ["java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Dspring.profiles.active=docker", "-Drun.arguments=GIT_USERNAME=${GIT_USERNAME}, GIT_PASSWORD=${GIT_PASSWORD} -Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
discovery-server:
image: test/discovery-server
container_name: discovery-server
links:
- config-server
depends_on:
- config-server
entrypoint: ["./wait-for-it.sh","config-server:8889","--timeout=60","--","java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Dspring.profiles.active=docker", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
ports:
- 8761:8761
web-authentication:
image: test/web-authentication
container_name: web-authentication
links:
- config-server
- discovery-server
depends_on:
- discovery-server
entrypoint: ["./wait-for-it.sh","discovery-server:8761","--timeout=60","--","java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Dspring.profiles.active=docker", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
ports:
- 8444:8444
gateway:
image: test/gateway
container_name: gateway
links:
- config-server
- discovery-server
- web-authentication
depends_on:
- discovery-server
entrypoint: ["./wait-for-it.sh","discovery-server:8761","--timeout=60","--","java", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-Dspring.profiles.active=docker", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
ports:
- 81:8765
The issue was resolved after adding a networks configuration to docker-compose.yml; the problem was that MySQL and the Docker containers were running in different subnets.
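For anyone looking for the shape of that change, a sketch only; the subnet below is illustrative and has to be picked so it does not collide with the network your local MySQL host sits in. Overriding the default network works because services that do not declare networks are attached to it automatically:
networks:
  default:
    driver: bridge
    ipam:
      config:
        - subnet: 172.28.0.0/16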
Help me, please. I have one docker-compose file. It has 2 test services and 2 working services, and there is a script. I need to run things sequentially: when I run docker-compose up, the 2 test services (post-service-test and rabb-service-test) should come up first, then a script should run (it builds the application against these running test services), then I need to stop and remove the test services, and only then bring up the 2 working services (post-service and rabb-service). Can you please tell me how to do this with a docker-compose file like this:
version: '3'
services:
postgres:
container_name: post-service
image: postgres:9-alpine
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- ${PWD}/db_migration/cdp_dump.sql:/home/postgres/cdp_dump.sql
ports:
- "5432:5432"
networks:
- work_network
labels:
container_group: work_env
rabbitmq:
container_name: rabb-service
image: rabbitmq:3-management-alpine
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
- RABBITMQ_DEFAULT_VHOST=${RABBITMQ_DEFAULT_VHOST}
ports:
- "15672:15672"
- "5672:5672"
depends_on:
- postgres
networks:
- work_network
labels:
container_group: work_env
postgres_test:
container_name: post-service-test
image: postgres:9-alpine
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
networks:
- test_network
labels:
container_group: test_env
rabbitmq_test:
container_name: rabb-service-test
image: rabbitmq:3-management-alpine
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
depends_on:
- postgres_test
networks:
- test_network
labels:
container_group: test_env
networks:
work_network:
test_network:
application:
container_name: build
image: openjdk:8-jdk
environment:
- POSTGRES_HOST=${POSTGRES_HOST}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- RABBITMQ_HOST=${RABBITMQ_HOST}
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASSWORD=${RABBITMQ_DEFAULT_PASS}
volumes:
- ${project_home}:/root
command:
/bin/bash < build_script
Maybe I put the script into the file incorrectly? The script itself looks like this, and it works if I run it separately. But I need to include it here so that docker-compose does everything. The script itself:
docker run --name build -i --net test-network \
-v ${project_home}:/root \
-e POSTGRES_HOST=${POSTGRES_HOST} \
-e POSTGRES_DB=${POSTGRES_DB} \
-e POSTGRES_USER=${POSTGRES_USER} \
-e POSTGRES_PASSWORD=${POSTGRES_PASSWORD} \
-e RABBITMQ_HOST=${RABBITMQ_HOST} \
-e RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER} \
-e RABBITMQ_DEFAULT_PASSWORD=${RABBITMQ_DEFAULT_PASS} \
openjdk:8-jdk /bin/bash < build_script
And another question: can commands such as docker stop post-service-test, docker rm post-service-test, docker stop rabb-service-test, docker rm rabb-service-test (stopping and removing the test services) be run separately, and can they also be placed in docker-compose, or is that impossible?
Thank you in advance!
You have specified the application service in the wrong place inside your docker-compose file. According to your file, it is part of the networks block instead of the actual services block. tl;dr: move the networks block below the application block in the docker-compose file.
The flow that you're trying to achieve won't be possible with just docker-compose and a single docker-compose file. You should break it down into at least 2 files and write something to control the flow, probably a shell script. This also helps with separation of concerns, since the test and working services end up in different files.
So your configuration could look something like this
docker-compose-test.yaml:
version: '3'
services:
postgres_test:
container_name: post-service-test
image: postgres:9-alpine
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
networks:
- test_network
labels:
container_group: test_env
rabbitmq_test:
container_name: rabb-service-test
image: rabbitmq:3-management-alpine
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
depends_on:
- postgres_test
networks:
- test_network
labels:
container_group: test_env
application:
container_name: build
image: openjdk:8-jdk
environment:
- POSTGRES_HOST=${POSTGRES_HOST}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- RABBITMQ_HOST=${RABBITMQ_HOST}
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASSWORD=${RABBITMQ_DEFAULT_PASS}
volumes:
- ${project_home}:/root
networks:
- test_network
command: /bin/bash < build_script
networks:
test_network:
docker-compose.yaml
version: '3'
services:
postgres:
container_name: post-service
image: postgres:9-alpine
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
volumes:
- ${PWD}/db_migration/cdp_dump.sql:/home/postgres/cdp_dump.sql
ports:
- "5432:5432"
networks:
- work_network
labels:
container_group: work_env
rabbitmq:
container_name: rabb-service
image: rabbitmq:3-management-alpine
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_DEFAULT_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_DEFAULT_PASS}
- RABBITMQ_DEFAULT_VHOST=${RABBITMQ_DEFAULT_VHOST}
ports:
- "15672:15672"
- "5672:5672"
depends_on:
- postgres
networks:
- work_network
labels:
container_group: work_env
networks:
work_network:
(Example script you would need to control the flow)
run.sh:
#!/bin/bash
docker-compose -f docker-compose-test.yaml up -d
# Wait for container `build` to exit
docker wait build
docker-compose -f docker-compose-test.yaml down
docker-compose -f docker-compose.yaml up -d
NOTE:
Using docker wait you can wait for your application container to exit, so you know it's safe to stop and remove the containers.
No problem, I figured out how to do this.
I keep a single docker-compose.yml config file and run the following steps (put together as one script after the list):
1) docker-compose up -d rabbitmq_test
2) make build-script (this is application in docker-compose.yml file)
3) docker-compose stop rabbitmq_test postgres_test
4) docker-compose rm -f rabbitmq_test postgres_test
5) docker-compose up -d rabbitmq
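As a single shell script, that is roughly (assuming make build-script is how the build step, i.e. the application service from the compose file, is invoked, and relying on depends_on to start the matching postgres containers):
#!/bin/bash
docker-compose up -d rabbitmq_test               # also starts postgres_test via depends_on
make build-script                                # build the application against the test services
docker-compose stop rabbitmq_test postgres_test
docker-compose rm -f rabbitmq_test postgres_test
docker-compose up -d rabbitmq                    # also starts postgres via depends_on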