Externalize application.properties to docker env - java

I'm developing an application using Spring Boot and Docker. For security reasons I no longer want to use application.properties; I want to use only environment variables.
If you have any best practices, I would be grateful.
This is a snippet of my docker-compose.yml:
version: "2.1"
services:
app_users:
image: images/app_users
container_name: app_user_ctn
build:
context: ../..
dockerfile: docker/dev/Dockerfile
ports:
- "30333:8080"
external_links:
- mysql
environment:
SPRING_DATASOURCE_URL: jdbc:mysql://mysql/myDB?autoReconnect=true
SPRING_DATASOURCE_USERNAME: mysqluser1
SPRING_DATASOURCE_PASSWORD: mysqlpwsword
SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.mysql.jdbc.Driver
LDAP_PASSWORD: ldapPswd
LDAP_URLS: ldap://myServer:389
LDAP_USERNAME: cn=admin,dc=com,dc=expl
When I make a request to LDAP I get a NullPointerException because the LDAP environment variables are not initialized.
When I use application.yml it works:
...
spring:
  ldap:
    password: ldapPswd
    urls: ldap://myServer:389
    username: cn=admin,dc=com,dc=expl
....
Would you have any ideas?
Best regards
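One likely explanation (not from the original post): the LDAP_* names above do not map to the spring.ldap.* properties the way SPRING_DATASOURCE_* maps to spring.datasource.*. Under Spring Boot's relaxed binding the equivalent environment names would be SPRING_LDAP_*. A minimal sketch of the compose environment block under that assumption:

environment:
  # Relaxed binding: spring.ldap.urls -> SPRING_LDAP_URLS, and so on (sketch)
  SPRING_LDAP_URLS: ldap://myServer:389
  SPRING_LDAP_USERNAME: cn=admin,dc=com,dc=expl
  SPRING_LDAP_PASSWORD: ldapPswd

With names in that form, the spring.ldap entries should not be needed in application.yml at all.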

Related

Error creating a docker-compose connecting a java and a mysql containers

I am trying to connect the container of my Spring Boot application with the container of a MySQL image using docker-compose. However, when I run docker-compose up, my terminal enters a loop where it starts the Spring application, tries to connect to the MySQL container, fails, and keeps trying. The error I get is com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure.
docker-compose file:
version: '3.8'
services:
  mysqldb:
    image: mysql
    platform: linux/x86_64
    env_file: ./.env
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
  app:
    depends_on:
      - mysqldb
    build: .
    restart: always
    env_file: ./.env
    ports:
      - $APP_LOCAL_PORT:$APP_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
volumes:
  db:
.env:
MYSQLDB_USER=root
MYSQLDB_ROOT_PASSWORD=12345678
MYSQLDB_DATABASE=dronefeederdb
MYSQLDB_LOCAL_PORT=3306
MYSQLDB_DOCKER_PORT=3306
APP_LOCAL_PORT=8080
APP_DOCKER_PORT=8080
Application.yaml:
server:
  port: 8080
spring:
  datasource:
    username: ${DB_USER}
    password: ${DB_PASSWORD}
    url: jdbc:mysql://${DB_HOST}:${DB_PORT}/${DB_NAME}
  jpa:
    hibernate:
      ddl-auto: update
    show-sql: true
    open-in-view: false
  #https://ia-tec-development.medium.com/lombok-e-spring-data-jpa-142398897733
  security.user:
    name: dronefeeder
    password: dronefeeder
#https://www.baeldung.com/spring-boot-security-autoconfiguration
resilience4j.circuitbreaker:
  configs:
    default:
      waitDurationInOpenState: 10s
      failureRateThreshold: 10
  #instances:
  #  estudantes:
  #    baseConfig: default
Dockerfile:
FROM openjdk:11.0-jdk as build-image
WORKDIR /app
COPY . .
RUN ./mvnw clean package -DskipTests
FROM openjdk:11.0-jre
COPY --from=build-image /app/target/*.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/app.jar"]
Repository link:
https://github.com/julia-baptista/dronefeeder/tree/docker-configuration
I believe the issue is your application's use of localhost in the SQL URL in the Application.yaml property file. Since your app runs in a container by itself, it looks at that container's own localhost, while your SQL server is in another container with its own localhost. localhost in a Docker container does not refer to the host; it refers to the localhost of the container itself. If you want to access the host machine, this is an excellent answer: From inside of a Docker container, how do I connect to the localhost of the machine?
url: jdbc:mysql://localhost:3306/dronefeederdb
localhost should not be used; you need to use the SQL container's URL.
The fastest option is to use host.docker.internal instead of localhost, but it's not the best.
Another quick option is to run the two containers on the same Docker network. Define the network in your compose file the same way as the volumes, then attach each container to it. See Networking in Compose. Then you can point your SQL URL at the SQL container name instead of localhost. So this:
url: jdbc:mysql://localhost:3306/dronefeederdb becomes url: jdbc:mysql://mysqldb:3306/dronefeederdb
Neither option is robust, since you're hardcoding the container name in the application property file. A better solution is to have an environment variable in your web app image that accepts the URL of the SQL server. Then you can provide the SQL location when running the container, or in your compose file (Environment variables in Compose). This way the SQL server can be anywhere.
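As a sketch of that idea (the exact value is an assumption, not part of the original answer), Spring Boot's relaxed binding lets the compose file override the datasource URL directly, so nothing is hardcoded in the property file:

app:
  environment:
    # Overrides spring.datasource.url via relaxed binding (sketch)
    - SPRING_DATASOURCE_URL=jdbc:mysql://mysqldb:3306/dronefeederdb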
Update:
There were a couple of issues in the compose and env files that caused the MySQL container to fail on startup, so the web app was not able to connect.
Credentials
MYSQL_USER was set to root. MySQL already creates the root user; you cannot create it again. I changed that to foo. See the Environment Variables section in the official Docker image readme for more.
MYSQL_PASSWORD was not set. This is the password for the user your app will use. I set it to pass!123.
The app's DB_PASSWORD was set to the root password. That might have been OK if MySQL had started and the app were using the root user, but I changed it to the non-root user's password since we're setting DB_USER=foo.
Network was not defined
The two containers need to be on the same Docker network if they are to run together on the same machine. There's more to this that is beyond my experience, but in this case they need to be on the same network for app to reach mysqldb by its container name. I created dronefeederNet and added each container to it.
Files:
.env
MYSQLDB_USER=foo
MYSQLDB_PASSWORD=pass!123
MYSQLDB_ROOT_PASSWORD=12345678
MYSQLDB_DATABASE=dronefeederdb
MYSQLDB_LOCAL_PORT=3307
MYSQLDB_DOCKER_PORT=3306
APP_LOCAL_PORT=8081
APP_DOCKER_PORT=8080
docker-compose.yml
version: '3.8'
services:
  mysqldb:
    image: mysql
    platform: linux/x86_64
    env_file: ./.env
    restart: always
    environment:
      - MYSQL_USER=$MYSQLDB_USER
      - MYSQL_PASSWORD=$MYSQLDB_PASSWORD
      - MYSQL_ROOT_PASSWORD=$MYSQLDB_ROOT_PASSWORD
      - MYSQL_DATABASE=$MYSQLDB_DATABASE
    ports:
      - $MYSQLDB_LOCAL_PORT:$MYSQLDB_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    networks:
      - dronefeederNet
  app:
    depends_on:
      - mysqldb
    build: .
    restart: always
    env_file: ./.env
    ports:
      - $APP_LOCAL_PORT:$APP_DOCKER_PORT
    environment:
      - DB_HOST=mysqldb
      - DB_USER=$MYSQLDB_USER
      - DB_PASSWORD=$MYSQLDB_PASSWORD
      - DB_NAME=$MYSQLDB_DATABASE
      - DB_PORT=$MYSQLDB_DOCKER_PORT
    stdin_open: true
    tty: true
    networks:
      - dronefeederNet
volumes:
  db:
networks:
  dronefeederNet:
Give this a try, and I hope it runs. I was able to start it up OK.
You need to add a depends_on: entry in the app definition block, so that Docker Compose does not start the application until the database is up.
Check this documentation: Docker Compose Startup Order
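Note that plain depends_on only waits for the container to start, not for MySQL to accept connections. A hedged sketch of a readiness check (supported by Compose file v2.x and the newer Compose Spec, but not the legacy v3 schema; the healthcheck command is an assumption):

services:
  mysqldb:
    # ...
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 10
  app:
    depends_on:
      mysqldb:
        condition: service_healthy   # wait until the healthcheck passes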

Connect To Docker Compose MongoDb Via Spring boot application

My web app can't connect to the MongoDB container.
Here is my application.yml:
spring:
  data:
    mongodb:
      uri: mongodb://mongo:27017
      host: mongo
      port: 27017
      database: my-db-name
and this is my docker-compose.yml:
version: "3"
services:
java:
build:
context: ./
ports:
- "8080:8080"
links:
- mongo
depends_on:
- mongo
networks:
- shared-net
mongo:
image: 'mongo'
ports:
- 27017:27017
hostname: mongo
volumes:
- ./data/db:/data/db
networks:
- shared-net
networks:
shared-net:
driver: bridge
and this is the Dockerfile written to run the Java app:
FROM openjdk:11
COPY ./code/lemon-backend/target/lemon-0.0.1-SNAPSHOT.jar /usr/src/
WORKDIR /usr/src/
EXPOSE 8080
CMD ["java", "-jar", "lemon-0.0.1-SNAPSHOT.jar"]
I can't even build the application using these options
I get this exception:
org.mongodb.driver.cluster: Exception in monitor thread while connecting to server mongo:27017
If possible, give solutions using docker-compose. Thanks.
OLD VERSION ANSWER
IMPORTANT NOTE:
Older versions of MongoDB ignore this configuration in application.properties; skip ahead and use the newer solutions I added below.
This workaround is for old versions of Spring and Mongo that ignore the normal configuration (anything other than uri). I had a warning that this property can't be resolved, but hopefully, it worked :)
spring.data.mongodb.uri=mongodb://<your_mongodb_container_name>:27017/<name_of_your_db>
The mongodb:// part is not changeable, but mongo before the port number is actually the name of the container which you specified in your docker-compose.
SPRING BOOT SOLUTION
spring:
data:
mongodb:
host: <mongo-db-container-name>
port: <mongo-db-port>
database: <database-name>
DOCKER SOLUTION
In your Dockerfile, add this option for executing Java:
ENTRYPOINT ["java", "-Dspring.data.mongodb.uri=mongodb://mongo:27017/name_of_your_db", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/<name_of_your_app>.jar"]
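An equivalent approach (a sketch, not from the original answer) is to keep the Dockerfile generic and pass the same property through the compose file, since Spring Boot's relaxed binding also reads it from the environment:

java:
  environment:
    # Maps to spring.data.mongodb.uri via relaxed binding (sketch)
    - SPRING_DATA_MONGODB_URI=mongodb://mongo:27017/name_of_your_db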
Linking Java And Mongo Containers + Giving Them Names
Here is my final docker-compose.yml; I hope it helps you:
version: "3"
services:
java:
build:
context: ./
ports:
- "8080:8080"
container_name: java
links:
- mongo
depends_on:
- mongo
networks:
- shared-net
mongo:
image: 'mongo'
ports:
- 27017:27017
container_name: mongo
volumes:
- /home/sinoed/data/db:/data/db
networks:
- shared-net
networks:
shared-net:
driver: bridge
Compare this version and the one specified in the question carefully

Docker not catching env vars

I'm trying to run 3 containers through docker-compose: postgres, cassandra, and my web app, which has an embedded Tomcat server with some dependencies such as APR/Native. These libraries are located in a folder called "lib" at the same level as the jar.
I'm running a PoC on Windows 10 (using Linux containers) before moving it to a CentOS server if it works in the PoC. I searched over the net and it seems this is not an isolated problem, but either I haven't found the solution or the solutions shown didn't work for me. Here is my docker-compose.yml, with all the related files/folders stored at the same level:
version: '3.1'
services:
  fulmar-webapp:
    container_name: "my-webapp"
    image: openjdk:11-jre-slim
    hostname: mywebapp
    volumes:
      - ./lib:/home/lib
      - ./fulmar-1.0.1-SNAPSHOT-exec.jar:/home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    entrypoint:
      - java
      - -jar
      - /home/mywebapp-1.0.1-SNAPSHOT-exec.jar
    environment:
      - LD_LIBRARY_PATH:/home/lib
      - spring.datasource.url=jdbc:postgresql://postgresql:5432/mydb
      - spring.datasource.username=postgres
      - spring.datasource.password=postgres
      - spring.jpa.hibernate.ddlAuto=update
    network_mode: bridge
    ports:
      - 8443:8443
      - 8080:8080
    links:
      - postgresql
      - cassandra
  postgresql:
    container_name: "mydb"
    image: postgres:11.1-alpine
    restart: always
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    volumes:
      - ./startup.sql:/docker-entrypoint-initdb.d/startup.sql
      - postgresdata:/var/lib/postgresql/data
    ports:
      - 5432:5432
    network_mode: bridge
  cassandra:
    container_name: "cassandra"
    image: cassandra
    ports:
      - 9042:9042
    network_mode: bridge
volumes:
  postgresdata:
I'm not sure if it's not properly mapping the folder with the libraries, or not actually mounting the volume. This is exactly the environment variable I need to set in there:
Environment=LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/libraries/lib
These are the two results I have encountered:
1. A Tomcat exception because it cannot find the libraries:
[ERROR][org.apache.catalina.util.LifecycleBase#log(175)] Failed to initialize component [Connector[org.apache.coyote.http11.Http11AprProtocol-8080]] | org.apache.catalina.LifecycleException: The configured protocol [org.apache.coyote.http11.Http11AprProtocol] requires the APR/native library which is not available
2. WARNING: The LD_LIBRARY_PATH variable is not set. Defaulting to a blank string.
Thank you all in advance.
EDIT: just to let you know, once I run docker-compose up and my app throws this exception, the container is no longer available, so I'm unable to run any commands in it.
Your syntax is wrong; it should be like this:
environment:
  - LD_LIBRARY_PATH=/home/lib
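For context, here is the full corrected environment block from the question with that one line fixed (the other entries already used = and are unchanged):

environment:
  - LD_LIBRARY_PATH=/home/lib   # was LD_LIBRARY_PATH:/home/lib, which list syntax does not accept
  - spring.datasource.url=jdbc:postgresql://postgresql:5432/mydb
  - spring.datasource.username=postgres
  - spring.datasource.password=postgres
  - spring.jpa.hibernate.ddlAuto=update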

Java ee api gets 404 on payara production server

I'm running a Java EE application on a Payara server with docker-compose. The application seems to work normally locally, but when deployed it gives a 404 on all "/api" requests. The JSP files seem to work fine.
@ApplicationPath("/api")
public class SimulationApplication extends Application {
}
Is there something that could cause this behaviour?
I already tried restarting the server and Docker, and the server logs don't show anything special. The only exception it throws is that it can't back up the domain.xml to domain.xml.bak. I have tried to start it without the domain.xml mapped, but this doesn't fix the API.
Docker-compose
version: "2"
services:
java_ee:
container_name: 'java'
image: payara/server-full
ports:
- '8080:8080'
- '4848:4848'
links:
- 'db:db'
volumes:
- './payara/autodeploy:/opt/payara41/glassfish/domains/domain1/autodeploy'
- './payara/lib:/opt/payara41/glassfish/domains/domain1/lib'
environment:
JVM_OPTS: "-Xmx12g -Xms12g -XX:MaxPermSize=1024m"
angular:
container_name: 'angular'
image: nginx
ports:
- '80:80'
volumes:
- './angular:/usr/share/nginx/html'
db:
image: mysql
container_name: 'mysql'
command: mysqld --user=root --verbose
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: "Db"
MYSQL_USER: "user"
MYSQL_PASSWORD: "pass"
MYSQL_ROOT_PASSWORD: "pass"
MYSQL_ALLOW_EMPTY_PASSWORD: "no"
I wasn't sure whether this would be better suited for Server Fault; if that fits better, I'll post it there.

Turbine can only find one host when using docker

I have 3 projects: a Hystrix dashboard, a Turbine server (using AMQP), and an API.
When I start in the development environment, I set up 2 instances of the API (on ports 8080 and 8081). To test the Turbine aggregation, I make calls, and in the dashboard I can see Hosts: 2.
However, when I use Docker, even when the load balancer hits the 2 servers, I only see one host on the Hystrix dashboard.
My assumptions:
1. As both containers start on the same port (8080), Turbine sees them as one.
2. As I also dockerize RabbitMQ, this may be causing problems.
Here is my docker-compose.yml file:
version: '2'
services:
  postgres:
    image: postgres:9.5
    ports:
      - "5432"
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_USER: postgres
      POSTGRES_DB: fq
    volumes:
      - /var/lib/postgresql
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672"
      - "15672"
    environment:
      RABBITMQ_DEFAULT_USER: turbine
      RABBITMQ_DEFAULT_PASS: turbine
    volumes:
      - /var/lib/rabbitmq/
  hystrix:
    build: hystrixdashboard/.
    links:
      - turbine_server
    ports:
      - "8989:8989"
  turbine_server:
    build: turbine/.
    links:
      - rabbitmq
    ports:
      - "8090:8090"
  persona_api:
    build: persona/.
    ports:
      - "8080"
    links:
      - postgres
      - rabbitmq
  lb:
    image: 'dockercloud/haproxy:1.5.1'
    links:
      - persona_api
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
My persona_api config file:
spring:
  application:
    name: persona_api
  profiles:
    active: dev
  rabbitmq:
    addresses: 127.0.0.1:5672
    username: turbine
    password: turbine
    useSSL: false
server:
  compression.enabled: true
  port: ${PORT:8080}
params:
  datasource:
    driverClassName: org.postgresql.Driver
    username: postgres
    password: postgres
    maximumPoolSize: 10
    poolName: fq_connection_pool
spring.jpa:
  show-sql: true
  hibernate:
    ddl-auto: update
turbine:
  aggregator:
    clusterConfig: persona_api
  appConfig: persona_api
---
spring:
  profiles: dev
params:
  datasource:
    jdbcUrl: jdbc:postgresql://127.0.0.1:5432/fq
---
spring:
  profiles: docker
  rabbitmq:
    addresses: rabbitmq:5672
params:
  datasource:
    jdbcUrl: jdbc:postgresql://postgres:5432/fq
I'm afraid that if I deploy it to production (on Rancher or Docker cloud), I'll see the same problem.
Here is a GIF of what is happening when I set up two load-balanced APIs.
try:
hystrix.stream.queue.send-id=false
in your API
I assume your problem is the RabbitMQ connection you are using, because the connection string uses localhost, but except inside the RabbitMQ container itself the connection will not be available on localhost in any container. I suggest you inject the Rabbit host into your Spring connection using environment variables. If I read your files correctly, it should be ${RABBITMQ_PORT_5672_TCP_ADDR} instead of localhost. But be aware that I couldn't try it; it's just an educated guess. Better double-check by running env inside your persona_api container when everything is running.
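As a sketch of that idea (the service name follows the compose file above; the environment-variable name is an assumption based on Spring Boot's relaxed binding, not part of the original answer):

persona_api:
  environment:
    # Maps to spring.rabbitmq.addresses, pointing at the rabbitmq service by name
    SPRING_RABBITMQ_ADDRESSES: rabbitmq:5672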
This should fix your issue:
eureka:
  instance:
    prefer-ip-address: true
    instance-id: ${spring.cloud.client.ipAddress}:${server.port} # make the application unique on the Rancher service layer
spring:
  application:
    index: ${random.uuid} # make the application unique on the Rancher container layer, same service but with multiple instances
https://github.com/spring-cloud/spring-cloud-netflix/issues/740
You need both instance-id: ${spring.cloud.client.ipAddress}:${server.port} and index: ${random.uuid}.
