Flyway can't find migrations when the database runs in a Docker container - java

I have been following this tutorial trying to set up a database using Flyway migration scripts. The only difference from the tutorial is that I have been trying to use it in a Spring Boot application. For some reason, when I run "docker-compose up" I always get the following log output in my terminal.
flyway_1 | Flyway Community Edition 7.5.3 by Redgate
flyway_1 | Database: jdbc:postgresql://postgres:5432/db-name (PostgreSQL 12.2)
flyway_1 | Successfully validated 0 migrations (execution time 00:00.041s)
flyway_1 | WARNING: No migrations found. Are your locations set up correctly?
flyway_1 | Current version of schema "public": << Empty Schema >>
flyway_1 | Schema "public" is up to date. No migration necessary.
However, I have a migration script under src/main/resources/db/migration. I am not sure why Flyway cannot find it, as that seems to be where it is supposed to look for migrations by default.
Here is my docker-compose.yml file:
version: '3'
services:
  flyway:
    image: flyway/flyway:7.5.3
    command: -configFiles=/flyway/conf/flyway.config -locations=filesystem:/flyway/sql -connectRetries=60 migrate
    volumes:
      - ${PWD}/src/main/java/resources/db/migration
      - ${PWD}/docker-flyway.config:/flyway/conf/flyway.config
    depends_on:
      - postgres
  postgres:
    image: postgres:12.2
    restart: always
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=example-username
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db-name
And here is my docker-flyway.config file.
flyway.url=jdbc:postgresql://postgres:5432/db-name
flyway.user=example-username
flyway.password=pass
flyway.baselineOnMigrate=false

The Flyway migration files need to have the .sql extension and follow the naming convention (e.g. V1__init.sql). Mentioned here: https://flywaydb.org/documentation/concepts/migrations#naming

I realized that I had one of the volume mappings written incorrectly in my docker-compose.yml file. I don't exactly understand the mapping itself, but after copying what was in this post it ran the migration script properly. (In the compose file above, the migrations volume has no container-side target and points at src/main/java/resources; the scripts presumably need to be mounted into the /flyway/sql directory that -locations refers to, e.g. ${PWD}/src/main/resources/db/migration:/flyway/sql.)

Related

How to run a Spring Boot app's unit tests on GitHub Actions

I have an app that basically does CRUD. I am able to run my unit tests locally, but they fail on CI (GitHub Actions). I am getting the error because of PostgreSQL. Here you can see the error. I haven't been able to fix it. You can access the whole repository on this LINK. You can see my ci.yaml file below:
name: CI
on:
  pull_request:
  push:
    branches: [develop, main]
concurrency:
  group: ci-${{ github.ref }}-group
  cancel-in-progress: true
jobs:
  default:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Update dependency graph
        uses: advanced-security/maven-dependency-submission-action@571e99aab1055c2e71a1e2309b9691de18d6b7d6
      - name: Build Jar file
        run: ./project-dev build-jar
      - name: Save Jar file
        uses: actions/upload-artifact@v3
        with:
          name: demo-0.0.1-SNAPSHOT
          path: target/demo-0.0.1-SNAPSHOT.jar
          retention-days: 1
Can someone help me to run my unit tests on the CI, please?
You need to make sure that the database runs.
Your program expects a Postgres DB named school_management to be available under localhost:5432.
However, such a database isn't available in your script.
For setting up the database, you could use an existing action like this one:
steps:
  - uses: harmon758/postgresql-action@v1
    with:
      postgresql version: '11'
      postgresql db: school_management
      postgresql user: learning
      postgresql password: sa123456
Alternatively, you could use PostgreSQL service containers as described here:
# Service containers to run with `container-job`
services:
  # Label used to access the service container
  postgres:
    # Docker Hub image
    image: postgres
    # Provide the password for postgres
    env:
      POSTGRES_PASSWORD: sa123456
      POSTGRES_USER: learning
      POSTGRES_DB: school_management
    # Set health checks to wait until postgres has started
    options: >-
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
However, this makes the database run under a different hostname, so you have to change your spring.datasource.url to jdbc:postgresql://localhost:5432/school_management or similar.
Integrated in your workflow, it could look like the following:
name: CI
on:
  pull_request:
  push:
    branches: [develop, main]
concurrency:
  group: ci-${{ github.ref }}-group
  cancel-in-progress: true
jobs:
  default:
    runs-on: ubuntu-latest
    # Service containers to run with `container-job`
    services:
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: sa123456
          POSTGRES_USER: learning
          POSTGRES_DB: school_management
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      # override spring.datasource.url
      - name: Setup config
        run: |
          mkdir config
          echo 'spring.datasource.url=jdbc:postgresql://postgres:5432/school_management' > config/application.properties
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Update dependency graph
        uses: advanced-security/maven-dependency-submission-action@571e99aab1055c2e71a1e2309b9691de18d6b7d6
      - name: Build Jar file
        run: ./project-dev build-jar
      - name: Save Jar file
        uses: actions/upload-artifact@v3
        with:
          name: demo-0.0.1-SNAPSHOT
          path: target/demo-0.0.1-SNAPSHOT.jar
          retention-days: 1
Another possibility is to use an embedded database like H2 for tests.
With this, you don't have to set up any database.
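For instance, a repository slice test along the following lines would run against an auto-configured in-memory H2 database, with no external Postgres needed. This is only a sketch: the StudentRepository and Student names (and their constructor/getter) are assumptions based on your school_management schema, not taken from your repository.
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

@DataJpaTest // replaces the real DataSource with an embedded H2 database for this test
class StudentRepositoryH2Test {

    @Autowired
    private StudentRepository studentRepository;

    @Test
    void savesAndReadsBackAStudent() {
        Student saved = studentRepository.save(new Student("Jane"));
        assertThat(studentRepository.findById(saved.getId())).isPresent();
    }
}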
Looking at your logs, line 1351:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Your tests are trying to connect to a local Postgres instance that is not available. Looking at your tests, you have both unit and integration tests. An integration test needs to load the application context, and inside the pipeline the running application will not be able to connect to Postgres. Hence, all of your integration tests that rely on Postgres will fail.
However, your other tests are passing, line 2085:
2023-02-14 12:13:39.378 INFO 1740 --- [ main] o.s.j.d.e.EmbeddedDatabaseFactory : Starting embedded database: url='jdbc:h2:mem:d00124ab-b172-4fd1-bf29-b4836ae2f938;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=false', username='sa'
These are working since your application is connecting correctly to the H2 database that you have.
The StudentRepositoryTest is working since your class has the @DataJpaTest annotation, which boots up that slice test and connects it to the in-memory database.
I think the test that is failing is the following DemoApplicationTests:
@SpringBootTest
class DemoApplicationTests {

    @Test
    void contextLoads() {
    }
}
This test loads the application context (the whole application) and will automatically try to connect to Postgres.
So to fix the issue you could simply delete that file. A better solution, which I would recommend (and which is a bit more advanced), is to use Testcontainers and actually run a Postgres database inside a container.
The reason I am suggesting the latter is that an integration test should normally run against the same setup your application uses in production. An H2 database might have edge cases that do not match a Postgres database.
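A rough sketch of that Testcontainers approach for the failing context test could look like the following. It assumes the org.testcontainers:postgresql and org.testcontainers:junit-jupiter test dependencies and a reasonably recent Spring Boot; the postgres:15-alpine image is an arbitrary choice.
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers // starts and stops the container around this test class
class DemoApplicationTests {

    // A throwaway Postgres instance running in Docker instead of a fixed localhost:5432.
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:15-alpine");

    // Point Spring at whatever host/port Docker mapped for the container.
    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", postgres::getJdbcUrl);
        registry.add("spring.datasource.username", postgres::getUsername);
        registry.add("spring.datasource.password", postgres::getPassword);
    }

    @Test
    void contextLoads() {
    }
}
With something like this in place, the test no longer depends on a database being provisioned by the CI workflow; it only needs Docker to be available on the runner.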

Cannot access GitHub Actions MSSQL database in tests

I have a Spring Boot application with an MSSQL database. I would like to run a GitHub Actions workflow that runs the tests for pull requests and merges to master. However, I have a problem connecting to the database from the tests on GitHub Actions. My application uses YAML configuration and I have a separate config file for CI tests.
Here is the workflow:
name: Java CI with Maven
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      mssql:
        image: mcr.microsoft.com/mssql/server:2019-latest
        env:
          SA_PASSWORD: myPassword
          ACCEPT_EULA: 'Y'
          DBNAME: test
        ports:
          - 1433:1433
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 11
        uses: actions/setup-java@v3
        with:
          java-version: '11'
          distribution: 'temurin'
          cache: maven
      - name: Build with Maven
        run: mvn -ntp -U clean test -P junit-ci
And the junit-ci config file:
spring:
  datasource:
    driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
    url: jdbc:sqlserver://mssql:1433;database=test;
    username: sa
    password: myPassword
And here is the error:
[main] ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host mssql, port 1433 has failed. Error: "mssql. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:285)
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2434)
I suggest using Testcontainers instead. That way it is the Testcontainers Java library which takes care of starting a database for integration testing. It is so much easier than the path you are currently on, IMHO.
Your GitHub Actions YAML then becomes simpler: it will just be pure Maven steps. You also make your test code less dependent on GitHub Actions as your CI system.
Using Testcontainers has other advantages: your test can have full control over the database (or the container in which it runs). For example, you can have a test where you kill the container during the test, thereby simulating the effect on your application of a database that is suddenly lost.
Btw: strictly speaking, what you are attempting are not unit tests but integration tests. Maven makes that distinction, and it is advisable to follow it, because integration tests are often quite heavy. With the Testcontainers approach, the database container will only be started when you tell Maven to execute the integration tests. In your example, the database container is always started, regardless of whether it is needed.
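As a hedged sketch of what that could look like for your SQL Server setup (the class name is illustrative, and it assumes the org.testcontainers:mssqlserver and org.testcontainers:junit-jupiter test dependencies plus a Spring version that supports @DynamicPropertySource):
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.MSSQLServerContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@SpringBootTest
@Testcontainers
class MssqlIntegrationTest {

    // Testcontainers starts SQL Server in Docker; no GitHub Actions service block is needed.
    @Container
    static MSSQLServerContainer<?> mssql =
            new MSSQLServerContainer<>("mcr.microsoft.com/mssql/server:2019-latest")
                    .acceptLicense(); // the equivalent of ACCEPT_EULA: 'Y'

    // Overrides the url/username/password from the junit-ci YAML at runtime.
    @DynamicPropertySource
    static void datasourceProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.datasource.url", mssql::getJdbcUrl);
        registry.add("spring.datasource.username", mssql::getUsername);
        registry.add("spring.datasource.password", mssql::getPassword);
    }

    @Test
    void contextLoads() {
    }
}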

docker-compose java application connection to mongodb

I have 2 containers, one Java application and one MongoDB.
If I run my Java app locally and MongoDB in a container, it connects, but if both run inside containers the Java app can't connect to MongoDB.
My docker-compose file is as follows; am I missing something?
version: "3"
services:
user:
image: jboss/wildfly
container_name: "user"
restart: always
ports:
- 8081:8080
- 65194:65193
volumes:
- ./User/target/User.war:/opt/jboss/wildfly/standalone/deployments/User.war
environment:
- JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,address=0.0.0.0:65193,suspend=n,server=y -Djava.net.preferIPv4Stack=true
- MONGO_HOST=localhost
- MONGO_PORT=27017
- MONGO_USERNAME=myuser
- MONGO_PASSWORD=mypass
- MONGO_DATABASE=mydb
- MONGO_AUTHDB=admin
command: >
bash -c "/opt/jboss/wildfly/bin/add-user.sh admin Admin#007 --silent && /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0"
links:
- mongo
mongo:
image: mongo:4.0.10
container_name: mongo
restart: always
volumes:
- ./assets:/docker-entrypoint-initdb.d/
environment:
- MONGO_INITDB_ROOT_USERNAME=myuser
- MONGO_INITDB_ROOT_PASSWORD=mypass
ports:
- 27017:27017
- 27018:27018
- 27019:27019
Edit
I'm also confused about the following.
links:
  - mongo
depends_on:
  - mongo
As of July 2019, see the official Docker documentation on links.
Source: https://docs.docker.com/compose/compose-file/#links
Solution #1: environment file before start
Basically, we centralize all configuration in a file with environment variables and source it before docker-compose up.
The following approach helped me in these scenarios:
Your docker-compose.yml has several containers with complex dependencies between them
Some of the services in your docker-compose need to connect to another process on the same machine. This process could be a Docker container or not.
You need to share variables such as hosts, passwords, etc. between several docker-compose files
Steps
1.- Create one file to centralize configuration
This file could be named /env/company_environments, with or without an extension.
export MACHINE_HOST=$(hostname -I | awk '{print $1}')
export GLOBAL_LOG_PATH=/my/org/log
export MONGO_PASSWORD=mypass
export MY_TOKEN=123456
2.- Use the env variables in your docker-compose.yml
container A
app_who_needs_mongo:
  environment:
    - MONGO_HOST=$MACHINE_HOST
    - MONGO_PASSWORD=$MONGO_PASSWORD
    - TOKEN=$MY_TOKEN
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
container B
app_who_needs_another_db_in_same_host:
  environment:
    - POSTGRESS_HOST=$MACHINE_HOST
    - LOG_PATH=$GLOBAL_LOG_PATH/app1
3.- Start up your containers
Just add source before your docker-compose commands:
source /env/company_environments
docker-compose up -d
Solution #2: host.docker.internal
https://stackoverflow.com/a/63207679/3957754
Basically, use a Docker feature in which host.docker.internal can be used as the IP of the host machine on which your docker-compose has started the containers.
You probably can't connect because you set MONGO_HOST to localhost, and mongo is a linked service.
In order to use the linked service's network, you must set MONGO_HOST to the name of the service, mongo, like this:
MONGO_HOST=mongo
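Your application code isn't shown, but assuming it reads those MONGO_* variables from the compose file and uses the MongoClients API of the 3.7+ Java driver, the connection could be built roughly like the sketch below (the class and method names are made up for illustration). With MONGO_HOST=mongo, the driver then resolves the mongo service on the compose network instead of the user container's own localhost.
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoDatabase;

public class MongoConnectionFactory {

    public static MongoDatabase connect() {
        // Values come from the environment section of the user service in docker-compose.yml.
        String host = System.getenv().getOrDefault("MONGO_HOST", "mongo");
        String port = System.getenv().getOrDefault("MONGO_PORT", "27017");
        String user = System.getenv("MONGO_USERNAME");
        String password = System.getenv("MONGO_PASSWORD");
        String database = System.getenv("MONGO_DATABASE");
        String authDb = System.getenv().getOrDefault("MONGO_AUTHDB", "admin");

        // e.g. mongodb://myuser:mypass@mongo:27017/mydb?authSource=admin
        String uri = String.format("mongodb://%s:%s@%s:%s/%s?authSource=%s",
                user, password, host, port, database, authDb);

        MongoClient client = MongoClients.create(uri);
        return client.getDatabase(database);
    }
}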

run java in docker-compose

This is my docker-compose.yaml file:
version: '3.3'
services:
  db:
    container_name: dbContainer
    image: mysql:5.7
    volumes:
      - /home/crismon-01/Documenti/TESI/Docker/mysqlLogin/datas:/var/lib/mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: "root"
      MYSQL_USER: "root"
      MYSQL_PASSWORD: "root"
      MYSQL_DATABASE: "schema1"
  java:
    container_name: loginJava
    image: openjdk:7
    depends_on:
      - db
    volumes:
      - ./home/crismon-01/Documenti/TESI/Docker/mysqlLogin:/usr/src/myapp
    working_dir: /usr/src/myapp
    command: bash -c "java -jar LogiIn.jar"
It is a compose file with two containers, one with MySQL and one with Java code that uses the DB. Now I need to run it, and I get this error:
crismon-01#crismon01-XPS15:~/Documenti/TESI/Docker/mysqlLogin$ docker-compose up
Starting dbContainer ... done
Starting mysqllogin_java_1 ... done
Attaching to dbContainer, mysqllogin_java_1
dbContainer | Initializing database
dbContainer | 2018-04-12T15:35:07.134004Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbContainer | 2018-04-12T15:35:07.135231Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
dbContainer | 2018-04-12T15:35:07.135247Z 0 [ERROR] Aborting
dbContainer |
java_1 | Error: Unable to access jarfile LogiIn.jar
dbContainer exited with code 1
mysqllogin_java_1 exited with code 1
Could someone have an idea of the source of the error?
The problem is that you are specifying multiple command sections in the compose definition of the java service. Only one appears to be taken, which is the last one.
The solution is to group both commands into one command, as such:
java:
  image: openjdk:7
  depends_on:
    - db
  volumes:
    - /home/crismon-01/Documenti/TESI/Docker/mysqlLogin:/usr/src/myapp
  command: bash -c "cd /usr/src/myapp && java -jar LogiIn.jar"
Take a look at Using Docker-Compose, how to execute multiple commands for more info.
Alternatively, you can just set the working_dir property and remove the cd command:
volumes:
  - /home/crismon-01/Documenti/TESI/Docker/mysqlLogin:/usr/src/myapp
working_dir: /usr/src/myapp
command: java -jar LogiIn.jar
The Testcontainers library has support for Docker Compose.
Quoting the official documentation:
A single class rule, pointing to a docker-compose.yml file, should be sufficient to launch any number of services required by your tests:
@ClassRule
public static DockerComposeContainer environment =
        new DockerComposeContainer(new File("src/test/resources/compose-test.yml"))
                .withExposedService("redis_1", REDIS_PORT)
                .withExposedService("elasticsearch_1", ELASTICSEARCH_PORT);
In this example, compose-test.yml should have content such as:
redis:
  image: redis
elasticsearch:
  image: elasticsearch
For more details see the official documentation:
https://www.testcontainers.org/modules/docker_compose/
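As a hedged sketch of how a JUnit 4 test might then consume such a compose environment: the REDIS_PORT constant and the redis_1 service name simply mirror the documentation snippet above and are assumptions about your compose file.
import java.io.File;

import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.DockerComposeContainer;

public class ComposeIntegrationTest {

    private static final int REDIS_PORT = 6379;

    @ClassRule
    public static DockerComposeContainer environment =
            new DockerComposeContainer(new File("src/test/resources/compose-test.yml"))
                    .withExposedService("redis_1", REDIS_PORT);

    @Test
    public void redisIsReachable() {
        // Testcontainers publishes the service on a random host port; look it up at runtime.
        String host = environment.getServiceHost("redis_1", REDIS_PORT);
        Integer port = environment.getServicePort("redis_1", REDIS_PORT);
        System.out.println("Redis is available at " + host + ":" + port);
    }
}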

DotCMS Docker MySQL 500

I am trying to build a docker-compose file for the development of a dotcms site.
I have the following in my docker-compose.yml:
version: "3"
services:
dotcms:
image: openjdk
command: /app/bin/startup.sh run
ports:
- 8080:8080
volumes:
- ./:/app
depends_on:
- db
db:
image: mysql
command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0 --lower_case_table_names=1
volumes:
- ./db:/var/lib/mysql
ports:
- 3308:3306
environment:
MYSQL_ROOT_PASSWORD: dotcms
MYSQL_DATABASE: dotcms
MYSQL_USER: dotcms
MYSQL_PASSWORD: dotcms
After running docker-compose up, when I try to load localhost:8080 I get a 500 error. I look in the dotcms database and there is a table called db_version, but that is all there is; no other tables are created.
I have tried deleting the dotcms database and recreating it, then running docker-compose up once again, but I get the same issue.
I have also tried deleting the ./db folder (the mounted volume for the MySQL database) and rerunning; again, same issue.
Update
I have updated the dotcms container to run: command: sh -c "sleep 30 && /app/bin/startup.sh run"
I also added --general_log=1 --general_log_file=/var/log/mysql/query.log to the db command
I deleted the local db folder and ran docker-compose up again.
Still getting the same results.
Here are the logs:
dotcms.log: https://pastebin.com/5WnrarK8
catalina.log: https://pastebin.com/Z3vHbnp2
localhost.log: https://pastebin.com/S2CSPqxQ
from the db container
mysql.error.log: https://pastebin.com/4bYwB2Z2
mysql.query.log: https://pastebin.com/maDUXFm5
(This query file was very large; I removed everything before the first entry showing: mysql-connector-java-5.1.37)
docker logs <container id>
db.container.log: https://pastebin.com/Wz7aRhVc
dotcms.container.log: https://pastebin.com/qNVBfTpf
I'm not a mysql expert, but the log suggests this is a mysql issue unrelated to dotCMS. The mysql.error.log shows that mysql shuts down almost immediately after it starts up - meaning it may be shutting down before dotCMS has a chance to access the database, causing the dotCMS query to fail.
Consider this section of the mysql.error.log (lines 38-44 in your pastebin):
2017-09-19T08:13:37.664410Z 0 [Note] mysqld: ready for connections.
Version: '5.7.19' socket: '/tmp/tmp.Dlc2I8QgCt/mysqld.sock' port: 3306 MySQL Community Server (GPL)
2017-09-19T08:13:37.664422Z 0 [Note] Executing 'SELECT * FROM INFORMATION_SCHEMA.TABLES;' to get a list of tables using the deprecated partition engine. You may use the startup option '--disable-partition-engine-check' to skip this check.
2017-09-19T08:13:37.664430Z 0 [Note] Beginning of list of non-natively partitioned tables
2017-09-19T08:13:37.664491Z 0 [Note] Giving 1 client threads a chance to die gracefully
2017-09-19T08:13:37.664553Z 0 [Note] Shutting down slave threads
2017-09-19T08:13:37.664684Z 3 [ERROR] 1053 Server shutdown in progress
There's almost no time between the [Note] mysqld: ready for connections message and the [ERROR] 1053 Server shutdown in progress message. And the mysql query shown in the dotcms.log error message doesn't show at all in the mysql.query.log (or at least in the portion of it you've posted), indicating that it never reached the mysql database.
So if you haven't already, I suggest you try starting up mysql in the Docker container without starting dotCMS at all, and check the logs to make sure it starts up and stays up without problems. Then add the dotCMS startup, and if that causes mysql to have issues, compare the mysql logs with and without dotCMS to see what changes.
Other than that, double-check your dotCMS context.xml file (in /dotserver/tomcat-8.0.18/webapps/ROOT/META-INF) to make sure it is configured properly to access the MySQL DB.
I was facing a different error, but you made my day with this parameter:
command: mysqld --character-set-server=utf8 --collation-server=utf8_unicode_ci --init-connect='SET NAMES UTF8;' --innodb-flush-log-at-trx-commit=0 --lower_case_table_names=1
I was trying dockerized MySQL (5.6, 5.7, 5.7.29, etc.) but SQL initialization was always failing due to SQL errors, possibly related to collation or table-name case.
Thank you very much
