I have the following docker-compose:
version: '3.1'
services:
  db:
    container_name: db
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=world
    volumes:
      - ./mysql-db/:/docker-entrypoint-initdb.d
    networks:
      - my-network
  app:
    depends_on:
      - db
    container_name: app
    build: App/
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
This pulls the mysql image and uses a local init script to create the database. I am able to connect to the database through a database client on my host machine, so I know the db is running and working with those credentials on port 3306.
App/Dockerfile:
# Build stage
FROM maven:latest AS build
COPY src /app/src
COPY pom.xml /app
# need to assemble to package in plugins
RUN mvn -f /app/pom.xml clean compile assembly:single
# Package stage
FROM openjdk:latest
COPY --from=build /app/target/seMethods-1.0-jar-with-dependencies.jar /usr/local/lib/build.jar
ENTRYPOINT ["java", "-jar", "/usr/local/lib/build.jar"]
This builds the jar file using maven.
App/src/App.java
// sql imports
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
public class App
{
    public static void main( String[] args )
    {
        try {
            String dbUrl = "jdbc:mysql://db:3306/world";
            Connection con = DriverManager.getConnection(dbUrl, "root", "password");
            String testStatement = "SELECT * FROM city;";
            PreparedStatement preparedTest = con.prepareStatement(testStatement);
            ResultSet rs = preparedTest.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getRow());
            }
        } catch (Exception e) {
            // handle any errors
            System.out.println(String.format("Error: %s", e));
        }
    }
}
When my docker-compose runs, the containers are created, but my app stops with the following:
Error: com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
How can I connect my app container to the db container?
You are trying to access a resource outside your app Docker container without having set ports on it. By default, as you likely know, Docker containers are isolated from the system, so you cannot access port 3306 from inside the container even though you can from your host machine. Add the ports to the docker-compose file.
Solved.
When creating the db image, the init file used to generate the initial database took a few seconds to complete. Adding a Thread.sleep() hotfix at the start of my Java app gave the database tables time to be created, and then I was able to connect.
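A more robust alternative to the Thread.sleep() hotfix (a sketch, not part of the original answer) is to let Compose wait until MySQL is actually ready, using a healthcheck on the db service plus a depends_on condition:

```yaml
services:
  db:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_DATABASE=world
    healthcheck:
      # Ping the server with the root credentials from the compose file
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-ppassword"]
      interval: 5s
      timeout: 5s
      retries: 10
  app:
    build: App/
    depends_on:
      db:
        condition: service_healthy
```

Note that `condition: service_healthy` is honored by Compose V2 and the 2.x file format, but was ignored by the 3.x file format under classic docker-compose, so a retry loop around `DriverManager.getConnection` in the Java code remains a portable fallback.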
Related
I am working on a to-do list app using a Java server and a Postgres DB; specifically, I am trying to set up a local dev environment using Docker Compose. The server has previously been deployed to Heroku, where the database connection works without trouble. In Docker, I am getting a "no suitable driver found" error when attempting to establish a connection between the server and the DB.
The Java DB connection code:
public void connectToDatabase() {
    try {
        String url = System.getenv("JDBC_DATABASE_URL");
        conn = DriverManager.getConnection(url);
        System.out.println("Database Connection Successful");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
The Java Server Dockerfile:
FROM gradle:7.4-jdk17-alpine
ADD --chown=gradle . /code
WORKDIR /code
EXPOSE 5000
CMD ["gradle", "--stacktrace", "run"]
The image builds without problems. However, when starting with docker compose up, I get the following error: java.sql.SQLException: No suitable driver found for "jdbc:postgresql://tasks-db:5432/test-tasks-db?user=postgres&password=postgres"
The server still runs, just without the DB connection - I can access other endpoints/features.
Docker Compose:
version: "3.9"
services:
  java-service:
    build:
      context: ./EchoServer/
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    environment:
      - PORT=5000
      - JDBC_DATABASE_URL="jdbc:postgresql://tasks-db:5432/test-tasks-db?user=postgres&password=postgres"
  tasks-db:
    image: postgres
    restart: always
    ports:
      - "1235:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - tasks-db:/var/lib/postgresql/data
volumes:
  tasks-db:
    driver: local
  logvolume01: {}
Grateful for any help, have been blocked on this most of the evening.
EDIT: build.gradle dependencies:
dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.7.0'
    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.7.0'
    implementation 'org.json:json:20210307'
    implementation 'org.postgresql:postgresql:42.3.1'
}
So after a lot of trial and error, it turned out to be a problem with the docker-compose yml, specifically the environment variable for the DB.
BAD:
JDBC_DATABASE_URL="jdbc:postgresql://tasks-db:5432/test-tasks-db?user=postgres&password=postgres"
This gives DriverManager the url wrapped in quotes. You do not want this.
GOOD:
JDBC_DATABASE_URL=jdbc:postgresql://tasks-db:5432/test-tasks-db?user=postgres&password=postgres
Lack of quotation marks in docker-compose.yml leads to a happy DriverManager.
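To see why the quotes break things: DriverManager asks each registered driver whether it accepts the URL, and drivers match on the URL prefix. A value that begins with a literal `"` character does not start with `jdbc:`, so no driver claims it. A minimal sketch (the class name is made up for illustration):

```java
public class QuoteDemo {
    public static void main(String[] args) {
        // What DriverManager receives when the compose file quotes the value:
        String bad = "\"jdbc:postgresql://tasks-db:5432/test-tasks-db\"";
        // What it receives without the quotes:
        String good = "jdbc:postgresql://tasks-db:5432/test-tasks-db";

        // Only the unquoted value has the prefix drivers look for.
        System.out.println(bad.startsWith("jdbc:"));   // false
        System.out.println(good.startsWith("jdbc:"));  // true
    }
}
```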
You don't show how you build your Java application, but you're missing a dependency on the Postgres JDBC driver. It does not come with Java; it has to be present on the Java application's classpath. Using Maven, you would add this dependency: https://mvnrepository.com/artifact/org.postgresql/postgresql/42.3.3
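Concretely, the pom.xml fragment for that dependency (version taken from the linked page) would be:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.3.3</version>
</dependency>
```

With Gradle, the equivalent line is `implementation 'org.postgresql:postgresql:42.3.3'`.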
I am writing a Selenium test to verify a file being downloaded. It works fine locally, and I can easily access the file both through the 'target' folder and inside the container at /home/seluser/Downloads.
The test script is:
@BeforeMethod
public void setUp() throws MalformedURLException {
    folder = new File("target");
    for (File file : folder.listFiles()) {
        file.delete();
    }
    System.setProperty("webdriver.chrome.driver", "chromedriver.exe");
    ChromeOptions options = new ChromeOptions();
    Map<String, Object> prefs = new HashMap<String, Object>();
    prefs.put("profile.default_content_settings.popups", 0);
    prefs.put("download.default_directory", folder.getAbsolutePath());
    options.setExperimentalOption("prefs", prefs);
    DesiredCapabilities cap = new DesiredCapabilities();
    cap.setBrowserName("chrome");
    cap.setCapability(ChromeOptions.CAPABILITY, options);
    //driver = new ChromeDriver(cap);
    //driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), cap);
    driver = new RemoteWebDriver(new URL("http://docker:4444/wd/hub"), cap);
}

@AfterMethod
public void tearDown() {
    driver.quit();
}

@Test
public void downloadFileTest() throws InterruptedException {
    driver.get("http://the-internet.herokuapp.com/download");
    driver.findElement(By.linkText("some-file.txt")).click();
    Thread.sleep(2000);
    File[] listOffFiles = folder.listFiles();
    Assert.assertTrue(listOffFiles.length > 0);
    for (File file : listOffFiles) {
        Assert.assertTrue(file.length() > 0);
    }
}
Let me explain a little. First I create a folder named "target" in the project root directory. Then I set the download path in the Chrome docker container via container volumes in the docker-compose file.
version: "3"
services:
  chrome:
    image: selenium/node-chrome:4.0.0-20211013
    container_name: chrome
    shm_size: 2gb
    depends_on:
      - selenium-hub
    volumes:
      - ./target:/home/seluser/Downloads
    environment:
      - SE_EVENT_BUS_HOST=selenium-hub
      - SE_EVENT_BUS_PUBLISH_PORT=4442
      - SE_EVENT_BUS_SUBSCRIBE_PORT=4443
      - SE_NODE_GRID_URL=http://localhost:4444
    ports:
      - "6900:5900"
  selenium-hub:
    image: selenium/hub:4.0.0-20211013
    container_name: selenium-hub
    ports:
      - "4444:4444"
This setup works fine locally. When I run it in GitLab CI, I cannot push an empty folder to GitLab, so I have to create a file in the folder and push that. In the test script, I delete this file in the setup stage in case it disturbs the assertion. But the pipeline fails, and the result does not give me more details, just the assertion exception. Here is the gitlab-ci.yml:
image: adoptopenjdk/openjdk11
stages:
  - gradle-build
  - docker-test
.gradle_template: &gradle_definition
  variables:
    GRADLE_OPTS: "-Dorg.gradle.daemon=false"
  before_script:
    - export GRADLE_USER_HOME=${CI_PROJECT_DIR}/.gradle
gradle-build:
  <<: *gradle_definition
  stage: gradle-build
  script:
    - chmod +x ./gradlew
    - ./gradlew --build-cache assemble
  cache:
    key: "$CI_COMMIT_REF_NAME"
    paths:
      - build
      - .gradle
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 day
  only:
    - feature/multi-browsers
chrome-test:
  stage: docker-test
  image:
    name: docker/compose:1.29.2
    entrypoint: [ "/bin/sh", "-c" ]
  services:
    - docker:19.03.12-dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://docker:2375/
  cache:
    key: "$CI_COMMIT_REF_NAME"
    policy: pull
    paths:
      - build
      - .gradle
  dependencies:
    - gradle-build
  before_script:
    - docker info
    - docker-compose --version
  script:
    - apk add --no-cache docker-compose
    - apk add openjdk11
    - docker-compose down
    - docker-compose up --build --force-recreate --no-deps -d
    - echo "Hello, chrome-test"
    - chmod +x ./gradlew
    - ./gradlew :test --tests "LogInTest_chrome"
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/**/TEST-*.xml
    paths:
      - build/reports/*
    expire_in: 1 day
  only:
    - feature/multi-browsers
I wonder if someone has experience with this kind of download test in GitLab CI. I think the download path I set may not work in GitLab CI, and I have no idea how to check whether a file was downloaded in GitLab CI.
I don't exactly know what you expect to be running at http://docker:4444, but that does not seem correct in: driver = new RemoteWebDriver(new URL("http://docker:4444/wd/hub"), cap);.
I guess you want to connect to the selenium-hub instead. Personally, I always prefer GitLab services over running docker-compose in your pipeline. Maybe this answer about running E2E tests with Docker in GitLab helps.
Statement: 'Let me explain a little. First I create a folder named "target" in the project root repository.' I assume that you are creating this by executing folder = new File("target");. This does not actually create the directory; you would need to apply something like the following:
File directory = new File("target");
directory.mkdir();
However, when this part of your code runs, it will not actually create the directory, because of your docker-compose.yml, where the volume mapping for your chrome node is bound to the host ./target directory:
volumes:
  - ./target:/home/seluser/Downloads
The directory will already exist, as Docker creates it, so the Java code will return a boolean value of 'false' (it does not need to create the directory) and proceed to the next step.
docker-compose.yml: there is a mapping for your 'selenium/node-chrome' image where you have mapped /target to /home/seluser/Downloads. As this directory does not initially exist when 'docker-compose up' is run, Docker creates it on the host so it can be mapped to the desired volume. Here lie two problems. First, Docker runs as root and creates the new '/target' directory on the host with only the Linux permissions 'drwxr-xr-x' (755), which means only the root user can write to it. So even though the configuration has mapped the volume, when the browser downloads to the directory as 'seluser', it cannot write and is returned a 'Permission denied' response.
The other issue is that the Java code tells the RemoteWebDriver to save downloads via prefs.put("download.default_directory", folder.getAbsolutePath());, which has been declared as "target". This is a problem because the chrome node where the browser resides has no directory called '/target', so the download will fail anyway. This is the cause of your assertion exception.
I propose the following for efficiency and stability:
Update the gitlab-ci.yml to run these commands prior to the 'docker-compose down && docker-compose up --build --force-recreate --no-deps -d' commands:
mkdir /target
chmod -R 777 /target
This will ensure that the directory can be written to by 'seluser' on the node-chrome container
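In the chrome-test job from the question, that would look something like this (a sketch showing only the relevant script lines):

```yaml
chrome-test:
  script:
    - apk add --no-cache docker-compose
    - mkdir -p /target
    - chmod -R 777 /target
    - docker-compose down
    - docker-compose up --build --force-recreate --no-deps -d
```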
Update the docker-compose.yml
For Windows Host => GitLab CI Docker(WSL2) => Chrome-Node Docker
volumes:
  - ./target:/home/seluser/Downloads
For Linux Host => GitLab CI Docker(Linux) => Chrome-Node Docker
volumes:
  - /target:/home/seluser/Downloads
This will ensure that the Chrome browser can find a directory named '/target' and that the java test can see a file written to the host '/target' directory
Also, your selenium-hub configuration needs to be updated: the ports for publishing and subscribing to events have not been mapped. Update it to the following:
selenium-hub:
  image: selenium/hub:4.0.0-20211013
  container_name: selenium-hub
  ports:
    - "4442:4442"
    - "4443:4443"
    - "4444:4444"
I have a Java Spring Boot app which works with a Postgres database. I want to use Docker for both of them. Initially, I created a docker-compose.yml file as given below:
version: '3.2'
services:
  postgres:
    restart: always
    container_name: sample_db
    image: postgres:10.4
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
  # APP
  web:
    build: .
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/test
    expose:
      - '8080'
    ports:
      - '8080:8080'
Then, inside the application.properties file, I defined the following properties:
server.port=8080
spring.jpa.generate-ddl=true
spring.datasource.url=jdbc:postgresql://postgres:5432/test
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.username=root
spring.datasource.password=root
spring.flyway.baseline-on-migrate=true
spring.flyway.enabled=true
# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.PostgreSQLDialect
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = validate
spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: true
Also, I created a Dockerfile in my project directory, which looks like this:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
EXPOSE 8080
RUN mkdir -p /app/
RUN mkdir -p /app/logs/
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
FROM postgres
ENV POSTGRES_PASSWORD postgres
ENV POSTGRES_DB testdb
COPY schema.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
I issued these commands and ended up with the error given below:
mvn clean package
docker build ./ -t springbootapp
docker-compose up
ERROR: for household-appliances_web_1 Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown
ERROR: for web Cannot start service web: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown
ERROR: Encountered errors while bringing up the project.
Can anyone kindly help with this?
I had this error when setting up a Rails application for Docker.
My docker-entrypoint.sh file was placed in the root folder of my application with this content:
#!/bin/sh
set -e
bundle exec rails server -b 0.0.0.0 -e production
And in my Dockerfile, I defined my entrypoint command this way:
RUN ["chmod", "+x", "docker-entrypoint.sh"]
ENTRYPOINT ["docker-entrypoint.sh"]
But I was getting the error below when I ran the docker-compose up command:
ERROR: for app Cannot start service app: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "docker-entrypoint.sh": executable file not found in $PATH": unknown
Here's how I fixed it:
Specify an actual path for the docker-entrypoint.sh file; that is, instead of:
ENTRYPOINT ["docker-entrypoint.sh"]
use
ENTRYPOINT ["./docker-entrypoint.sh"]
This tells Docker that the docker-entrypoint.sh file is located in the root folder of your application. You could also specify a different path if your docker-entrypoint.sh lives elsewhere, but make sure you do not leave out the ./ prefix in the path.
So mine looked like this afterwards:
RUN ["chmod", "+x", "docker-entrypoint.sh"]
ENTRYPOINT ["./docker-entrypoint.sh"]
That's all.
I hope this helps
The application.properties file content is irrelevant to the question, so you can remove it.
Let's look at your Dockerfile; I will remove the irrelevant code:
FROM openjdk:8-jdk-alpine
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
FROM postgres
COPY schema.sql /docker-entrypoint-initdb.d/
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
So you are using a multi-stage build, and in the first stage you are just copying files from the host.
As the final stage you are using the postgres image and setting the ENTRYPOINT to java, but java does not exist in the postgres image.
What you should change: keep the postgres container separated from the java container, as you already have it in your docker-compose.yml file; the second suggestion is to use CMD instead of ENTRYPOINT.
Your final Dockerfile should be
FROM openjdk:8-jdk-alpine
COPY target/household-0.0.1-SNAPSHOT.jar /app/app.jar
CMD ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app/app.jar"]
The FROM postgres line creates a second image (it is a multi-stage build) that is based on the PostgreSQL database server. Everything above that line is effectively ignored. So your final image is running a second database, and not a JVM.
You don't need this line, and you don't need to extend the database server to run a client. You can delete this line, and the application will start up.
You'll also have to separately get that schema file into the database container. Just bind-mounting the file in volumes: in the docker-compose.yml file is an easy path. If you have a database migration system in your application, running migrations on startup will be a more robust approach.
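A sketch of that bind-mount in the docker-compose.yml (assuming schema.sql sits next to the compose file):

```yaml
postgres:
  restart: always
  container_name: sample_db
  image: postgres:10.4
  volumes:
    - ./schema.sql:/docker-entrypoint-initdb.d/schema.sql
```

Note that the postgres image only runs files under /docker-entrypoint-initdb.d/ when the data directory is empty, i.e. on the container's first startup.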
In my docker-compose.yaml file I use the image "my-service" (among other remote images that work fine)
version: "2"
services:
  myservice:
    image: my-service
Normally I build the "my-service" image with maven using the io.fabric8 docker-maven-plugin.
My Dockerfile:
FROM vertx/vertx3-alpine
ENV VERTICLE_HOME /opt/lib
ENV NAME my-service
ENV EXEC_JAR_NAME my-service.jar
COPY target/my-service-1.0-SNAPSHOT.jar $VERTICLE_HOME/$EXEC_JAR_NAME
COPY target/lib $VERTICLE_HOME
COPY src/main/resources/settings.json /etc/company/myservice/settings.json
ENTRYPOINT ["sh", "-c"]
CMD ["java -cp $VERTICLE_HOME/$EXEC_JAR_NAME com.company.myservice.MyVerticle"]
Is there a way using the DockerComposeContainer from Testcontainers for docker-compose to use my local image of my-service?
This is my test set up
public class MyServiceIT {
    @ClassRule
    public static DockerComposeContainer compose =
        new DockerComposeContainer(new File("src/test/resources/docker-compose.yml"));
Currently I get the following error message, since the image only exists locally:
7:15:34.282 [tc-okhttp-stream-454305524] INFO 🐳 [docker/compose:1.8.0] - STDERR: pull access denied for my-service, repository does not exist or may require 'docker login'
17:15:34.283 [main] WARN org.testcontainers.containers.DockerComposeContainer - Exception while pulling images, using local images if available
It sounds like I need to build the image for use in my test, but I am not sure how to do that.
That's not an error message, just a warning emitted when docker-compose pull fails; see here.
You can also make Docker Compose build the images for you (although it is highly recommended to use withLocalCompose(true) in that case)
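For example (a sketch; the build context path is an assumption about your project layout), give the service a build section so Compose can build the image itself:

```yaml
services:
  myservice:
    image: my-service
    build:
      context: .            # directory containing your Dockerfile
      dockerfile: Dockerfile
```

Then enable the local compose binary in the test, e.g. `new DockerComposeContainer(new File("src/test/resources/docker-compose.yml")).withLocalCompose(true)`, so the host's docker-compose (which can build) is used instead of the containerized one.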
I am now working on a Docker project with two containers: one for the Oracle DB and the other with a Java application.
The container for the Oracle DB is working OK. I used the pre-built Oracle image and created my tablespaces and users in it.
The commands I used to pull and run the Oracle DB container are given below:
docker pull wnameless/oracle-xe-11g
docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g
Now I have my own Java application that interacts with the oracle db and I run it using the command given below:
docker run -it --name mypgm myrepo/oracletesting
It runs an interactive java program that asks for the Oracle DB details and allows users to interact with the DB.
However, I could not figure out how to specify details such as driver name, connection URL, username, and password.
The values I gave are as given below:
Driver Name: oracle.jdbc.OracleDriver
Connection URL: jdbc:oracle:thin:@localhost:1521:orcl11g
Username: imtheuser
Password: **********
I don't know what's going wrong where, but it's not working.
I tried giving different inputs for the connection URL after inspecting the docker container IP address as well:
Connection URL: jdbc:oracle:thin:@172.17.0.2:1521:orcl11g
Am I giving the connection URL and/or the port number correctly? Can someone help me connect these two containers and get the project moving?
Thanks for your kind help.
You have to put the containers on the same network, and the oracle container should have a name.
Try the following:
docker network create my-network # Create a network for containers
docker run -d -p 49160:22 -p 49161:1521 --network my-network --name oracle-db -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g
docker run -it --network my-network --name mypgm myrepo/oracletesting
Use the following string as the connection URL: jdbc:oracle:thin:@oracle-db:1521:orcl11g
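The same wiring as a docker-compose sketch (service names are assumptions; Compose puts both services on a shared default network, so the app can reach the database at oracle-db:1521):

```yaml
version: "3"
services:
  oracle-db:
    image: wnameless/oracle-xe-11g
    ports:
      - "49160:22"
      - "49161:1521"
    environment:
      - ORACLE_ALLOW_REMOTE=true
  mypgm:
    image: myrepo/oracletesting
    depends_on:
      - oracle-db
    stdin_open: true   # the app is interactive
    tty: true
```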
You can use a domain name in the Oracle connection string (e.g. oracle.dbhost.com) and pass --add-host oracle.dbhost.com:[ip address] when running your app in Docker, or configure a DNS server to resolve the domain name.