I am trying to create a Docker container for a Spring Boot application which uses PostgreSQL as its database. My aim is to build a container which runs the app as well as PostgreSQL. I created a Dockerfile as below:
FROM ubuntu:15.10
LABEL version="1"
ADD app.jar app.jar
RUN bash -c 'touch /app.jar'
# Install Java, Postgresql
......
USER postgres
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER test WITH SUPERUSER PASSWORD 'test';" &&\
createdb -O test test
USER root
RUN echo "local all all trust" >> /etc/postgresql/9.4/main/pg_hba.conf
RUN echo "listen_addresses='127.0.0.1'" >> /etc/postgresql/9.4/main/postgresql.conf
RUN echo "localhost vspyfe-db" >> /etc/hosts
#Define working directory.
WORKDIR /data
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/tmp", "/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
COPY ./entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash", "/usr/lib/postgresql/9.4/bin/postgres", "-D", "/var/lib/postgresql/9.4/main", "-c", "config_file=/etc/postgresql/9.4/main/postgresql.conf"]
and entrypoint.sh is:
#!/bin/sh
/etc/init.d/postgresql start
java -Djava.security.egd=file:/dev/./urandom -jar /app.jar
I build the image with docker build --rm=true -t vspyfe/base:v1 ., and
when I run it with docker run -i -t vspyfe/base:v1 I get an exception that says:
Caused by: org.postgresql.util.PSQLException: FATAL: role "test" does not exist
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:471)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:112)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:66)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:125)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:32)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
at org.postgresql.Driver.makeConnection(Driver.java:393)
at org.postgresql.Driver.connect(Driver.java:267)
Even if I try to create the test db with the default username and password (postgres, postgres), I get an exception saying the test db does not exist. I do not understand what I am doing wrong. In the Dockerfile I specified every parameter that I use in the application.properties file.
Have you checked your connection properties? I have a working example of a Spring Boot app working nicely with Postgres, with both running in different containers. Please refer to:
Spring Boot + Postgres example, using the fabric8.io plugin
To run the example:
mvn clean package -DskipTests docker:stop docker:build docker:run
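If you prefer plain docker commands to the fabric8.io plugin, the same two-container separation can be sketched with a user-defined network. This is a minimal sketch; the image name myapp:latest, the network name, and the credentials are placeholders, not taken from the linked example:
docker network create app-net
# Database container; the official postgres image creates the role and db from env vars
docker run -d --name db --network app-net \
  -e POSTGRES_USER=test -e POSTGRES_PASSWORD=test -e POSTGRES_DB=test \
  postgres:9.4
# App container; Spring Boot picks these env vars up via relaxed binding of
# spring.datasource.*, and reaches the database by its container name
docker run -d --name app --network app-net -p 8080:8080 \
  -e SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/test \
  -e SPRING_DATASOURCE_USERNAME=test -e SPRING_DATASOURCE_PASSWORD=test \
  myapp:latest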
I have written a small CLI using Java, Argparse4j, and packaged it in docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image:<tag>@sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I would rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive, as it would need to be used in multiple files, so I would rather fix the issue at the root.
You can change your Dockerfile instead to keep the default ENTRYPOINT (as openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify full path to jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
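You can sanity-check this locally before pushing: since the image no longer defines an entrypoint, the command after the image name runs directly (the download action is from the CLI above; -h is my assumption, since argparse4j registers a help flag by default):
# run.sh is on PATH and the jar path is absolute, so this works from any directory
docker run --rm <url>/my-image:<tag> sh -c 'cd / && run.sh download -h'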
Now your container will start in GitLab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh.
    # This command is run from within your container.
    # Note that the script is not an argument passed on your image startup,
    # but independent commands run within your container using a shell.
    - run.sh download -f zip -u true test-download.zip
Notes:
GitLab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project is cloned. Using ./ will look for the script and jar in the current directory at the moment your command is run, but they won't be found if the command is not run from /opt/app. Specifying the full path to the jar and adding the run.sh script to PATH makes sure they'll be found wherever you run run.sh from. Alternatively you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without an ENTRYPOINT you won't be able to run Docker commands like this:
docker run "<url>/my-image:<tag>" download ...
You'll need to specify either COMMAND or --entrypoint, such as:
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint in your job seems a much simpler and more straightforward solution. If you use multiple files, you can leverage GitLab's extends and include.
And now for the fun part
what is happening
When GitLab runs your container for a job, it uses the entrypoint defined in your Dockerfile by default. From the doc:
1. The runner starts a Docker container using the defined entrypoint. The default from the Dockerfile may be overridden in the .gitlab-ci.yml file.
2. The runner attaches itself to a running container.
3. The runner prepares a script (the combination of before_script, script, and after_script).
4. The runner sends the script to the container's shell stdin and receives the output.
And what the doc doesn't say is that GitLab will try to use various forms of sh as the Docker command. In short, step 1 is like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because GitLab uses the default entrypoint, and the final command run in Docker is:
./run.sh sh
Here ./run.sh is the entrypoint from the Dockerfile and sh is the command provided by GitLab. This causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
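You can reproduce this locally without GitLab, using the image from the question:
# Simulates step 1: the entrypoint stays in place and 'sh' becomes its first argument
docker run --rm -it <url>/my-image:<tag> sh
# -> tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')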
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work, as the following command would be run inside your container:
download -f zip -u true test-download.zip
# The 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), GitLab essentially tries to inject a shell script that checks which shell is available.
Now, my solution is in no way elegant, but it does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
    chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
  echo "this block will only execute in a CI environment"
  exec /bin/bash
else
  echo "Not in CI. Running the image normally"
  java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
fi
This works because GitLab, in its list of predefined variables, provides a CI env var that is set when the script is running in CI. By checking it, I skip the java invocation in CI but leave it in for when the image is run normally.
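Both branches can be checked locally, assuming the image name from the job definition:
# CI is unset locally, so the else branch runs the jar
docker run --rm <url>/my-image:<tag>
# Setting CI mimics the GitLab runner, so the script execs bash instead
docker run --rm -it -e CI=true <url>/my-image:<tag>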
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI and can use it in all my projects that require this tool.
I am not sure, though, why I need to "echo" the script and can't simply COPY it; for some reason the env variables are not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
Try wrapping your script in single quotes:
script:
  - 'download -f zip -u true test-download.zip'
EDIT:
Oh, this open bug in GitLab could be relevant to you.
I would like to start two services in a stack:
MySQL
Spring Boot app
The main problem is that the Spring Boot app starts before the database (or starts while connections to the database are not yet accepted). Then in the logs I see: java.net.UnknownHostException: database.
We could use startup order:
https://docs.docker.com/compose/startup-order/
So what did I do? I copied wait-for-it.sh next to the docker-compose file and added this line:
command: ["./wait-for-it.sh", "database:3306", "--", "java -Dspring.profiles.active=prod -jar app.jar"]
The result is:
java.lang.IllegalArgumentException: Invalid argument syntax: --
My entrypoint in backend Dockerfile:
ENTRYPOINT ["java","-Dspring.profiles.active=prod", "-jar","app.jar"]
How can I make the Spring Boot app wait for the MySQL database under a Docker stack?
When you run the container, the ENTRYPOINT and CMD are combined. In your example you've set ENTRYPOINT to run the Java process, but then override CMD in the docker-compose.yml: instead of actually running the wait-for-it.sh script, it just gets passed as extra parameters to the JVM.
A typical pattern for using both of these together is to have ENTRYPOINT be some sort of wrapper that does first-time setup, then takes CMD as additional parameters. For this to work CMD needs to be a complete shell command. Change the Dockerfile to look like:
COPY wait-for-it.sh entrypoint.sh .
# ENTRYPOINT _must_ be in JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
# CMD may be either string or JSON-array form
# (This is exactly what you originally had as ENTRYPOINT)
CMD ["java", "-Dspring.profiles.active=prod", "-jar", "app.jar"]
The entrypoint script can be very simple:
#!/bin/sh
# Wait for the database to be up
if [ -n "$MYSQL_HOST" ]; then
  ./wait-for-it.sh "$MYSQL_HOST:3306"
fi
# Run the CMD
exec "$@"
The important detail here is that I've configured the database host to be passed as an environment variable. This requires a shell to run to expand it, which is tricky to do in the JSON-array ENTRYPOINT syntax, so I've moved it into a separate script.
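To make the "needs a shell to expand" point concrete, here is a quick check, using alpine as a stand-in image:
# Exec form: no shell runs inside the container, so the argument stays literal
docker run --rm -e MYSQL_HOST=database alpine echo '$MYSQL_HOST'
# prints: $MYSQL_HOST
# A shell inside the container expands the variable, as the wrapper script does
docker run --rm -e MYSQL_HOST=database alpine sh -c 'echo "$MYSQL_HOST"'
# prints: database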
Finally, in the docker-compose.yml, do not override command: (or entrypoint:), but do make sure to set the environment variable for the script to be able to find the database.
version: '3.8'
services:
  database: { ... }
  application:
    environment:
      MYSQL_HOST: database
    depends_on:
      - database
    # no command: override
The wrapper here will run whenever the container starts up, so if you docker-compose run application bash to get an interactive shell based on the image, it will still wait for the database to be up.
If you control both the Dockerfile and the docker-compose.yml, you shouldn't usually need to override command: in the Compose settings. I find the entrypoint-wrapper pattern useful enough that I generally default to using CMD in my Dockerfiles (there is no requirement to have an ENTRYPOINT).
We have a legacy application that I am trying to dockerize. The jar of the application has both the application and an ActiveMQ broker bundled together (we cannot change the way it is built), and it has certain installation steps. I created the following initial Dockerfile for this; however, I am facing an issue (mentioned after the Dockerfile) when I run the image.
The Dockerfile looks like this :
FROM registry:4000/openjdk:8-jre-alpine
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
CMD ["/bin/sh", "-c", "activemq"]
The RUN command extracts the application and the ActiveMQ at that location into a folder called fe.
The WORKDIR seems to set the working directory to activemq/bin. I confirmed this with a shell script that runs when the image starts; in the script I run ls and pwd to check the contents and the location.
However, when I run the image, which triggers the CMD command, I get this error:
/bin/sh: activemq: not found
What can be the possible issue here?
If activemq is an executable in your bin directory (and not on PATH), then you need to edit your CMD:
CMD ["/bin/sh", "-c", "./activemq"]
Also make sure that your script is executable.
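Both points can be verified from outside the container; the path comes from the question's Dockerfile, and <your-image> is a placeholder for the built tag:
# Check that the script exists in the WORKDIR and has the execute bit set
docker run --rm --entrypoint ls <your-image> -l /home/appuser/fe/activemq/bin/activemq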
Found the problem. The activemq script starts with #!/bin/bash, and I am trying to run it using sh. I need to first install bash in the image and then run the activemq script with bash.
I got the hint from this answer : docker alpine /bin/sh script.sh not found
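For reference, here is a quick way to confirm the diagnosis, and what the fix boils down to (placing it before USER appuser is my assumption, since apk needs root):
# Show the shebang and confirm bash is missing from the Alpine image
docker run --rm --entrypoint sh <your-image> -c 'head -1 activemq; which bash'
# Fix in the Dockerfile, run as root before USER appuser:
#   RUN apk add --no-cache bash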
Now it gets further; however, the container dies immediately after starting. Not sure what the issue is. It doesn't even give any error.
I want to access a MongoDB running at URL "xyz" (a remote host), and in my Spring Boot application properties I have set the MongoDB URL to "xyz".
Now when I run this application inside a Docker container, it cannot connect to the remote URL and shows a connection refused error.
How do we access a remote database from inside a container?
Below is my Dockerfile:
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache bash
RUN apk add --no-cache curl
EXPOSE 8090
COPY target/<jar file> /application.jar
RUN mkdir /logs
RUN /bin/sh -c "apk add --no-cache bash"
ENTRYPOINT ["/usr/bin/java"]
CMD ["-DLOG_DIR=/logs", "-DLOG_FILE=application.log", "-jar", "-Dspring.profiles.active=local", "-Xmx1g", "/application.jar", "&"]
My Application.properties:
spring.data.mongodb.uri = <mongodb-url>
I build my docker image as:
docker build -t app:app .
I run the docker image as:
docker run -d <imageId>
The following command gets the job done:
docker run --net=host <imageId>
Be careful with this command, because --net=host makes the container share the host's network stack; the container's network will no longer be isolated.
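If --net=host is the only thing that works, it is worth checking basic reachability from inside the container first. A quick test; curl is already installed by the Dockerfile above, while the host xyz and MongoDB's default port 27017 are assumptions:
# Bypass the java ENTRYPOINT and test the raw TCP connection
docker run --rm --entrypoint curl app:app -v --connect-timeout 5 -m 5 telnet://xyz:27017
# "Connected to xyz" means the container can reach the database; "Connection
# refused" points at a firewall or the database binding to localhost only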
I have an sbt project that spins up a server on a specified port. Here is related excerpt from build.sbt:
port in container.Configuration := sys.env.getOrElse("MY_VAR_SEARCH_PORT", 8080).toString.toInt
When I run the project from sbt, $MY_VAR_SEARCH_PORT gets picked up, and all is good.
However, for prod I use sbt-assembly and run a jar in a docker container, so the launch command looks like this:
docker run -it -p 80:80 -e MY_VAR_SEARCH_PORT=80 mydockerhubrepo/myimageid /docker-entrypoint.sh java -Djava.io.tmpdir=/tmp/jetty -Drun.mode=production -Denv=prod -jar /usr/local/jetty/start.jar
I can see that the var gets passed to the container, but it is not picked up by the jar, as it spins up the server on the default port.
What would be a good way to make the sbt-assembly jar access environment variables? Or maybe I can pass this var as a Java argument; then, how do I access it from the build.sbt file?
Move the java startup command to a shell script, which can access env vars without a problem:
In your project, add api_startup.sh:
#!/bin/sh
echo "API startup script running... with ENV=$ENV"
java -Djava.io.tmpdir=/tmp/jetty -Drun.mode=production -Denv=$ENV -Drun.port=$MY_VAR_SEARCH_PORT -jar /usr/local/jetty/start.jar
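One optional tweak, not part of the original suggestion: exec the java process so it replaces the shell and receives docker stop's SIGTERM directly:
#!/bin/sh
echo "API startup script running... with ENV=$ENV"
# exec makes the JVM PID 1, so signals from docker reach it without a shell in between
exec java -Djava.io.tmpdir=/tmp/jetty -Drun.mode=production -Denv=$ENV \
  -Drun.port=$MY_VAR_SEARCH_PORT -jar /usr/local/jetty/start.jar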
In your Dockerfile, add lines:
ADD api_startup.sh /api_startup.sh
RUN chown jetty:jetty /api_startup.sh
CMD ["/api_startup.sh"]
Now you can run it like this:
docker run -it -p 80:80 -e MY_VAR_SEARCH_PORT=80 mydockerhubrepo/myimageid