Right now I have my Dockerfile as below:
FROM openjdk:8-jdk-alpine
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
What if I want to run some JUnit tests after I start the Spring Boot application? I am using Maven for dependencies.
Where should I put those lines?
mvn test
mvn clean test
mvn clean compile test
Or what other commands should I use?
Premise: even though the solution below is tailored to your specifics, it would be better to execute tests during the target jar build phase.
To execute tests from your Dockerfile you can do one of the following:
1. Copy the source files into the image as well and execute Maven test commands. This way you can also build your target jar directly in the container, but you also need Maven to be installed in the container.
2. Copy just the target jar file and use it to execute your tests; for this you need to include the test classes in the target jar (see How can I include test classes into Maven jar and execute them?; a sketch follows below).
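For option 2, a minimal pom.xml sketch using the maven-jar-plugin's test-jar goal, which attaches a separate jar containing the test classes (plugin version omitted; adjust to your build):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>test-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>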
Regardless of the way you choose, you can modify your entrypoint to execute multiple commands. You can do it in basically two ways:
(a) Creating a bash script that executes your commands (see Multiple commands on docker ENTRYPOINT).
(b) Using supervisord (see How to write a Dockerfile which I can start a service and run a shell and also accept arguments for the shell?). This is the better solution for managing the processes tied to the container's life (that is, the process with PID 1).
Example
Let's suppose you choose to copy all source files (option 1) and use a bash script (way (a)) to do it.
Create the commands.sh file as follows, so that the container stays attached to the Spring application process even while mvn test is executed:
#!/bin/bash
#Execute Spring application
CMD="java -jar target/app.jar"
$CMD &
SERVICE_PID=$!
#Execute Tests
mvn test
#Wait for Spring execution
wait "$SERVICE_PID"
Your Dockerfile will look like the following:
#Start from maven docker image
FROM maven:3.6.1-jdk-8-alpine
#Copy all sources
COPY . .
#Build (because you want to execute the tests after the Spring Boot application is started, disable tests during the build phase)
RUN mvn install -DskipTests
#Start container
COPY commands.sh /scripts/commands.sh
RUN ["chmod", "+x", "/scripts/commands.sh"]
ENTRYPOINT ["/scripts/commands.sh"]
Related
I have written a small CLI using Java, Argparse4j, and packaged it in docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image@sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive as it would need to be used in multiple files, so I'd rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive as it would need to be used in multiple files, so I'd rather fix the issue at the root.
You can change your Dockerfile instead to keep the default ENTRYPOINT (since openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify full path to jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in Gitlab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh
    # This command is run from within your container
    # Note that script entries are not arguments passed on your image startup
    # but independent commands run within your container using a shell
    - run.sh download -f zip -u true test-download.zip
Notes:
Gitlab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project will be cloned. Using ./ will look for the script and jar in the current directory at the moment your command is run, but they wouldn't be found if not run from /opt/app. Specifying the full path to the jar and adding your run.sh script to PATH makes sure they'll be found wherever you run run.sh from. Alternatively you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without ENTRYPOINT you won't be able to run Docker commands like this:
docker run "<url>/my-image:<tag>" download ...
You'll need to specify either a COMMAND or --entrypoint, such as:
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint on your job seems a much simpler and more straightforward solution (see the sketch below). If it's needed across multiple files, you may leverage Gitlab's extends and include.
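A minimal sketch of that alternative, using Gitlab's documented image:entrypoint keyword (the empty string clears the image's entrypoint):
download:
  stage: download
  image:
    name: <url>/my-image:<tag>
    entrypoint: [""]
  script:
    - /opt/app/run.sh download -f zip -u true test-download.zip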
And now for the fun part
what is happening
When Gitlab runs your container for a job, it will use the entrypoint defined in your Dockerfile by default. From the doc:
1. The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the .gitlab-ci.yml file.
2. The runner attaches itself to a running container.
3. The runner prepares a script (the combination of before_script, script, and after_script).
4. The runner sends the script to the container's shell stdin and receives the output.
And what the doc doesn't say is that Gitlab will try to use various forms of sh as the Docker command. In short, for step 1 it's like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because Gitlab uses the default entrypoint, and the final command run in Docker is:
./run.sh sh
Where ./run.sh is the entrypoint from the Dockerfile and sh is the command provided by Gitlab. This causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work, as the following command would be run inside your container:
download -f zip -u true test-download.zip
# The 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), Gitlab essentially tries to inject a shell script that performs a check for which shell is available.
Now, my solution is in no way elegant, but it does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
  echo "this block will only execute in a CI environment"
  exec /bin/bash
else
  echo "Not in CI. Running the image normally"
  java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
fi
This works because Gitlab, in its list of predefined variables, provides a CI env var that is set when the script is running in CI. By doing so, I skip the java invocation, but leave it in for the case where I need to use the image outside CI.
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and can use it in all my projects that require this tool.
I am not sure though why I need to "echo" the script for the CLI instead of simply copying it. For some reason the env variables are not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
Try to wrap your script in single quotes:
script:
  - 'download -f zip -u true test-download.zip'
EDIT:
Oh, this open bug in Gitlab could be relevant to you.
I am trying to run my Dockerfile and the Gradle build inside it. I have tried all the ways to do it, but can't understand what's wrong.
Here is my Dockerfile. It deploys without errors, but does NOT perform the Gradle build. Can someone help me with it?
FROM gradle:4.7.0-jdk8-alpine AS build
COPY . /temp
RUN gradle build --no-daemon
FROM java:8-jdk AS TEMP_BUILD_IMAGE
COPY . /tmp
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
The output is:
'<unknown> Dockerfile: DockerFile' has been deployed successfully.
But the Gradle build is not happening. P.S. I am new to Docker, and maybe I am doing the wrong stuff in my Dockerfile.
You have a multistage build here that you need to understand.
FROM gradle:4.7.0-jdk8-alpine AS build
COPY . /temp
RUN gradle build --no-daemon
This will create a docker container, copy the complete docker build context into the container, and run Gradle. You did not show the complete console output, so I can only guess that this ran successfully. Also, you did not show your build.gradle file, so no one can tell you where to search for the compile result.
FROM java:8-jdk AS TEMP_BUILD_IMAGE
COPY . /tmp
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
With these lines you create the next stage's docker container, and again you copy your project into the container. But nowhere do I see you transporting the build output from the first stage into the second stage. As this is missing, the resulting container of course does not contain the build result, and you believe the build did not happen.
You need to add a line such as
COPY --from=build /whateverPointsToYourBuildOutput /whereverYouWantItInTheContainer
See https://docs.docker.com/engine/reference/builder/#copy
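Putting it together, a hedged sketch of the corrected Dockerfile; the /home/gradle/project path and the build/libs output location are assumptions that depend on your build.gradle:
FROM gradle:4.7.0-jdk8-alpine AS build
WORKDIR /home/gradle/project
# --chown so the non-root gradle user of the base image can write the build output
COPY --chown=gradle:gradle . .
RUN gradle build --no-daemon

FROM java:8-jdk
# Transport the build result from the first stage (adjust the wildcard to your artifact name)
COPY --from=build /home/gradle/project/build/libs/*.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]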
I have this simple working Dockerfile:
FROM openjdk:8-jdk-alpine
WORKDIR /data
COPY target/*.jar myapp.jar
ENTRYPOINT ["java","-jar","myapp.jar"]
I build my jar using Maven, either locally or in a pipeline, then use that .jar here. I've seen many examples that install Maven in the Dockerfile and do the build there instead. Doesn't that just make the image larger? Is there a benefit to doing that?
Usually I have a CI/CD server which I use for building my jar file, and then I generate a docker image from it. Building a jar consumes resources, and doing it while building your docker image can take longer depending on your configuration. In a normal CI/CD strategy, build and deploy are different steps. I also believe your docker image should be as lean as possible.
That's my opinion.
I hope I could help you somehow.
I think you are looking for Multi-stage builds.
Example of multistage Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.16
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go ./
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app ./
CMD ["./app"]
Notice the COPY --from=0 ... line: it copies the result of the build that happens in the first container into the second.
These multistage builds are a good idea for builds that need to install their own tools at specific versions.
Example taken from https://docs.docker.com/develop/develop-images/multistage-build/
I use docker-compose to launch different Spring Boot apps.
My docker images are defined with this kind of Dockerfile:
FROM openjdk:8-jdk-alpine
ADD app.jar app.jar
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar"]
However, I would like to benefit from debugging and hot-reload features using something like mvn spring-boot:run, without being dependent on a particular IDE.
What is the best way to accomplish debugging and hot-reloading with Spring Boot in a Docker container without being dependent on a particular IDE?
Notes:
my source files are built into a jar (with Maven) which is copied to a different location containing the definition of my Docker images; meaning my source files are not in the docker image.
the reason I want to develop in the Docker container is that my apps depend on each other, and are configured in the docker-compose environment, so I cannot easily run one app alone outside the docker network and environment.
I thought of mounting a volume containing my Spring Boot projects in the docker containers, and then using mvn spring-boot:run in the container; but I can't prevent Maven from downloading all dependencies from the internet (I tried specifying a local repository containing all my dependencies, without success). I would like to know if this is a decent solution and how to make it work.
You have to follow these steps to build and run a Spring Boot application in docker.
Step-1 : Create a file called Dockerfile in your project.
Step-2 : Write the following code in your Dockerfile:
# Use the official Maven/Java 11 image to create a build artifact.
# https://hub.docker.com/_/maven
FROM maven:3.6-jdk-11 as builder
# Copy local code to the container image.
WORKDIR /app
COPY pom.xml .
COPY src ./src
# Build a release artifact.
RUN mvn package -DskipTests
# Use AdoptOpenJDK for the base image.
# Container support is enabled by default in Java 11 (in Java 8 it required 8u191 or above).
# https://hub.docker.com/r/adoptopenjdk/openjdk11
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM adoptopenjdk/openjdk11:alpine-slim
# Copy the jar to the production image from the builder stage.
COPY --from=builder /app/target/your-app-name*.jar /your-app-name.jar
# Run the web service on container startup.
CMD ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/your-app-name.jar"]
Step-3 : Start your Docker Desktop application.
Step-4 : Open your terminal or Windows PowerShell, then go to the project directory.
Step-5 : Run the following command to create the image for your application (you must have an internet connection to download all dependencies):
docker build -f Dockerfile -t your-app-name .
Step-6 : After the image is created successfully, run the following command to start the image in a Docker container:
docker run -p <host-port>:<app-port> <image-name>
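For example, if the application listens on port 8080 (an assumption; adjust to your server.port), that would be:
docker run -p 8080:8080 your-app-name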
Following your line of thinking, you can try copying your dependencies from a volume into the project container and then using offline mode, with something like this:
FROM maven:3.6.1-jdk-8-alpine
WORKDIR /app
# copy the Project Object Model file
COPY ./pom.xml ./pom.xml
# copy your application jar
COPY app.jar app.jar
# copy your other files
COPY ./src ./src
# resolve all dependencies now so that later builds can run offline
RUN mvn dependency:go-offline
Apparently it's also possible to configure offline mode globally by setting the offline property in the ~/.m2/settings.xml file. You can set that up, copy your .m2 directory into the image, and reference it when running Maven:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
  <offline>true</offline>
</settings>
mvn -s ~/.m2/settings.xml -Dmaven.repo.local=~/.m2/repository ...
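For the original use case, a hedged example of running such a container with the host's local repository mounted, so Maven resolves everything from it and the -o (offline) flag prevents downloads (image name and paths are assumptions):
docker run -v ~/.m2:/root/.m2 -p 8080:8080 my-spring-app mvn -o spring-boot:run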
You can find more information here:
https://www.baeldung.com/maven-offline
Specifying Maven's local repository location as a CLI parameter
Context:
It's an application written in Kotlin, using Spring Boot with Maven.
Basically, I'd like to know if what I'm doing makes sense.
Running mvn install creates the target folder with the corresponding jar file.
Therefore the Dockerfile just copies the jar file into the working directory of the container and runs java -jar WHATEVER.jar.
Example of the simple Dockerfile
FROM openjdk:8-jre-alpine
COPY target/app-DEV-SNAPSHOT.jar .
EXPOSE 8089
CMD ["java", "-jar", "./app-DEV-SNAPSHOT.jar"]
But I'd say it makes much more sense to use a multi-stage build: generate the jar file in the first stage and execute it in the second. I tried this second approach, but I'm facing an issue where the main class doesn't exist.
Multi-stage Dockerfile:
FROM maven:3.5.2-jdk-8-alpine as BUILD
ENV APP_HOME=/usr/src/service
COPY ./src /usr/src/service
COPY pom.xml /usr/src/service
WORKDIR /usr/src/service
RUN mvn install
FROM openjdk:8-jre-alpine
COPY --from=BUILD /usr/src/service/target/app-DEV-SNAPSHOT.jar ./
EXPOSE 8080
CMD ["java", "-jar", "./app-DEV-SNAPSHOT.jar"]
Which one is the correct one?
You should use the multistage Dockerfile, the reason being that you want as little dependency on the host system as possible. When you run mvn on the host, you add a dependency on mvn and in turn on Java.
My recommendation would be to use a multistage docker build: build in one stage and copy the artifact into the other.