Use cp in entrypoint for docker run - java

Here is the Dockerfile:
FROM openjdk:11.0.12-jre-slim
COPY target/app.jar /app.jar
COPY configs configs
ENTRYPOINT ["java","-jar","/app.jar"]
The configs folder contains JSON configuration files for the Java application.
The docker build command is:
docker build --build-arg -f ~/IdeaProjects/app --no-cache -t app:latest
And the run command is:
docker run --entrypoint="cp configs var/opt/configs/ && java -jar app.jar" app:latest
Let's set aside the option of copying the configs in the Dockerfile via the COPY command; unfortunately, this must be done using --entrypoint.
An error occurs when the docker run command is executed:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "cp configs var/opt/configs/ && java -jar app.jar": stat cp configs var/opt/configs/ && java -jar app.jar: no such file or directory: unknown.
Could you explain why the error occurred in this case?

I would do this with an entrypoint wrapper script. A Dockerfile can have both an ENTRYPOINT and a CMD; if you have both, the CMD gets passed as arguments to the ENTRYPOINT. This means you can make the ENTRYPOINT a shell script that does first-time setup, then ends with exec "$@" to replace itself with the CMD.
#!/bin/sh
# docker-entrypoint.sh
# copy the configuration to the right place
cp configs var/opt/configs/
# run the main container command
exec "$@"
In the Dockerfile, make sure to COPY the script in (it should be checked in to source control as executable) and set it as the ENTRYPOINT.
...
COPY docker-entrypoint.sh .
# ENTRYPOINT must use JSON-array syntax
ENTRYPOINT ["./docker-entrypoint.sh"]
# what was previously ENTRYPOINT becomes CMD
CMD ["java", "-jar", "/app.jar"]
When you run the container it's straightforward to replace the CMD, so you can double-check that this is doing the right thing by running an interactive shell in place of the java application.
docker run -v "$PWD/alt-configs:/configs" --rm -it my-image sh
If you do need to override the command like this at docker run time, the command you show uses && to run two commands consecutively. This needs to run a shell to be understood correctly, and in this context you need to manually provide a /bin/sh -c wrapper.
I would still recommend changing ENTRYPOINT to CMD in your Dockerfile; then you could run a relatively straightforward
docker run \
... \
-v "$PWD/alt-configs:/configs" \
my-image \
/bin/sh -c 'cp configs var/opt/configs && java -jar /app.jar'
If you do use --entrypoint, it only takes the first word of the command, and it is a Docker option so it needs to come before the image name. I'd recommend designing your image to avoid needing this awkward construct.
docker run \
... \
-v "$PWD/alt-configs:/configs" \
--entrypoint /bin/sh \
my-image \
-c 'cp configs var/opt/configs && java -jar /app.jar'
Your proposed command has problems because it tries to pass the entire command, embedded spaces and shell operators included, as a single word. The OS-level process handling then looks for an executable file with spaces and ampersands in its filename, hence the "no such file or directory" error.
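Concretely, a rough sketch of what Docker is being asked to do (image name and paths taken from the question; syscall details simplified):
# Docker treats the whole --entrypoint value as a single word, so it effectively runs
#   argv[0] = "cp configs var/opt/configs/ && java -jar app.jar"
# and the kernel then looks for an executable file with that literal name.
# A shell is what splits the words and interprets '&&', so the working form is:
docker run --entrypoint /bin/sh app:latest \
-c 'cp configs var/opt/configs/ && java -jar /app.jar'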

gitlab-runner passes wrong arguments to custom image

I have written a small CLI using Java and Argparse4j, and packaged it in Docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image:<tag>#sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it would need to be specified in multiple files, so I would rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive, as it would need to be specified in multiple files, so I would rather fix the issue at the root.
You can change your Dockerfile instead to keep the default ENTRYPOINT (openjdk:18 doesn't define any entrypoint, so it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify the full path to the jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in Gitlab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh
    # This command is run from within your container
    # Note that script is not an argument passed on your image startup
    # but independent commands run within your container using a shell
    - run.sh download -f zip -u true test-download.zip
Notes:
Gitlab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project will be cloned. Using ./ will look for the script and jar in the current directory at the moment your command is run, but they won't be found if the command isn't run from /opt/app. Specifying the full path to the jar and adding your run.sh script to PATH makes sure they'll be found wherever you run run.sh from. Alternatively, you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without an ENTRYPOINT you won't be able to run Docker commands like this:
docker run "<url>/my-image:<tag>" download ...
You'll need to specify either a COMMAND or --entrypoint, such as:
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint on your job seems a much simpler and more straightforward solution. If it's needed in multiple files, you may leverage Gitlab's extends and include.
And now for the fun part
what is happening
When Gitlab runs your container for a job, it will use the entrypoint defined in your Dockerfile by default. From the doc:
1. The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the .gitlab-ci.yml file.
2. The runner attaches itself to a running container.
3. The runner prepares a script (the combination of before_script, script, and after_script).
4. The runner sends the script to the container’s shell stdin and receives the output.
And what the doc doesn't say is that Gitlab will try to use various forms of sh as the Docker command. In short, for step 1 it's like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work because Gitlab will use the default entrypoint, and the final command run in Docker is:
./run.sh sh
Where ./run.sh is the entrypoint from Dockerfile and sh is the command provided by Gitlab. It causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work as the following command would be run inside your container:
download -f zip -u true test-download.zip
# the 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), Gitlab essentially tries to inject a shell script that performs a check for which shell is available.
Now, my solution is in no way elegant, but does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$#"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
echo "this block will only execute in a CI environment"
exec /bin/bash
else
echo "Not in CI. Running the image normally"
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$#"
fi
This works because Gitlab, in its list of predefined variables, provides a CI env var that is set when the script is running on CI. By doing so, I skip the java invocation but leave it in for the case where I need it when not on CI.
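If you want to check both branches locally before pushing, a quick sketch (the image reference is a placeholder, and CI is simply set by hand to mimic Gitlab):
# CI branch: prints the CI message and drops into an interactive bash
docker run --rm -it -e CI=true <url>/my-image:<tag>
# non-CI branch: runs the jar with the given arguments
docker run --rm <url>/my-image:<tag> download -f zip -u true test-download.zip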
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and can use it in all my projects that require this tool.
I am not sure, though, why I need to "echo" the script for the CLI and can't simply copy it. For some reason the env variables were not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
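For comparison, a minimal sketch of the COPY-based variant alluded to above (the file name i18n.sh is illustrative; $JAVA_OPTS and $PROJECT_HOME are expanded by the shell at run time either way, so the main things to get right are the shebang and the executable bit):
#!/bin/bash
# i18n.sh, checked in next to the Dockerfile
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
And in the Dockerfile, instead of the RUN echo line:
COPY i18n.sh /usr/bin/i18n
RUN chmod +x /usr/bin/i18n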
Try to wrap your script in single quotes:
script:
  - 'download -f zip -u true test-download.zip'
EDIT:
Oh, this open bug in Gitlab could be relevant to you.

Running java by arguments in Docker

Let's say we have the following Dockerfile for the purpose of creating a Java image and compiling two classes.
FROM openjdk:latest
COPY src JavaDocker
WORKDIR JavaDocker
RUN mkdir -p bin
RUN javac -d bin ./com/myapp/HelloWorld1.java
RUN javac -d bin ./com/myapp/HelloWorld2.java
WORKDIR bin
ENTRYPOINT java
How can I run either of these two compiled classes?
I'm using the command: docker run myapp-image "com.myapp.Server"
And I get:
Usage: java [options] <mainclass> [args...]
(to execute a class)
or java [options] -jar <jarfile> [args...]
(to execute a jar file)
or java [options] -m <module>[/<mainclass>] [args...]
java [options] --module <module>[/<mainclass>] [args...]
(to execute the main class in a module)
or java [options] <sourcefile> [args]
(to execute a single source-file program)
Arguments following the main class, source file, -jar <jarfile>,
-m or --module <module>/<mainclass> are passed as the arguments to
main class.
I'd suggest building a separate image per application; that can help clarify what the image is supposed to do. I also generally recommend using CMD over ENTRYPOINT.
So a Dockerfile that runs only the first application could look like:
FROM openjdk:latest
# Prefer an absolute path for clarity.
WORKDIR /JavaDocker
# Set up the Java class path.
RUN mkdir bin
ENV CLASSPATH=/JavaDocker/bin
# Use a relative path as the target, to avoid repeating it.
# (If you change the source code, repeating `docker build` will
# skip everything before here.)
COPY src .
# Compile the application.
RUN javac -d bin ./com/myapp/HelloWorld1.java
# Set the main container command.
CMD ["java", "com.myapp.HelloWorld1"]
What if you do have an image that contains multiple applications? If you use CMD here, it's very easy to provide an alternate command when you run the image:
docker run myapp-image \
java com.myapp.HelloWorld2
# Wait, what's actually in this image?
docker run --rm myapp-image \
ls -l bin/com/myapp
I generally recommend reserving ENTRYPOINT for a wrapper script that does some first-time setup, then runs exec "$@" to run a normal CMD. There's an alternate pattern of giving a complete command in ENTRYPOINT, and using CMD to provide its arguments. In both of these cases ENTRYPOINT needs to be JSON-array syntax, not shell syntax.
# JSON-array syntax here too
ENTRYPOINT ["java", "com.myapp.HelloWorld1"]
CMD ["-argument", "to-program-1"]
docker run myapp-image \
-argument=different -options
but it's harder to make that image do something else:
# the first word of the command goes in --entrypoint, before the image name;
# the rest of the arguments come after the image name
docker run \
--entrypoint ls \
myapp-image \
-l bin/com/myapp
docker run \
--entrypoint java \
myapp-image \
com.myapp.HelloWorld2
Your original Dockerfile will probably work if you change the ENTRYPOINT line from shell to JSON-array syntax; using shell syntax causes the CMD part to be ignored (including any command passed after the image name on docker run). You might find it easier to make one complete application invocation the default and include the java command yourself when you need to run the other.
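For example, a minimal sketch of that change, keeping the rest of the original Dockerfile (WORKDIR is bin, so the default classpath of . already finds the compiled classes):
# exec (JSON-array) form: anything passed after the image name is appended as arguments
ENTRYPOINT ["java"]
# optional default class, replaced by whatever you pass on the command line
CMD ["com.myapp.HelloWorld1"]
Then docker run myapp-image runs HelloWorld1, and docker run myapp-image com.myapp.HelloWorld2 runs the other class.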

How can I allow a user to run a Docker application with different arguments

I have built a Docker image for my Java application - https://bitbucket.org/ijabz/songkongdocker/src/master/Dockerfile
The last line is
CMD /opt/songkong/songkongremote.sh
the (simplified) contents of songkongremote.sh are:
#!/bin/sh
umask 000
./jre/bin/java -jar lib/songkong-6.9.jar -r
and it works fine.
However, I have a customer who wants to run songkong with the -m option and a path, e.g.:
#!/bin/sh
umask 000
./jre/bin/java -jar lib/songkong-6.9.jar -m /users/music
So is there a way a single Docker image can allow two different commands to be run, or do I have to build another image?
Either way, how do I allow /users/music to be provided by the user?
The full contents of songkongremote.sh are a bit more complicated, since more options have to be passed to Java:
#!/bin/sh
umask 000
./jre/bin/java -XX:MaxRAMPercentage=60 -XX:MetaspaceSize=45 -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog -Dorg.jboss.logging.provider=jdk -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging -Dhttps.protocols=TLSv1.1,TLSv1.2 --add-opens java.base/java.lang=ALL-UNNAMED -jar lib/songkong-6.9.jar -r
Update
I followed Cascader's answer and it has done something (look at the Execution Command), but songkong.sh is acting as if no parameters have been passed at all (rather than the -r option being passed).
songkong.sh (renamed from songkongremote.sh) now contains
#!/bin/sh
umask 000
./jre/bin/java -XX:MaxRAMPercentage=60 -XX:MetaspaceSize=45 -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog -Dorg.jboss.logging.provider=jdk -Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging -Dhttps.protocols=TLSv1.1,TLSv1.2 --add-opens java.base/java.lang=ALL-UNNAMED -jar lib/songkong-6.10.jar "$@"
and the end of Dockerfile is now
EXPOSE 4567
ENTRYPOINT ["/sbin/tini"]
# Config, License, Logs, Reports and Internal Database
VOLUME /songkong
# Music folder should be mounted here
VOLUME /music
WORKDIR /opt/songkong
ENTRYPOINT /opt/songkong/songkong.sh
CMD ["-r"]
I don't understand whether it is okay that there are two entrypoints, or the significance of the /sbin/tini one.
So is there a way a single Docker image can allow two different commands
to be run, or do I have to build another image?
Yes, you can handle this using a single Dockerfile, based on the command, with the help of an entrypoint script.
entrypoint.sh
#!/bin/sh
umask 000
if [ "$1" = "-m" ]; then
  echo "starting container with -m option"
  ./jre/bin/java -jar lib/songkong-6.9.jar "$@"
else
  ./jre/bin/java -jar lib/songkong-6.9.jar -r
fi
So if the docker run command looks like this:
docker run -it --rm myimage -m /users/music
it will execute the first branch.
But the [ "$1" = "-m" ] check will break if the user passes "-m /users/music" in double quotes, as the condition only checks the first argument; you can adjust this accordingly.
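One possible adjustment, sketched here, is a case pattern, so the -m branch is taken even when the option and its value arrive as one quoted argument (the jar still receives the arguments exactly as given):
#!/bin/sh
umask 000
case "$1" in
  -m*)
    echo "starting container with -m option"
    ./jre/bin/java -jar lib/songkong-6.9.jar "$@"
    ;;
  *)
    ./jre/bin/java -jar lib/songkong-6.9.jar -r
    ;;
esac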
So the Dockerfile will only contain the entrypoint:
ENTRYPOINT ["/opt/songkong/songkongremote.sh"]
Or you can specify a CMD in the Dockerfile:
ENTRYPOINT ["/opt/songkong/songkongremote.sh"]
CMD ["-m", "/song/myfile"]
Update:
You cannot define two entrypoints per Docker image; the first one will be ignored.
Now, for the arguments from CMD to be processed, you should use the array-type syntax for ENTRYPOINT, otherwise they will seem empty:
ENTRYPOINT ["/opt/songkong/songkong.sh"]
CMD ["-r"]
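Putting the pieces together, a sketch of how the end of the Dockerfile could look if you want to keep tini as the init process (the -- is tini's conventional separator before the child command):
EXPOSE 4567
# Config, License, Logs, Reports and Internal Database
VOLUME /songkong
# Music folder should be mounted here
VOLUME /music
WORKDIR /opt/songkong
# a single ENTRYPOINT: tini supervises songkong.sh, CMD supplies the default arguments
ENTRYPOINT ["/sbin/tini", "--", "/opt/songkong/songkong.sh"]
CMD ["-r"]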
You may try to use ENTRYPOINT for the command, and CMD only for the arguments:
ENTRYPOINT ["/sbin/tini", "/opt/songkong/songkongremote.sh" ]
# Arguments are passed here
CMD []
You also need to handle arguments in your .sh file:
#!/bin/sh
umask 000
./jre/bin/java -jar lib/songkong-6.9.jar "$@"
You can switch to ENTRYPOINT instead of CMD. Your Dockerfile will look like:
...
ENTRYPOINT [ "/opt/songkong/songkongremote.sh" ]
...
Your customers would then run your image as:
docker run img -r
or
docker run img -m /users/music
You can even alter your bash script to accept arguments and pass those arguments using the same method.
If you need to provide default arguments and the ability to override them on demand, the following approach will work:
Dockerfile:
...
ENTRYPOINT [ "/opt/songkong/songkongremote.sh" ]
CMD ["-r"]
...
Your modified script:
#!/bin/sh
umask 000
./jre/bin/java \
-XX:MaxRAMPercentage=60 \
-XX:MetaspaceSize=45 \
-Dcom.mchange.v2.log.MLog=com.mchange.v2.log.jdk14logging.Jdk14MLog \
-Dorg.jboss.logging.provider=jdk \
-Djava.util.logging.config.class=com.jthink.songkong.logging.StandardLogging \
-Dhttps.protocols=TLSv1.1,TLSv1.2 \
--add-opens java.base/java.lang=ALL-UNNAMED \
-jar lib/songkong-6.9.jar "$@"
Then, when customers run docker run img, the CMD parameter will kick in, but when they run docker run img -m /users/music, the entrypoint command will be executed with the provided arguments.
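In docker run terms (my-image stands in for your built image):
# default CMD: runs songkongremote.sh -r
docker run my-image
# user arguments replace the CMD: runs songkongremote.sh -m /users/music
docker run my-image -m /users/music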

Docker exits without throwing an error or executing code

I'm trying to run a Talend job inside a Docker container. The build goes OK, but when I run the container it just exits without any error. Here is my Dockerfile:
FROM store/oracle/serverjre:8
ARG talend_job=export_data_xml
ENV TALEND_JOB ${talend_job}
ENV ARGS ""
WORKDIR /opt/talend
COPY . /opt/talend
### Install Talend Job
RUN yum install -y unzip && \
unzip ${TALEND_JOB}.zip && \
rm -rf ${TALEND_JOB}.zip && \
chmod +x ${TALEND_JOB}/${TALEND_JOB}_run.sh
VOLUME /data
CMD ["/bin/sh","-c","${TALEND_JOB}/${TALEND_JOB}_run.sh ${ARGS}"]
Run command:
docker run -it demo:latest
It doesn't execute any code or throw an error. Any idea what can be wrong, or at least how to debug it?
Thanks.
My best guess would be that it is a problem with the file paths.
You can debug it by running your image with:
docker run -ti demo:latest /bin/sh
in order to get inside the container. Then check whether export_data_xml_run.sh is in the right path (/opt/talend/export_data_xml) and try to run it from there.
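For example, once inside the container (paths assume the default export_data_xml build argument from the Dockerfile above):
ls -l /opt/talend/export_data_xml/export_data_xml_run.sh
cd /opt/talend && ./export_data_xml/export_data_xml_run.sh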
Some things worth a try:
Add some echo statements after each command, e.g.:
RUN echo "About to install unzip..." && yum install -y unzip && echo "unzip installed" \
...
If it's failing here, you should at least see the echo statements.
Below, it looks like you're setting pwd to /opt/talend, then performing a COPY of whatever is in pwd to the same dir... effectively a null operation.
WORKDIR /opt/talend
COPY . /opt/talend
For this last line, be sure to pass the string args as individual strings. I've been working on something recently where I passed two strings as "command, input", and it was treated as a single string rather than two individual strings (which is what I actually wanted):
CMD ["/bin/sh","-c","${TALEND_JOB}/${TALEND_JOB}_run.sh ${ARGS}"]
You could also log into the Docker container to try and debug it once it's running, using:
docker exec -it [containerId] sh
Once inside the container you can run various commands to validate everything is as you expect it to be.
HTH

How to pass System property to docker containers?

So I know you can pass environment variables to a Docker container using -e, like:
docker run -it -e "var=var1" myDockerImage
But I need to pass a System Property to a docker container, because this is how I run my JAR:
java -Denvironment=dev -jar myjar.jar
So how can I pass a -D System property in Docker? Like:
docker run -it {INSERT Denvironment here} myDockerImage
Use the variable you passed into the container on the java command:
docker run -it -e "ENV=dev" myDockerImage
java -Denvironment=$ENV -jar myjar.jar
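For this to work, the java line has to live somewhere the variable is expanded at run time, for example a shell-form CMD in the Dockerfile (the jar path and variable name are illustrative):
CMD java -Denvironment=$ENV -jar /myjar.jar
With that in place, docker run -it -e "ENV=dev" myDockerImage starts the jar with -Denvironment=dev.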
One more way to do it, if running under Tomcat, is to set your system properties in your Dockerfile using ENV JAVA_OPTS, like this:
ENV JAVA_OPTS="-Djavax.net.ssl.trustStore=C:/tomcatDev.jks -D_WS_URL=http://some/url/"
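Tomcat's startup scripts read JAVA_OPTS, and since it is a plain environment variable you can still override the Dockerfile default per container, e.g. (image name is a placeholder):
docker run -e JAVA_OPTS="-Denvironment=dev -Djavax.net.ssl.trustStore=/certs/dev.jks" my-tomcat-image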
Hope it helps!
One can also use the following start.sh as the ENTRYPOINT for the Docker container; make sure to use the array syntax, e.g.:
Dockerfile:
...
ENTRYPOINT ["/start.sh"]
The actual start.sh script:
#!/bin/bash
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
exec $JAVA_HOME/bin/java -jar myjar.jar "$@"
Then you can just pass the Java system properties directly to your application as docker run container arguments:
docker run myDockerImage "-Dvar=var1"
Have a start.sh file, e.g.:
#!/usr/bin/env sh
exec java -Djava.security.egd=file:/dev/./urandom $* -jar /app.jar
In your Dockerfile:
...
COPY start.sh /start.sh
RUN chmod a+rx /start.sh
ENTRYPOINT ["/start.sh"]
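With that in place, system properties can again be passed as plain container arguments, e.g. (my-image is a placeholder):
docker run my-image "-Denvironment=dev" "-Dvar=var1"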
