"Activemq not found" error after running custom Docker image - java

We have a legacy application that I am trying to dockerize. The application's jar has both the application and ActiveMQ bundled together (we cannot change the way it is built), and it has certain installation steps. I created the following initial Dockerfile for this; however, I am facing an issue (described after the Dockerfile) when I run the image.
The Dockerfile looks like this:
FROM registry:4000/openjdk:8-jre-alpine
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
CMD ["/bin/sh", "-c", "activemq"]
The RUN command extracts the application and ActiveMQ at that location into a folder called fe.
The WORKDIR instruction seems to set the working directory to activemq/bin. I confirmed this with a shell script that runs when the image is started; in it I run ls and pwd to check the contents and the location.
However, when I run the image, which triggers the CMD instruction, I get this error:
/bin/sh: activemq: not found
What can be the possible issue here?

If activemq is an executable in your bin directory (and not in PATH) then you need to edit your CMD:
CMD ["/bin/sh", "-c", "./activemq"]
Also make sure that your script is executable.

Found the problem. The activemq script starts with #!/bin/bash and I was trying to run it with sh. I need to first install bash in the image and then run the activemq script with bash.
I got the hint from this answer: docker alpine /bin/sh script.sh not found
Now it gets further, however the container dies immediately after starting. Not sure what the issue is; it doesn't even give any error.
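A minimal sketch of what tends to fix both problems on Alpine, assuming the bundled ActiveMQ ships the standard activemq launcher script: install bash, and start the broker in the foreground. The plain activemq invocation forks the broker into the background, so PID 1 exits and the container stops; the console subcommand below is the stock ActiveMQ way of staying in the foreground and is an assumption about this bundled copy.
FROM registry:4000/openjdk:8-jre-alpine
# bash is needed because the activemq script declares #!/bin/bash
RUN apk add --no-cache bash
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
# installer step extracts the app and ActiveMQ into /home/appuser/fe
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
# run the broker in the foreground so the container keeps running
CMD ["/bin/bash", "-c", "./activemq console"]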

gitlab-runner passes wrong arguments to custom image

I have written a small CLI using Java, Argparse4j, and packaged it in docker using this Dockerfile:
FROM openjdk:18
ENV JAR_NAME "my-jar-with-dependencies.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
ENTRYPOINT ["./run.sh"]
The last line of the Dockerfile then invokes a simple bash script:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
The CLI I wrote has three main actions: upload, download and apply. Therefore argparse4j expects one of these actions to be passed as the first parameter, i.e.
java -jar cli.jar download #... whatever other argument
This works just fine when running the docker image locally, but completely fails when running in the CI pipeline:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - download -f zip -u true test-download.zip
This is the error that is returned:
Executing "step_script" stage of the job script 00:01
Using docker image sha256:<sha> for <url>/my-image:<tag> with digest <url>/my-image@sha256:<sha> ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
usage: tool [-h] ACTION ...
tool: error: invalid choice: 'sh' (choose from 'upload', 'download',
'apply')
I have tried following the suggestion in gitlab-runner doesn't run ENTRYPOINT scripts in Dockerfile but I can't seem to get the CI part to work correctly.
I would like to avoid using the entrypoint directive, as it would need to be specified in multiple files, so I would rather fix the issue at the root.
Does anyone have an idea of what is happening or how to fix it?
I would like to avoid using the entrypoint directive, as it would need to be specified in multiple files, so I would rather fix the issue at the root.
You can change your Dockerfile instead to keep the default ENTRYPOINT (as openjdk:18 doesn't define any entrypoint, it will be empty):
FROM openjdk:18
# ...
# ENTRYPOINT ["./run.sh"] # remove this
# Add run.sh to path to be able to use `run.sh` from any directory
ENV PATH="${PATH}:/opt/app"
And update your run.sh to specify the full path to the jar:
#!/bin/bash
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"
Now your container will start in Gitlab without having to specify the entrypoint keyword for the job. You can then set up something like this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    # Specify the command using run.sh
    # This command is run from within your container
    # Note that script is not an argument passed on your image startup
    # but independent commands run within your container using a shell
    - run.sh download -f zip -u true test-download.zip
Notes:
Gitlab won't run your script in the Dockerfile's WORKDIR but in a dedicated directory where your project will be cloned. Using ./ will look for the script and jar in the current directory at the moment your command is run, but they wouldn't be found if not run from /opt/app. Specifying the full path to the jar and adding your run.sh script to PATH makes sure they'll be found wherever you run run.sh from. Alternatively you could run cd /opt/app in your job's script, but it may cause unwanted side effects.
Without ENTRYPOINT you won't be able to run Docker commands like this:
docker run "<url>/my-image:<tag>" download ...
You'll need to specify either a COMMAND or --entrypoint, such as:
docker run "<url>/my-image:<tag>" run.sh download ...
docker run --entrypoint run.sh "<url>/my-image:<tag>" download ...
You specified not wanting to do this, but overriding the image entrypoint in your job seems a much simpler and more straightforward solution. With multiple files you may leverage Gitlab's extends and include.
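If you did go the override route, here is a minimal sketch of a job-level override (entrypoint: [""] is GitLab's documented way to neutralize the image's ENTRYPOINT; the script line then has to call run.sh explicitly):
download:
  stage: download
  image:
    name: <url>/my-image:<tag>
    entrypoint: [""]
  script:
    - /opt/app/run.sh download -f zip -u true test-download.zip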
And now for the fun part
what is happening
When Gitlab runs your container for a job, it will use the entrypoint defined in your Dockerfile by default. From the docs:
1. The runner starts a Docker container using the defined entrypoint. The default from Dockerfile that may be overridden in the .gitlab-ci.yml file.
2. The runner attaches itself to a running container.
3. The runner prepares a script (the combination of before_script, script, and after_script).
4. The runner sends the script to the container's shell stdin and receives the output.
And what the doc doesn't say is that Gitlab will try to use various forms of sh as the Docker command. In short, for step 1 it's like running this Docker command:
# Gitlab tries to run container for your job
docker run -it "<url>/my-image:<tag>" sh
It doesn't work, as Gitlab will use the default entrypoint, and the final command run in Docker is:
./run.sh sh
Where ./run.sh is the entrypoint from Dockerfile and sh is the command provided by Gitlab. It causes the error you see:
tool: error: invalid choice: 'sh' (choose from 'upload', 'download', 'apply')
You never reach your job's script (step 4). See ENTRYPOINT vs. CMD for details.
Furthermore, the script you define is a command itself. Even if your container started, it wouldn't work as the following command would be run inside your container:
download -f zip -u true test-download.zip
# 'download' command doesn't exist
# You probably want to run instead something like:
/opt/app/run.sh download -f zip -u true test-download.zip
So, after a bit of research, I have been able to find a solution that works for me.
From my research (and as Pierre B. pointed out in his answer), Gitlab essentially tries to inject a shell script that performs a check for which shell is available.
Now, my solution is in no way elegant, but does achieve what I wanted. I modified the Dockerfile like so:
FROM openjdk:18-bullseye
ENV JAR_NAME "my-jar.jar"
ENV PROJECT_HOME /opt/app
RUN mkdir -p $PROJECT_HOME
WORKDIR $PROJECT_HOME
COPY run.sh $PROJECT_HOME/run.sh
RUN chmod +x $PROJECT_HOME/run.sh
COPY target/$JAR_NAME $PROJECT_HOME/cli.jar
RUN echo '#!/bin/bash \njava $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar $PROJECT_HOME/cli.jar "$@"' > /usr/bin/i18n && \
chmod +x /usr/bin/i18n
ENTRYPOINT ["./run.sh"]
And also modified the run.sh script this way:
#!/bin/bash
if [[ -n "$CI" ]]; then
  echo "this block will only execute in a CI environment"
  exec /bin/bash
else
  echo "Not in CI. Running the image normally"
  java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar ./cli.jar "$@"
fi
This works because Gitlab, in its list of predefined variables, provides a CI env var that is set when the script is running in CI. By checking it, I skip the java invocation but leave it in for the case where I need it outside CI.
Now when I need to use my image, all I need to specify in my .gitlab-ci.yml file is this:
download:
  stage: download
  image: <url>/my-image:<tag>
  variables:
    URL: <URL>
    API_KEY: <API_KEY>
    CI_DEBUG_TRACE: "true"
  script:
    - i18n download -f zip -u true test-download.zip # This calls the script that was injected in the Dockerfile
This way I essentially mimic an actual CLI, and can use it in all my projects that require this tool.
I am not sure, though, why I need to "echo" the script for the CLI and can't simply copy it in. For some reason the env variables were not passed down, and I couldn't spend any more time debugging it. So for now, it will stay like this.
If you have any recommendations on how to clean this up, please leave some comments and I will edit my answer!
Try to wrap your script in single quotes:
script:
  - 'download -f zip -u true test-download.zip'
EDIT:
Oh, this open bug in Gitlab could be relevant to you.

Build Docker Image with Java and Node.js together (Error: java: not found)

I have an API application that runs both Node and Java files (jars) together. When I run the application locally it works, but once I create a Docker image for it, I get an error from Postman.
parsingError: Command failed: java -jar Binary\***.jar -s Files\*** -v -j
/bin/sh: 1: java: not found
I believe I'm missing the Java configuration in the Dockerfile, but I don't know how to put the configuration for Java and Node together. Any help will be appreciated!
Here is the Docker config.
FROM node:14.18.1
WORKDIR /code
ENV PORT 3000
COPY package.json /code/package.json
RUN npm install
COPY . /code
CMD ["node","app.js"]
Note: I'm using Temurin JDK
There is probably no Java installed inside node:14.18.1, so you have to add another step to the Dockerfile to install one. Note that the default node:14.18.1 image is Debian-based, so use apt-get (apk only applies to the -alpine variants):
RUN apt-get update && apt-get install -y default-jre
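A sketch of the full Dockerfile with a JRE added (default-jre is an assumption; if you specifically need a Temurin runtime, install an eclipse-temurin package or copy one in instead):
FROM node:14.18.1
# node:14.18.1 is Debian-based, so install a JRE with apt-get
RUN apt-get update \
&& apt-get install -y --no-install-recommends default-jre \
&& rm -rf /var/lib/apt/lists/*
WORKDIR /code
ENV PORT 3000
COPY package.json /code/package.json
RUN npm install
COPY . /code
CMD ["node","app.js"]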

Docker exits without throwing an error or executing code

I'm trying to run a Talend job inside Docker. The build goes OK, but when I run the container it just exits without any error. Here is my Dockerfile:
FROM store/oracle/serverjre:8
ARG talend_job=export_data_xml
ENV TALEND_JOB ${talend_job}
ENV ARGS ""
WORKDIR /opt/talend
COPY . /opt/talend
### Install Talend Job
RUN yum install -y unzip && \
unzip ${TALEND_JOB}.zip && \
rm -rf ${TALEND_JOB}.zip && \
chmod +x ${TALEND_JOB}/${TALEND_JOB}_run.sh
VOLUME /data
CMD ["/bin/sh","-c","${TALEND_JOB}/${TALEND_JOB}_run.sh ${ARGS}"]
Run command:
docker run -it demo:latest
It doesn't execute the code or throw an error. Any idea what can be wrong, or at least how to debug it?
Thanks.
My best guess would be that it is a problem with the file paths.
You can debug it by running your image with:
docker run -ti demo:latest /bin/sh
in order to get inside the container. Check whether export_data_xml_run.sh is in the right path (/opt/talend/export_data_xml) and try to run it from there.
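For example, a minimal debugging session (the paths follow the default export_data_xml job name from the Dockerfile):
docker run -ti demo:latest /bin/sh
# inside the container:
ls /opt/talend/export_data_xml
cd /opt/talend/export_data_xml
./export_data_xml_run.sh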
Some things worth a try:
Add some echo statements after each command, e.g.:
RUN echo "About to install unzip..." && yum install -y unzip && echo "unzip installed" \
...
If it's failing here, you should at least see the echo statements.
Below, it looks like you're setting pwd to /opt/talend, then performing a COPY of whatever is in pwd to the same dir... effectively a null operation.
WORKDIR /opt/talend
COPY . /opt/talend
For this last line, be sure to pass the string args as individual strings. I've been working on something recently where I passed two strings as "command, input", and they were treated as a single string rather than two individual strings (which is what I actually wanted):
CMD ["/bin/sh","-c","${TALEND_JOB}/${TALEND_JOB}_run.sh ${ARGS}"]
You could also log into the Docker container to try and debug it once it's running, using:
docker exec -it [containerId] sh
Once inside the container you can run various commands to validate everything is as you expect it to be.
HTH

Building a docker image from dockerfile, can't get java to run correctly, but can run java from image as command line parameters in docker run command

I am new to Docker and Dockerfiles, having just started trying to write them. I have built a simple Java console application and can successfully build a Docker image from a Dockerfile, but if I include
CMD ["java","-jar","app.jar"]
when I try to run the image I always get a /bin/sh error, typically "java not found" or the like.
However, when I don't include the CMD line and just use this Dockerfile to build my image
FROM openjdk:8-jre-alpine
COPY app.jar /app.jar
and then run
docker run -it --rm my-container:tag
I can then run
java -jar app.jar
and the application runs as expected.
I can also run
docker run -it --rm my-container:tag java -jar app.jar
and the application runs as expected.
Every guide I read says I should be able to use CMD or ENTRYPOINT as written above, but nothing ever works.
What might I be missing in this simple example?
Thank you,
Trevor
EDIT: I am running Docker version 18.06.1-ce-mac73 (26764) on macOS Sierra. I am not positive that Docker works this way, but I have two image versions in my public Docker Hub. The Dockerfile for v1 is:
FROM openjdk:8-jre-alpine
COPY 454calendar.jar app.jar
The Dockerfile for v2 is:
FROM openjdk:8-jre-alpine
ENV PROJECT_DIR=/app
WORKDIR $PROJECT_DIR
COPY 454calendar.jar $PROJECT_DIR
If I add
CMD [“java”,”-jar”,”454calendar.jar”]
to the v2 Dockerfile and rebuild, I get this error with the docker run command.
/bin/sh: [“java”,”-jar”,”454calendar.jar”]: not found
Without the CMD line, I can run the container and it starts right in the /app working directory, where I can run the java command and execute the program.
The two versions of the container in my public docker repository do not have the CMD line in their respective dockerfiles.
The solution was maddeningly simple. Thanks to @Rakesh, I checked the configuration for TextEdit on macOS and saw that Smart Quotes was turned on. Once I turned off that option and retyped the double quotes, then rebuilt and ran the Docker container, the application started up just as expected.
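For reference, the difference is only the quote characters. With curly "smart" quotes the CMD line is not valid exec-form JSON, so Docker falls back to treating it as shell form and /bin/sh tries to run the literal string, which produces the "not found" error above:
# broken: smart quotes, not valid JSON, handed to /bin/sh as a literal string
# CMD [“java”,”-jar”,”454calendar.jar”]
# fixed: straight quotes, valid exec form
CMD ["java","-jar","454calendar.jar"]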
I don't see any problem with your approach. I was able to get a HelloWorld application running with the Dockerfile below.
FROM openjdk:8-jre-alpine
RUN mkdir /app
RUN cd /app
COPY HelloWorld.jar /app/HelloWorld.jar
WORKDIR /app
CMD ["java","-jar", "HelloWorld.jar"]
I'm on the following Docker version:
docker -v
Docker version 18.06.1-ce, build e68fc7a
docker-compose -v
docker-compose version 1.22.0, build f46880f

mkdir command with docker

Inside my Docker container, the command mkdir -p -m 755 directoryName creates a directory (shown as a blue file) at the given path. However, outside Docker, when I attempt to create a directory with the same command, mkdir -p -m 755 ContainerID:/root/.../directoryName, it seems to create an executable (shown as a green file) instead.
This is causing trouble because after my "create directory" command I'm copying stuff into the directory, and the command fails when I do it outside of Docker.
This is what my full command looks like when I execute it outside Docker:
mkdir -p -m 755 ContainerID:/root/../dirName && docker cp someImage.jpg ContainerID:/root/../dirName
Any thoughts on how to make this work?
To be honest, I have never heard of such mkdir syntax referencing a different host, but in any case (even if it was supported) I would not use it. You should execute anything you want to run inside a Docker container as docker exec ContainerID mkdir -p -m 755 /root/../dirName
If you want to put several commands inside the same docker exec call you can do it by executing docker exec ContainerID bash -c "whatever && whatever2 && ... whateverX"
Keep in mind that these commands will be executed as the user referenced in the Dockerfile with a USER clause, defaulting to root. There are some images in which the user is set to something different, leading to permission issues when doing things like this. The right approach would depend on what you want to achieve.
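Putting the two steps together, a minimal sketch (ContainerID and /root/appdata/dirName are placeholders for your real container and path):
# create the directory inside the container, then copy the file into it
docker exec ContainerID mkdir -p -m 755 /root/appdata/dirName
docker cp someImage.jpg ContainerID:/root/appdata/dirName/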
Hope that helps! :)
