I'm trying to run the Jenkins image as the jenkins2 user with the following command:
docker run -d \
--name $cname \
--restart=always \
-v jenkins-home2:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 \
-p 50000:50000 \
--user $(id -u jenkins2) \
--network $network \
jenkins/jenkins:lts
Passing a user id has worked so far with other containers, but in Jenkins, Maven fails to properly resolve the user's home directory to reach the local repository (~/.m2/). Instead it uses the ./?/.m2/ directory.
From digging a bit I found out that Java has the system properties set to user.name=? and user.home=?.
If --user is not passed on the docker run line, then those two properties are correctly set to user.name=jenkins and user.home=/var/jenkins_home.
Before I start trying to manually override the Java properties inside the container, I'd like to ask whether there is a more appropriate solution to the problem.
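For reference, one possible direction (a sketch on my part, not something from the question and not verified in this setup) is to bake a passwd entry for that UID into a derived image, so the JVM's user lookup can resolve user.name and user.home:
# Hypothetical derived image; UID 1001 is a placeholder for the host's jenkins2 UID.
# The idea is that a matching /etc/passwd entry lets the JVM resolve user.name/user.home.
FROM jenkins/jenkins:lts
USER root
RUN useradd --uid 1001 -d /var/jenkins_home -M jenkins2
USER jenkins2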
Related
I'm new to Linux and working on containerizing our stack, and essentially here is the problem that I am running into with the code below:
a) I have to run this Dockerfile as a non-root user for Elasticsearch to work (a requirement).
b) If I add USER $USERNAME to the bottom of the file before CMD, I get the error:
"mkdir: cannot create directory ‘/root’: Permission denied
Can not write to /root/.m2/copy_reference_file.log. Wrong volume permissions? Carrying on"
c) If I remove USER $USERNAME from the bottom of the file, then I get the Elasticsearch issue referenced above.
What I am asking is: how can I fix this in my Dockerfile?
# Custom image from Maven on DockerHub
# Language: dockerfile
FROM maven:3.6.3-amazoncorretto-8
# Set the working dir
WORKDIR /app
# Create a non root user
ARG USERNAME=jefferson
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Add Linux dependencies
RUN yum install wget -y
RUN yum install shadow-utils -y
# Create the user
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& yum install sudo -y \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 777 /etc/sudoers.d/$USERNAME \
&& sudo groupadd docker \
&& sudo usermod -aG docker $USERNAME \
&& newgrp docker
# Change to the root folder and edit the settings.xml for Maven
WORKDIR /root/.m2
RUN rm -rf settings.xml
RUN echo '<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" \
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" \
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 \
http://maven.apache.org/xsd/settings-1.0.0.xsd"> \
</settings>' >> settings.xml
WORKDIR /app
COPY . ./
USER $USERNAME
# Run the application
CMD ["mvn", "clean", "verify", "-Pcargo.run", "-X"]
If I understand your needs correctly, the official Maven Docker image gives you a way to do this:
Why not fully use a non-root user to run Maven (and stop trying to write to the root directory)?
On this page: https://hub.docker.com/_/maven, under the header named
Running as non-root
it tells you to set a MAVEN_CONFIG env var and to add the -Duser.home= flag when calling Maven, so that Maven runs without using the root user.
Here is the full Dockerfile modified to use this approach (based on your own Dockerfile):
# Custom image from Maven on DockerHub
# Language: dockerfile
FROM maven:3.6.3-amazoncorretto-8
# Set the working dir
WORKDIR /app
# Create a non root user
ARG USERNAME=jefferson
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Add Linux dependencies
RUN yum install wget -y
RUN yum install shadow-utils -y
ENV MAVEN_CONFIG=/var/maven/.m2
# Create the user
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& yum install sudo -y \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 600 /etc/sudoers.d/$USERNAME \
&& sudo groupadd docker \
&& sudo usermod -aG docker $USERNAME \
&& newgrp docker
# Change to the root folder and edit the settings.xml for Maven
WORKDIR "/var/maven/.m2"
RUN rm -rf settings.xml \
&& chown $USER_UID:$USER_GID .
RUN echo '<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" \
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" \
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 \
http://maven.apache.org/xsd/settings-1.0.0.xsd"> \
</settings>' >> settings.xml
WORKDIR /app
COPY . ./
USER $USERNAME
# Run the application
CMD ["mvn", "clean", "verify", "-Duser.home=/var/maven", "-Pcargo.run", "-X"]
The permissions on the sudoers file you added were too permissive, so I changed them to 600.
As an example, the Maven image page gives the following command line (you don't have to run it in your case; it just starts a new container in interactive mode and mounts the volumes):
docker run -v ~/.m2:/var/maven/.m2 -ti --rm -u 1000 -e MAVEN_CONFIG=/var/maven/.m2 maven mvn -Duser.home=/var/maven archetype:generate
I have Jenkins running as a Docker container; now I want to build a Docker image using a pipeline, but the Jenkins container keeps telling me Docker is not found.
[simple-tdd-pipeline] Running shell script
+ docker build -t simple-tdd .
/var/jenkins_home/workspace/simple-tdd-pipeline#tmp/durable-ebc35179/script.sh: 2: /var/jenkins_home/workspace/simple-tdd-pipeline#tmp/durable-ebc35179/script.sh: docker: not found
Here is how I run my Jenkins image:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins
And the Dockerfile of the Jenkins image is:
https://github.com/jenkinsci/docker/blob/9f29488b77c2005bbbc5c936d47e697689f8ef6e/Dockerfile
You're missing the Docker client. Install it like this in your Dockerfile:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
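The get.docker.com/builds URL above is quite old; as an alternative sketch (the exact version and URL here are my assumptions, so check the current static-binary listing), the client can be pulled from download.docker.com instead:
# hypothetical newer version; verify against https://download.docker.com/linux/static/stable/x86_64/
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-24.0.7.tgz \
 && tar xzvf docker-24.0.7.tgz \
 && mv docker/docker /usr/local/bin \
 && rm -r docker docker-24.0.7.tgz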
In your Jenkins interface go to "Manage Jenkins/Global Tool Configuration"
Then scroll down to Docker Installations and click "Add Docker". Give it a name like "myDocker"
Make sure to check the box which says "Install automatically". Click "Add Installer" and select "Download from docker.com". Leave "latest" in the Docker version. Make sure you click Save.
In your Jenkinsfile add the following stage before you run any docker commands:
stage('Initialize'){
    def dockerHome = tool 'myDocker'
    env.PATH = "${dockerHome}/bin:${env.PATH}"
}
Edit: May 2018
As pointed out by Guillaume Husta, jpetazzo's blog article discourages this technique:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
The Docker client should be installed inside the container as described here. Also, the jenkins user should be in the docker group, so execute the following:
$ docker exec -it -u root my-jenkins /bin/bash
# usermod -aG docker jenkins
and finally restart the my-jenkins container.
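For example, to apply the group change and then verify it (the container name follows the exec command above):
docker restart my-jenkins
# optional check: the jenkins user should now be listed in the docker group
docker exec my-jenkins id jenkins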
Original answer:
You could use the host's Docker engine, as in this Adrian Mouat blog article.
docker run -d \
--name my-jenkins \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This avoids having multiple Docker engine versions on the host and in the Jenkins container.
The problem is in your Jenkins container: it isn't able to use the Docker engine, even if you install Docker from the plugin manager. From what I found researching, there are some alternatives to work around this issue:
1: Build an image from a Docker image that already has Docker pre-installed, such as getintodevops/jenkins-withdocker:lts.
2: Build the image from jenkins/jenkins, mounting the volumes to your host, then install Docker yourself by creating another container with the same volumes and running the install commands there, or use Robert's suggestion:
docker run -p 8080:8080 -p 50000:50000 -v $HOME/.jenkins/:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:latest
3: The simplest option: just bind-mount the Docker binary installed on your host machine into your Jenkins container with -v $(which docker):/usr/bin/docker.
Your docker command should look like this:
docker run \
  --name jenkins --rm \
  -u root -p 8080:8080 -p 50000:50000 \
  -v $(which docker):/usr/bin/docker \
  -v $HOME/.jenkins/:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:latest
Source: https://forums.docker.com/t/docker-not-found-in-jenkins-pipeline/31683
Extra option: it makes little sense if you just want a single Jenkins server, but it's always possible to start from an OS image like Ubuntu and install the Jenkins .war file there, as sketched below.
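A rough sketch of that extra option (the base image, package names, and war URL are my assumptions and untested):
# hypothetical Ubuntu-based Jenkins image; verify the package names and the war URL
FROM ubuntu:22.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends openjdk-11-jre-headless curl \
 && rm -rf /var/lib/apt/lists/*
RUN curl -fsSL -o /usr/share/jenkins.war https://get.jenkins.io/war-stable/latest/jenkins.war
EXPOSE 8080
CMD ["java", "-jar", "/usr/share/jenkins.war"]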
docker run -d \
--group-add docker \
-v $(pwd)/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-p 8080:8080 -p 50000:50000 \
jenkins/jenkins:lts
Just add the --group-add docker option to the docker run command.
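If the docker group name doesn't exist inside the image, a common variant (my addition, not part of the answer above) is to pass the numeric GID that owns the socket on the host instead:
# pass the host GID of /var/run/docker.sock rather than a group name
docker run -d \
  --group-add $(stat -c '%g' /var/run/docker.sock) \
  -v $(pwd)/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -p 8080:8080 -p 50000:50000 \
  jenkins/jenkins:lts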
Add the docker binary path, i.e. -v $(which docker):/usr/bin/docker, to the container's volumes, like this:
docker run -d \
--name my-jenkins \
-v $(which docker):/usr/bin/docker \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This section helped me install docker inside the jenkins container: https://www.jenkins.io/doc/book/installing/docker/#downloading-and-running-jenkins-in-docker
Also, I had to replace FROM jenkins/jenkins:2.303.1-lts-jdk11 in the Dockerfile in step 4(a) with jenkins/jenkins.
There is a Dockerfile:
FROM openjdk:11.0.12-jre-slim
COPY target/app.jar /app.jar
COPY configs configs
ENTRYPOINT ["java","-jar","/app.jar"]
The configs folder contains JSON configs for the Java application.
The build docker command is:
docker build --build-arg -f ~/IdeaProjects/app --no-cache -t app:latest
And the run command is:
docker run --entrypoint="cp configs var/opt/configs/ && java -jar app.jar" app:latest
Let's set aside the option of copying the configs in the Dockerfile via the COPY command; unfortunately, this has to be done using --entrypoint.
An error occurs when the docker run command was executed:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "cp configs var/opt/configs/ && java -jar app.jar": stat cp configs var/opt/configs/ && java -jar app.jar: no such file or directory: unknown.
Could you explain why the error occurred in this case?
I would do this with an entrypoint wrapper script. A Dockerfile can have both an ENTRYPOINT and a CMD; if you do, the CMD gets passed as arguments to the ENTRYPOINT. This means you can make the ENTRYPOINT a shell script that does first-time setup, then ends with exec "$@" to replace itself with the CMD.
#!/bin/sh
# docker-entrypoint.sh
# copy the configuration to the right place
cp configs var/opt/configs/
# run the main container command
exec "$@"
In the Dockerfile, make sure to COPY the script in (it should be checked in to source control as executable) and set it as the ENTRYPOINT.
...
COPY docker-entrypoint.sh .
# must be JSON-array syntax
ENTRYPOINT ["./docker-entrypoint.sh"]
# what was previously ENTRYPOINT
CMD ["java", "-jar", "/app.jar"]
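Putting that together with the Dockerfile from the question, the whole thing might look roughly like this (a sketch; the paths are taken from the question):
FROM openjdk:11.0.12-jre-slim
COPY target/app.jar /app.jar
COPY configs configs
# the wrapper script shown above; commit it with the executable bit set
COPY docker-entrypoint.sh .
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["java", "-jar", "/app.jar"]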
When you run the container it's straightforward to replace the CMD, so you can double-check that this is doing the right thing by running an interactive shell in place of the java application.
docker run -v "$PWD/alt-configs:/configs" --rm -it my-image sh
If you do need to override the command like this at docker run time, the command you show uses && to run two commands consecutively. This needs to run a shell to be understood correctly, and in this context you need to manually provide a /bin/sh -c wrapper.
I would still recommend changing ENTRYPOINT to CMD in your Dockerfile; then you could run a relatively straightforward
docker run \
... \
-v "$PWD/alt-configs:/configs" \
my-image \
/bin/sh -c 'cp configs var/opt/configs && java -jar /app.jar'
If you use --entrypoint, it only takes a single word (the program to run), and it is a docker run option, so it needs to come before the image name. I'd recommend designing your image to avoid needing this awkward construct.
docker run \
... \
-v "$PWD/alt-configs:/configs" \
--entrypoint /bin/sh \
my-image \
-c 'cp configs var/opt/configs && java -jar /app.jar'
Your proposed command has problems because it tries to pass the entire command, including the embedded spaces and shell operators, as a single word; the OS-level process handling then looks for an executable file with spaces and ampersands in its name, hence the "no such file or directory" error.
I'm tasked with creating a very simple, web-browser-accessible GUI that can run a specific Java file within a Docker container. To do this I've chosen to set up a php-apache server that serves an index.php document with the GUI. The Dockerfile looks like this:
FROM php:7.0-apache
COPY src /var/www/html
EXPOSE 80
This gets the GUI I've written (index.php is inside the src folder) up and running no problem, but it cannot access and run the required Java files (obviously, since this creates a separate container).
The Question:
How can I set up a php-apache server inside the existing Dockerfile (provided below) that does the same thing as the Dockerfile above? My aim is to run the Java file using PHP scripts and display the result to the user.
FROM openjdk:8-jre-slim
WORKDIR /usr/src/app
COPY ["./build/libs/*.jar", "./fooBar.jar"]
ENV JAVA_OPTS=${FOO_JAVA_OPTS}
CMD ["/usr/bin/tail", "-f", "/dev/null"]
I have not written the Java file myself; I'm only tasked with running specific commands with it.
Since these are Debian-based images, one way of doing it is to install the packages in a running container and create a new image from that.
root@310c94d8d75f:/usr/src/app# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
root@310c94d8d75f:/usr/src/app# apt update
root@310c94d8d75f:/usr/src/app# apt install apache2
root@310c94d8d75f:/usr/src/app# apt install php
Finally, run docker commit.
After this, you will get a new image with the name you give it.
Ref: https://docs.docker.com/engine/reference/commandline/commit/
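As a usage sketch (the container id is taken from the prompt above; the target image name is just a placeholder):
docker commit 310c94d8d75f my-openjdk-apache-php:latest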
Option 2: you can add the same commands to the Dockerfile and rebuild.
FROM openjdk:8-jre-slim
WORKDIR /usr/src/app
RUN apt update && apt install apache2 -y && apt install php -y
COPY ["./build/libs/*.jar", "./fooBar.jar"]
ENV JAVA_OPTS=${FOO_JAVA_OPTS}
CMD ["/usr/bin/tail", "-f", "/dev/null"]
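Then rebuild the image, for example (the tag is just a placeholder):
docker build -t my-openjdk-apache-php .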
There appears to be no easy way of merging images like I initially hoped (you cannot have multiple FROM statements in your Dockerfile). What I eventually ended up doing was to manually merge the two images (openjdk and php) into something like this:
FROM php:7.0-apache
ENV LANG C.UTF-8
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
RUN ln -svT "/usr/lib/jvm/java-8-openjdk-$(dpkg --print-architecture)" /docker-java-home
ENV JAVA_HOME /docker-java-home/jre
ENV JAVA_VERSION 8u212
ENV JAVA_DEBIAN_VERSION 8u212-b01-1~deb9u1
RUN set -ex; \
if [ ! -d /usr/share/man/man1 ]; then \
mkdir -p /usr/share/man/man1; \
fi; \
apt-get update; \
apt-get install -y --no-install-recommends openjdk-8-jre-headless="$JAVA_DEBIAN_VERSION"; \
rm -rf /var/lib/apt/lists/*; \
[ "$(readlink -f "$JAVA_HOME")" = "$(docker-java-home)" ]; \
update-alternatives --get-selections | awk -v home="$(readlink -f "$JAVA_HOME")" 'index($3, home) == 1 { $2 = "manual"; print | "update-alternatives --set-selections" }'; \
update-alternatives --query java | grep -q 'Status: manual'
COPY ["./build/libs/*.jar", "./FooBar.jar"]
ENV JAVA_OPTS=${FOO_JAVA_OPTS}
COPY gui/src /var/www/html
EXPOSE 80
Both are Debian-based images, so the merging was relatively easy (I also removed many of the cluttering comments from the original image source), and since the openjdk image was simpler, I added it on top of the php image instead of the other way around.
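To try it out, something along these lines should work (the image tag and host port are just examples):
docker build -t php-java-gui .
docker run -d -p 8080:80 php-java-gui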
I have a little experience with docker-compose and Laravel, and that setup works fine, but how could I do the same with DSpace?
I would like to have the working directory on my host, not everything inside the container.
I have tried the dspace-docker image that is on Docker Hub (this one: https://github.com/4Science/dspace-docker), but I had trouble with it.
Thank you!
The following Docker images can be used to run DSpace locally. There is not yet a published Docker Compose file.
- https://hub.docker.com/r/dspace/dspace-tomcat/
- https://hub.docker.com/r/dspace/dspace-postgres-pgcrypto/
The following page describes how to utilize these images on either Windows or MacOS: https://github.com/DSpace-Labs/DSpace-Docker-Images/blob/master/tutorial.md
Here are the key steps.
- Clone DSpace.
- Configure a local.cfg file that assumes DSpace will run in a container. [dspace-install] will be within the container.
- Run the DSpace Maven build on your workstation.
- Run the DSpace ant update in a container to install the code at [dspace-install].
The MacOS setup is described here. See the link above for Windows.
docker network create dspacenet
docker volume create pgdataD6
docker run -it -d --network dspacenet -p 5432:5432 --name dspacedb -v pgdataD6:/pgdata -e PGDATA=/pgdata dspace/dspace-postgres-pgcrypto
docker run -it --rm --network dspacenet -v "$(pwd)"/dspace/target/dspace-installer:/installer -v dspaceD6:/dspace -w /installer dspace/dspace-tomcat ant update clean_backups
docker run -it --network dspacenet -v dspaceD6:/dspace -p 8080:8080 --name dspacetomcat -e DSPACE_INSTALL=/dspace dspace/dspace-tomcat
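The Maven build step is the one piece not shown in the commands above; it runs on the workstation before the ant update container. A sketch, assuming a standard DSpace 6 source checkout in the current directory:
# builds the installer that gets mounted from dspace/target/dspace-installer
mvn package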