I have a little experience with docker-compose and Laravel; that setup works fine, but how could I do the same with DSpace?
I would like to have the working directory on my host, rather than everything inside the container.
I have tried the dspace-docker image that is on Docker Hub (this one: https://github.com/4Science/dspace-docker), but I had trouble with it.
Thank you!
The following Docker images can be used to run DSpace locally. There is not yet a published Docker Compose file.
- https://hub.docker.com/r/dspace/dspace-tomcat/
- https://hub.docker.com/r/dspace/dspace-postgres-pgcrypto/
The following page describes how to utilize these images on either Windows or MacOS: https://github.com/DSpace-Labs/DSpace-Docker-Images/blob/master/tutorial.md
Here are the key steps.
1. Clone DSpace.
2. Configure a local.cfg file that assumes DSpace will run in a container; [dspace-install] will be within the container.
3. Run the DSpace Maven build on your workstation.
4. Run the DSpace ant update in a container to install the code at [dspace-install].
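A rough sketch of what the first three steps might look like on the workstation; the local.cfg values are assumptions chosen to match the container commands below (the install volume is mounted at /dspace and the database container is named dspacedb), and the database name dspace is also an assumption:
git clone https://github.com/DSpace/DSpace.git
cd DSpace
# minimal local.cfg (normally copied from dspace/config/local.cfg.EXAMPLE);
# /dspace and dspacedb match the container setup below, the db name is an assumption
cat > dspace/config/local.cfg <<'EOF'
dspace.dir = /dspace
db.url = jdbc:postgresql://dspacedb:5432/dspace
EOF
# build on the workstation; the installer lands in dspace/target/dspace-installer
mvn package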
The macOS commands are shown below; see the link above for the Windows steps.
docker network create dspacenet
docker volume create pgdataD6
docker run -it -d --network dspacenet -p 5432:5432 --name dspacedb -v pgdataD6:/pgdata -e PGDATA=/pgdata dspace/dspace-postgres-pgcrypto
docker run -it --rm --network dspacenet -v "$(pwd)"/dspace/target/dspace-installer:/installer -v dspaceD6:/dspace -w /installer dspace/dspace-tomcat ant update clean_backups
docker run -it --network dspacenet -v dspaceD6:/dspace -p 8080:8080 --name dspacetomcat -e DSPACE_INSTALL=/dspace dspace/dspace-tomcat
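Since the question asks specifically about docker-compose: there is no official compose file yet, but an untested sketch that simply mirrors the docker run commands above might look like this (service and volume names are assumptions; the one-off ant update installer step is still run separately first):
version: "3"
services:
  dspacedb:
    image: dspace/dspace-postgres-pgcrypto
    environment:
      PGDATA: /pgdata
    volumes:
      - pgdataD6:/pgdata
    ports:
      - "5432:5432"
  dspacetomcat:
    image: dspace/dspace-tomcat
    environment:
      DSPACE_INSTALL: /dspace
    depends_on:
      - dspacedb
    volumes:
      - dspaceD6:/dspace
    ports:
      - "8080:8080"
volumes:
  pgdataD6:
  dspaceD6: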
I have Jenkins running as a Docker container, and now I want to build a Docker image using a pipeline, but the Jenkins container keeps telling me Docker is not found:
[simple-tdd-pipeline] Running shell script
+ docker build -t simple-tdd .
/var/jenkins_home/workspace/simple-tdd-pipeline@tmp/durable-ebc35179/script.sh: 2: /var/jenkins_home/workspace/simple-tdd-pipeline@tmp/durable-ebc35179/script.sh: docker: not found
Here is how I run my Jenkins image:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v /var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins
And the Dockerfile of the Jenkins image is:
https://github.com/jenkinsci/docker/blob/9f29488b77c2005bbbc5c936d47e697689f8ef6e/Dockerfile
You're missing the Docker client. Install it in your Dockerfile like this:
RUN curl -fsSLO https://get.docker.com/builds/Linux/x86_64/docker-17.04.0-ce.tgz \
&& tar xzvf docker-17.04.0-ce.tgz \
&& mv docker/docker /usr/local/bin \
&& rm -r docker docker-17.04.0-ce.tgz
Source
In your Jenkins interface, go to "Manage Jenkins" > "Global Tool Configuration".
Then scroll down to Docker installations and click "Add Docker". Give it a name like "myDocker".
Make sure to check the box which says "Install automatically". Click "Add Installer" and select "Download from docker.com". Leave "latest" in the Docker version. Make sure you click Save.
In your Jenkinsfile add the following stage before you run any docker commands:
stage('Initialize') {
    def dockerHome = tool 'myDocker'
    env.PATH = "${dockerHome}/bin:${env.PATH}"
}
Edit: May 2018
As pointed out by Guillaume Husta, jpetazzo's blog article discourages this technique:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
The Docker client should be installed inside the container as described here. Also, the jenkins user should be in the docker group, so execute the following:
$ docker exec -it -u root my-jenkins /bin/bash
# usermod -aG docker jenkins
and finally restart the my-jenkins container.
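For example, using the container name from above:
docker restart my-jenkins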
Original answer:
You could use the host's Docker engine, as in this blog article by Adrian Mouat:
docker run -d \
--name my-jenkins \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This avoids having different Docker engine versions on the host and in the Jenkins container.
The problem is in your Jenkins container: it isn't able to use the Docker engine, even if you install Docker from the plugin manager. From my research, there are some alternatives to work around this issue:
1. Build an image from a base that already has Docker pre-installed, such as getintodevops/jenkins-withdocker:lts.
2. Run jenkins/jenkins with the volumes mounted to your host, then install Docker yourself by creating another container with the same volumes and executing the bash commands to install it, or use Robert's suggestion:
docker run -p 8080:8080 -p 50000:50000 -v $HOME/.jenkins/:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:latest
3. The simplest option: just bind-mount the Docker binary installed on your host machine into the Jenkins container with -v $(which docker):/usr/bin/docker.
Your docker command should look like this:
docker run \
  --name jenkins --rm \
  -u root -p 8080:8080 -p 50000:50000 \
  -v $(which docker):/usr/bin/docker \
  -v $HOME/.jenkins/:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:latest
Source: https://forums.docker.com/t/docker-not-found-in-jenkins-pipeline/31683
Extra option: it makes little sense if you just want a single Jenkins server, but it is always possible to start from an OS image like Ubuntu and install the Jenkins .war file there yourself.
docker run -d \
--group-add docker \
-v $(pwd)/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $(which docker):/usr/bin/docker \
-p 8080:8080 -p 50000:50000 \
jenkins/jenkins:lts
Just add the --group-add docker option to docker run.
Add the docker binary path, i.e. -v $(which docker):/usr/bin/docker, to the container's volumes, like this:
docker run -d \
--name my-jenkins \
-v $(which docker):/usr/bin/docker \
-v /var/jenkins_home:~/.jenkins \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 8080:8080 jenkins
This section of the Jenkins documentation helped me install Docker inside the Jenkins container: https://www.jenkins.io/doc/book/installing/docker/#downloading-and-running-jenkins-in-docker
Also, I had to replace FROM jenkins/jenkins:2.303.1-lts-jdk11 in the Dockerfile in step 4(a) with jenkins/jenkins.
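For anyone who just wants the gist of that approach: a trimmed sketch of such a Dockerfile, not the exact file from the docs; here the Docker CLI is pulled in as a static binary, and the base tag and CLI version are assumptions:
FROM jenkins/jenkins:lts
USER root
# install only the Docker CLI; the daemon stays on the host and is reached
# through the bind-mounted /var/run/docker.sock
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar -xzf - --strip-components=1 -C /usr/local/bin docker/docker
USER jenkins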
I want to create a Docker image with both Java and Docker installed on it. The idea is that the resulting container should be able to create Docker images itself; my Java application executes commands like docker build -t my-image .
That is why I need Docker installed in my Docker container.
In the past I solved the same issue by writing the following Dockerfile:
FROM maven:3.6.3-jdk-8
USER root
# Install docker CLI for docker images generation inside the container itself
RUN apt update -y
RUN apt install -y curl
# Note: the get.docker.com/builds download path used below has since been retired;
# current static CLI builds are published under https://download.docker.com/linux/static/stable/x86_64/
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz | tar xvz -C /tmp/ && mv /tmp/docker/docker /usr/bin/docker
# Customize your container here with your own instructions...
Of course, you can change the FROM image to suit your needs.
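A quick way to try it out is to build the image and run it with the host's Docker socket mounted, so the CLI inside talks to the host's engine (the tag maven-docker is just an illustrative name):
docker build -t maven-docker .
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock maven-docker docker version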
I am starting to use Playwright. I have pulled the Docker image and I am wondering how I should run tests. I followed some old messages in the Playwright Slack and ran this command:
docker run -d -it --rm -v /C/Workspace/playwrite/src/test/java/org/example/TestExample.java:. --ipc host --shm-size 2gb mcr.microsoft.com/playwright/java:v1.14.0-focal
The container started and the test file was copied into Docker, but I don't think it was run. My question is: does the Playwright Docker image already include a test runner such as JUnit? If not, do I have to modify the Docker image? I can't find any docs that explain this.
The way it works is that you run the Playwright image, which then provides you with an environment in which to set up and run Playwright:
# run in the root of your Playwright project
docker run -v $PWD:/e2e -w /e2e --rm -it mcr.microsoft.com/playwright:v1.16.2-focal /bin/bash
# you are now inside the container (bash)
root@f4cfe077964d:/e2e# npm install
root@f4cfe077964d:/e2e# npx playwright install
root@f4cfe077964d:/e2e# npx playwright test
More details about using the Playwright Docker image here
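The Java image from the question should work the same way: mount the whole project (not a single test file) and run your own test runner inside the container, e.g. JUnit via Maven. A sketch, assuming a Maven project and that mvn (or the ./mvnw wrapper) is available inside the container:
# run from the root of the Maven project; /work is an arbitrary mount point
docker run -v $PWD:/work -w /work --rm -it mcr.microsoft.com/playwright/java:v1.14.0-focal mvn test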
I have 2 Dockerfiles.
1. mysql-dockerfile
FROM mysql:5.5
EXPOSE 3306
ENV MYSQL_ROOT_PASSWORD root
ENV MYSQL_DATABASE ToDoList
The commands used to build the image and run the container are:
sudo docker build -t mysql-img -f mysql-dockerfile .
sudo docker run -d --name mysqlcontainer -p 3030:3306 mysql-img
2. java-dockerfile
FROM openjdk:8-jre-alpine
EXPOSE 9090
WORKDIR /usr/src
COPY target/*.war todoApp.war
CMD ["java","-jar","todoApp.war"]
The commands used to build the image and run the container are:
sudo docker build -t java-img -f java-dockerfile .
docker run --name javacontainer -d -p 4040:9090 java-img
The Spring Boot application has the following JDBC URL:
spring.datasource.url=jdbc:mysql://localhost:3030/ToDoList
I am not able to start the project because the Spring Boot application in Docker cannot connect to the MySQL database, which is in another container.
One solution I found is to put the two containers on one Docker network, or to link the containers.
Can anyone please suggest a good solution, with the modified docker run commands and the modified JDBC URL?
Put them into one network and use container names as hostnames:
docker network create foo
docker run --network=foo --name mysqlcontainer -d mysql-img
docker run --network=foo --name javacontainer -d java-img
Don't publish ports for this; containers inside the same network can reach each other's ports automatically.
To connect from inside the network, use mysqlcontainer:3306 and javacontainer:9090.
To connect from the host, you will still need to publish ports.
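With both containers on that network, the datasource property from the question then points at the MySQL container name and the container port (3306, not the 3030 host mapping):
spring.datasource.url=jdbc:mysql://mysqlcontainer:3306/ToDoList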
I am new to Docker and Dockerfiles, having just started trying to write them. I have built a simple Java console application and can successfully build a Docker image from a Dockerfile, but if I include
CMD ["java","-jar","app.jar"]
when I try to run the image I always get a /bin/sh error, typically "java not found" or the like.
However, when I don't include the CMD line and just use this Dockerfile to build my image:
FROM openjdk:8-jre-alpine
COPY app.jar /app.jar
and then run
docker run -it --rm my-container:tag
I can then run
java -jar app.jar
and the application runs as expected.
I can also run
docker run -it --rm my-container:tag java -jar app.jar
and the application runs as expected.
Every guide I read says I should be able to use CMD or ENTRYPOINT as written above, but nothing ever works.
What might I be missing in this simple example?
Thank you,
Trevor
EDIT: I am running Docker version 18.06.1-ce-mac73 (26764) on macOS Sierra. I am not positive that Docker works this way, but I have two image versions in my public Docker Hub. The Dockerfile for v1 is:
FROM openjdk:8-jre-alpine
COPY 454calendar.jar app.jar
The dockerfile for v2 is:
FROM openjdk:8-jre-alpine
ENV PROJECT_DIR=/app
WORKDIR $PROJECT_DIR
COPY 454calendar.jar $PROJECT_DIR
If I add
CMD [“java”,”-jar”,”454calendar.jar”]
to the v2 Dockerfile and rebuild, I get this error from the docker run command:
/bin/sh: [“java”,”-jar”,”454calendar.jar”]: not found
Without the CMD line, I can run the container and it starts right in the /app working directory, where I can run the java command and execute the program.
The two image versions in my public Docker repository do not have the CMD line in their respective Dockerfiles.
The solution was maddeningly simple. Thanks to @Rakesh, I checked the configuration of TextEdit on macOS and saw that Smart Quotes was turned on. Once I turned that option off and retyped the double quotes, then rebuilt and ran the Docker container, the application started up just as expected.
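In other words, once retyped with plain ASCII double quotes the exec-form instruction parses correctly:
CMD ["java","-jar","454calendar.jar"]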
I don't see any problem with your approach. I was able to get a HelloWorld application to run with the Dockerfile below:
FROM openjdk:8-jre-alpine
RUN mkdir /app
RUN cd /app
COPY HelloWorld.jar /app/HelloWorld.jar
WORKDIR /app
CMD ["java","-jar", "HelloWorld.jar"]
I'm on the following Docker version:
docker -v
Docker version 18.06.1-ce, build e68fc7a
docker-compose -v
docker-compose version 1.22.0, build f46880f