I've just started using Docker, and I can't figure out why my WildFly container doesn't serve the latest files even though the WAR is copied into the image. I changed a JS file, but whenever I access 127.0.0.1:8080/static/js/myjs.js I still get the old version, even after running sudo mvn clean install and rebuilding the image.
I have a docker-compose file that looks like this:
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: ./docker/docker-app/Dockerfile
    ports:
      - "8080:8080"
    links:
      - "db:task_book_db"
    depends_on:
      - "db"
  db:
    image: mysql:5.7.22
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=sample_db
      - MYSQL_USER=sample_usr
      - MYSQL_PASSWORD=sample_pw
    ports:
      - "3306:3306"
  start_dependencies:
    image: dadarek/wait-for-dependencies
    depends_on:
      - "db"
I run sudo docker-compose run --rm start_dependencies && sudo docker-compose up --build app, and whenever I've changed something I stop the app container and run sudo docker-compose up --build app again. I've read about volumes, but I'm not sure how to use them yet.
As mentioned in the comments: this issue might be caused by the browser cache. Try accessing 127.0.0.1:8080/static/js/myjs.js again after clearing the cache.
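If clearing the cache doesn't help, and since the question mentions volumes: a bind mount can make a freshly built WAR visible to the container without rebuilding the image. This is only a sketch; the host artifact path and the WildFly deployments path inside the container are assumptions that depend on your project and base image:

```yaml
services:
  app:
    build:
      context: .
      dockerfile: ./docker/docker-app/Dockerfile
    ports:
      - "8080:8080"
    volumes:
      # Mount the locally built WAR over the deployed copy.
      # Both paths are assumptions -- adjust to your layout.
      - ./target/app.war:/opt/jboss/wildfly/standalone/deployments/app.war
```

With a mount like this, re-running mvn clean install updates the file on the host, and WildFly's deployment scanner should pick up the change without another docker-compose up --build.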
Related
I am new to docker-compose and probably don't understand the ideology behind it.
Short question: I see there is a docker-compose build command. I'd like to trigger a (Gradle) script outside of the Dockerfile as part of running it.
Let's say I have a Java web service inside Docker that I build in advance with Gradle.
I don't add a Gradle agent to the Dockerfile (as I am expected to keep the image small, right?); I only COPY the binaries:
FROM openjdk:17-jdk-alpine
RUN apk add --no-cache bash
VOLUME /var/log/app/
WORKDIR /app
ARG EXTRACTED
COPY ${EXTRACTED}/application/ ./
...
ENTRYPOINT ["java","org.springframework.boot.loader.JarLauncher"]
And so I build this image with a script:
./gradlew build
java -Djarmode=layertools -jar build/libs/*.jar extract --destination build/libs-extracted/
docker build --build-arg EXTRACTED=build/libs-extracted -t my-server .
I can define the following compose.yml. How do I trigger Gradle inside of it? Or, as with my single Dockerfile, am I expected to wrap docker-compose build in a build script?
version: '3.1'
services:
  db:
    image: postgres:alpine
    restart: always
  my-server:
    image: my-server
    restart: always
    depends_on:
      - db
Maybe I am asking for a hack, but I am happy to take another approach if it's cleaner.
I discovered the multi-stage Dockerfile feature, which addresses my task, and I can use it for both individual Dockerfile builds and docker-compose builds:
https://stackoverflow.com/a/61131308/1291049
I will probably lose Gradle daemon optimisations, though.
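A minimal sketch of that multi-stage approach, assuming a standard Gradle project layout (the image tags and paths are assumptions, not taken from the original project):

```dockerfile
# Build stage: Gradle runs inside the image build, so no local toolchain is needed.
FROM gradle:7-jdk17 AS builder
WORKDIR /home/app
COPY . .
# --no-daemon: the daemon brings no benefit in a one-shot container build
RUN gradle build --no-daemon

# Runtime stage: only the built artifact is copied in, keeping the final image small.
FROM openjdk:17-jdk-alpine
WORKDIR /app
COPY --from=builder /home/app/build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

With this, docker-compose only needs a build: section for my-server, and the Gradle step runs as part of docker-compose build. The daemon optimisations are indeed lost, since each build starts a fresh Gradle process; Docker layer caching recovers some of that when sources haven't changed.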
Change your docker compose file:
version: '3.1'
services:
  db:
    image: postgres:alpine
    restart: always
  my-server:
    build:
      context: ./my-server
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - db
I've made a WAR to run on my Tomcat server in a Docker container. The servlet runs fine in Eclipse, but once it's compiled and put into the Docker container, the required classes aren't loaded properly.
Project structure and manifest file
When I try and access the servlet using my browser I get this error.
Error in browser
I went into the Docker container and checked that gurobi.jar is still in WEB-INF/lib/, and it is.
I don't know why the jar can't be found, since everything works fine in Eclipse.
docker-compose.yml
version: '3'
services:
  db:
    image: mysql:5.7
    ports:
      - '3306:3306'
    environment:
      MYSQL_DATABASE: 'projectdb'
      MYSQL_USER: ''
      MYSQL_PASSWORD: ''
      MYSQL_ROOT_PASSWORD: ''
  web:
    build:
      context: .
      dockerfile: Dockerfile
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8000:8000"
    depends_on:
      - db
  tomcat:
    image: tomcat
    ports:
      - "8080:8080"
    volumes:
      - ./GurServ.war:/usr/local/tomcat/webapps/GurServ.war
  java:
    image: bitnami/java:latest
    volumes:
      - .:/app
      - .:/code
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
EXPOSE 8000
CMD exec gunicorn djangoapp.wsgi:application --bind 0.0.0.0:8000 --workers 3
Project structure
...
Java Resources
    src
        ...
WebContent
    META-INF
        MANIFEST.MF
    WEB-INF
        gurobi-javadoc.jar
        gurobi.jar
FROM python:3.6
ENV PYTHONUNBUFFERED 1
WORKDIR /usr/src/rango
COPY ./ /usr/src/rango
RUN pip install -r requirements.txt
Here is my docker-compose file:
services:
  backend:
    container_name: backend
    build: ./
    command: python manage.py runserver 0.0.0.0:8000
    working_dir: /usr/src/rango
    ports:
      - "8000:8000"
    tty: true
    links:
      - java
      - elasticsearch
      - node
  # java
  java:
    image: openjdk:9-jre
  # elastic search
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - "9200:9200"
  node:
    image: node:10.13.0
The command I am using:
sudo docker-compose up
When I run it, I get this output:
backend_node_1_26e7640d2fbb exited with code 0
backend_java_1_b1fbf7e151d7 exited with code 0
Both the node and java containers are not running.
I am using Elasticsearch, so I need Java.
Please have a look at the screenshot I have shared below.
Docker images are self-contained in terms of the language runtime they run on, meaning they include everything needed to run their particular process (excluding external dependencies such as a database or other services).
Therefore, the Elasticsearch image does not require a separate Java container, and similarly the Node container is not needed. Both exit with status 0 (indicating they ran to completion successfully) because you haven't specified a command for them to execute, and there is no default long-running one defined in their base images.
In summary, you can remove the java and node services from your compose file.
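Applied to the compose file from the question, a trimmed version would look roughly like this sketch (only the services that run long-lived processes remain):

```yaml
services:
  backend:
    container_name: backend
    build: ./
    command: python manage.py runserver 0.0.0.0:8000
    working_dir: /usr/src/rango
    ports:
      - "8000:8000"
    links:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    ports:
      - "9200:9200"
```

The backend talks to Elasticsearch over the network on port 9200; the JVM that Elasticsearch needs is already inside its own image.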
The Docker container is not able to access the jar file that is mounted at /my/project/dir.
I am certain it is not a permission issue, because I changed the access rights locally, so the container should be able to read/write/execute it.
This is the Dockerfile:
FROM tomcat:9-jre8
RUN apt-get update && apt-get install librrds-perl rrdtool -y
VOLUME ["/data/rrdtool", "/my/project/dir"]
ENTRYPOINT [ "java","-jar","/my/project/dir/build/libs/spring-project-0.1.0.jar" ]
And this is the docker-compose.yml file:
version: '2'
services:
  db:
    container_name: db1
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123
      MYSQL_USER: user123
      MYSQL_PASSWORD: pasw
      MYSQL_DATABASE: mydb
    expose:
      - "3307"
  db2:
    container_name: db2
    image: mysql:8
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password123
      MYSQL_USER: user123
      MYSQL_PASSWORD: pasw
      MYSQL_DATABASE: mydb2
    expose:
      - "3308"
  spring:
    container_name: spring-boot-project
    build:
      context: ./
      dockerfile: Dockerfile
    links:
      - db:db1
      - db2:db2
    depends_on:
      - db
      - db2
    expose:
      - "8081"
    ports:
      - "8081:8081"
    restart: always
This is the output from docker-compose logs spring:
Error: Unable to access jarfile /my/project/dir/build/libs/spring-project-0.1.0.jar
I don't see you copying the jar into the container anywhere. Try moving the VOLUME declaration from the Dockerfile to the compose file, into the spring service, like this:
volumes:
  - /my/project/dir:/app
And then inside Dockerfile you should point to the dir:
ENTRYPOINT [ "java","-jar","/app/build/libs/spring-project-0.1.0.jar" ]
Later on, if you'd like to deploy it, you should copy the project files directly into the image instead of using the volumes approach. In the Dockerfile you'd then do:
COPY . /app
instead of VOLUME [..]
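When copying the whole project into the image with COPY . /app, a .dockerignore file keeps the build context small. The entries below are typical examples, not taken from this project; note that build/ must not be ignored here, since the ENTRYPOINT expects the jar under /app/build/libs/:

```text
.git
.gradle
*.log
```

Docker reads .dockerignore from the build context root and skips the listed paths when sending the context to the daemon.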
Putting it all together:
development:
Dockerfile:
FROM tomcat:9-jre8
RUN apt-get update && apt-get install librrds-perl rrdtool -y
ENTRYPOINT [ "java","-jar","/app/build/libs/spring-project-0.1.0.jar" ]
compose-file:
version: '2'
services:
  [..]
  spring:
    container_name: spring-boot-project
    build: .
    links:
      - db:db1
      - db2:db2
    depends_on:
      - db
      - db2
    ports:
      - "8081:8081"
    restart: always
    volumes:
      - /my/project/dir:/app
deployment:
Dockerfile (placed inside the project's folder; docker build requires its build context to be the current directory):
FROM tomcat:9-jre8
RUN apt-get update && apt-get install librrds-perl rrdtool -y
COPY . /app
ENTRYPOINT [ "java","-jar","/app/build/libs/spring-project-0.1.0.jar" ]
compose-file:
version: '2'
services:
  [..]
  spring:
    container_name: spring-boot-project
    build: .
    links:
      - db:db1
      - db2:db2
    depends_on:
      - db
      - db2
    expose:
      - "8081"
If you are using a Spring Boot project with a Maven build, try the Dockerfile below.
FROM maven:3.8.4-openjdk-17 as maven-builder
COPY src /app/src
COPY pom.xml /app
RUN mvn -f /app/pom.xml clean package -DskipTests
FROM openjdk:17-alpine
COPY --from=maven-builder app/target/dockube-spring-boot.jar /app-service/dockube-spring-boot.jar
WORKDIR /app-service
EXPOSE 8080
ENTRYPOINT ["java","-jar","dockube-spring-boot.jar"]
dockube-spring-boot.jar // replace with your generated jar name
Here is the sample code available.
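To use a multi-stage Dockerfile like this from docker-compose, the service only needs a build section; a minimal sketch (the service name and port mapping are assumptions):

```yaml
version: '3'
services:
  dockube-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
```

docker-compose build then runs both stages, so the Maven compilation happens inside the image build and no local Maven installation is required.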
I'd like to have each app release stored as some kind of container, and then have several "server"-only containers (DB, web server). How can the cooperation between those containers work?
I can imagine defining a volume in the "app" container where the app binaries are stored, and then using this volume folder as the web server's deploy directory in the "server" container.
How would the process of updating the application version work? How can I "bind" multiple binaries to one "server" container?
More generally, I can imagine deploying like this:
"deploy some-version-releasecontainer to serv1, serv2, serv3". Maybe Docker is not the proper tooling and I will need some more abstract management like Swarm, Kubernetes, etc. But the main point is to create the application binary as a self-standing, read-only entity known to the ecosystem.
Maybe you need docker-compose to make your containers interact on a virtual network (while still sharing resources and volumes). I'll post a simple example here.
The docker-compose file is the following:
version: '2'
services:
  myapp_service1:
    image: myapp_image1:latest
    networks:
      mynetwork:
        aliases:
          - myalias1
    depends_on:
      - mysql
    expose:
      - 8080
    volumes:
      - /opt/myapp/logs:/jboss-as-7.1.1.Final/logs
    environment:
      - "JAVA_OPTS=-Xms64m -Xmx128m -XX:MaxPermSize=128m -Djava.net.preferIPv4Stack=true"
      - "db=mysql.myapp"
    volumes_from:
      - myapp_volume
  myapp_service2:
    image: myapp_image2:latest
    networks:
      mynetwork:
        aliases:
          - myalias2
    depends_on:
      - mysql
    expose:
      - 8080
    volumes:
      - /opt/myapp/logs:/jboss-as-7.1.1.Final/logs
    environment:
      - "JAVA_OPTS=-Xms64m -Xmx128m -XX:MaxPermSize=128m -Djava.net.preferIPv4Stack=true"
      - "db=mysql.myapp"
    volumes_from:
      - myapp_volume
  myapp_volume:
    container_name: myapp_volume
    image: myapp_volume_image:latest
  mysql:
    image: mysql
    networks:
      mynetwork:
        aliases:
          - mysql.myapp
    expose:
      - 3306
    ports:
      - "13306:3306"
    volumes:
      - /opt/myapp/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
networks:
  mynetwork:
    driver: bridge
Here we have four containers. The first two are sample applications sharing the same volume; the volume container is defined just below them, and at the end there is a container that runs a simple MySQL DB.
The Dockerfile for the shared volume container is that one:
FROM alpine
VOLUME /jboss-as-7.1.1.Final/mydir/config
CMD /bin/sh -c "while true; do sleep 1; done"
The docker-compose file is a YAML file; let's call it my-docker-compose.yml. To run it, type this in the terminal:
docker-compose -f path/to/compose/my-docker-compose.yml up -d
Of course, the images should already be built.