I'm having issues deploying a Quarkus 1.10 application to Heroku as a Docker image.
The same application, using Spring Boot and a similar Docker image, boots successfully, but somehow Quarkus triggers the infamous R10 Boot Timeout error because of a bad bind of $PORT, even though the boot times are really short (2 seconds vs. 4.5 for the Spring Boot version).
If I start the image locally, it works perfectly without issues.
My final Docker image looks like this (omitting the multi-stage build steps for brevity):
FROM gcr.io/distroless/java:11
ENV QUARKUS_MAILER_FROM=${EMAIL_USERNAME} \
QUARKUS_MAILER_USERNAME=${EMAIL_USERNAME} \
QUARKUS_MAILER_PASSWORD=${EMAIL_PASSWORD}
EXPOSE 8080
COPY --from=backend /usr/src/app/target/*-runner.jar /usr/app/app.jar
COPY --from=backend /usr/src/app/target/lib /usr/app/lib
ENTRYPOINT [ "java", "-jar" ]
CMD ["/usr/app/app.jar", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=${PORT}"]
I'm using these commands to deploy the application:
heroku container:push web
heroku container:release web
I don't see where the error is. I've also tried removing the EXPOSE directive from the Dockerfile, but that's not the cause of the error.
I've solved the issue, and it's really dumb.
Modifying the ENTRYPOINT and CMD sections of the Dockerfile as follows:
ENTRYPOINT [ "java" ]
CMD ["-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=${PORT}", "-jar", "/usr/app/app.jar"]
it works. The problem is that the system properties have to be passed before the -jar parameter: anything placed after the jar file is handed to the application as a program argument rather than to the JVM, so Quarkus never picked up the right value for ${PORT}.
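A side note on why ${PORT} resolves at all (my own reading, not from the original post): plain docker run does not expand variables in an exec-form CMD, but Heroku reportedly runs a container's CMD through a shell, which would perform the expansion; the JVM just has to receive the flag before -jar. On a base image that ships a shell (distroless does not), the expansion can also be made explicit, with a fallback for local runs:
ENTRYPOINT ["sh", "-c", "exec java -Dquarkus.http.host=0.0.0.0 -Dquarkus.http.port=${PORT:-8080} -jar /usr/app/app.jar"]
The exec keeps java as PID 1, so it still receives Docker's stop signals.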
I have created a working back-end application, but I want to Dockerize it now. The back-end uses the nauty library and everything works fine when not using Docker containers.
To make this work locally, I had to start the application using java -Djava.library.path="<PATH>/backend/lib/" -jar backend.jar. The library path points to the directory where the nauty library (libnauty.so) resides. In my ~/.bashrc, I had to set the environment variable LD_LIBRARY_PATH=<PATH>/backend/lib/:/usr/local/lib. After doing this, the application works fine.
After dockerizing the application, problems appear: I get the error java.lang.UnsatisfiedLinkError: no nauty in java.library.path: "/home/backend/lib/".
I tried solving the issue by setting the environment variable in the docker container and using the same entrypoint for the application, but the issue remains.
My Dockerfile for the back-end looks like this.
FROM openjdk:17-oracle
ARG JAR_FILE=target/*.jar
EXPOSE 8080
ENV LD_LIBRARY_PATH=/home/backend/lib/:/usr/local/lib
COPY ${JAR_FILE} /home/backend/backend.jar
COPY . /home/backend
ENTRYPOINT ["java", "-Djava.library.path=\"/home/backend/lib/\"", "-jar", "/home/backend/backend.jar"]
Any ideas on how to solve this issue? Any pointers would be greatly appreciated!
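One detail worth checking, judging purely by the quoted path in the error above: exec-form ENTRYPOINT arguments are passed to the JVM verbatim, so the escaped quotes in -Djava.library.path may end up as literal characters in the path value. A sketch of the same line without the embedded quotes:
ENTRYPOINT ["java", "-Djava.library.path=/home/backend/lib/", "-jar", "/home/backend/backend.jar"]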
I have a Spring Boot application that is dockerized. By default the application has spring.cloud.config.enabled=false, so it doesn't pick up application.properties from the config server. However, when deploying to a separate environment we need to integrate the application with the config server, so we need to override spring.cloud.config.enabled and set it to true.
To achieve this I am running the Docker image using the following docker-compose file:
version: '3'
services:
  my-app-new:
    container_name: my-app
    image: my-app:1.0.0-SNAPSHOT
    ports:
      - "8070:8070"
    environment:
      - SPRING_CLOUD_CONFIG_ENABLED=true
      - SPRING_CLOUD_CONFIG_URI=http://localhost:14180
However, it just doesn't work. If I hardcode the values in the property file, then it integrates fine.
I also tried the following command, but it still didn't work:
docker run -p 8070:8070 -e SPRING_CLOUD_CONFIG_ENABLED=true -e SPRING_CLOUD_CONFIG_URI=http://localhost:14180 my-app:1.0.0-SNAPSHOT
The Spring Boot version is 2.2.6.
Let me know what the problem is.
Update:
I cannot use profiles, as there are too many environments in our company and even the VMs keep getting changed, so we cannot have hardcoded profiles. We want a solution where we can just pass certain variables from the outside.
As someone pointed out in the comments, the above compose yml does not work because the environment variables need to be read by the Spring Boot application. So I did some research and instead we are now passing the JAVA_OPTS variable while running the image, like so:
docker run --env JAVA_OPTS="-Dspring.cloud.config.uri=http://localhost:14180 -Dspring.cloud.config.enabled=true" -p 8080:8080 my-app-image
And in the Dockerfile we use JAVA_OPTS when starting the jar, like so:
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
And this still doesn't work. Not sure what is going wrong.
I found the problem with my setup; I made a silly error. The config server is not in my Docker network, and I used localhost to communicate with it. Inside the container, localhost of course refers to the app container itself, which only has the app running. When I instead used the IP address or the hostname of my machine, my application container could connect to the config server successfully.
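For illustration, a sketch of a compose file that runs both containers on the same default network and addresses the config server by service name (the config-server service and image names here are made up):
version: '3'
services:
  config-server:
    image: my-config-server:latest
    ports:
      - "14180:14180"
  my-app-new:
    container_name: my-app
    image: my-app:1.0.0-SNAPSHOT
    ports:
      - "8070:8070"
    environment:
      - JAVA_OPTS=-Dspring.cloud.config.enabled=true -Dspring.cloud.config.uri=http://config-server:14180
Services declared in one compose file share a default network, so the hostname config-server resolves to that container.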
Why not run the container, go inside, change the configuration, and commit it as a new image?
After that, deploy it to the new environment.
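For what it's worth, a rough sketch of that approach (the temporary container name and tag are made up):
docker run -it --name my-app-tmp my-app:1.0.0-SNAPSHOT sh
# edit the configuration inside the container, exit, then:
docker commit my-app-tmp my-app:1.0.0-custom
Note that baking environment-specific configuration into an image works against the externalized configuration the question is asking for.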
I want to run a Java Spring application inside a Docker container, and this application should be able to deploy sibling containers. When I run the Java application directly on my machine it works fine and can deploy containers, but as soon as I try to run the application inside a container it no longer works (I'm using supervisord to run MongoDB and the Java Spring app in one container, and I know that's not best practice). The container starts up fine, but crashes as soon as my application tries to connect to the Docker daemon, without any stack traces from Java, just the error WARN received SIGTERM indicating exit request. The supervisord logs don't contain additional info.
I tried mounting the docker socket from the host (Windows 10 Pro with Docker Desktop, also tried Ubuntu Server 18.04) into the container using -v /var/run/docker.sock:/var/run/docker.sock.
I also tried to use --net="host".
Neither worked, although with the second one the container does not crash but produces a different error ({}->unix://localhost:80: Connection refused), visible in the log of my Java application, which indicates that it can't even find the right address for the daemon.
I also activated "Expose daemon on tcp://localhost:2375 without TLS".
I also tried to set the DOCKER_HOST environment variable inside the container to default values such as "tcp://localhost:2375" or "/var/run/docker.sock".
Here is the code that I use to initialize the docker client.
DockerClient docker = DefaultDockerClient.fromEnv().build();
The DefaultDockerClient.fromEnv().build(); should create a Docker client that uses the DOCKER_HOST environment variable to connect to the host, or the default address ("/var/run/docker.sock" on *NIX).
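For reference, a sketch of pointing the client at an explicit endpoint instead of relying on fromEnv(), using the builder API of the same library:
// bypass DOCKER_HOST and talk to the mounted UNIX socket directly
DockerClient docker = DefaultDockerClient.builder()
        .uri("unix:///var/run/docker.sock")
        .build();
This only helps, of course, if the socket is actually mounted into the container as described above.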
Here is my DOCKERFILE:
FROM openjdk:8-jre-alpine
ENV PACKAGES mongodb supervisor
VOLUME /opt/server
VOLUME /data/db
WORKDIR /opt/accservermanager
ADD supervisord.conf /etc/supervisor.conf
ADD accservermanager.jar /opt/accservermanager/accservermanager.jar
ADD application.properties /opt/accservermanager/application.properties
RUN apk update && \
apk add --update $PACKAGES --no-cache && \
rm -rf /var/cache/apk/*
EXPOSE 8000
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor.conf"]
And finally, my supervisord.conf
[supervisord]
user=root
stderr_logfile=/var/log/supervisord.err.log
stdout_logfile=/var/log/supervisord.out.log
loglevel=debug
[program:mongodb]
command=mongod --smallfiles
autostart=true
autorestart=true
stderr_logfile=/var/log/mongo.err.log
stdout_logfile=/var/log/mongo.out.log
[program:accservermanager]
directory=/opt/accservermanager/
command=java -jar accservermanager.jar
autostart=true
autorestart=true
stderr_logfile=/var/log/accservermanager.err.log
stdout_logfile=/var/log/accservermanager.out.log
Expected result: Application connects to the docker client from the host and is able to deploy/manage containers on the host
Actual result: Container crashes or outputs errors.
It turns out that there is a newer version of the Spotify docker-client that fixes my problem.
Updating from v8.15.1 to v8.15.2 solved my issue.
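For anyone applying the same fix in a Maven build, the bump would look something like this (assuming the library is the com.spotify:docker-client artifact):
<dependency>
    <groupId>com.spotify</groupId>
    <artifactId>docker-client</artifactId>
    <version>8.15.2</version>
</dependency>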
My project is moving from Spring Boot 2.0.4 with Java 8 to Spring Boot 2.1.0 with Java 11. When the application was built with Spring Boot 2.0.4 and Java 8 and run in Docker / Docker Compose, the @PostConstruct-annotated method was called, but after the move to Spring Boot 2.1.0 and Java 11 the @PreDestroy-annotated method is no longer called.
I have tried switching from annotations to implementing InitializingBean and DisposableBean as described here, but the DisposableBean.destroy method is not called.
I have also tried adding a dependency to javax.annotation-api version 1.3.2, with the same result.
How to reproduce:
Create a minimal Spring application with a lifecycle bean:
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;
@Component
public class Life implements InitializingBean, DisposableBean {

    @Override
    public void destroy() throws Exception {
        System.out.println("--- Life.shutdown");
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        System.out.println("--- Life.startup");
    }
}
Start the Spring application from the target subfolder:
cd target
java -jar demo-0.0.1-SNAPSHOT.jar
When the application is stopped using Ctrl+C, the DisposableBean.destroy is called.
Return to the parent folder:
cd ..
Start the Spring application using Maven:
mvn spring-boot:run
When the application is stopped using Ctrl+C, the DisposableBean.destroy is called.
Dockerfile:
FROM openjdk:11.0.1-jre-slim
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT java -jar /app.jar
Build, run, and stop Docker image:
docker build -t demo .
docker run -p 8080:8080 demo
docker ps
docker stats 3ca5b804ab13
docker stop 3ca5b804ab13
docker logs 3ca5b804ab13
When the application is stopped using docker stop, the DisposableBean.destroy is not called.
docker-compose.yml:
demo:
  image: demo
  ports:
    - '8080:8080'
Run Docker image using Docker Compose (simulating OpenShift):
docker-compose up
docker-compose down
demo_demo_1 exited with code 137
When the application is stopped using docker-compose down, the DisposableBean.destroy is not called.
I suspect that Docker is trying a SIGTERM before it issues a SIGKILL, because there is a 10-second delay before the container is killed.
There are many places where the setup can go wrong.
First, I suggest identifying whether the Java/Spring part has an issue or whether it is a Docker/environment-related one. From the question it sounds like it's Java-related, but I suspect it's actually not in Java/Spring.
So, mvn spring-boot:run works as expected, and I see that you package the Spring Boot application as a jar (app.jar), likely with the Spring Boot plugin. This is also a place where things can potentially go wrong, because Spring Boot uses a special classloader to load things at runtime.
So, in order to fully rule out the "Java/Spring" part, navigate to your target directory and run java -jar app.jar (make sure Java 11 is installed on your local machine, of course). If it doesn't work, investigate the Java/Spring part; otherwise proceed with the Docker part.
The chances are that the application will work as expected.
Now, as for the Docker setup.
After running docker-compose and seeing that it fails, you can use the following commands:
docker ps -a    # the -a flag also lists containers that have exited, for whatever reason
Now find the ID of the container that exited and examine its logs:
docker logs <ID_OF_THE_EXITED_CONTAINER_GOES_HERE>
Chances are that the application context fails to start (maybe a network-related issue or something; it's really hard to tell without seeing an actual log), and hence the issue.
Another possible cause is that the application is "too heavy" (by this I mean that it exceeds some quota imposed on the Docker container).
You can run the docker stats <CONTAINER_ID> command to see its memory/CPU usage in real time, or gather metrics from within the application.
I think I found the solution (in this blog entry): Use exec form instead of shell form in the Dockerfile so that the SIGTERM that Docker issues hits the java process instead of the bash process (which doesn't forward signals to any child processes).
Shell form (executed with a /bin/sh -c shell):
ENTRYPOINT java -jar /app.jar
Exec form (executed without a shell):
ENTRYPOINT ["java", "-jar", "/app.jar"]
I am new to containerizing apps using Docker. I was able to deploy a container including a WAR file. The WAR is basically a Java web application/servlet that sends back a video file upon receiving a request from an end user. Deploying the app using Docker was a success and the app works fine. However, I have some concerns about its boot time.
From the moment I create the container with docker run -it -d -p 8080:8080 surrogate, it takes about 5-6 minutes for the container to become operational, meaning that for the first 5-6 minutes of its lifetime it does not respond to end-user requests; after that it works fine. Is there any way to accelerate this boot time?
Dockerfile includes:
FROM tomcat:7.0.72-jre7
ADD surrogate-V1.war /usr/local/tomcat/webapps/
CMD ["catalina.sh", "run"]
WORKDIR "/usr/local/tomcat/"
RUN wget https://www.dropbox.com/s/1r8awydc05ra8wh/a.mp4?dl=1
RUN cp a.mp4\?dl\=1 lego.mp4
(Posted on behalf of the OP).
First, get rid of -d in the docker run command to see what is going on. I noticed the WAR deployment phase was taking very long (around 15-20 minutes!).
The reason in my case was that the Tomcat version in the Dockerfile was different from the Tomcat version in the environment from which I exported the web application as a WAR. (To check the JRE version, enter java -version in a terminal; to check the Tomcat version, Eclipse shows it when you export.)
In my case in Dockerfile, I had :
FROM tomcat:7.0.72-jre7
I changed it to:
FROM tomcat:6.0-jre7
It now takes less than 10 seconds!
In a nutshell, make sure that the Tomcat and JRE versions in the Dockerfile are the same as in the environment from which you exported the Java web application as a WAR.
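For reference, a quick way to read both versions from a shell (version.sh ships in the bin directory of standard Tomcat distributions; the CATALINA_HOME path may differ in your setup):
java -version
$CATALINA_HOME/bin/version.sh
The output lists the Tomcat and JVM versions to compare against the FROM tag in the Dockerfile.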