How do I run a Docker container from Java code? I'm building a SaaS with Docker: once a user logs in, I need to start a memcached container from Java code. This attempt doesn't work:
Process p = Runtime.getRuntime().exec("docker images");
Docker commands usually work in Git Bash, not in cmd.
PS: I'm using Docker on Windows.
You can do it using https://github.com/docker-java/docker-java . It lets you build a custom image and run containers from Java.
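A minimal sketch of that idea, assuming the docker-java dependency is on the classpath and the client can reach the daemon (e.g. via DOCKER_HOST), might look roughly like this:
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.core.DockerClientBuilder;

// Build a client from the environment (DOCKER_HOST, DOCKER_CERT_PATH, ...)
DockerClient dockerClient = DockerClientBuilder.getInstance().build();

// Create and start a memcached container (the image must already be present locally,
// or pull it first with dockerClient.pullImageCmd("memcached"))
CreateContainerResponse container = dockerClient.createContainerCmd("memcached:latest").exec();
dockerClient.startContainerCmd(container.getId()).exec();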
I assume you are using Docker Toolbox for Windows.
The docker command does not take a capital D. Maybe try with
Process p = Runtime.getRuntime().exec("docker images");
but as you are probably running this code on Windows, where command names are case-insensitive, it might work anyway.
Another thing to consider is the value of the DOCKER_HOST environment variable, which must be set to tell the docker client how to reach the docker engine.
In your case the docker client runs on Windows while the docker engine runs inside a virtual machine in VirtualBox.
The process started by Runtime.getRuntime().exec() inherits the environment of your JVM, which most likely does not have DOCKER_HOST set.
Another way is to use the --host or -H docker client option to specify how to connect to your docker engine:
Process p = Runtime.getRuntime().exec("docker --host=tcp://<some IP>:2376 images");
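Alternatively, you can set DOCKER_HOST explicitly for the child process with ProcessBuilder. A rough sketch; the address below is only an example, use the values your docker-machine reports (a TLS setup also needs DOCKER_CERT_PATH):
// Run "docker images" with DOCKER_HOST pointing at the engine in the VirtualBox VM
ProcessBuilder pb = new ProcessBuilder("docker", "images");
pb.environment().put("DOCKER_HOST", "tcp://192.168.99.100:2376");
pb.environment().put("DOCKER_TLS_VERIFY", "1");
pb.redirectErrorStream(true);
Process p = pb.start();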
I'm sorry guys, when I open a new shell (client), I have to configure it so it knows how to connect to the Docker daemon running in VirtualBox. The Docker Quickstart Terminal sets up the shell environment automatically, but a plain cmd shell does not, so I had to run the following and paste its output back into my cmd shell:
docker-machine env --shell cmd default
Now it works perfectly.
Update (thanks to @thaJeztah): it's better to use a Java library to connect directly to the Docker daemon.
Link to API https://docs.docker.com/engine/reference/api/remote_api_client_libraries/
I'm learning to deploy Spring Boot apps on AWS EC2, and I know how to automate the app launch: when the EC2 instance starts I don't need to run java -jar java-service.jar manually, I just add that command to the /etc/rc.local file and that is all. But I have two microservices and I want to start both of them automatically. If I add both commands to /etc/rc.local it doesn't work: only the first service starts, the second one never does.
So I have the commands added like this:
And after I start the EC2 instance only the first service is started:
Thank you!
I am not a Unix expert, but the issue with running two java commands from a script is that the second command is not executed until the first one returns. So the solution is to run the first command in the background so that the other commands can execute at the same time.
A Unix shell has ways to run a command in the background. I found this link useful: https://www.maketecheasier.com/run-bash-commands-background-linux/
In a bash terminal, a command can be run in the background by appending & to it. So you should be able to start both jars with something like:
java -jar /home/ec2-user/first.jar &
java -jar /home/ec2-user/second.jar
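In /etc/rc.local that could look something like the sketch below (paths and log files are only examples); backgrounding both commands and keeping the final exit 0 lets rc.local finish while the services keep running:
#!/bin/sh -e
# start both services in the background so rc.local can complete
java -jar /home/ec2-user/first.jar > /home/ec2-user/first.log 2>&1 &
java -jar /home/ec2-user/second.jar > /home/ec2-user/second.log 2>&1 &
exit 0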
I recommend using systemd.
Create a systemd unit file for each microservice and save it as /etc/systemd/system/my-app.service. Something like this:
[Unit]
Description=My Java app
After=syslog.target network.target
[Service]
EnvironmentFile=/etc/sysconfig/my-app-env
WorkingDirectory=/my/app/home
ExecStart=/usr/bin/java $JAVA_OPTS -jar my-app.jar
KillMode=process
User=my-app-user
Restart=on-failure
[Install]
WantedBy=multi-user.target
Then, run:
systemctl daemon-reload
systemctl enable --now my-app
After that, you can use:
systemctl status my-app
systemctl stop my-app
systemctl start my-app
Another solution is to bundle your jars into Docker images. This of course requires the Docker runtime and adds some overhead, but it also has benefits:
Complete separation of jar files. Easily use different java versions.
No need to worry about differences of local and ec2 environment.
Easily scale to 3 or more jars.
Use the Docker CLI to build and start containers. Works great in a DevOps pipeline.
You can read here to learn how to create Spring Boot Docker images. After you build an image, you start it like this:
docker run -p 8080:8080 springio/gs-spring-boot-docker
You can run as many docker run commands as you need, one after another.
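For reference, a minimal Dockerfile for such an image could look like the sketch below (the jar name and path are placeholders for your own build output); build it with docker build -t first-service . and start it with a docker run command like the one above:
FROM openjdk:8-jre-alpine
COPY target/first-service.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]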
I am not sure which system you are using to start the application.
On a Linux-based system, you can use crontab to run the task when the server reboots.
Follow these steps:
Install cron (as root):
#apt-get install cron
Edit the crontab to add the task:
crontab -e
(Choose Vim or nano to edit the task)
Add this line to your crontab (note it starts with @reboot, not a # comment):
@reboot /usr/bin/java -jar XXXXX.jar
Save your file
Check the result
crontab -l
#systemctl status cron
This method works on my Debian system. For more details, you can refer to:
How to automatically run program on Linux startup
If you are running from bash, join the two jar commands with "&" as below; the & sends the first jar to the background so the second can start.
java -jar /home/ec2-user/first.jar & java -jar /home/ec2-user/second.jar
coupon service
Run the command 'java -jar /home/ec2-user/coupon-service-0.0.1-SNAPSHOT.JAR'
Press CTRL+Z, type bg, press Enter, type disown, press Enter.
product service
Run the command 'java -jar /home/ec2-user/product-service-0.0.1-SNAPSHOT.JAR'
Press CTRL+Z, type bg, press Enter, type disown, press Enter.
NOTE: Both services should have different ports.
I am new to Java. I have a HAPI FHIR server running on AWS, set up by cloning this repository (https://github.com/hapifhir/hapi-fhir-jpaserver-starter).
I run my server with the following command: "sudo mvn -e jetty:run"
--
My Problem:
As soon as I log out of AWS, my server stops. I log in to my AWS instance via the .pem file; the instance runs Ubuntu 18.04 LTS with an nginx server.
Thanks
The ideal way to run a Java application on AWS is to run it as a daemon by setting up a systemd unit (or init script) in Linux.
In your case the application stops as soon as you close the terminal because you started it in the terminal without nohup: when the terminal closes, the controlling process is stopped and the application is stopped with it. If you just want to launch the application in the background without going through the hassle of setting it up as a service in Linux, you can use the nohup command (setting up a systemd unit to register the Java application as a service is still the preferred approach):
nohup java -jar yourjarName &
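If you do go the systemd route, a minimal unit file sketch could look like the one below; the user, working directory, and mvn path are assumptions, adjust them to your setup. Save it as /etc/systemd/system/hapi-fhir.service, then run systemctl daemon-reload and systemctl enable --now hapi-fhir:
[Unit]
Description=HAPI FHIR JPA server
After=network.target
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/hapi-fhir-jpaserver-starter
ExecStart=/usr/bin/mvn -e jetty:run
Restart=on-failure
[Install]
WantedBy=multi-user.target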
Run it as a daemon:
"sudo mvn -e jetty:run &"
The & makes the command run in the background.
From man bash:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0.
I want to run a Java Spring application inside a Docker container, and this application should be able to deploy sibling containers. When I run the Java application directly on my machine it works fine and can deploy containers, but as soon as I run the application inside a container it no longer works (I'm using supervisord to run MongoDB and the Java Spring app in one container, and I know that's not best practice). The container starts up fine, but crashes as soon as my application tries to connect to the Docker daemon, without any stack trace from Java, just the error WARN received SIGTERM indicating exit request. The supervisord logs don't contain additional info.
I tried mounting the docker socket from the host (Windows 10 Pro with Docker Desktop, also tried Ubuntu Server 18.04) into the container using -v /var/run/docker.sock:/var/run/docker.sock.
I also tried to use --net="host".
Neither worked, although with the second one the container does not crash but produces a different error ({}->unix://localhost:80: Connection refused) in the log of my Java application, which indicates that it can't even find the right address for the daemon.
I also activated "Expose daemon on tcp://localhost:2375 without TLS".
I also tried to set the DOCKER_HOST environment variable inside the container to default values such as "tcp://localhost:2375" or "/var/run/docker.sock".
Here is the code that I use to initialize the docker client.
DockerClient docker = DefaultDockerClient.fromEnv().build();
DefaultDockerClient.fromEnv().build() should create a Docker client that uses the DOCKER_HOST environment variable to reach the host, or falls back to the default address ("unix:///var/run/docker.sock" on *NIX).
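For reference, pointing the client directly at the socket mounted from the host with -v /var/run/docker.sock:/var/run/docker.sock (instead of relying on the environment) would look roughly like this with the same spotify docker-client:
// Talk to the Docker socket mounted from the host
DockerClient docker = DefaultDockerClient.builder()
        .uri("unix:///var/run/docker.sock")
        .build();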
Here is my Dockerfile:
FROM openjdk:8-jre-alpine
ENV PACKAGES mongodb supervisor
VOLUME /opt/server
VOLUME /data/db
WORKDIR /opt/accservermanager
ADD supervisord.conf /etc/supervisor.conf
ADD accservermanager.jar /opt/accservermanager/accservermanager.jar
ADD application.properties /opt/accservermanager/application.properties
RUN apk update && \
apk add --update $PACKAGES --no-cache && \
rm -rf /var/cache/apk/*
EXPOSE 8000
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor.conf"]
And finally, my supervisord.conf
[supervisord]
user=root
stderr_logfile=/var/log/supervisord.err.log
stdout_logfile=/var/log/supervisord.out.log
loglevel=debug
[program:mongodb]
command=mongod --smallfiles
autostart=true
autorestart=true
stderr_logfile=/var/log/mongo.err.log
stdout_logfile=/var/log/mongo.out.log
[program:accservermanager]
directory=/opt/accservermanager/
command=java -jar accservermanager.jar
autostart=true
autorestart=true
stderr_logfile=/var/log/accservermanager.err.log
stdout_logfile=/var/log/accservermanager.out.log
Expected result: Application connects to the docker client from the host and is able to deploy/manage containers on the host
Actual result: Container crashes or outputs errors.
It turns out there is a newer version of the spotify docker-client that fixes my problem.
Updating from v8.15.1 to v8.15.2 solved my issue.
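If you manage the dependency with Maven, the bumped version would look roughly like this in the pom.xml (these are the coordinates of the spotify client):
<dependency>
    <groupId>com.spotify</groupId>
    <artifactId>docker-client</artifactId>
    <version>8.15.2</version>
</dependency>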
I checked the other posts, but they don't cover my issue.
I'm doing the Linuxacademy "Certified Jenkins Engineer" course, and in the "Functional Testing" lesson we add a Docker agent with some steps in the Jenkinsfile, but I am confused by the syntax used there versus the syntax described in the official Jenkins Pipeline documentation, Using Docker with Pipeline.
What the Jenkinsfile tries to achieve is to test a .jar file on a Jenkins node running CentOS, but the test needs to run on Debian. To do that on the CentOS node, the Jenkinsfile has a stage with a Docker agent and a command that pulls the openjdk image from Docker Hub and runs some commands in it.
This is the syntax from the Lesson Repo:
stage("Test on Debian") {
agent {
docker 'openjdk:8u121-jre'
}
steps {
sh "wget http://brandon4231.mylabserver.com/rectangles/all/rectangle_${env.BUILD_NUMBER}.jar"
sh "java -jar rectangle_${env.BUILD_NUMBER}.jar 3 4"
Note that I simplified the file to match the work in progress (this one is the final version); the focus is on the agent line.
My first question: the Jenkins documentation syntax is different from the one used here, yet in the lesson video it runs with no issues. According to the documentation, the correct syntax should be:
agent {
    docker { image 'openjdk:8u121-jre' }
}
My second question: whichever syntax I use (and using openjdk:7u181-jre, because the image from the lesson is no longer available), I get this error in the console log output:
If I go to the node's terminal and manually run
docker run openjdk:7u181-jre
it works fine; I run it as a non-sudo user.
Also, I don't understand what the docker command in the Jenkinsfile does: does it run the container after pulling it, or just pull it?
Any idea about what's going on?
Thanks.
1) Install Docker on the Jenkins server.
2) Add the Jenkins user to the docker group by running the command below, then restart both Jenkins and Docker; after that you will be able to access the Docker daemon.
sudo usermod -aG docker jenkins-server-user-name
sudo systemctl restart docker
sudo systemctl restart jenkins
To install Jenkins: https://www.digitalocean.com/community/tutorials/how-to-install-jenkins-on-ubuntu-16-04
To install Docker: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04
Note: Verified on ubuntu server
I am trying to run Java with a Docker image in GitLab.
Here is my Dockerfile:
FROM java:latest
FROM perl
COPY . /
ENTRYPOINT ["/usr/bin/perl", "/myapp_entrypoint.pl"]
I was able to build the Docker image successfully and run Perl commands, but Java commands are not working.
My application is a Linux application and I am running 'java -version'. I get no output at all, a completely blank output, for the version command.
What could be the issue? Do I need to add anything Linux-related, since I am running 'java -version' as a Linux command?
You don't specify what OS you're running in your container, but the main issue is that you're blowing away your Java layer with another FROM directive.
From the documentation, emphasis mine:
Each FROM instruction clears any state created by previous instructions.
So I'd espouse a solution in which I install Perl (if I really needed to) after having my base Java image.
However, if you use the base OpenJDK images, Perl comes preinstalled, so that will simplify your Dockerfile significantly.
FROM openjdk:latest
COPY . /
ENTRYPOINT ["/usr/bin/perl", "/myapp_entrypoint.pl"]