Ignite running in Docker (actually a general Java-Docker issue) - java

I am trying to run Ignite in a Docker container (Mac OS X, Docker 1.9.1) using the Dockerfile as committed in git:
# Start from a Java image.
FROM java:7
# Ignite version
ENV IGNITE_VERSION 1.5.0-b1
WORKDIR /opt/ignite
ADD http://www.us.apache.org/dist/ignite/1.5.0-b1/apache-ignite-fabric-1.5.0-b1-bin.zip /opt/ignite/ignite.zip
# Ignite home
ENV IGNITE_HOME /opt/ignite/apache-ignite-fabric-1.5.0-b1-bin
RUN unzip ignite.zip
RUN rm ignite.zip
# Copy sh files and set permission
ADD ./run.sh $IGNITE_HOME/
RUN chmod +x $IGNITE_HOME/run.sh
CMD $IGNITE_HOME/run.sh
After building it locally to apache/ignite and running the image with the following command, the container 'hangs':
docker run --expose=4700-4800 -it -p 47500-47600:47500-47600 -p 47100-47200:47100-47200 --net=host -e "CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-default.xml" apacheignite/ignite-docker
When connecting to the container (docker exec -ti apache/ignite /bin/bash) and running the command in verbose mode via bash, it hangs on org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator:
bash -x /opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/bin/ignite.sh https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-default.xml
Output where it hangs:
+ CP='/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-indexing/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-spring/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/licenses/*'
++ /usr/bin/java -cp '/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-indexing/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-spring/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/licenses/*' org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator
Looking at the code of CommandLineRandomNumberGenerator, I don't see anything special, just one line to generate a UUID. Are there other things that are started automatically somehow and lock a thread so that the application cannot exit?
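As background, the JVM normally exits only once all non-daemon threads have finished, so a single stray non-daemon thread started by a library would produce exactly this symptom. A tiny hypothetical sketch, not Ignite's actual code:
public class NonDaemonExample {
    public static void main(String[] args) {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // simulates a background task that never finishes
            } catch (InterruptedException ignored) {
            }
        });
        // Without worker.setDaemon(true), this thread keeps the JVM alive
        // even after main() returns.
        worker.start();
    }
}
(As the answer below shows, though, the actual culprit here turned out to be Docker rather than a stray thread.)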

This seems to be a Docker issue with Java in general; see also:
https://github.com/docker/docker/issues/18180
Several solutions are possible:
Create a Docker machine as follows and run the container there (cf. https://github.com/docker/docker/issues/18180#issuecomment-162568282):
docker-machine create -d virtualbox --engine-storage-driver overlay overlaymachine
Add an explicit System.exit(0) at the end of each main method (cf. https://github.com/docker/docker/issues/18180#issuecomment-161129296)
Wait for the next patched Docker version (https://github.com/docker/docker/issues/18180#issuecomment-170656525)
I think it would be good practice (for now) to add System.exit to all main methods in Ignite, since this is independent of the alternative hacks on Docker or Linux in general (the Linux kernel needs an AUFS upgrade and many machines may lag behind before that). That way future Ignite versions can safely be installed on older kernels as well.
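A minimal sketch of what that workaround could look like, assuming a simplified command-line utility (the class below is a placeholder, not Ignite's actual CommandLineRandomNumberGenerator):
import java.util.UUID;

public class CommandLineUuidGenerator {
    public static void main(String[] args) {
        // Do the actual work: print a freshly generated UUID.
        System.out.println(UUID.randomUUID());

        // Explicitly terminate the JVM so the process exits even on
        // Docker/kernel combinations affected by the issue above.
        System.exit(0);
    }
}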

Related

Docker stack - wait for it

I would like to start two services in a stack.
MySQL
Spring Boot app
The main problem is that the Spring Boot app starts before the database does (or starts while connections to the database are not yet allowed). Then in the logs I see: java.net.UnknownHostException: database.
We could use startup order:
https://docs.docker.com/compose/startup-order/
So what did I do? I copied wait-for-it.sh next to the docker-compose file and added this line:
command: ["./wait-for-it.sh", "database:3306", "--", "java -Dspring.profiles.active=prod -jar app.jar"]
The result is:
java.lang.IllegalArgumentException: Invalid argument syntax: --
My ENTRYPOINT in the backend Dockerfile:
ENTRYPOINT ["java","-Dspring.profiles.active=prod", "-jar","app.jar"]
How can I make the Spring Boot app wait for the MySQL database under a Docker stack?
When you run the container, the ENTRYPOINT and CMD are combined. In your example you've set ENTRYPOINT to run the Java process, but then override CMD in the docker-compose.yml: instead of actually running the wait-for-it.sh script, it just gets passed as extra parameters to the JVM.
A typical pattern for using both of these together is to have ENTRYPOINT be some sort of wrapper that does first-time setup, then takes CMD as additional parameters. For this to work CMD needs to be a complete shell command. Change the Dockerfile to look like:
COPY wait-for-it.sh entrypoint.sh .
# ENTRYPOINT _must_ be in JSON-array form
ENTRYPOINT ["./entrypoint.sh"]
# CMD may be either string or JSON-array form
# (This is exactly what you originally had as ENTRYPOINT)
CMD ["java", "-Dspring.profiles.active=prod", "-jar", "app.jar"]
The entrypoint script can be very simple:
#!/bin/sh
# Wait for the database to be up
if [ -n "$MYSQL_HOST" ]; then
./wait-for-it.sh "$MYSQL_HOST:3306"
fi
# Run the CMD
exec "$#"
The important detail here is that I've configured the database host to be passed as an environment variable. This requires a shell to expand it, which is tricky to do in the JSON-array ENTRYPOINT syntax, so I've moved it into a separate script.
Finally, in the docker-compose.yml, do not override command: (or entrypoint:), but do make sure to set the environment variable for the script to be able to find the database.
version: '3.8'
services:
database: { ... }
application:
environment:
MYSQL_HOST: database
depends_on:
- database
# no command: override
The wrapper here will run whenever the container starts up, so if you docker-compose run application bash to get an interactive shell based on the image, it will still wait for the database to be up.
If you control both the Dockerfile and the docker-compose.yml, you shouldn't usually need to override command: in the Compose settings. I find the entrypoint-wrapper pattern useful enough that I generally default to using CMD in my Dockerfiles (there is no requirement to have an ENTRYPOINT).

Testcontainers does not run FTP container

I am trying to deploy a Testcontainers container in Java running an FTP server:
private final static ImageFromDockerfile imageDockerFile = new ImageFromDockerfile()
        .withFileFromClasspath("Dockerfile", "ftp_container/Dockerfile")
        .withFileFromClasspath("vsftp.conf", "ftp_container/vsftp.conf")
        .withFileFromClasspath("start.sh", "ftp_container/start.sh");

public FtpContainer() {
    super(imageDockerFile);
    withCopyFileToContainer(MountableFile.forClasspathResource("ftp_container/vsftp.conf"),
            "etc/vsftp/vsftp.conf");
    withCopyFileToContainer(MountableFile.forClasspathResource("ftp_container/start.sh"),
            "/start.sh");
    withCommand("ftp");
}
And it starts without problems, but the container does not appear and I can't connect to it.
The Dockerfile is:
FROM centos
RUN yum -y install openssl vsftpd && rm -rf /var/cache/yum/*
RUN useradd -ms /bin/bash admin && echo 'admin:admin' | chpasswd
COPY vsftp.conf /etc/vsftp/vsftp.conf
COPY start.sh /
RUN chmod +x /start.sh
RUN mkdir -p /home/vsftpd/
RUN chown -R ftp:ftp /home/vsftpd/
VOLUME /home/admin
VOLUME /var/log/vsftpd
EXPOSE 21
ENTRYPOINT ["/start.sh"]
EDIT: adding my start.sh:
#!/bin/sh
CONF_FILE="/etc/vsftp/vsftp.conf"
echo "Launching vsftp on ftp protocol"
&>/dev/null /usr/sbin/vsftpd $CONF_FILE
If the container "does not appear", it can mean one of three things:
There is some issue with the configuration of Testcontainers (in the code). In this case the chances are that you'll see an exception in Java; I assume this is not your case.
The start.sh was triggered, but for some reason the process died and the Docker container died with it. In this case, place a breakpoint right after the Testcontainers code (before the code that actually uses it), and once you hit that breakpoint run docker ps. The chances are that the container won't be there. Then check docker ps -a to see whether there is a recently-died container and what its logs show (you can put an echo hello in start.sh to check that it gets printed to stdout and picked up by the docker logs command).
The start.sh starts some kind of server, but it takes time until it is actually up. I've seen this happen for some kinds of containers (it pretty much depends on how the start.sh is written). In this case do the same trick with a breakpoint, but this time when you run docker ps you should see the container up and running.
Wait for a while and then "continue" the test execution. If you see that the test works now, you can either rewrite the startup script so that it won't return until the service is ready (the port can accept connections) or add an "artificial" wait in the code (yeah, I know this sounds bad, but sometimes you can't touch a container, so it's your only choice).
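If FtpContainer extends GenericContainer, another option instead of an artificial wait might be Testcontainers' built-in wait strategies, which block startup until the exposed port actually accepts connections. A rough sketch under that assumption, trimmed down from the question's code:
import java.time.Duration;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.images.builder.ImageFromDockerfile;

public class FtpContainer extends GenericContainer<FtpContainer> {
    private static final ImageFromDockerfile imageDockerFile = new ImageFromDockerfile()
            .withFileFromClasspath("Dockerfile", "ftp_container/Dockerfile")
            .withFileFromClasspath("vsftp.conf", "ftp_container/vsftp.conf")
            .withFileFromClasspath("start.sh", "ftp_container/start.sh");

    public FtpContainer() {
        super(imageDockerFile);
        // Expose the FTP control port so Testcontainers maps it to the host.
        withExposedPorts(21);
        // Block startup until the port is actually accepting connections,
        // rather than sleeping in the test code.
        waitingFor(Wait.forListeningPort().withStartupTimeout(Duration.ofSeconds(60)));
    }
}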

Install a custom package in a docker container that keeps exiting

I have a docker container which runs a springboot java application. Dockerfile:
# Create container with java preinstalled
FROM openjdk:8-jdk-alpine
# Create app directory
VOLUME /tmp
# Handle Arguments
ARG JAR_FILE
ARG ENV_NAME
ENV SPRING_PROFILES_ACTIVE=${ENV_NAME}
RUN echo ${ENV_NAME}
# Bundle app source
COPY ${JAR_FILE} app.jar
COPY application.yml application.yml
# Run the server
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dspring.config.location=application.yml","-jar","app.jar"]
Now, I have a custom library I need to install in that container. I'll need to copy the installation, extract it, run the install script and answer prompts (Y/n)
I understood the easiest way to do this is to connect to the container, install the package and commit the changes.
First - I start the container using:
docker run --name local-jdk8 -d openjdk:8-jdk-alpine
The next step is to copy the data and run the install script, but the container keeps exiting since the run command is effectively empty ("/bin/sh"), which means I can't run:
docker exec -it local-jdk8 bash
Any ideas on how I can modify such a container?
I solved it using the expect library.
My Dockerfile:
# Create container with java preinstalled
FROM openjdk:8
# Create app directory
VOLUME /tmp
# Handle Arguments
ARG JAR_FILE
ARG ENV_NAME
ARG DRIVER_FILE
# Environment
ENV SPRING_PROFILES_ACTIVE=${ENV_NAME}
RUN echo ${ENV_NAME}
# Fingerprint Driver
RUN apt-get update -y
RUN apt-get install -y expect
COPY ${DRIVER_FILE} driver.tar.gz
COPY driver-install.exp driver-install.exp
RUN tar -xzf driver.tar.gz
RUN /driver-install.exp
# Copy app source
COPY ${JAR_FILE} app.jar
COPY application.yml application.yml
# Run the server
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-Dspring.config.location=application.yml","-jar","app.jar"]
driver-install.exp is the expect script that automatically interacts with the package installation prompts.
For what it's worth, here is a little trick that allows you to keep your container running to modify and commit it:
docker run --name local-jdk8 -d openjdk:8-jdk-alpine tail -f /dev/null
Furthermore, there is no bash installed in the container, so sh will have to do:
docker exec -it local-jdk8 sh
Nevertheless, modifying the Dockerfile is the better approach, since your change is persisted in code rather than made on a potentially ephemeral container.

Restarted service getting killed in Debian postinst script

I have written a Jenkins job for deploying my package to one of my servers. I am using the Debian package management system. I update all the packages on the machine with the sudo apt-get update command and install the required package with sudo apt-get install package_name in a deployment_script (where we build the .deb file and specify the servers to install it on). I also copy the script I use to start/stop the package to /etc/init.d/package_name. This script can take the parameters start / stop. In my Debian postinst script I call /etc/init.d/package_name start to start the package. For deploying, I just trigger the Jenkins job and give the deployment_script to the job. It installs the package and then calls the postinst script, which restarts the service properly on the intended machine. But as the postinst script exits, the restarted service gets killed. Any help on finding the reason and how to fix it?
I was starting my service like sudo -u user_name java -server some_vm_options (with the jar of the package and configs) > /dev/null &.
I just changed it to sudo -u user_name nohup java -server some_vm_options (with the jar of the package and configs) > /dev/null &. Now my started service doesn't get killed.

How to keep a GlassFish server's domain active while disconnected from SSH to an EC2 instance? [duplicate]

I connect to the Linux server via PuTTY SSH. I tried to run it as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the PuTTY SSH terminal or unplug my internet, the server process stops right away.
Is there anything I have to do in Putty?
Edit 2 (Feb 2012)
There is a node.js module, forever. It will run a node.js server as a daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: Do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
2>&1 means: stderr also goes to the stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command to see an indexed list of those backgrounded processes. And you can kill a backgrounded process by running kill %1 or kill %2 with the number being the index of the process.
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a then d, and reattach by running screen -r
Also consider the newer alternative to screen, tmux.
You really should try using screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you'll never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press Ctrl+A and then d. Done. Your session keeps running in the background.
You can list all sessions with screen -ls and attach to one with the screen -r 20673.pts-0.srv command, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest-voted answers, because running a node.js process inside a screen session, with &, or even with the nohup flag -- all of them -- are just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and tmux are not meant to keep processes running, but to multiplex terminal sessions. It's fine when you are running a script on your server and want to disconnect, but for a node.js server you don't want your process to be attached to a terminal session. This is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
Init scripts:
I won't go into detail about how to write an init script, because I'm not an expert on the subject and it'd be too long for this answer, but basically they are simple shell scripts triggered by OS events. You can read more about this here.
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from node.js official guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
Always use the proper tool for the job. It'll save you a lot of headaches and overtime hours!
Another solution: disown the job
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
Nohup and screen offer great lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm:
npm install pm2 -g
to run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ disown node server.js &
It will remove the command from the active task list and send it to the background.
I have this function in my shell rc file, based on #Yoichi's answer:
nohup-template () {
[[ "$1" = "" ]] && echo "Example usage:\nnohup-template urxvtd" && return 0
nohup "$1" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template "command you would execute here"
Have you read about the nohup command?
To run a command as a system service on Debian with sysv init:
Copy the skeleton script and adapt it for your needs; probably all you have to do is set some variables. Your script will inherit fine defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory vars are DAEMON and NAME. The script will use start-stop-daemon to run your command; in START_ARGS you can define additional start-stop-daemon parameters to use.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
That is how I run some python stuff for my wikimedia wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
$STOP_ARGS \
${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
rm -f $PIDFILE
return $RETVAL
}
Besides setting the vars, I had to override do_stop_cmd because Python substitutes the executable, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which let you start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, like checking whether the process responds to an HTTP request.
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
regards
Try this for a simple solution
cmd & exit
