Testcontainers does not run FTP container - java

I am trying to deploy a Testcontainers container in Java with an FTP server:
private static final ImageFromDockerfile imageDockerFile = new ImageFromDockerfile()
        .withFileFromClasspath("Dockerfile", "ftp_container/Dockerfile")
        .withFileFromClasspath("vsftp.conf", "ftp_container/vsftp.conf")
        .withFileFromClasspath("start.sh", "ftp_container/start.sh");

public FtpContainer()
{
    super(imageDockerFile);
    withCopyFileToContainer(MountableFile.forClasspathResource("ftp_container/vsftp.conf"),
            "etc/vsftp/vsftp.conf");
    withCopyFileToContainer(MountableFile.forClasspathResource("ftp_container/start.sh"),
            "/start.sh");
    withCommand("ftp");
}
It builds and starts without errors, but the container does not appear and I can't connect to it.
The Dockerfile is:
FROM centos
RUN yum -y install openssl vsftpd && rm -rf /var/cache/yum/*
RUN useradd -ms /bin/bash admin && echo 'admin:admin' | chpasswd
COPY vsftp.conf /etc/vsftp/vsftp.conf
COPY start.sh /
RUN chmod +x /start.sh
RUN mkdir -p /home/vsftpd/
RUN chown -R ftp:ftp /home/vsftpd/
VOLUME /home/admin
VOLUME /var/log/vsftpd
EXPOSE 21
ENTRYPOINT ["/start.sh"]
EDIT: Here is my start.sh:
#!/bin/sh
CONF_FILE="/etc/vsftp/vsftp.conf"
echo "Launching vsftp on ftp protocol"
&>/dev/null /usr/sbin/vsftpd $CONF_FILE

If the container "does not appear", it can mean one of three things:
1. There is some issue with the Testcontainers configuration (in the code). In this case the chances are that you'll see an exception in Java; I assume that's not your case.
2. The start.sh was triggered, but for some reason the process died and the Docker container died with it. In this case, place a breakpoint right after the Testcontainers code (before the code that actually uses it), and once you hit that breakpoint, run docker ps. The chances are that the container won't be there. Then check docker ps -a for a recently-died container and see what its logs show. (You can put an echo hello in start.sh to check that it gets printed to stdout and picked up by the docker logs command.)
3. The start.sh starts some kind of server, but it takes time until the server is actually up. I have seen this happen for some kinds of containers (it pretty much depends on how the start.sh is written). In this case do the same trick with a breakpoint, but this time when you run docker ps you should see the container up and running. Wait for a while and "continue" the test execution. If the test works now, you can either rewrite the startup script so that it won't return until the service is ready (the port can receive connections), or add an "artificial" wait in the code, as sketched below. (Yeah, I know this sounds bad, but sometimes you can't touch the container, so it's your only choice.)
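Rather than a hard-coded sleep, Testcontainers' built-in wait strategies can do the waiting for you. A minimal sketch, assuming FtpContainer extends GenericContainer and vsftpd listens on port 21 (both assumptions, not confirmed by the question):
// Inside the FtpContainer constructor, after withCommand("ftp"):
withExposedPorts(21);                      // let Testcontainers map and probe the FTP port
waitingFor(Wait.forListeningPort()         // block startup until the port accepts connections
        .withStartupTimeout(Duration.ofSeconds(60)));
Here Wait comes from org.testcontainers.containers.wait.strategy.Wait and Duration from java.time. If the port never opens, the container is dying at startup and case 2 above applies.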


How to use docker ENTRYPOINT to exec java and then exec touch once java process completes

I am trying to run a pod with 2 containers. The main container runs a Java image that executes a cron job, and the sidecar (Logstash) container reads the logs and ships them to the Kibana server. The main container completes once the cron job is finished, but the sidecar keeps running because it doesn't shut down gracefully. I tried https://github.com/karlkfi/kubexit and the preStop container lifecycle hook, but neither worked.
Now I want to try to stop the sidecar container using the Dockerfile. The Docker ENTRYPOINT execs the Java jar, and once the Java process is finished I want to create/touch a file inside the main container; my sidecar container looks for that touched file.
This is my current Dockerfile:
FROM mcr.microsoft.com/java/jre:11-zulu-alpine
ARG projectVersion
WORKDIR /opt/example/apps/sync
COPY /target/sync-app-${projectVersion}.jar app.jar
ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar
I want to add the following once the Java process completes:
echo "Going to kill sync-app container..."
trap 'touch /opt/example/apps/sync/container/done' EXIT
I really want to know if it's possible to execute the above commands once the ENTRYPOINT execution is finished (maybe by putting these commands together in a script and passing that script to the ENTRYPOINT). I am not sure what is required to make them execute properly.
This is the code I have for the logstash container, which looks for the file created by the first container:
- args:
  - |
    dockerd-entrypoint.sh &
    while ! test -f /opt/example/apps/sync/container/done; do
      echo "Waiting for sync-app to finish..."
      sleep 5
    done
    echo "sync-app finished, exiting"
    exit 0
  command:
  - /bin/sh
  - -c
Please let me know if any more information is required. Thanks for all the help in advance!
You can wait for the process to finish via its PID. The bash interpreter has a built-in command called wait: called with a child PID as its argument, it waits until that child has finished.
So you can create the following entrypoint.sh and add it to your Java container.
entrypoint.sh
#!/bin/bash
echo "Cleaning the file signal and running the entrypoint"
rm -f /opt/example/apps/sync/container/done
# Run the jar in the background so we can capture its PID and wait on it
java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar &
child_pid=$!
echo "Process started, waiting for the exit"
wait $child_pid
echo "Entrypoint exited, signaling the file"
touch /opt/example/apps/sync/container/done
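If you can modify the application itself, a JVM shutdown hook is an alternative that needs no wrapper script. A minimal sketch (the marker path is taken from the question; note that shutdown hooks run on normal exit and SIGTERM, but not on SIGKILL):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public final class DoneFileSignal {
    public static void install() {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                // Signal the sidecar that the main process has finished.
                Files.createDirectories(Path.of("/opt/example/apps/sync/container"));
                Files.createFile(Path.of("/opt/example/apps/sync/container/done"));
            } catch (IOException ignored) {
                // Best effort: if the file can't be created, the sidecar keeps waiting.
            }
        }));
    }
}

Call DoneFileSignal.install() early in main(). If you can't touch the application code, the entrypoint script above is the more robust option.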
You can mount an emptyDir in both your main and sidecar containers.
The main container writes a file, at a location where this emptyDir is mounted, as the last statement in its entrypoint just before it exits, and the sidecar has a loop checking for the existence of this same file.
Once the file is detected, the sidecar container does exit 0.
It's a bit hacky but should do the job.
On a side note, Docker and Kubernetes have some defaults as to how logs are handled. They collect them from each container's STDOUT and store the output in a file on the node running the container (/var/lib/docker/containers). Logstash could be set up as a DaemonSet running on each of these nodes, with the container log location mounted as a hostPath, tailing the files mentioned above. This way your containers don't require any sidecar, which in my opinion simplifies the overall setup, leaving to Logstash the responsibility to do whatever is required with the logging; a rough sketch of such a DaemonSet follows.
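This is only a skeleton under the assumptions above; the image tag is a placeholder and the Logstash pipeline configuration is omitted:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logstash
spec:
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.17.0   # placeholder tag
        volumeMounts:
        - name: containerlogs
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: containerlogs
        hostPath:
          path: /var/lib/docker/containers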

"Activemq not found" error after running custom Docker image

We have a legacy application that I am trying to dockerize. Its jar bundles both the application and an ActiveMQ together (we cannot change the way it is built), and it has certain installation steps. I created the following initial Dockerfile for this, but I am facing an issue (mentioned after the Dockerfile) when I run the image.
The Dockerfile looks like this:
FROM registry:4000/openjdk:8-jre-alpine
RUN addgroup -S appuser && adduser -S -G appuser appuser
ADD ./fe.jar /home/appuser
RUN chmod +x /home/appuser/fe.jar \
&& chown appuser:appuser /home/appuser/fe.jar
USER appuser
RUN ["java", "-jar", "/home/appuser/fe.jar", "-i"]
WORKDIR /home/appuser/fe/activemq/bin
CMD ["/bin/sh", "-c", "activemq"]
The RUN command extracts the application and the ActiveMQ into a folder called fe.
The WORKDIR seems to be setting the working directory to activemq/bin. I confirmed this with an sh script that runs when the image is started, in which I trigger ls and pwd to see the contents and the location.
However, when I run the image, which triggers the CMD command, I get the error:
/bin/sh: activemq: not found
What can be the possible issue here?
If activemq is an executable in your bin directory (and not in PATH) then you need to edit your CMD:
CMD ["/bin/sh", "-c", "./activemq"]
Also make sure that your script is executable.
Found the problem. The activemq script starts with #!/bin/bash and I am trying to run it using sh. I need to first install bash in the image and then run the activemq script using it.
I got the hint from this answer : docker alpine /bin/sh script.sh not found
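On Alpine, installing bash is a one-liner in the Dockerfile, placed before the CMD (a sketch based on standard apk usage, not taken from the original post):
RUN apk add --no-cache bash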
Now it moves ahead, but the container dies immediately after starting. Not sure what the issue is; it doesn't even give any error.

mkdir command with docker

Inside my Docker container, the command mkdir -p -m 755 directoryName creates a directory (blue file) at the given path. However, outside Docker, when I attempt to create a directory with the same command, mkdir -p -m 755 ContainerID:/root/.../directoryName, it seems to be creating an executable (green file) instead.
This is causing trouble because right after my "create directory" command I'm copying files into the directory, and that copy fails when I do it outside of Docker.
This is what my full command will be when I execute it outside Docker:
mkdir -p -m 755 ContainerID:/root/../dirName && docker cp someImage.jpg ContainerID:/root/../dirName
Any thoughts on how to make this work?
To be honest, I have never heard of such mkdir syntax, referencing a different host, but in any case (even if it were supported) I would not use it. You should execute anything you want to run inside a Docker container as docker exec ContainerID mkdir -p -m 755 /root/../dirName
If you want to put several commands inside the same docker exec call, you can do so by executing docker exec ContainerID bash -c "whatever && whatever2 && ... whateverX"
Keep in mind that these commands will be executed as the user referenced in the Dockerfile with a USER clause, defaulting to root. There are some images in which the user is set to something different, leading to permission issues while doing stuff like this. The right approach would depend on what you want to achieve; see the combined command below.
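Applied to your example (keeping your own placeholder path), the full command would look something like:
docker exec ContainerID mkdir -p -m 755 /root/../dirName && docker cp someImage.jpg ContainerID:/root/../dirName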
Hope that helps! :)

How to keep a GlassFish server's domain active while disconnected from SSH to an EC2 instance? [duplicate]

I connect to the Linux server via PuTTY SSH. I tried to run it as a background process like this:
$ node server.js &
However, after 2.5 hrs the terminal becomes inactive and the process dies. Is there any way I can keep the process alive even with the terminal disconnected?
Edit 1
Actually, I tried nohup, but as soon as I close the PuTTY SSH terminal or unplug my internet, the server process stops right away.
Is there anything I have to do in Putty?
Edit 2 (on Feb, 2012)
There is a node.js module, forever. It will run a node.js server as a daemon service.
nohup node server.js > /dev/null 2>&1 &
nohup means: Do not terminate this process even when the stty is cut off.
> /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null). You may replace &1 with a file path to keep a log of errors, e.g.: 2>/tmp/myLog
& at the end means: run this command as a background task.
Simple solution (if you are not interested in coming back to the process, just want it to keep running):
nohup node server.js &
There's also the jobs command to see an indexed list of those backgrounded processes. And you can kill a backgrounded process by running kill %1 or kill %2 with the number being the index of the process.
Powerful solution (allows you to reconnect to the process if it is interactive):
screen
You can then detach by pressing Ctrl+a and then d, and attach back by running screen -r
Also consider the newer alternative to screen, tmux.
You really should try screen. It is a bit more complicated than just doing nohup long_running &, but once you understand screen you will never go back.
Start your screen session at first:
user#host:~$ screen
Run anything you want:
wget http://mirror.yandex.ru/centos/4.6/isos/i386/CentOS-4.6-i386-binDVD.iso
Press Ctrl+a and then d. Done. Your session keeps running in the background.
You can list all sessions with screen -ls and attach to one with the screen -r 20673.pts-0.srv command, where 20673.pts-0.srv is an entry from that list.
This is an old question, but it ranks high on Google. I almost can't believe the highest-voted answers, because running a node.js process inside a screen session, with &, or even with the nohup flag, all of these are just workarounds.
Especially the screen/tmux solution, which should really be considered an amateur solution. Screen and tmux are not meant to keep processes running, but to multiplex terminal sessions. That's fine when you are running a script on your server and want to disconnect, but for a node.js server you don't want your process to be attached to a terminal session. That is too fragile. To keep things running you need to daemonize the process!
There are plenty of good tools to do that.
PM2: http://pm2.keymetrics.io/
# basic usage
$ npm install pm2 -g
$ pm2 start server.js
# you can even define how many processes you want in cluster mode:
$ pm2 start server.js -i 4
# you can start various processes, with complex startup settings
# using an ecosystem.json file (with env variables, custom args, etc):
$ pm2 start ecosystem.json
One big advantage I see in favor of PM2 is that it can generate the system startup script to make the process persist between restarts:
$ pm2 startup [platform]
Where platform can be ubuntu|centos|redhat|gentoo|systemd|darwin|amazon.
forever.js: https://github.com/foreverjs/forever
# basic usage
$ npm install forever -g
$ forever start app.js
# you can run from a json configuration as well, for
# more complex environments or multi-apps
$ forever start development.json
Init scripts:
I'm not going to go into detail about how to write an init script, because I'm not an expert on this subject and it would be too long for this answer, but basically init scripts are simple shell scripts triggered by OS events. You can read more about this here.
Docker:
Just run your server in a Docker container with the -d option and, voilà, you have a daemonized node.js server!
Here is a sample Dockerfile (from node.js official guide):
FROM node:argon
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 8080
CMD [ "npm", "start" ]
Then build your image and run your container:
$ docker build -t <your username>/node-web-app .
$ docker run -p 49160:8080 -d <your username>/node-web-app
Always use the proper tool for the job. It'll save you a lot of headaches and overtime hours!
Another solution: disown the job.
$ nohup node server.js &
[1] 1711
$ disown -h %1
nohup will allow the program to continue even after the terminal dies. I have actually had situations where nohup prevents the SSH session from terminating correctly, so you should redirect input as well:
$ nohup node server.js </dev/null &
Depending on how nohup is configured, you may also need to redirect standard output and standard error to files.
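For example (the log file names here are just illustrative):
$ nohup node server.js </dev/null >server.log 2>server.err &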
Nohup and screen offer great, lightweight solutions for running Node.js in the background. The Node.js process manager (PM2) is a handy tool for deployment. Install it globally with npm:
npm install pm2 -g
To run a Node.js app as a daemon:
pm2 start app.js
You can optionally link it to Keymetrics.io, a monitoring SaaS made by Unitech.
$ node server.js & disown
This removes the job from the shell's active job list and keeps the command running in the background.
I have this function in my shell rc file, based on @Yoichi's answer:
nohup-template () {
[[ "$1" = "" ]] && echo "Example usage:\nnohup-template urxvtd" && return 0
nohup "$1" > /dev/null 2>&1 &
}
You can use it this way:
nohup-template "command you would execute here"
Have you read about the nohup command?
To run a command as a system service on Debian with SysV init:
Copy the skeleton script and adapt it for your needs; probably all you have to do is set some variables. Your script will inherit sane defaults from /lib/init/init-d-script; if something does not fit your needs, override it in your script. If something goes wrong, you can see the details in the source of /lib/init/init-d-script. The mandatory vars are DAEMON and NAME. The script will use start-stop-daemon to run your command; in START_ARGS you can define additional parameters of start-stop-daemon to use.
cp /etc/init.d/skeleton /etc/init.d/myservice
chmod +x /etc/init.d/myservice
nano /etc/init.d/myservice
/etc/init.d/myservice start
/etc/init.d/myservice stop
That is how I run some Python stuff for my MediaWiki wiki:
...
DESC="mediawiki articles converter"
DAEMON='/home/mss/pp/bin/nslave'
DAEMON_ARGS='--cachedir /home/mss/cache/'
NAME='nslave'
PIDFILE='/var/run/nslave.pid'
START_ARGS='--background --make-pidfile --remove-pidfile --chuid mss --chdir /home/mss/pp/bin'
export PATH="/home/mss/pp/bin:$PATH"
do_stop_cmd() {
start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 \
$STOP_ARGS \
${PIDFILE:+--pidfile ${PIDFILE}} --name $NAME
RETVAL="$?"
[ "$RETVAL" = 2 ] && return 2
rm -f $PIDFILE
return $RETVAL
}
Besides setting the vars, I had to override do_stop_cmd because Python substitutes the executable, so the service did not stop properly.
Apart from the cool solutions above, I'd also mention the supervisord and monit tools, which allow you to start a process, monitor its presence, and restart it if it dies. With monit you can also run active checks, such as checking whether the process responds to an HTTP request.
For Ubuntu I use this:
(exec PROG_SH &> /dev/null &)
Try this for a simple solution
cmd & exit

Ignite running in Docker (is: General Java-Docker issue)

I am trying to run Ignite in a Docker container (Mac OS X, Docker 1.9.1) as committed in git:
# Start from a Java image.
FROM java:7
# Ignite version
ENV IGNITE_VERSION 1.5.0-b1
WORKDIR /opt/ignite
ADD http://www.us.apache.org/dist/ignite/1.5.0-b1/apache-ignite-fabric-1.5.0-b1-bin.zip /opt/ignite/ignite.zip
# Ignite home
ENV IGNITE_HOME /opt/ignite/apache-ignite-fabric-1.5.0-b1-bin
RUN unzip ignite.zip
RUN rm ignite.zip
# Copy sh files and set permission
ADD ./run.sh $IGNITE_HOME/
RUN chmod +x $IGNITE_HOME/run.sh
CMD $IGNITE_HOME/run.sh
After building it locally as apache/ignite and running the image with the following command, the container 'hangs':
docker run --expose=4700-4800 -it -p 47500-47600:47500-47600 -p 47100-47200:47100-47200 --net=host -e "CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-default.xml" apacheignite/ignite-docker
When connecting to the container (docker exec -ti apache/ignite /bin/bash) and running the command in verbose mode via bash, it hangs on org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator:
bash -x /opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/bin/ignite.sh https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-default.xml
Output where it hangs:
+ CP='/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-indexing/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-spring/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/licenses/*'
++ /usr/bin/java -cp '/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-indexing/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/ignite-spring/*:/opt/ignite/apache-ignite-fabric-1.5.0-b1-bin/libs/licenses/*' org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator
Looking at the code of CommandLineRandomNumberGenerator, I don't see anything special, just one line to generate a UUID. Is there something else that is started automatically and keeps a thread alive, so that the application cannot exit?
This seems to be a Docker issue with Java in general; see also:
https://github.com/docker/docker/issues/18180
Several solutions possible:
Create a Docker machine as follows and run the container in it (cf. https://github.com/docker/docker/issues/18180#issuecomment-162568282):
docker-machine create -d virtualbox --engine-storage-driver overlay overlaymachine
Add an explicit System.exit(0) at the end of each main method (cf. https://github.com/docker/docker/issues/18180#issuecomment-161129296)
Wait for the next patched Docker version (https://github.com/docker/docker/issues/18180#issuecomment-170656525)
I think it would be good practice (for now) to add System.exit to all main methods in Ignite, since this is independent of alternative hacks on Docker or Linux in general (the Linux kernel needs an AUFS upgrade, and many machines may lag behind before that happens). That way future Ignite versions can safely be installed on older kernels as well.
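For illustration, the System.exit workaround (the second option above) would look something like this; a sketch of a command-line utility's main method, not the actual Ignite source:

public static void main(String[] args) {
    // Do the actual work; for CommandLineRandomNumberGenerator this is printing a UUID.
    System.out.println(java.util.UUID.randomUUID());
    // Force the JVM to terminate even if a non-daemon thread is still alive,
    // which is what makes the process hang under the affected Docker versions.
    System.exit(0);
}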
