Vert.x based application crashes on docker container - java

I'm trying to run a Vert.x Java-based application in a Docker container. My application runs a few verticles which it starts from within itself.
I've put the jar file on a folder and created a Dockerfile with the following content:
FROM vertx/vertx3
ENV VERTICLE_FILE Medical-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /performit/web/vertx/verticles/
COPY $VERTICLE_FILE $VERTICLE_HOME/
WORKDIR $VERTICLE_HOME
ENTRYPOINT ["sh", "-c"]
EXPOSE 8080
CMD ["java -jar $VERTICLE_FILE"]
USER daemon
I create an image with the command
$ sudo docker build -t medical-main .
I then attempt to create a container with the following line:
sudo docker run --name medical-main -p 8080:8080 -d medical-main
This fails and the log shows the following:
java.lang.IllegalStateException: Failed to create cache dir
at io.vertx.core.impl.FileResolver.setupCacheDir(FileResolver.java:257)
at io.vertx.core.impl.FileResolver.<init>(FileResolver.java:79)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:138)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:114)
at io.vertx.core.impl.VertxImpl.<init>(VertxImpl.java:110)
at io.vertx.core.impl.VertxFactoryImpl.vertx(VertxFactoryImpl.java:34)
at io.vertx.core.Vertx.vertx(Vertx.java:79)
What am I missing?
Izhar

Judging by FileResolver.java, Vert.x tries to create a ".vertx" directory in the current working directory by default. You have configured a user called "daemon"; are you sure that this user has write access to the working directory in the Docker image? If not, change the permissions as outlined in docker-image-author-guidance, or revert to using the root user.
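For example, one way to grant that access in the question's own Dockerfile is to chown the verticle directory before switching to the non-root user. This is a sketch: the RUN line is the only addition, and it assumes the base image provides a standard "daemon" user.

```dockerfile
FROM vertx/vertx3
ENV VERTICLE_FILE Medical-1.0-SNAPSHOT.jar
ENV VERTICLE_HOME /performit/web/vertx/verticles/
COPY $VERTICLE_FILE $VERTICLE_HOME/
# Give the non-root user write access so Vert.x can create its .vertx cache dir
RUN chown -R daemon $VERTICLE_HOME
WORKDIR $VERTICLE_HOME
EXPOSE 8080
USER daemon
ENTRYPOINT ["sh", "-c"]
CMD ["java -jar $VERTICLE_FILE"]
```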

This directory is used to serve files contained in jar files (for example web assets packaged in a fat jar). If you are not using this feature, you can disable the creation of this directory by setting the vertx.disableFileCPResolving system property to true. You can also change the location using the vertx.cacheDirBase system property.
Reference:
https://groups.google.com/forum/#!topic/vertx/7cBbKrjYfeI

This exception is thrown when Vert.x tries to create the .vertx cache dir so that it can copy and read a file from the classpath, or a file that's inside a jar on the classpath. It's possible the user running the process doesn't have permission to create the cache directory.
The reason behind the cache dir is simple: reading a file from a jar or from an input stream is blocking. To avoid paying that price on every read, Vert.x copies the file to its cache directory and reads it from there on every subsequent read. This behavior can be configured:
vertx run my.Verticle -Dvertx.cacheDirBase=/tmp/vertx-cache
# or
java -jar my-fat.jar -Dvertx.cacheDirBase=/tmp/vertx-cache
Otherwise, you can avoid this behavior entirely by launching your application with -Dvertx.disableFileCaching=true. With this setting, Vert.x still uses the cache, but always refreshes the version stored in the cache from the original source. So if you edit a file served from the classpath and refresh your browser, Vert.x reads it from the classpath, copies it to the cache directory, and serves it from there. Do not use this setting in production; it can kill your performance.

For me, this same issue appeared while trying to run a jar file. It started suddenly, and I was forced to run the jar as root for a while until I finally got fed up and investigated thoroughly.
It happened because I had accidentally run the jar file once with sudo privileges, so the .vertx folder was created by the root account.
I could not figure this out initially because I was using the ll alias on Amazon Linux, which sadly does not display hidden folders.
When I investigated more thoroughly the next time, I also tried ls -al, which showed the .vertx folder, and I realized the issue was that it had been created as the root user.
After deleting the .vertx folder, the jar file started working normally again as a normal user.
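The failure mode can be reproduced without Vert.x at all. This is just a sketch, using a permission-stripped directory as a stand-in for a .vertx folder owned by root:

```shell
# Stand-in for a .vertx dir created earlier by root: the current user
# cannot write into it (or recreate it with the right owner).
mkdir -p app/.vertx
chmod 000 app/.vertx
ls -ld app/.vertx        # note the stripped permissions

# The fix from the answer above: delete the stale cache dir so the
# application can recreate it under the correct user.
chmod 700 app/.vertx
rm -rf app/.vertx
ls -a app                # .vertx is gone; the jar can recreate it
```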

Vert.x tries to create a cache dir (.vertx/file-cache-<uuid>) in the current directory. The given exception is thrown if the mkdirs() call fails.
Does the user daemon have sufficient rights in the workdir?

Related

Java documentation says that hs_err_pid will be written to /tmp if current dir is not writable, but it is not

I work on a bunch of Java services that run in containers in k8s. I think we're not properly configured to integrate any "hs_err_pid" files that get written, so I'm experimenting with this.
We're still running Java 8. I'm reading through https://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html#BGBCIEFC , and I find the following statement:
If the file cannot be created in the specified directory (due to insufficient space, permission problem, or another issue), then the file is created in the temporary directory for the operating system. The temporary directory is /tmp.
So, I went into a container running a Java process. I looked at /tmp, and I saw that it was writable. I looked at the command line of the Java process, verifying that it doesn't set the ErrorFile location. I checked the "ps" output to get the pid. I executed "kill -11 ". I then checked the "ps" output again, and I saw that a new Java process was running. I then checked "/tmp" for a "hs_err_pid" file. It was not there.
Is there something I'm missing here, or is this documentation not correct?
It sounds like systemd PrivateTmp strikes again!
http://0pointer.de/blog/projects/security.html
Service-Private /tmp
Another very simple but powerful configuration switch is PrivateTmp=:
...
[Service]
ExecStart=...
PrivateTmp=yes
... If enabled, this option will ensure that the /tmp directory the service will see is private and isolated from the host system's /tmp.
/tmp traditionally has been a shared space for all local services and users. Over the years it has been a major source of security problems for a multitude of services; symlink attacks and DoS vulnerabilities due to guessable /tmp temporary files are common. By isolating the service's /tmp from the rest of the host, such vulnerabilities become moot.
SUGGESTION: Try searching for "hs_err_pid" elsewhere. For example:
find / -name "hs_err_pid*" -print 2> /dev/null
You might find it in a subdirectory like this:
/tmp/systemd-private-nABCDE/tmp/hs_err_pid
If you wish, you can disable it on a per-service basis in your systemd configuration.
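A per-service drop-in override could look like this (the unit name and path here are hypothetical; substitute your own service):

```ini
# /etc/systemd/system/myservice.service.d/override.conf  (hypothetical unit)
[Service]
PrivateTmp=no
```

Then run systemctl daemon-reload and restart the service for the change to take effect.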

How to load custom xml files during docker run?

I have two custom XML property files that are environment-specific, used in my Spring Boot project. Is it possible to use a mount or volume to provide the files from a location specified during docker run? The XML files are required to successfully connect to a DB server.
Also, if I specify an --env-file option in docker run, can I put my .sh files in any location on the Docker server and specify that path in the run command?
Yes, you can do that by mounting a volume. It maps the chosen server location onto the in-container location; inside the container there is no difference between this shared location and any other. Use the flag -v "SERVER_LOCATION:CONTAINER_LOCATION":
docker run -v /etc/xmlsFolder:/etc/appConfig/destinationFolder your_image
And yes, the script can live anywhere on the server; just reference its path in the run command.
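Putting both parts together, a docker run invocation might look like this (the env-file path is a placeholder; the volume paths are the ones from the example above):

```shell
docker run \
  -v /etc/xmlsFolder:/etc/appConfig/destinationFolder \
  --env-file /opt/config/app.env \
  your_image
```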

Does Tomcat (as a Service) not load native libraries for a Java web application?

I am using Tomcat7 and Ubuntu. I have a Java web application which uses some native libraries. When I run the web application within Eclipse, it works through Eclipse's internal Tomcat server during debugging. However, when I deploy the application to a hosted Tomcat service, it fails when it reaches the point of loading these libraries.
I put the native libraries in /home/me/my_shared_libs and gave folder and file ownership to the "Tomcat7" user (sudo chown).
I gave all permissions on the native libraries to the "Tomcat7" user (sudo chmod).
I did sudo vi /usr/share/tomcat7/bin/setenv.sh and put the following in the file: export CATALINA_OPTS="-Djava.library.path=/home/me/my_shared_libs"
Then I restarted Tomcat (sudo service tomcat7 restart). Whenever the reference to load the native libraries is reached, I get an error about InvocationTargetException.
I am also open to the option of adding the native libraries as part of the application's .WAR file. (Although I am not sure how to do this in Eclipse.)
Log of /var/log/tomcat7/catalina.out-->
Jun 30, 2016 8:11:50 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 3643 ms
Load my_native_lib_called. libmachoman.so: cannot open shared object file: No such file or directory
EDIT:
I found out something very interesting. Tomcat does pick up the library location that I set above. What happens is I have two libraries (.so files) in that location. The first library (libcore.so) calls/loads the second library (libmachoman.so). libcore.so is found and loaded, but libmachoman.so is not, even though both are in the same location.
It can be done from the code itself; that should give you confidence that the library loads.
To do this you can use System.load(), which takes the absolute path of the library, for example:
System.load("/PATH/TO/.so");
Run this piece of code once when the application starts up, before calling the library functions.
You'll have to place your library in a custom location on all the servers.
System.loadLibrary() is also used for the same purpose but the difference is that it would load library by name and would look into locations specified by the Java environment variable java.library.path.
But that can be a pain to set, as you mentioned; the change would have to be made in every instance of Tomcat.
After days of headache, I have a solution.
Edit file
sudo vi /etc/ld.so.conf
Append the location of the native libs to the file:
include /etc/ld.so.conf.d/*.conf
/home/me/my_shared_lib
Reload the config:
sudo ldconfig
View the new change
ldconfig -p | grep my_shared_lib
This tells the dynamic linker where to look for native libraries.
My problem is solved.
There are other alternative solutions here, which may or may not have some cons.
Alternatively (also found out), you can export LD_LIBRARY_PATH in the setenv.sh file under /usr/share/tomcat7/bin/ instead of the above steps. The settings then become part of Tomcat, which is a cleaner approach.
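A minimal setenv.sh along those lines might look like this (a sketch; the directory is the one from the steps above):

```shell
#!/bin/sh
# Sketch of /usr/share/tomcat7/bin/setenv.sh: prepend the native-library
# directory so the dynamic linker can find libmachoman.so when
# libcore.so tries to load it.
LD_LIBRARY_PATH=/home/me/my_shared_libs:${LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
echo "$LD_LIBRARY_PATH"
```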

Selenium Server: Windows Service: The Hard Disk Filled Up After a Few Days

I have used NSSM to create a Windows Service for my Selenium Server instance (v 2.48.2 of the Selenium Server Standalone JAR).
I have also set the service to log on as a Local System account and have allowed the service to interact with the desktop. When I used a particular account for the service instead of the Local System account, Internet Explorer would not launch.
I noticed that after a few days, the hard disk would start filling up with temporary internet files at the following location:
C:\Windows\SysWOW64\config\systemprofile\AppData\Local\Microsoft\Windows\Temporary Internet Files\Content.IE5
After a few days, I saw that the size of this folder was ~30 GB. I have had to manually clear out this folder.
I've used the following command to create the service:
nssm install seleniumhub java -jar C:\selenium-server\selenium-server-standalone-2.48.2.jar -Dwebdriver.chrome.driver=C:\selenium-server\chromedriver.exe -Dwebdriver.ie.driver=C:\selenium-server\IEDriverServer.exe
Has anyone else run into this issue?
It's common to have to clean the temp directory: every time a new browser profile is made, Chrome/IE/FF puts a folder there that stores information about that session.
You should write a simple script to clear this folder after every test suite run. On Windows, a batch script along these lines will work:
for /d %%d in ("%TMP%\*") do rd /s /q "%%d"
del /f /q "%TMP%\*" 2>nul
So, to avoid this problem, running the Selenium Server as a startup process instead of as a service seems to keep the hard disk from filling up.

What is the minimal set of files for a jetty 9.x base directory for a war file?

According to http://www.eclipse.org/jetty/documentation/current/quickstart-running-jetty.html it is possible to manage web applications in base directories in jetty 9.x. The guide explains what can be put inside those and gives an example by pointing to the demo-base directory in the binary distribution. However it would have been useful to point out what actually needs to be in such a jetty base in order to make deployment successful, e.g. so that
cd /path/to/my-base/
java -jar ~/jetty-distribution-9.2.3.v20140905/start.jar jetty.home=~/jetty-distribution-9.2.3.v20140905/ jetty.base=.
succeeds. Putting a minimal valid war file (with only one jsf file) into /path/to/my-base or /path/to/my-base/webapps/ fails with WARNING: Nothing to start, exiting ..., although it would make sense to deploy a minimal application or display a helpful warning what needs to be added.
What needs to be added to be able to deploy an application from a separate base directory?
Jetty can create this for you through flags to start.jar.
There's an example in the docs here: http://www.eclipse.org/jetty/documentation/9.3.0.v20150612/quickstart-running-jetty.html
"The following commands create a new base directory, enable an HTTP connector and the web application deployer, and copy a demo webapp to be deployed."
Simplified:
mkdir /home/me/mybase
cd /home/me/mybase
java -jar $JETTY_HOME/start.jar --add-to-startd=http,deploy
Then copy in your war (if you name it ROOT.war, it will map to /) and start Jetty:
cp my.war webapps/ROOT.war
java -jar $JETTY_HOME/start.jar
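After those two modules are added, the resulting minimal jetty.base is roughly just a start.d directory holding the module ini files plus a webapps directory. A sketch of that layout, recreated by hand (the ini file names follow Jetty's usual conventions, but verify against what --add-to-startd actually wrote for your version):

```shell
# Recreate the minimal base layout by hand to see how little it needs:
mkdir -p mybase/start.d mybase/webapps
touch mybase/start.d/http.ini mybase/start.d/deploy.ini  # normally written by --add-to-startd
# cp my.war mybase/webapps/ROOT.war                      # your application goes here
find mybase -type f
```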
Alternatively, if you have docker installed, you can also get the official setup by copying it, like so:
First, have Docker download and run Jetty, mapping a directory on the host into the container. I was already mapping webapps, so I just continued to use that mapping. This removes the container when done (--rm) so it's clean, and it starts an interactive bash shell, logging you right into an official barebones Jetty container that is ready to deploy wars dropped into the webapps directory (just what we want!).
sudo docker run --rm -it -v /home/myuser/jetty/webapps:/var/lib/jetty/webapps jetty:latest /bin/bash
If you run env in the container, you'll see:
JETTY_BASE=/var/lib/jetty
Just tar this base up, copy the tar to the webapps directory (which is mapped back to the localhost), and exit.
root@f99cc00c9c77:/var/lib# tar -czvf ../jetty-base.tar.gz .
root@f99cc00c9c77:/var/lib# cp ../jetty-base.tar.gz jetty/webapps/
root@f99cc00c9c77:/var/lib# exit
Back on the localhost, you have a tar of the official jetty base! The docker container should have stopped on exit, you can test this with sudo docker ps, which should show an empty list (just headers).
Just to finish this off, back on the host, create a base directory (as myuser, not root, of course):
mkdir ~/jetty/localbase
cp ~/jetty/jetty-base.tar.gz ~/jetty/localbase/
cd ~/jetty/localbase/
tar xvzf jetty-base.tar.gz
Then start it up like before:
java -jar $JETTY_HOME/start.jar
