I recently started using Docker and deployed my Java application into a Tomcat Docker container. But I ran into a very specific error when NIO memory-mapping a file:
File mark = new File("/location/to/docker/mounted/file");
m_markFile = new RandomAccessFile(mark, "rw");
MappedByteBuffer m_byteBuffer = m_markFile.getChannel().map(MapMode.READ_WRITE, 0, 20);
The last call, map(), fails with:
Caused by: java.io.IOException: Invalid argument
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:906)
at com.dianping.cat.message.internal.MessageIdFactory.initialize(MessageIdFactory.java:127)
at com.dianping.cat.message.internal.DefaultMessageManager.initialize(DefaultMessageManager.java:197)
... 34 more
I don't know what happened. I tested it in my local Mac environment and it works. Within the Tomcat Docker container, if I change the file location to a normal (non-mounted) path, it also works. It seems to happen only with the Docker-mounted file.
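For reference, a minimal probe of the same call (a sketch, using the path and size from the snippet above) makes it easy to confirm that map() itself is what fails on the mounted path, and to fall back to plain channel I/O:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel.MapMode;

public class MarkFileProbe {
    public static void main(String[] args) throws IOException {
        File mark = new File("/location/to/docker/mounted/file");
        try (RandomAccessFile raf = new RandomAccessFile(mark, "rw")) {
            try {
                // Fails with "Invalid argument" (EINVAL) on filesystems that do not
                // fully support mmap, which appears to be the case for this mounted volume.
                MappedByteBuffer mapped = raf.getChannel().map(MapMode.READ_WRITE, 0, 20);
                System.out.println("mmap OK: " + mapped);
            } catch (IOException e) {
                System.out.println("mmap failed (" + e.getMessage() + "), falling back to plain I/O");
                // Fallback: read/write through the channel without memory mapping.
                ByteBuffer buffer = ByteBuffer.allocate(20);
                raf.getChannel().read(buffer, 0);
            }
        }
    }
}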
Other information:
root@4520355ed3ac:/usr/local/tomcat# uname -a
Linux 4520355ed3ac 4.4.27-boot2docker #1 SMP Tue Oct 25 19:51:49 UTC 2016 x86_64 GNU/Linux
A folder under /Users on the Mac is mounted to /data:
root@4520355ed3ac:/usr/local/tomcat# df
Filesystem 1K-blocks Used Available Use% Mounted on
none 18745336 6462240 11292372 37% /
tmpfs 509832 0 509832 0% /dev
tmpfs 509832 0 509832 0% /sys/fs/cgroup
Users 243924992 150744296 93180696 62% /data
/dev/sda1 18745336 6462240 11292372 37% /etc/hosts
shm 65536 0 65536 0% /dev/shm
docker version
huanghaideMacBook-Pro:cat huanghai$ docker --version
Docker version 1.12.3, build 6b644ec
huanghaideMacBook-Pro:cat huanghai$ docker-machine --version
docker-machine version 0.8.2, build e18a919
Been digging on this for a while. I reviewed multiple articles on this issue. This one was the closest:
Tomcat 8 on CentOS 7 does not start as service (but it starts manually ....)
The difference being that I am running Tomcat 9.0.33. Here are the particulars:
java version "1.8.0_121"\
Java(TM) SE Runtime Environment (build 1.8.0_121-b13)\
Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)\
Tomcat 9.0.33
NAME="CentOS Linux"\
VERSION="7 (Core)"\
ID="centos"\
ID_LIKE="rhel fedora"\
VERSION_ID="7"\
PRETTY_NAME="CentOS Linux 7 (Core)"\
ANSI_COLOR="0;31"\
CPE_NAME="cpe:/o:centos:centos:7"\
HOME_URL="https://www.centos.org/"\
BUG_REPORT_URL="https://bugs.centos.org/"\
CENTOS_MANTISBT_PROJECT="CentOS-7"\
CENTOS_MANTISBT_PROJECT_VERSION="7"\
REDHAT_SUPPORT_PRODUCT="centos"\
REDHAT_SUPPORT_PRODUCT_VERSION="7"\
As a side note, everything was starting normally with no issues until recently. As far as I know there haven't been any major changes to the environment. But when I ran the "systemctl restart" command recently, the startup began to fail. There are 5 instances of Tomcat 9.0.33 running at different ports and paths, and those have not changed. I have not restarted two of the instances (afraid they won't start); the other three flat out won't start. Details below:
Systemd unit file for tomcat:

[Unit]
Description=Apache Tomcat Web Application Container in Liferay 7.32 TEST for UAT
After=syslog.target network.target

[Service]
Type=forking

Environment=JAVA_HOME=/opt/jdk1.8.0_121/jre
Environment=CATALINA_PID=/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/temp/tomcat.pid
Environment=CATALINA_HOME=/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33
Environment=CATALINA_BASE=/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33
Environment='CATALINA_OPTS=-Xms1024m -Xmx2048m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=20 -XX:ParallelGCThreads=8 -server -Xdebug -Xrunjdwp:transport=dt_socket,address=5000,server=y,suspend=n'
Environment='JAVA_OPTS=-Djava.awt.headless=true -Djava.security.egd=file:/dev/./urandom -Duser.timezone=GMT -Dfile.encoding=UTF-8'

ExecStart=/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/bin/startup.sh
ExecStop=/bin/kill -15 $MAINPID

User=tomcat
Group=tomcat
UMask=0007

[Install]
WantedBy=multi-user.target
Results when running systemctl start liferayuat
● liferayuat.service - Apache Tomcat Web Application Container in Liferay 7.32 TEST for UAT
Loaded: loaded (/etc/systemd/system/liferayuat.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2020-12-05 08:44:08 CST; 3s ago
Process: 10891 ExecStop=/bin/kill -15 $MAINPID (code=exited, status=1/FAILURE)
Process: 10851 ExecStart=/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/bin/startup.sh (code=exited, status=0/SUCCESS)
Main PID: 10861 (code=exited, status=0/SUCCESS)
Dec 05 08:44:08 systemd[1]: Starting Apache Tomcat Web Application Container in Liferay 7.32 TEST for UAT...
Dec 05 08:44:08 startup.sh[10851]: Existing PID file found during start.
Dec 05 08:44:08 startup.sh[10851]: Removing/clearing stale PID file.
Dec 05 08:44:08 startup.sh[10851]: Tomcat started.
Dec 05 08:44:08 systemd[1]: Started Apache Tomcat Web Application Container in Liferay 7.32 TEST for UAT.
Dec 05 08:44:08 systemd[1]: liferayuat.service: control process exited, code=exited status=1
Dec 05 08:44:08 systemd[1]: Unit liferayuat.service entered failed state.
Dec 05 08:44:08 systemd[1]: liferayuat.service failed.
Then the ONLY thing in catalina.out:
Listening for transport dt_socket at address: 5000
java.lang.ClassNotFoundException: org.apache.catalina.startup.Catalina
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.catalina.startup.Bootstrap.init(Bootstrap.java:261)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:443)
So, when I start the instance with systemctl start it will fail. But if I run this command (as root...) then it will start:
/opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/bin/startup.sh
If I run that full command AS tomcat, it fails to start with the same error. So it appears that the issue is permissions. The tomcat user and group own all files and folders. But somehow the tomcat user either doesn't have permission, or the path gets jacked up so that the class files can't be found. I followed the suggestions in the article I referenced above, but the changes had no impact.
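One way to see what the tomcat user can actually traverse is to inspect the permissions along the class path as that user (a hedged example; the paths are the ones from the unit file above, and catalina.jar is where the missing Catalina class lives):

sudo -u tomcat ls -l /opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/lib
namei -l /opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/lib/catalina.jar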
I tripped across one article on SELinux that seemed to point to an issue there. These are the SELinux settings:
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
The workaround to keep the instances running is just to start them manually, but what is causing systemctl start NOT to work? I suspect permissions, but I sure as heck can't see why, since everything is owned by tomcat:tomcat.
So, this is self-inflicted, as most "mysteries" are. I still cannot account for some of the differences I see when looking into SELinux contexts between the instances, but the REAL cause was subtle (to me): the {tomcat root}/lib and {tomcat root}/lib/ext directories had no execute permission. That may have been due to a jar that was added recently, whose owner and permissions then had to be updated. In any case, the original issue led to many trial and error attempts to fix it, which complicated matters further.
I discovered the solution by doing a folder by folder, file by file comparison between working and non-working instances. Apparently the new jar and the owner/permission changes were applied to all but the production version.
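For anyone who hits the same thing, restoring the directory execute (search) bit is enough; a hedged sketch, using the Tomcat root from the unit file above:

chmod -R u+rwX,g+rX /opt/liferay/uatapi/liferay-ce-portal-7.3.2-ga3/tomcat-9.0.33/lib

The capital X adds the execute bit only to directories (and to files that are already executable), which restores the "search" permission the tomcat user needs to reach the jars under lib and lib/ext.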
Thanks for the suggestions.
I have a server running Tomcat with some webapps deployed in it.
If I run free -h, I see about 50% of memory used.
With top, I see that this process:
21603 azureus+ 20 0 7816556 1.104g 18200 S 0.0 45.1 0:36.41 /usr/lib/jvm/java-8-oracle/bin/java -Djava.util.logging.config.file=/home/azureuser/tomcat/apache-tomcat-9.0.8/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /home/azureuser/tomcat/apache-tomcat-9.0.8/bin/bootstrap.jar:/home/azureuser/tomcat/apache-tomcat-9.0.8/bin/tomcat-juli.jar -Dcatalina.base=/home/azureuser/tomcat/apache-tomcat-9.0.8 -Dcatalina.home=/home/azureuser/tomcat/apache-tomcat-9.0.8 -Djava.io.tmpdir=/home/azureuser/tomcat/apache-tomcat-9.0.8/temp org.apache.catalina.startup.Bootstrap start
uses about 45% of memory.
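For reference, how much of that resident memory is live Java heap can be checked with the JDK's own tools (a hedged example; 21603 is the PID from the top output above):

jstat -gcutil 21603 1000 5
jmap -heap 21603

If the heap is mostly full, the webapps and the heap settings are the place to look; if the heap is small while RSS stays high, the difference is typically metaspace, thread stacks, or native buffers rather than Tomcat itself.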
I want to understand whether this is a problem with my code in some of the projects or a problem with Tomcat itself. Any tips?
Thanks all.
I am deploying a Java SE app to Elastic Beanstalk and want to ensure that the option_settings specified in my .ebextensions/options.config file are applied as described in the docs. I want to adjust the ELB and ASG settings:
option_settings:
  - namespace: aws:elb:policies
    option_name: ConnectionSettingIdleTimeout
    value: 120
  - namespace: aws:autoscaling:launchconfiguration
    option_name: InstanceType
    value: t2.nano
I include this file in the artifact I am deploying to Beanstalk. I am deploying an app.zip file with the following structure:
$ unzip -l app.zip
...
72 12-17-15 18:17 Procfile
83199508 12-17-15 18:17 app.jar
0 12-17-15 18:17 .ebextensions/
209 12-17-15 18:17 .ebextensions/options.config
I am using the eb CLI to create/terminate/update my Elastic Beanstalk app:
$ eb terminate
$ # .. create app.zip file
$ eb create
$ eb deploy
However, when I create the Elastic Beanstalk environment and/or update it, neither the ELB nor the ASG configuration gets applied. Is the file in the wrong place? Do I need to change the way I am deploying this to apply the config properly?
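As a side note on checking this, the values that actually ended up on the environment can be listed with the AWS CLI (a hedged example; the application and environment names are placeholders):

aws elasticbeanstalk describe-configuration-settings --application-name my-app --environment-name my-env

The OptionSettings in the output show whether the aws:elb:policies and aws:autoscaling:launchconfiguration values from the config file took effect.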
I was following previous posts but am still not able to resolve the issue. I am trying to install ZooKeeper and start it in order to run Summingbird, which provides bolts/spouts to Storm for online and batch processing. I installed ZooKeeper version 3.4.6 first and was getting a class not found exception. After looking at the post
ClassNotFoundException for Zookeeper while building Storm
I downgraded the version to 3.3.6 and now I am not even able to start the zookeeper server. Any help will be really appreciated.
root#cp-1:/users/username/zookeeper-3.3.6/bin# ./zkServer.sh start
JMX enabled by default
Using config: /users/username/zookeeper-3.3.6/bin/../conf/zoo.cfg
Starting zookeeper ... ./zkServer.sh: 93: [: /tmp/zookeeper/: unexpected operator
./zkServer.sh: 103: ./zkServer.sh: cannot create /tmp/zookeeper/
The number of snapshots to retain in dataDir/zookeeper_server.pid: Directory nonexistent
FAILED TO WRITE PID
This is what my zoo.cfg file looks like:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper/
dataLogDir=/tmp/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.11.10.3:2888:3888
server.2=10.11.10.4:2888:3888
These are the directory permissions:
drwxr-xr-x 2 username oppts-PG0 4096 Nov 25 14:35 zookeeper
drwxr-xr-x 3 root root 4096 Nov 25 14:46 logs
drwxr-xr-x 2 root root 4096 Nov 25 14:46 logs/zookeeper
As stated in the comments in zoo.cfg, you had better not set dataDir to /tmp/zookeeper:
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
Try setting dataDir (and dataLogDir) to another directory that you have created and that the user running ZooKeeper can write to, then restart zkServer.sh. A sketch is below.
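For example, a minimal sketch assuming /var/lib/zookeeper and /var/log/zookeeper as the new locations (any writable paths work; the owner is the one shown in the listing above):

mkdir -p /var/lib/zookeeper /var/log/zookeeper
chown -R username:oppts-PG0 /var/lib/zookeeper /var/log/zookeeper

Then in conf/zoo.cfg:

dataDir=/var/lib/zookeeper
dataLogDir=/var/log/zookeeper

and restart with ./zkServer.sh start.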
I have two dockerized applications and an additional volume container:
dbdata volume container has one volume: /home/git/data
dockerized GitLab - run with --volumes-from dbdata to mount the volume from the volume container,
my own application https://github.com/tunguski/gitlab-java-event-listener - run with --volumes-from dbdata to mount the volume from the volume container (Dockerfile).
My application tries to clone a git repository from /home/git/data/repositories. This directory is owned by the git user from the GitLab container. But after adding
RUN groupadd --gid 1000 git && usermod -a -G git jetty
to the Dockerfile, the jetty user can see the files in it. I tested this by executing su - jetty and ls -la /home/git/data/repositories.
However, JGit reports that /home/git/data/repositories/<user>/<repo.git> does not exist. I modified the app to log which parent directories actually exist, and the deepest one that does is /home/git/data/repositories. My Java application does not see any subdirectories below it.
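The logging I added is roughly the following (a simplified sketch, not the exact code; the path is the one above):

import java.io.File;

public class PathProbe {
    public static void main(String[] args) {
        File dir = new File("/home/git/data/repositories");
        System.out.printf("%s exists=%b readable=%b executable=%b%n",
                dir, dir.exists(), dir.canRead(), dir.canExecute());

        // listFiles() returns null when the directory cannot be read
        // (or is not a directory at all) - exactly the symptom here.
        File[] children = dir.listFiles();
        if (children == null) {
            System.out.println("listFiles() returned null - no read access from the JVM process");
        } else {
            for (File child : children) {
                System.out.printf("  %s readable=%b executable=%b%n",
                        child, child.canRead(), child.canExecute());
            }
        }
    }
}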
But when I connect to my app's container using:
docker exec -it my-app bash
change user to jetty:
su - jetty
then listing directory:
ls -la /home/git/data/repositories/<user>/<repo.git>
shows that the repository exists!
To confirm that the server runs as this particular user:
$ ps auxwf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 72 0.0 0.0 21920 3556 ? Ss 13:54 0:00 bash
root 77 0.0 0.0 46432 2920 ? S 13:54 0:00 \_ su - jetty
jetty 78 0.0 0.0 4328 700 ? S 13:54 0:00 \_ -su
jetty 81 0.0 0.0 17488 2072 ? R+ 13:54 0:00 \_ ps auxwf
jetty 1 0.5 5.7 5627056 464960 ? Ssl 12:56 0:18 /usr/bin/java -Djetty.logging.dir=/usr/local/jetty/logs -Djetty.home=/usr/local/jetty -Djetty.base=/var/lib/jetty -Djava.io.tmpdir=/tmp/jett[...]
I really don't know what the problem is here.
Do you know what may be the cause of this?