OutOfMemoryError while deploying a war from Jenkins to Tomcat - java

I deploy my application from Jenkins using the following shell commands:
rm -rf E:/FooTomcat/apache-tomcat-7.0.55/webapps/foo-web;
rm -rf E:/FooTomcat/apache-tomcat-7.0.55/webapps/foo-web.war;
sleep 2;
cp foo-web/target/foo-web-0.0.1-SNAPSHOT.war E:/FooTomcat/apache-tomcat-7.0.55/webapps/foo-web.war
Often, when the periodic build starts, Tomcat runs out of memory with the following error:
Feb 13, 2015 3:59:58 PM org.apache.catalina.startup.HostConfig deployWAR
INFO: Deploying web application archive E:\FooTomcat\apache-tomcat-7.0.55\webapps\foo-web.war
Feb 13, 2015 4:00:06 PM org.apache.catalina.startup.HostConfig deployWARs
SEVERE: Error waiting for multi-thread deployment of WAR files to complete
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: PermGen space
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:818)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:488)
at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1658)
I am on a Windows environment and execute the shell commands through Cygwin.
Is something wrong with the shell commands, or is this due to a memory leak? I introduced the sleep to give the application some time to undeploy. I have never tried increasing the sleep duration.
Thanks.

As the exception indicates, your problem comes from filling up the PermGen space. It is the part of memory where Java stores metadata about your classes (roughly, the code of your loaded classes). When you deploy a new web app you add new classes and increase the load on PermGen. What is worse, when you re-deploy the same app, in 90% of cases the previous versions of the classes stay in memory, so you do not release the 'old' PermGen memory but just add to it (the 90% estimate comes from the use of database drivers, frameworks that use ThreadLocal, scheduler threads, and so on; in reality it happens almost all the time).
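If you want to watch PermGen filling up across redeploys, you can point jstat at the running Tomcat process (a minimal sketch, assuming a JDK is installed on the server and <tomcat_pid> is the Tomcat process id; on a pre-Java-8 JVM the P column of -gcutil reports PermGen utilization as a percentage):
# Sample GC utilization every 5 seconds and watch the P (PermGen)
# column climb toward 100% after each redeploy
jstat -gcutil <tomcat_pid> 5000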
The default PermGen is too small (64 MB or something similarly ridiculous). Start Tomcat with higher values; you do that by passing JVM options to Tomcat, for example:
JAVA_OPTS="-XX:MaxPermSize=512m -Xmx4096m"
(the first option sets the PermGen size to 512 MB, the second sets the heap size to 4 GB, which is fine for a modern system).
You can set this variable before you start Tomcat, or (as I prefer) modify your bin/catalina script so the options are always set there; that way you will not 'forget' them the next time you start Tomcat.
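For example, a sketch of that approach, assuming a standard Tomcat 7 layout (catalina.sh automatically sources bin/setenv.sh if the file exists, so the options survive restarts):
# $CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh on every start
JAVA_OPTS="$JAVA_OPTS -XX:MaxPermSize=512m -Xmx4096m"
export JAVA_OPTS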
From Tomcat 7 on, when you undeploy an app, the Tomcat logs show warnings about apps that stay in memory (and fill up your PermGen). You may want to check them and try to fix the underlying problems (updating frameworks, shutting down your thread pools properly, and so on).

Related

How to fix heap space when increasing node size on a spot instance?

We have been using EC2 Windows spot instances as Jenkins slaves; however, when increasing the size from M5d2XLarge to M5d4XLarge (in an attempt to overcome memory issues in some builds), the nodes fail to be created.
I attempted to set the Java options Xms and Xmx as part of the init script; however, the error is unchanged. I've read that WinRM may be the issue, but I have a hard time connecting that with the node size increase.
These are the expected log results (from an M5d2XLarge):
WinRM service responded. Waiting for WinRM service to stabilize on EC2 (Dev-US-East-1) - Windows Server 2019 Docker (i-xxxxxxxxxx)
WinRM should now be ok on EC2 (Dev-US-East-1) - Windows Server 2019 Docker (i-xxxxxxxxxx)
Connected with WinRM.
Creating tmp directory if it does not exist
Executing init script
C:\Users\Administrator>if not exist C:\Jenkins mkdir C:\Jenkins
init script ran successfully
remoting.jar sent remotely. Bootstrapping it
Launching via WinRM:java -jar C:\Windows\Temp\remoting.jar -workDir C:\Jenkins
Both error and output logs will be printed to C:\Jenkins\remoting
However, right after "Launching via WinRM", it fails due to a garbage collector issue (from M5d4XLarge):
init script ran successfully
remoting.jar sent remotely. Bootstrapping it
Launching via WinRM:java -jar C:\Windows\Temp\remoting.jar -workDir C:\Jenkins
Error occurred during initialization of VM
Unable to allocate 512384KB bitmaps for parallel garbage collection for the requested 16396288KB heap.
Ouch:
java.io.EOFException: unexpected stream termination
Try updating the script that runs remoting.jar. By default the JVM sizes its maximum heap as a fraction of physical RAM (typically a quarter), so on the larger instance the uncapped JVM asks for a ~16 GB heap and then cannot allocate the GC bitmaps for it. Cap the heap explicitly by modifying the line that launches the jar to:
java -Xmx1024m -jar C:\Windows\Temp\remoting.jar -workDir C:\Jenkins
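If you want to confirm what maximum heap the JVM would pick by default on a given node, you can ask it directly (a quick check, assuming java is on the PATH; on Windows, pipe to findstr instead of grep):
# Print the ergonomically chosen maximum heap size, in bytes
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize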

HDFS Datanode crashes with OutOfMemoryError

I'm having repeated crashes of the HDFS DataNodes in my Cloudera cluster due to an OutOfMemoryError:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to /tmp/hdfs_hdfs-DATANODE-e26e098f77ad7085a5dbf0d369107220_pid18551.hprof ...
Heap dump file created [2487730300 bytes in 16.574 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="/usr/lib64/cmf/service/common/killparent.sh"
# Executing /bin/sh -c "/usr/lib64/cmf/service/common/killparent.sh"...
18551 TS 19 ? 00:25:37 java
Wed Aug 7 11:44:54 UTC 2019
JAVA_HOME=/usr/lib/jvm/java-openjdk
using /usr/lib/jvm/java-openjdk as JAVA_HOME
using 5 as CDH_VERSION
using /run/cloudera-scm-agent/process/3087-hdfs-DATANODE as CONF_DIR
using as SECURE_USER
using as SECURE_GROUP
CONF_DIR=/run/cloudera-scm-agent/process/3087-hdfs-DATANODE
CMF_CONF_DIR=/etc/cloudera-scm-agent
4194304
When analyzing the heap dump, the biggest apparent suspects are millions of instances of ScanInfo, seemingly queued up in the ExecutorService of the class org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.
When I inspect the content of each ScanInfo runnable object, I don't see anything weird.
Apart from this and a somewhat high block count in HDFS, I don't get any other information, other than the different DataNodes crashing randomly in my cluster.
Any idea why these objects keep queueing up in the DirectoryScanner thread pool?
You can try the command below:
$ hadoop dfsadmin -finalizeUpgrade
The -finalizeUpgrade command removes the previous version of the NameNode’s and DataNodes’ storage directories.
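As a separate diagnostic, a live heap histogram can confirm whether ScanInfo instances really keep accumulating between crashes (a sketch, assuming a JDK with jmap on the DataNode host and <datanode_pid> is the DataNode process id):
# Top classes by live instance count; look for
# org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ScanInfo near the top
jmap -histo:live <datanode_pid> | head -n 25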

Extracting information from a java core dump with jmap (1.5)

Long story short: some coworkers are running a pretty old setup (OC4J on JDK 1.5.0_06, x86_64) with an application which happens to be mission critical. They recently tried to deploy a new version of the application, but as soon as they do, the Java process(es) dump core and die.
The problem is that the core dumps seem to be fine, and gdb can open them, but jmap and other tools refuse to process them:
# /usr/java/jdk1.5.0_06/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Error attaching to core file: Can't attach to the core file
And newer versions throw an exception:
# jdk1.6.0_45/bin/jmap /usr/java/jdk1.5.0_06/bin/java core
Attaching to core core from executable /usr/java/jdk1.5.0_06/bin/java, please wait...
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at sun.tools.jmap.JMap.runTool(JMap.java:179)
at sun.tools.jmap.JMap.main(JMap.java:110)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 20.45-b01. Target VM is 1.5.0_06-b05
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:224)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:287)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:357)
at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:594)
at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494)
at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:348)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:169)
at sun.jvm.hotspot.tools.PMap.main(PMap.java:67)
... 6 more
gdb offers little information without symbols:
Reading symbols from /usr/java/jdk1.5.0_06/bin/java...(no debugging symbols found)...done.
[New Thread 9841]
[New Thread 31442]
[New Thread 31441]
...
Core was generated by `/usr/java/jdk1.5.0_06/bin/java -server -XX:+UseConcMarkSweepGC -XX:MaxHeapFreeR'.
Program terminated with signal 6, Aborted.
#0 0x0000003bbf030285 in ?? ()
(gdb) bt
#0 0x0000003bbf030285 in ?? ()
#1 0x0000003bbf031d30 in ?? ()
#2 0x0000000000000000 in ?? ()
The only valuable information I've gathered from the core is that most threads are blocked (I'm far from being a gdb guru):
35 Thread 10093 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
34 Thread 10097 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
33 Thread 10099 0x0000003bbfc0b1c0 in pthread_cond_timedwait@@GLIBC_2.3.2 ()
from /lib64/libpthread.so.0
Besides, I don't know if it's really relevant. The app is almost always heavily loaded, and my bet is that there was already some lock contention, but since it's another team's app my knowledge about it is pretty shallow.
I guess this is a long shot, but is there anything we can do to get a Java thread dump or something like that? Did Sun use to offer debuginfo packages for the JDK, as is available now with OpenJDK?
Thanks in advance.
UPDATE: The other team has resolved the issue without getting info from the core dump, just by trial and error after successfully replicating the problem on a test system. I'm still intrigued by the question, though: how do you debug an ancient Java core dump that jmap can't process? It might be valuable info for the future, although it seems there is no solution to that problem. Probably the JVM memory got corrupted, and that's why jmap can't process it.
You can add the following JVM option when starting your application; it will run any command you specify if a fatal JVM error occurs:
-XX:OnError="<cmd args>"
For instance, you could run a command (or a script) that will perform certain actions like get a heap or thread dump.
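For example, a sketch that captures a core image of the dying JVM for post-mortem analysis (%p is expanded by HotSpot to the pid of the crashing process; gcore ships with gdb, and yourapp.jar is just a placeholder):
# Write a core file of the JVM when a fatal error occurs
java -XX:OnError="gcore %p" -jar yourapp.jar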
jmap and the other JVM utilities are extremely version sensitive. From your error it is clear that the jmap you are running does not match the JVM that produced the core.
Java VisualVM can load core dumps directly, but you must use the same JVM that created the core file.
Resource Link:
https://stackoverflow.com/a/9981498/2293534
Suggestion#1:
kjkoster has given a solution in this tutorial:
You need to use the jmap that comes with the JVM. From your error message I gather that you are using a different version of jmap than of the JVM.
Please check what JVMs are installed on your machine and ensure that when you run jmap, you use the right version.
To solve such issues I never rely on PATH. Instead I set JAVA_HOME to be the one that the JVM uses and then invoke both the JVM and jmap like so:
Code:
$ JAVA_HOME=/usr/local/jdk1.6.0
$ export JAVA_HOME
$ ${JAVA_HOME}/bin/java ...
...
$ ${JAVA_HOME}/bin/jmap ...
Hope this helps.
Suggestion#2:
A full step-by-step solution is given by Chamilad. Hopefully it will clarify the root cause and the solution procedure.
Almost every Java developer knows about jmap and jstack tools that come with the JDK. These provide functionality to extract heap and thread information of a running JVM instance. Easy.
What if there’s a running JVM that has produced a deadlock and you want to take a thread dump while the process is running? You go in and run the following.
jstack pid >> thread_dump.txt
Turns out the system doesn’t know what jstack is. You don’t panic, but you get a tiny sensation at the back of your head saying you’re not leaving early this Friday.
What has happened is the running JVM is based on a JRE and not a JDK. The JRE is a minimal runtime that doesn’t pack the monitoring and analysis tools that the JDK packs.
So what are our options here?
Stop the process. Download the JDK, start the process again on top of the JDK, and hope the deadlock happens again. Nope.
Start JVisualVM on your laptop and hope the process has JMX enabled. Nope.
tools.jar TO THE RESCUE!
Functionality such as jstack is implemented in the tools.jar file, which is packed inside the <JDK_HOME>/lib folder. We can use it to invoke the JStack class and get a thread dump of the running process.
So we march on to download and extract the JDK, and then to run the following.
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
..and come across the following error.
Exception in thread "main" java.lang.UnsatisfiedLinkError: no attach in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at sun.tools.attach.LinuxVirtualMachine.<clinit>(LinuxVirtualMachine.java:342)
at sun.tools.attach.LinuxAttachProvider.attachVirtualMachine(LinuxAttachProvider.java:63)
at com.sun.tools.attach.VirtualMachine.attach(VirtualMachine.java:208)
at sun.tools.jstack.JStack.runThreadDump(JStack.java:163)
at sun.tools.jstack.JStack.main(JStack.java:116)
Darn it! Spoiled again!
How do we solve this? The above error occurs when the process can't find the libattach.so file, which is needed by the Dynamic Attach mechanism that JStack uses. Setting the following environment variable will help the JVM find libattach.so:
export LD_LIBRARY_PATH=<JDK_HOME>/jre/lib/amd64/
Now let’s run JStack again, this time with results!
java -classpath <JDK_HOME>/lib/tools.jar sun.tools.jstack.JStack <pid> >> thread_dump.txt
Now that we have the thread dump, we move on to the heap dump. The tool we normally use is jmap, but that too is not available in the JRE. So what? We can use the binary in the JDK's bin directory, right? Right?
[root@snowflake1 latest]# <JDK_HOME>/bin/jmap -heap <pid>
Attaching to process ID <pid>, please wait…
Error attaching to process: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
sun.jvm.hotspot.debugger.DebuggerException: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:435)
at sun.jvm.hotspot.HotSpotAgent.go(HotSpotAgent.java:305)
at sun.jvm.hotspot.HotSpotAgent.attach(HotSpotAgent.java:140)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:185)
at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.tools.jmap.JMap.runTool(JMap.java:201)
at sun.tools.jmap.JMap.main(JMap.java:130)
Caused by: sun.jvm.hotspot.runtime.VMVersionMismatchException: Supported versions are 25.102-b14. Target VM is 25.91-b14
at sun.jvm.hotspot.runtime.VM.checkVMVersion(VM.java:227)
at sun.jvm.hotspot.runtime.VM.<init>(VM.java:294)
at sun.jvm.hotspot.runtime.VM.initialize(VM.java:370)
at sun.jvm.hotspot.HotSpotAgent.setupVM(HotSpotAgent.java:431)
… 11 more
Nope! Unless you match the JDK version with the exact version the JRE is at, you get the above issue (which is pretty self-explanatory). So we download the JDK matching the JRE our process is running on, and run jmap again.
<JDK_HOME>/bin/jmap -dump:file=heap_dump.hprof <pid>
Resource Link:
Extracting memory and thread dumps from a running JRE based JVM
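Once the dump exists, it can be inspected without any further attach magic; for instance jhat (bundled with JDK 6 through 8) serves it as a browsable web page (a usage sketch):
# Analyze the dump and browse it at http://localhost:7000
jhat heap_dump.hprof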
jmap is not going to help you debug a core dump. The JVM dumps core either when it has a bug or when you have JNI code with a problem. Organizations with mission-critical applications should, sadly, treat upgrading from unsupported versions of the JVM as mission-critical too, or be prepared to pay Oracle a fortune for help.
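That said, if the Serviceability Agent tools keep refusing the core, the native side is still reachable; a batch-mode gdb run can at least dump every thread's native stack for offline inspection (a sketch using the paths from the question):
# Dump native backtraces of all threads from the core, non-interactively
gdb -batch -ex "thread apply all bt" /usr/java/jdk1.5.0_06/bin/java core > native_stacks.txt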

Apache Tomcat Exception in Console on Startup

I'm using Tomcat to run a web application. Without any projects added to my Tomcat v6.0 server at localhost, I start the server and an exception is thrown. It says there isn't a mapping for class sun.awt.AppContext. I'm using jre6 as my runtime. I have my JAVA_HOME environment variable set to the jre6 folder, and I have Eclipse set to the same one. Do you know why I am getting this exception and how to resolve it?
Here is my stack trace:
Nov 1, 2011 5:21:36 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jre6\bin;.;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files/Java/jre6/bin/client;C:/Program Files/Java/jre6/bin;C:\oracle\Ora11g\BIN\;C:\Program Files\Serena\Dimensions 2009 R1\CM\prog;C:\Program Files\Serena\Dimensions 2009 R1\CM\prog\Microsoft.VC80.MFC;C:\Program Files\Serena\Dimensions 2009 R1\CM\prog\Microsoft.VC80.CRT;C:\Program Files\Serena\Dimensions 2009 R1\CM\prog\Microsoft.VC80.ATL;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Common Files\OTG;C:\Program Files\Windows Imaging\;C:\Program Files\IBM\Personal Communications\;C:\Program Files\IBM\Trace Facility\;C:\Program Files\HP\QuickTest Professional\bin;c:\PROGRA~1\IBM\SQLLIB\BIN;c:\PROGRA~1\IBM\SQLLIB\FUNCTION;c:\PROGRA~1\IBM\SQLLIB\SAMPLES\REPL;C:\Program Files\Eclipse 3.7\apache-maven-3.0.3\bin; C:\Program Files\Java\jre6\bin;C:\Program Files\HP\QuickTest Professional\bin
Letting agent QTJA do the transformation
Letting agent QTOR do the transformation
java.util.NoSuchElementException: No mapping for class sun.awt.AppContext
at com.mercury.bcel.TransformerXmlFile$MappingLocator.setMapping(TransformerXmlFile.java:96)
at com.mercury.bcel.TransformerFactory.createTransformer(TransformerFactory.java:56)
at com.mercury.bcel.TransformerMainImpl.transform(TransformerMainImpl.java:33)
at com.mercury.bcel.TransformerMain.transform(TransformerMain.java:35)
at com.mercury.javashared.transform.TransformersChain.transform(TransformersChain.java:32)
at com.mercury.javashared.transform.CommunicationThread.processTransformRequest(CommunicationThread.java:61)
at com.mercury.javashared.transform.CommunicationThread.run(CommunicationThread.java:38)
Thank you for your thoughts!
I'm pretty sure your problem is HP QuickTest.
HP QuickTest sets up JVM options which modify the Java boot classpath. Nasty.
I'm not sure exactly how it works, but I'm guessing that it sets the JAVA_OPTS environment variable (which is picked up by Tomcat's startup scripts).
Three options (the first is definite; the second and third are based on my guess above):
Try uninstalling HP QuickTest
Open up bin\catalina.bat inside your Tomcat installation, and try resetting JAVA_OPTS at the start of the script.
Something like this:
echo "Current JAVA_OPTS (resetting to ''):"
echo %JAVA_OPTS%
set JAVA_OPTS=""
3: Or try setting the JAVA_OPTS variable (to an empty string) in your Eclipse server startup dialog.
I also faced the same issue, but my Apache Tomcat is loaded through the Cargo container wrapper.
So I removed these 2 environment variables to fix the issue:
JAVA_TOOL_OPTIONS -agentlib:jvmhook
_classload_hook jvmhook

Tomcat 6 freezes at startup

When I start Tomcat 6 it freezes at a certain point of the startup and stays there forever (I've waited 3 hours and nothing happened, not even an out-of-memory error). I don't have any clue what could cause behavior like that.
I'm running Tomcat with Jira and Confluence, and the problem seems to occur when Tomcat tries to load Confluence:
******************************************************************************************************
JIRA 3.13.3 build: 344 (Enterprise Edition) started. You can now access JIRA through your web browser.
******************************************************************************************************
2009-06-02 19:38:21,272 JiraQuartzScheduler_Worker-1 INFO [jira.action.admin.DataExport] Export took 387ms
2009-06-02 19:38:21,291 JiraQuartzScheduler_Worker-1 INFO [jira.action.admin.DataExport] Wrote 392 entities to export
2009-06-02 19:38:21,606 INFO [main] [com.atlassian.confluence.lifecycle] contextInitialized Starting Confluence 2.10.3 (build #1519)
2009-06-02 19:38:21,711 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from class path resource [bootstrapContext.xml]
2009-06-02 19:38:22,236 INFO [main] [beans.factory.xml.XmlBeanDefinitionReader] loadBeanDefinitions Loading XML bean definitions from class path resource [setupContext.xml]
After this last line, nothing more happens.
I thought it could be a problem with PermGen or something like that, so to avoid PermGen limitations I configured catalina.sh with:
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"
JAVA_OPTS="-Djava.awt.headless=true -Dfile.encoding=UTF-8 -server -Xms1536m -Xmx1536m -XX:PermSize=256m -XX:MaxPermSize=640m -XX:+DisableExplicitGC"
I increased the JVM's memory a lot to see if it would help, but it didn't.
Tomcat version: 6.0.18
Jira version: 3.13.3
Confluence Version: 2.10.3
So, has anyone had this problem before?
Could it be a memory (RAM) problem?
A problem with Spring and Tomcat 6?
Or some other kind of problem?
Do you have any errors in your log?
Have you checked whether Confluence is perhaps waiting for the database or the network?
Get a thread dump of the application and check for threads that are BLOCKED, WAITING, or TIMED_WAITING; see the sketch below for how to take one.
Also beware of threads that are RUNNABLE but doing network I/O, e.g. InputStream.read().
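To take the thread dump, either of the standard routes works (a sketch; <tomcat_pid> is the Tomcat process id, and the jstack route assumes a JDK rather than a bare JRE):
# Attach with jstack and save the dump...
jstack <tomcat_pid> > thread_dump.txt
# ...or send SIGQUIT, which prints the dump to catalina.out
kill -3 <tomcat_pid>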
I checked my database; it was not working at all, but that was not what was making my Tomcat freeze.
I had a lack-of-RAM issue. At the point where Tomcat got stuck, there was a memory peak from loading a lot of stuff from Confluence.
I am using a virtual machine (VMware) to run my Confluence, Jira and SVN, inside a server with 3 other virtual machines.
To solve the problem I had to increase the memory (RAM) my virtual machine could use, from 2 GB to 4 GB.
Please check whether $CATALINA_BASE/common/lib/javaee.jar exists.
