I don't know which line of investigation to choose.
After some time (about 4 weeks) of continuous running, the web application becomes slow.
Checking:
catalina.out shows no full GCs.
A regular GC takes about 0.2 s and runs roughly once every 10-30 seconds.
Another Tomcat with its own web application is running well on the same server,
so the problem is not in the host.
I am really confused. What should be checked?
You can use VisualVM to track your Tomcat's performance. You have to add the JMX parameters to allow VisualVM to connect to your Tomcat:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.port=8484
-Dcom.sun.management.jmxremote.ssl=false
In VisualVM, on the Threads tab, you will see a Thread Dump button in the upper right corner. It generates a thread dump, a file containing all the threads that are currently live. Generate multiple thread dumps and trace your application's threads; you will see clearly what the slow thread is doing and exactly which method is causing the slowness.
I hope this helps you find the root cause of the slowness.
I am using Tomcat 9.0.0.M22 with jdk1.8.0_131 on Windows Server 2012 R2, with a Spring Boot web application deployed on it. The issue is that every 10 seconds the Commons Daemon Service Runner spikes the CPU to 50%, even though my deployed web application is idle, then drops back to 0%; this behavior repeats every 10 seconds.
My application has no job that runs every 10 seconds, and when I run the web application on Tomcat from Eclipse I don't see the same behavior, so I am guessing this is a Tomcat built-in thread.
Do a series of thread dumps as soon as the CPU load spikes.
You can use jdk/bin/jvisualvm to connect to your Tomcat and repeatedly press the Thread Dump button in the upper right of the Threads tab, or, if you prefer the command line (e.g. via a script), you can use jdk/bin/jcmd <pid-of-your-tomcat> Thread.print >> dumps.txt.
Each dump shows all threads existing at that moment, with a stack trace for each thread showing what is being executed.
This should give you some hints about what is creating that load.
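If you prefer to script the series, here is a minimal sketch, assuming a Unix-like shell and jcmd on the PATH (<pid-of-your-tomcat> as above):

# take five dumps, five seconds apart, while the spike is happening
for i in $(seq 1 5); do
    jcmd <pid-of-your-tomcat> Thread.print >> dumps.txt
    sleep 5
done

Comparing the dumps, any thread sitting in the same stack frame across several of them is a good suspect for the load.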
Without more information this is just guessing, but it could be the garbage collector trying to do its job every ten seconds and failing to evict anything because it is all still needed. You could try increasing the memory for Tomcat (-Xmx).
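A hedged sketch of what that could look like, added to your JVM options (via CATALINA_OPTS on Unix or the service settings on Windows); the 1024m value is only an illustration, and the GC-log flags are the classic HotSpot ones for this JDK generation, useful to verify whether collections really line up with the 10-second spikes:

-Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps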
With that little info it's pretty tough, but here are a couple of points to think about:
As #jorg pointed out, you can take thread dumps, which will give you insights into any blocking threads.
You said it works fine on the local system; that doesn't necessarily mean the code is optimal for the server platform. Double-check configuration such as maxThreads.
Optimize the server JVM by eliminating any excessive garbage collection. Starting the JVM with a larger maximum heap (-Xmx) will decrease the frequency with which garbage collection occurs.
Existing monitoring tools (e.g. jvisualvm) can support your analysis.
I was able to stop this behavior completely by setting reloadable="false" in context.xml.
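For reference, a minimal sketch of where that attribute lives; everything else in your context.xml stays as it is:

<Context reloadable="false">
    <!-- your existing resource definitions, etc. -->
</Context>

With reloadable="true", a Tomcat background thread periodically rescans /WEB-INF/classes and /WEB-INF/lib for changes, which fits the observed fixed interval (the container's background processor runs every 10 seconds by default).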
I work on a very large web project written in Java.
When I click a button or perform other actions, it is hard for me to understand which methods are called in the application code (because I am new to the project and the application is really, really big). So I would like to know whether there is a tool that can capture stack traces of some threads at a given interval (say, every 100 milliseconds).
I know about VisualVM, but it does not allow this; I can get a thread dump only at a single point in time (there is no way to get stack traces continuously).
Can someone suggest a tool or technique that will let me monitor method calls at run time?
Thanks
For such cases I use Java Mission Control. The full feature set works on the Oracle JDK; on OpenJDK not everything works properly. More info, from the website:
Starting with the release of Oracle JDK 7 Update 40 (7u40), Java Mission Control is bundled with the HotSpot JVM.
You need to add the following parameters to your JVM to be able to use it. Note: I normally also add the debug options.
JAVA_DEBUG="-Xdebug -Xrunjdwp:transport=dt_socket,address=4000,server=y,suspend=n"
JAVA_JMC="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.port=3614 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"
Then you'll need to attach remotely to port 3614 and you'll be able to see inside the JVM. There you can profile CPU, check allocations, and detect deadlocks, as well as select a thread and see what it is currently executing, plus some other graphs and valuable information.
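With those same flags in place you can also drive Flight Recorder from the command line; a sketch, assuming an Oracle JDK with commercial features unlocked as above (exact option names can differ slightly between JDK updates):

jcmd <pid> JFR.start name=myrec settings=profile
jcmd <pid> JFR.dump name=myrec filename=myrec.jfr
jcmd <pid> JFR.stop name=myrec

The resulting .jfr file opens directly in Java Mission Control.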
There are multiple ways in which you could check stack traces:
Using jconsole's Threads tab, where you can see which threads are alive and what state they are in.
Using jvisualvm (which comes free with the JDK installation), or any of the profilers such as JProfiler or YourKit, to view stack traces while you run in development mode.
You can get a stack trace, say every minute, by running kill -3 <pid> on Unix or Ctrl+Break on Windows.
You could use the jstack command to get a trace.
You could debug the code in an IDE (Eclipse, NetBeans, IntelliJ, etc.) and trace each method call as you step through.
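If you specifically want the fixed 100 ms interval from the question, here is a minimal sketch, assuming a Unix-like shell whose sleep accepts fractional seconds (GNU coreutils does) and jstack on the PATH:

# sample the whole JVM every ~100 ms; heavy, so run only short bursts
while true; do
    jstack <pid> >> stacks.txt
    sleep 0.1
done

A profiler's sampler does the same thing far more cheaply in-process, which is why the tools above are usually preferable.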
Many tools for Java VM monitoring and application monitoring exist. For instance, see the eG Java Application Monitor:
...the eG Java Monitor gives you a comprehensive view of the activities within a JVM:
It lets you see which threads are running in the JVM and what state they are in (such as runnable, blocked, waiting, timed waiting, deadlocked, or high CPU).
You also have access to a stack trace for each thread showing class, method, and line of code (to troubleshoot problems down to the line-of-code level).
And you can monitor the performance of garbage collection processes, CPU and memory usage, and JVM restarts.
We are trying to access an application on a Tomcat that runs on a different host, but it does not load even though Tomcat is running. It had been running fine for the past 3 months. We restarted Tomcat and now it works fine.
But we have not been able to zero in on what happened.
Any idea how to trace this, or what might have caused it?
The CPU usage was normal and the tomcat memory was 1205640.
The memory settings of Tomcat are 1024-2048 (min-max).
We are using tomcat 7.
Help much appreciated, thanks in advance, cheers!
...also - not sure on Windows - you may be running out of file descriptors. This typically happens when streams are not properly closed in finally blocks.
In addition, check with netstat if you have a lot of sockets remaining open or accumulating in wait state.
Less likely, the application is creating threads and never releasing them.
The application is leaking something (memory, file descriptors, sockets, threads,...) and running over a limit.
There are different ways to track this down. A profiler may help, or, more simply, take JVM dumps at regular intervals and check what is accumulating. The excellent MAT will help you analyze heap dumps.
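A few starting points, sketched assuming a Linux host and a known Tomcat <pid>:

lsof -p <pid> | wc -l                      # count of open file descriptors
netstat -an | grep -c TIME_WAIT            # sockets lingering in wait state
jmap -dump:format=b,file=heap.hprof <pid>  # heap dump, to be opened in MAT

Run the first two at intervals; a steadily climbing number points at the leaked resource.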
Memory leak problems are not uncommon. If your Tomcat instance was running for three months and the contained application suddenly became unresponsive, that may well have been the case. One option (if your resources allow it) is to monitor that Tomcat instance through JMX using jconsole and see how it behaves.
Trying to diagnose some bizarre Tomcat (7.0.21) and/or JVM errors on a 64-bit linux (CentOS) machine.
I'm load testing our server application and tried hitting it with 100K messages. I launched jvisualvm and kept my eye on the heap the whole time. Everything was looking great* (see below) until I got to about 93K processed messages, and then Tomcat just died. I ran ps on Tomcat's PID to confirm it was dead.
Up until this crash:
The load test had been running for about 90 minutes; it should have finished shortly thereafter, since we were at 93K/100K.
CPU was holding strong around 45%
Used heap was around 2GB (plus or minus a bunch after GCs) but heap size grew from 4GB to MAX_HEAP after about 30 minutes
Class loading/unloading was cycling normally
Thread dumps were normal
Nowhere in the server code are any calls to System.exit() - so we can rule that right out (and yes I've double-checked!!!).
I'm not sure if this is Tomcat crashing or the JVM (how do I tell?). And even if I did know, I can't seem to find any indication of what went wrong:
All of the server app's logs just stop without any ERROR messages (even though we have logging universally set to DEBUG and higher)
Tomcat's catalina.out and the respective localhost_access_* files just stop, without any info.
I've heard it is possible to have Tomcat log a core dump when it dies, but I'm not sure how to do that, and online examples aren't helping much.
How would SO go about diagnosing this? What steps should I take to start ruling out all of the possible factors?
Thanks in advance!
If the JVM crashes, you should have a hs_err_pidNNN.log file; you don't have to do anything to enable this. Its location depends on your OS and how you are running Tomcat. On Windows, they can show up on your desktop, unless you are running as a service. Otherwise, they should be in the current working directory of the crashed process.
Your operating system probably provides additional tools for process monitoring; you could describe your environment more, or perhaps ask at serverfault.com.
It's also possible that jvisualvm is actually causing the crash.
I'd try reproducing the problem, and progressively simplify the scenario to help isolate the cause.
Another possibility is that the OS is running out of memory and the OOM Killer is killing your process. In this case, the JVM wouldn't get an opportunity to write a heap dump, or an hs_err_pid file.
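On Linux you can check for this after the fact; a sketch, assuming the usual log locations:

dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/messages   # /var/log/syslog on Debian/Ubuntu

If the OOM killer struck, you'll see a line naming the java process and its PID.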
You can use the option -XX:+HeapDumpOnOutOfMemoryError to have the JVM write a heap dump when it fails with an OutOfMemoryError.
More details here Using HeapDumpOnOutOfMemoryError parameter for heap dump for JBoss.
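You can also control where the dump is written; a sketch (the path is only an example):

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat

Both flags go wherever you set your other JVM options, e.g. in CATALINA_OPTS.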
Sorry I had to remove the green check from #erickson. I finally figured out what was killing Tomcat.
It looks like a profiler plugin was not configured correctly in VisualVM, and attempting to run a profile on the Tomcat process killed it.
Investigating why right now, and will update this answer once I know more.
Short description of my problem: I start up Tomcat with my deployed Wicket application. When I want to shut down Tomcat, I get this error message:
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at java.lang.ref.Reference.<clinit>(Reference.java:145)
I am running the following setup:
Ubuntu Linux: 10.04 (lucid) with a 2.6.18-028stab094.3 kernel
Java Version: "1.6.0_26" Java HotSpot(TM) 64-Bit Server VM
Tomcat Version: 7.0.23
jvm_args: -Xms512m -Xmx512m -XX:MaxPermSize=205m (these are added via CATALINA_OPTS, nothing else)
Wicket 1.5.1
Tomcat is configured with two virtual hosts on subdomains with ModProxy
My application is deployed as ROOT.war in the appbase directory (it makes no difference if I deploy one or both applications)
With no application deployed, shutdown does not result in an OOM, unless I mess around with the JVM args.
The size of the war is about 500k, all libraries are deployed in tomcat/common/lib (directory which I added to common.loader in conf/catalina.properties)
ulimit -u -> unlimited
When I check the Tomcat manager app it says the following about the JVM memory:
Free memory: 470.70 MB Total memory: 490.68 MB Max memory: 490.68 MB
(http connector) Max threads: 200 Current thread count: 6 Current thread busy: 1
'top' and 'free -m' show similar figures:
Mem: 2097152k total, 1326772k used, 770380k free, 0k buffers
20029 myuser 18 0 805m 240m 11m S 0 11.7 0:19.24 java
I tried to start jmap to get a heap dump; it also fails with an OutOfMemoryError. In fact, as long as one or both of my applications are deployed, any other Java process fails with the same OOM error (see top).
The problem occurs while the application is deployed. So something is seriously wrong with it. However the application is actually running smoothly for quite a while. But I have seen OOMs in the application as well, so I don't trust the calm.
My application uses a custom filter class. Could that be it?
For completeness (hopefully), here's the list of libraries in my common/lib:
activation-1.1.jar
antlr-2.7.6.jar
antlr-runtime-3.3.jar
asm-3.1.jar
asm-commons-3.1.jar
asm-tree-3.1.jar
c3p0-0.9.1.1.jar
commons-collections-3.1.jar
commons-email-1.2.jar
dependencies-provided.tgz
dom4j-1.6.1.jar
ejb3-persistence-1.0.2.GA.jar
geronimo-annotation_1.0_spec-1.1.1.jar
geronimo-jaspic_1.0_spec-1.0.jar
geronimo-jta_1.1_spec-1.1.1.jar
hibernate-annotations-3.4.0.GA.jar
hibernate-commons-annotations-3.1.0.GA.jar
hibernate-core-3.3.0.SP1.jar
hibernate-entitymanager-3.4.0.GA.jar
hibernate-search-3.1.0.GA.jar
javassist-3.4.GA.jar
joda-time-1.6.2.jar
jta-1.1.jar
log4j-1.2.16.jar
lombok-0.9.3.jar
lucene-core-2.4.0.jar
mail-1.4.1.jar
mysql-connector-java-5.1.14.jar
persistence-api-1.0.jar
quartz-2.1.1.jar
servlet-api-2.5.jar
slf4j-api-1.6.1.jar
slf4j-log4j12-1.6.1.jar
stringtemplate-4.0.2.jar
wicket-auth-roles-1.5.1.jar
wicket-core-1.5.1.jar
wicket-datetime-1.5.1.jar
wicket-extensions-1.5.1.jar
wicket-request-1.5.1.jar
wicket-util-1.5.1.jar
xml-apis-1.0.b2.jar
I appreciate any hint or even speculation that gives me additional ideas what to try.
Update: I tested some more and found that this behaviour occurs only while one or both of my applications are deployed. It does not occur on an "empty" Tomcat (that earlier observation was a mistake on my part, from messing with the JVM args).
Update2: I am currently experimenting trying to reproduce this behaviour in a virtual box, I want to debug this with a profiler. I am still not convinved that it should be impossible to run my setup on 2GB RAM.
Update 3 (10/01/12): I am trying to run Jenkins instead of my own application. Same behaviour, so it is definitely a server configuration issue. Jenkins jobs fail when Maven is called, so I need not even try the shutdown hack suggested below, because I need a second Java process while running Jenkins. It was suggested to me that, because this is a virtual server, ulimits may be imposed from outside and I would not be able to see them. I think I'll ask a new question regarding this. Thanks all.
Update 4 (02/05/12): see below for the answer that contains the hint. To clarify some more up here: I am now 95% sure that the errors occur because I am reaching my thread limit. However, because this is a virtual server, the method described below would not work to check this value, since it is not visible with ulimit; that was what was confusing me. Only today I found out that this is the "numproc" value that I can see in the Parallels Power Panel I can log into for my virtual server. There were resource alerts for numproc, but I did not see those either until just now. The value has a hard limit of 96, which of course I cannot change. The current value of numproc corresponds to the number of processes I see with "top" after toggling "H" to show threads. I had a very hard time finding this because the numproc value is hidden deep inside the panel. Sadly, 96 is a rather low number if you want to run Tomcat alongside Apache and MySQL. I am also rather unhappy that I cannot even find this value in the small print of my hosting contract, and it is quite relevant to my application, I dare say. So I guess I'll need a server upgrade.
Thanks all for your helpful answers; in the end everyone helped me a bit to find out what the problem was.
The Tomcat shutdown procedure consists of sending a command/word via a TCP port to the running Tomcat VM. This port is configured in server.xml (if I remember correctly; writing on my phone right now). So far so good.
Unfortunately, the shutdown script does this by starting a second VM using the same Java options as for Tomcat. Your system simply does not have enough memory for this.
As a solution, you could write your own stop script using telnet or something similar.
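A minimal sketch of such a script, assuming the default shutdown port 8005 and the default SHUTDOWN command; adjust both to whatever your <Server port="8005" shutdown="SHUTDOWN"> line in server.xml says:

# send the shutdown command without starting a second JVM
echo SHUTDOWN | nc localhost 8005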
I could help with that later if needed.
Hope that helps.
Best regards, Bert
Seems you have too many threads open.
Use this command:
ulimit -u
What is the result? It should be something like:
max user processes (-u) 100
If this is correct, you can edit this file :
/etc/security/limits.conf
and add the following lines:
#<domain> <type> <item> <value>
user soft nproc 10000
user hard nproc 10000
You can probably survive like this for a while. All you need to do is kill the Tomcat process whenever you need to restart it. It is not a nice approach, but the main concern is that your application runs correctly.
It seems to me, though, that in the long run you might need to order a hosting plan with more RAM.
I was having a similar problem with a Tomcat installation just last week. I managed to fix it by giving Tomcat a smaller heap, something like this:
export CATALINA_OPTS="-Xms256m -Xmx512m"
Setting this before starting Tomcat may help. In the meantime you'll have to kill it the old-fashioned way, with a kill -9 ;)
EDIT: you could also take a look here; it appears Tomcat automatically creates a bunch of "spare" threads, but you can limit those, as well as your max thread count, in the config. Hope it helps.
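For instance, a sketch of the relevant attributes on the HTTP connector in conf/server.xml (the values are illustrative, not recommendations):

<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="50"
           minSpareThreads="2" />

Fewer threads means fewer native stacks, which is what counts against the process limit described above.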