Reload Tomcat webapp automatically? - java

Is it possible to configure web.xml to reload a specific Tomcat webapp at a particular time automatically? If not, is it possible to do this programmatically?

Programmatically - one option is to write an Ant script that reloads the webapp you want, based on the example given in the Tomcat docs.
You'll end up with a command like
ant -Dpassword=secret reload
which you can put into a crontab on your server (if Unix/Linux) or into Task Scheduler on Windows.
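If you'd rather trigger the reload from code instead of Ant, the same Manager reload operation can be invoked over HTTP. Below is a minimal sketch, assuming Tomcat 7+'s /manager/text interface, a user with the manager-script role, and an app deployed at /myapp; the host, port, credentials, and context path are placeholders you would adjust.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Calls the Tomcat Manager's text interface to reload one webapp.
// Placeholders: localhost:8080, admin/secret, and the /myapp context path.
public class ReloadWebapp {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/manager/text/reload?path=/myapp");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        String credentials = Base64.getEncoder()
                .encodeToString("admin:secret".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // the Manager prints a status line starting with "OK" on success
            }
        }
    }
}

Scheduling it is then just a matter of running this class from cron or Task Scheduler, or from a ScheduledExecutorService in another JVM.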
I notice you've tagged your question garbage-collection. If you are redeploying the web app due to excessive GC, then it's better to tackle the root cause of the issue, since this reload is only a workaround.
Run a profiler to identify memory leaks.
Related Reading on Memory Issues / GC
Java memory leak
When log shows a lot of GC hits, what code change shall we need?
Ways to reduce memory churn

Related

How to fix a memory leak issue in Tomcat and find what causes it

I have deployed the code on the Tomcat server and am making frequent updates to the WAR file.
When I click the memory-leak detection option in the Tomcat Manager I get the error below. To fix it I restart the server, but that's not an effective solution, so I want to know what I am doing wrong in the code so that I can fix it. Using Maven, Spring, JPA, Java 8.
The following web applications were stopped (reloaded, undeployed), but their
classes from previous runs are still loaded in memory, thus causing a memory
leak (use a profiler to confirm):
You can use jvisualvm; you'll find it under the JAVA_HOME path referenced in the Tomcat server's catalina.bat/catalina.sh file.
Once you start jvisualvm, open the process with the PID your Tomcat is running under. From there you can go to the Monitor or Profiler tab, where you can see how much processing your Tomcat is doing and what is running internally within the JVM.
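For what it's worth, one frequent cause of this particular warning is a JDBC driver that the webapp registers but never deregisters, which pins the old classloader in memory across redeploys. A minimal sketch of a cleanup listener (the class name is made up; it assumes the Servlet 3.0 API, i.e. Tomcat 7 or newer):

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Deregisters JDBC drivers loaded by this webapp when it is stopped/undeployed.
@WebListener
public class JdbcDriverCleanupListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ClassLoader webappClassLoader = Thread.currentThread().getContextClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // only deregister drivers that this webapp's classloader loaded
            if (driver.getClass().getClassLoader() == webappClassLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    sce.getServletContext().log("Failed to deregister driver " + driver, e);
                }
            }
        }
    }
}

A heap dump or profiler will tell you whether the old classloader is really being held by a driver, a ThreadLocal, or a background thread the app started and never stopped.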

java pid keep increasing

We have a Tomcat application running on a Debian 6.07 server.
Lately the CPU usage has been increasing gradually.
Using the top command I noticed that the CPU used by the Java PID keeps increasing every day.
I need to restart Tomcat to bring it back to normal.
After restarting Tomcat, the Java CPU usage goes back to around 2%.
From that moment it increases every day, and I have to restart Tomcat every time it reaches around 40%.
Is there any way to fix this issue?
Thank you
It looks like you have a memory leak, or a thread that consumes memory or CPU iteratively without freeing unused resources.
You can use tools like a Java profiler (or any other Java auditing and profiling tool) to analyze which resources are being used and by whom (classes, threads, etc.).
Check out the following links for Java profiling tools:
https://blog.idrsolutions.com/2014/06/java-performance-tuning-tools/
http://www.infoq.com/articles/java-profiling-with-open-source
(If you can share more info, I'll edit my answer accordingly.)
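To make that concrete, here is a hypothetical illustration (class and method names are invented, not taken from the question) of the kind of pattern that produces this slow, restart-curable growth: a scheduler created on every call and never shut down, so threads and queued work pile up until the next restart.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class StatusPoller {
    // Leaky version (illustrative): each call leaks a live, never-terminated scheduler thread.
    public void pollLeaky(Runnable check) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(check, 0, 5, TimeUnit.SECONDS);
        // scheduler is never shut down and never reused
    }

    // Better: share one scheduler for the whole application and shut it down on undeploy.
    private static final ScheduledExecutorService SHARED =
            Executors.newScheduledThreadPool(2);

    public void pollShared(Runnable check) {
        SHARED.scheduleAtFixedRate(check, 0, 5, TimeUnit.SECONDS);
    }

    public static void shutdown() {
        SHARED.shutdownNow(); // call this from a ServletContextListener's contextDestroyed
    }
}

A quick way to confirm this class of problem is a thread dump (jstack <pid>): with the leaky version the number of pool-N-thread-1 entries keeps growing between dumps.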

Application in Tomcat is not responding

We are trying to access an application on a Tomcat instance on a different host, but it is not loading even though Tomcat is running. It had been running fine for the past 3 months. We restarted Tomcat and now it is working fine again.
But we were not able to zero in on what happened.
Any idea how to trace this, or what might have caused it?
The CPU usage was normal and the Tomcat memory was 1205640.
The memory settings of Tomcat are 1024-2048 (min-max).
We are using Tomcat 7.
Help much appreciated, thanks in advance!
...also (not sure about Windows) - you may be running out of file descriptors. This typically happens when streams are not properly closed in finally blocks.
In addition, check with netstat whether you have a lot of sockets remaining open or accumulating in a wait state.
Less likely, the application is creating threads and never releasing them.
Either way, the application is leaking something (memory, file descriptors, sockets, threads, ...) and running over a limit.
There are different ways to track this down. A profiler may help, or more simply, take JVM dumps at regular intervals and check what is accumulating. The excellent MAT will help you analyze the dumps.
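To illustrate the stream-closing point above, a minimal sketch (the helper and file path are illustrative): with try-with-resources the underlying file descriptor is released even if an exception is thrown, whereas a read without a close in a finally block can leak it.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ReadConfig {
    public static String firstLine(String path) throws IOException {
        // try-with-resources closes the reader (and its file descriptor) automatically,
        // whether readLine() succeeds or throws
        try (BufferedReader reader =
                Files.newBufferedReader(Paths.get(path), StandardCharsets.UTF_8)) {
            return reader.readLine();
        }
    }
}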
Memory leak problems are not uncommon. If your Tomcat instance had been running for three months and the contained application suddenly became unresponsive, that may well have been the case. One option (if your resources allow it) is to monitor that Tomcat instance through JMX, using jconsole, to see how it behaves.

Tomcat dies suddenly

Trying to diagnose some bizarre Tomcat (7.0.21) and/or JVM errors on a 64-bit Linux (CentOS) machine.
I'm load testing our server application and tried hitting it with 100K messages. I launched jvisualvm and kept my eye on the heap the whole time. Everything was looking great* (see below) until I got to about 93K processed messages, and then Tomcat just died. I ran ps on Tomcat's PID to confirm it was dead.
Up until this crash:
The load test had been running for about 90 minutes (it should have finished shortly thereafter, since we were at 93K/100K)
CPU was holding strong around 45%
Used heap was around 2GB (plus or minus a bunch after GCs) but heap size grew from 4GB to MAX_HEAP after about 30 minutes
Class loading/unloading was cycling normally
Thread dumps were normal
Nowhere in the server code are any calls to System.exit() - so we can rule that right out (and yes I've double-checked!!!).
I'm not sure if this is Tomcat crashing or the JVM (how do I tell?). And even if I did know, I can't seem to find any indication of what went wrong:
All of the server app's logs just stop without any ERROR messages (even though we have logging universally set to DEBUG and higher)
Tomcat's catalina.out and the respective localhost_access_* files just stop without any info
I've heard it is possible to have Tomcat log a core dump when it dies, but I'm not sure how to do that and online examples aren't helping much.
How would SO go about diagnosing this? What steps should I take to start ruling out all of the possible factors?
Thanks in advance!
If the JVM crashes, you should have a hs_err_pidNNN.log file; you don't have to do anything to enable this. Its location depends on your OS and how you are running Tomcat. On Windows, they can show up on your desktop, unless you are running as a service. Otherwise, they should be in the current working directory of the crashed process.
Your operating system probably provides additional tools for process monitoring; you could describe your environment more, or perhaps ask at serverfault.com.
It's also possible that jvisualvm is actually causing the crash.
I'd try reproducing the problem, and progressively simplify the scenario to help isolate the cause.
Another possibility is that the OS is running out of memory and the OOM Killer is killing your process. In this case, the JVM wouldn't get an opportunity to write a heap dump, or an hs_err_pid file.
You can use the JVM option -XX:+HeapDumpOnOutOfMemoryError to create a heap dump when the JVM dies due to an out-of-memory error.
More details here: Using HeapDumpOnOutOfMemoryError parameter for heap dump for JBoss.
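For Tomcat specifically, such JVM options usually go into CATALINA_OPTS, for example in bin/setenv.sh (picked up by catalina.sh if the file exists; the dump directory below is only an example, pick any writable location):

CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat/heapdumps"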
Sorry, I had to remove the green check from @erickson's answer. I finally figured out what was killing Tomcat.
It looks like a profiler plugin was not configured correctly in VisualVM, and attempting to run a profile on the Tomcat process killed it.
Investigating why right now, and I will update this answer once I know more.

Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application that is deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly eats up more and more memory over time until it surpasses the maximum value and grinds to a halt. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be in check. If I run the garbage collector I get a decent chunk of memory back, but I don't know how to do this sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 and I am using the Quartz plugin.
You can use the VisualVM application included in the Oracle JDK to attach to the Tomcat instance while it is running (if you are already using an Oracle JVM) and inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You are most likely looking either for objects that grow or for types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-locals. You may want to double-check whether this applies to your situation.
