There are similar questions, but none of them answers my concern.
Here it says: "One hack to get around this problem is to have the JDBC driver loaded by the common class loader rather than the application classloader; you can do this by moving the driver's jar into Tomcat's lib directory instead of bundling it in the web application's WAR file."
I did not understand what it means to load a class by the common class loader, or how that is different from the application classloader.
This means that the ClassLoader loading the JDBC driver class is the class loader of your application server, which is a parent of your application's classloader. Therefore, the driver is available to every application on your server and is not reloaded on every restart of your application (which can lead to PermGen trouble if you do not unregister it properly).
Every time you deploy an application and load a class from it, the class is loaded by the application classloader. The more applications you deploy, the more copies of the "same" classes are loaded. If you use Tomcat's "common" classloader instead, the class is loaded only once per Tomcat installation.
OutOfMemoryError: PermGen space is usually only a problem if you're using the hot redeploy feature of Tomcat. It can also occur if you simply have a very large number of classes in your deployment.
Increasing the amount of PermGen available to the VM solves the large-number-of-classes problem. That can be done by adding -XX:MaxPermSize=128m or -XX:MaxPermSize=256m to the JAVA_OPTS or CATALINA_OPTS environment variable (usually in the Tomcat launch script). If you are launching Tomcat directly, you can export these environment variables in your shell.
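For example, the flag can go in a setenv script, which Tomcat's startup scripts source automatically if it exists (the 256m value is just an example; size it to your deployment). Note that PermGen and this flag were removed in Java 8, where the equivalent concern is Metaspace and -XX:MaxMetaspaceSize:

```shell
# CATALINA_HOME/bin/setenv.sh -- sourced by catalina.sh at startup if present
# (on Windows, the equivalent is a setenv.bat setting CATALINA_OPTS)
CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=256m"
export CATALINA_OPTS
```

Keeping the flag in setenv.sh rather than editing catalina.sh directly means upgrades of Tomcat won't silently drop your settings.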
Unfortunately this doesn't completely solve the redeploy issue; it only lets you redeploy more times before running out of PermGen. To fix the issue you need to make sure your web app unloads correctly and completely. Among other things, this means stopping all threads started by your webapp and unregistering any JDBC drivers it loaded. The other way to solve this is to not use hot redeploy and instead restart Tomcat when making changes to the application.
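Unregistering the drivers is typically done from the webapp's shutdown path, e.g. a ServletContextListener's contextDestroyed method. A minimal sketch using only the java.sql API (the class name DriverCleanup is mine, not a standard one):

```java
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverCleanup {
    // Call this when the webapp shuts down (e.g. from
    // ServletContextListener.contextDestroyed) so the webapp's
    // classloader can be garbage-collected.
    public static void deregisterAll() {
        ClassLoader webappLoader = DriverCleanup.class.getClassLoader();
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            // Only touch drivers this webapp loaded; drivers registered by
            // the common classloader are shared and must stay registered.
            if (driver.getClass().getClassLoader() == webappLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    // nothing useful to do at shutdown; log and continue
                }
            }
        }
    }
}
```

The classloader comparison is the important part: deregistering a driver that belongs to the common classloader would break every other webapp on the server.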
Related
We have a Spring Boot application running on Tomcat; it is a RESTful web service. The same WAR file is deployed on 3 Tomcat instances in our test environment as well as in production. While running performance tests we noticed a peculiar problem on some servers: they stop responding after processing about 2500 requests. The issue happens on 2 out of 3 production servers and on 1 out of 3 test servers.
On the servers that have the issue, our JVM monitoring shows that the loaded-class count keeps increasing while the performance test runs, from about 20k to around 2 million. As the class count approaches 2 million, the monitoring also shows GC taking too long, more than 40 seconds. Once it reaches that point, the application stops responding and throws an OutOfMemoryError: "Compressed class space". If we continue sending requests, the application logs show that the service still receives them but stops processing midway.
On the other servers without the issue, the loaded-class count stays at a constant 20k, and GC is normal too, taking less than 1 second.
Other tests and behaviors we have noticed:
The issue also happens on local Tomcat instances installed on Windows PCs; the servers are on Linux. The issue happens on both OpenJDK and Oracle JDK 1.8.
We verified the Tomcat instances are identical to each other - we even cloned the instances from the working servers onto the bad servers.
Tested with different GC policies - Parallel Scavenge, CMS, and G1 - and the issue happens with all three.
Tested running the application as a standalone Spring Boot JAR, and the issue goes away: the class count stays constant and GC behaves normally.
The application currently uses JAXB libraries to perform XML marshalling/unmarshalling, and we found places in the code that we can optimize. Refactoring and moving to the Jackson library is another option.
My questions are:
What would be causing the difference between multiple servers when we are deploying the same WAR file?
What would be causing the difference between the application running as WAR deployed on Tomcat versus running as standalone Spring boot application?
If we take a heap dump of the JVM or do a profiling, what are the things to look out for?
So it turns out this was due to the jaxb-impl 2.1 jar in our classpath. Thanks to Mark for pointing out the known bug with JAXB.
Our application did not explicitly declare jaxb-impl as a dependency, so it was hard to see at first. Looking at the Maven dependency tree, we found that two different versions were being pulled in by other projects and libraries: our application had jaxb-impl versions 2.1 and 2.2.6 in the classpath. We added an exclusion for the 2.1 version in our application's pom.xml, and that fixed the issue.
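The exclusion looks roughly like the sketch below; some-library is a stand-in for whichever dependency your own `mvn dependency:tree` output shows pulling in the old jar (com.sun.xml.bind:jaxb-impl are the usual coordinates, but verify against your tree):

```xml
<!-- pom.xml: exclude the transitive jaxb-impl 2.1 so only 2.2.6 remains -->
<dependency>
    <groupId>com.example</groupId>          <!-- hypothetical parent dependency -->
    <artifactId>some-library</artifactId>
    <version>1.0</version>
    <exclusions>
        <exclusion>
            <groupId>com.sun.xml.bind</groupId>
            <artifactId>jaxb-impl</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

After adding the exclusion, rerun `mvn dependency:tree` to confirm only one version of the artifact remains on the classpath.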
My guess is that different servers were loading different versions at application startup. That could be why some servers were working fine while the ones that loaded the 2.1 version had issues. Similarly, when running as a standalone Spring Boot app, it presumably loaded the 2.2.6 version, which is why the issue went away.
I have deployed the code on a Tomcat server and am doing frequent updates to the WAR file.
When I click on the memory-leak detection option, I get the error shown below. To fix it I restart the server, but that is not an effective solution, so I want to know what I am doing wrong in the code so that I can fix it. Using Maven, Spring, JPA, Java 8.
The following web applications were stopped (reloaded, undeployed), but their
classes from previous runs are still loaded in memory, thus causing a memory
leak (use a profiler to confirm):
You can use jvisualvm.exe; you will find it under the JAVA_HOME that Tomcat uses (see the path specified in the Tomcat server's catalina.bat/catalina.sh file).
Once you start jvisualvm, attach to the process with the PID your Tomcat is running under. After that, you can go to the Monitor or Profiler tab, where you can see how much memory and CPU Tomcat is using and what threads and classes are live inside the JVM.
I have a standalone application running in IBM WebSphere 7.0.0.19, on Java 6, and we pack an Axis2 JAR in our EAR. We use 'parent last' classloading and have disabled the Axis service that ships with WAS 7 by default.
Recently, after 6+ weeks of continuous operation, the application experienced an OOM. The perplexing point is that the application is deployed separately on 2 different machines, but only one machine went down; the second machine is still up.
We checked the OS and server configuration, including the classloader policy, using the WAS console, and they are the same on both machines.
When the application crashed, it generated a .phd file, which we analysed using the Eclipse Memory Analyzer Tool (MAT). The analysis is shown in the screenshot.
If I'm correct, the bootstrap class loader is repeatedly loading and holding references to AxisConfiguration, so GC is unable to collect them when it runs. But if that were the case, both servers should have come down, yet only one experienced an OOM. The memory allocated to the JVM is the same on both machines.
We are not sure whether the issue is with WAS 7 or with axis2-kernel-1.4.1.jar or with something else.
http://www.slideshare.net/leefs/axis2-client-memory-leak
https://issues.apache.org/jira/browse/AXIS2-3870
http://java.dzone.com/articles/12-year-old-bug-jdk-still-out
(Links may not refer to the current issue. But they are just pointers)
Has anyone experienced something similar ?
We saw memory growth and sockets left open with Axis2 1.4 on WebSphere 6.1 in the past. It's been a long time, but my notes suggest it might be worth upgrading to at least Axis2 1.5.1 to fix this bug with the open sockets, and also making sure you are not repeatedly creating new objects where a singleton exists (e.g. the Service object).
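The singleton point can be illustrated with the standard lazy-holder idiom. ExpensiveClient here is a hypothetical stand-in for an expensive-to-build client such as the Axis2 Service object, not the real Axis2 API:

```java
public class ClientHolder {
    // Hypothetical stand-in for a heavyweight client (parsed WSDL,
    // connection pools, etc.) that should be built once, not per request.
    public static class ExpensiveClient {
        ExpensiveClient() {
            // imagine expensive setup work here
        }
    }

    private ClientHolder() {}

    // The holder idiom: INSTANCE is created on the first call to get(),
    // and JVM class-initialization rules make this thread-safe without locks.
    private static final class Lazy {
        static final ExpensiveClient INSTANCE = new ExpensiveClient();
    }

    public static ExpensiveClient get() {
        return Lazy.INSTANCE;
    }
}
```

Every request then calls ClientHolder.get() instead of constructing its own client, which avoids both the object churn and the leaked resources per request.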
I'd like to run a web container where each webapp runs in its own process (JVM). Incoming requests get forwarded by a proxy webapp running on port 80 to individual webapps, each (webapp) running on its own port in its own JVM.
This will solve three problems:
Webapps using JNI (where the JNI code changes between restarts) cannot be restarted. There is no way to guarantee that the old webapp has been garbage-collected before loading the new webapp, so when the code invokes System.loadLibrary() the JVM throws: java.lang.UnsatisfiedLinkError: Native Library x already loaded in another classloader.
Libraries leak memory every time a webapp is reloaded, eventually forcing a full server restart. Tomcat has made headway in addressing this problem but it will never be completely fixed.
Faster restarts. The mechanism I'm proposing would allow near-instant webapp restarts. We no longer have to wait for the old webapp to finish unloading, which is the slowest part.
I've posted an RFE here and here. I'd like to know what you think.
Does any existing web container do this today?
I'm closing this question because I seem to have run into a dead end: http://tomcat.10.n6.nabble.com/One-process-per-webapp-td2084881.html
As a workaround, I'm manually launching a separate Jetty instance per webapp.
Can't you just deploy one app per container and then use DNS entries and reverse proxies to achieve the same thing? I believe WebLogic has something like this in the form of managed domains.
No, AFAIK, none of them do, probably because Java web containers follow the Servlet API, which spins off a thread per HTTP request. What you want would be a fork at the JVM level, and that simply isn't a standard Java idiom.
If I understand correctly, you are asking for the standard features of enterprise-quality servers such as IBM's WebSphere Network Deployment (disclaimer: I work for IBM), where you can distribute applications across many JVMs, and those JVMs can in fact be distributed across many physical machines.
I'm not sure that your fundamental premise is correct, though. It's not necessary to restart a whole JVM to deploy a new version of an application. Many app servers use a classloader strategy that lets them discard one version of an app and load a new one.
I have read that with Tomcat 5.5+ it is possible to deploy a WAR to a Tomcat server without a restart. That sounds fantastic, but I am skeptical about this functionality and its reliability. My previous experience (with WebSphere) was that it was a best practice to restart the server to avoid memory problems, etc. So I wanted to get feedback on what pitfalls might exist with Tomcat.
(To be clear about my experience: I developed Java web apps for 5 years at a large company that separated the app developers from the app server engineers - we used WebSphere - so I don't have much experience running or configuring app servers myself.)
In general, there are multiple types of leaks, and they all apply to redeploy scenarios. For production systems, it is really best to perform restarts if possible, as today's applications use so many different components and libraries that it is very hard to find all the leaks, and even harder to fix them, especially if you don't have access to all the source code.
Memory leaks
Thread and ThreadLocal leaks
ClassLoader leaks
System resource leaks
Connection leaks
ClassLoader leaks are the ones that bite at redeployment.
They can be caused by everything. Really, I mean everything:
Timers: Timers have threads, and threads created at runtime inherit the current context classloader, which means Tomcat's WebappClassLoader.
ThreadLocals: ThreadLocals are bound to the thread. App servers use thread pools, so when a ThreadLocal is set on a thread and the thread is returned to the pool, the ThreadLocal stays there unless somebody remove()s it properly. This happens quite often and is very hard to find (ThreadLocals have no name, except the rarely used Spring NamedThreadLocal). If the ThreadLocal holds a class loaded by the WebappClassLoader, you have a ClassLoader leak.
Caches: e.g. EhCache CacheManager
Reflection: JavaBeans Introspector (e.g. holding Class or Method caches)
JDBC drivers: they shouldn't be in the WAR file anyway. They leak due to the static DriverManager registry.
Static libraries which cache ClassLoaders, such as Commons-Logging LogFactory
Specific to Tomcat, my experience is as follows:
For simple apps with "clean" libraries, it works fine in Tomcat
Tomcat tries very hard to clean up classes loaded by the WebappClassLoader. For example, all static fields of classes are set to null when a webapp is undeployed. This sometimes leads to NullPointerExceptions when code runs while the undeployment is happening, e.g. background jobs using a Logger.
Tomcat has a listener that cleans up even more stuff. It's called org.apache.catalina.core.JreMemoryLeakPreventionListener and was added to Tomcat 6.x recently.
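In recent Tomcat versions this listener is already registered in conf/server.xml by default; if it is missing in yours, the entry looks like this:

```xml
<!-- conf/server.xml, as a child of the <Server> element -->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
```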
I wrote a blog post about my experience with leaks when doing redeployment stress testing - trying to "fix" all possible leaks of an enterprise-grade Java web application.
Hot deployment is very nice as it usually is much faster than bringing the server up and down.
mhaller has written a lot about avoiding leaks. Another issue is letting active users' sessions survive the application "reboot". Several things must be taken care of, but all in all it means the session must be serializable and THEN deserialize properly afterwards. This can be a bit tricky if you have stateful database connections etc., but if your code is robust against database hiccups anyway, it shouldn't be too bad.
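In practice that means every object placed in the HttpSession must round-trip through Java serialization, and transient fields such as live connections need to be re-acquired afterwards. A sketch with made-up names:

```java
import java.io.Serializable;

// Hypothetical session attribute: plain state serializes, live resources do not.
public class UserSessionState implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String username;           // survives serialization
    private transient Object liveConnection; // dropped; re-acquired lazily

    public UserSessionState(String username) {
        this.username = username;
    }

    public String getUsername() {
        return username;
    }

    // Transient fields come back null after deserialization, so callers
    // go through an accessor that can rebuild them on first use.
    public synchronized Object getLiveConnection() {
        if (liveConnection == null) {
            liveConnection = new Object(); // stand-in for reopening a connection
        }
        return liveConnection;
    }
}
```

The same pattern applies to any non-serializable resource held in the session: mark it transient and rebuild it lazily rather than letting serialization fail on redeploy.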
Also note that some IDEs allow updating code inside the deployed application when you save a modified source file, instead of having to redeploy. MyEclipse does this rather nicely.