LDAP PermGen memory leak - java

Whenever I use LDAP in a web application it causes a classloader leak, and the strange thing is profilers don’t find any GC roots.
I’ve created a simple web application that demonstrates the leak, it only includes this class:
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class LDAPLeakDemo implements ServletContextListener {

    public void contextInitialized(ServletContextEvent sce) {
        useLDAP();
    }

    public void contextDestroyed(ServletContextEvent sce) {}

    private void useLDAP() {
        Hashtable<String, Object> env = new Hashtable<String, Object>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.forumsys.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=read-only-admin,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        try {
            DirContext ctx = null;
            try {
                ctx = new InitialDirContext(env);
                System.out.println("Created the initial context");
            } finally {
                if (ctx != null) {
                    ctx.close();
                    System.out.println("Closed the context");
                }
            }
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }
}
The source code is available here. I’m using a public LDAP test server for this example, so it should work for everyone if you want to try it.
I tried it with the latest JDK 7 and 8 and Tomcat 7 and 8 with the same result – when I click on Reload in Tomcat Web Application Manager and then on Find leaks, Tomcat reports that there’s a leak and profilers confirm it.
The leak is barely noticeable in this example, but it causes an OutOfMemoryError in a big web application. I didn’t find any open JDK bugs about it.
UPDATE 1
I've tried to use Jetty 9.2 instead of Tomcat and I still see the leak, so it's not Tomcat's fault. Either it's a JDK bug or I'm doing something wrong.
UPDATE 2
Even though my example demonstrates the leak, it doesn’t demonstrate the out of memory error, because it has very small PermGen footprint. I’ve created another branch that should be able to reproduce OutOfMemoryError. I just added Spring, Hibernate and Logback dependencies to the project to increase PermGen consumption. These dependencies have nothing to do with the leak and I could have used any others instead. The only purpose of those is to make PermGen consumption big enough to be able to get OutOfMemoryError.
Steps to reproduce OutOfMemoryError:
Download or clone the outofmemory-demo branch.
Make sure you have JDK 7 and any version of Tomcat and Maven (I used the latest versions - JDK 1.7.0_79 and Tomcat 8.0.26).
Decrease the PermGen size to be able to see the error after the first reload. Create setenv.bat (Windows) or setenv.sh (Linux) in Tomcat’s bin directory and add set "JAVA_OPTS=-XX:PermSize=24m -XX:MaxPermSize=24m" (Windows) or export JAVA_OPTS="-XX:PermSize=24m -XX:MaxPermSize=24m" (Linux).
Go to Tomcat’s conf directory, open tomcat-users.xml and add <role rolename="manager-gui"/><user username="admin" password="1" roles="manager-gui"/> inside <tomcat-users></tomcat-users> to be able to use Tomcat Web Application Manager.
Go to project’s directory and use mvn package to build a .war.
Go to Tomcat’s webapps directory, delete everything except the manager directory and copy the .war here.
Run Tomcat’s start script (bin\startup.bat or bin/startup.sh) and open http://localhost:8080/manager/, use username admin and password 1.
Click on Reload and you should see java.lang.OutOfMemoryError: PermGen space in Tomcat's console.
Stop Tomcat, open project’s source file src\main\java\org\example\LDAPLeakDemo.java, remove the useLDAP(); call and save it.
Repeat steps 5-8, only this time there’s no OutOfMemoryError, because the LDAP code is never called.

First of all: Yes, the LDAP API provided by Sun/Oracle can trigger ClassLoader leaks. It is on my list of known offenders, because if system property com.sun.jndi.ldap.connect.pool.timeout is > 0 com.sun.jndi.ldap.LdapPoolManager will spawn a new thread running in the web app that first invoked LDAP.
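For reference, a quick way to check whether that offender applies to your JVM is to inspect the property the answer mentions. This is only a sketch (the class name LdapPoolCheck is made up; the property name com.sun.jndi.ldap.connect.pool.timeout is the real one from the JDK's LDAP provider):

```java
// Sketch: if this system property is unset or 0, com.sun.jndi.ldap.LdapPoolManager
// never spawns its pool-cleaner thread, so that particular leak vector is closed.
public class LdapPoolCheck {
    public static void main(String[] args) {
        String timeout = System.getProperty("com.sun.jndi.ldap.connect.pool.timeout");
        if (timeout != null && Long.parseLong(timeout) > 0) {
            System.out.println("Pool idle timeout is set; a cleaner thread may be spawned");
        } else {
            System.out.println("No pool timeout set; no cleaner thread will be started");
        }
    }
}
```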
That being said, I added your example code as a test case in my ClassLoader Leak Prevention library, so that I'd get an automatic heap dump of the leak. According to my analysis, there is in fact no leak in your code, however it does seem to take more than one Garbage Collector cycle to get the ClassLoader in question GC:ed (probably due to transient references - haven't dug into it that much). This probably tricks Tomcat into believing there is a leak, even if there is none.
However, since you say you eventually get an OutOfMemoryError, either I'm wrong or there is something else in your app causing these leaks. If you add my ClassLoader Leak Prevention library to your app, does it still leak/cause OOMEs? Does the Preventor log any warnings?
If you set up your application server to create a heap dump whenever there is an OOME, you can look for the leak using Eclipse Memory Analyzer. I've explained the process in detail here.

It's been a while since I posted this question. I finally found what really happened, so I thought I'd post it as the answer in case @MattiasJiderhamn or others are interested.
The reason profilers didn’t find any GC roots was because JVM was hiding the java.lang.Throwable.backtrace field as described in https://bugs.openjdk.java.net/browse/JDK-8158237. Now that this limitation is gone I was able to get the GC root:
this - value: org.apache.catalina.loader.WebappClassLoader #2
<- <classLoader> - class: org.example.LDAPLeakDemo, value: org.apache.catalina.loader.WebappClassLoader #2
<- [10] - class: java.lang.Object[], value: org.example.LDAPLeakDemo class LDAPLeakDemo
<- [2] - class: java.lang.Object[], value: java.lang.Object[] #3394
<- backtrace - class: javax.naming.directory.SchemaViolationException, value: java.lang.Object[] #3386
<- readOnlyEx - class: com.sun.jndi.toolkit.dir.HierMemDirCtx, value: javax.naming.directory.SchemaViolationException #1
<- EMPTY_SCHEMA (sticky class) - class: com.sun.jndi.ldap.LdapCtx, value: com.sun.jndi.toolkit.dir.HierMemDirCtx #1
The cause of this leak is the LDAP implementation in the JDK. The com.sun.jndi.ldap.LdapCtx class has a static field
private static final HierMemDirCtx EMPTY_SCHEMA = new HierMemDirCtx();
com.sun.jndi.toolkit.dir.HierMemDirCtx contains the readOnlyEx field, which is assigned an instance of javax.naming.directory.SchemaViolationException during the LDAP initialization that happens after the new InitialDirContext(env) call in the code from my question. The issue is that java.lang.Throwable, the superclass of all exceptions including javax.naming.directory.SchemaViolationException, has the backtrace field. This field contains references to all classes in the stack trace at the time the constructor was called, including my own org.example.LDAPLeakDemo class, which in turn holds a reference to the web application classloader.
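The mechanism can be illustrated with plain Java. This is only a sketch (BacktraceDemo is an illustrative name): the visible stack trace is derived from the hidden backtrace field, so even this simplified example shows the constructor capturing the calling class.

```java
// Sketch: a Throwable captures the call stack at construction time. The hidden
// `backtrace` field holds Class references for each frame, which is why an
// exception cached in a static field (like LdapCtx's EMPTY_SCHEMA.readOnlyEx)
// can pin a webapp class and, through it, the webapp classloader.
public class BacktraceDemo {
    static Throwable cached;  // stand-in for a JDK-internal static reference

    static void initInWebapp() {
        cached = new Exception("created inside the webapp");
    }

    public static void main(String[] args) {
        initInWebapp();
        // The stack trace references this class, so the Class object, and its
        // defining classloader, stays strongly reachable via `cached`.
        StackTraceElement top = cached.getStackTrace()[0];
        System.out.println(top.getClassName() + "." + top.getMethodName());
    }
}
```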
Here's a similar leak that was fixed in Java 9 https://bugs.openjdk.java.net/browse/JDK-8146961

Related

How to fix a memory leak issue in Tomcat and find its causes

I have deployed the code on the Tomcat server and frequently update the war file.
When I click on the memory leak option I get the error below. To fix it I restart the server, but that is not an effective solution, so I want to know what I am doing wrong in the code so that I can fix it. Using Maven, Spring, JPA, Java 8.
The following web applications were stopped (reloaded, undeployed), but their
classes from previous runs are still loaded in memory, thus causing a memory
leak (use a profiler to confirm):
You can use jvisualvm.exe, found under the JAVA_HOME path specified in the Tomcat server's catalina.bat/catalina.sh file.
Once you start jvisualvm, attach to the process with the PID your Tomcat is running on. Then go to the Monitor or Profiler tab, where you can see how much memory and CPU Tomcat is consuming and how many internal processes are running within the JVM.

java.lang.OutOfMemoryError: PermGen space for creating entity manager [duplicate]

Recently I ran into this error in my web application:
java.lang.OutOfMemoryError: PermGen space
It's a typical Hibernate/JPA + IceFaces/JSF application running on Tomcat 6 and JDK 1.6.
Apparently this can occur after redeploying an application a few times.
What causes it and what can be done to avoid it?
How do I fix the problem?
The solution was to add these flags to JVM command line when Tomcat is started:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
You can do that by shutting down the tomcat service, then going into the Tomcat/bin directory and running tomcat6w.exe. Under the "Java" tab, add the arguments to the "Java Options" box. Click "OK" and then restart the service.
If you get an error saying the specified service does not exist as an installed service, you should run:
tomcat6w //ES//servicename
where servicename is the name of the server as viewed in services.msc.
Source: orx's comment on Eric's Agile Answers.
Make sure you use -XX:MaxPermSize=128M rather than -XX:MaxPermGen=128M (the latter is not a valid option).
I cannot tell you the precise use of this memory pool, but it has to do with the number of classes loaded into the JVM (thus, enabling class unloading for Tomcat can resolve the problem). If your application generates and compiles classes on the fly, it is more likely to need a memory pool bigger than the default.
App server PermGen errors that happen after multiple deployments are most likely caused by references held by the container into your old apps' classloaders. For example, using a custom log level class will cause references to be held by the app server's classloader. You can detect these inter-classloader leaks by using modern (JDK6+) JVM analysis tools such as jmap and jhat to look at which classes continue to be held in your app, and redesigning or eliminating their use. Usual suspects are databases, loggers, and other base-framework-level libraries.
See Classloader leaks: the dreaded "java.lang.OutOfMemoryError: PermGen space" exception, and especially its followup post.
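The jmap-style heap dump mentioned above can also be triggered from inside the JVM. This sketch assumes a HotSpot JVM (the HotSpotDiagnosticMXBean is HotSpot-specific) and produces the same .hprof file format that jmap -dump writes, which can then be opened in jhat or Eclipse Memory Analyzer:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

// Sketch: programmatically write a heap dump for later classloader-leak analysis.
public class HeapDumper {
    public static File dump(File outputFile) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(outputFile.getAbsolutePath(), true);  // true = live objects only
        return outputFile;
    }

    public static void main(String[] args) throws Exception {
        // File name must end in .hprof and must not already exist.
        File out = new File(System.getProperty("java.io.tmpdir"),
                "leak-" + System.nanoTime() + ".hprof");
        dump(out);
        System.out.println("Wrote " + out.length() + " bytes to " + out);
    }
}
```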
A common mistake is thinking that heap space and PermGen space are the same, which is not at all true. You could have a lot of space remaining in the heap and still run out of memory in PermGen.
A common cause of OutOfMemoryError in PermGen is the ClassLoader. Whenever a class is loaded into the JVM, all its metadata, along with its ClassLoader, is kept in the PermGen area, and it will be garbage collected when the ClassLoader that loaded it is ready for garbage collection. If a ClassLoader has a memory leak, all classes loaded by it remain in memory, causing a PermGen OutOfMemoryError once you repeat redeployment a couple of times. The classic example is java.lang.OutOfMemoryError: PermGen space in Tomcat.
Now there are two ways to solve this:
1. Find the cause of Memory Leak or if there is any memory leak.
2. Increase size of PermGen Space by using JVM param -XX:MaxPermSize and -XX:PermSize.
You can also check 2 Solution of Java.lang.OutOfMemoryError in Java for more details.
Use the command line parameter -XX:MaxPermSize=128m for a Sun JVM (obviously substituting 128 for whatever size you need).
Try -XX:MaxPermSize=256m and if it persists, try -XX:MaxPermSize=512m
I added -XX:MaxPermSize=128m (you can experiment to see what works best) to the VM arguments, as I'm using the Eclipse IDE. In most JVMs the default PermSize is around 64MB, which runs out if a project has too many classes or a huge number of strings.
For Eclipse, it is also described in this answer.
STEP 1: Double-click on the Tomcat server in the Servers tab.
STEP 2: Open the launch configuration and add -XX:MaxPermSize=128m to the end of the existing VM arguments.
I've been butting my head against this problem while deploying and undeploying a complex web application too, and thought I'd add an explanation and my solution.
When I deploy an application on Apache Tomcat, a new ClassLoader is created for that app. The ClassLoader is then used to load all the application's classes, and on undeploy, everything's supposed to go away nicely. However, in reality it's not quite as simple.
One or more of the classes created during the web application's life holds a static reference which, somewhere along the line, references the ClassLoader. As the reference is originally static, no amount of garbage collecting will clean this reference up - the ClassLoader, and all the classes it's loaded, are here to stay.
And after a couple of redeploys, we encounter the OutOfMemoryError.
Now this has become a fairly serious problem. I could make sure that Tomcat is restarted after each redeploy, but that takes down the entire server, rather than just the application being redeployed, which is often not feasible.
So instead I've put together a solution in code, which works on Apache Tomcat 6.0. I've not tested on any other application servers, and must stress that this is very likely not to work without modification on any other application server.
I'd also like to say that personally I hate this code, and that nobody should be using this as a "quick fix" if the existing code can be changed to use proper shutdown and cleanup methods. The only time this should be used is if there's an external library your code is dependent on (In my case, it was a RADIUS client) that doesn't provide a means to clean up its own static references.
Anyway, on with the code. This should be called at the point where the application is undeploying - such as a servlet's destroy method or (the better approach) a ServletContextListener's contextDestroyed method.
// Requires: java.lang.reflect.Field, java.lang.reflect.Modifier, java.util.*,
// and org.apache.catalina.loader.WebappClassLoader (Tomcat internal).
// Get a list of all classes loaded by the current webapp classloader
WebappClassLoader classLoader = (WebappClassLoader) getClass().getClassLoader();
Field classLoaderClassesField = null;
Class<?> clazz = WebappClassLoader.class;
while (classLoaderClassesField == null && clazz != null) {
    try {
        classLoaderClassesField = clazz.getDeclaredField("classes");
    } catch (NoSuchFieldException e) {
        // not declared here; keep walking up the class hierarchy
    }
    clazz = clazz.getSuperclass();
}
if (classLoaderClassesField == null) {
    return; // field not found on this Tomcat version
}
classLoaderClassesField.setAccessible(true);
List<?> classes = new ArrayList<Object>((Vector<?>) classLoaderClassesField.get(classLoader));
for (Object o : classes) {
    Class<?> c = (Class<?>) o;
    // Make sure you identify only the packages that are holding references to the classloader.
    // Allowing this code to clear all static references will result in all sorts
    // of horrible things (like java segfaulting).
    if (c.getName().startsWith("com.whatever")) {
        // Kill any static references within all these classes.
        for (Field f : c.getDeclaredFields()) {
            if (Modifier.isStatic(f.getModifiers())
                    && !Modifier.isFinal(f.getModifiers())
                    && !f.getType().isPrimitive()) {
                try {
                    f.setAccessible(true);
                    f.set(null, null);
                } catch (Exception exception) {
                    // Log the exception
                }
            }
        }
    }
}
classes.clear();
The java.lang.OutOfMemoryError: PermGen space message indicates that the Permanent Generation’s area in memory is exhausted.
Any Java application is allowed to use only a limited amount of memory. The exact amount of memory your particular application can use is specified during application startup.
Java memory is separated into different regions.
Metaspace: A new memory space is born
The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata, called Metaspace, similar to the Oracle JRockit and IBM JVMs.
The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems and no need to tune and monitor this memory space anymore, once you are on Java 8 or higher.
Alternatively, you can switch to JRockit, which handles PermGen differently from Sun's JVM. It generally has better performance as well.
http://www.oracle.com/technetwork/middleware/jrockit/overview/index.html
1) Increasing the PermGen Memory Size
The first thing one can do is make the size of the permanent generation heap space bigger. This cannot be done with the usual -Xms (set initial heap size) and -Xmx (set maximum heap size) JVM arguments, since, as mentioned, the permanent generation heap space is entirely separate from the regular Java heap space, and those arguments set the space for the regular Java heap. However, there are similar arguments which can be used (at least with the Sun/OpenJDK JVMs) to make the size of the permanent generation heap bigger:
-XX:MaxPermSize=128m
Default is 64m.
2) Enable Sweeping
Another way to take care of that for good is to allow classes to be unloaded so your PermGen never runs out:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
Stuff like that worked magic for me in the past. One thing, though: there is a significant performance trade-off in using those, since PermGen sweeps add something like an extra two GC passes for every request you make. You'll need to balance your use against the trade-offs.
You can find the details of this error.
http://faisalbhagat.blogspot.com/2014/09/java-outofmemoryerror-permgen.html
I had the problem we are talking about here; my scenario was Eclipse Helios + Tomcat + JSF, and what I was doing was deploying a simple application to Tomcat. I was seeing the same problem and solved it as follows.
In Eclipse, go to the Servers tab and double-click on the registered server (in my case Tomcat 7.0); this opens the server's general registration information. In the "General Information" section, click on the link "Open launch configuration"; this opens the server's execution options. In the Arguments tab, add these two entries at the end of the VM arguments:
-XX:MaxPermSize=512m
-XX:PermSize=512m
and you're done.
The simplest answer these days is to use Java 8.
It no longer reserves memory exclusively for PermGen space, allowing the PermGen memory to co-mingle with the regular memory pool.
Keep in mind that you will have to remove all non-standard -XXPermGen...=... JVM startup parameters if you don't want Java 8 to complain that they don't do anything.
Open tomcat7w from Tomcat's bin directory or type Monitor Tomcat in start menu
(a tabbed window opens with various service information).
In the Java Options text area append this line:
-XX:MaxPermSize=128m
Set Initial Memory Pool to 1024 (optional).
Set Maximum Memory Pool to 1024 (optional).
Click Ok.
Restart the Tomcat service.
The PermGen space error occurs when the application loads more classes than fit in the space the JVM has set aside for them.
The best solution for this problem on UNIX operating systems is to change some configuration in your bash file. The following steps solve the problem.
Run the command gedit .bashrc in a terminal.
Create the JAVA_OPTS variable with the following value:
export JAVA_OPTS="-XX:PermSize=256m -XX:MaxPermSize=512m"
Save the bash file. Run the command exec bash in the terminal. Restart the server.
I hope this approach works for your problem. This issue sometimes occurs with Java versions lower than 8, but with Java 8 it never occurs, since PermGen was removed.
Increasing the Permanent Generation size or tweaking GC parameters will NOT help if you have a real memory leak. If your application, or some 3rd-party library it uses, leaks class loaders, the only real and permanent solution is to find this leak and fix it. There are a number of tools that can help you; one of the most recent is Plumbr, which has just released a new version with the required capabilities.
Also if you are using log4j in your webapp, check this paragraph in log4j documentation.
It seems that if you are using PropertyConfigurator.configureAndWatch("log4j.properties"), you cause memory leaks when you undeploy your webapp.
I have a combination of Hibernate+Eclipse RCP, tried using -XX:MaxPermSize=512m and -XX:PermSize=512m and it seems to be working for me.
Set -XX:PermSize=64m -XX:MaxPermSize=128m. Later on you may also try increasing MaxPermSize. Hopefully it'll work; the same works for me. Setting only MaxPermSize didn't work for me.
I tried several answers, and the only thing that finally did the job was this configuration for the compiler plugin in the pom:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.3.2</version>
    <configuration>
        <fork>true</fork>
        <meminitial>128m</meminitial>
        <maxmem>512m</maxmem>
        <source>1.6</source>
        <target>1.6</target>
        <!-- prevent PermGen space out of memory exception -->
        <!-- <argLine>-Xmx512m -XX:MaxPermSize=512m</argLine> -->
    </configuration>
</plugin>
hope this one helps.
jrockit resolved this for me as well; however, I noticed that the servlet restart times were much worse, so while it was better in production, it was kind of a drag in development.
The configuration of the memory depends on the nature of your app.
What are you doing?
What's the amount of transactions processed?
How much data are you loading?
etc.
etc.
etc
Probably you could profile your app and start cleaning up some modules from your app.
Apparently this can occur after redeploying an application a few times
Tomcat has hot deploy but it consumes memory. Try restarting your container once in a while. Also you will need to know the amount of memory needed to run in production mode, this seems a good time for that research.
They say that the latest revision of Tomcat (6.0.28 or 6.0.29) handles the task of redeploying servlets much better.
I ran into exactly the same problem, but unfortunately none of the suggested solutions really worked for me. The problem did not happen during deployment, and I was not doing any hot deployments either.
In my case the problem occurred every time at the same point during the execution of my web-application, while connecting (via hibernate) to the database.
This link (also mentioned earlier) provided enough insight to resolve the problem. Moving the JDBC (MySQL) driver out of WEB-INF and into the jre/lib/ext/ folder seems to have solved the problem. This is not the ideal solution, since upgrading to a newer JRE would require you to reinstall the driver.
Another candidate that could cause similar problems is log4j, so you might want to move that one as well.
First step in such case is to check whether the GC is allowed to unload classes from PermGen. The standard JVM is rather conservative in this regard – classes are born to live forever. So once loaded, classes stay in memory even if no code is using them anymore. This can become a problem when the application creates lots of classes dynamically and the generated classes are not needed for longer periods. In such a case, allowing the JVM to unload class definitions can be helpful. This can be achieved by adding just one configuration parameter to your startup scripts:
-XX:+CMSClassUnloadingEnabled
By default this is set to false, so to enable it you need to explicitly set the following option in the Java options. If you enable CMSClassUnloadingEnabled, the GC will sweep PermGen too and remove classes which are no longer used. Keep in mind that this option works only when UseConcMarkSweepGC is also enabled using the option below. So when running ParallelGC or, God forbid, the Serial GC, make sure you set your GC to CMS by specifying:
-XX:+UseConcMarkSweepGC
Assigning Tomcat more memory is NOT the proper solution.
The correct solution is to do a cleanup after the context is destroyed and recreated (the hot deploy). The solution is to stop the memory leaks.
If your Tomcat/webapp server is telling you that it failed to unregister (JDBC) drivers, then unregister them yourself. This will stop the memory leaks.
You can create a ServletContextListener and configure it in your web.xml. Here is a sample ServletContextListener:
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.log4j.Logger;
import com.mysql.jdbc.AbandonedConnectionCleanupThread;
/**
 *
 * @author alejandro.tkachuk / calculistik.com
 *
 */
public class AppContextListener implements ServletContextListener {

    private static final Logger logger = Logger.getLogger(AppContextListener.class);

    @Override
    public void contextInitialized(ServletContextEvent arg0) {
        logger.info("AppContextListener started");
    }

    @Override
    public void contextDestroyed(ServletContextEvent arg0) {
        logger.info("AppContextListener destroyed");
        // manually unregister the JDBC drivers
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            try {
                DriverManager.deregisterDriver(driver);
                logger.info(String.format("Unregistering jdbc driver: %s", driver));
            } catch (SQLException e) {
                logger.info(String.format("Error unregistering driver %s", driver), e);
            }
        }
        // manually shut down clean-up threads
        try {
            AbandonedConnectionCleanupThread.shutdown();
            logger.info("Shutting down AbandonedConnectionCleanupThread");
        } catch (InterruptedException e) {
            logger.warn("SEVERE problem shutting down AbandonedConnectionCleanupThread: ", e);
            e.printStackTrace();
        }
    }
}
And here you configure it in your web.xml:
<listener>
    <listener-class>
        com.calculistik.mediweb.context.AppContextListener
    </listener-class>
</listener>
"They" are wrong because I'm running 6.0.29 and have the same problem even after setting all of the options. As Tim Howland said above, these options only put off the inevitable. They allow me to redeploy 3 times before hitting the error instead of every time I redeploy.
In case you are getting this in the Eclipse IDE even after setting the parameters
--launcher.XXMaxPermSize, -XX:MaxPermSize, etc., it is most likely that Eclipse is using a buggy version of the JRE which was installed by some third-party application and set as the default. These buggy versions do not pick up the PermSize parameters, so no matter what you set, you will still keep getting these memory errors. So, in your eclipse.ini add the following parameters:
-vm <path to the right JRE directory>/<name of javaw executable>
Also make sure you set the default JRE in the Eclipse preferences to the correct version of Java.
The only way that worked for me was with the JRockit JVM. I have MyEclipse 8.6.
The JVM's heap stores all the objects generated by a running Java program. Java uses the new operator to create objects, and memory for new objects is allocated on the heap at run time. Garbage collection is the mechanism of automatically freeing up the memory contained by the objects that are no longer referenced by the program.
I was having a similar issue.
Mine is a JDK 7 + Maven 3.0.2 + Struts 2.0 + Google Guice dependency injection based project.
Whenever I tried running the mvn clean package command, it showed the following error and the build failed:
org.apache.maven.surefire.util.SurefireReflectionException: java.lang.reflect.InvocationTargetException; nested exception is java.lang.reflect.InvocationTargetException: null
java.lang.reflect.InvocationTargetException
Caused by: java.lang.OutOfMemoryError: PermGen space
I tried all the useful tips and tricks above, but unfortunately none worked for me. What worked for me is described step by step below:
Go to your pom.xml.
Search for <artifactId>maven-surefire-plugin</artifactId>.
Add a new <configuration> element and then an <argLine> sub-element in which you pass -Xmx512m -XX:MaxPermSize=256m, as shown below:
<configuration>
    <argLine>-Xmx512m -XX:MaxPermSize=256m</argLine>
</configuration>
Hope it helps, happy programming :)

java.lang.OutOfMemoryError: PermGen space solution

There are similar questions, but none answers my concern.
Here it says: "One hack to get around this problem is to have the JDBC driver loaded by the common class loader instead of the application class loader; you can do this by moving the driver's jar into Tomcat's lib directory instead of bundling it in the web application's war file."
I did not understand what it means to load by the common class loader and how it is different from the application class loader.
This means that the ClassLoader loading the JDBC driver class is the class loader of your application server, which is a parent of your application class loader. Therefore, the driver is available to every application on your server and is not reloaded on every restart of your application (which can lead to PermGen trouble if you are not unregistering it properly).
Every time you deploy an application and load a class from it (to use it), the class is loaded by the application class loader; the more applications, the more copies of the "same" classes are loaded. If you use Tomcat's "common" class loader, the class is loaded only once per Tomcat installation.
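The delegation chain described above can be seen directly from Java. This is a sketch with illustrative names; each class loader delegates to its parent, and classes loaded higher in the chain (like Tomcat's common loader, or the JDK's bootstrap loader) are shared by everything below:

```java
// Sketch: walk the classloader hierarchy from this class up to the bootstrap
// loader. A class loaded by a parent loader is visible to all child loaders
// and is never reloaded when a single webapp is redeployed.
public class LoaderHierarchy {
    public static void main(String[] args) {
        ClassLoader cl = LoaderHierarchy.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);
            cl = cl.getParent();  // app/webapp -> common/system -> ... -> bootstrap
        }
        // JDK core classes come from the bootstrap loader, reported as null:
        System.out.println(String.class.getClassLoader());
    }
}
```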
OutOfMemoryError: PermGen space is usually only a problem if you're using the hot redeploy feature of Tomcat. It can also occur if you simply have a very large number of classes being used in your deployment.
Increasing the amount of PermGen available in the VM will solve the large number of classes problem. That can be done by adding -XX:MaxPermSize=128m or -XX:MaxPermSize=256m to the environment variable JAVA_OPTS or CATALINA_OPTS (this can usually be done in Tomcat launch script). If you are launching Tomcat directly you can export these environment variables in your shell.
Unfortunately this doesn't completely solve the redeploy issue; it only makes it so you can redeploy more times before running out of PermGen. To fix the issue you'll need to make sure your web app unloads correctly and completely. This involves making sure all threads started by your webapp stop, and that loaded JDBC drivers are unregistered properly, among other things. The other way to solve this is to not use hot redeploy and to restart Tomcat when making changes to the application.
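The "make sure all threads stop" part can be sketched with a plain ExecutorService (names here are illustrative; in a real webapp this shutdown would run from a ServletContextListener's contextDestroyed method):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: the shutdown pattern a webapp should run on undeploy. Any thread the
// app started keeps the webapp classloader reachable (via its context
// classloader), so it must be stopped before undeploy completes.
public class WorkerShutdown {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    public static void stopWorkers() throws InterruptedException {
        POOL.shutdown();                                   // stop accepting new tasks
        if (!POOL.awaitTermination(5, TimeUnit.SECONDS)) {
            POOL.shutdownNow();                            // interrupt stragglers
        }
    }

    public static boolean isStopped() {
        return POOL.isTerminated();
    }

    public static void main(String[] args) throws InterruptedException {
        POOL.submit(() -> System.out.println("working"));
        stopWorkers();
        System.out.println("pool terminated: " + isStopped());
    }
}
```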

Annoying tomcat warning about ThreadLocal [duplicate]

When I redeploy my application in tomcat, I get the following issue:
The web application [] created a ThreadLocal with key of type
[java.lang.ThreadLocal] (value [java.lang.ThreadLocal#10d16b])
and a value of type [com.sun.xml.bind.v2.runtime.property.SingleElementLeafProperty]
(value [com.sun.xml.bind.v2.runtime.property.SingleElementLeafProperty#1a183d2]) but
failed to remove it when the web application was stopped.
This is very likely to create a memory leak.
Also, am using ehcache in my application. This also seems to result in the following exception.
SEVERE: The web application [] created a ThreadLocal with key of type [null]
(value [com.sun.xml.bind.v2.ClassFactory$1#24cdc7]) and a value of type [java
.util.WeakHashMap...
The ehcache seems to create a weak hash map and I get the message that this is very likely to create a memory leak.
I searched over the net and found this: http://jira.pentaho.com/browse/PRD-3616, but I don't have access to the server as such.
Please let me know whether these warnings have any functional impact or whether they can be ignored. I used the "Find Memory leaks" option in the Tomcat manager and it says "No memory leaks found".
When you redeploy your application, Tomcat creates a new class loader. The old class loader must be garbage collected, otherwise you get a permgen memory leak.
Tomcat cannot check if the garbage collection will work or not, but it knows about several common points of failures. If the webapp class loader sets a ThreadLocal with an instance whose class was loaded by the webapp class loader itself, the servlet thread holds a reference to that instance. This means that the class loader will not be garbage collected.
Tomcat does a number of such detections; see here for more information. Cleaning thread locals is difficult: you would have to call remove() on the ThreadLocal in each of the threads it was accessed from. In practice this is only important during development, when you redeploy your web app multiple times. In production you probably do not redeploy, so this can be ignored.
To really find out which instances define the thread locals, you have to use a profiler. For example the heap walker in JProfiler (disclaimer: my company develops JProfiler) will help you to find those thread locals. Select the reported value class (com.sun.xml.bind.v2.runtime.property.SingleElementLeafProperty or com.sun.xml.bind.v2.ClassFactory) and show the cumulated incoming references. One of those will be a java.lang.ThreadLocal$ThreadLocalMap$Entry. Select the referenced objects for that incoming reference type and switch to the allocations view. You will see where the instance has been allocated. With that information you can decide whether you can do something about it or not.
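The remove() pattern mentioned above looks like this as a minimal sketch (all names are illustrative; in a webapp the try/finally would live in a Filter or a ServletRequestListener so pooled container threads never retain webapp-loaded values across requests):

```java
// Sketch: clear a ThreadLocal on the same thread that set it, so the worker
// thread holds no reference to webapp-loaded classes after the request ends.
public class ThreadLocalCleanup {
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    static String handleRequest(String payload) {
        try {
            return BUFFER.get().append(payload).toString();
        } finally {
            BUFFER.remove();  // drop the value; next get() starts fresh
        }
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("one"));
        System.out.println(handleRequest("two"));  // fresh buffer, not "onetwo"
    }
}
```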
Mattias Jiderhamn has an excellent 6-part article that explains very clearly the theory and practice of classloader leaks. Even better, he has also released a jar file that we can include in our war files. I tried it on my web apps, and the jar file worked like a charm! The jar file is called classloader-leak-prevention.jar. Using it is as simple as adding this to our web.xml:
<listener>
  <listener-class>se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor</listener-class>
</listener>
and then adding this to our pom.xml:
<dependency>
  <groupId>se.jiderhamn</groupId>
  <artifactId>classloader-leak-prevention</artifactId>
  <version>1.15.2</version>
</dependency>
For more information, please refer to the project home page hosted on GitHub, or Part 6 of his article.
Creating Threads without cleaning them up correctly will eventually run you out of memory - been there, done that.
For those still wondering about a quick solution/workaround:
If you are running standalone Tomcat, kill javaw.exe or the process hosting it.
If running from Eclipse, kill eclipse.exe and java.exe, or the enclosing process.
If it is still not resolved, check the Task Manager: the process causing this will likely be shown with the highest memory usage. Do your analysis and kill that process.
You should then be good to redeploy the stuff and proceed without memory issues.
I guess you have probably seen this, but just in case: the Ehcache documentation recommends putting the library in Tomcat's lib directory and not in WEB-INF/lib.
I recommend initializing thread locals in a ServletRequestListener.
ServletRequestListener has two methods: one for request initialization and one for destruction.
This way you can clean up your ThreadLocal. Example:
import javax.servlet.ServletRequestEvent;
import javax.servlet.ServletRequestListener;

public class ContextInitiator implements ServletRequestListener {

    private ThreadLocal<ContextThreadLocal> context;

    @Override
    public void requestInitialized(ServletRequestEvent sre) {
        context = new ThreadLocal<ContextThreadLocal>() {
            @Override
            protected ContextThreadLocal initialValue() {
                return new ContextThreadLocal();
            }
        };
        context.get().setRequest(sre.getServletRequest());
    }

    @Override
    public void requestDestroyed(ServletRequestEvent sre) {
        // remove() runs on the request thread, so the entry is cleared
        // before the thread goes back to the pool
        context.remove();
    }
}
web.xml:
<listener>
  <listener-class>ContextInitiator</listener-class>
</listener>

How to catch OutOfMemory errors on Amazon EBS (Elastic Beanstalk)

Here's a tricky one for you: we have a Java web application deployed on Tomcat web servers on Amazon Elastic Beanstalk, and we believe we have a memory leak, because the JVM seems to crash every night with an OutOfMemoryError.
The problem is that after the crash, EBS automatically scraps the old EC2 instance and starts a fresh one, so all the logs and info get scrapped too...
I am now developing a custom CloudWatch metric to monitor the memory of the JVM (you would think there would be a prepared one...), but that won't help me generate heap dumps.
Has anyone gone through a similar problem and knows how to catch these errors on EBS?
This certainly sounds like unusual EC2 (not EBS) instance behaviour. It's interesting that if Tomcat falls over, the machine instance gets affected (in terms of stopping or terminating).
This is what I would suggest to diagnose:
get a running instance ready to examine / play with
take a look at "Termination Protection": is it set to "enabled" or not? That could explain the "scrapping" part of your problem (if by scrapping you mean the instance terminates and is removed). You can find this in the properties of your EC2 instance in the AWS console.
take a look at the Java memory settings your Tomcat server is configured with. Perhaps the maximum heap (-Xmx) is bigger than the virtual machine has!? If so, Tomcat may literally be running the machine out of memory, which could explain some of the EC2 response to your out-of-memory condition. I assume you mean "stopped" rather than "scrapped"; otherwise how would you know you are getting an out-of-memory error?
if you manually kill the tomcat/java process on a working instance, does the instance stay operational (or do you get booted off and the instance gets stopped)? If something happens simply because you stop Tomcat, it means some monitoring process is kicking in and taking the machine down explicitly.
use -XX:+HeapDumpOnOutOfMemoryError (note the plus sign; a minus in that position disables the flag) together with -XX:HeapDumpPath to produce a dump file. This will help you work out where your leak is and hopefully fix the root cause.
Good luck. Hope that helps.
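The heap-dump flags from the last point can be wired into Tomcat's startup. A minimal sketch, assuming a standard Tomcat layout where bin/setenv.sh is sourced by catalina.sh; the dump path is a hypothetical example and should point somewhere you can retrieve before Beanstalk replaces the instance:

```shell
# bin/setenv.sh -- sourced by catalina.sh at startup (assumed Tomcat layout).
# Dump the heap on OutOfMemoryError and write it to a known path so it can
# be shipped off-instance before the machine is replaced.
CATALINA_OPTS="${CATALINA_OPTS:-} -XX:+HeapDumpOnOutOfMemoryError"
CATALINA_OPTS="$CATALINA_OPTS -XX:HeapDumpPath=/var/log/tomcat-dumps"
export CATALINA_OPTS
```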
Consider a log collection service like Sumo Logic. The log files you specify are collected and made available for analysis online, so even if your EC2 instances get replaced you can do forensics to see what happened to them.
