Memory overflow in org.hibernate.internal.SessionImpl - java

I have a WebSphere 8.0.0.7 application server with a Spring (3.2.1)/Hibernate (4.1.9) application installed.
After several weeks of continuous operation, the PROD stage failed due to a Java heap overflow.
Analysis of the PHD dominator tree shows large char sequences retained under org.hibernate.internal.SessionImpl.
Looking at the Hibernate sources, I can't really understand where those char sequences could come from.
The internet gave me several similar leaks for older versions of WebSphere, but they seem to be fixed in the version I use.
Can anyone help me understand the root cause?

Don't use MAT to analyze IBM dumps. IBM HeapAnalyzer shows a better tree, without the strange memory-consumer attributions.
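Beyond the tooling, one pattern worth ruling out (a hypothetical sketch, not necessarily your root cause): a Session that is kept open across a long unit of work holds every loaded entity in its first-level cache, and the String fields of those entities show up as char arrays dominated by SessionImpl. The entity and helper names below are made up for illustration.

import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Hypothetical batch job: Order, loadAllOrders and process are placeholders.
void runBatch(SessionFactory sessionFactory) {
    Session session = sessionFactory.openSession();
    try {
        int i = 0;
        for (Order order : loadAllOrders(session)) {
            process(order);              // every loaded Order stays in the session's first-level cache
            if (++i % 100 == 0) {
                session.flush();         // push pending changes to the database
                session.clear();         // detach processed entities so the GC can reclaim them
            }
        }
    } finally {
        session.close();
    }
}

If the dominator tree points at the persistence context of a session owned by a long-lived (e.g. singleton-scoped) bean, the fix is usually to shorten the session's lifetime rather than to tune the heap.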

Related

Direct buffer memory OutOfMemoryError after updating to wildfly 18

After updating the environment from Wildfly 13 to Wildfly 18.0.1 we experienced an OutOfMemoryError for direct buffer memory:
A channel event listener threw an exception: java.lang.OutOfMemoryError: Direct buffer memory
at java.base/java.nio.Bits.reserveMemory(Bits.java:175)
at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:118)
at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:317)
at org.jboss.xnio#3.7.3.Final//org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:57)
at org.jboss.xnio#3.7.3.Final//org.xnio.BufferAllocator$2.allocate(BufferAllocator.java:55)
at org.jboss.xnio#3.7.3.Final//org.xnio.ByteBufferSlicePool.allocateSlices(ByteBufferSlicePool.java:162)
at org.jboss.xnio#3.7.3.Final//org.xnio.ByteBufferSlicePool.allocate(ByteBufferSlicePool.java:149)
at io.undertow.core#2.0.27.Final//io.undertow.server.XnioByteBufferPool.allocate(XnioByteBufferPool.java:53)
at io.undertow.core#2.0.27.Final//io.undertow.server.protocol.http.HttpReadListener.handleEventWithNoRunningRequest(HttpReadListener.java:147)
at io.undertow.core#2.0.27.Final//io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:136)
at io.undertow.core#2.0.27.Final//io.undertow.server.protocol.http.HttpReadListener.handleEvent(HttpReadListener.java:59)
at org.jboss.xnio#3.7.3.Final//org.xnio.ChannelListeners.invokeChannelListener(ChannelListeners.java:92)
at org.jboss.xnio#3.7.3.Final//org.xnio.conduits.ReadReadyHandler$ChannelListenerHandler.readReady(ReadReadyHandler.java:66)
at org.jboss.xnio.nio#3.7.3.Final//org.xnio.nio.NioSocketConduit.handleReady(NioSocketConduit.java:89)
at org.jboss.xnio.nio#3.7.3.Final//org.xnio.nio.WorkerThread.run(WorkerThread.java:591)
Nothing was changed on the application side. I looked at the buffer pools and it seems that some resources are not freed. I triggered several manual GCs, but nearly nothing happened (uptime 2 h).
In the old configuration, by comparison, the buffer pools looked fine even at an uptime of >250 h.
I did a lot of research and the closest thing I could find is this post here on SO. However, that was in combination with WebSockets, and there are no WebSockets in use here.
I read several (good) articles (1,2,3,4,5,6) and watched this video about the topic.
I tried the following things, but none of them had any effect:
The OutOfMemoryError occurred at 5 GB because the heap is 5 GB (MaxDirectMemorySize defaults to it), so I reduced -XX:MaxDirectMemorySize to 512m and then 64m, but then the OOM just occurred sooner.
I set -Djdk.nio.maxCachedBufferSize=262144
I checked the number of I/O workers: 96 (6 CPUs * 16), which seems reasonable. The system usually has short-lived threads (the largest pool size was 13), so I guess it cannot be the number of workers.
I switched back to ParallelGC, since that was the default in Java 8. Now a manual GC frees at least 10 MB; with G1 nothing happens at all. But still, neither GC can clean up the direct memory (a short sketch right after the question illustrates why that is expected).
I removed the <websockets> from the wildfly configuration just to be sure
I tried to reproduce it locally but failed.
I analyzed the heap using Eclipse MAT and JXRay, but it just points to some internal WildFly classes.
I reverted Java back to version 8 and the system still shows the same behavior, so WildFly is the most probable suspect.
In Eclipse MAT one can also find these 1544 objects; they all have the same size.
The only thing that did work was to deactivate the direct byte buffers in WildFly completely:
/subsystem=io/buffer-pool=default:write-attribute(name=direct-buffers,value=false)
However, from what I read, this has a performance drawback?
So does anyone know what the problem is? Any hints for additional settings/tweaks? Or is there a known WildFly or JVM bug related to this?
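For context on why the manual GCs barely help: this is general JVM behaviour rather than anything WildFly-specific. Native memory backing a direct ByteBuffer counts against -XX:MaxDirectMemorySize (which defaults to roughly -Xmx) and is only released when the buffer object itself becomes unreachable and its cleaner runs during a GC. A minimal sketch:

import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Reserves 16 MB of native memory, counted against -XX:MaxDirectMemorySize.
        ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024);

        // The native memory is freed only when this ByteBuffer becomes unreachable
        // and a GC runs its cleaner. While a pool (such as XNIO's) still holds a
        // reference, System.gc() cannot release anything.
        buf = null;
        System.gc();
    }
}

So if the XNIO/Undertow buffer pool keeps its slices referenced, forcing GCs will not shrink the direct pool, which matches what you are seeing.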
Update 1: Regarding the I/O threads, maybe the concept is not 100% clear to me. There is the ioThreads value,
and there are the actual threads and thread pools.
From the definition one could think that the configured number of ioThreads (in my case 12) is created per worker? But still, the number of threads/workers seems quite low in my case...
Update 2: I downgraded Java and it still shows the same behavior, so I suspect WildFly to be the cause of the problem.
Probably it's an XNIO problem. Look at this issue: https://issues.redhat.com/browse/JBEAP-728
After lots of analysis, profiling, etc., I drew the following conclusions:
The OOM is caused by WildFly version 18.0.1. It also exists in 19.1.0 (I did not test 20 or 21).
I was able to trigger the OOM fairly quickly by setting -XX:MaxDirectMemorySize to values like 512m or lower. I think many people don't experience the problem because by default this value equals the -Xmx value, which can be quite big. The problem occurs when using the REST API of our application.
As Evgeny indicated, XNIO is a strong candidate, since profiling narrowed the problem down to (or near) that area...
I didn't have the time to investigate further, so I tried WildFly 22 and there it works. This version uses the latest XNIO package (3.8.4).
The direct memory remains quite low in WF 22, around 10 MB. One can see the count rising and falling, which wasn't the case before.
So the final fix is to update to WildFly version 22.0.1 (or higher).
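If you want to watch the direct pool from inside the JVM while verifying the fix, the standard BufferPoolMXBean (available since Java 7) reports the same count/used numbers you have been watching; a small sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectPoolMonitor {
    public static void main(String[] args) {
        // The "direct" pool reflects ByteBuffer.allocateDirect usage; "mapped" covers memory-mapped files.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d used=%d bytes capacity=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }
}

The same values are exposed over JMX (java.nio:type=BufferPool,name=direct), so they can also be graphed by whatever monitoring you already have attached.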

Permgen out of memory

Running Tomcat for an enterprise-level app. I've been getting "Permgen out of memory" messages.
I am running this on:
Windows 2008 R2 server,
Java 1.6_43,
Running Tomcat as a service.
No multiple deployments. The service is started and the app runs; eventually I get PermGen errors.
I can delay the errors by increasing the perm size; however, I'd like to actually fix the problem. The vendor is disowning the issue. I don't know if it is a memory leak, as the vendor simply says "it runs fine with JRockit". Of course, that would have been nice to have in the documentation, like three months ago. Plus, some posts suggest that JRockit just expands permspace to fit, up to 4 GB if you have the memory (not sure that is accurate...).
Anyway, I see some posts for a potential fix in Java 1.5 with the options
"-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"
However, these seem to have been deprecated in Java 1.6, and now the only GC that seems to be available is "-XX:+UseG1GC".
The best link I could find, anywhere, is:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html#G1Options
Does anyone know if the new G1 garbage collector includes the perm space, or am I missing an option or two in the new Java 6 GC settings that maybe I am not understanding?
Any help appreciated!
I wouldn't just increase the permgen space, as this error is usually a sign of something wrong in the software/setup. Is there a specific webapp that causes this? Without more info, I can only give basic advice.
1) Use the memory leak detector (Tomcat 6+) called Find Leaks
2) Turn off auto-deployment
3) Move JDBC drivers and logging software to the Java classpath instead of Tomcat's, per this blog entry
In earlier versions of Sun Java 1.6, the CMSPermGenSweepingEnabled option is functional only if UseConcMarkSweepGC is also set. See these answers:
CMSPermGenSweepingEnabled vs CMSClassUnloadingEnabled
What does JVM flag CMSClassUnloadingEnabled actually do?
I don't know if it's functional in later versions of 1.6 though.
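For reference, the combination usually quoted for Sun Java 1.6 (worth verifying against your exact build, since the sweeping flag is reported as a no-op or deprecated on some later updates) is:
-XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled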
A common cause for these errors/bugs in the past was dynamic class generation, particularly in libraries and frameworks that created dynamic proxies or used aspects. Subtle misuse of Spring and Hibernate (or, more specifically, cglib and/or AspectJ) was a common culprit. The underlying issue was that new dynamic classes were getting created on every request, eventually exhausting PermGen space. The CMSPermGenSweepingEnabled option was a common workaround/fix. Recent versions of those frameworks no longer have the problem.
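As a hedged illustration of that pattern (not the vendor's actual code; assumes cglib on the classpath), disabling cglib's class cache makes it define and load a brand-new proxy class on every call, and on a pre-Java-8 JVM those classes fill PermGen:

import java.lang.reflect.Method;
import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

public class PermGenLeakSketch {

    static Object newProxy(Class type) {
        Enhancer enhancer = new Enhancer();
        enhancer.setSuperclass(type);
        enhancer.setUseCache(false); // every call generates and loads a new proxy class
        enhancer.setCallback(new MethodInterceptor() {
            public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy) throws Throwable {
                return proxy.invokeSuper(obj, args); // pass-through interceptor
            }
        });
        return enhancer.create();
    }

    public static void main(String[] args) {
        // Doing this per request (instead of reusing one proxy class) slowly exhausts PermGen.
        for (int i = 0; i < 100000; i++) {
            newProxy(Object.class);
        }
    }
}

Caching the generated proxy class (or leaving useCache at its default of true) keeps the class count constant, which is what current framework versions do.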

Java memory leak while using Axis2 and WAS 7

I have a standalone application running in IBM WebSphere 7.0.0.19. It runs on Java 6 and we pack an Axis2 JAR in our EAR. We have 'parent last' class loading and we have disabled the Axis service that ships with WAS 7 by default.
Recently, after 6+ weeks of continuous operation, the application experienced an OOM. The perplexing point is that the application is deployed separately on two different machines, but only one machine went down; the second machine is still up.
We checked the OS and the server configuration (such as the classloader policy) using the WAS console, and they are similar on both machines.
When the application crashed, it generated a .phd file, which we analysed using the Eclipse Memory Analyzer Tool (MAT). The analysis is shown in the screenshot.
If I'm correct, the bootstrap class loader is repeatedly loading and holding on to references of AxisConfiguration, so GC is unable to collect them when it runs. But if that were the case, then both servers should have come down, yet only one server experienced an OOM. The memory allocated to the JVM is the same on both machines.
We are not sure whether the issue is with WAS 7 or with axis2-kernel-1.4.1.jar or with something else.
http://www.slideshare.net/leefs/axis2-client-memory-leak
https://issues.apache.org/jira/browse/AXIS2-3870
http://java.dzone.com/articles/12-year-old-bug-jdk-still-out
(Links may not refer to the current issue. But they are just pointers)
Has anyone experienced something similar ?
We saw memory growth and sockets left open on WebSphere 6.1 with Axis2 1.4 in the past. It's been a long time, but my notes suggest it might be worth considering an upgrade to at least Axis2 1.5.1 to fix this bug with the open sockets, and also making sure you're not creating new objects repeatedly where a singleton exists (e.g. the Service object).
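On the client side, a pattern that often causes this (a sketch under the assumption that you call Axis2 generated stubs; MyServiceStub and someOperation are made-up names) is creating a fresh ConfigurationContext per call and never cleaning up the transport. Sharing one context and releasing the connection after each call avoids both the AxisConfiguration build-up and the open sockets:

import org.apache.axis2.context.ConfigurationContext;
import org.apache.axis2.context.ConfigurationContextFactory;

public class AxisClientHolder {

    // One ConfigurationContext for the whole application; building one per call
    // re-creates the AxisConfiguration (and its deployed modules) every time.
    private static final ConfigurationContext CONTEXT;

    static {
        try {
            CONTEXT = ConfigurationContextFactory.createConfigurationContextFromFileSystem(null, null);
        } catch (Exception e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static String callService() throws Exception {
        // MyServiceStub is a hypothetical generated stub; replace with your own.
        MyServiceStub stub = new MyServiceStub(CONTEXT, "http://example.org/services/MyService");
        try {
            return stub.someOperation();
        } finally {
            // Releases the HTTP connection; forgetting this leaves sockets open.
            stub._getServiceClient().cleanupTransport();
        }
    }
}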

Tomcat 6 Web Application Eating Up Memory Over Time

I have a Grails application that is deployed on a Tomcat 6 server. The application runs fine for a while (a day or two), but slowly eats up more and more memory over time until it grinds to a halt and then hits the maximum heap size. Once I restart the container, everything is fine. I have been verifying this with the Grails JavaMelody plugin as well as the Application Info plugin, but I need help in determining what I should be looking for.
It sounds like an application leak, but to my knowledge there is no access to any unmanaged resources. Also, the Hibernate cache seems to be under control. It looks like if I run the garbage collector I get a decent chunk of memory back, but I don't know how to do that sustainably.
So:
How can I use these (or other) monitoring tools to figure out where the problem is?
Is there any other advice that could help me?
Thanks so much.
EDIT
I am using Grails 1.3.7 with the Quartz plugin.
You can use the VisualVM application in the Oracle JDK to attach to the running Tomcat instance (if you are already using the Oracle JVM) and inspect what goes on. The memory profiler can tell you quite a bit and point you in the right direction. You will most likely be looking either for objects that grow or for types of objects that get allocated more and more.
If you need more than the free VisualVM application can tell you, a commercial profiler may be useful.
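If you want to capture comparable snapshots over time without keeping a profiler attached, periodic heap dumps work well: on a Sun/Oracle JVM you can trigger one programmatically via the HotSpotDiagnostic MXBean (a small sketch; the output path is an assumption), or with jmap -dump:live,format=b,file=heap.hprof <pid> from the command line.

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {

    public static void dump(String path) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Standard ObjectName of the diagnostic bean on HotSpot JVMs.
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, "com.sun.management:type=HotSpotDiagnostic", HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, true); // true = dump only live objects (runs a GC first)
    }

    public static void main(String[] args) throws Exception {
        dump("/tmp/tomcat-heap.hprof"); // hypothetical output path
    }
}

Comparing two dumps taken a few hours apart in Eclipse MAT or VisualVM usually makes the growing object type obvious.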
Depending on your usage of Quartz, it may be directly related to a known memory leak in the Quartz plugin involving persistence and thread-locals. You may want to double-check and see if this applies to your situation.

Java profiler for IBM JVM 1.4.2 (WebSphere 6.0.2)

I'm looking for a Java profiler that works well with the JVM shipped with WebSphere 6.0.2 (IBM JVM 1.4.2). I use YourKit for my usual profiling needs, but it specifically refuses to work with this old JVM (I'm sure the authors had their reasons...).
Can anybody point to a decent profiler that can do the job? I'm not interested in a generic list of profilers, BTW; I've seen the other Stack Overflow thread, but I'd rather not try them one by one.
I would prefer a free version, if possible, since this is a one-off need (I hope!) and I would rather not pay for another profiler just for this.
Old post, but this may help someone. You can use IBM Health Center which is free. It can be downloaded standalone or as part of the IBM Support Assistant. I suggest downloading ISA since it has a ton of other useful tools such as Garbage Collection and Memory Visualizer and Memory Analyzer.
What are you looking to profile? Is it stuff in the JVM or the App Server? If it's the latter, there's loads of stuff in WAS 6 GUI to help with this. Assuming you really want to see stuff like the heap etc, then the IBM HeapAnalyzer might help. There are other tools listed off the bottom of this page.
Something else I've learned: ideally, you'll be able to connect your IDE's profiler to the running JVM. Some let you do this to a remote JVM as well as the local one you are developing on. Is the JVM you wish to profile live or remote? If so, you might have to force dumps and take them out of the live environment to examine at your leisure. Otherwise, set up something local and get the info from it that way.
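If you do end up forcing dumps from inside the live environment, one option on IBM SDKs is the com.ibm.jvm.Dump API (a sketch; whether it is present in your exact 1.4.2 build should be verified, and a kill -3 on the process or the WAS admin tooling are alternatives):

// IBM SDKs only; produces a javacore and a .phd heap dump for HeapAnalyzer/MAT.
public class ForceDumps {
    public static void main(String[] args) {
        com.ibm.jvm.Dump.JavaDump(); // javacore (thread dump)
        com.ibm.jvm.Dump.HeapDump(); // portable heap dump (.phd)
    }
}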
Update: I found out that JProfiler integrates smoothly with WAS 6.0.2 (IBM JDK 1.4).
