JBoss not releasing memory - java

I am running a Java application in JBoss which periodically undertakes a memory-intensive task. The amount of memory used by org.jboss.Main rises during this time, but does not fall once the task is completed.
I've been profiling the process using VisualVM and have forced a garbage collection. The memory used by org.jboss.Main remains at about 1GB. The basic info of the heap dump is as follows.
Total bytes: 53,795,695
Total classes: 5,696
Total instances: 732,292
Classloaders: 324
GC roots: 2,540
Number of objects pending for finalization: 0
I've also looked at which classes are using the memory and found that char[] accounts for most of it.
Could anyone tell me why VisualVM shows about 20 times less memory being used by JBoss than my activity monitor does? Any advice or pointers to further reading on how to get JBoss to release memory after the job completes would be greatly appreciated.

VisualVM is showing the size of the heap after a full GC. This is the amount of memory you can use before the heap has to grow.
The heap dump is showing how much of that heap is actually used at present.
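To see the same distinction from inside the JVM, you can compare the used, committed, and max heap figures that the standard java.lang.management API reports. This is only a minimal sketch (the class name is arbitrary):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal sketch: print the heap figures that VisualVM and a heap dump
// are each reporting in a different way.
public class HeapFigures {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("used:      %,d bytes%n", heap.getUsed());      // roughly what the heap dump totals
        System.out.printf("committed: %,d bytes%n", heap.getCommitted()); // the current heap size the OS sees
        System.out.printf("max:       %,d bytes%n", heap.getMax());       // the -Xmx ceiling (-1 if undefined)
    }
}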

Related

Java Heap Dump: How to find the objects/classes that are taking memory via 1. io.netty.buffer.ByteBufUtil 2. byte[] array

I found that the memory (RAM consumption) of one of my Spring Boot projects is increasing day by day. When I uploaded the jar file to the AWS server, it was taking 582 MB of RAM (the maximum allocated RAM is 1500 MB), but the RAM usage increases by 50 MB to 100 MB each day, and today, after 5 days, it's taking 835 MB. Right now the project has 100-150 users with normal usage of the REST APIs.
Because of this increase in RAM usage, the application has gone down a couple of times with the following error (found in the logs):
Exception in thread "http-nio-3384-ClientPoller" java.lang.OutOfMemoryError: Java heap space
To resolve this, I found that a Java heap dump can show me the objects/classes that are taking the memory. So, using jmap on the command line, I created a heap dump and uploaded it to Heap Hero and the Eclipse Memory Analyzer Tool. In both of them I found the following:
1. Total wasted memory is 64.69 MB (73%).
2. Out of this, 34.06 MB is taken by byte[] arrays and LinkedHashMap[], which I have never used anywhere in my project. I searched for them in my project but found nothing.
3. The following 2 large objects take 32 MB and 20 MB respectively:
1. Java Static io.netty.buffer.ByteBufUtil.DEFAULT_ALLOCATOR
2. Java Static com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.connectionFinalizerPhantomRefs
So I tried to find this netty.buffer in my project, but I can't find anything that matches netty or buffer.
Now my question is: how can I fix this memory leak, or how can I find the exact objects/classes/variables that are consuming the memory, so that I can reduce the heap usage?
I know a few experts will ask for the source code or something similar, but I believe that from the heap dump we can find the memory leak or the live objects that remain in memory. I am looking for that option, or anything else that reduces this heap usage!
I have been working on this issue for the past 3 weeks. Any help would be appreciated.
Thank you!
Start by enabling the JVM native memory tracker to get an idea of which part of the memory is increasing, by adding the flag -XX:NativeMemoryTracking=summary. There is some performance overhead according to the documentation (5-10%), but if this isn't an issue I would recommend running the JVM with this flag enabled even in production.
Then you can check the values using jcmd <PID> VM.native_memory (there's a good writeup in this answer: Java native memory usage)
If there is indeed a big chunk of native memory allocated, it's likely this is allocated by Netty.
How do you run your application in AWS? If it's running in a Docker image, you might have stumbled upon this issue: What would cause a java process to greatly exceed the Xmx or Xss limit?
If this is the case, you may need to set the environment variable MALLOC_ARENA_MAX if your application is using native memory (which Netty does) and running on a server with a large number of cores. It's perfectly possible that the JVM allocates this memory for Netty but doesn't see any reason to release it, so it will appear to only continue to grow.
If you want to control how much native memory can be allocated by Netty, you can use the JVM flag -XX:MaxDirectMemorySize for this (I believe the default is the same value as -Xmx) and lower it in case your application doesn't require that much direct memory.
JVM memory tuning is a complex process, and it becomes even more complex when native memory is involved. As the linked answer shows, it's not as easy as simply setting the -Xms and -Xmx flags and expecting that no more memory will be used.
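As a rough illustration only (the jar name and PID are placeholders, and the flag values depend entirely on your workload), the combination discussed above might look like this:
# Start the JVM with native memory tracking and a cap on direct (off-heap) memory;
# MALLOC_ARENA_MAX=2 is only a commonly used starting point for the glibc arena issue.
MALLOC_ARENA_MAX=2 java -XX:NativeMemoryTracking=summary -XX:MaxDirectMemorySize=256m -Xmx1024m -jar app.jar
# Later, inspect where the native memory is going
jcmd <PID> VM.native_memory summary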
A heap dump alone is not enough to detect memory leaks.
You need to look at the difference between two consecutive heap snapshots, both taken after forcing a GC.
Or you need a profiling tool that can give the generation count for each class.
Then you should only look at your domain objects (not generic objects like byte arrays or strings, etc.) that survived the GC and are present in both the old snapshot and the new one.
Or, if using a profiling tool, look for old domain objects that are still alive and keep growing across many generations.
Having objects that live for many generations and keep growing means those objects are still referenced and the GC is not able to reclaim them. However, living for many generations alone is not enough to indicate a leak, because cached or static objects may stay around for many generations. The other important factor is that they keep growing.
After you have detected which objects are being leaked, you can use the heap dump to analyse those objects and find what references them.
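If you can trigger a small diagnostic hook inside the running service (for example from a temporary admin endpoint, which is an assumption about your setup, not something the question describes), a sketch like this produces two comparable snapshots using the standard com.sun.management API; the file naming is arbitrary:
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Sketch: force a GC, then write a heap dump of live objects only,
// so two dumps taken some hours apart can be diffed in MAT or Heap Hero.
public class Snapshot {
    public static void takeSnapshot() throws Exception {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.gc(); // best-effort full GC before the snapshot
        // live = true keeps only reachable objects in the dump
        diag.dumpHeap("snapshot-" + System.currentTimeMillis() + ".hprof", true);
    }

    public static void main(String[] args) throws Exception {
        takeSnapshot();
    }
}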

java process consumes more memory over time but no memory leak [duplicate]

This question already has answers here:
Java using much more memory than heap size (or size correctly Docker memory limit)
Growing Resident Size Set in JVM
My Java service is running on a host with 16 GB of RAM, with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service consumes more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using about 9GB+. It consumes about 100MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared the two dumps, but there was not much difference in the heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I set -Xms and -Xmx to 8GB. The host has 16GB of RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing total memory usage increasing from 1.1GB to 1.3GB. That's not actually an indication of a problem per se. Certainly, the JVM is not using anywhere near as much memory as you have said it can use.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and class metadata (in "Metaspace") and JIT-compiled native code (in the code cache)
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
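To see how much of this the JVM itself reports, you can list its memory pools with the standard java.lang.management API. A minimal sketch (it shows Metaspace, the code cache and similar pools, but not OS-level usage such as thread stacks or memory-mapped files):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

// Sketch: print heap and non-heap pools side by side to make the
// "Java heap vs. everything else" split visible.
public class Pools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-8s %-30s used=%,12d committed=%,12d%n",
                    pool.getType() == MemoryType.HEAP ? "HEAP" : "NON-HEAP",
                    pool.getName(),
                    pool.getUsage().getUsed(),
                    pool.getUsage().getCommitted());
        }
    }
}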
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at how much space is (now) free as a ratio of the space used. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warmup" effects.
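Exactly when the heap grows (and whether it ever shrinks back) is collector-dependent, but the classic free-ratio flags give a feel for the mechanism; the values below are purely illustrative and the jar name is a placeholder:
# Grow the heap when less than 30% is free after a GC, and allow shrinking when
# more than 60% is free (whether memory is actually returned to the OS depends
# on the collector in use).
java -Xms2g -Xmx8g -XX:MinHeapFreeRatio=30 -XX:MaxHeapFreeRatio=60 -jar service.jar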
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links1 and evicting cached data. Hence, not a memory leak ... hypothetically.
1 - Or if you use SoftReferences, the GC will break them for you.
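For completeness, here is roughly what the SoftReference idea in the footnote looks like in code. This is a minimal, hypothetical sketch rather than a production cache:
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: values are held through SoftReferences, so the GC may clear them
// when the heap gets close to full instead of letting usage grow forever.
public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> map = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        V value = (ref == null) ? null : ref.get();
        if (value == null) {
            map.remove(key); // the referent was collected (or never cached); drop the stale entry
        }
        return value;
    }
}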

OutOfMemoryError heap dump

I have a java.lang.OutOfMemoryError: GC overhead limit exceeded.
The -XX:+HeapDumpOnOutOfMemoryError command line option was not set for my application.
I need a heap dump, but when I try to capture one with the jmap or jcmd tools, they do not respond:
jmap
D:\program>jmap -dump:live,format=b,file=d:\dump4.hprof 8280
Dumping heap to D:\dump4.hprof ...
jcmd
D:\program>jcmd 8280 GC.heap_dump d:\dump6.hprof
8280:
The processes do not complete, but dump files are created. When I open them with VisualVM, they load forever.
If I capture a heap dump of another application (e.g. VisualVM itself), the tools complete successfully and the dumps are created and can be opened.
Could you please explain why jmap and jcmd are not completing, and how I can capture a dump of an application that has thrown an OutOfMemoryError? The application is still working, but only a few threads are live.
One possibility is that the heap you are trying to dump is too large.
Please specify the heap size and the amount of RAM.
The error is not caused by your intended heap size being larger than the allocated heap size. It occurs when the JVM has spent too much time performing garbage collection and was only able to reclaim very little heap space. Your application has probably ended up using almost all of the heap, and the garbage collector has repeatedly spent too much time trying to clean it up and failed.
Your application's performance will also be comparatively slow, because the CPU is using its entire capacity for garbage collection and hence cannot perform any other tasks.
The following questions need to be addressed:
What are the objects in the application that occupy large portions of the heap?
In which parts of the source code are these objects being allocated?
You can also use graphical tools such as JConsole, which help to detect performance problems in the code, including java.lang.OutOfMemoryError.
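For the next run, it is usually easier to let the JVM write the dump itself the moment the error is thrown, instead of attaching jmap to an already-struggling process. Something along these lines, where the jar name and dump path are placeholders:
D:\program>java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=D:\dumps -jar yourapp.jar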

JVM heap not released

I am new to analyzing memory issues in Java, so pardon me if this question seems naive.
I have application running with following JVM parameters set:
-Xms3072m -Xmx3072m
-XX:MaxNewSize=1008m -XX:NewSize=1008m
-XX:PermSize=224m -XX:MaxPermSize=224m -XX:SurvivorRatio=6
I am using VisualVM to monitor the usage. Here is what I see:
The problem is that even when the application is not receiving any data to process, the used memory doesn't go down. When the application starts, the used space is low (around 1GB) but grows while the application is running, and then the used memory never goes down.
My question is: why doesn't the used heap memory go down even when no major processing is happening in the application, and what configuration can be set to correct it?
My understanding is that if the application is not doing any processing, the used heap should be low, and the available heap memory (or max heap) should remain the same (3GB in this case).
This is a totally normal trend. Even if you believe the application is idle, there are probably threads running tasks that create objects which become unreferenced once the tasks are done. Those objects are eligible for the next GC, but as long as there is no minor/major GC they take up more and more room in your heap, so usage goes up until a GC is triggered; then you get back to the normal heap size, and so on.
An abnormal trend would look the same, except that after a GC the used heap would be higher than it was just after the previous GC, which is not the case here.
Your real question is rather: what is my application doing when it is not receiving any data to process? For that, a thread dump should help: you can run jcmd to get the PID, then run jstack <pid> to get the thread dump.
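For example (12345 stands for your application's PID, and the output file name is arbitrary):
jcmd                          # with no arguments, lists the running JVMs and their PIDs
jstack 12345 > threads.txt    # dump all thread stacks to a file for inspection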
Here is an example of a typical trend in the case of a memory leak:
As you can see, the post-GC heap usage has increased between two GCs; the new baseline is higher than the previous one, which may be due to a memory leak.

Memory dump much smaller than available memory

I have a Tomcat application server that is configured to create a memory dump on OOM, and it is started with -Xmx1024M, so a gigabyte should be available to it.
Now I found one such dump and it contains only 260MB of unretained memory. How is it possible that the dump is so much smaller than the memory the server should have available?
PermGen space is managed independently of the heap and can run out even when there's plenty of free memory overall. Some web frameworks (especially JSF) are real hogs and will easily cause the default config to run out. It can be increased with -XX:MaxPermSize=###m.
Remember that the process's memory is constrained by the sum of heap and PermGen, so if you don't decrease the heap by the amount PermGen is increased, you can consume fewer total resources before you start getting the "cannot create native thread" OutOfMemoryError.
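As a rough illustration only (the values are arbitrary, and -XX:MaxPermSize applies to Java 7 and earlier, since Java 8 replaced PermGen with Metaspace), the change could go into Tomcat's bin/setenv.sh; note the heap lowered to compensate for the larger PermGen, per the point above:
export CATALINA_OPTS="-Xmx768m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError"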
Only information about the usage of allocated memory will be dumped to a file.
A heap dump isn't a binary image of your heap; it contains information about data types etc., and it may exceed your available memory.
Text (classic) Heapdump file format
