I have a Tomcat application server that is configured to create a memory dump on OOM, and it is started with -Xmx1024M, so a gigabyte should be available to it.
Now I found one such dump and it contains only 260MB of unretained memory. How is it possible that the dump is so much smaller than the heap the process should have available?
Permgen space is managed independently of the heap and can run out even when there's plenty of free memory overall. Some web frameworks (especially JSF) are real hogs and will easily cause the default configuration to run out. It can be increased with -XX:MaxPermSize=###m
Remember that whatever is left for native memory is constrained by the sum of heap and permgen, so if you don't decrease the heap by the amount you increase PermGen, you leave fewer total resources and will start getting the "cannot create native thread" OOM error sooner.
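For illustration only (the exact values depend on your application; for Tomcat these options usually go into CATALINA_OPTS or JAVA_OPTS), a combination that keeps the original 1GB budget could look like the sketch below. Note that on Java 8 and later PermGen was replaced by Metaspace, sized with -XX:MaxMetaspaceSize:

    # heap reduced so that heap + permgen stays within roughly the same overall budget
    export CATALINA_OPTS="-Xmx768m -XX:MaxPermSize=256m -XX:+HeapDumpOnOutOfMemoryError"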
Only information about the usage of the allocated memory will be dumped to the file.
A heap dump isn't a binary image of your heap; it contains information about data types and so on, and may even exceed your available memory.
See also: Text (classic) Heapdump file format
I found that the memory (RAM) consumption of one of my Spring Boot projects is increasing day by day. When I uploaded the jar file to the AWS server, it was taking 582 MB of RAM (the maximum allocated RAM is 1500 MB), but each day the RAM usage increases by 50 MB to 100 MB, and today, after 5 days, it's taking 835 MB. Right now the project has 100-150 users with normal usage of the REST APIs.
Because of this increase in RAM, the application has gone down a couple of times with the following error (found in the logs):
Exception in thread "http-nio-3384-ClientPoller" java.lang.OutOfMemoryError: Java heap space
So to resolve this, I found that with a Java heap dump I can find the objects/classes that are taking up the memory. So using jmap on the command line, I created a heap dump and uploaded it to Heap Hero and the Eclipse Memory Analyzer Tool. In both of them I found the following:
1. Total wasted memory is 64.69 MB (73%) (see the screenshot below).
2. Of this, 34.06 MB is taken by byte[] arrays and LinkedHashMap[] (see the screenshot below), which I have never used anywhere in my project. I searched for them in my project but found nothing.
3. The following two large objects take 32 MB and 20 MB respectively:
1. Java Static io.netty.buffer.ByteBufUtil.DEFAULT_ALLOCATOR
2. Java Static com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.connectionFinalizerPhantomRefs
So I tried to find this netty.buffer in my project, but I couldn't find anything that matches netty or buffer.
Now my question is: how can I fix this memory leak, or how can I find the exact objects/classes/variables that are consuming the memory, so that I can reduce the heap usage?
I know a few of the experts will ask for the source code or something similar, but I believe that from the heap dump we can find the memory leak or the live objects that stay in memory. I am looking for that option, or anything else that reduces this heap usage!
I am working on this issue for the past 3 weeks. Any help would be appreciated.
Thank you!
Start by enabling the JVM native memory tracker to get an idea of which part of the memory is increasing, by adding the flag -XX:NativeMemoryTracking=summary. There is some performance overhead according to the documentation (5-10%), but if this isn't an issue I would recommend running the JVM with this flag enabled even in production.
Then you can check the values using jcmd <PID> VM.native_memory (there's a good write-up in this answer: Java native memory usage).
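A minimal sketch of that workflow (app.jar stands in for your Spring Boot jar, and <PID> is a placeholder for the process id):

    # start the JVM with native memory tracking in summary mode
    java -XX:NativeMemoryTracking=summary -jar app.jar
    # inspect the current breakdown (heap, metaspace, threads, internal, ...)
    jcmd <PID> VM.native_memory summary
    # or record a baseline and diff against it later to see what is growing
    jcmd <PID> VM.native_memory baseline
    jcmd <PID> VM.native_memory summary.diff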
If there is indeed a big chunk of native memory allocated, it's likely this is allocated by Netty.
How do you run your application in AWS? If it's running in a Docker image, you might have stumbled upon this issue: What would cause a java process to greatly exceed the Xmx or Xss limit?
If this is the case, you may need to set the environment variable MALLOC_ARENA_MAX if your application uses native memory (which Netty does) and runs on a server with a large number of cores. It's perfectly possible that the JVM allocates this memory for Netty but doesn't see any reason to release it, so it will appear to only keep growing.
If you want to control how much native memory can be allocated by Netty, you can use the JVM flag -XX:MaxDirectMemorySize (I believe the default is the same value as -Xmx) and lower it in case your application doesn't require that much direct memory.
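A hedged sketch of both knobs together (the values are illustrative, not tuned recommendations; app.jar stands in for your jar):

    # limit the number of glibc malloc arenas, relevant on hosts with many cores
    export MALLOC_ARENA_MAX=2
    # cap the direct (off-heap) buffer memory that e.g. Netty can allocate
    java -Xmx1024m -XX:MaxDirectMemorySize=256m -jar app.jar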
JVM memory tuning is a complex process, and it becomes even more complex when native memory is involved; as the linked answer shows, it's not as simple as setting the Xms and Xmx flags and expecting that no more memory will be used.
A heap dump alone is not enough to detect memory leaks.
You need to look at the difference between two consecutive heap snapshots, both taken after triggering a GC.
Or you need a profiling tool that can give the generation count for each class.
Then you should only look at your domain objects (not generic objects like byte arrays or strings, etc.) that survived the GC and made it from the old snapshot into the new one.
Or, if using a profiling tool, look for old domain objects that are still alive and keep growing across many generations.
Having objects that live for many generations and keep growing means those objects are still referenced and the GC is not able to reclaim them. However, living for many generations alone is not enough to indicate a leak, because cached or static objects may stay around for many generations. The other important factor is that they keep growing.
Once you have detected which objects are being leaked, you can use the heap dump to analyse those objects and find what is holding references to them.
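A sketch of the two-snapshot approach with jmap (<PID> and the file names are placeholders); the live option forces a full GC before the dump, so only reachable objects end up in the snapshot:

    # first snapshot, after a forced GC
    jmap -dump:live,format=b,file=snapshot1.hprof <PID>
    # ... let the application run long enough for the suspected leak to grow ...
    jmap -dump:live,format=b,file=snapshot2.hprof <PID>
    # then compare the two histograms, e.g. with Eclipse MAT's compare feature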
My Java service is running on a 16 GB RAM host with -Xms and -Xmx set to 8GB.
The host is running a few other processes.
I noticed that my service consumes more memory over time.
I ran this command ps aux | awk '{print $6/1024 " MB\t\t" $11}' | sort -n on the host and recorded the memory usage by my java service.
When the service started, it used about 8GB of memory (as -Xms and -Xmx are set to 8GB), but after a week it was using about 9GB+. It consumes roughly 100MB more memory per day.
I took a heap dump, restarted my service, and took another heap dump. I compared those two dumps, but there was not much difference in heap usage. The dumps show that the service used about 1.3GB before the restart and about 1.1GB after the restart.
From the process memory usage, my service is consuming more memory over time, but that's not reported in the heap dump. How do I identify the increase in memory usage in my service?
I set -Xms and -Xmx to 8GB and the host has 16GB RAM. Did I set the min/max heap too high (50% of the total memory on the host)? Would that cause any issues?
OK, so you have told the JVM that it can use up to 8GB for the heap, and you are observing heap usage of roughly 1.1GB to 1.3GB in the dumps. That's not in itself an indication of a problem. Certainly, the JVM is not using anywhere near as much memory as you have said it can.
The second thing to note is that it is unclear how you are measuring memory usage. You should be aware that a JVM uses a lot of memory that is NOT Java heap memory. This includes:
The memory used by the java executable itself.
Memory used to hold native libraries.
Memory used to hold bytecodes and JIT compiled native code (in "metaspace")
Thread stacks
Off-heap memory allocations requested by (typically) native code.
Memory mapped files and shared memory segments.
Some of this usage is reported (if you use the right tools).
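For example (assuming a Linux host; <PID> is a placeholder, and the first command only works if the JVM was started with -XX:NativeMemoryTracking enabled), you can compare the JVM's own accounting with what the OS sees:

    # JVM-side breakdown: heap, metaspace, thread stacks, code cache, ...
    jcmd <PID> VM.native_memory summary
    # OS-side view: resident size of the individual mappings plus a total
    pmap -x <PID> | tail -n 5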
The third thing is that the actual memory used by the Java heap can vary a lot. The GC typically works by copying live objects from one "space" to another, so it needs a fair amount of free space to do this. Then, once it has finished a run, the GC looks at the ratio of free space to used space. If that ratio is too small, it requests more memory from the OS. As a consequence, there can be substantial step increases in total memory usage even though the actual usage (in non-garbage objects) is only increasing gradually. This is quite likely for a JVM that has only started recently, due to various "warm-up" effects.
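If you want to watch the heap regions grow and shrink over time, jstat is one option (a sketch; <PID> is a placeholder):

    # print heap capacities and usages every 5 seconds, 20 samples
    jstat -gc <PID> 5000 20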
Finally, the evidence you have presented does not say (to me!) that there is no memory leak. I think you need to take the heap dumps further apart. I would suggest taking one dump 2 hours after startup, and the second one 2 or more hours later. That would give you enough "leakage" to show up in a comparison of dumps.
From the process memory usage, my service is consuming more memory over time but that's not reported in the heap dump. How do I identify the increase in the memory usage in my service?
I don't think you need to do that. I think that the increase from 1.1GB to 1.3GB in overall memory usage is a red herring.
Instead, I think you should focus on the memory leak that the other evidence is pointing to. See my suggestion above.
Do I set the min/max heap too high (50% of the total memory on the host)? would that cause any issues?
Well ... a larger heap is going to have more pronounced performance degradation when the heap gets full. The flipside is that a larger heap means that it will take longer to fill up ... assuming that you have a memory leak ... which means it could take longer to diagnose the problem, or be sure that you have fixed it.
But the flipside of the flipside is that this might not be a memory leak at all. It could also be your application or a 3rd-party library caching things. A properly implemented cache could use a lot of memory, but if the heap gets too close to full, it should respond by breaking links [1] and evicting cached data. Hence, not a memory leak ... hypothetically.
[1] - Or if you use SoftReferences, the GC will break them for you.
Problem
We are trying to find the culprit of a big memory leak in our web application. We have pretty limited experience with finding memory leaks, but we found out how to make a Java heap dump using jmap and analyze it in Eclipse MAT.
However, with our application using 56/60GB memory, the heap dump is only 16GB in size and is even less in Eclipse MAT.
Context
Our server uses Wildfly 8.2.0 on Ubuntu 14.04 for our java application, whose process uses 95% of the available memory. When making the dump, our buffers/cache used space was at 56GB.
We used the following command to create the dump: sudo -u {application user} jmap -dump:file=/mnt/heapdump/dump_prd.bin {pid}
The heap dump file size is 16.4GB and when analyzing it with Eclipse MAT, it says there are around 1GB of live objects and ~14.8GB of unreachable/shallow heap.
EDIT: Here is some more info about the problem we see happening. We monitor our memory usage, and we see it grow and grow until there is ~300MB of free memory left. Then it stays around that amount until the process crashes, unfortunately without an error in the application log.
This makes us assume it is a hard OOM error, because this only happens when memory is nearly depleted. We use the settings -Xms25000m -Xmx40000m for our JVM.
Question
Basically, we are wondering why the majority of our memory isn't captured in this dump. The top retained-size classes don't look too suspicious, so we are wondering if there is something heap dump-related that we are doing wrong.
When dumping its heap, the JVM will first run a garbage collection cycle to free any unreachable objects.
How can I take a heap dump on Java 5 without garbage collecting first?
In my experience, in a true OutOfMemoryError where your application is simply demanding more heap space than is available, this GC is a fool's errand and the final heap dump will be roughly the size of the maximum heap.
When the heap dump is much smaller, that means the system was not truly out of memory, but perhaps had memory pressure. For example, there is the java.lang.OutOfMemoryError: GC overhead limit exceeded error, which means that the JVM may have been able to free enough memory to service some new allocation request, but it had to spend too much time collecting garbage.
It's also possible that you don't have a memory problem at all. What makes you think you do? You didn't mention anything about heap usage or an OutOfMemoryError; you've only mentioned the JVM's memory footprint on the operating system.
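For reference, when you take a dump yourself with jmap, you control whether that pre-dump collection happens via the live option, which is one way to see how much of the 56GB is actually reachable (<PID> and the file names are placeholders):

    # only live (reachable) objects; forces a full GC first
    jmap -dump:live,format=b,file=live.hprof <PID>
    # everything on the heap, including unreachable objects not yet collected
    jmap -dump:format=b,file=full.hprof <PID>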
In my experience, a heap dump much smaller than the real memory used can be due to a leak in JNI code.
Even if you don't use any native code directly, certain libraries use it to speed things up.
In our case, it was Deflater and Inflater instances that were not properly ended (end() was never called).
One of our SAP systems (PI, ABAP+Java stack) was having a performance issue. The entire 64GB configured for the machine gets hogged (and the 8 cores as well). Everyone suspects the Java part, but I think differently.
The Java server nodes were getting restarted with an Out Of Memory error. Looking at the hprof files, I found that they were only 1.2GB in size (average of 3 server nodes), when 3GB (both -Xms and -Xmx) of heap is configured for the server nodes. This observation led to the following doubts.
I have read that when Xms and Xmx are set to the same value, the JVM is allocated the entire heap when it starts. If that's the case, the server nodes would have 3GB of heap from the start. If so, why doesn't that show up in the hprof file? Or, if the hprof contains only the memory allocated to objects at runtime, then the size clearly indicates that more than 50% of the heap was free, so why an OOM error?
I also know that Linux does something called memory overcommit, i.e. memory is not actually given when it is requested but only when it is actually used. Could this be contributing to the out of memory error? For example, when the JVM starts, the OS tells it that it has been allocated 3GB of memory but actually defers this until the memory is really needed. By the time the JVM actually tries to allocate the memory for objects, some other application might have exhausted it. Is this possible?
Even if the Java nodes had a memory leak, wouldn't it be confined to the 3GB of heap? How could it hog the entire 64GB of physical memory?
One more thing I observed was that the swap space was only 50% used.
Any light on this?
The hprof file doesn't show the actual heap size, and its size depends on many things, like enabled/disabled compressed references, field layout (an object's size isn't just the sum of the sizes of its fields; it also includes headers and some gaps between fields) and so on.
About memory reserving: the JVM does reserve memory for the heap, but the OS doesn't allocate that memory until it's needed.
I would recommend using a memory profiler (I highly recommend the YourKit profiler) to connect to the running virtual machine and then analyze the memory usage.
I am running a Java application on a Linux cluster with SLURM as the resource manager. To run my application I have to tell SLURM the amount of memory I will need. SLURM will run my application in a kind of VM with the specified amount of memory. To tell my Java application how much memory it can use, I use the -Xmx##g parameter, which I choose 1GB less than what I have requested from SLURM.
My problem is that I am exceeding the amount of memory I requested from SLURM and it terminates my application. It seems that the JVM uses about 1GB of additional memory, probably for things like GC and so on.
Is there a possibility to restrict the total size of the JVM, or at least to tame it?
Cheers,
Markus
The maximum heap setting only limits the maximum heap. There are other memory regions which you have not limited, such as (see the sketch after this list):
thread stacks
perm gen
shared libraries
native memory used by libraries
direct memory
memory mapped files.
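A sketch of the switches that cap some of those regions (the values are illustrative only, and app.jar stands in for your application; on Java 8 and later the permanent generation is replaced by Metaspace):

    # cap heap, per-thread stacks, class metadata and direct buffers
    java -Xmx512m -Xss512k -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=128m -jar app.jar
    # on Java 7 and earlier, use -XX:MaxPermSize instead of -XX:MaxMetaspaceSize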
If you want to limit the overall memory usage you need to be clear about whether you are limiting virtual memory or resident memory. Monitoring tools often make the mistake of monitoring virtual memory, which shows a surprising lack of understanding of how applications work, or even of why you monitor an application in the first place.
You want to monitor resident memory usage, which means you need to know how much memory your application uses over time apart from the heap, then work out how much heap you can have plus some margin for error.
To tell my java application how much memory it can use I use the "-Xmx##g" parameter. I choose it 1GB less than I have requested from SLURM.
As a guess, I would start with 1/2 GB via -Xmx512m, see what the peak resident memory is, and increase it if you find there is always a few hundred MB of headroom.
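One way to check current and peak resident memory on Linux (a sketch; <PID> is a placeholder):

    # current resident set size in kB
    ps -o rss= -p <PID>
    # peak resident set size ("high water mark") since the process started
    grep VmHWM /proc/<PID>/status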
BTW, 1 GB of memory doesn't cost that much these days (as little as $5). Your time could be worth much more than the resources you are trying to save.