Java memory leak, VisualVM showing wrong data

I have a Java application running, and after a few hours it fills up all available memory.
I tried to detect the memory leak with VisualVM, but it shows data that looks wrong (I have no idea how that can happen).
In the screenshot you can see the task manager reporting memory usage of 700 MB while VisualVM shows 225...
Does anyone know what's going on here?
Regards

Beware that your OS is only aware of the total amount of memory Java has reserved over time (and Java will not return that memory easily, AFAIK). However, Java may not be using all of that memory at a given moment, so you can see differences between those two numbers.
For example, if you launch your program like this
java -Xmx512m -Xms256m ...
Then your JVM will take 256 MB as soon as it starts (and the OS will tell you so, more or less). However, if you open your memory monitoring tool (be it VisualVM, JConsole, etc.), it may show that you are using less than that (you simply have not needed the whole of your reserved heap yet).
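You can observe that difference from inside the JVM itself; here is a minimal sketch (the class name is just for illustration):

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is what the JVM has currently reserved from the OS for the heap,
        // maxMemory() is the -Xmx ceiling, and "used" is what live objects actually occupy.
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("used: " + (used >> 20) + " MB, reserved: "
                + (rt.totalMemory() >> 20) + " MB, max: " + (rt.maxMemory() >> 20) + " MB");
    }
}

Launched with java -Xmx512m -Xms256m HeapReport, "reserved" will typically sit around 256 MB while "used" is far smaller, which is exactly the gap between the task manager and VisualVM.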

What Java gets, it doesn't return. Allocating memory from the OS takes quite a lot of effort, so Java doesn't usually give back any of the memory the system ever granted it. So if your program ever used 700 MB of RAM, that is what it sticks with.
And then there are two other factors that play an important role. The heap size is only the amount of memory your program uses or can use. But between your program and the OS sits the Java VM, which may take a good bit of memory as well. The task manager shows you the amount of memory used by your program plus the VM.
The other factor is memory fragmentation. Some data structures (e.g. arrays) have to live in a consecutive chunk of memory: array[i+1] has to be in the memory slot after array[i]. This means that if you have, say, 10 MB of memory allocated, the middle 2 MB are in use, and you want to create a 6 MB array, the Java VM has to allocate new memory so it can fit the array in one piece.
This causes the memory usage shown in the task manager to go up, but not the heap size, since the heap size only counts memory that is actually used.

Memory spaces in Java are described by three metrics: used, committed, and max (see the JMX MemoryUsage values). The size you see is the committed size, not the max, which has probably already been reserved.
You should also look at non-heap memory (usually smaller than the heap, unless you have configured things differently).
To be more precise, you should post your JVM startup settings.
PS: Looking at your memory profile I can't see any leak: I can see the JVM shrinking the heap size because it is barely used.
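Those JMX values can also be read programmatically via the standard MemoryMXBean; a small sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JmxMemoryReport {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        MemoryUsage nonHeap = memory.getNonHeapMemoryUsage();
        // "committed" is what the JVM has reserved from the OS; "max" is the upper bound.
        System.out.println("heap: used=" + (heap.getUsed() >> 20) + " MB committed="
                + (heap.getCommitted() >> 20) + " MB max=" + (heap.getMax() >> 20) + " MB");
        System.out.println("non-heap: used=" + (nonHeap.getUsed() >> 20) + " MB committed="
                + (nonHeap.getCommitted() >> 20) + " MB");
    }
}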

Related

Java heap dump: how to find the objects/classes that are taking memory (1. io.netty.buffer.ByteBufUtil, 2. byte[] array)

I found that the memory (RAM consumption) of one of my Spring Boot projects is increasing day by day. When I uploaded the jar file to the AWS server, it was taking 582 MB of RAM (the maximum allocated RAM is 1500 MB), but the RAM usage grows by 50 MB to 100 MB each day, and today, after 5 days, it's taking 835 MB. Right now the project has 100-150 users with normal usage of the REST APIs.
Because of this increase in RAM, the application has gone down a couple of times with the following error (found in the logs):
Exception in thread "http-nio-3384-ClientPoller" java.lang.OutOfMemoryError: Java heap space
To resolve this, I found that with a Java heap dump I can find the objects/classes that are taking up the memory. So using jmap on the command line, I created a heap dump and uploaded it to Heap Hero and the Eclipse Memory Analyzer Tool. In both of them I found the following:
1. Total wasted memory is 64.69 MB (73%) (see screenshot below)
2. Of this, 34.06 MB is taken by byte[] arrays and LinkedHashMap[] (see screenshot below), which I have never used anywhere in my project. I searched for them in my project but didn't find anything.
3. The following two large objects take 32 MB and 20 MB respectively:
1. Java Static io.netty.buffer.ByteBufUtil.DEFAULT_ALLOCATOR
2. Java Static com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.connectionFinalizerPhantomRefs
So I tried to find this netty.buffer in my project, but nothing matches netty or buffer.
Now my question is: how can I reduce this memory leak, or how can I find the exact objects/classes/variables consuming memory, so that I can reduce the heap usage?
I know a few of the experts will ask for the source code or something similar, but I believe that from the heap dump we can find the memory leak or the live objects that remain in memory. I am looking for that option, or anything else that reduces this memory usage!
I have been working on this issue for the past 3 weeks. Any help would be appreciated.
Thank you!
Start by enabling the JVM native memory tracker to get an idea of which part of memory is increasing, by adding the flag -XX:NativeMemoryTracking=summary. There is some performance overhead according to the documentation (5-10%), but if this isn't an issue I would recommend running the JVM with this flag enabled even in production.
Then you can check the values using jcmd <PID> VM.native_memory (there's a good writeup in this answer: Java native memory usage).
If there is indeed a big chunk of native memory allocated, it's likely this is allocated by Netty.
How do you run your application in AWS? If it's running in a Docker image, you might have stumbled upon this issue: What would cause a Java process to greatly exceed the Xmx or Xss limit?
If this is the case, you may need to set the environment variable MALLOC_ARENA_MAX if your application uses native memory (which Netty does) and runs on a server with a large number of cores. It's perfectly possible that the JVM allocates this memory for Netty but doesn't see any reason to release it, so it will appear to only ever grow.
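For example, on Linux the variable can be set when launching the application; the value 2 below is just a commonly suggested starting point, and your-application.jar is a placeholder:

MALLOC_ARENA_MAX=2 java -jar your-application.jar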
If you want to control how much native memory can be allocated by Netty, you can use the JVM flag -XX:MaxDirectMemorySize for this (I believe the default is the same value as -Xmx) and lower it in case your application doesn't require that much memory.
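To watch direct-memory usage from inside the JVM, you can poll the standard BufferPoolMXBean (plain JDK APIs, nothing Netty-specific); a minimal sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class DirectMemoryReport {
    public static void main(String[] args) {
        // The "direct" pool covers direct ByteBuffers, which Netty's default allocator uses.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.println(pool.getName() + ": used=" + (pool.getMemoryUsed() >> 20)
                    + " MB, capacity=" + (pool.getTotalCapacity() >> 20)
                    + " MB, buffers=" + pool.getCount());
        }
    }
}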
JVM memory tuning is a complex process, and it becomes even more complex when native memory is involved - as the linked answer shows, it's not as easy as simply setting the -Xms and -Xmx flags and expecting that no more memory will be used.
A heap dump alone is not enough to detect memory leaks.
You need to look at the difference between two consecutive heap snapshots, both taken after forcing a GC.
Or you need a profiling tool that can give the generation count for each class.
Then you should only look at your domain objects (not generic objects like bytes or strings, etc.) that survived the GC and made it from the old snapshot into the new one.
Or, if using a profiling tool, look for old domain objects that are still alive and growing across many generations.
Objects that live for many generations and keep growing are still referenced, so the GC is not able to reclaim them. However, living for many generations alone is not enough to indicate a leak, because cached or static objects may stay around for many generations. The other important factor is that they keep growing.
Once you have detected which objects are being leaked, you can use the heap dump to analyse those objects and find who references them.
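A hedged way to take such snapshots with stock JDK tools: jmap's -histo:live option forces a full GC before printing a per-class histogram, so two runs separated by some leak time can be diffed:

jmap -histo:live <PID> > histo-before.txt
(let the application run and leak for a while)
jmap -histo:live <PID> > histo-after.txt
diff histo-before.txt histo-after.txt

Classes whose instance counts keep growing between histograms are the candidates to chase in the full heap dump.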

What are the advantages of specifying a memory limit for the Java Virtual Machine?

I have set the memory limit of the Java Virtual Machine while running a Java application like this...
java -mx128m ClassName
I know this sets the maximum memory allocation pool to 128 MB, but I don't know what the benefit of specifying this memory limit is.
Please enlighten me on this issue...
On Sun's 1.6 JVM, on a server-class machine (meaning one with at least 2 CPUs and at least 2 GB of physical memory), the default maximum heap size is the smaller of 1/4 of the physical memory or 1 GB. Using -Xmx lets you change that.
Why would you want to limit the amount of memory Java uses? Two reasons.
Firstly, Java's automatic memory management tends to grab as much memory from the operating system as possible, and then manage it for the benefit of the program. If you are running other programs on the same machine as your Java program, then it will grab more than its fair share of memory, putting pressure on them. If you are running multiple copies of your Java program, they will compete with each other, and you may end up with some instances being starved of memory. Putting a cap on the heap size lets you manage this - if you have 32 GB of RAM, and are running four processes, you can limit each heap to about 8 GB (a bit less would be better), and be confident they will all get the right amount of memory.
Secondly (another aspect of the first, really), if a process grabs more memory than the operating system can supply from physical memory, it uses virtual memory, which gets paged out to disk. This is very slow. Java can reduce its memory usage by making its garbage collector work harder. This is also slow - but not as slow as going to disk. So, you can limit the heap size to avoid the Java process being paged, and so improve performance.
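For the four-process example above, each JVM could be launched with an explicit cap a bit under its fair share (the heap value and class name here are illustrative only):

java -Xmx7g MyServerProcess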
There will be a default heap size limit defined for the JVM. This setting lets you override it, usually so that you can specify that you want more memory allocated to the Java process.
This sets the maximum heap size; the total VM footprint may be larger.
There is always a limit, because this parameter has a default value (at least for the Oracle/Sun VM).
So the benefit is one of two things: you can give the app the memory it actually needs in order to work (efficiently), or, coming from the other direction, you can (somewhat) limit the maximum memory used in order to manage the distribution of resources among different applications on one machine.
There has already been a question about Java and memory on SO: Java memory explained
A very nice article about Java memory can be found here. It gives an overview of the memory, how it is used, how it is cleaned, and how it can be measured.
The memory defaults (prior to Java 6) are:
-Xms (size in bytes): Sets the initial size of the Java heap. The default size is 2097152 (2 MB). The value must be a multiple of, and greater than, 1024 bytes (1 KB). (The -server flag increases the default size to 32 MB.)
-Xmn (size in bytes): Sets the initial Java heap size for the Eden generation. The default value is 640 KB. (The -server flag increases the default size to 2 MB.)
-Xmx (size in bytes): Sets the maximum size to which the Java heap can grow. The default size is 64 MB. (The -server flag increases the default size to 128 MB.) The maximum heap limit is about 2 GB (2048 MB).
Another source (here) states that in Java 6 the default heap size depends on the amount of system memory.
I assume this limit helps avoid high memory consumption (due to bugs, or due to many allocations and deallocations). You would use it if you are designing for a low-memory system (such as an old computer with little RAM, mobile phones, etc.).
Alternatively, use it to increase the default memory limit if it is not enough for you and you are getting OutOfMemoryErrors during normal operation.
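To check which default your JVM actually picked, the following real HotSpot flag prints the effective value (shown here with a Unix-style grep; on Windows use findstr):

java -XX:+PrintFlagsFinal -version | grep MaxHeapSize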

Weird behavior of Java -Xmx on large amounts of RAM

You can control the maximum heap size in Java using the -Xmx option.
We are experiencing some weird behavior on Windows with this switch. We run some very beefy servers (think 196 GB RAM). The Windows version is Windows Server 2008 R2.
The Java version is 1.6.0_18, 64-bit (obviously).
Anyway, we were having some weird bugs where processes were quitting with out-of-memory errors even though the process was using much less memory than specified by the -Xmx setting.
So we wrote a simple program that allocates a 1 GB byte array each time the enter key is pressed and initializes the byte array to random values (to prevent any memory compression, etc.).
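A reconstruction of that test program might look like this (a hedged sketch; the class name and details are assumptions, not the actual code):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class MemoryEater {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<byte[]>();
        Random random = new Random();
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Holding " + blocks.size() + " GB - press Enter to allocate 1 GB more");
            scanner.nextLine();
            byte[] block = new byte[1024 * 1024 * 1024]; // one contiguous 1 GB array
            random.nextBytes(block); // random contents, so the pages are really committed
            blocks.add(block);
        }
    }
}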
Basically, what's happening is that if we run the program with -Xmx35000m (roughly 35 GB) we get an out-of-memory error when we hit 25 GB of process space (using the Windows task manager to measure). We hit this after allocating 24 GB worth of 1 GB blocks, BTW, so that checks out.
Simply specifying a larger value for the -Xmx option makes the program work fine up to larger amounts of RAM.
So, what is going on? Is -Xmx just "off"? BTW: we need to specify -Xmx55000m to get a 35 GB process space...
Any ideas on what is going on?
Is there a bug in the Windows JVM?
Is it safe to simply set the -Xmx option bigger, even though there is a disconnect between the -Xmx option and what is going on process-wise?
Theory #1
When you request a 35 GB heap using -Xmx35000m, what you are actually saying is to allow the total space used for the heap to be 35 GB. But the total space consists of the tenured object space (for objects that survive multiple GC cycles), the Eden space for newly created objects, and other spaces into which objects are copied during garbage collection.
The issue is that some of these spaces are not and cannot be used for allocating new objects. So in effect, you "lose" a significant percentage of your 35 GB to overheads.
There are various -XX options that can be used to tweak the sizes of the respective spaces. You might try fiddling with them to see if they make a difference. Refer to this document for more information. (The commonly used GC tuning options are listed in section 8. The -XX:NewSize option looks promising...)
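As a hedged illustration of such tuning (the flag names are real HotSpot options, but the sizes are arbitrary examples, not recommendations):

java -Xmx35000m -XX:NewSize=1g -XX:MaxNewSize=1g -XX:SurvivorRatio=8 MyBenchmark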
Theory #2
This might be happening because you are allocating huge objects. IIRC, objects above a certain size can be allocated directly into the tenured object space. In your (highly artificial) benchmark, this might result in the JVM not putting anything into the Eden space, and therefore being able to use less of the total heap space than normal.
As an experiment, try changing your benchmark to allocate lots of small objects, and see if it manages to use more of the available space before OOME-ing.
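A hedged variant of the benchmark that does exactly this (same illustrative structure as the sketch above, but many 1 MB blocks instead of one huge array):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

public class SmallBlockEater {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<byte[]>();
        Random random = new Random();
        Scanner scanner = new Scanner(System.in);
        while (true) {
            System.out.println("Holding " + (blocks.size() / 1024) + " GB - press Enter for 1 GB more");
            scanner.nextLine();
            // Allocate 1 GB as 1024 separate 1 MB arrays; smaller objects go through
            // Eden rather than being allocated directly into the tenured space.
            for (int i = 0; i < 1024; i++) {
                byte[] block = new byte[1024 * 1024];
                random.nextBytes(block);
                blocks.add(block);
            }
        }
    }
}

If this version gets noticeably closer to the full -Xmx value before the OOME, Theory #2 is the likely explanation.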
Here are some other theories that I would discount:
"You are running into OS-imposed limits." I would discount this, since you said that you can get significantly greater memory utilization by increasing the -Xmx... setting.
"The Windows task manager is reporting bogus numbers." I would discount this because the numbers reported roughly match the 25Gb that you think your application had managed to allocate.
"You are losing space to other things; e.g. the permgen heap." AFAIK, the permgen heap size is controlled and accounted independently of the "normal" heaps. Other non-heap memory usage is either a constant (for the app) or dependent on the app doing specific things.
"You are suffering from heap fragmentation." All of the JVM garbage collectors are "copying collectors", and this family of collectors has the property that heap nodes are automatically compacted.
"JVM bug on Windows." Highly unlikely. There must be tens of thousands of 64bit Java on Windows installations that maximize the heap size. Someone else would have noticed ...
Finally, if you are NOT doing this because your application requires you to allocate memory in huge chunks and hang onto it "forever"... there's a good chance that you are chasing shadows. A "normal" large-memory application doesn't do this kind of thing, and the JVM is tuned for normal applications, not anomalous ones.
And if your application really does behave this way, the pragmatic solution is to just set the -Xmx... option larger, and only worry if you start running into OS-level issues.
To get a feeling for what exactly you are measuring you should use some different tools:
the Windows Task Manager (I only know Windows XP, but I heard rumours that the Task Manager has improved since then.)
procexp and vmmap from Sysinternals
jconsole from the JDK (you are using the Sun/Oracle HotSpot JVM, aren't you?)
Now you should answer the following questions:
What does jconsole say about the used heap size? How does that differ from procexp?
Does the value from procexp change if you fill the byte arrays with non-zero numbers instead of keeping them at 0?
Did you try turning on verbose output for the GC to find out why the last allocation fails? Is it because the OS fails to allocate heap beyond 25 GB for the native JVM process, or is it because the GC is hitting some limit on the maximum memory it can manage? I would recommend you also connect to the process with jconsole to see the status of the heap just before the allocation failure. Tools like the Sysinternals Process Explorer might also give better detail on where the failure is occurring, if it is in the JVM process.
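For the verbose GC output suggested above, something like this works on a Java 6 HotSpot JVM (the flags are real; MemoryEater stands in for the test program):

java -Xmx35000m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MemoryEater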
Since the process is dying at 25 GB and you have a generational collector, maybe the rest of the generations are consuming 10 GB. I would recommend you install JDK 1.6.0_24 and use jvisualvm with the VisualGC plugin to see what the GC is doing, especially factoring in the sizes of all the generations, to see how the 35 GB heap is being chopped up into different regions by the GC / VM memory manager.
See this link if you are not familiar with generational GC: http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#generation_sizing.total_heap
I assume this has to do with heap fragmentation. The free memory is probably not available as a single contiguous area, and when you try to allocate a large block it fails because the requested amount of memory cannot be allocated in a single piece.
The memory displayed by the Windows task manager is the total memory allocated to the process, which includes memory for code, stack, perm gen, and heap.
The memory you measure using your program is the amount of heap the JVM makes available to running Java programs.
Naturally, the total memory Windows allocates to the JVM should be greater than what the JVM makes available to your program as heap memory.

Eclipse release heap back to system

I'm using Eclipse 3.6 with the latest Sun Java 6 on Linux (64-bit) with a large number of large projects. In some special circumstances (SVN updates, for example) Eclipse needs up to 1 GB of heap. But most of the time it only needs 350 MB. When I enable the heap status panel I see this most of the time:
350M of 878M
I start Eclipse with these settings: -Xms128m -Xmx1024m
So most of the time lots of MB are simply wasted, used only rarely when memory usage peaks for a short time. I don't like that at all, and I want Eclipse to release the memory back to the system so I can use it for other programs.
When Eclipse needs more memory while there is not enough free RAM, Linux can swap out other running programs; I can live with that. I heard there is a -XX:MaxHeapFreeRatio option, but I never figured out what values I have to use for it to work. No value I tried ever made a difference.
So how can I tell Eclipse (or Java) to release unused heap?
Found a solution. I switched Java to use the G1 garbage collector, and now the HeapFreeRatio parameters work as intended. So I use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory is actually released back to the operating system.
You can go to Preferences -> General and check "Show heap status". This activates a nice view of your heap in the corner of Eclipse, something like this:
If you click the trash bin, it will try to run garbage collection and return memory.
Java's heap is nothing more than a big data structure managed within the JVM process' heap space. The two heaps are logically-separate entities even though they occupy the same memory.
The JVM is at the mercy of the host system's implementations of malloc(), which allocates memory from the system using brk(). On Linux systems (Solaris, too), memory allocated for the process heap is almost never returned, largely because it becomes fragmented and the heap must be contiguous. This means that memory allocated to the process will increase monotonically, and the only way to keep the size down is not to allocate it in the first place.
-Xms and -Xmx tell the JVM how to size the Java heap ahead of time, which causes it to allocate process memory. Java can garbage collect until the sun burns out, but that cleanup is internal to the JVM and the process memory backing it doesn't get returned.
Elaboration from comment below:
The standard way for a program written in C (notably the JVM running Eclipse for you) to allocate memory is to call malloc(3), which uses the OS-provided mechanism for allocating memory to the process and then managing individual allocations within those allocations. The details of how malloc() and free() work are implementation-specific.
On most flavors of Unix, a process gets exactly one data segment, which is a contiguous region of memory that has pointers to the start and end. The process can adjust the size of this segment by calling brk(2) and increasing the end pointer to allocate more memory or decreasing it to return it to the system. Only the end can be adjusted. This means that if your implementation of malloc() enlarges the data segment, the corresponding implementation of free() can't shrink it unless it determines that there's space at the end that's not being used. In practice, a humongous chunk of memory you allocated with malloc() rarely winds up at the very end of the data segment when you free() it, which is why processes tend to grow monotonically.

Java memory usage

I cannot understand Java's memory usage. I have an application which is executed with a maximum memory size of 256 MB. Yet, at some point in time I can see that according to the task manager it takes up to 700 MB!
Needless to say, all the other applications become a bit unresponsive when this happens, as they are probably swapped out.
It's JDK 1.6 on WinXP. Any ideas?
The memory you configure is what is available to the application. It won't include:
the JVM's own size
the jars/libs loaded in
native libraries and related allocated memory
all of which result in a much bigger image. Note that, due to how the OS and the JVM work, that 700 MB may be shared between multiple JVMs (shared binary images, shared libraries, etc.).
The amount you specify with -Xmx is only for the user-accessible heap - the space in which you create runtime objects dynamically.
The Java process will use a lot more space for its own needs, including the JVM itself, the program and other libraries, the constant pool, etc.
In addition, because of the way the garbage collection system works, there may be more memory allocated than what is currently in the heap - it just hasn't been reclaimed yet.
All that being said, setting your program to a maximum heap of 256 MB is really lowballing it on a modern system. For heavy programs you can usually request at least 1 GB of heap.
As you mentioned, one possible cause of slowness is that some of the memory allocated to Java gets swapped out to disk. In that case, the program would indeed start churning the disk, so don't go overboard if you have little physical memory available. On Linux you can get page-fault stats for a process, and I am sure there's a similar way on Windows.
The -Xmx option only limits the Java heap size. In addition to the heap, Java allocates memory for other things, including a stack for each thread (typically a few hundred KB to 1 MB by default, set by -Xss), the PermGen space, etc.
So, depending on how many threads you launch, the number of classes your application loads, and some other factors, you may use a lot more memory than expected.
Also, as pointed out, the Windows task manager may take virtual memory into account.
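As a hedged illustration, those per-thread and non-heap limits can be set explicitly on a Java 6 JVM (the values are examples only, not recommendations):

java -Xmx256m -Xss512k -XX:MaxPermSize=128m ClassName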
You mean the heap, right? As far as I know there are two things to take care of: the -Xms option, which sets the initial Java heap size, and the -Xmx option, which sets the maximum Java heap space. If heap usage would exceed the -Xmx value, there will be an OutOfMemoryError.
What about the virtual pages it's taking up? I think Windows shows you the full aggregate of everything.
