JVM - XMX limit vs Memory consumed by the process - java

I have 2 questions regarding the resident memory used by a Java application.
Some background details:
I have a Java application set up with -Xms2560M -Xmx2560M.
The Java application is running in a container; k8s allows the container to consume up to 4GB.
The issue:
Sometimes the process is restarted by k8s with error 137; apparently the process has reached the 4GB limit.
Application behaviour:
Heap: the application seems to work in a cycle where all heap memory is used, then freed, then used again, and so on.
This snapshot illustrates it. The Y axis is the free heap percentage, extracted by the application via ((double)Runtime.getRuntime().freeMemory()/Runtime.getRuntime().totalMemory())*100.
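As a side note, that expression can be wrapped into a minimal periodic logger; this is only a sketch, and the class name and 5-second interval are arbitrary choices:

public class FreeHeapLogger {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            Runtime rt = Runtime.getRuntime();
            // Percentage of the currently reserved heap that is free
            double freePct = ((double) rt.freeMemory() / rt.totalMemory()) * 100;
            System.out.printf("free heap: %.1f%%%n", freePct);
            Thread.sleep(5000);
        }
    }
}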
I was also able to confirm it using HotSpotDiagnosticMXBean, which allows creating a dump with only reachable objects and one that also includes unreachable objects.
The dump that included unreachable objects was the size of the Xmx.
In addition, this is also what I see when creating a dump on the machine itself: the resident memory can show 3GB while the size of the dump (taken with jcmd) is 0.5GB.
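For anyone who wants to reproduce the two kinds of dumps programmatically, here is a minimal sketch using HotSpotDiagnosticMXBean; the file paths are illustrative and must not already exist:

import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects; live = false also keeps
        // unreachable objects the GC has not collected yet, which is why that
        // dump can come out close to the Xmx size.
        bean.dumpHeap("/tmp/live.hprof", true);
        bean.dumpHeap("/tmp/all.hprof", false);
    }
}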
First question:
Is this behaviour reasonable, or does it indicate a memory usage issue?
It doesn't seem like a typical leak.
Second question:
I have seen other questions trying to understand what the resident memory used by the application is comprised of.
Worth mentioning:
Java using much more memory than heap size (or size correctly Docker memory limit)
And
Native memory consumed by JVM vs java process total memory usage
Not sure if any of this can account for the 1-1.5 GB gap between the Xmx and the 4GB k8s limit.
If you were to provide some sort of a checklist to close in on the problem, what would it be? (I feel like I can't see the forest for the trees.)
Any free tools that can help? (besides the ones for analysing a memory dump)

You allocate 2.5 GB for the heap; the JVM itself and the OS components will also take some memory (the rule of thumb here is 1 GB, but the real figures may differ significantly, especially when running in a container), so we are already at 3.5 GB.
Since Java 8, the JVM no longer stores class metadata on the heap but in an area called 'metaspace'; depending on what your program is doing, how many classes and how many ClassLoaders it uses, this area may easily grow beyond 0.5 GB. This needs to be considered, in addition to the things mentioned in the linked posts.
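If you want to see how large this area actually is in a running application, here is a minimal sketch using the standard memory-pool beans; the pool names are HotSpot-specific and may differ on other JVMs:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceSize {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On HotSpot the relevant pools are "Metaspace" and "Compressed Class Space"
            if (pool.getName().contains("Metaspace")
                    || pool.getName().contains("Class Space")) {
                System.out.printf("%s: used=%d KB, committed=%d KB%n",
                        pool.getName(),
                        pool.getUsage().getUsed() / 1024,
                        pool.getUsage().getCommitted() / 1024);
            }
        }
    }
}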

In addition to the answer posted by tquadrat, you also have to consider what happens when the application uses native memory via direct or memory-mapped byte buffers, which is outside of the heap space but still taken up by the process.
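The JVM exposes those buffer pools via JMX, so you can check from inside the process how much off-heap memory they hold; a minimal sketch:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPools {
    public static void main(String[] args) {
        // The "direct" and "mapped" pools live outside the Java heap but
        // still count toward the resident memory of the process.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d, memoryUsed=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
    }
}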

Related

Java Heap Dump : How to find the objects/class that is taking memory by 1. io.netty.buffer.ByteBufUtil 2. byte[] array

I found that the memory (RAM) consumption of one of my Spring Boot projects is increasing day by day. When I uploaded the jar file to the AWS server, it was taking 582 MB of RAM (max allocated RAM is 1500 MB), but each day the RAM usage increases by 50 MB to 100 MB, and today, after 5 days, it's taking 835 MB. Right now the project has 100-150 users with normal usage of the REST APIs.
Because of this increase in RAM, a couple of times the application went down with the following error (found in the logs):
Exception in thread "http-nio-3384-ClientPoller" java.lang.OutOfMemoryError: Java heap space
So to resolve this, I found that by using a Java heap dump I can find the objects/classes that are taking the memory. Using jmap on the command line, I created a heap dump and uploaded it to Heap Hero and the Eclipse Memory Analyzer Tool. In both of them I found the following:
1. Total wasted memory is 64.69 MB (73%) (check the screenshot below).
2. Out of this, 34.06 MB is taken by byte[] arrays and LinkedHashMap[] (check the screenshot below), which I have never used directly in my project. I searched for them in my project but didn't find anything.
3. The following 2 large objects are taking 32 MB and 20 MB respectively:
1. Java Static io.netty.buffer.ByteBufUtil.DEFAULT_ALLOCATOR
2. Java Static com.mysql.cj.jdbc.AbandonedConnectionCleanupThread.connectionFinalizerPhantomRefs
So I tried to find io.netty.buffer in my project, but I couldn't find anything that matched netty or buffer.
Now my question is: how can I fix this memory leak, or how can I find the exact objects/classes/variables that are consuming the memory, so that I can reduce the heap usage?
I know a few experts will ask for the source code or something similar, but I believe that from the heap dump we can find the memory leak or the live objects that are in memory. I am looking for that option, or anything else that reduces this memory usage!
I have been working on this issue for the past 3 weeks. Any help would be appreciated.
Thank you!
Start by enabling the JVM native memory tracker to get an idea of which part of the memory is increasing, by adding the flag -XX:NativeMemoryTracking=summary. There is some performance overhead according to the documentation (5-10%), but if this isn't an issue I would recommend running the JVM with this flag enabled even in production.
Then you can check the values using jcmd <PID> VM.native_memory (there's a good writeup in this answer: Java native memory usage)
If there is indeed a big chunk of native memory allocated, it's likely this is allocated by Netty.
How do you run your application in AWS? If it's running in a Docker image, you might have stumbled upon this issue: What would cause a java process to greatly exceed the Xmx or Xss limit?
If this is the case, you may need to set the environment variable MALLOC_ARENA_MAX if your application is using native memory (which Netty does) and running on a server with a large number of cores. It's perfectly possible that the JVM allocates this memory for Netty but doesn't see any reason to release it, so it will appear to only keep growing.
If you want to control how much native memory can be allocated by Netty, you can use the JVM flag -XX:MaxDirectMemorySize for this (I believe the default is the same value as Xmx) and lower it in case your application doesn't require that much direct memory.
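To illustrate what that flag does, here is a small sketch (the 64m limit and the 16 MB buffer size are arbitrary choices): direct buffers fail with an OutOfMemoryError once the limit is reached, even though the heap itself is nearly empty.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Run with: java -XX:MaxDirectMemorySize=64m DirectLimit
public class DirectLimit {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        try {
            while (true) {
                // Each allocation takes 16 MB of native, off-heap memory.
                buffers.add(ByteBuffer.allocateDirect(16 * 1024 * 1024));
            }
        } catch (OutOfMemoryError e) {
            // Reported as "Direct buffer memory", not "Java heap space".
            System.out.println("Direct memory exhausted after "
                    + buffers.size() + " buffers: " + e.getMessage());
        }
    }
}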
JVM memory tuning is a complex process, and it becomes even more complex when native memory is involved - as the linked answer shows, it's not as easy as simply setting the Xms and Xmx flags and expecting that no more memory will be used.
A heap dump alone is not enough to detect memory leaks.
You need to look at the difference between two consecutive heap snapshots, both taken after calling the GC.
Or you need a profiling tool that can give the generation count for each class.
Then you should only look at your domain objects (not generic objects like bytes or strings, etc.) that survived the GC and passed from the old snapshot to the new one.
Or, if using the profiling tool, look for old domain objects that are still alive and growing across many generations.
Having objects that live for many generations and keep growing means those objects are still referenced and the GC is not able to reclaim them. However, living for many generations alone is not enough to indicate a leak, because cached or static objects may stay for many generations. The other important factor is that they keep growing.
After you have detected which objects are being leaked, you can use a heap dump to analyse those objects and get their references.

java memory leak, visualvm showing wrong data

I have a Java application running; after a few hours it fills up the memory.
I've tried to detect the memory leak with VisualVM, but it shows wrong data (I have no idea how that can happen).
In the screenshot you can see the task manager showing a memory usage of 700 MB and VisualVM showing 225...
Does anyone know what's going on here?
Regards
Beware that your OS is only aware of the total amount of memory Java has reserved over time (and Java will not return that amount of memory easily, AFAIK). However, Java may not be using all that memory at a given moment, so you can see differences between those two numbers.
For example, if you launch your program like this
java -Xmx512m -Xms256m ...
Then your JVM will take 256 MB as soon as it starts (and the OS will tell you so, more or less). However, if you open your memory monitoring tool (be it VisualVM, JConsole, etc.), it may show that you are using less than that (you just haven't needed to use the whole of your reserved heap yet).
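You can see this distinction from inside the JVM with a few lines; a minimal sketch:

public class HeapNumbers {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // totalMemory(): heap currently reserved from the OS (starts near Xms)
        // freeMemory():  the part of that reservation not holding live objects
        // maxMemory():   the Xmx ceiling the reservation may grow to
        System.out.println("total: " + rt.totalMemory() / mb + " MB");
        System.out.println("free:  " + rt.freeMemory() / mb + " MB");
        System.out.println("max:   " + rt.maxMemory() / mb + " MB");
    }
}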
What Java gets, it doesn't return. Allocating memory takes quite a lot of effort, so Java doesn't usually return any of the memory the system granted it. So if your program ever used 760 MB of RAM, that is what it sticks with.
And then there are two other factors that play an important role. The heap size is only the amount of memory your program uses or can use. Between your program and the OS sits the Java VM, which may take a good bit of memory as well. The task manager shows you the amount of memory used by your program plus the VM.
The other factor is memory fragmentation. Some data structures (e.g. arrays) have to live in a consecutive chunk of memory: array[i+1] has to be in the memory slot after array[i]. This means that if you have, say, 10 MB of memory allocated and the middle 2 MB are in use, and you want to create a 6 MB array, the Java VM has to allocate new memory so it can fit the array in one piece.
This makes the memory usage in the task manager go up, but not the heap size, since the heap size only shows actually used memory.
Memory spaces in Java are defined by 3 metrics: used, available, and max available (see the JMX values). The size you see is the available size, not the max available, which is probably already allocated.
You should also look at non-heap memory (usually smaller than the heap, but you may have configured it differently).
To be more precise, you should post your JVM startup settings.
PS: looking at your memory profile I can't see any leak: I can see the JVM shrinking the heap size because it is not being used at all.

How to restrict the Java VM overall memory consumption?

I am running a Java application on a Linux cluster with SLURM as the resource manager. To run my application I have to tell SLURM the amount of memory I will need. SLURM will run my application in a kind of VM with the specified amount of memory. To tell my Java application how much memory it can use, I use the -Xmx##g parameter. I choose it 1GB less than I have requested from SLURM.
My problem is that I am exceeding the amount of memory I have requested from SLURM, and it terminates my application. It seems that the JVM uses about 1GB of additional memory, probably for things like GC and so on.
Is there a way to restrict the overall size of the JVM, or at least to tame it?
Cheers,
Markus
The maximum heap setting only limits the maximum heap. There are other memory regions which you have not limited, such as the following (see the sketch after this list for estimating one of them):
thread stacks
perm gen
shared libraries
native memory used by libraries
direct memory
memory mapped files.
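As referenced above, here is a rough sketch for estimating the thread-stack contribution from inside the JVM; the per-thread stack size is an assumption, so adjust it to your actual -Xss value:

import java.lang.management.ManagementFactory;

public class StackEstimate {
    public static void main(String[] args) {
        int threads = ManagementFactory.getThreadMXBean().getThreadCount();
        long stackKb = 1024; // assumed -Xss1m; adjust to your real setting
        System.out.println(threads + " threads x " + stackKb + " KB stack ~= "
                + (threads * stackKb / 1024) + " MB of native memory");
    }
}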
If you want to limit the overall memory usage, you need to be clear about whether you are limiting virtual memory or resident memory. Monitoring tools often make the mistake of monitoring virtual memory, which shows a surprising lack of understanding of how applications work, or even of why you monitor an application in the first place.
You want to monitor resident memory usage, which means you need to know how much memory your application uses over time apart from the heap, then work out how much heap you can have, plus some margin for error.
To tell my java application how much memory it can use I use the "-Xmx##g" parameter. I choose it 1GB less than I have requested from SLURM.
At a guess, I would start with 1/2 GB (-Xmx512m), see what the peak resident memory is, and increase it if you find there is always a few hundred MB of headroom.
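On Linux (which a SLURM cluster will be running), you can read the peak resident memory of the current process from /proc; a minimal sketch:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PeakRss {
    public static void main(String[] args) throws IOException {
        // VmHWM is the resident-set high-water mark; VmRSS is the current value.
        for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
            if (line.startsWith("VmHWM") || line.startsWith("VmRSS")) {
                System.out.println(line);
            }
        }
    }
}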
BTW, 1 GB of memory doesn't cost that much these days (as little as $5). Your time could be worth much more than the resources you are trying to save.

Java RAM increases although Heap stays same? [duplicate]

Possible Duplicate:
Limit jvm process memory on ubuntu
In my application I'm uploading documents to a server, which does some analysis on them.
Today I analyzed my application using jconsole.exe and heap dumps, trying to find out whether I have memory issues or a memory leak. I thought I might suffer from one, since my application's RAM usage keeps growing while it is running.
As I watched the heap / code cache / perm gen etc. with jconsole over several runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in jconsole on the right, the heap increases when I'm doing analysis-related work, but it also decreases back to its normal size when the work is over. On the left you can see the "htop" of the server the application is deployed on. And there it is: although the heap behaves normally and the garbage collector also seems to run correctly, the RAM is incredibly high at almost 3.2 GB.
This is really confusing me. I was wondering whether my Java VM stack could have something to do with this. I did some research, and what I found described the VM stack as a small amount of memory of only a few megabytes (or even only kilobytes).
My technical background:
The application is running on glassfish v.3.1.2
The database is running on MySQL
Hibernate is used as ORM framework
Java version is 1.7.0_04
It's implemented using VAADIN
MySQL database and glassfish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and saving them in the database
Uploaded documents are either .txt or .pdf files
OS is linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised at the moment, since I thought the memory problems came from a memory leak causing the heap to explode. But now the heap isn't the problem; it's the RAM that goes higher and higher while the heap stays at the same level. And I don't know what to do to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to let other people use my application. When the RAM is full and the server doesn't respond anymore, I'm out.
Edit 2: Maybe I should also add that the RAM keeps increasing after every further successful analysis.
There are lots more things that use memory in a JVM implementation than the heap settings control.
The heap setting via -Xmx only controls the Java heap; it doesn't control consumption of native memory by the JVM, which is consumed completely differently depending on the implementation.
From the following article, Thanks for the Memory (Understanding How the JVM Uses Native Memory on Windows and Linux):
Maintaining the heap and the garbage collector uses native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea: the Java heap isn't the only thing that consumes memory in a JVM implementation, not everything goes on the JVM heap, and the heap takes up far more native memory than what you specify, for management and bookkeeping.
Native Code
App servers often have native code that runs outside the JVM but still shows up to the OS as memory associated with the process that controls the app server.

Java memory usages

I cannot understand the Java memory usage. I have an application which is executed with the maximum memory size set to 256 MB. Yet, at some point in time I can see that, according to the task manager, it takes up to 700 MB!
Needless to say, all the other applications become a bit unresponsive when this happens, as they are probably swapped out.
It's JDK 1.6 on WinXP. Any ideas?
The memory configured is available to the application. It won't include
the JVM size
the jars/libs loaded in
native libraries and related allocated memory
which results in a much bigger image. Note that, due to how the OS and the JVM work, that 700 MB may be shared between multiple JVMs (due to shared binary images, shared libraries, etc.).
The amount you specify with -Xmx is only for the user-accessible heap - the space in which you create runtime objects dynamically.
The Java process will use a lot more space for its own needs, including the JVM itself, the program and other libraries, the constant pool, etc.
In addition, because of the way the garbage collection system works, there may be more memory allocated than what is currently in the heap - it just hasn't been reclaimed yet.
All that being said, setting your program to a maximal heap of 256 MB is really lowballing it on a modern system. For heavy programs you can usually request at least 1 GB of heap.
As you mentioned, one possible cause of slowness is that some of the memory allocated to Java gets swapped out to disk. In that case, the program would indeed start churning the disk, so don't go overboard if you have little physical memory available. On Linux you can get page-miss stats for a process; I am sure there's a similar way on Windows.
The -Xmx option only limits the Java heap size. In addition to the heap, Java will allocate memory for other things, including a stack for each thread (typically a few hundred KB up to 1 MB by default, set by -Xss), the PermGen space, etc.
So, depending on how many threads you launch, the number of classes your application loads (see the sketch below), and some other factors, you may use a lot more memory than expected.
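A minimal sketch for checking the loaded-class count, one driver of PermGen/metaspace usage, from inside the JVM:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCount {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        // Every loaded class consumes PermGen/metaspace outside the -Xmx heap.
        System.out.println("currently loaded: " + cl.getLoadedClassCount());
        System.out.println("loaded in total:  " + cl.getTotalLoadedClassCount());
    }
}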
Also, as pointed out, the Windows task manager may take the virtual memory into account.
You mean the heap, right? As far as I know there are two things to take care of: the Xms option, which sets the initial Java heap size, and the Xmx option, which sets the maximum Java heap size. If the heap memory exceeds the Xmx value, there should be an OutOfMemoryError.
What about the virtual pages it's taking up? I think Windows shows you the full aggregated set of everything.
