Java RAM increases although heap stays the same? [duplicate] - java

Possible Duplicate:
Limit jvm process memory on ubuntu
In my application I upload documents to a server, which then performs some analysis on them.
Today I profiled my application using jconsole.exe and heap dumps to find out whether I have memory issues or a memory leak. I suspected a leak because the application's RAM usage grows considerably while it is running.
After watching the heap, code cache, perm gen, etc. in jconsole over several runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in the jconsole output on the right, the heap increases while I'm doing analysis-related work, but it also shrinks back to its normal size once the work is done. On the left you can see the "htop" output of the server the application is deployed on. And there it is: although the heap behaves normally and the garbage collector also seems to run correctly, the RAM usage is incredibly high at almost 3.2 GB.
This really confuses me. Could my Java VM stack have something to do with it? From what I found in my research, the VM stack is only a small amount of memory, a few megabytes (or even only kilobytes).
My technical background:
The application is running on glassfish v.3.1.2
The database is running on MySQL
Hibernate is used as ORM framework
Java version is 1.7.0_04
It's implemented using VAADIN
MySQL database and glassfish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and save them in the database
Uploaded documents are either .txt or .pdf files
OS is linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised at the moment, since I thought my memory problems came from a memory leak that makes the heap explode. But the heap isn't the problem now: it's the RAM that keeps climbing while the heap stays at the same level, and I don't know how to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to let other people really use my application. When the RAM is full and the server no longer responds, I'm stuck.
Edit 2: Maybe I should also add that the RAM keeps increasing after every further successful analysis run.

There are many more things that use memory in a JVM implementation than just the heap settings.
The heap setting via -Xmx only controls the Java heap; it doesn't control the JVM's consumption of native memory, which is consumed completely differently depending on the implementation.
From the article Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector use native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea. The Java heap isn't the only thing that consumes memory in a JVM implementation: not everything goes in the JVM heap, and the heap takes up far more native memory than what you specify, for management and bookkeeping.
Native Code
App servers often include native code that runs outside the JVM but still shows up to the OS as memory associated with the process that hosts the app server.
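If you want to see the split from inside the JVM itself, here is a minimal sketch of my own (standard java.lang.management API, not something from the article) that prints heap vs. non-heap usage as the JVM reports it. The OS-level resident size will normally still be larger than the sum of the two, because thread stacks, GC bookkeeping and direct buffers are not fully reflected here:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    public class MemoryReport {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memory.getHeapMemoryUsage();       // the part -Xmx limits
            MemoryUsage nonHeap = memory.getNonHeapMemoryUsage(); // code cache, perm gen / metaspace, ...
            // getMax() can be -1 when no explicit limit is set
            System.out.printf("Heap     used=%d MB committed=%d MB max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
            System.out.printf("Non-heap used=%d MB committed=%d MB%n",
                    nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
        }
    }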

Related

JVM - XMX limit vs Memory consumed by the process

I have 2 questions regarding the resident memory used by a Java application.
Some background details:
I have a java application set up with -Xms2560M -Xmx2560M.
The java application is running in a container. k8s allows the container to consume up to 4GB.
The issue:
Sometimes the process is restarted by k8s with error 137; apparently the process has reached the 4GB limit.
Application behaviour:
Heap: the application seems to work in a way where all memory is used, then freed, then used and so on.
This snapshot illustrates it. The Y axis is the free heap memory in percent, computed by the application as ((double) Runtime.getRuntime().freeMemory() / Runtime.getRuntime().totalMemory()) * 100.
I was also able to confirm it using HotSpotDiagnosticMXBean, which allows creating a dump with only reachable objects and one that also includes unreachable objects.
The one with the unreachable objects was about the size of the -Xmx value.
In addition, this is also what I see when creating a dump on the machine itself: the resident memory can show 3GB while the size of the dump is 0.5GB (taken with jcmd).
First question:
Is this behaviour reasonable, or does it indicate a memory usage issue?
It doesn't seem like a typical leak.
Second question
I have seen other questions trying to understand what the resident memory used by the application is composed of.
Worth mentioning:
Java using much more memory than heap size (or size correctly Docker memory limit)
And
Native memory consumed by JVM vs java process total memory usage
Not sure if any of this can account for the 1-1.5 GB between the Xmx and the 4GB k8s limit.
If you were to provide some sort of checklist to close in on the problem, what would it be? (It feels like I can't see the forest for the trees.)
Are there any free tools that can help (besides the ones for analysing a memory dump)?
You allocate 2.5 GB for the heap; the JVM itself and the OS components will also take some memory (the rule of thumb here is 1 GB, but the real figures may differ significantly, especially when running in a container), so we are already at 3.5 GB.
Since Java 8, the JVM no longer stores class metadata on the heap but in an area called 'Metaspace'; depending on what your program is doing and how many classes and ClassLoaders it uses, this area can easily grow beyond 0.5 GB. This needs to be considered in addition to the things mentioned in the linked posts.
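If you want to see how big that area actually gets at runtime, here is a small sketch of my own that lists all memory pools; on HotSpot the non-heap ones include "Metaspace", "Compressed Class Space" and the code cache, but other JVMs may name their pools differently:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class PoolUsage {
        public static void main(String[] args) {
            // Prints every memory pool the JVM exposes, heap and non-heap alike.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                System.out.printf("%-25s type=%s used=%d MB committed=%d MB%n",
                        pool.getName(), pool.getType(),
                        pool.getUsage().getUsed() >> 20,
                        pool.getUsage().getCommitted() >> 20);
            }
        }
    }

You can also cap that area explicitly with -XX:MaxMetaspaceSize so it cannot grow without bound.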
As well as the answer posted by tquadrat, you also have to consider what happens when the application uses native memory through memory-mapped byte buffers, which is outside the heap space but still taken up by the process.
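To illustrate that point: direct and memory-mapped buffers are allocated in native memory, so they never count against -Xmx but do count towards the resident size of the process. A rough sketch of my own, using the standard BufferPoolMXBean to make that allocation visible:

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.nio.ByteBuffer;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            // 256 MB of native memory: invisible on the heap graphs, visible to the OS.
            ByteBuffer direct = ByteBuffer.allocateDirect(256 * 1024 * 1024);
            direct.putLong(0, 42L); // touch it so the allocation is actually used

            // The platform exposes "direct" and "mapped" buffer pools.
            for (BufferPoolMXBean pool :
                    ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
                System.out.printf("%-7s count=%d memoryUsed=%d MB%n",
                        pool.getName(), pool.getCount(), pool.getMemoryUsed() >> 20);
            }
        }
    }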

Monitoring Java internal objects & memory usage

I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that memory usage increases quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so, it can reach up to 2GB.
Shortly after, it will throw an OutOfMemory exception (with memory usage at 2GB - 2.5GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, even down to the level of seeing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
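If you would rather take a dump on demand (for example right when your monitoring shows the spike), a hedged sketch along these lines works on HotSpot-based JVMs; it relies on com.sun.management.HotSpotDiagnosticMXBean, which is Oracle/OpenJDK specific, and the output path is just an example:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class DumpHeap {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diagnostic =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // 'true' = dump only live (reachable) objects, which forces a GC first;
            // 'false' would include unreachable objects as well.
            diagnostic.dumpHeap("C:\\temp\\app-heap.hprof", true);
            System.out.println("Heap dump written");
        }
    }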
quoting:
the process starts with ~200MB and after a week or so, it can reach up to 2GB. Shortly after it will generate OutOfMemory exception (the memory usage will be 2GB - 2.5GB).
The problem might not be as simple as seeing which Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on java 8)
Alternatively, you could just wire up Nashorn to a command line or run your program via jjs: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
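Whichever tool you use to drive the code, the underlying idea is simply to run the suspect operation repeatedly and compare the used heap after a GC between iterations; if that number keeps creeping up, the code path you just exercised is a leak candidate. A rough, self-contained sketch of that idea (doSuspectWork() is a placeholder for your own code):

    public class LeakProbe {
        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 10; i++) {
                doSuspectWork();
                System.gc();            // only a hint, but usually good enough to see a trend
                Thread.sleep(500);      // give the collector a moment
                long used = Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
                System.out.printf("iteration %d: used heap after GC = %d MB%n", i, used >> 20);
            }
        }

        private static void doSuspectWork() {
            // placeholder: call the code you suspect of leaking here
        }
    }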

Native memory usage in Linux seems to be much higher than observed through JVM itself (e.g. through JConsole)

We have a customer that uses WebSphere 7.0 on RedHat Linux Server 5.6 (Tikanga) with IBM JVM 1.6.
When we look at the OS reports for memory usage, we see very high numbers, and at some point the OS starts to swap due to a lack of memory.
On the other hand, the JConsole graphs show perfectly normal memory behaviour: the heap size increases until GC is invoked as expected, then drops back to ~30% in regular cycles. Non-heap is as expected and very constant in size.
Does anyone have an idea what this extra native memory usage can be attributed to?
I would check that you are looking at resident memory and not virtual memory (the latter can be very high).
If you swap, even slightly, this can cause the JVM to halt for very long periods of time on a GC. If your application is not locking up for seconds or minutes, it probably isn't swapping (another program could be).
If your program really is using native memory, this will most likely be due to a native library you have imported. Having a look at /proc/{pid}/maps may give you a clue, but more likely you will have to check which native libraries you are loading.
Note: if you have lots of threads, the stack space for all these threads can add up. I would try to keep the thread count to a minimum if you can, but I have seen JVMs with many thousands of threads, and this can chew up native memory. GUI components can also use native memory, but I assume you don't have any of those.
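To get a feeling for how much the thread stacks alone might contribute, you can at least count the live threads from inside the JVM. A sketch of my own; the per-thread figure is an assumption, since the real stack size depends on -Xss and the OS default (commonly somewhere around 512 KB to 1 MB per thread):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    public class ThreadStackEstimate {
        public static void main(String[] args) {
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            int count = threads.getThreadCount();
            long assumedStackKb = 1024; // assumption: ~1 MB per thread; check your -Xss / ulimit -s
            System.out.printf("Live threads: %d, rough native stack footprint: ~%d MB%n",
                    count, (count * assumedStackKb) >> 10);
        }
    }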

Under what circumstances does Java performance degrade with more memory?

We're load testing a Java 1.6 application in our DEV environment. The JVM heap allocation is 2Gb: -Xms2048m -Xmx2048m. Under load testing, the app runs smoothly, never uses more than 1.25Gb of heap, and garbage collection is totally normal.
In our UAT environment, we run the load test with the same parameters; the only difference is the JVM heap, which is allocated 4Gb: -Xms4096m -Xmx4096m. Otherwise, the hardware is exactly the same as DEV. But during load testing, the performance is horrendous: the app eats up nearly the entire heap, and garbage collection runs rampant.
We've run these tests over and over again and eliminated all possible factors that might influence performance, but the results are the same. Under what circumstances can this be the case?
There is something different about your application in the two environments.
Judging from the symptoms, it is (IMO) unlikely to be hardware, operating-system performance tuning, or a difference in the JVM versions. It goes without saying that it is unlikely to be due to the application simply having more memory.
(It is not inconceivable that your application might do something strange ... like sizing some data structures based on the maximum heap size and getting the calculations wrong. But I think you'd be aware of that possibility, so let's ignore it for now.)
It is probably related to a difference in the OS environment, e.g. a different version of the OS or some application, differences in networking, differences in locales, et cetera. But the bottom line is that it is 99% certain that there is a memory leak in your application when run in UAT, and that memory leak is what is chewing up heap memory and overloading the GC.
My advice would be to treat this as a storage leak problem and use the standard tools and techniques to track down the cause. In the process, you will most likely figure out why this only occurs in your UAT environment.
The culprit could be garbage collection. Normal "stop-the-world" collection caused us some performance problems: the server software was running very slowly, yet the load on the server was also low. Eventually we found out that under certain scenarios (operations producing loads of garbage), a single "stop-the-world" garbage collector thread was holding up the entire application.
Moving to concurrent garbage collection alleviated the problem, with the start-up parameters -XX:+UseParallelOldGC -XX:ParallelGCThreads=8. We were using "only" 2GB heaps in tests and production, but it is also worth noting that the amount of time GC takes goes up with a larger heap (even if your software never actually uses all of it).
You might want to read more about the different garbage collector options and tuning here: Java SE 6 HotSpot[tm] Virtual Machine Garbage Collection Tuning.
Also, the answers to this question could provide some help: Java very large heap sizes.
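Before reaching for tuning flags, it is easy to confirm from inside the application how much time the collectors are actually taking. This sketch (my own, standard java.lang.management API) prints how often each collector has run and how much accumulated time it has spent; sample it periodically under load and compare DEV with UAT:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%-25s collections=%d totalTime=%d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }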
It would be worthwhile to analyze the heap dumps on both these machines and understand what is consuming the heap differently in the two environments. Histograms will help.

JVM memory usage out of control

I have a Tomcat webapp which does some pretty memory- and CPU-intensive tasks on behalf of clients. This is normal and is the desired functionality. However, when I run Tomcat, memory usage skyrockets over time to upwards of 4.0GB, at which point I usually kill the process as it interferes with everything else running on my development machine:
I thought I had inadvertently introduced a memory leak with my code, but after checking into it with VisualVM, I'm seeing a different story:
VisualVM shows the heap taking up approximately a GB of RAM, which is what I set it to do with CATALINA_OPTS="-Xms256m -Xmx1024m".
Why does my system see this process as taking up a ton of memory when, according to VisualVM, it's hardly taking up any at all?
After a bit of further sniffing around, I've noticed that if multiple jobs run simultaneously in the application, memory does not get freed. However, if I wait for each job to complete before submitting another to my BlockingQueue (serviced by an ExecutorService), then memory is recycled effectively. How can I debug this? Why would garbage collection/memory reuse differ?
You can't control what you want to control. -Xmx only controls the Java heap; it doesn't control the JVM's consumption of native memory, which is consumed completely differently depending on the implementation. VisualVM only shows you what the heap is consuming; it doesn't show what the entire JVM consumes as native memory as an OS process. You will have to use OS-level tools to see that, and they will report radically different numbers, usually much larger than anything VisualVM reports, because the JVM uses native memory in an entirely different way.
From the article Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector use native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea. -Xmx doesn't control what you think it controls: it controls the JVM heap, not everything goes in the JVM heap, and the heap takes up far more native memory than what you specify, for management and bookkeeping.
Plain and simple: the JVM uses more memory than what is supplied via -Xms, -Xmx, and the other command-line parameters.
Here is a very detailed article on how the JVM allocates and manages memory; it isn't as simple as you expected based on the assumptions in your question, and it is well worth a comprehensive read.
ThreadStack size in many implementations has minimum limits that vary by operating system and sometimes JVM version; the thread stack setting is ignored if you set the limit below the native minimum for the JVM or the OS (sometimes ulimit on *nix has to be set instead). Other command-line options work the same way, silently defaulting to higher values when the supplied values are too small. Don't assume that all the values you pass in represent what is actually used.
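You can see the same "it is only a suggestion" behaviour in plain Java: the Thread constructor that takes a stack size explicitly documents that the JVM is free to round the value up to a platform minimum or ignore it entirely. A minimal sketch:

    public class StackSizeHint {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = new Runnable() {
                public void run() {
                    System.out.println("running with a requested 64 KB stack");
                }
            };
            // The fourth argument is the requested stack size in bytes; per the Javadoc,
            // the JVM may round it up or ignore it, just like too-small -Xss values.
            Thread t = new Thread(null, task, "tiny-stack", 64 * 1024);
            t.start();
            t.join();
        }
    }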
The classloaders, and Tomcat has more than one, eat up lots of memory that isn't easily documented. The JIT eats up a lot of memory, trading space for time, which is a good trade-off most of the time.
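Both of those contributions can at least be observed with the standard management beans. Here is a small sketch of my own that reports the loaded-class count and the cumulative JIT compilation time, which gives a rough feel for how much work (and therefore native memory) the classloaders and the compiler are doing:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.CompilationMXBean;
    import java.lang.management.ManagementFactory;

    public class ClassAndJitStats {
        public static void main(String[] args) {
            ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
            CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
            System.out.printf("Classes: loaded=%d totalLoaded=%d unloaded=%d%n",
                    classes.getLoadedClassCount(),
                    classes.getTotalLoadedClassCount(),
                    classes.getUnloadedClassCount());
            if (jit != null && jit.isCompilationTimeMonitoringSupported()) {
                System.out.printf("JIT (%s): total compilation time = %d ms%n",
                        jit.getName(), jit.getTotalCompilationTime());
            }
        }
    }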
You should also check CPU usage and the garbage collector.
It is possible that garbage collection pauses and the CPU that GC consumes are further slowing down your machine.
