Issue Description:
We are facing the following issue in a web application (on CQ5):
System Configurations details:
• System memory: 7GB
• Xmx: 3.5 GB
• Xms: 1 GB
• MaxPermSize: 300 MB
• Max number of observed threads: 620 (including 300 HTTP request-serving threads)
• Xss: default
The issue is that the memory consumed by the CQ5 Java process (which runs the servlet engine) keeps increasing over time.
Once it goes above 6 to 6.5 GB (and system memory use reaches 7 GB), the JVM stops responding, due to memory shortage and heavy paging activity.
The heap and permgen, however, collectively stay at or below 3.8 GB (3.5 + 0.3).
This means that non-heap memory (native memory plus thread stack space) keeps growing from a few hundred MB (after a CQ5 server restart) to 2-3 GB or more (after long runs of 4-5 hours under heavy load). For scale: 620 threads at the common 64-bit default of 1 MB per stack would by themselves account for roughly 620 MB.
So our goal is basically to find the leaks in non-heap memory, which could be introduced by third-party libraries, native code reached indirectly from Java, and so on. We are not receiving any OutOfMemoryErrors.
Help needed:
Now, most of the tools we used give us good information and detail about heap memory, but we are unable to get a view into native memory. Please share your suggestions on how to monitor non-heap memory (at the object level or at the memory-area level).
If any of you have faced a similar issue (a non-heap memory leak) in your applications and would like to share how you fixed it, please share your experience.
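As a starting point, the JMX platform MXBeans expose at least the JVM-managed non-heap regions; a minimal sketch (this only covers pools the JVM itself tracks, not raw native allocations from third-party libraries; for those, newer HotSpot JVMs offer Native Memory Tracking: start the JVM with -XX:NativeMemoryTracking=summary and inspect with jcmd <pid> VM.native_memory summary):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class NonHeapMonitor {
    public static void main(String[] args) {
        // Aggregate non-heap usage (permgen/metaspace, code cache, ...)
        System.out.println("Non-heap: "
                + ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage());
        // Per-area breakdown; pool names vary by JVM version
        // (e.g. "Perm Gen" and "Code Cache" on older HotSpot).
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.printf("%-20s %-16s %s%n",
                    pool.getName(), pool.getType(), pool.getUsage());
        }
    }
}

Polling this periodically and logging the deltas at least tells you whether the growth is inside a JVM-tracked region or in untracked native memory.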
This is really dependent on your specific implementation: what code you've deployed, what infrastructure you're using, what version you're running, what application servers (if any) you're using, etc.
That said, I have experienced memory leak issues with CQ5.5 and the Image Servlet. It's actually a memory leak deep down in one of the Java libraries that powers the Image Servlet, way under the covers. It's remedied by a Java version update, but it's triggered by the Image Servlet. It's a long shot that this is your issue, but probably worth mentioning.
Related
I have a spring-boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching about 500M until I restart the application. After a fresh restart it takes something like 150M. The spring-boot app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the spring-boot app seems to use 343M of memory (RSS). I got a heap dump of the application and analysed it. According to the analysis, the heap dump is only 31M in size. So where does the missing 300M lie? How does the heap dump correlate with the actual memory the application is using? And how can I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, where is it? How do I discover what is consuming the memory of the spring-boot application?
So where does the missing 300M lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack-option called "head-room" - usually set to 5 to 10% of total available memory in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input-parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - for a recent Spring Boot application, about 19 000. This seems to grow with each Spring version. Note that the resulting limit is a hard maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64 MB. Peak "CodeCache" usage is often during application startup, and going from 192 MB to 64 MB saves a lot of memory that can be used as heap instead. Applications that have a lot of active code during runtime (e.g. a web application with a lot of endpoints) may need more "CodeCache". If "CodeCache" is too low, your application will use a lot of CPU without getting much done (this can also manifest during startup: your application can take a very long time to start). "CodeCache" is reported by the JVM as a non-heap memory region, so it should not be hard to measure.
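As a sketch of that measurement (the pool names are an assumption that varies by JVM version: a single "Code Cache" pool on Java 8 and earlier, several segmented "CodeHeap" pools on Java 9+):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Code")) {
                // Peak usage matters most: it is often reached during startup.
                System.out.printf("%s: used=%dK peak=%dK max=%dK%n",
                        pool.getName(),
                        pool.getUsage().getUsed() / 1024,
                        pool.getPeakUsage().getUsed() / 1024,
                        pool.getUsage().getMax() / 1024);
            }
        }
    }
}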
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides the heap, you have thread stacks, metaspace, the JIT code cache, native shared libraries, and the off-heap store (direct allocations).
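The direct-allocation part is the easiest of these to inspect from inside the JVM; a minimal sketch using the buffer-pool MXBeans (Java 7+; this shows direct and mapped buffers only, not native shared libraries or JNI allocations):

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferPools {
    public static void main(String[] args) {
        // "direct" covers ByteBuffer.allocateDirect; "mapped" covers
        // memory-mapped files. Both live outside the Java heap.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s: count=%d used=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
    }
}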
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1 MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300 MB of stack memory.
Consider making all your thread pools fixed-size (or at least provide reasonable upper bounds). Even if this proves not to be root cause for what you observed, it makes the app behaviour more deterministic and will help you better isolate the problem.
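A minimal sketch of both points, assuming nothing about your app: read the peak thread count from JMX to estimate stack memory, and bound your own pools explicitly:

import java.lang.management.ManagementFactory;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadStackCheck {
    public static void main(String[] args) {
        int peak = ManagementFactory.getThreadMXBean().getPeakThreadCount();
        // At a 1 MB default stack size this is a rough upper bound;
        // actual committed memory is usually lower (stacks grow lazily).
        System.out.printf("Peak threads: %d (~%d MB reserved for stacks at -Xss1m)%n",
                peak, peak);
        // A fixed-size pool puts a hard cap on the stacks it can add.
        ExecutorService workers = Executors.newFixedThreadPool(16);
        workers.shutdown();
    }
}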
We can view the memory consumption of a Spring Boot app this way:
Build the Spring Boot app as a .jar file and run it with java -jar springboot-example.jar.
Now open a terminal, type jconsole, and hit enter.
Note: the .jar file must already be running before you open jconsole.
A window will appear, listing the application you just started under the Local Process section.
Select springboot-example.jar and click the Connect button.
When prompted, choose the Insecure connection option.
You will then see the overview (heap memory, threads, classes, CPU usage).
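If you only need the headline numbers without a GUI, the heap figures jconsole shows can also be printed from inside the app; a trivial sketch:

public class MemoryOverview {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // used = committed minus free; max corresponds to -Xmx
        System.out.printf("Heap used=%d MB committed=%d MB max=%d MB%n",
                (rt.totalMemory() - rt.freeMemory()) / mb,
                rt.totalMemory() / mb, rt.maxMemory() / mb);
    }
}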
You can use "JProfiler" https://www.ej-technologies.com/products/jprofiler/overview.html
remotely or locally to monitor running java app memory usage.
You can use YourKit with IntelliJ IDEA, if that is the IDE you are using, to troubleshoot memory-related issues in your Spring Boot app. I have used it before and it provides good insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
An interesting article about memory profiling: https://www.baeldung.com/java-profilers
I have 2 questions regarding the resident memory used by a Java application.
Some background details:
I have a Java application set up with -Xms2560M -Xmx2560M.
The Java application is running in a container. k8s allows the container to consume up to 4 GB.
The issue:
Sometimes the process is restarted by k8s with error 137 (SIGKILL, typically the OOM killer); apparently the process has reached 4 GB.
Application behaviour:
Heap: the application seems to work in a way where all memory is used, then freed, then used, and so on. A snapshot (omitted here) illustrates it; the Y axis is the free heap percentage, extracted by the application as ((double) Runtime.getRuntime().freeMemory() / Runtime.getRuntime().totalMemory()) * 100.
I was also able to confirm it using HotSpotDiagnosticMXBean, which allows creating a dump with only reachable objects and one that also includes unreachable objects.
The dump with unreachable objects was about the size of the Xmx.
In addition, this is also what I see when creating a dump on the machine itself: the resident memory can show 3 GB while the size of the dump is 0.5 GB (taken with jcmd).
First question:
Is this behaviour reasonable, or does it indicate a memory usage issue?
It doesn't seem like a typical leak.
Second question
I have seen other questions trying to understand what the resident memory used by the application is comprised of.
Worth mentioning:
Java using much more memory than heap size (or size correctly Docker memory limit)
And
Native memory consumed by JVM vs java process total memory usage
I am not sure whether any of this can account for the 1-1.5 GB between the Xmx and the 4 GB k8s limit.
If you were to provide some sort of checklist to close in on the problem, what would it be? (It feels like I can't see the forest for the trees.)
Are there any free tools that can help (besides the ones for analysing a memory dump)?
You allocate 2.5 GB for the heap; the JVM itself and the OS components will also take some memory (the rule of thumb here is 1 GB, but the real figures may differ significantly, especially when running in a container), so we are already at 3.5 GB.
Since Java 8, the JVM no longer stores class metadata on the heap but in an area called 'metaspace'; depending on what your program is doing, how many classes and how many ClassLoaders it uses, this area may easily grow above 0.5 GB. This needs to be considered in addition to the things mentioned in the linked posts.
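If you suspect metaspace, it helps to bound it and watch it; a minimal sketch (the pool name "Metaspace" is HotSpot-specific, and the -XX:MaxMetaspaceSize value in the comment is just an example):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceWatch {
    public static void main(String[] args) {
        // Start the JVM with e.g. -XX:MaxMetaspaceSize=512m so unbounded
        // growth becomes a diagnosable OutOfMemoryError, not silent RSS creep.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Metaspace")) {
                System.out.printf("%s: used=%d MB%n",
                        pool.getName(), pool.getUsage().getUsed() / (1024 * 1024));
            }
        }
        // A steadily climbing class count points at a ClassLoader leak.
        System.out.println("Loaded classes: "
                + ManagementFactory.getClassLoadingMXBean().getLoadedClassCount());
    }
}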
As well as the answer posted by tquadrat, you also have to consider what happens when the application uses direct or memory-mapped byte buffers, whose native memory is outside the heap space but still taken up by the process.
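A minimal sketch of that effect: the buffer below barely touches the heap, yet the process RSS grows by roughly its size (direct memory is bounded by -XX:MaxDirectMemorySize, which by default is roughly the -Xmx value):

import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        // 256 MB of native memory: invisible in heap dumps, visible in RSS.
        ByteBuffer direct = ByteBuffer.allocateDirect(256 * 1024 * 1024);
        long heapDelta = (rt.totalMemory() - rt.freeMemory()) - heapBefore;
        System.out.printf("Heap delta: %d KB for a %d MB direct buffer%n",
                heapDelta / 1024, direct.capacity() / (1024 * 1024));
    }
}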
I have a J2EE web application running on MS Windows Server 2008. The application server is Oracle WebLogic 11g. The server has 32 GB of RAM, but users keep complaining that the application is very slow.
So I checked the JVM config and found that the allocated heap size is just 1 GB, while the server actually has 32 GB of RAM.
Then I checked the JVM heap free percentage and found that even when the server is at its busiest, there is still 50% of the heap free.
So I want to know whether it would help to increase the heap size to, say, 2 GB or 4 GB.
I have read in some articles that when too much heap is allocated to a JVM, garbage collection can take a long time.
The correct way to make an application faster is to use various tools and other sources of information that are available to you to figure out why it is slow. For example:
Use web browser tools to see if there is a problem with web performance, etc.
Use a profiler to identify the execution hotspots in the code
Use system level tools to figure out where bottlenecks are; e.g. front-end versus backend, network traffic, disk I/O ... swapping / thrashing effects.
Check system logfiles and application logfiles for clues.
Turn on JVM GC logging and see if there is a correlation between GC and "sluggish" requests.
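For the GC point, you don't even need log files to get a first signal; a minimal sketch that reads the GC MXBeans (sample these periodically and correlate jumps with slow requests):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            // getCollectionTime() is accumulated pause time in milliseconds.
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}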
Then you address the causes.
Randomly fiddling with the heap / GC parameters based only on gut feeling is unlikely to work.
FWIW: I expect that increasing the heap size is unlikely to improve things.
You can increase your heap memory to about 75% of physical memory, but the chances that this will solve the 'slow problem' are very low. The problem is probably not there; you should analyse the duration of the most-used functions and see where they take longer than users would expect.
I recently updated a Play v1 app to use OpenJDK v1.8 on Heroku and am finding that after typical load I start getting R14 errors relatively quickly - the physical memory has exceeded the 512 MB limit and is now swapping, impacting performance. I need to restart the application frequently to stop the R14 errors. The application is a very typical web application and I would expect it to run comfortably within the memory constraints.
Here is a screenshot from New Relic showing the physical memory being exceeded (I don't really have 59 JVMs; that's just the result of numerous restarts).
I don't quite understand why the "used heap" appears to impact the "physical memory" when it hasn't come close to the "committed heap" and why the "physical memory" seems to more closely follow the "used heap" rather than the "committed heap"
I've used the Eclipse Memory Analyzer to analyse some heap dumps and the Leak Suspects report mentions play.Play and play.mvc.Router as suspects though I'm not sure if that's expected and/or if they are directly related to the physical memory being exceeded.
See the reports generated by MAT for more details: Leak Suspects and Top Components.
Any guidance on how to resolve this would be great. I'm developing on OS X with Oracle Java 1.8 and have not yet been able to replicate the exact Heroku dyno environment locally (e.g. Ubuntu, OpenJDK 1.8) to attempt to reproduce the issue.
UPDATE 11/12/2014:
Here is the response from Heroku Support:
Play, and in turn Netty, allocate direct memory ByteBuffer objects for IO. Because they use direct memory, they will not be reported by the JVM (Netty 4.x now uses ByteBuf objects, which use heap memory).
The 65 MB of anonymous maps that your app is using is not terribly uncommon (I've seen some use 100+ MB). Some solutions include:
Limiting the concurrency of your application (possibly by setting play.pool)
Increase your dyno size to a 2X dyno.
Decrease your Xmx setting to 256m. This will give the JVM more room for non-heap allocation.
Please let us know if these solutions do not work for you, or if you continue to experience problems after adopting them.
If you want to reproduce the Heroku environment locally, I recommend installing Docker, and using this Docker image with your application. Let us know if you have any trouble with this.
Possible Duplicate:
Limit jvm process memory on ubuntu
In my application I'm uploading documents to a server, which does some analysis on them.
Today I analyzed my application using jconsole.exe and heap dumps, trying to find out whether I have memory issues / a memory leak. I thought I might be suffering from one, since my application's RAM usage grows considerably while it is running.
As I watched the heap / code cache / perm gen etc. with jconsole after some runs, I was surprised to see the following:
picture link: https://www7.pic-upload.de/13.06.12/murk9qrka8al.png
As you can see in the jconsole view on the right, the heap increases when I'm doing analysis-related work, but it also decreases back to its normal size when the work is over. On the left you can see the htop output of the server the application is deployed on. And there it is: although the heap acts normally and the garbage collector also seems to run correctly, the RAM usage is incredibly high at almost 3.2 GB.
This is really confusing me. I was wondering whether my JVM stack could have something to do with this. I did some research, and what I found described the VM stack as a small area of memory of only a few megabytes (or even only KB).
My technical background:
The application is running on GlassFish v3.1.2
The database is MySQL
Hibernate is used as the ORM framework
The Java version is 1.7.0_04
It's implemented using Vaadin
The MySQL database and GlassFish are the only things running on this server
I'm constructing XML-DOM-style documents using JAXB during the analysis and saving them in the database
Uploaded documents are either .txt or .pdf files
The OS is Linux
Solution?
Do you have any ideas why this happens and what I can do to fix it? I'm really surprised at the moment, since I thought the memory problems came from a memory leak causing the heap to explode. But now the heap isn't the problem; it's the RAM that goes higher and higher while the heap stays at the same level. And I don't know what to do to resolve it.
Thanks for every thought you're sharing with me.
Edit: Maybe I should also point out that this behaviour currently makes it impossible for me to let other people use my application. When the RAM is full and the server no longer responds, I'm out of options.
Edit 2: Maybe I should also add that the RAM keeps increasing after every further successful analysis.
There are lots more things that use memory in a JVM implementation than the heap settings.
The heap settings via -Xmx only control the Java heap; they don't control consumption of native memory by the JVM, which is consumed completely differently depending on the implementation.
From the following article, Thanks for the Memory (Understanding How the JVM Uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector uses native memory you can't control:
More native memory is required to maintain the state of the memory-management system maintaining the Java heap. Data structures must be allocated to track free storage and record progress when collecting garbage. The exact size and nature of these data structures varies with implementation, but many are proportional to the size of the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static compiler such as gcc requires memory to run), but both the input (the bytecode) and the output (the executable code) from the JIT must also be stored in native memory. Java applications that contain many JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure and method logic. They also use classes from the Java runtime class libraries (such as java.lang.String) and may use third-party libraries. These classes need to be stored in memory for as long as they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea. The Java heap isn't the only thing that consumes memory in a JVM implementation: not everything goes in the JVM heap, and the heap takes up more native memory than what you specify, for management and bookkeeping.
Native Code
App servers often have native code that runs outside the JVM but still shows up to the OS as memory associated with the process that controls the app server.
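One way to see that gap from inside the process, at least on Linux, is to compare the OS view (VmRSS from procfs) with what the JVM reports for the heap; a minimal sketch (Linux-only):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class RssVsHeap {
    public static void main(String[] args) throws IOException {
        long heapUsed = Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory();
        // VmRSS is what the OS accounts to this process, including every
        // native allocation the JVM never reports.
        String rss = Files.readAllLines(Paths.get("/proc/self/status")).stream()
                .filter(line -> line.startsWith("VmRSS"))
                .findFirst().orElse("VmRSS: unavailable");
        System.out.printf("Heap used: %d MB; %s%n",
                heapUsed / (1024 * 1024), rss.trim());
    }
}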