Out of Memory - web applications - java

I am trying to figure out why Jetty 6.1.22 is running out of memory on my laptop. I have 2 web applications running JBoss Seam, Hibernate (with EHCache), and separate Quartz scheduler instances.
Even under light load, the server dies with an OutOfMemoryError.
What can I look for? Could it be that I am not properly closing input streams or file handles?
I tried profiling the application with NetBeans, but the profiler works only intermittently; it usually ends up locking up even though it doesn't use much CPU or memory.
Walter

What are your JVM's execution parameters?
Try increasing the available heap memory through the -Xms (initial heap size), -Xmx (maximum heap size) and -Xmn (young generation size) JVM flags.
You can also monitor your application server with JConsole. It's usually helpful for finding out where your application is leaking.
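For example, a launch command along these lines (the sizes here are illustrative, not a recommendation; tune them to your machine):
$ java -Xms256m -Xmx1024m -Xmn128m -jar start.jar
-Xms and -Xmx pin the initial and maximum heap, and -Xmn fixes the young generation size within that heap.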

Add -XX:+HeapDumpOnOutOfMemoryError when invoking the JVM, and when you hit the OOM situation you will get a .hprof file dumped. You can open it later with several tools and see where the memory is going...
The tool I use is Eclipse Memory Analyzer; it's pretty good.
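A minimal example (the dump directory is illustrative; -XX:HeapDumpPath is a standard HotSpot flag):
$ java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar start.jar
When an OutOfMemoryError occurs, the JVM writes a java_pid<pid>.hprof file into /tmp/dumps, which you can then open in Eclipse Memory Analyzer.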

I can strongly recommend attaching to the troublesome program with jvisualvm from the JDK.
It allows you to investigate memory and CPU usage over time and to inspect what is happening in general.
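If the JDK's bin directory is on your PATH, attaching is quick (jps lists running JVMs; --openpid is a VisualVM command-line option, though simply picking the process in the GUI works just as well):
$ jps                      # find the pid of the Jetty process
$ jvisualvm --openpid <pid>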

Related

How to profile spring-boot application memory consumption?

I have a Spring Boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching around 500M before I restart the application; after a fresh restart it takes something like 150M. The app should be a fairly stateless REST app, and there shouldn't be any objects left around after a request is completed, so I would expect the garbage collector to take care of this.
Currently in production the app seems to use 343M of memory (RSS). I took a heap dump of the application and analysed it: the heap dump is only 31M in size. So where does the missing 300M lie? How does the heap dump correlate with the actual memory the application is using? And how can I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How do I discover what is consuming the memory of the Spring Boot application?
So where does the missing 300M lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack-option called "head-room" - usually set to 5 to 10% of total available memory in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from "head-room", the memory calculator needs 4 additional input-parameters to calculate the Java options that control memory usage.
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - for a recent Spring Boot application, about 19 000. This seems to grow with each Spring version. Note that this is a maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always). You can measure the real value for your own application, as shown below.
thread-count - 40 for a "normal usage" Spring Boot web application.
jvm-options - see the two parameters below.
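A quick way to measure the actual loaded-class count is the standard ClassLoadingMXBean from java.lang.management (no extra dependencies; a minimal sketch, the class name is made up for illustration):

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassCount {
    public static void main(String[] args) {
        // Run standalone this reports only this tiny JVM; call the same API
        // from inside your Spring Boot app (e.g. a debug endpoint) to get
        // the real figure once the application has warmed up.
        ClassLoadingMXBean bean = ManagementFactory.getClassLoadingMXBean();
        System.out.println("Loaded classes: " + bean.getLoadedClassCount());
    }
}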
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64MB. Peak "CodeCache" usage is often during application startup, going from 192 MB to 64 MB saves a lot of memory which can be used as heap. Applications that have a lot of active code during runtime (e.g. a web-application with a lot of endpoints) may need more "CodeCache". If "CodeCache" is too low, your application will use a lot of CPU without doing much (this can also manifest during startup: if "CodeCache" is too low, your application can take a very long time to startup). "CodeCache" is reported by the JVM as a non-heap memory region, it should not be hard to measure.
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides heap, you have thread stacks, meta space, JIT code cache, native shared libraries and the off-heap store (direct allocations).
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to allocate 1MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300MB of stack memory.
Consider making all your thread pools fixed-size (or at least giving them reasonable upper bounds), as sketched below. Even if this proves not to be the root cause of what you observed, it makes the app behaviour more deterministic and will help you better isolate the problem.
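A minimal sketch of a bounded pool (the sizes and names are illustrative; the point is the fixed thread count plus the bounded queue, so neither threads nor queued tasks can grow without limit):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                16, 16,                       // core == max: fixed size
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),            // bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure

        pool.submit(() -> System.out.println("task ran"));
        pool.shutdown();
    }
}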
We can view the memory consumption of a Spring Boot app this way:
Package the Spring Boot app as a .jar file and run it with java -jar springboot-example.jar.
Open a terminal, type jconsole and hit Enter. (Note: the .jar file must already be running before you open JConsole.)
The application you just started appears in the Local Process section. Select springboot-example.jar and click the Connect button.
If a prompt about an insecure connection appears, choose the Insecure connection option.
You will then see an overview of Heap Memory, Threads and so on.
You can use "JProfiler" https://www.ej-technologies.com/products/jprofiler/overview.html
remotely or locally to monitor running java app memory usage.
You can use YourKit with IntelliJ IDEA, if that is your IDE, to troubleshoot memory-related issues in your Spring Boot app. I have used it before and it provides good insight into applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers

Unexpected Tomcat memory usage increase monitored by JConsole

I am learning the JVM and am trying to analyse the memory used by Tomcat. When Tomcat started, Eden usage was as follows:
(Figure: Tomcat Eden space usage as monitored by JConsole)
No WAR was deployed in Tomcat, and all the default contexts such as host-manager and manager were removed. Tomcat was started with the default configuration and no requests were served. The behavior was the same with and without debug mode. When a GC ran, memory usage decreased. What is causing the memory usage to increase? Could anyone help me? Thank you.
Runtime environment:
jdk1.8.0_112
apache-tomcat-8.5.9
I assume your question is about the memory-utilization fluctuation you see in JConsole. I will try to provide my two cents here...
Keep in mind that even though there may be no WAR file deployed, there is still a lot of code actively running on that box to perform various tasks. One example is the file monitor to identify when a new WAR file is deployed to your work directory. Every piece of code routinely allocates memory that eventually gets garbage collected, thus the memory fluctuation you see.
Another piece of code that is likely consuming much of that memory is the profiler itself. For you to be able to get the metrics you get through JConsole, there needs to be an agent running on that box that is collecting (and storing in memory) the information you see.
If you want to have a better idea of what is actually being allocated by tomcat, I suggest getting your hands on a good Java Profiler that can give you more in-depth information on object allocations. I personally suggest YourKit as it is one of the most powerful profilers I have used in my many years of building software.
Now, there may be other reasons, but don't assume that Tomcat is doing nothing just because you are not directly interacting with it.
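If you want to watch those allocation/GC cycles without attaching an agent at all, the JDK's jstat tool is enough (get the pid from jps; 1000 is the sampling interval in milliseconds):
$ jstat -gcutil <pid> 1000
The E column is Eden utilization in percent; you should see it climb and then drop at each young GC, matching the JConsole graph.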
Hope this helps, good luck!

Monitoring Java internal objects & memory usage

I have a Java web server running as a Windows service.
I use Tomcat 8 with Java 1.8.*
For a few months now, I've noticed that the memory usage increases quite rapidly. I cannot tell for sure whether it's heap or stack.
The process starts with ~200MB and after a week or so it can reach up to 2GB.
Shortly after that it will throw an OutOfMemoryError (at a memory usage of 2GB - 2.5GB).
This has repeated multiple times on multiple environments.
I would like to know if there's a way to monitor the process and view its internal memory usage, down to the level of seeing which objects are using the most memory.
Can 'Java Native Memory Tracking' be used for this?
This will help me to detect any memory leaks that might cause this.
Thanks in advance.
To monitor the memory usage of a Java process, I'd use a JMX client such as JVisualVM, which is bundled with the Oracle JDK:
https://visualvm.java.net/jmx_connections.html
To identify the cause of a memory leak, I'd instruct the JVM to take a heap dump when it runs out of memory (on the Oracle JVM, this can be accomplished by specifying -XX:+HeapDumpOnOutOfMemoryError when starting your Java program), and then analyze that heap dump using a tool such as Eclipse MAT.
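As for the Java Native Memory Tracking the question mentions: yes, it can show where non-heap memory goes. A minimal sketch (the flag and the jcmd subcommand are standard HotSpot tooling; expect a small performance overhead while it is enabled):
$ java -XX:NativeMemoryTracking=summary -jar server.jar
$ jcmd <pid> VM.native_memory summary
The summary breaks committed native memory down by category (Java Heap, Class, Thread, Code, GC, Internal, ...), which is exactly the heap-vs-stack-vs-other distinction asked about.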
quoting:
The process starts with ~200MB and after a week or so it can reach up to 2GB. Shortly after that it will throw an OutOfMemoryError (at a memory usage of 2GB - 2.5GB).
The problem might not be as simple as seeing which Java objects you have in JVisualVM (e.g. millions of strings).
What you need to do is identify the code that leaks.
One way you could do that is to force the execution of particular code and then monitor the memory.
The easiest way to force the execution of code inside classes/objects is to use a tool like https://github.com/lorenzoongithub/nudge4j (particularly since you are on Java 8).
Alternatively you could just wire up Nashorn to a command line, or run your program via jjs: https://docs.oracle.com/javase/8/docs/technotes/guides/scripting/nashorn/shell.html
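Even without those tools, a crude version of "force the code, then watch the memory" can be done in plain Java. A minimal sketch (LeakProbe and the loop count are made up for illustration; System.gc() is only a hint, so treat the numbers as rough):

public class LeakProbe {
    public static void main(String[] args) {
        // Replace with a call into the code you suspect of leaking.
        Runnable suspect = () -> new StringBuilder("work").reverse();

        long before = usedHeap();
        for (int i = 0; i < 100_000; i++) {
            suspect.run();
        }
        long after = usedHeap();
        // A steadily growing delta across repeated runs suggests retained objects.
        System.out.printf("Heap delta: %,d bytes%n", after - before);
    }

    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // a hint, not a guarantee of a full collection
        return rt.totalMemory() - rt.freeMemory();
    }
}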

Is there a way to determine the memory usage per application in WebLogic?

We are running WebLogic and appear to have a memory leak - we eventually run out of heap space.
We have 5 apps (5 war deployments) on the server.
Can you think of a way to gather memory usage on a per application basis?
(Then we can concentrate our search by looking through the code in the appropriate app.)
I have run jmap to get a heap dump and loaded the results in jvisualvm but it's unclear where the bulk of objects have come from - for example Strings.
I was thinking that WebLogic perhaps uses a separate classloader per application, so we may be able to figure something out via that route...
Try Eclipse MAT; it gives hints about memory leaks, among other features.
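Since each WAR typically gets its own web-app classloader, MAT can exploit exactly the route you describe. A sketch of the workflow (the dump file name is illustrative; jmap is the standard JDK tool):
$ jmap -dump:live,format=b,file=weblogic-heap.hprof <pid>
Open the file in MAT and switch the histogram to "Group by class loader"; the retained size per classloader gives you a rough per-application memory figure, so you know which app's code to read first.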

How do you check memory usage and force garbage collection for a Jetty application?

I think I may have a memory leak in a servlet application running in production on Jetty 8.1.7.
Is there a way of seeing how much heap memory is actually being used at a given instant - not the maximum allocated with -Xmx, but the actual amount of memory in use?
Can I force a garbage collection to occur for an application running within Jetty?
Yes, both are easily achievable using VisualVM (see: http://docs.oracle.com/javase/6/docs/technotes/guides/visualvm/monitor_tab.html). It is shipped with the Oracle JDK by default (=> no extra installation required).
However, for memory leak detection, I'd suggest taking a memory dump and analyzing it later with Eclipse MAT ( http://www.eclipse.org/mat/ ), as it has quite a nice UI for visualizing Java memory dumps.
EDIT:
For ssh-only access, yes, you can use the two tools mentioned. However, you need to run them on a machine with a running window manager and connect over ssh to the other machine (you need Java on both machines):
For VisualVM: run VisualVM on one machine and connect to the remote one via ssh, see: VisualVM over ssh
For the memory dump: use jmap (for sample usage see: http://kadirsert.blogspot.de/2012/01/…), then download the dump file and load it locally into Eclipse MAT.
Enable JMX and connect to it using JConsole:
http://wiki.eclipse.org/Jetty/Tutorial/JMX
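For a quick, unauthenticated setup on a trusted network (these are the standard com.sun.management JMX system properties; the port is illustrative, and you should add authentication/SSL for anything non-local):
$ java -Dcom.sun.management.jmxremote \
       -Dcom.sun.management.jmxremote.port=1099 \
       -Dcom.sun.management.jmxremote.authenticate=false \
       -Dcom.sun.management.jmxremote.ssl=false \
       -jar start.jar
JConsole can then attach to host:1099, show live heap usage, and trigger a collection via the "Perform GC" button on the Memory tab.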
You can call System.gc(). That will typically perform a full GC ... but this facility can be disabled (on HotSpot JVMs, via the -XX:+DisableExplicitGC option).
However, if your problem is a memory leak, running the GC won't help. In fact, it is likely to make your server even slower than it currently is.
You can also monitor the memory usage (in a variety of ways - see the other answers), but that only gives you evidence that a memory leak might exist.
What you really need to do is find and fix the cause of the memory leak.
Reference:
How to find a Java Memory Leak
You can use jvisualvm.exe, which is in the %JAVA_HOME%\bin folder. With it you can monitor memory usage and force a GC.
