I am running a medium-sized Jersey web app under Tomcat. I found that the app keeps crashing after a certain amount of time (a couple of days) due to an out-of-memory problem. I already increased the heap size, but that is not the real fix, as I appear to be facing a memory leak somewhere.
I looked for ways to debug this, to no avail. I am using a tool called YourKit Java to help me with this, and I realised that the used heap memory keeps growing indefinitely until it breaks. Garbage collection doesn't seem to run at any point.
Heap memory usage after 16h
I ran a debug instance for the entire night, and even with minimal to no use this happens: the used heap grows from a couple of MB to XXX MB (> 1 GB in prod) with no load on it at all. After forcing garbage collection, memory usage goes back to normal.
On the left, the sudden decrease after I force GC. On the right, the memory a couple of minutes after GC
The next picture shows the used memory growing again after my forced GC, with basic usage of the app: page reloads, some GET queries that return data from the DB (SQLite), and some POST queries that write into the DB and open some sockets. Note that for everything I tested, I also ran the opposite command that should cancel my changes, but the memory just keeps increasing.
I took a memory snapshot to browse through what is instantiated. The "Biggest objects (dominators)" view shows a huge tree of java.lang.ref.Finalizer objects, even though I never call any finalize method myself (not that I know of, at least).
So I am quite lost here. Java is not my strongest skill and I am having a hard time debugging this. I am wondering whether something could be preventing GC from running and causing this?
(I say that because after forcing GC, things fall back to a more normal state.) Could this be caused by Tomcat or Jersey itself?
Side notes about the app: it is an API that lets you create TCP tunnels in the background (server and client sockets). Every tunnel is spawned in its own thread. It also does some data fetching and writing to an SQLite DB. I tried to make sure that everything is closed properly and unreferenced (DB connections, queries, sockets, ...) when the work is done. For the tunnels, I am relying on a lightly edited version of a library called javatunnel (it may also be the culprit, but I couldn't find anything proving it).
Perhaps the sockets are not correctly closed, so the garbage collector cannot free them and the related objects? I think you should investigate further in that direction.
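If that is the case, one low-effort improvement is to make the closing deterministic with try-with-resources instead of relying on finalizers. A minimal sketch (the class and method names here are purely illustrative, not taken from your code):

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class TunnelAcceptor {
        // try-with-resources closes both sockets even if an exception is thrown,
        // so nothing is left waiting on the finalizer queue to be released
        void acceptOnce(int port) throws IOException {
            try (ServerSocket server = new ServerSocket(port);
                 Socket client = server.accept()) {
                // ... forward data between the sockets here ...
            } // server and client are closed here, whatever happened above
        }
    }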
You can activate garbage collector logging to detect when it runs. Add the flags -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps to the JVM start-up options.
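If you start Tomcat through its standard scripts, a sketch of where those flags could go (assuming a bin/setenv.sh that you may need to create; the log file path is only an example):

    # bin/setenv.sh
    CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -Xloggc:/path/to/gc.log"
    export CATALINA_OPTS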
I realised that the used heap memory keeps growing indefinitely until it breaks. The garbage collection doesn't seem to run at any moment.
In this case you should take a look at the GC parameters in your JVM startup arguments.
What is the ratio of your max old generation size to your max new generation size? If the old generation is very large compared to the new generation, your objects will keep moving from the survivor spaces into the old generation and won't be garbage collected until old gen occupancy reaches a threshold (which, I believe, is above 75% by default).
So check your GC tuning. It might help.
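As a sketch of what such tuning can look like (these particular values are purely illustrative, not a recommendation for this app), the generation sizes and promotion behaviour can be influenced with standard HotSpot flags:

    # e.g. appended to the JVM start-up options
    -Xms2g -Xmx2g -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:+PrintTenuringDistribution

Here -XX:NewRatio=2 makes the old generation twice the size of the new generation, and -XX:+PrintTenuringDistribution logs how long objects survive before promotion, which helps confirm whether they are being pushed into old gen too early.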
Related
We have a Java 8 web application running on a Tomcat 8.5.47 server. We have only 20-60 user sessions at a time, but most of the time up to 600 MB of files being uploaded to the server. We also use Hibernate and c3p0 to manage database connections.
We monitored the server for several days and saw that sometimes the RAM reserved by Java increased suddenly and the garbage collector did not release it. How can we manage this? Is there any way to release reserved RAM and prevent Tomcat from increasing its RAM usage? And is there any way to decrease the used RAM shown in Task Manager?
These are our settings:
-XX:MaxPermSize=1g -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MaxHeapFreeRatio=15 -XX:MinHeapFreeRatio=5 -XX:-UseGCOverheadLimit -Xmn1g -XX:+UseCompressedOops -Xms10g -Xmx56g
This is an image of the profiler when this happened:
And this is an image of the profiler and also Task Manager after 2 hours:
P.S. We use JProfiler to profile; the green colour shows reserved RAM and the blue colour shows used RAM. In the second box you can track GC activity, the third is for classes, the fourth shows thread activity, and the last shows CPU activity.
Thank you all for your answers.
These types of questions are never easy, mainly because, to get them "right", the person asking needs some basic understanding of how an OS treats and deals with memory, and of the fact that there are different types of memory (at least resident, committed and reserved). I am by far not versed enough to get this entirely right either, but I keep learning and getting better at it. These categories mean very different things, and some of them are usually irrelevant (I find reserved to be such). You are using Windows, so this, IMHO, is a must-watch to begin with.
After you watch that, you need to move on to the JVM world and how a JVM process manages memory. The heap is managed by a garbage collector, so to shrink unused heap the GC needs to be able to do that. And while G1 could do that before JDK 12, it was never very eager to. Since JDK 12 there is this JEP that returns memory back, i.e. it un-commits memory. Be sure to read when that happens, though. Also note that other collectors like Shenandoah and/or ZGC do it much more often.
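For reference, a sketch of the kind of G1 option that JEP introduces (JDK 12+ only; the interval value here is purely illustrative, and you should check the JEP text for the exact semantics and defaults):

    -XX:G1PeriodicGCInterval=300000

With such a setting, G1 can trigger periodic collections (the interval is in milliseconds) and return unused committed memory to the OS when the application is idle.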
Of course, since you disable the overhead limit with -XX:-UseGCOverheadLimit, you get a huge spike in CPU (GC threads run like crazy trying to free space) and of course everything slows down. If I were you, I would enable that limit again, let the GC fail, and analyze the GC logs to understand what is going on. 56 GB of heap is a huge number for 20-60 users (this surely looks like a leak?). Note that without GC logs it might be impossible to offer a solution.
P.S. Look at the first screenshot you shared and notice that there are two colours there: green and blue. I don't know what tool that is, but it looks like green is "reserved memory" and blue is "used". It would be great if you said exactly what those are.
Java 8 doesn't return allocated RAM back to the OS even if the JVM doesn't need it. For that feature you need to move to a newer version of the JDK. This is the JEP for it: https://openjdk.java.net/jeps/346. It says the feature was delivered in version 12, so JDK 12 and later should have it.
The only way to prevent reserved memory from increasing is to decrease the -Xmx value. Since you set it to 56g, I assume you are OK with Tomcat consuming up to 56 GB of memory. If you think that is too much, just decrease that number.
I have run into an issue with a Java application I wrote causing hardware performance issues. The problem, I'm fairly certain, is that a few of the machines I'm running the application on only have 1 GB of memory. When I start my Java application, I'm setting the heap size with -Xms512m -Xmx1024m.
My first question: is my assumption correct that this will obviously cause performance problems because I'm allocating all of the machine's memory to the Java heap?
This leads to another question. I'm running JConsole against the app and monitoring its memory usage. What I'm seeing is that the app consumes about 30 MB at startup, gets to about 150 MB, and then the garbage collector runs and it drops back down to 30 MB. What I'm also seeing, using top on the PID, is that the application starts by using about 6% of memory and then slowly climbs to about 20%. I do not understand this. Why would it only get up to 20% memory usage when I'm allocating 1 GB to it? Shouldn't it go to 100%? Also, why is it using that much memory (20%) when the app never appears to use more than 150 MB?
I think it's pretty obvious I need to adjust my -Xms and -Xmx settings and that should resolve the issue, but I'm trying to understand better what exactly is happening.
Two possibilities for the memory use:
Your app just does not use that much memory
Or
Your app does not use that much memory fast enough.
What happens:
The garbage collector has several points where it will execute:
Regularly scheduled: it cleans up easy-to-remove objects.
Full collection: This runs when you hit the set memory limits.
If option 1, the generally much lower-impact quick collection, can keep your memory use under control, the JVM will not perform a full collection unless the GC options are set to run a full collection on a schedule.
With your application, I would start by setting lower -Xms/-Xmx values so that more guaranteed resources are left for the OS and some paging is perhaps prevented.
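For instance (a sketch only; the right numbers depend on what the app actually needs, and yourapp.jar is just a placeholder), something like this leaves most of a 1 GB machine to the OS:

    java -Xms128m -Xmx256m -jar yourapp.jar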
I have a Scala daemon application that runs on a server in Rackspace with a 2 GB memory limit. For an unknown reason, the server gets stuck after the application has been running for some time. I suspect there is a memory leak, because the server memory fills up after a while.
I tried running jvisualvm, taking memory snapshots at two different moments and comparing them to see if there were objects that remained allocated, but I could not find anything.
The heap allocation is just around 400MB. Here is a snapshot of the JVM memory in New Relic:
Notice that the PS Eden Space heap is what keeps increasing. I put in a workaround that kills the application every 3 hours and starts it again (this is why the graph suddenly drops back down).
Any idea why PS Eden Space keeps increasing? How can I fix it?
Edit 1:
Screenshot of the machine that halted minutes before 13:00
Edit 2:
On a new round, I let the server hang by itself, and used G1GC. Here is the New Relic graph for this run:
It's normal for Eden to grow constantly; that is where new objects are allocated. Eden will keep growing until it gets full, or until a minor collection runs that collects unused objects and shifts objects still in use to the survivor region S0.
This is by design for this type of garbage collection. The idea is that it's OK for Eden to fill up: we let it grow and garbage collect it only when it's most convenient, minimizing the impact on application code.
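You can watch that cycle live with jstat, which prints the occupancy of Eden (E), the survivor spaces (S0/S1) and the old generation (O) as percentages; the PID and the 5-second interval here are just examples:

    jstat -gcutil <pid> 5000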
Try removing the workaround, letting the server freeze, and checking whether there are any out-of-memory errors in the logs. Retaining too many objects (or classes) would eventually cause such errors.
Check whether the old generation is full. Then, using VisualVM, force a garbage collection and see whether it goes down. If it doesn't, there is your problem.
Then take a heap dump and a thread dump, and analyse the heap dump in MAT (the Eclipse Memory Analyzer tool); see this tutorial as well. It could also simply be that the server needs more memory.
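A sketch of taking those dumps from the command line with the standard JDK tools (the file names are just examples; note that -dump:live triggers a full GC before dumping):

    jmap -dump:live,format=b,file=heap-dump.hprof <pid>
    jstack <pid> > thread-dump.txt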
One important point: in Java there is rarely a memory leak in the classical sense; the garbage collector works mostly flawlessly at collecting unused objects.
Usually the problem comes from objects that are created but accidentally kept around, for example in static collections or thread-local variables; because they are still referenced, they never get collected.
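A minimal, purely illustrative example of that pattern (not from any real codebase): a static map that is only ever added to keeps every entry strongly reachable, so the collector can never reclaim them.

    import java.util.HashMap;
    import java.util.Map;

    public class RequestCache {
        // A static collection lives as long as the class loader does,
        // so everything stored here stays reachable "forever".
        private static final Map<String, byte[]> CACHE = new HashMap<>();

        static void remember(String key, byte[] payload) {
            CACHE.put(key, payload); // entries are never removed -> slow leak
        }
    }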
A tool with a free trial that can generate a report pinpointing many of these common causes is Plumbr. That is probably your best chance at a quick solution: try running Plumbr to see if it finds something, and if not, do a MAT analysis of the heap dump.
Our JBoss 3.2.6 application server is having some performance issues. After turning on verbose GC logging and analyzing the logs with GCViewer, we've noticed that after a while (7 to 35 hours after a server restart) the GC goes crazy. It seems that initially the GC works fine, doing a GC every hour or so, but at a certain point it starts performing full GCs every minute. As this only happens in our production environment, we have not been able to try turning off explicit GCs (-XX:+DisableExplicitGC) or modifying the RMI GC interval yet, but as this happens only after a few hours, it does not seem to be caused by the known RMI GC issues.
Any ideas?
Update:
I'm not able to post the GCViewer output just yet, but it does not seem to be hitting the max heap limit at all. Before the GC goes crazy it is collecting just fine, and even when the GC goes crazy the heap doesn't get above 2 GB (out of a 24 GB max).
Besides RMI, are there any other ways explicit GC can be triggered? (I checked our code and no calls to System.gc() are being made.)
Is your heap filling up? Sometimes the VM will get stuck in a 'GC loop' when it can free up just enough memory to prevent a real OutOfMemoryError but not enough to actually keep the application running steadily.
Normally this would trigger an "OutOfMemoryError: GC overhead limit exceeded", but there is a certain threshold that must be crossed before this happens (98% CPU time spent on GC off the top of my head).
Have you tried enlarging heap size? Have you inspected your code / used a profiler to detect memory leaks?
You almost certainly have a memory leak, and if you let the application server continue to run it will eventually crash with an OutOfMemoryError. You need to use a memory analysis tool, for example VisualVM, and determine the source of the problem. Usually memory leaks are caused by static or global objects that never release the object references they store.
Good luck!
Update:
Rereading your question, it sounds like things are fine and then suddenly you get into this situation where the GC is working much harder to reclaim space. That sounds like some specific operation occurs that consumes (and doesn't release) a large amount of heap.
Perhaps, as @Tim suggests, your heap requirements are just at the threshold of the max heap size, but in my experience you'd need to be pretty lucky to hit that exactly. At any rate, some analysis should determine whether it is a leak or you just need to increase the size of the heap.
Apart from the more likely event of a memory leak in your application, there could be 1-2 other reasons for this.
In a Solaris environment, I once had such an issue when I allocated almost all of the available 4 GB of physical memory to the JVM, leaving only around 200-300 MB to the operating system. This led to the VM process suddenly swapping to disk whenever the OS came under increased load. The solution was not to exceed 3.2 GB. A real corner case, but maybe it's the same issue as yours?
The reason this led to increased GC activity is that heavy swapping slows down the JVM's memory management, which caused many short-lived objects to escape the survivor space and end up in the tenured space, which in turn filled up much more quickly.
I recommend that you take a stack dump when this happens.
More often than not, I have seen this happen with a thread population explosion.
Anyway, look at the stack dump file and see what's running. You could easily set up some cron jobs or monitoring scripts to run jstack periodically.
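For example, a rough sketch of such a script (the PID and output path are assumptions about your setup):

    # run from cron, e.g. every 15 minutes
    jstack <pid> > /var/tmp/stack-dump-$(date +%Y%m%d-%H%M).txt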
You can also compare the sizes of successive stack dumps. If they grow really big, you have something that is creating lots of threads.
If they don't get bigger, you can at least see which call stacks are running.
You can use VisualVM or some fancy JMX crap later if that doesn't work, but start with jstack first, as it's easy to use.
Hi there
I would like to start by saying that I'm a beginner, and I'm working on a really small and simple Java app that really shouldn't cause any major problems.
I was monitoring memory usage from the Windows Task Manager and noticed that, with my application started, java.exe was using about 70 MB of memory. So I thought to myself: OK, this is probably a little large, but still nothing my PC couldn't handle. But something really strange started happening when I tried to resize my window: memory usage suddenly jumped to about 80-90 MB, and if I continued dragging the window, randomly resizing it, memory usage kept increasing. I thought it had something to do with repaint calls on the GUI components during the resize, so I removed a few suspicious components that could cause some kind of memory leak from my main window form, leaving the program almost completely stripped down, but the issue persisted. What I noticed later was that if I keep resizing the window, memory usage grows up to 200-220 MB and then this uncontrolled growth stops.
So can somebody tell me: could this be normal behaviour, given how memory management works in Java?
Java objects are not necessarily cleaned up as soon as they're finished with. Instead, something called the "garbage collector" periodically runs in the background, looking for orphaned objects and deleting them, freeing up memory.
Your application is likely creating lots of temporary objects as it resizes your window. Although no longer referenced by anything (i.e. orphans), these objects hang around until the garbage collector runs.
You'll probably find that your max heap is 256 MB (the default): the garbage collector is called more often as you approach that maximum, since the creation of new objects then requires memory to be freed up immediately. Hence the memory hovers just under 256 MB, with the creation/collection rate balanced against demand.
This is completely normal behaviour.
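If you want to confirm this from inside the program rather than from the Task Manager, a small sketch that prints the used heap versus the maximum the JVM is allowed to grow to:

    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            long used = rt.totalMemory() - rt.freeMemory(); // heap currently in use
            System.out.printf("used=%d MB, committed=%d MB, max=%d MB%n",
                    used / mb, rt.totalMemory() / mb, rt.maxMemory() / mb);
        }
    }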
No, this behaviour is perfectly normal. Java memory management is based on automatic garbage collection, which means that unused memory accumulates for a while before being garbage collected (because collection is a significant amount of work, you want to do it as rarely as possible).
So the JVM will tend to use a large part of the memory it's allowed to use (the maximum heap size) - and on a modern PC with multiple GBs of memory available, the default maximum heap size will be pretty big. However, if you have a small app that you know won't need much memory, you can adjust the maximum heap size via the command line option -Xmx, for example
java -Xmx64M main.class.name
will restrict the heap to 64MB