Memory leak or to be expected - java

I noticed today that my program slowly chews through memory, so I checked with Java VisualVM to try to learn more. I am very new at this. I am coding in Java 8, with Swing handling the rendering for the game.
Note that nothing is supposed to happen except rendering of objects.
No new instances or anything.
The game loop looks something along the lines of this (pseudocode):

    while (running)
        try to render all drawables at the optimal FPS
        try to update all entities at the optimal rate
        sleep if there is time left over
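For reference, here is a minimal sketch of that loop as real code; the renderAll/updateAll names, the 60 FPS target, and the overall structure are illustrative placeholders rather than my actual implementation:

    public class GameLoop implements Runnable {
        private static final long FRAME_MS = 1000 / 60; // target roughly 60 FPS
        private volatile boolean running = true;

        @Override
        public void run() {
            while (running) {
                long start = System.currentTimeMillis();
                renderAll();                                 // draw all drawables
                updateAll();                                 // update all entities
                long spare = FRAME_MS - (System.currentTimeMillis() - start);
                if (spare > 0) {
                    try {
                        Thread.sleep(spare);                 // yield the leftover frame time
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        running = false;
                    }
                }
            }
        }

        private void renderAll() { /* repaint the Swing surface */ }
        private void updateAll() { /* advance entity state */ }
    }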
From what I could find during a 20-minute run, the following happened.
Sleeping is working; the loop yields most of its time.
5 classes were loaded some time into the run time.
The game uses about 70 MB when first launched. (At this point everything is loaded as far as I can tell.)
About 15 MB of RAM was taken fairly rapidly after the initial 70 MB, followed by a slow increase. Now 100 MB is taken in total.
CPU usage seems sane, about 7% on my i5-2500K.
The Heap size has been increased once. Used heap has never exceeded 50%.
If I comment out everything in the game loop except for the while (running) {} part, I get fewer leaks, but they still occur.
Is this normal or should I dig deeper? If I need to dig deeper, can someone point me in the direction of what to look for?
Now, after 25 minutes, it is up to 102 MB of RAM, meaning the leaks are smaller and fewer.
A reminder: I am not very good at this. This is the first time I have tried to debug my project this way. Please bear that in mind.
Update
After roughly 40 minutes it settles at 101 to 102 MB of RAM usage. It hasn't exceeded that for 15 minutes now; it goes up and down a bit.
The heap size is getting smaller and smaller. CPU usage is steady.

Short Answer: There is no leak.
Explanation
Using these questions as references:
Simple Class - Is it a Memory Leak?
Tracking down a memory leak / garbage-collection issue in Java
Creating a memory leak with Java
And this article:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html
As I mentioned in my question, I am very new to this. What triggered my concern was that I checked Windows Task Manager and saw an increase in memory usage by my application, so I decided to dig deeper. I learned a couple of things.
I had vastly underestimated Java's garbage collection. It is actually hard to cause a memory leak even when that is your intention. There can be problems when threading is involved, however, which caught my attention since I am using threading in my project.
The tools that Windows provides are sub-par; I recommend using external tools. I used Java VisualVM. In this tool I found that loads of classes are loaded in the first 2 minutes of the game, and 5 more a bit later. Some of the first ones created are String references that the JVM makes.
"Thread.sleep could be allocating objects under the covers." -- I found out it does these. a total of 5 even. Which explains my initial "5 classes were loaded some time into the run time.". What they do I have no clue as of yet.
About 10-15 MB was overhead from the profiling itself. I wish I wasn't such a rookie.
So again no leak that I could find.

Related

How to deal with long Full Garbage Collection cycle in Java

We inherited a system which runs in production and recently started to fail every 10 hours. Basically, our internal monitoring marks the system as failed if it is unresponsive for a minute. We found that our Full GC cycles last 1.5 minutes; we use a 30 GB heap. The problem is that we cannot optimize much in a short period of time and we cannot partition our service quickly, but we need to get rid of the 1.5-minute pauses as soon as possible because the system fails in production due to these pauses. For us, an acceptable delay is 20 milliseconds, but no more. What would be the quickest way to tweak the system? Reduce the heap to trigger GCs more frequently? Use System.gc() hints? Any other solutions? We use Java 8 default settings, and we have more and more users, i.e. more and more objects created.
Some GC stats:
You have a lot of retained data. There are a few options worth considering.
Increase the heap to 32 GB; this has little impact if you have free memory. Looking again at your totals, it appears you are using 32 GB rather than 30 GB, so this might not help.
If you don't have plenty of free memory, it is possible a small portion of your heap is being swapped, as this can increase full GC times dramatically.
There might be some simple ways to make the data structures more compact, e.g. use compact strings, and use primitives instead of wrappers, e.g. a long for a timestamp instead of Date or LocalDateTime (a long is about 1/8th the size; see the sketch below).
If neither of these helps, try moving some of the data off heap. E.g. Chronicle Map is a ConcurrentMap which uses off-heap memory and can reduce your GC times dramatically, i.e. there is no GC overhead for data stored off heap. How easy this is to add depends heavily on how your data is structured.
I suggest analysing how your data is structured to see if there are any easy ways to make it more efficient.
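As a sketch of the "primitives instead of wrappers" idea above, with invented field names:

    import java.util.Date;

    // A hypothetical record kept in large numbers on the heap.
    class FatEvent {
        Date timestamp = new Date(); // separate object: header + fields + a reference to it
        Long count = 0L;             // boxed wrapper: another object per field
    }

    // The compact version keeps the same information in primitives.
    class CompactEvent {
        long timestampMillis = System.currentTimeMillis(); // 8 bytes inline, no extra object
        long count;                                        // primitive, nothing to collect
    }

With millions of instances, this both shrinks the retained data and gives the collector far fewer objects to trace.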
There is no one-size-fits-all magic bullet solution to your problem: you'll need to have a good handle on your application's allocation and liveness patterns, and you'll need to know how that interacts with the specific garbage collection algorithm you are running (function of version of Java and command line flags passed to java).
Broadly speaking, a Full GC (that succeeds in reclaiming lots of space) means that lots of objects are surviving the minor collections (but aren't being leaked). Start by looking at the size of your Eden and Survivor spaces: if the Eden is too small, minor collections will run very frequently, and perhaps you aren't giving an object a chance to die before its tenuring threshold is reached. If the Survivors are too small, objects are going to be promoted into the Old gen prematurely.
GC tuning is a bit of an art: you run your app, study the results, tweak some parameters, and run it again. As such, you will need a benchmark version of your application, one which behaves as close as possible to the production one but which hopefully doesn't need 10 hours to cause a full GC.
As you stated that you are running Java 8 with the default settings, I believe that means your Old collections are running with a serial collector. You might see some very quick improvements by switching to a parallel collector for the Old generation (-XX:+UseParallelOldGC). While this might reduce the 1.5-minute pause to some number of seconds (depending on the number of cores on your box and the number of threads you specify for GC), it will not reduce your max pause to 20 ms.
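Purely as an illustration, this is the kind of command line you could experiment with on the benchmark copy; the heap size, thread count, and jar name are placeholders to tune, not recommendations:

    java -Xms30g -Xmx30g \
         -XX:+UseParallelOldGC -XX:ParallelGCThreads=8 \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log \
         -jar yourservice.jar

The GC log produced this way also gives you the before/after numbers you need to judge whether a change actually helped.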
When this happened to me, it was due to a memory leak caused by a static variable eating up memory. I would go through all recent code changes and look for any possible memory leaks.
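For illustration, the classic shape of such a leak is a static collection that only ever grows; this is a made-up example, not code from the question:

    import java.util.ArrayList;
    import java.util.List;

    public class LeakyCache {
        // Static field: everything added here stays reachable for the life of the
        // class loader, so the GC can never reclaim it.
        private static final List<byte[]> CACHE = new ArrayList<>();

        public static void handleRequest() {
            CACHE.add(new byte[1024]); // grows on every call and is never cleared
        }
    }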

Java slower with big heap

I have a Java program that operates on a (large) graph. Thus, it uses a significant amount of heap space (~50 GB, which is about 25% of the physical memory on the host machine). At one point, the program (repeatedly) picks one node from the graph and does some computation with it. For some nodes, this computation takes much longer than anticipated (30-60 minutes, instead of an expected few seconds). In order to profile these operations to find out what takes so much time, I have created a test program that creates only a very small part of the large graph and then runs the same operation on one of the nodes that took very long to compute in the original program. Thus, the test program obviously only uses very little heap space compared to the original program.
It turns out that an operation that took 48 minutes in the original program can be done in 9 seconds in the test program. This really confuses me. The first thought might be that the larger program spends a lot of time on garbage collection. So I turned on the verbose mode of the VM's garbage collector. According to that, no full garbage collections are performed during the 48 minutes, and only about 20 collections in the young generation, which each take less than 1 second.
So my question is: what else could explain such a huge difference in timing? I don't know much about how Java internally organizes the heap. Is there something that takes significantly longer for a large heap with a large number of live objects? Could it be that object allocation takes much longer in such a setting, because it takes longer to find an adequate place in the heap? Or does the VM do any internal reorganization of the heap that might take a lot of time (besides garbage collection, obviously)?
I am using Oracle JDK 1.7, if that's of any importance.
While bigger memory might mean bigger problems, I'd say there's nothing (except the GC, which you've excluded) that could stretch 9 seconds into 48 minutes (a factor of 320).
A big heap can lead to worse spatial locality, but I don't think it matters here. I disagree with Tim's answer w.r.t. "having to leave the cache for everything".
There's also the TLB, which is a cache for virtual address translation and could cause some problems with very large memory. But again, not a factor of 320.
I don't think there's anything in the JVM which could cause such problems.
The only reason I can imagine is that you have some swap space which gets used, despite the fact that you have enough physical memory. Even slight swapping can cause a huge slowdown. Make sure it's off (and possibly check swappiness).
Even when things are in memory, you have multiple levels of data caching on modern CPUs. Every time you leave the cache to fetch data, things get slower. Having 50 GB of RAM could well mean that it is having to leave the cache for everything.
The symptoms and differences you describe are just massive, though, and I don't see something as simple as cache coherency making that much difference.
The best advice I can give you is to run a profiler against it both when it's running slow and when it's running fast and compare the difference.
You need solid numbers and timings: "In this environment, doing X took Y time." From that you can start narrowing things down.
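If it helps, here is a minimal way to collect such numbers around the suspect call; the Timed class and the computeNode call in the usage comment are hypothetical names, not APIs from your code:

    import java.util.function.Supplier;

    public final class Timed {
        // Runs an operation, prints how long it took, and returns its result.
        public static <T> T time(String label, Supplier<T> op) {
            long start = System.nanoTime();
            T result = op.get();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMs + " ms");
            return result;
        }
    }

    // usage: Object r = Timed.time("process node", () -> computeNode(node));

Wrapping the same operation in both the original program and the test program gives you directly comparable timings.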

Does this memory usage pattern indicate that my Java application leaks memory?

I have a Java application that waits for the user to hit a key and then runs a task. Once done, it goes back and waits again. I was looking at the memory profile of this application with jvisualvm, and it showed an increasing pattern.
Committed memory size is 16MB.
Used memory, on application startup, was 2.7 MB, and then it climbed with intermittent drops (garbage collection). Once this sawtooth pattern approached 16 MB, a major drop occurred and the memory usage fell close to 4 MB. This major drop point has been increasing, though: 4 MB, 6 MB, 8 MB. The usage never goes beyond 16 MB, but the whole sawtooth pattern is climbing towards 16 MB.
Do I have a memory leak?
Since this is my first time posting to Stack Overflow, I do not have enough reputation to post an image.
Modern Sun/Oracle JVMs use what is called a generational garbage collector:
When the collector runs, it first tries a partial collection that only releases memory that was allocated recently
recently created objects that are still active get 'promoted'
Once an object has been promoted a few times, it will no longer get cleaned up by partial collections even after it is ready for collection
These objects, called tenured, are only cleaned up when a full collection becomes necessary in order to make enough room for the program to continue running
So basically, bits of your program that stick around long enough to get missed by the fast 'partial' collections will hang around until the JVM decides it has to do a full collection. If you let it go long enough, you should eventually see the full collection happen and usage drop back down to your original starting point.
If that never happens and you eventually get an Out Of Memory exception, then you probably have a memory leak :)
That kind of sawtooth pattern is commonly observed and is not an indication of memory leak.
Because garbage collecting in big chunks is more efficient than constantly collecting small amounts, the JVM does the collecting in batches. That's why you see this pattern.
As stated by others, this behavior is normal. This is a good description of the garbage collection process. To summarize, the JVM uses a generational garbage collector. The vast majority of objects are very short-lived, and those that survive longer tend to last much longer. Knowing this, the GC checks the newer generation first, to avoid repeatedly checking older objects, which are less likely to be unreachable. After a period of time, the survivors move to the older generation. This increasing sawtooth is exactly what you're seeing: the rising troughs are due to the older generation growing larger as survivors are moved into it. If your program ran long enough, eventually checking the newer generation wouldn't free up enough memory and it would have to GC the old generation as well.
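If you're curious, a throwaway program like this (unrelated to your application) shows the sawtooth clearly when watched in jvisualvm:

    public class SawtoothDemo {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                // Allocate short-lived garbage: the young generation fills up,
                // gets collected, and the used-heap graph shows a sawtooth.
                byte[] garbage = new byte[64 * 1024];
                Thread.sleep(1);
            }
        }
    }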
Hope that helps.

java heap size increasing

I compiled a jar file, and I write to the log every half a minute to check the thread and memory situation.
Attached are the start-of-day log and the end-of-day log, from after this software got stuck and stopped working.
In the middle of the day several automatic operations happened. I received about 40 quotes per second, and finished handling each quote before the next one arrived.
Plus, every 4 seconds I write a map with info to the DB.
Any ideas why the heap size is increasing?
(look at currHeapSize)
morning:
evening:
Any ideas why the heap size is increasing?
These are classic symptoms of a Java storage leak. Somewhere in your application is a data structure that is accumulating more and more objects, and preventing them from being garbage collected.
The best way to find a problem like this is to use a memory profiler. The answers to this question explain how.
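For example, one concrete step is to capture a heap dump and open it in a profiler such as VisualVM or Eclipse MAT. The sketch below uses the HotSpot-specific HotSpotDiagnosticMXBean, which ships with Oracle/OpenJDK but is not a standard API, so treat it as one option rather than the only way:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        // Writes an .hprof snapshot of the current heap to the given path.
        public static void dump(String path) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, true); // true = dump only live (reachable) objects
        }
    }

Comparing two dumps taken a few hours apart should point to the data structure that keeps accumulating objects.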

Strange behavior of memory when profiling

I have a strange problem when I profile my application. In the figure below (not shown here), we can clearly see that the largest object takes only 35 MB.
But when I check the memory used at the same time, I notice that it exceeds 500 MB.
Can someone explain why the largest object takes at most 35 MB while the used heap exceeds 500 MB at the same time? And how is the used heap calculated?
Probably you are not profiling all object creations. One of the standard profiling settings in NetBeans profiles only 1 in 10 object creations. Your measurement says 35 MB is 51% of your data, so you have profiled about 70 MB in total. This is roughly 1/10 of what you are measuring as the total heap size.
In general, measuring only part of the creations is enough if you are looking for clues about who the big memory spender is. The reason for not tracking all creations is performance.
If you want to see where all your memory is used you can do the following:
In NetBeans 6.9.1 this is a setting saying 'Track every ... object allocations'. You can lower this number (if 1 in 10 doesn't help you find your problem, or doesn't tell you enough about the application). It is possible, however, that this makes it impossible to run your application.
You can also make a heap dump. This will not contain information about the creation and removal of objects, but it will tell you all the objects currently alive in your application.
