What does fragmented memory look like? (Java)

I have a mobile application that is suffering from slow-down over time. My hunch (fed in part by this article) is that this is due to memory fragmentation slowing the app down, but I'm not sure. Here's a graph of the app's memory use over time:
(memory graph: http://kupio.com/image-dump/fragmented.png)
The 4 peaks on the graph are 4 executions of the exact same task in the app. I start the task, it allocates a bunch of memory, it sits for a bit (the flat line on top), and then I stop the task. At that point it calls System.gc(); and the memory gets cleaned up.
As can be seen, each of the 4 runs of the exact same task takes longer to execute than the one before. The low points in the graph all return to the same level, so there do not seem to be any memory leaks between task runs.
What I want to know is: is memory fragmentation a feasible explanation, or should I look elsewhere first, bearing in mind that I've already done a lot of looking? The low points on the graph are relatively low, so my assumption is that in that state the memory would not be very fragmented, since there can't be many small memory holes left to cause problems.
I don't know how the J2ME memory allocator works, though, so I really don't know. Can anyone advise? Has anyone else had problems with this and recognises the memory profile of the app?

If you've got a little bit of time, you could test your theory by re-using memory via a memory-pool technique: each run of the task uses the 'same' chunks of memory by getting them from the pool and returning them at release time (see the sketch below).
If you're still seeing the degrading performance after doing this investigation, it's not memory fragmentation causing the problem. Let us all know your results and we can help troubleshoot further.
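A minimal sketch of the pooling idea, assuming the task works on fixed-size byte[] buffers (BufferPool and all its names are illustrative, not an existing API). Plain arrays are used so it also compiles on J2ME/CLDC, where the newer collection classes may not be available:

public final class BufferPool {
    private final byte[][] free;   // stack of idle buffers
    private final int bufferSize;
    private int top;               // number of buffers currently pooled

    public BufferPool(int bufferSize, int capacity) {
        this.bufferSize = bufferSize;
        this.free = new byte[capacity][];
    }

    // Hand out a pooled buffer; allocate only when the pool is empty.
    public byte[] acquire() {
        if (top == 0) {
            return new byte[bufferSize];
        }
        byte[] buffer = free[--top];
        free[top] = null; // drop the pool's reference to the handed-out buffer
        return buffer;
    }

    // Return the buffer at task-release time so the next run reuses it.
    public void release(byte[] buffer) {
        if (top < free.length) {
            free[top++] = buffer;
        }
    }
}

If the per-run allocations disappear but each run still gets slower, fragmentation is unlikely to be the culprit.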

Memory fragmentation would account for it. What is not clear is whether the app's use of memory is causing paging; that would also slow things up, and could cause the same symptoms.

If the problem really is memory fragmentation, there is not much you can do about it.
But before you give up in despair, try running your app with an execution profiler to see if it is spending a lot of time executing in an unexpected place. It is possible that the slowdown is actually due to a problem in your algorithms and has nothing to do with memory fragmentation. As people have already said, J2ME garbage collectors should not suffer from fragmentation issues.

Consider looking at garbage collection statistics. You should have a lot more on the last run than the first, if your theory is to hold. Another thought might be that something else eats your memory so your application has less.
In other words, profiler time :)

What OS are you running this on? I have some experience with Windows CE5 (Windows Mobile) devices. CE5's OS-level memory architecture is quite broken and will fail quickly for memory-intensive applications. Your graph does not have any scale, but every process gets only 32 MB of address space on CE5. The VM and shared libraries take their fair share of that as well, leaving you with rather little.
The only way around this is to re-use the memory you allocated instead of giving it back to the collector and re-allocating later. This is, of course, much more low-level programming than you would usually want to do in Java, but on this platform you might be out of luck.

Related

How would you analyze a change in heap memory allocation?

I'm trying to figure out the best way to analyse why an application suddenly allocates memory more often than before. For a better understanding of what I mean, take a look at this memory graph.
As you can see, it runs fine for a few days and then starts allocating memory much faster.
What tool can help me here? What is the best approach?
I have already taken several thread dumps but there is nothing noticeable. I dumped the heap twice and compared the objects but that didn't help either.
I don't think it is a memory problem, because the full GC can always clean up the same amount of memory. It rather looks like some routine has started to run which requires a lot of memory. It is not related to heavier usage, as it turns back to normal as soon as I restart the application.

How to prevent the Garbage Collector from slowing down my application

Let's say I've got an application which has a memory leak. At some point the GC will try very hard to clear memory and will slow down my application. I know that with the JVM parameter -XX:+UseGCOverheadLimit (on by default in recent HotSpot) it will throw an OutOfMemoryError:
if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered.
However, this is somehow not good enough for me, because my application becomes very slow even before those numbers are hit. The GC will absorb the CPU for some time before the OutOfMemoryError is thrown. My goal is to recognize very early that there will most likely be a problem and then throw the OutOfMemoryError myself. After that I have some kind of recovery strategy.
OK, now I've found the two additional parameters GCTimeLimit and GCHeapFreeLimit. With them it is possible to tweak the two quoted constants (98% and 2%).
I've made some tests of my own, like a small piece of code which produces a memory leak, and played with those settings. But I'm not really sure how to find the correct tradeoff. My hope is that someone else has had the same problem and came up with a reasonable solution, or maybe there are some other GC switches which I don't know about yet.
I'm feeling a little bit lost, since I'm not really an expert on this topic and it seems that there are a lot of things to consider.
If you are using the Sun/Oracle JVM, this page seems to be a pretty complete GC-tuning primer.
You can use java.lang.management.MemoryUsage to determine the used memory and the total memory available. As usage approaches the tunable GC collection threshold, you can throw your error.
Of course doing this is a little ridiculous. If the issue is that you need more memory then increase the heap size. The more likely issue is that you're not releasing memory gracefully when you're done with it.
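For what it's worth, here is a minimal sketch of that kind of check, assuming a Java SE 5+ JVM where java.lang.management is available (the class name and the 90% figure are illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatchdog {
    // Illustrative threshold: treat a heap more than 90% full as "nearly out of memory".
    private static final double USED_FRACTION_LIMIT = 0.90;

    public static void checkHeap() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memoryBean.getHeapMemoryUsage();
        long max = heap.getMax();   // may be -1 if the maximum is undefined
        if (max < 0) {
            return;
        }
        double usedFraction = (double) heap.getUsed() / max;
        if (usedFraction > USED_FRACTION_LIMIT) {
            // Fail early, before the GC starts thrashing, so recovery code can run.
            throw new IllegalStateException("Heap is " + (int) (usedFraction * 100) + "% full");
        }
    }
}

Call checkHeap() at safe points in your workload, e.g. between units of work.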
Side-step the JVM heap and use something like Terracotta's BigMemory, which uses direct memory management to grow beyond the reach of the garbage collector.

Memory leak without OutOfMemory exception?

I have a problem with my program (JSF running on GlassFish). It processes a lot of data (and inserts it into the database using Hibernate). The problem is that after about 2 hours of work it slows down. I don't get any exception (in particular, there is no OutOfMemoryError). Is it possible that it is a memory leak? I've checked the heap dump with Eclipse Memory Analyzer and there were some HashMap issues. I've repaired them where possible, and now the tool no longer shows the problem. But my application still doesn't work properly.
It sounds like your problem is not a conventional memory leak at all.
If I was to guess, I'd say that you've got a poorly designed data structure, an ineffective cache, or maybe a concurrency bottleneck.
You should probably focus on performance profiling to see where the time is going and to look for signs of lock contention.
There is a chance that you have some sort of memory leak and produce a lot of temporary objects, so that after a decent time the garbage collector kills your performance. If this is the case, you could play with the -Xmx option: with a smaller heap your application should slow down earlier, and a bigger heap should show the opposite effect.
The effect could also be caused by growing internal data structures. Operations on data structures always have a time complexity ("Big-O" notation), and if the complexity is polynomial or worse, such operations can kill performance too. Have a look at those collections in your application that grow over time and double-check that you've chosen the optimal collection type (an illustration follows below).
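As a hedged illustration of the complexity point (the class name and sizes are made up): a membership test on an ArrayList scans every element, O(n), while a HashSet answers in roughly constant time, so the wrong choice gets steadily worse as the collection grows:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollectionChoice {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < 100000; i++) {
            list.add(i);
            set.add(i);
        }

        long t0 = System.nanoTime();
        for (int i = 0; i < 1000; i++) {
            list.contains(i);   // O(n): scans the list on every call
        }
        long t1 = System.nanoTime();
        for (int i = 0; i < 1000; i++) {
            set.contains(i);    // O(1): hashes straight to the entry
        }
        long t2 = System.nanoTime();

        System.out.println("list: " + (t1 - t0) + " ns, set: " + (t2 - t1) + " ns");
    }
}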

Detecting memory-leak programmatically

If, on purpose, I create an application that crunches data while suffering from memory leaks, I can notice that the memory as reported by, say:
Runtime.getRuntime().freeMemory()
starts oscillating between 1 and 2 MB of free memory.
The application then enters a loop that goes like this: GC, process some data, GC, etc., but because the GC happens so often, the application basically isn't doing much else anymore. Even the GUI takes ages to respond (and no, I'm not talking about EDT issues here; the VM really is basically stuck in some endless GC'ing mode).
And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?
Note that I'm not talking about out-of-memory errors, nor about detecting the memory leak itself.
I'm talking about detecting that an application is running so low on memory that it is basically calling the GC all the time, leaving hardly any time to do something else (in my hypothetical example: crunching data).
Would it work, for example, to repeatedly read how much memory is available during, say, one minute, and, if the number has been "oscillating" between values all below, say, 4 MB, conclude that there's been some leak and that the application has become unusable?
And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?
I don't think so. You can find out roughly how much heap memory is free at any given instant, but AFAIK you cannot reliably determine when you are running out of memory. (Sure, you can do things like scraping the GC log files, or trying to pick patterns in the free memory oscillations. But these are likely to be unreliable and fragile in the face of JVM changes.)
However, there is another (and IMO better) approach.
In recent versions of Hotspot (version 1.6 and later, I believe), you can tune the JVM / GC so that it will give up and throw an OOME sooner. Specifically, the JVM can be configured to check that:
the ratio of free heap to total heap is greater than a given threshold after a full GC, and/or
the time spent running the GC is less than a certain percentage of the total.
The relevant JVM parameters are "UseGCOverheadLimit", "GCTimeLimit" and "GCHeapFreeLimit". Unfortunately, Hotspot's tuning parameters are not well documented on the public web, but these ones are all listed here.
Assuming that you want your application to do the sensible thing ... give up when it doesn't have enough memory to run properly anymore ... then just launch the JVM with a smaller "GCTimeLimit" or a larger "GCHeapFreeLimit" than the defaults.
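For example, a launch line might look like this (the percentages are illustrative, not recommendations, and the main class name is hypothetical):

java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=10 com.example.Main

This tells HotSpot to throw the OOME once more than 90% of total time is spent in GC while less than 10% of the heap is being recovered.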
EDIT
I've discovered that the MemoryPoolMXBean API allows you to look at the peak usage of individual memory pools (heaps), and set thresholds. However, I've never tried this, and the APIs have lots of hints that suggest that not all JVMs implement the full API. So, I would still recommend the HotSpot tuning option approach (see above) over this one.
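For completeness, a sketch of what the threshold idea might look like, assuming a JVM that actually supports usage thresholds (hence the isUsageThresholdSupported() check); the class name and the 90% figure are illustrative:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class PoolThresholds {
    // Arm a usage threshold at 90% of each heap pool's maximum.
    public static void armThresholds() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();   // -1 if undefined
                if (max > 0) {
                    pool.setUsageThreshold((long) (max * 0.9));
                }
            }
        }
    }

    // Poll this later: true once any heap pool has crossed its threshold.
    public static boolean lowOnMemory() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()
                    && pool.isUsageThresholdExceeded()) {
                return true;
            }
        }
        return false;
    }
}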
You can use getHeapMemoryUsage.
I see two attack vectors.
Either monitor your memory consumption.
When you more or less constantly use lots of the available memory, it is very likely that you have a memory leak (or are just using too much memory). The VM will constantly try to free some memory without much success => constant high memory usage.
You need to distinguish that from a large zigzag pattern, which often happens without indicating a memory problem. Basically you use more and more memory, but when the GC finds time to do its job it finds lots of garbage to throw out, so everything is fine.
The other attack vector is to monitor how often the GC runs and how successful it is. If it runs often with only small gains in memory, it is likely you have a problem.
I don't know if you can access this kind of information directly from your program. But if nothing else, I think you can specify startup parameters which make the GC log its activity to a file, which in turn could be parsed.
What you could do is spawn a thread that wakes up periodically, calculates the amount of used memory, and records the result. Then you can do regression analysis on the results to estimate the rate of memory growth in your application. If you know the rate of growth and the maximum amount of memory, you can predict (with some confidence) when your application will run out of memory. A sketch of this follows below.
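A hedged sketch of that approach (the class name and the 10-second interval are made up): a daemon thread samples used heap, and a least-squares fit over the samples estimates the growth rate in bytes per millisecond:

import java.util.ArrayList;
import java.util.List;

public class MemoryTrend implements Runnable {
    private final List<long[]> samples = new ArrayList<long[]>(); // {elapsedMs, usedBytes}
    private final long startMs = System.currentTimeMillis();

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Runtime rt = Runtime.getRuntime();
            long used = rt.totalMemory() - rt.freeMemory();
            synchronized (samples) {
                samples.add(new long[] { System.currentTimeMillis() - startMs, used });
            }
            try {
                Thread.sleep(10000); // sample every 10 seconds
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    // Least-squares slope of used memory over time; a sustained positive
    // slope suggests a leak (or a workload that legitimately grows).
    public double bytesPerMs() {
        synchronized (samples) {
            int n = samples.size();
            if (n < 2) {
                return 0;
            }
            double sumT = 0, sumU = 0, sumTT = 0, sumTU = 0;
            for (long[] s : samples) {
                sumT += s[0];
                sumU += s[1];
                sumTT += (double) s[0] * s[0];
                sumTU += (double) s[0] * s[1];
            }
            double denom = n * sumTT - sumT * sumT;
            return denom == 0 ? 0 : (n * sumTU - sumT * sumU) / denom;
        }
    }
}

Start it on a daemon thread; dividing the remaining headroom (max heap minus current usage) by the slope gives a rough time-to-exhaustion estimate.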
You can pass arguments to your Java virtual machine to get GC diagnostics, such as:
-verbose:gc — turns on the logging of GC information. Available in all JVMs.
-XX:+PrintGCTimeStamps — prints the times at which the GCs happen, relative to the start of the application.
If you capture that output in a file, your application can periodically read and parse it to learn when GCs have happened, and from that work out the average time between collections. A parsing sketch follows below.
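A hedged sketch of such a parser: -verbose:gc output varies between JVMs and versions, so the pattern below matches the classic HotSpot form "[GC 32768K->4096K(65536K), 0.0123 secs]" and is illustrative, not authoritative:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogStats {
    private static final Pattern GC_LINE =
            Pattern.compile("\\[(?:Full )?GC.*?, (\\d+\\.\\d+) secs\\]");

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        int count = 0;
        double totalSecs = 0;
        String line;
        while ((line = in.readLine()) != null) {
            Matcher m = GC_LINE.matcher(line);
            if (m.find()) {
                count++;
                totalSecs += Double.parseDouble(m.group(1));
            }
        }
        in.close();
        double avg = (count == 0) ? 0 : totalSecs / count;
        System.out.println(count + " collections, average pause " + avg + " secs");
    }
}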
I think the JVM does exactly this for you and throws java.lang.OutOfMemoryError: GC overhead limit exceeded. So if you catch OutOfMemoryError and check for that message then you have what you want, don't you?
See this question for more details
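A sketch of what that catch-and-check might look like (crunchData and startRecovery are hypothetical placeholders for your workload and your recovery strategy):

public class GcOverheadGuard {
    public static void main(String[] args) {
        try {
            crunchData();   // hypothetical workload
        } catch (OutOfMemoryError e) {
            if ("GC overhead limit exceeded".equals(e.getMessage())) {
                // The JVM spent >98% of its time in GC while recovering <2% of the heap.
                startRecovery();   // hypothetical recovery hook
            } else {
                throw e;   // some other kind of OOME; don't swallow it
            }
        }
    }

    private static void crunchData() { /* stand-in for the real work */ }
    private static void startRecovery() { /* stand-in for the recovery strategy */ }
}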
I've been using Plumbr for memory leak detection and it's been a great experience, though the licence is very expensive: http://plumbr.eu/

Java App with lots of memory problems

I've written a pretty complex Java application that does a lot of calculations on price data from the markets in real time. Looking at the Task Manager in Windows, this sucker is taking close to 1 MB every 30 seconds. Performance is fine until it gets close to the memory limit around 300 MB; then the garbage collector really kicks in, spikes my CPU to around 50%, and the UI performance rapidly degrades. From all I've written so far it sounds like I have some bad code going on, because by design my program is CPU-intensive but stores very little data in memory.
I need some help on good next steps to take to figure out what the problem is. I think it would help if I could see what objects are getting stored in memory, as maybe I have some lousy code, but I am heartbroken with Java as I thought these were problems I would not have to worry about. Thanks for any answers. - Duncan
Identify some reasonable performance targets (memory usage, throughput, latency).
Put together some repeatable performance tests, the closer you can get these to real life scenarios the better.
Get a hold of a good profiler. I've used YourKit with a lot of success, the Netbeans and Eclipse profilers are not bad either. Most decent profilers will be able to identify memory usage, GC and performance hotspots.
Identify the biggest culprits and start fixing the issues beginning at the TOP of the list.
Check out VisualVM. It's in the current JDK bin directory as jvisualvm. If you don't have a memory leak, the heap usage should go down when you run the garbage collector, and you can see which objects may be holding memory by calculating the retained sizes of objects in the heap.
http://download.oracle.com/javase/6/docs/technotes/guides/visualvm/intro.html
Like others say, use a profiler to find what is consuming the memory.
If you don't know already, the garbage collector can only release memory for objects that are out of scope, that is, objects that no longer have any references to them. Just make sure an object goes out of scope when you're done with it. It sounds like you're holding on to things in a way where they're still referenced somewhere.
Also, if you want to suggest to the GC that it cleans up, try this:
System.gc();
System.runFinalization();
Again, that is only a suggestion to the GC, but I've found it really helps if you run it after a lot of objects go out of scope.
Lastly, you can tweak your VM arguments.
There are settings for the min/max heap size. If it's a critical application, set them to the same value and set it high (that way the VM doesn't have to keep allocating/deallocating; it just grabs one big chunk at startup). This isn't a fix, just a workaround (see the example below).
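For example (the size is illustrative, and the main class name is hypothetical):

java -Xms512m -Xmx512m com.example.Main

With -Xms equal to -Xmx, the heap is sized once at startup and never grown or shrunk afterwards.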
