I have a problem with my program (JSF running on GlassFish). It processes a lot of data (and inserts it into the database using Hibernate), and the problem is that after about two hours of work it slows down. I don't get any exception (in particular, no OutOfMemoryError). Is it possible that this is a memory leak? I've checked a heap dump with Eclipse Memory Analyzer and it pointed to some HashMap issues. I've fixed those where possible and the tool no longer reports the problem, but my application still doesn't work properly.
It sounds like your problem is not a conventional memory leak at all.
If I were to guess, I'd say that you've got a poorly designed data structure, an ineffective cache, or maybe a concurrency bottleneck.
You should probably focus on performance profiling to see where the time is going and to look for signs of lock contention.
There is a chance that you have some sort of memory leak, or that you produce a lot of temporary objects, so that after some time the garbage collector kills your performance. If this is the case, you could play with the -Xmx option: with a smaller heap your application should slow down earlier, while a bigger heap should show the opposite effect.
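A quick, hedged way to run that experiment (yourapp.jar is just a placeholder; on GlassFish you would put the flag in the server's JVM options instead):

java -Xmx128m -jar yourapp.jar    (small heap: the slowdown should appear sooner)
java -Xmx1024m -jar yourapp.jar   (large heap: the slowdown should appear later)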
The effect could also be caused by growing internal data structures. Operations on data structures always have a time complexity (big-O notation), and if the complexity is polynomial or even worse, such operations can kill performance too. Have a look at the collections in your application that grow over time and double-check that you've chosen the optimal collection type.
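As a small illustration of how much the collection type can matter once a structure grows (the class and values here are made up for the example):

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupComparison {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < 100000; i++) {
            list.add(i);
            set.add(i);
        }
        long t0 = System.nanoTime();
        boolean inList = list.contains(99999); // O(n): scans the whole list
        long t1 = System.nanoTime();
        boolean inSet = set.contains(99999);   // O(1) on average: one hash lookup
        long t2 = System.nanoTime();
        System.out.println("list lookup: " + (t1 - t0) + " ns, set lookup: " + (t2 - t1) + " ns");
    }
}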
Related
I'm trying to figure out the best way to analyse why an application suddenly allocates memory more often than before. For a better understanding of what I mean, take a look at this memory graph.
As you can see it runs fine for a few days and then allocates memory way faster.
What tool can help me here? What is the best approach?
I have already taken several thread dumps but there is nothing noticeable. I dumped the heap twice and compared the objects but that didn't help either.
I don't think it is a memory problem, because the full GC can always clean up the same amount of memory. It rather looks like some routine has started to run which requires a lot of memory. It is not related to heavier usage, as it goes back to normal as soon as I restart the application.
Let's say I've got an application which has a memory leak. At some point the GC will try very hard to clear memory and will slow down my application. I know that with the GC overhead limit enabled (the JVM default, i.e. -XX:+UseGCOverheadLimit), the JVM will throw an OutOfMemoryError:
if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered.
However, this is not quite good enough for me, because my application will become very slow even before those numbers are hit. The GC will absorb the CPU for some time before the OutOfMemoryError is thrown. My goal is to recognize very early that there will most likely be a problem, and then throw an OutOfMemoryError myself. After that I have some kind of recovery strategy.
OK, now I've found two additional parameters, GCTimeLimit and GCHeapFreeLimit. With them it is possible to tweak the two quoted constants (98% and 2%).
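For reference, a tightened invocation might look like this (the values 90 and 10 are only illustrative, not recommendations, and yourapp.jar is a placeholder):

java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=10 -jar yourapp.jar

With these settings the OutOfMemoryError would be thrown once more than 90% of the time goes to GC while less than 10% of the heap is recovered, i.e. earlier than with the defaults.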
I've made some tests of my own, such as a small piece of code that produces a memory leak, and played with those settings. But I'm not really sure how to find the correct trade-off. My hope is that someone else has had the same problem and came up with a reasonable solution, or maybe there are some other GC switches I don't know about yet.
I'm feeling a little bit lost, since I'm not really an expert on this topic and it seems that there are a lot of things to consider.
If you are using the Sun/Oracle JVM, this page seems to be a pretty complete GC-tuning primer.
You can use java.lang.management.MemoryUsage to determine the used memory and the total memory available. As it approaches the tunable GC collection threshold, you can throw your error.
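A minimal sketch of that idea; the 90% threshold is an arbitrary example, not a recommended value:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    // returns true when used heap exceeds the given fraction of the maximum
    static boolean heapAboveThreshold(double fraction) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // getMax() can be -1 if the maximum is undefined, so guard against that
        return heap.getMax() > 0 && (double) heap.getUsed() / heap.getMax() > fraction;
    }

    public static void main(String[] args) {
        if (heapAboveThreshold(0.9)) { // 0.9 is an arbitrary example threshold
            throw new OutOfMemoryError("Heap usage above 90% of maximum");
        }
    }
}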
Of course doing this is a little ridiculous. If the issue is that you need more memory then increase the heap size. The more likely issue is that you're not releasing memory gracefully when you're done with it.
Side-step the JVM heap and use something like Terracotta's BigMemory, which uses direct memory management to grow beyond the reach of the garbage collector.
I have a DB connection.
The question is: when I call next() to get data from the DB, why does the garbage collector run at the same time? Because of the GC, collecting data from the DB is very slow.
How can I solve this?
The garbage collector is running because something (in this case, probably the DB connection stack) is trying to allocate an object when there is insufficient free memory. This is normal behavior and you cannot entirely prevent it.
The first question is, is this really causing a significant slowdown? The minimal evidence you have presented is unconvincing. As comments point out, slow queries are not evidence of a GC-related problem, let alone that GC is causing the queries to be slow. Indeed, it could be the other way around. Poor query design could be the cause of the high level of GC activity; see point 3 below.
The second question is, why is this causing a noticeable slowdown? There are three possibilities:
1. Your heap may simply be too small. The JVM works best, and GC is most efficient, if you can run with a big heap. If you run with a small heap, the GC runs more often and takes proportionally more time per object reclaimed. You can increase the heap size using the JVM options -Xmx and -Xms; look them up in the manual.
2. Your application may have a memory leak. Over time, this will cause more and more of your heap to be filled with useless (but not garbage-collectable) objects. As the heap approaches full, the GC will run more and more frequently, taking longer and longer. Eventually, the application will die with an OutOfMemoryError. The proper way to address this is to find and fix the memory leak. There are "band-aid" solutions, such as increasing the heap, or using a JVM option to make the JVM exit when more than a given percentage of time is spent garbage collecting.
3. Your application may just be generating huge numbers of objects as it fetches query results from the database. If this is the case, increasing the heap size will help a bit. However, you really need to look in detail at the DB queries you are performing to see if there's some way to reduce the amount of data that is retrieved from the database. Can you reduce it by doing more work on the DB side?
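On point 3, one hedged illustration: if the results come over plain JDBC, a fetch-size hint can let the driver stream rows in batches instead of buffering an enormous result set (whether the driver honours the hint varies; the table and column names here are invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StreamingQuery {
    // reads a large result set in batches instead of buffering it all in memory
    public static void streamRows(Connection conn) throws SQLException {
        PreparedStatement ps = conn.prepareStatement("SELECT id, price FROM quotes");
        ps.setFetchSize(500); // hint to the driver: fetch 500 rows at a time
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            // handle one row at a time; avoid accumulating rows in a collection
            long id = rs.getLong("id");
            double price = rs.getDouble("price");
        }
        rs.close();
        ps.close();
    }
}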
Finally, once you have addressed the three possibilities above, you may be able to improve things further by enabling the parallel GC. This will only help if you are running on a multi-core machine.
To test whether it is really the garbage collector, increase the heap space (-Xmx512m or even higher, depending on your current settings). If it was the GC, there's a good chance that the problem is solved.
I think the first thing you need to do is find the cause of the GC activity. Use a tool (like JConsole) to find out what is using up the memory. If the memory consumption is expected, then you must increase your heap size. If it is abnormal behavior, use a profiling tool to find out where the memory leak happens. Since you are doing database operations, double-check that you don't forget to release database resources (like connections) after use. After you solve the GC problem, go back and see whether next() is still very slow.
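On the resource-release point, the classic cleanup pattern looks roughly like this (the URL and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ResourceCleanup {
    public static void query(String url) throws SQLException {
        Connection conn = DriverManager.getConnection(url);
        try {
            Statement st = conn.createStatement();
            try {
                ResultSet rs = st.executeQuery("SELECT 1");
                while (rs.next()) {
                    // use the row
                }
                rs.close();
            } finally {
                st.close(); // closing the statement also releases its result sets
            }
        } finally {
            conn.close(); // always return the connection, even on failure
        }
    }
}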
I've written a pretty complex Java application that does a lot of calculations on price data from the markets in real time. Looking at the Task Manager in Windows, this sucker is taking close to 1 MB every 30 seconds. Performance is fine until it gets close to the memory limit around 300 MB; then the garbage collector really kicks in, spikes my CPU to around 50%, and the UI performance rapidly degrades. From all I've written so far it sounds like I have some bad code going on, because the nature of my program is CPU intensive but by design it stores very little data in memory.
I need some help on good next steps to take to figure out what the problem is. I think it would help if I could see what objects are getting stored in memory, as maybe I have some lousy code. I am heartbroken with Java, as I thought these were problems I would not have to worry about. Thanks for any answers. - Duncan
1. Identify some reasonable performance targets (memory usage, throughput, latency).
2. Put together some repeatable performance tests; the closer you can get these to real-life scenarios, the better (see the sketch after this list).
3. Get hold of a good profiler. I've used YourKit with a lot of success; the NetBeans and Eclipse profilers are not bad either. Most decent profilers will be able to identify memory usage, GC activity and performance hotspots.
4. Identify the biggest culprits and start fixing the issues, beginning at the TOP of the list.
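As a bare-bones illustration of step 2, a harness along these lines makes runs comparable; workload() is a stand-in for your real scenario:

public class RepeatableBenchmark {
    public static void main(String[] args) {
        for (int run = 1; run <= 5; run++) {
            long start = System.nanoTime();
            workload(); // stand-in for the real code path under test
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("run " + run + ": " + elapsedMs + " ms");
        }
    }

    private static void workload() {
        // replace with the scenario you are measuring
    }
}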
Check out VisualVM. It's in the current JDK bin directory as jvisualvm. If you don't have a memory leak, the heap usage should go down when you run the garbage collector, and you can see which objects may be holding memory by calculating the retained sizes of objects in the heap.
http://download.oracle.com/javase/6/docs/technotes/guides/visualvm/intro.html
Like others say, use a profiler to find what is consuming the memory.
If you don't know already, the garbage collector can only release memory for objects that are unreachable, that is, objects that don't have any references to them anymore. Just make sure an object goes out of scope when you're done with it. It sounds like you're holding onto objects in a way where they're still referenced somewhere.
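A common way objects stay referenced "somewhere" is a long-lived collection; this contrived sketch shows the pattern:

import java.util.ArrayList;
import java.util.List;

public class LingeringReference {
    // a static collection lives as long as the class: anything added here
    // stays reachable and can never be garbage collected
    private static final List<byte[]> cache = new ArrayList<byte[]>();

    static void process() {
        byte[] buffer = new byte[1024 * 1024];
        cache.add(buffer); // buffer is now referenced from a long-lived structure
        // ... use buffer ...
        // without this line, the buffer can never be reclaimed:
        cache.remove(buffer);
    }
}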
Also, if you want to suggest to the GC that it cleans up, try this:
System.gc();
System.runFinalization();
Again, these are only suggestions to the GC, but I've found they really help if you run them after a lot of objects go out of scope.
Lastly, you can tweak your vm arguments.
There are settings for the min/max heap size. If it's a critical application, set them to the same value and set them high (that way the JVM doesn't have to keep allocating/deallocating; it just grabs one big chunk at startup). This isn't a fix, just a workaround.
I have a mobile application that is suffering from slow-down over time. My hunch (fed in part by this article) is that this is due to fragmentation of memory slowing the app down, but I'm not sure. Here's a pretty graph of the app's memory use over time:
(Memory graph of the app's heap usage over time: http://kupio.com/image-dump/fragmented.png)
The 4 peaks on the graph are 4 executions of the exact same task on the app. I start the task, it allocates a bunch of memory, it sits for a bit (the flat line on top), and then I stop the task. At that point it calls System.gc() and the memory gets cleaned up.
As can be seen, each of the 4 runs of the exact same task take longer to execute. The low-points in the graph all return to the same level so there do not seem to be any memory leaks between task runs.
What I want to know is, is memory fragmentation a feasible explanation or should I look elsewhere first, bearing in mind that I've already done a lot of looking? The low-points on the graph are relatively low so my assumption is that in this state the memory would not be very fragmented since there can't be a lot of small memory holes to be causing problems.
I don't know how the J2ME memory allocator works, though, so I really don't know. Can anyone advise? Has anyone else had problems with this and recognises the memory profile of the app?
If you've got a little bit of time, you could test your theory by re-using memory with memory-pool techniques: each run of the task uses the 'same' chunks of memory, getting them from the pool and returning them at release time (see the sketch below).
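A minimal pool sketch in plain Java for illustration (on J2ME/CLDC, which lacks generics and LinkedList, you would use a Vector or a fixed array instead):

import java.util.LinkedList;

public class BufferPool {
    private final LinkedList<byte[]> free = new LinkedList<byte[]>();
    private final int bufferSize;

    public BufferPool(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    // hand out a recycled buffer if one is available, otherwise allocate
    public byte[] acquire() {
        return free.isEmpty() ? new byte[bufferSize] : free.removeFirst();
    }

    // return the buffer to the pool instead of letting the GC reclaim it
    public void release(byte[] buffer) {
        free.addLast(buffer);
    }
}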
If you're still seeing the degrading performance after doing this investigation, it's not memory fragmentation causing the problem. Let us all know your results and we can help troubleshoot further.
Memory fragmentation would account for it. What is not clear is whether the app's use of memory is causing paging; that would also slow things down and could cause the same issues.
If the problem really is memory fragmentation, there is not much you can do about it.
But before you give up in despair, try running your app with an execution profiler to see if it is spending a lot of time executing in an unexpected place. It is possible that the slowdown is actually due to a problem in your algorithms and nothing to do with memory fragmentation. As people have already said, J2ME garbage collectors should not suffer from fragmentation issues.
Consider looking at garbage collection statistics. You should see far more collections on the last run than on the first if your theory is to hold. Another thought might be that something else is eating your memory, so your application has less.
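If you can reproduce the behaviour on a desktop JVM, GC logging is a cheap way to collect those statistics; note these are HotSpot flags and will not exist on a J2ME VM (yourapp.jar is a placeholder):

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar yourapp.jar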
In other words, profiler time :)
What OS are you running this on? I have some experience with Windows CE5 (Windows Mobile) devices. CE5's OS-level memory architecture is quite broken and will fail quickly for memory-intensive applications. Your graph does not have any scales, but every process gets only 32 MB of address space on CE5. The VM and shared libraries will take their fair share of that as well, leaving you with quite little left.
The only way around this is to re-use the memory you allocated instead of giving it back to the collector and re-allocating later. This is, of course, much more low-level programming than you would usually want to do in Java, but on this platform you might be out of luck.