I have a DB connection.
The question is: when I call next() to fetch data from the DB, why does the garbage collector run at the same time? Because of the GC, collecting the data from the DB is very slow.
How can I solve this?
The garbage collector is running because something (in this case, probably the DB connection stack) is trying to allocate an object when there is insufficient free memory. This is normal behavior and you cannot entirely prevent it.
The first question is, is this really causing a significant slowdown? The minimal evidence you have presented is unconvincing. As comments point out, slow queries are not evidence of a GC-related problem, let alone that GC is causing the queries to be slow. Indeed, it could be the other way around. Poor query design could be the cause of the high level of GC activity; see point 3 below.
The second question is, why is this causing a noticeable slowdown? There are three possibilities:
1. Your heap may simply be too small. The JVM works best, and GC is most efficient, if you can run with a big heap. If you run with a small heap, the GC runs more often and takes proportionally more time per object reclaimed. You can increase the heap size using the JVM options -Xmx and -Xms; look them up in the manual (there is an example below).
2. Your application may have a memory leak. Over time, this will cause more and more of your heap to be filled with useless (but not garbage collectable) objects. As the heap approaches full, the GC will run more and more frequently, taking longer and longer. Eventually, the application will die with an OutOfMemoryError. The proper way to address this is to find and fix the memory leak. There are "band-aid" solutions, such as increasing the heap, or using a certain JVM option to cause the JVM to exit when more than a given percentage of time is spent garbage collecting.
3. Your application may just be generating huge numbers of objects as it fetches query results from the database. If this is the case, increasing the heap size will help a bit. However, you really need to look in detail at the DB queries you are performing to see if there's some way to reduce the amount of data that is retrieved from the database. Can you reduce it by doing more work on the DB side?
Finally, once you have addressed the three possibilities above, you may be able to improve things further by enabling the parallel GC. This will only help if you are running on a multi-core machine.
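For example, a launch command combining these options might look like this (a sketch only; com.example.MyApp and the sizes are placeholders to be tuned against your actual workload and then measured):

    java -Xms512m -Xmx2g -XX:+UseParallelGC com.example.MyApp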
To test whether it is really the garbage collector, increase the heap space (-Xmx512m or even higher, depending on your current settings). If it was the GC, there's a good chance that the problem is solved.
I think the first thing you need to do is find out the cause of the GC activity. Use a tool (like jconsole) to find out what is using up the memory. If the memory consumption is expected, then you must increase your heap size. If it is abnormal behavior, use a profiling tool to find out where the memory leak happens. Since you are doing database operations, double-check that you are not forgetting to release database resources (like connections) after use. Once you have solved the GC problem, go back and see if next() is still very slow.
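As a minimal sketch of releasing JDBC resources deterministically (the connection URL, table, and column names are placeholders), try-with-resources closes the ResultSet, Statement, and Connection even when an exception is thrown:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class QuerySketch {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:h2:mem:test"; // placeholder connection URL
            try (Connection conn = DriverManager.getConnection(url);
                 PreparedStatement ps = conn.prepareStatement("SELECT id FROM items");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long id = rs.getLong("id"); // process each row; avoid caching them all
                }
            } // all three resources are closed here automatically, in reverse order
        }
    }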
Related
I'm using the Java Persistence API to develop a standalone application. Recently I noticed that the memory usage keeps rising when I create objects from entity classes, as well as from JPAController classes. It seems that the objects stay in memory, since the memory allocated to the project won't decrease (e.g. 400 MB ---> create object ---> 450 MB ---> stays at 450 MB). Will this affect performance badly? Should I call the System.gc() method to remove these objects?
Generally, System.gc() is not guaranteed to perform a garbage collection. Ultimately it is up to the JVM to decide. See the javadoc.
Have you observed what happens when you approach the memory limits of the JVM? Does garbage collection happen then? If not and you receive an OutOfMemoryError, you are either retaining something longer than you need to, or you actually need extra heap allocated to your VM.
In any case, I believe System.gc() shouldn't be used to solve such problems.
In my opinion, the approach to the problem should be different. A call to System.gc() is not guaranteed to free any memory at all; please see When does System.gc() do anything.
If you can measure a problem in your memory allocation, via jconsole, a post-mortem analysis of a JVM heap dump, or whatever, then that is another problem. By gathering this information you will know what remains where in your memory regions, and can then take action to contain it.
The only way this would negatively affect performance throughout the life of your program is if you want to keep these entities around forever but the old generation of your heap is smaller than the 450 MB you mentioned. Assuming that you want to keep between 1 and 2 times those 450 MB around forever, with the default ratios of the JVM, setting a parameter such as -Xmx2g will probably be fine. There are many more parameters for fine-tuning performance beyond that, but that's probably all the complexity you're looking for for now. If you want more details on heap tuning and really want to get into performance, check out the doc on Garbage Collection Tuning by Oracle. Alternatively, something to watch over lunch is a great YouTube video on GC tuning by Gil Tene.
But calling System.gc() probably won't do anything useful.
Our JBoss 3.2.6 application server is having some performance issues. After turning on verbose GC logging and analyzing the logs with GCViewer, we've noticed that after a while (7 to 35 hours after a server restart) the GC goes crazy. Initially the GC works fine, doing a GC every hour or so, but at a certain point it starts performing full GCs every minute. As this only happens in our production environment, we have not been able to try turning off explicit GCs (-XX:+DisableExplicitGC) or modifying the RMI GC interval yet; but as this happens only after a few hours, it does not seem to be caused by the known RMI GC issues.
Any ideas?
Update:
I'm not able to post the GCViewer output just yet, but the heap does not seem to be hitting the max limit at all. Before the GC goes crazy it is GC-ing just fine, and even when it goes crazy the heap usage doesn't get above 2 GB (of a 24 GB max).
Besides RMI are there any other ways explicit GC can be triggered? (I checked our code and no calls to System.gc() are being made)
Is your heap filling up? Sometimes the VM will get stuck in a 'GC loop' when it can free up just enough memory to prevent a real OutOfMemoryError but not enough to actually keep the application running steadily.
Normally this would trigger an "OutOfMemoryError: GC overhead limit exceeded", but there is a certain threshold that must be crossed before this happens (98% CPU time spent on GC off the top of my head).
Have you tried enlarging heap size? Have you inspected your code / used a profiler to detect memory leaks?
You almost certainly have a memory leak, and if you let the application server continue to run it will eventually crash with an OutOfMemoryError. You need to use a memory analysis tool - one example would be VisualVM - and determine the source of the problem. Usually memory leaks are caused by some static or global object that never releases the object references it stores.
Good luck!
Update:
Rereading your question, it sounds like things are fine until suddenly you get into this situation where the GC is working much harder to reclaim space. That sounds like there is some specific operation that consumes (and doesn't release) a large amount of heap.
Perhaps, as #Tim suggests, your heap requirements are just at the threshold of the max heap size, but in my experience you'd need to be pretty lucky to hit that exactly. At any rate, some analysis should determine whether it is a leak or whether you just need to increase the size of the heap.
Apart from the more likely event of a memory leak in your application, there could be 1-2 other reasons for this.
On a Solaris environment, I once had such an issue when I allocated almost all of the available 4 GB of physical memory to the JVM, leaving only around 200-300 MB to the operating system. This led to the VM process suddenly swapping to disk whenever the OS was under increased load. The solution was not to exceed 3.2 GB. A real corner case, but maybe it's the same issue as yours?
The reason this led to increased GC activity is that heavy swapping slows down the JVM's memory management, which caused many short-lived objects to escape the survivor space and end up in the tenured space, which in turn filled up much more quickly.
I recommend when this happens that you do a stack dump.
More often than not I have seen this happen with a thread population explosion.
Anyway, look at the stack dump file and see what's running. You could easily set up a cron job or monitoring script to run jstack periodically.
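For example, a crontab entry along these lines (the PID 12345 and the paths are illustrative) captures a dump every five minutes; note that % must be escaped in crontabs:

    */5 * * * * jstack 12345 >> /var/log/myapp/jstack.$(date +\%s).txt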
You can also compare the sizes of the stack dumps. If they grow really big, you have something that's creating lots of threads.
If they don't get bigger, you can at least see which threads (call stacks) are running.
You can use VisualVM or some fancy JMX crap later if that doesn't work, but first start with jstack, as it's easy to use.
Let's say I've got an application which has a memory leak. At some point the GC will try very hard to clear memory and will slow down my application. I know that if you set the JVM parameter -XX:+UseGCOverheadLimit, it will throw an OutOfMemoryError:
if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered.
However, this is somehow not good enough for me, because my application will become very slow even before these numbers are hit. The GC will absorb the CPU for some time before the OutOfMemoryError is thrown. My goal is to recognize very early that there will most likely be a problem, and then throw the OutOfMemoryError. After that I have some kind of recovery strategy.
OK, now I've found two additional parameters, GCTimeLimit and GCHeapFreeLimit. With them it is possible to tweak the two quoted constants (98% and 2%).
I've made some tests of my own, like a small piece of code which produces a memory leak, and played with those settings. But I'm not really sure how to find the correct tradeoff. My hope is that someone else has had the same problem and come up with a reasonable solution, or maybe there are some other GC switches which I don't know about yet.
I'm feeling a little bit lost, since I'm not really an expert on this topic and it seems that there are a lot of things which can be considered.
If you are using the Sun/Oracle JVM, this page seems to be a pretty complete GC-tuning primer.
You can use java.lang.management.MemoryUsage to determine the used memory and the total memory available. As it approaches the tunable GC collection threshold, you can throw your error.
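A minimal sketch of that idea (the 90% threshold is an arbitrary illustration, not a recommended value):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapWatch {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            long used = heap.getUsed();
            long max = heap.getMax(); // may be -1 if no maximum is defined
            if (max > 0 && used > 0.9 * max) {
                // fail early with your own error, before the GC death spiral sets in
                throw new OutOfMemoryError("heap usage above 90% of max");
            }
        }
    }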
Of course doing this is a little ridiculous. If the issue is that you need more memory then increase the heap size. The more likely issue is that you're not releasing memory gracefully when you're done with it.
Side-step the JVM heap and use something like Terracotta's BigMemory, which uses direct memory management to grow beyond the reach of the garbage collector.
I have a problem with my program (JSF running on GlassFish). It's processing a lot of data (and inserting it into the database using Hibernate). The problem is that after about 2 hours of work it slows down. I don't get any exception (in particular, there is no OutOfMemoryError). Is it possible that it is a memory leak? I've checked the heap dump with Eclipse Memory Analyzer and there were some HashMap issues. I've fixed them where possible, and now the tool doesn't show the problem. But my application still doesn't work properly.
It sounds like your problem is not a conventional memory leak at all.
If I was to guess, I'd say that you've got a poorly designed data structure, an ineffective cache, or maybe a concurrency bottleneck.
You should probably focus on performance profiling to see where the time is going and to look for signs of lock contention.
There is a chance that you have some sort of memory leak and produce a lot of temporary objects, so that after a decent amount of time the garbage collector kills your performance. If this is the case, you could play with the -Xmx option: with a smaller heap your application should slow down earlier, and a bigger heap should show the opposite effect.
The effect could also be caused by growing internal data structures. Operations on data structures always have a time complexity ("Big-O notation"), and if the complexity is polynomial or worse, such operations can kill performance too. Have a look at the collections in your application that grow over time and double-check that you've chosen the optimal collection type.
If, on purpose, I create an application that crunches data while suffering from memory leaks, I can notice that the memory as reported by, say:
Runtime.getRuntime().freeMemory()
starts oscillating between 1 and 2 MB of free memory.
The application then enters a loop that goes like this: GC, process some data, GC, etc. Because the GC happens so often, the application basically isn't doing much else anymore. Even the GUI takes ages to respond (and no, I'm not talking about EDT issues here; it's really the VM stuck in some endless GC mode).
And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?
Note that I'm not talking about out-of-memory errors, nor about detecting the memory leak itself.
I'm talking about detecting that an application is running so low on memory that it is basically calling the GC all the time, leaving hardly any time to do something else (in my hypothetical example: crunching data).
Would it work, for example, to repeatedly read how much memory is available during, say, one minute, and if the number has been "oscillating" between different values all below, say, 4 MB, to conclude that there's been a leak and that the application has become unusable?
And I was wondering: is there a way to programmatically detect that the JVM doesn't have enough memory anymore?
I don't think so. You can find out roughly how much heap memory is free at any given instant, but AFAIK you cannot reliably determine when you are running out of memory. (Sure, you can do things like scraping the GC log files, or trying to pick patterns in the free memory oscillations. But these are likely to be unreliable and fragile in the face of JVM changes.)
However, there is another (and IMO better) approach.
In recent versions of Hotspot (version 1.6 and later, I believe), you can tune the JVM / GC so that it will give up and throw an OOME sooner. Specifically, the JVM can be configured to check that:
the ratio of free heap to total heap is greater than a given threshold after a full GC, and/or
the time spent running the GC is less than a certain percentage of the total.
The relevant JVM parameters are "UseGCOverheadLimit", "GCTimeLimit" and "GCHeapFreeLimit". Unfortunately, Hotspot's tuning parameters are not well documented on the public web, but these ones are all listed here.
Assuming that you want your application to do the sensible thing ... give up when it doesn't have enough memory to run properly anymore ... then just launch the JVM with a smaller GCTimeLimit or a larger GCHeapFreeLimit than the defaults.
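For instance (the values are illustrative, and com.example.MyApp is a placeholder; the defaults are GCTimeLimit=98 and GCHeapFreeLimit=2):

    java -XX:+UseGCOverheadLimit -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=10 com.example.MyApp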
EDIT
I've discovered that the MemoryPoolMXBean API allows you to look at the peak usage of individual memory pools (heaps), and set thresholds. However, I've never tried this, and the APIs have lots of hints that suggest that not all JVMs implement the full API. So, I would still recommend the HotSpot tuning option approach (see above) over this one.
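For what it's worth, here is a sketch of setting a usage threshold on the heap pools (the 90% figure is illustrative, and as noted above, threshold support is optional per pool):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    public class PoolThresholds {
        public static void main(String[] args) {
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setUsageThreshold((long) (max * 0.9)); // notify once usage crosses 90%
                    }
                }
            }
            // later: poll pool.isUsageThresholdExceeded(), or register a
            // javax.management.NotificationListener on the MemoryMXBean
        }
    }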
You can use getHeapMemoryUsage.
I see two attack vectors.
Either monitor your memory consumption.
When you more or less constantly use most of the available memory, it is very likely that you have a memory leak (or are simply using too much memory). The VM will constantly try to free some memory without much success => constantly high memory usage.
You need to distinguish that from a large zigzag pattern, which often happens without being an indicator of a memory problem: you use more and more memory, but when the GC finds time to do its job it finds lots of garbage to collect, so everything is fine.
The other attack vector is to monitor how often the GC runs and how much memory it reclaims. If it runs often with only small gains in memory, it is likely you have a problem.
I don't know if you can access this kind of information directly from your program. But if nothing else, you can specify parameters at startup which make the GC log its information to a file, which in turn could be parsed.
What you could do is spawn a thread that wakes up periodically and calculates the amount of used memory and records the result. Then you can do regression analysis on the result to estimate the rate of memory growth in your application. If you know the rate of growth, and the maximum amount of memory, you can predict (with some confidence) when your application will run out of memory.
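A minimal sketch of such a sampler (the ten-second interval and the CSV-to-stdout output are placeholders for whatever interval and recording you actually need):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class MemorySampler {
        public static void main(String[] args) {
            ScheduledExecutorService sampler = Executors.newSingleThreadScheduledExecutor();
            Runtime rt = Runtime.getRuntime();
            sampler.scheduleAtFixedRate(() -> {
                long used = rt.totalMemory() - rt.freeMemory();
                // timestamp,bytesUsed -- feed this into your regression analysis later
                System.out.println(System.currentTimeMillis() + "," + used);
            }, 0, 10, TimeUnit.SECONDS);
        }
    }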
You can pass arguments to your Java virtual machine that give you GC diagnostics, such as:

    -verbose:gc            Turns on the logging of GC information. Available in all JVMs.
    -XX:+PrintGCTimeStamps Prints the times at which the GCs happen, relative to the start of the application.
If you capture that output in a file, your application can periodically read and parse it to know when the GCs have happened, so you can work out the average time between GCs.
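For example, a launch command that writes those diagnostics to a file might look like this (the log file name and main class are placeholders; -Xloggc is the pre-JDK-9 flag):

    java -verbose:gc -XX:+PrintGCTimeStamps -Xloggc:gc.log com.example.MyApp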
I think the JVM does exactly this for you and throws java.lang.OutOfMemoryError: GC overhead limit exceeded. So if you catch OutOfMemoryError and check for that message then you have what you want, don't you?
See this question for more details
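A hedged sketch of that check (crunchData and recover are placeholders for your real workload and recovery strategy, and catching an Error is generally a last resort):

    public class OverheadGuard {
        public static void main(String[] args) {
            try {
                crunchData();
            } catch (OutOfMemoryError e) {
                String msg = String.valueOf(e.getMessage()); // the message may be null
                if (msg.contains("GC overhead limit exceeded")) {
                    recover(); // the GC death-spiral case you want to intercept
                } else {
                    throw e; // genuine heap exhaustion; don't swallow it
                }
            }
        }

        static void crunchData() { /* placeholder for the leaky workload */ }
        static void recover()    { /* placeholder for the recovery strategy */ }
    }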
I've been using Plumbr for memory leak detection and it's been a great experience, though the licence is very expensive: http://plumbr.eu/