Force HotSpot to make frequent GCs? - java

I am benchmarking a server process in Java, and it appears that HotSpot is not performing many GCs, but when it does, it hits performance massively.
Can I force HotSpot to perform frequent smaller GCs rather than a few massive, long ones?

You can try changing the GC to parallel or concurrent.
Here's a link to the documentation.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
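For example, on a HotSpot JVM of that era the collector is selected with a command-line flag (MyServer is a placeholder for your main class):

java -XX:+UseParallelGC MyServer
java -XX:+UseConcMarkSweepGC MyServer

The first selects the parallel (throughput) collector; the second selects the concurrent mark-sweep collector, which trades some throughput for shorter pauses.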

Interfering with when the GC is called is usually a bad idea.
A better approach is to tune the sizes of the eden, survivor and old spaces if you have problems with GC performance.
If a full sweep has to be done, it does not really matter how often it is triggered; it will always be relatively slow. The only fast collections are those in the eden and survivor spaces.
So increasing the eden and survivor spaces might solve your problem, but unfortunately good memory profiling is rather time-consuming and complex to perform.
http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
(link stolen from the other answer) also gives the options for configuring that if necessary. -XX:NewRatio=2 or -XX:NewRatio=3 might increase your speed, but it might also slow things down. Unfortunately, that is very application-dependent.
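As a sketch of what that looks like on the command line (the heap size here is purely illustrative):

java -Xmx512m -XX:NewRatio=2 MyServer

With -XX:NewRatio=2 the old generation is twice the size of the young generation, i.e. the young (eden + survivor) space gets one third of the heap.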

You can tell the JVM to perform a garbage collection programmatically with System.gc(). Please note that the Javadoc says this is only a suggestion. You can try calling it before entering a critical section where you don't want to pay the GC performance penalty.
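A minimal sketch of that pattern, where processRequest() is a hypothetical stand-in for your critical section:

// Suggest a collection now, so that a GC pause is less likely mid-request.
System.gc();
processRequest(); // hypothetical latency-sensitive work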

You can increase how often a GC is performed by decreasing the young generation size, or by calling System.gc() more often. This doesn't mean each pause will be shorter, just that pauses will happen more often.
The best way to reduce the impact of GC is to memory profile your application and reduce the amount of garbage you are producing. This will not only make your code faster, but reduce how often and for how long each GC occurs.
In the more extreme case, you can reduce how often the GC occurs to less than once per day, removing it as an issue altogether.
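As an illustration of the garbage-reduction point above (my example, not from the original answer): string concatenation in a loop allocates a new String every iteration, while a StringBuilder reuses one buffer. Here values is an arbitrary String array:

// Allocates a fresh String (and backing char[]) on every iteration:
String s = "";
for (String v : values) s += v;

// Produces far less garbage by reusing a single buffer:
StringBuilder sb = new StringBuilder();
for (String v : values) sb.append(v);
String result = sb.toString();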

Related

Java heap space: pros/cons of size in performance terms

What are the limitations on Java heap space? What are the advantages/disadvantages of not increasing the size to a large value?
This question stems from a program of mine that sometimes runs out of space. I am aware that the heap size can be increased to reduce the chance of this. However, I am looking for the pros and cons of keeping the heap size small or large. Does decreasing it increase speed?
Increasing the heap size may, under some circumstances, delay the activities of the garbage collector. If you are lucky, it may be delayed until your program has finished its work and terminated.
Depending on the data structures that you are moving around in your program, a larger heap size may also improve performance because the GC can work more effectively. But tuning a program in that direction is … tricky, at best.
Using in-memory caches in your program will definitely improve performance (if you have a decent cache-hit ratio), and of course, this will need heap space, perhaps a lot.
But if your program reports OutOfMemoryError: heap space because of the data it has to process, you do not have many alternatives other than increasing the heap space; performance is the least of your problems in that case. Alternatively, you can change your algorithm so that it does not load all data into memory and processes it on disk instead. But then again, performance goes out the window.
If you run a server of some kind, about 80% to 85% heap utilisation is a good value, provided you do not have heavy spikes. If you know for sure that incoming requests do not cause significant additional memory load, you may even go up to 95%. You want value for money, and you paid for the memory one way or the other – so you want to use it!
You can even set Xms and Xmx to different values; then the JVM can grow the heap when needed, and nowadays it can even release that additional memory when it is no longer needed. But growing the heap costs performance – on the other hand, a slow system is always better than one that crashes.
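For example (the sizes are illustrative):

java -Xms256m -Xmx2g MyServer

This starts the heap at 256 MB and allows it to grow to 2 GB when needed.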
Too small a heap size may also hurt performance if your system does not have enough cores, because the garbage collector threads then compete with the business threads for the CPU. At some point, the CPU spends a significant share of its time on garbage collection. If you run a server, this behaviour is nearly unavoidable, but for a tool-like program, increasing the heap can prevent it (because the program may finish before the GC needs to become active), as already said in the beginning …

Does small amount of Xmx cause java program to run inefficiently because of Garbage Collection?

My point is: since there's a limited heap size, does the JVM need to run garbage collection more frequently? And practically, is it a performance killer?
The optimal amount of memory to use might be 2-5x the minimum heap needed to run the program. How often the GC runs is inversely proportional to the amount of free memory after a GC, because that free space is exactly what gets filled by new allocations before the next collection.
practically, is it a performance killer?
That depends on your application, but I would assume it is.
Given that RAM is relatively cheap compared to the cost of your time, I tend to make sure I have plenty of it. You can buy 16 GB for less than $80.
This depends somewhat on the GC algorithm and the JDK you are using. The default collector is a killer, as it stops execution of all other threads while it runs. If you are on JDK 1.6 or later, you can make this visible using e.g. VisualVM.
There are different GC algorithms to overcome this. Here I would send you to the docs, as they explain the differences best.
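Besides VisualVM, one way to make the pauses visible is GC logging; on a HotSpot JDK of that vintage, these flags print a line per collection, including its duration:

java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps MyApp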
Finding the right balance between the memory requirements of your application and the memory allocation you give it (using Xmx) is a key performance tuning activity.
Yes, you want to make heap big enough so that the JVM does not end up thrashing on constant GC, which can absolutely be a performance killer.
What heap size you need is totally application dependent.

How to prevent the Garbage Collector from slowing down my application

Let's say I've got an application which has a memory leak. At some point the GC will try very hard to clear memory and will slow down my application. I know that, with the GC overhead limit enabled (-XX:+UseGCOverheadLimit, the default), the JVM will throw an OutOfMemoryError:
if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered.
However, this is somehow not good enough for me, because my application becomes very slow even before those numbers are hit. The GC will hog the CPU for some time before the OutOfMemoryError is thrown. My goal is to recognize very early that there will most likely be a problem, and then throw the OutOfMemoryError myself. After that I have some kind of recovery strategy.
OK, now I've found two additional parameters, GCTimeLimit and GCHeapFreeLimit. With them it is possible to tweak the two quoted constants (98% and 2%).
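For reference, they are set like any other -XX option; the values here are just ones I experimented with, not recommendations:

java -XX:GCTimeLimit=90 -XX:GCHeapFreeLimit=10 MyApp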
I've run some tests of my own, with a small piece of code that produces a memory leak, and played with those settings. But I'm not really sure how to find the correct trade-off. My hope is that someone else has had the same problem and came up with a reasonable solution, or maybe there are other GC switches that I don't know about yet.
I'm feeling a little bit lost, since I'm not really an expert on this topic and it seems that there are a lot of things to consider.
If you are using the Sun/Oracle JVM, this page seems to be a pretty complete GC-tuning primer.
You can use java.lang.management.MemoryUsage to determine the used memory and the total memory available. As usage approaches the tunable GC collection threshold, you can throw your error.
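A minimal sketch of that check; the 90% threshold is an arbitrary value of mine, not anything mandated by the API:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
// getMax() can return -1 if no maximum is defined, so guard against that.
if (heap.getMax() > 0 && (double) heap.getUsed() / heap.getMax() > 0.90) {
    throw new OutOfMemoryError("heap is over 90% full");
}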
Of course doing this is a little ridiculous. If the issue is that you need more memory then increase the heap size. The more likely issue is that you're not releasing memory gracefully when you're done with it.
Side-step the JVM heap and use something like Terracotta's BigMemory, which uses direct memory management to grow beyond the reach of the garbage collector.
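BigMemory is a commercial product, but the underlying idea (keeping data outside the garbage-collected heap so the GC never scans it) can be sketched with an NIO direct buffer:

import java.nio.ByteBuffer;

// 256 MB allocated outside the Java heap; only the small
// ByteBuffer wrapper object itself is subject to GC.
ByteBuffer offHeap = ByteBuffer.allocateDirect(256 * 1024 * 1024);
offHeap.putLong(0, 42L);         // write at byte offset 0
long value = offHeap.getLong(0); // read it back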

Find available internal memory at runtime from code

I must find the available internal memory at runtime from code, because when I lower the phone's internal memory to less than 100K, SQLite operations throw SQLiteDiskIOException on each DB operation due to insufficient disk space.
Any ideas?
In such a case, using an embedded DB seems overkill, as you have little memory and, as a consequence, a weak environment. I would instead prefer a prevalence layer like
Space4J
Prevayler
You'll lose some features compared to JDBC (transactions, SQL), but your memory consumption can be really lowered.
Calling Runtime.getRuntime().freeMemory() gives you an estimate of the amount of free memory. However, it is not a particularly useful measure, since there is a good chance that you can allocate more memory than the reported value. When you try to allocate more memory than is currently free, the JVM will automatically run the GC in an attempt to reclaim enough space for your allocation, and this will usually succeed.
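A sketch of a more useful estimate, which also counts the room the heap can still grow into:

Runtime rt = Runtime.getRuntime();
// free space in the current heap, plus the amount the heap may still grow:
long potentiallyAvailable = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());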
You can request the GC to run "right now" by calling System.gc(), but this is generally a bad idea from a performance perspective. The GC is most efficient (in terms of time spent per byte reclaimed) if you just let the JVM run the GC when it needs to. The only case where it can be worthwhile to run the GC is if you know that you have CPU cycles to spare at a particular point and you want to mitigate GC pauses.
In your particular case, I'm not sure what advantage there is in knowing how much free memory there is. Why don't you just try the sqlite operations and catch / deal with the exceptions? Or increase the heap size? Or track down what is leaking memory or using it inefficiently?
Runtime rt = Runtime.getRuntime();
long free = rt.freeMemory(); // bytes currently free in the heap
Calling System.gc() may increase the available free memory.

Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that led me to this piece of Sun documentation which explains:
The parallel collector will throw an OutOfMemoryError if too much time is
being spent in garbage collection: if more than 98% of the total time is
spent in garbage collection and less than 2% of the heap is recovered, an
OutOfMemoryError will be thrown. This feature is designed to prevent
applications from running for an extended period of time while making
little or no progress because the heap is too small. If necessary, this
feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the
command line.
Which tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap.
But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond?
I'm trying to determine the best approach to actually solving this issue, rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
I'm trying to determine the best approach to actually solving this issue, rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.
Well, you're using too much memory - and from the sound of it, it's probably because of a slow memory leak.
You can try increasing the heap size with -Xmx, which would help if this isn't a memory leak but a sign that your app actually needs a lot of heap and your current setting is slightly too low. If it is a memory leak, this will just postpone the inevitable.
To investigate whether it is a memory leak, instruct the VM to dump the heap on OOM using the -XX:+HeapDumpOnOutOfMemoryError switch, then analyze the heap dump to see whether there are more objects of some kind than there should be. http://blogs.oracle.com/alanb/entry/heap_dumps_are_back_with is a pretty good place to start.
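For example (the dump path is illustrative):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp MyApp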
Edit: As fate would have it, I happened to run into this problem myself just a day after this question was asked, in a batch-style app. It was not caused by a memory leak, and increasing the heap size didn't help either. What I actually did was decrease the heap size (from 1 GB to 256 MB) to make full GCs faster (though somewhat more frequent). YMMV, but it's worth a shot.
Edit 2: Not all problems were solved by the smaller heap… the next step was enabling the G1 garbage collector, which seems to do a better job than CMS.
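On Java 7 that is a single flag; on Java 6, where G1 was still experimental, -XX:+UnlockExperimentalVMOptions is needed as well:

java -XX:+UseG1GC MyApp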
The >98% would be measured over the same period in which less than 2% of memory is recovered.
It's quite possible that there is no fixed period for this. For instance, the OOM check might be done after every 1,000,000 object liveness checks; the time that takes would be machine-dependent.
You most likely can't "solve" your problem by adding -XX:-UseGCOverheadLimit. The most likely result is that your application will slow to a crawl, use a bit more memory, and then hit the point where the GC simply does not recover any memory any more. Instead, fix your memory leaks and then (if still needed) increase your heap size.
But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond?
The simple answer is that it is not specified. However, in practice the heuristic "works", so it cannot be either of the two extreme interpretations that you posited.
If you really wanted to find out what interval the measurements are made over, you could always read the OpenJDK 6 or 7 source code. But I wouldn't bother, because it wouldn't help you solve your problem.
The "best" approach is to do some reading on tuning (starting with the Oracle / Sun pages), and then carefully "twiddle the tuning knobs". It is not very scientific, but the problem space (accurately predicting application + GC performance) is "too hard" given the tools that are currently available.
