Is there a limit to increasing the max heap size in Java? I am wondering whether a large heap size can be set as long as the physical memory is available.
For example, if a server has 100 GB of RAM, can I set the max heap to 90 GB? I know that GC will halt the app, but I am just curious.
Thanks.
With a 32-bit JVM, the hard limit would be 4 GB, but the actual limit is lower: at least if you aren't running a 64-bit OS, some address space must be left for non-heap memory, such as the JVM's own native allocations, stacks for all threads, and architecture/OS reservations. A 64-bit JVM has no such limitation, so you could set the limit to 90 GB, although I wouldn't recommend it for the reason you already pointed out.
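As a quick sanity check, you can ask the running JVM what limit actually took effect; a minimal sketch (the class name is mine):

```java
public class MaxHeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the effective -Xmx (or the JVM's default if unset)
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

Run it with e.g. `java -Xmx90g MaxHeapCheck` on a 64-bit JVM; a 32-bit JVM would either refuse to start or report far less.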
Related
I am profiling a Java process with VisualVM and I found out that while my used heap remains constantly below 100 MB, the heap size keeps increasing to a point at which it is 10 times bigger than the used heap!
Reading from the docs:
By default, the virtual machine grows or shrinks the heap at each
collection to try to keep the proportion of free space to live objects
at each collection within a specific range. This target range is set
as a percentage by the parameters -XX:MinHeapFreeRatio= and
-XX:MaxHeapFreeRatio=, and the total size is bounded below by -Xms and above by -Xmx.
So, considering that MinHeapFreeRatio and MaxHeapFreeRatio are set to 40% and 70% respectively, why is this happening?
In VisualVM, under Monitor > Heap, I see three attributes depicting memory details, all with different figures:
Size: ?
Used: I believe this is the actual memory used.
Max: I believe this is the max heap size allocated to the Java process (specified with -Xmx).
I am not sure what Size actually depicts.
The three attributes can be defined as follows:
Size: The actual total reserved heap size
Used: The actual used heap size.
Max: The max size of the Java heap (young generation + tenured generation)
Indeed, when you launch your JVM, the initial heap size (which can be set with -Xms) is the initial total reserved heap size. Then, depending on how your application behaves, the JVM may need to increase the total reserved size until it reaches the max size, and if that is still not enough you can get an OutOfMemoryError.
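These three values map directly onto the java.lang.Runtime API; a small sketch (class name is mine) that prints them the way VisualVM labels them:

```java
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long size = rt.totalMemory();          // "Size": currently reserved heap
        long used = size - rt.freeMemory();    // "Used": portion actually occupied
        long max  = rt.maxMemory();            // "Max": the -Xmx ceiling
        System.out.printf("Size=%d MB, Used=%d MB, Max=%d MB%n",
                size >> 20, used >> 20, max >> 20);
    }
}
```

Used can never exceed Size, and Size can never exceed Max, which matches the curves VisualVM draws.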
Size depicts the heap block size currently assigned to the Java process. Try with -Xms512m or -Xms1024m: your starting Size will be 512 MB, but used memory may be much lower. As used memory grows, heap resizing occurs proactively so that memory can be allocated to live objects.
It's like having a gas tank with a 30-litre maximum capacity: you know you may need 20 litres for the trip, but what you actually use on the trip is 5 litres.
Heap size is the actual size of the heap your running application has.
Used heap is the used portion of the heap size.
Max heap size is the maximum value the application's heap size can have (can be set with the -Xmx option).
When you monitor the memory usage of a Java application, you see that the heap size may vary while the application runs. It cannot be greater than the max heap size. For a sample profiling session (monitoring of an application), see the image below:
A. If I execute a huge simulation program with -Xmx100000m (~100 GB), I see some spikes in the used heap (~30 GB). Those spikes increase the heap size and decrease the memory that can be used by other programs. I would like to limit the heap size to what is actually required to run the program without memory exceptions.
B. If I execute my simulation program with -Xmx10000m (~10 GB), I am able to limit the used heap size (~7 GB). The total heap size is less, too (of course). I do not get out-of-memory exceptions in the first phase of the program shown in the VisualVM figures (about 16 minutes).
I naively expected that if I increased -Xmx from 10 GB (B) to 100 GB (A), the used heap would stay about the same and Java would only use more memory in order to avoid out-of-memory exceptions. However, the behavior seems to be different. I guess that Java works this way in order to improve performance.
An explanation for the large used heap in A might be that the growth behavior of hash maps is different if -Xmx is larger? Does -Xmx have an effect on the load factor?
In the phase of the program where a lot of mini spikes exist (see for example B at 12:06), instead of a few large ones (A), some Java streams are processed. Does the memory allocation for stream processing automatically adapt to the -Xmx value? (There is still some memory left that could be used to produce fewer mini spikes at 12:06 in B.)
If not, what might be the reasons for the larger used heap in A?
How can I tell Java to keep the used heap low if possible (like in the curves for B), but to take more memory if an out-of-memory exception would otherwise occur (i.e., temporarily switch to A)? Could this be done by tuning some garbage collection properties?
Edit
As stated by the answer below, the profile can be altered by garbage collection parameters. Applying -Xmx100000m -XX:MaxGCPauseMillis=1000 adapts the profile from A to consume less memory (~ 20 GB used) and more time (~ 22 min).
I would like to limit the heap size to the size that is actually required to run the program without memory exceptions.
You do not actually want to do that, because it would make your program extremely slow: providing only the amount equivalent to the application's peak footprint means that every single allocation made while the application is near that maximum would trigger a garbage collection.
I guess that Java works this way in order to improve performance.
Indeed.
The JVM has several goals, in descending order:
pause times (latency)
allocation throughput
footprint
If you want to prioritize footprint over other goals you have to relax the other ones.
set -XX:MaxGCPauseMillis=18446744073709551615; this is the default for the parallel collector, but G1 has a 200 ms default.
configure it to keep less breathing room (e.g., a lower -XX:MaxHeapFreeRatio)
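To watch the footprint goal in action, you can observe the committed size respond to a full collection; a rough sketch (class name and sizes are illustrative), meant to be run with e.g. -XX:MaxHeapFreeRatio=30:

```java
public class ShrinkDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Allocate ~64 MB in 8 MB chunks, then drop all references.
        byte[][] blocks = new byte[8][];
        for (int i = 0; i < blocks.length; i++) {
            blocks[i] = new byte[8 * 1024 * 1024];
        }
        System.out.println("Committed after alloc: " + (rt.totalMemory() >> 20) + " MB");
        blocks = null;
        // A full GC gives collectors that honor MaxHeapFreeRatio a chance
        // to uncommit memory and shrink the "Size" curve toward the live set.
        System.gc();
        System.out.println("Committed after GC:    " + (rt.totalMemory() >> 20) + " MB");
    }
}
```

Whether the second number actually drops depends on the collector; not every collector returns memory to the OS eagerly.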
I'm facing a heap-space OutOfMemoryError in my Mapper-side cleanup method, where I'm reading the data from an InputStream and converting it into a byte array using IOUtils.toByteArray(inputStream);
I know I can resolve it by increasing the max heap space (-Xmx), but I should already have enough heap space (1 GB). I found the below info while debugging (approximate values):
runtime.maxMemory() - 1024 MB
runtime.totalMemory() - 700 MB
runtime.freeMemory() - 200 MB
My block size is 128 MB and I'm not adding any additional data to it in my RecordReader. My output size from the mapper won't be more than 128 MB.
I also checked the available bytes in the inputStream (.available()), which reported approximately 128 MB.
I'm also a bit confused about the JVM's memory allocation. Let's say I set my heap as -Xms128m and -Xmx1024m. My TaskTracker has 16 GB RAM, and 8 jobs (8 JVMs) are already running on it. Let's assume the TaskTracker can allocate only 8.5 GB RAM for JVMs and uses the rest for its internal purposes. So we have 8.5 GB RAM available, and the 8 running tasks currently use only 6 GB. Is it possible for a new task to be assigned to the same TaskTracker, even though the 8 running tasks might eventually require 8 GB, in which case the new task would not be able to get its requested heap size (1 GB) if it needs it?
PS: I know that not all of the heap needs to be in RAM (paging). My main question is: will the user be able to get the maximum requested heap size in all scenarios?
While reading some notes on performance tuning, I did notice a recommendations while setting memory size:
Java applications should size both the initial and maximum permanent generation to the same value, since growing or contracting the permanent generation space requires a full GC. A similar suggestion is given for the heap size, i.e. -Xmx = -Xms.
My question is, then why do we have -Xms setting at all?
Also,
Why does GC get triggered more often when I have different values for -Xmx and -Xms, and not when I have the same size for -Xmx and -Xms?
To add more to my second question: if I start with a minimum heap size of 64 MB and a max of 512 MB, I believe a full GC will not get triggered unless the memory used by my app reaches 512 MB.
Similarly, if I start with 512 MB for both -Xmx and -Xms, the JVM will still trigger a full GC when my app's memory use reaches that limit. So why is it advised to set both max and min to the same value?
The setting flags were designed before the VM had generational, incremental collection. Back then, complete collections were all there were. In more modern collectors, full collections are rare. That's good, because incremental collections normally take a few milliseconds, so the UI experience doesn't change, while complete collections of big arenas can take several seconds or more. Changing the arena size - as the document says - is guaranteed to cause a full collection every time.
The guidance isn't perfect 100% of the time. There are a few kinds of apps where allowing the arena to grow is reasonable.
-Xms64m -Xmx512m does not mean "start up with a heap between 64 and 512 MB". It instructs the JVM to request 64 MB of committed memory and 512 MB of reserved memory at startup. The heap will be 64 MB to start with and, as it fills up, will expand into the space reserved for it. So, with an -Xms of 64 MB you would see a collection when the heap fills to 64 MB, before it expands.
If you start your application with a low value for -Xms and turn on GC logging (-verbose:gc -Xloggc:FILENAME), the log file will show how the heap and generation sizes change as the application runs.
Minor collections may be more frequent with a lower -Xms because the new generation will be smaller (assuming you are using proportional generation sizing rather than explicit sizing) and so will fill more quickly.
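The same effect can be observed programmatically; a small sketch (class name is mine, using only the standard java.lang.management API) that churns short-lived garbage and prints per-collector counts, which with a low -Xms would typically show more young-generation collections:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCounts {
    public static void main(String[] args) {
        // Allocate short-lived garbage to provoke some minor collections.
        long checksum = 0;
        for (int i = 0; i < 200_000; i++) {
            byte[] junk = new byte[1024];
            checksum += junk.length; // keep the allocation from being optimized away
        }
        System.out.println("Allocated " + (checksum >> 20) + " MB of garbage");
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + ": " + gc.getCollectionCount() + " collections");
        }
    }
}
```

Running it once with -Xms16m and once with -Xms512m (same -Xmx) should make the difference in minor-collection counts visible.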
One reason to have -Xms < -Xmx is to allow the JVM not to pre-allocate the whole -Xmx upfront, so that the difference is available (for a while, perhaps) to other applications. Gory details here: http://www.ibm.com/developerworks/library/j-memusage/