Suppose the maximum size of a JVM heap is 2GB (-Xmx2048m -Xms100m), and after the application finishes we find that the peak used heap was 1GB and the peak committed heap was 1.2GB. So, my question is whether the free space (2GB - 1.2GB) can be consumed by other applications while the JVM is running.
I think the free space cannot be used by others, but I'm currently not sure: does the operating system reserve the full 2GB before the JVM runs, so that the reserved space cannot be consumed by other applications even though the JVM never uses it all?
The JVM checks whether the OS has enough address space for -Xmx, but the OS won't actually allocate the memory until the JVM requests it. The JVM only reserves -Xms worth of memory up front, and can extend the heap up to -Xmx, provided that much memory is available.
Runtime.getRuntime().totalMemory() will return the current size of the heap, which will not exceed the maximum size specified on the command line.
That is approximately the amount of memory assigned to the JVM (not including non-heap JVM memory) by the operating system. Other memory is free for use by other applications.
Of course that is grossly oversimplified -- total system memory is total physical memory + total available swap, with other complications (e.g. Linux makes promises of memory to processes but doesn't actually commit it to that process unless it is touched, also simplified). In any case though, the short answer is: Yes, you specify a maximum size on the command line, but the current size is what is allocated to the JVM; the rest is available for other applications.
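The Runtime calls mentioned above can be seen in a short, self-contained sketch (the class name HeapSizes is mine, for illustration): maxMemory() corresponds to the -Xmx ceiling, totalMemory() to the heap currently claimed from the OS, and freeMemory() to the unused part of that current heap.

```java
// Inspect the JVM's own view of its heap via java.lang.Runtime.
public class HeapSizes {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();           // upper bound the heap may grow to (-Xmx)
        long total = rt.totalMemory();       // current heap size claimed from the OS
        long used = total - rt.freeMemory(); // currently occupied by objects
        System.out.printf("max=%d MB, total=%d MB, used=%d MB%n",
                max >> 20, total >> 20, used >> 20);
    }
}
```

Whatever flags you start with, `total` never exceeds `max`; the gap between the two is exactly the space the answer above says remains available to other applications.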
The memory seen by the Java process is virtual memory. The operating system does not need to actually reserve 2GB of physical memory for the Java process.
Related
This question is about garbage collection behavior when a request needs more memory than is allocated to the Pod. If the GC is not able to free memory, will it continue to run continuously, or throw an out-of-memory error?
One pod contains a Java-based app and another contains a PHP-based one. In the Java case, the -Xmx value is the same as the pod limit.
I can only talk about Java GC. (PHP's GC behavior will be different.)
If the GC is not able to free memory, will it continue to run GC continuously at regular intervals, or throw an out-of-memory error?
It depends on the JVM options.
A JVM starts with an initial size for the heap and will expand it as required. However, it will only expand the heap up to a fixed maximum size. That maximum size is determined when the JVM starts from either an option (-Xmx) or default heap size rules. It can't be changed after startup.
As the used heap space gets close to the limit, GC is likely to occur more and more frequently. The default behavior on a modern JVM is to monitor the percentage of time spent doing garbage collection. If it exceeds a (configurable) threshold, you will get an OOME with a message saying the GC overhead limit has been exceeded. This can happen even if there is enough space to "limp along" for a bit longer.
You can turn off the GC overhead limit (-XX:-UseGCOverheadLimit), but it is inadvisable.
The JVM will also throw an OOME if it simply doesn't have enough heap space after doing a full garbage collection.
Finally, a JVM will throw an OOME if it tries to grow the heap and the OS refuses to give it the memory it requested. This could happen because:
the OS has run out of RAM
the OS has run out of swap space
the process has exceeded a ulimit, or
the process group (container) has exceeded a container limit.
The JVM is only marginally aware of the memory available in its environment. On a bare-metal OS or a VM under a hypervisor, the default heap size depends on the amount of RAM. On bare metal, that is physical RAM; on a VM, it is whatever the guest OS sees as its physical memory.
With Kubernetes, the memory available to an application is likely to be further limited by cgroups or similar. I understand that recent Java releases (JDK 10 and later, with backports to 8u191) are container-aware; this means they can use the cgroup memory limits rather than the physical memory size when calculating a default heap size.
Imagine I have a 64-bit machine with a total amount of memory (physical + virtual) equal to 6 GB. Now, what will happen when I run 5 applications with -Xmx2048m at the same time? They won't fail on startup, since this is a 64-bit OS, but what will happen when they all need to use the 2GB of memory I've set for them?
Question: Is it possible that there will be some memory leaks or something? What will happen? What are possible consequences of doing that?
I've read these questions: this and this, but they don't actually answer mine, since I want to run multiple applications, none of which exceeds the memory limit on its own, but which do all together.
What you will experience is increased swapping during major garbage collections and thus increased GC pause-times.
When used memory exceeds physical memory (well, in fact even before that) modern OSs will write some of the less-used memory to disk. With a JVM this will most probably be some parts of the heap's tenured generation. When a major GC occurs it will have to touch all of the heap so it has to swap back in all of the pages it has offloaded to disk resulting in heavy IO-activity and increased CPU-load.
With multiple JVMs that have few major GCs, this might work out with only slightly increased pause times, since a single JVM's heap should easily fit into physical memory. But with one JVM's heap exceeding physical memory, or simultaneous major GCs from several JVMs, the result can be lots of swapping of memory pages in and out ("thrashing"), and GCs might take a very long time (up to several minutes).
Xmx merely reserves virtual address space, and it is virtual, not physical.
How the OS responds to this setting varies:
Windows: the limit is defined by swap + physical memory. The exception is large pages (which need to be enabled with a group policy setting), which are limited by physical RAM.
Linux: the behavior is more complicated. It depends on vm.overcommit_memory and related sysctls, and on various flags passed to the mmap syscall, which can (to some extent) be controlled by JVM configuration flags. The behavior can range from:
- Xms can exceed total RAM + swap, to
- Xmx is capped by available physical RAM.
I know that the -Xms flag tells the JVM process the initial amount of memory to allocate for its heap. And regarding the performance of a Java application, it is often recommended to set the same value for both -Xms and -Xmx when starting the application, e.g. -Xms2048M -Xmx2048M.
I'm curious whether the -Xms and -Xmx flags mean that the JVM process reserves the specified amount of memory, preventing other processes on the same machine from using it.
Is this right?
Xmx merely reserves virtual address space.
Xms actually allocates (commits) it but does not necessarily prefault it.
How operating systems respond to allocations varies.
Windows does allow you to reserve very large chunks of address space (Xmx) but will not allow overcommit (Xms). The limit is defined by swap + physical memory. The exception is large pages (which need to be enabled with a group policy setting), which are limited by physical RAM.
Linux behavior is more complicated, it depends on the vm.overcommit_memory and related sysctls and various flags passed to the mmap syscall, which to some extent can be controlled by JVM configuration flags. The behavior can range from a) Xms can exceed total ram + swap to b) Xmx is capped by available physical ram.
Short answer: Depends on the OS, though it's definitely a NO in all popular operating systems.
I'll take the example of Linux's memory allocation terminology here.
-Xms and -Xmx specify the initial and maximum size of the JVM heap. These sizes are VIRTUAL MEMORY allocations, parts of which may be mapped to physical pages in RAM (the RESIDENT SIZE of the process) at any given time.
When the JVM starts, it allocates -Xms worth of virtual memory. This gets mapped to resident (physical) memory as you dynamically create objects on the heap. Mapping does not require the JVM to request any new allocation from the OS, but it does increase your RAM utilization, because those virtual pages now have corresponding physical pages. However, once your process has consumed its whole -Xms allocation and tries to create more objects on the heap, the JVM has to request more virtual memory from the OS, which again may or may not be mapped to physical memory later, depending on when you need it. The limit for this is your -Xmx allocation.
Note that this is all possible because memory in Linux is demand-paged. So even if a process allocates memory beforehand, what it gets is virtual memory: an addressable, contiguous, fictional allocation that may or may not be mapped to real physical pages depending on demand. Read this answer for a short description of how memory management works in popular operating systems. Here is much more detailed (slightly outdated but very useful) information on how Linux's memory management works.
Also note that these flags only affect heap sizes. The resident memory size you observe will be larger than the current JVM heap size. More specifically, the memory consumed by a JVM equals its HEAP SIZE plus non-heap memory: thread stacks, metaspace, direct/native buffer allocations, and so on.
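The heap/non-heap split can be observed directly through the standard java.lang.management API. A minimal sketch (the class name HeapVsNonHeap is mine, for illustration):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// MemoryMXBean separates the -Xms/-Xmx-governed heap from non-heap memory
// (metaspace, code cache, ...), which is one reason the process's resident
// size exceeds the heap alone.
public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();       // bounded by -Xmx
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage(); // not bounded by -Xmx
        System.out.printf("heap:     used=%d committed=%d max=%d%n",
                heap.getUsed(), heap.getCommitted(), heap.getMax());
        System.out.printf("non-heap: used=%d committed=%d%n",
                nonHeap.getUsed(), nonHeap.getCommitted());
    }
}
```

Even in a trivial program, the non-heap figures are non-zero: the JVM needs memory for its own class metadata and compiled code regardless of heap settings.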
Does the JVM process make a reservation for the specific amount of memory?
Yes, the JVM reserves the memory specified by Xms at startup and may reserve up to Xmx, but the reservation need not be in physical memory; it can also be in swap. The JVM's pages will be swapped in and out of memory as needed.
Why is it recommended to have same value for Xms and Xmx?
Note: Setting Xms equal to Xmx is generally recommended for production systems where the machine is dedicated to a single application (or there aren't many applications competing for system resources). It does not generalize to every situation.
Avoids heap resizing:
The JVM starts with the heap size specified by the Xms value. When that heap is exhausted by object allocation, the JVM grows the heap. Each time it increases the heap size, it must ask the operating system for additional memory. This is a time-consuming operation and results in increased GC pause times and, in turn, increased response times for requests.
Application behaviour in the long run:
Even though I cannot generalize, many applications eventually grow to the maximum heap value over the long run. This is another reason to start with the maximum memory instead of growing the heap over time and paying the unnecessary overhead of heap resizes. It amounts to asking the application to take at startup the memory it will eventually take anyway.
Number of GCs:
Starting with a small heap results in more frequent garbage collection; a bigger heap reduces the number of GCs, because more memory is available for object allocation. However, it must be noted that bigger heaps can increase individual GC pause times. This is an advantage only if your garbage collection has been tuned properly and pause times don't increase significantly with heap size.
One more reason for doing this: servers generally come with large amounts of memory, so why not use the resources available?
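The resize cost described above can be observed from inside the JVM with Runtime.totalMemory(). A hedged sketch (the class name HeapGrowth is mine; whether the committed size actually grows depends on your -Xms/-Xmx settings and collector):

```java
// Keep ~64 MB of allocations strongly reachable so the collector cannot
// discard them. If the JVM started with a small -Xms it must grow the
// committed heap (an OS request) to satisfy this; with -Xms == -Xmx the
// committed size would not change.
public class HeapGrowth {
    static byte[][] hold = new byte[64][]; // strong references, ~64 MB total

    public static long grow() {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory();
        for (int i = 0; i < hold.length; i++) {
            hold[i] = new byte[1 << 20]; // 1 MB chunk
        }
        long after = rt.totalMemory();
        System.out.printf("committed heap: %d MB -> %d MB%n",
                before >> 20, after >> 20);
        return after;
    }

    public static void main(String[] args) {
        grow();
    }
}
```

Run with, say, `-Xms16m -Xmx256m` and the second figure is typically larger than the first; with `-Xms256m -Xmx256m` the two match, which is exactly the resize overhead the recommendation avoids.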
I have set the default memory limit of Java Virtual Machine while running Java Application like this...
java -mx128m ClassName
I know this will set the maximum memory allocation pool to 128MB, but I don't know what the benefit is of specifying this memory limit for the JVM.
Please enlighten me in this issue...
On Sun's 1.6 JVM, on a server-class machine (meaning one with 2 CPUs and at least 2GB of physical memory) the default maximum heap size is the smaller of 1/4th of the physical memory or 1GB. Using -Xmx lets you change that.
Why would you want to limit the amount of memory Java uses? Two reasons.
Firstly, Java's automatic memory management tends to grab as much memory from the operating system as possible, and then manage it for the benefit of the program. If you are running other programs on the same machine as your Java program, then it will grab more than its fair share of memory, putting pressure on them. If you are running multiple copies of your Java program, they will compete with each other, and you may end up with some instances being starved of memory. Putting a cap on the heap size lets you manage this - if you have 32 GB of RAM, and are running four processes, you can limit each heap to about 8 GB (a bit less would be better), and be confident they will all get the right amount of memory.
Secondly (another aspect of the first, really), if a process grabs more memory than the operating system can supply from physical memory, it uses virtual memory, which gets paged out to disk. This is very slow. Java can reduce its memory usage by making its garbage collector work harder. This is also slow - but not as slow as going to disk. So, you can limit the heap size to avoid the Java process being paged, and so improve performance.
There will be a default heap size limit defined for the JVM. This setting lets you override it, usually so that you can specify that you want more memory to be allocated to the java process.
This sets the maximum heap size; the total VM memory footprint may be larger.
There is always a limit because this parameter has a default value (at least for the Oracle/Sun VM)
So the benefit is one of two things: you can give the application the memory it actually needs in order to work efficiently, or, coming from the other direction, you can (somewhat) limit the maximum memory used in order to manage the distribution of resources among different applications on one machine.
There has already been a question on SO about Java and memory: Java memory explained.
A very nice article about Java memory is found here. It gives an overview of the memory, how it is used, how it is cleaned and how it can be measured.
The memory defaults are (prior to Java 6):

-Xms (size in bytes) — Sets the initial size of the Java heap. The default size is 2097152 (2MB). The values must be a multiple of, and greater than, 1024 bytes (1KB). (The -server flag increases the default size to 32M.)

-Xmn (size in bytes) — Sets the initial Java heap size for the Eden generation. The default value is 640K. (The -server flag increases the default size to 2M.)

-Xmx (size in bytes) — Sets the maximum size to which the Java heap can grow. The default size is 64M. (The -server flag increases the default size to 128M.) The maximum heap limit is about 2 GB (2048MB).
Another source (here) states that in Java 6 the default heap size depends on the amount of system memory.
I assume this helps avoid high memory consumption (due to bugs, or due to many allocations and deallocations). You would use it when designing for a low-memory system (such as an old computer with little RAM, a mobile phone, etc.).
Alternatively, use it to increase the default memory limit, if it is not enough for you and you are getting OutOfMemoryErrors during normal behavior.
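If you want to see what an OutOfMemoryError looks like without restarting with a tiny -Xmx, one HotSpot-specific trick (an assumption here; other JVMs may report the failure differently) is to request an array above the VM's array-size limit, which fails fast instead of slowly exhausting the heap. The class name OomDemo is mine, for illustration:

```java
public class OomDemo {
    public static boolean triggersOom() {
        try {
            // ~8 GB request, above HotSpot's maximum array length, so it
            // fails immediately with OutOfMemoryError rather than actually
            // allocating memory (assumed HotSpot behavior).
            int[] tooBig = new int[Integer.MAX_VALUE];
            return tooBig.length < 0; // not reached in practice
        } catch (OutOfMemoryError e) {
            System.out.println("caught: " + e.getMessage());
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("OutOfMemoryError caught: " + triggersOom());
    }
}
```

Note that OutOfMemoryError is an Error, not an Exception; catching it like this is fine for a demonstration, but recovering from it in production code is rarely safe.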
My complete Linux box just crashed with OOM (the OOM killer killed the wrong processes) because a Java application consumed too much memory and there was no memory left.
My question is: if I use the JVM parameter -Xmx, does this limit Java to using no more memory than the -Xmx option specifies? Put differently, if I do NOT specify -Xmx, might Java allocate more and more memory, with the result that my Linux box crashes itself with OOM?
Thank you very much!
Jens
The default maximum for Java 6 is 1/4 of the main memory. This can mean the total virtual memory of your applications can exceed the main memory and swap space.
Given the cost of memory (8 GB costs less than £40) you should buy more memory. However, an alternative is to use less memory or increase the swap space, so you are less likely to run out.
There's a default maximum heap size (used to be 64M, I think it's 128M now.) The -Xmx parameter changes that maximum size. Oracle's JVMs will never allocate a larger heap than specified in that parameter.
That's not to say that -Xmx gives the total amount of RAM used by the JVM; it'll actually use more than that. Some is for the executable code of the JVM implementation itself; there's also memory used for the "permgen" area, and possibly memory-mapped buffers for other purposes. But Oracle's JVMs, in any event, will not grow their RAM usage without bound; there's always an upper limit.
Now, why doesn't your Linux box have more swap space? It's cheap, and it would prevent this sort of thing from happening in the first place.