We have a desktop Java Swing application. Before shipping it, we need to specify the minimum memory requirements for deploying the application. In the JVM parameters we specify 2 GB as the maximum heap size.
Is there any tool for a Windows-based machine which can quantify the requirements?
Also, as follow-up question, I would like to know: If we do not specify the max heap size in Java 7, does the JVM still automatically adjust the heap size on the fly before throwing an OutOfMemoryError?
Possible approach:
If you specify that your product works with at most 2 GB of heap, you also have to consider the other kinds of memory allocated within the Java virtual machine:
To find out your memory consumption, I suggest testing your application with MemoryMXBean, which provides methods such as getHeapMemoryUsage() and getNonHeapMemoryUsage().
Then stress-test your application and check these values periodically. That way you should get a feeling for how much memory your application consumes.
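A minimal sketch of such a probe (the class name, the endless loop, and the sampling interval are just illustrative; in practice you would run this alongside your stress test):
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryProbe {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            MemoryUsage nonHeap = memoryBean.getNonHeapMemoryUsage();
            // getUsed() returns bytes; print MB for readability
            System.out.printf("heap used: %d MB, non-heap used: %d MB%n",
                    heap.getUsed() / (1024 * 1024),
                    nonHeap.getUsed() / (1024 * 1024));
            Thread.sleep(5000); // sample every 5 seconds during the stress test
        }
    }
}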
In addition to that, Microsoft specifies 2 GB as the minimum RAM for (64-bit) Windows 10.
So your final minimum requirement should be: Minimum = MaximumHeap (2 GB) + StressTestNonHeap (?) + WindowsMinimum (2 GB) + SomeSafetyMargin (~1 GB).
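As a purely hypothetical worked instance: if the stress test were to show about 0.5 GB of non-heap usage, that would give 2 GB + 0.5 GB + 2 GB + 1 GB = 5.5 GB, so you would state something like 6 GB of RAM as the minimum requirement.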
Further approaches:
You could also use VisualVM to check your memory consumption.
Another possibility is to use Java HotSpot Native Memory Tracking (NMT), for which I posted an example on Stack Overflow.
Anything that also informs you about non-heap memory usage is applicable.
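For reference, NMT is switched on with a startup flag and queried with jcmd while the process is running (the jar name and the pid are placeholders, and NMT needs a reasonably recent HotSpot JDK):
java -XX:NativeMemoryTracking=summary -Xmx2g -jar yourapp.jar
jcmd <pid> VM.native_memory summary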
Max heap limits:
Regarding your question
Also, on another note, I just wanted to know: if we do not specify the max heap limit with Java 7, does the JVM automatically allocate heap on the fly to adjust before throwing an out-of-memory error?
If you do not specify the max heap size, the JVM will set it automatically depending on the used GC (in Java 7 this should be UseParallelOldGC) and your system. To test this, run java -XX:+PrintVMOptions -XX:+AggressiveOpts -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -version and check what values are set for MaxHeapSize and UseParallelOldGC.
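On Windows you can filter that rather long output down to the interesting flags, for example:
java -XX:+PrintFlagsFinal -version | findstr /i "MaxHeapSize UseParallelOldGC"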
GC considerations:
Also: you probably want to consider using the garbage-first (G1) GC, which will be the default GC in Java 9. In this question I show that the G1 GC also re-shrinks the heap if it thinks this is practical. This may be useful if your application has memory-intensive and non-memory-intensive parts. This way, the heap may shrink during the non-memory-intensive parts, which most probably won't happen with the ParallelOldGC.
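If you want to experiment with it, G1 can be enabled explicitly on Java 7/8 with a flag, optionally combined with a pause-time goal (the jar name and the values below are only illustrative):
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xmx2g -jar yourapp.jar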
When you run the JVM without a maximum heap size, the server JVM uses 1/4 of main memory, up to 32 GB. If you use the 32-bit Windows client VM, it uses 64 MB or 128 MB.
The best way to determine the required memory consumption is to test your application with different memory sizes. The minimum memory is the lowest memory size you are willing to support, and only you know what you are comfortable supporting.
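A quick way to see which maximum heap size the JVM actually picked on a given machine (standard java.lang.Runtime API; the class name is arbitrary):
public class MaxHeap {
    public static void main(String[] args) {
        // maxMemory() reports the effective -Xmx (default or explicit) in bytes
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}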
Related
I have read a couple of articles on Java heap space and found out that the default max heap for the JVM is 1/4 of the actual physical memory. But none of the articles gave a reason for this.
What's the reason for having it as 1/4 of the actual memory?
https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gc-ergonomics.html
This dates back to JDK 5, which introduced JVM ergonomics. Prior to this, the JVM would set very small defaults for the heap space. JDK 1.1 had a default of 16 MB for both Xms and Xmx; JDK 1.2 changed this to an Xms of 1 MB and an Xmx of 64 MB by default. In JDK 1.3, the Xms default increased to 2 MB.
Since Java was proving more popular on servers and memory capacities were increasing significantly, Sun introduced the concept of a server-class machine in JDK 5. This is one that has 2 or more physical processors and 2 GB or more of memory (if I remember rightly, in JDK 5 the machine also had to not be running Windows to count as a server).
On server-class machines by default, the following parameters were set
Throughput garbage collector (i.e. the parallel collector)
initial heap size of 1/64 of physical memory up to 1Gbyte
maximum heap size of 1/4 of physical memory up to 1Gbyte
Server runtime compiler
Ergonomics provided two command-line flags that allowed a user to set a performance goal for the JVM; the idea being that the JVM would then figure out internally how to achieve this goal by modifying its parameters. The ultimate goal was to eliminate a lot of the -XX flags that were being used to tune JVM performance manually.
The parameters are:
-XX:MaxGCPauseMillis=nnn which sets the maximum pause time you want for GC in milliseconds.
-XX:GCTimeRatio=nnn which sets the ratio of garbage collection time to application time to 1 / (1 + nnn). This was referred to as the throughput goal.
You can specify either of these goals or both. If the JVM manages to achieve both of these goals it then attempts to reduce the memory being used (the footprint goal).
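For example (values purely illustrative, jar name a placeholder), both goals can be set together on the command line; a GCTimeRatio of 19 asks the JVM to keep GC below 1 / (1 + 19) = 5% of total time:
java -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 -jar yourapp.jar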
There's more detail here:
https://www.oracle.com/technetwork/java/ergo5-140223.html
Why does Java not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
Is it just a policy made by the JVM developers, or is it an advantage of the .NET CLR's architecture over the JVM's? In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
Thanks
EDIT: I really think this is a bad design choice for Java. It is not safe to set Xmx as high as possible (e.g. 100 GB!). If a user needs to run my code on bigger data, they may run it on a system with more available RAM. Why should I, as the developer, have to set the maximum memory available to my program? I do not know how big the data will be!
The JVM increases the heap size when it needs to, up to the maximum heap size you set. It doesn't take all the memory, as it would have to preallocate this on startup and you might want to use some memory for something else, like thread stacks, shared libraries, off-heap memory, etc.
Why does Java not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
If you set the maximum heap size large enough, or use off-heap memory, it will. It just won't do this by default. One reason is that heap memory has to be in main memory and cannot be swapped out without killing the performance of your machine (if not killing the machine outright). This is not true of C programs, and expanding that far is worse than failing to expand.
If you have a JVM with a heap size of 10% more than main memory and you use that much, as soon as you perform a GC, which has to touch every page more than once, you are likely to find you need to power cycle the box.
Linux has a process killer for when resources run out, and if this doesn't trigger you might be lucky enough to be able to restart.
Is it just a policy made by the JVM developers, or is it an advantage of the .NET CLR's architecture over the JVM's?
A key feature of the JVM is that it is platform independent, so it applies its own controls. A JVM running at the limit of your process space is likely to prevent your machine from working (through heavy swapping). I don't know how .NET avoids this from happening.
In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
As I said, it does this already; it's just not a good idea to allow it to use too much memory.
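A small sketch that shows this behaviour (allocation sizes and the class name are arbitrary; run it with something like -Xms32m -Xmx512m and watch the committed heap grow toward the maximum):
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            chunks.add(new byte[10 * 1024 * 1024]); // hold on to 10 MB per step
            // totalMemory() is what the JVM has committed so far, maxMemory() is the ceiling
            System.out.printf("committed: %d MB, max: %d MB%n",
                    rt.totalMemory() / (1024 * 1024),
                    rt.maxMemory() / (1024 * 1024));
        }
    }
}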
It is the developer's decision how much heap memory should be allowed for the Java process. It is based on various factors such as the project design, the platform on which it is going to run, etc.
We can set the heap size properties:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
As you can see, we set the initial heap size, and if the JVM later finds that more is needed it can increase the heap size up to the specified maximum limit. In fact, the size typically changes when a GC runs (though that is not guaranteed). I posted a question on similar grounds; you can refer to it. So increasing/decreasing the heap size is done by the JVM. All we have to do as developers is specify the limits based on our requirements.
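For the Swing application from the original question, that might look like this (the jar name is a placeholder; 2048m matches the 2 GB maximum mentioned above, the other sizes are only illustrative):
java -Xms512m -Xmx2048m -Xss1m -jar yourapp.jar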
java -Xms is apparently not having an effect on the amount of memory the Java process consumes during a run.
I have an app that consumes about 1Gb from the system point of view. I tried setting -Xms2048m (and -Xmx4096m) and I see absolutely no change in memory consumption.
The hotspot docs claim the heap size is bounded below by the Xms value or the default.
The only thing I can think of is maybe the process cannot grab a contiguous block of memory, so it grabbed all it could and then will allocate more later, or maybe windows is not letting it have that much memory to start with. (64-bit windows 7)
(I don't need this for anything, it is just something curious I noticed)
The default memory usage that Windows Task Manager shows you is not what's allocated in the process's virtual memory space. It's how much of that virtual space the process has actually written to and which therefore had to be mapped onto real memory. If you enable the 'Commit Size' column in Task Manager, it will show what is actually considered "used" from the perspective of your process's virtual address space (roughly Xms + perm size + the size of the VM and system overhead itself).
For Java 1, try -ms and -mx.
Since Java 2 you can use -Xms and -Xmx.
In my experience, -ms and -mx also work in Java 2. See http://www.devx.com/tips/Tip/5578
The JVM needs a contiguous region of memory for the heap. This means it allocates the maximum size as virtual memory on startup. This is not as bad as it sounds, as the OS only assigns main memory to the application as it is actually used (not when the virtual memory is allocated).
If you look at the amount of memory used in a tool like VisualVM, you can find that even with an overhead of 150 - 500 MB, the size is less than the minimum size. This is because Java doesn't just use the minimum size if it has no use for it.
Instead, the minimum size is the point below which it makes only minor attempts to clean up memory (you may see it perform minor GCs). In most cases this means the application will reach the minimum size very quickly. However, a "hello world" program will not use the minimum size.
maybe windows is not letting it have that much memory to start with
The JVM will fail to start if it cannot allocate the maximum size as a contiguous block. (This was a common problem on 32-bit Windows, where the limit could be 1.5 GB or as low as 1.2 GB.)
I have set the memory limit of the Java Virtual Machine while running a Java application like this...
java -mx128m ClassName
I know this will set the maximum memory allocation pool to 128 MB, but I don't know what the benefit of specifying this JVM memory limit is.
Please enlighten me on this issue...
On Sun's 1.6 JVM, on a server-class machine (meaning one with 2 CPUs and at least 2GB of physical memory) the default maximum heap size is the smaller of 1/4th of the physical memory or 1GB. Using -Xmx lets you change that.
Why would you want to limit the amount of memory Java uses? Two reasons.
Firstly, Java's automatic memory management tends to grab as much memory from the operating system as possible, and then manage it for the benefit of the program. If you are running other programs on the same machine as your Java program, then it will grab more than its fair share of memory, putting pressure on them. If you are running multiple copies of your Java program, they will compete with each other, and you may end up with some instances being starved of memory. Putting a cap on the heap size lets you manage this - if you have 32 GB of RAM, and are running four processes, you can limit each heap to about 8 GB (a bit less would be better), and be confident they will all get the right amount of memory.
Secondly (another aspect of the first, really), if a process grabs more memory than the operating system can supply from physical memory, it uses virtual memory, which gets paged out to disk. This is very slow. Java can reduce its memory usage by making its garbage collector work harder. This is also slow - but not as slow as going to disk. So, you can limit the heap size to avoid the Java process being paged, and so improve performance.
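To make the first point concrete (jar names are placeholders): with 32 GB of RAM and four instances you might cap each one at 7 GB, since 4 x 7 GB = 28 GB of heap leaves roughly 4 GB for non-heap overhead, the OS, and everything else:
java -Xmx7g -jar instance1.jar
java -Xmx7g -jar instance2.jar
java -Xmx7g -jar instance3.jar
java -Xmx7g -jar instance4.jar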
There will be a default heap size limit defined for the JVM. This setting lets you override it, usually so that you can specify that you want more memory to be allocated to the java process.
This sets the maximum heap size. The total memory used by the VM might be larger.
There is always a limit, because this parameter has a default value (at least for the Oracle/Sun VM).
So the benefit is one of two things: you can give the app the memory it actually needs in order to work (efficiently), or, coming from the other direction, you can (somewhat) limit the maximum memory used in order to manage the distribution of resources among the different applications on one machine.
There has already been a question about Java and memory on SO: Java memory explained.
A very nice article about Java memory is found here. It gives an overview of the memory, how it is used, how it is cleaned and how it can be measured.
The memory defaults are (prior to Java 6):
-Xms size in bytes: Sets the initial size of the Java heap. The default size is 2097152 (2 MB). The value must be a multiple of, and greater than, 1024 bytes (1 KB). (The -server flag increases the default size to 32 MB.)
-Xmn size in bytes: Sets the initial Java heap size for the Eden generation. The default value is 640 KB. (The -server flag increases the default size to 2 MB.)
-Xmx size in bytes: Sets the maximum size to which the Java heap can grow. The default size is 64 MB. (The -server flag increases the default size to 128 MB.) The maximum heap limit is about 2 GB (2048 MB).
Another source (here) states that in Java 6 the default heap size depends on the amount of system memory.
I assume this helps avoid excessive memory consumption (due to bugs, or due to many allocations and deallocations). You would use this if you design for a low-memory system (such as an old computer with a small amount of RAM, a mobile phone, etc.).
Alternatively, use this to increase the default memory limit if it is not enough for you and you are getting OutOfMemoryErrors during normal operation.
I have a system which cannot provide more than 1.5 GB for the Java process. Thus I need an exact way to specify the Java process settings, including all the kinds of memory inside Java and a possible fork.
One specific java process and system to illustrate my problem:
My current environment is java 1.6.0_18 under Ubuntu Linux 9.10.
I start a large Java server process with the following JVM options:
"-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
Now, "top" command reports that the process uses 1.6gb memory...
Questions:
1 - How is the maximal space used by the Java process calculated? Please provide an exact formula if possible.
(Something like: max heap + max perm + thread stacks + JVM overhead = maximal space.)
2 - What is the infamous fork behavior under Linux in my case? Will the forked JVM occupy an extra 1.6 GB (resulting in a total of 3.2 GB of used memory)?
3 - Which options must be used to absolutely ensure that no more than 1.5 GB is used at any time?
thank you
#rancidfishbreath: "ulimit" will ensure that Java cannot take more than the specified amount of memory. My purpose is to ensure that Java never even tries to do that.
top reports 1.6 GB because MaxPermSize is on top of the maximum heap size. In your case you set MaxPermSize to 512m and Xmx to 1024m, which amounts to 1536m. Just like in other languages, an absolutely precise number cannot be calculated unless you know precisely how many threads are started, how many file handles are used, etc. The stack size per thread depends on the OS and JDK version; in your case it is 1024k (if it is a 64-bit machine). So if you have 10 threads you use 10240k extra, as the stacks are not allocated from the heap (Xmx). Most applications that behave nicely work perfectly when setting a lower stack size and MaxPermSize. Try setting the ThreadStackSize to 128k, and if you get a StackOverflowError (i.e. if you do lots of deep recursion) you can increase it in small steps until the problem disappears.
So my answer is essentially that you cannot control down to the MB how much the Java process will use, but you come fairly close by setting e.g. -Xmx1024m, -XX:MaxPermSize=384m, -XX:ThreadStackSize=128k and -XX:+UseCompressedOops. Even if you have lots of threads you will still have plenty of headroom before you reach 1.5 GB. UseCompressedOops tells the VM to use narrow pointers even when running on a 64-bit JVM, thus saving some memory.
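Put together, a launch command along the lines described above might look like this (the jar name is a placeholder; -Xss128k is used here as the equivalent of a 128 KB thread stack):
java -Xmx1024m -XX:MaxPermSize=384m -Xss128k -XX:+UseCompressedOops -jar server.jar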
At a high level, the JVM address space is divided into three main parts:
kernel space: ~1 GB; this also depends on the platform, and on Windows it is more than 1 GB
Java heap: the Java heap specified by the user using -Xmx, -XX:MaxPermSize, etc.
the rest of the virtual address space goes to the JVM's native usage: the malloc/calloc allocations done by the JVM, and native thread stacks for the Java threads plus the additional JVM-internal threads (GC, etc.)
So you have roughly 2.75 GB to play with (4 GB minus 1-1.25 GB of kernel space), and you can set your Java/native heap accordingly. Generally, though, you should keep at least 500 MB for the JVM's native heap, otherwise there is a chance that you get a native OOM. So we need to make a trade-off here based on your application's Java heap utilization.
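A purely illustrative budget for a 32-bit process (the jar name is a placeholder, and the actual limit depends heavily on the OS and on address-space fragmentation, see the 1.2-1.5 GB Windows limit mentioned earlier):
4 GB address space - ~1.25 GB kernel space = ~2.75 GB usable
~2.75 GB - ~0.5 GB kept for the JVM's native heap = ~2.25 GB for -Xmx plus -XX:MaxPermSize
e.g. java -Xmx1792m -XX:MaxPermSize=256m -jar server.jar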