Is there a way to set heap size from a running Java program?
No.
What you can do with an app that has very variable heap requirements is to set your max heap size very high with -Xmx and tune -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio so that the app will not hang on to a lot of memory when the heap shrinks (it does that with default settings).
But note that this may cause performance problems when the memory actually used by the app varies both strongly and quickly - in that case you're better off having it hang on to all the memory rather than give it back to the OS only to claim it again a second later. You might also want to fiddle with the GC options to ensure that the GC doesn't leave too much unclaimed objects lying around, which it tends to do when there's a lot of room for the heap to grow, and which would defeat the goal of wanting the heap size to adjust to the app's needs.
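As a sketch of what such a command line might look like (the flag values here are illustrative assumptions, not tuned recommendations):

```shell
# Allow a large max heap, but encourage the JVM to give memory back:
# shrink the heap when more than 30% of it is free after a GC,
# and expand only when less than 10% is free.
java -Xmx4g -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=30 MyApp
```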
According to http://www.dreamincode.net/forums/showtopic96263.htm, you can't do this at runtime, but you can spawn another process with a different heap size.
You can tweak those settings when you start your application but once the JVM is up and running those values cannot be changed. Something like this:
java -Xms32m -Xmx512m FooBar
will set the minimum heap size to 32MB and the maximum heap size to 512MB. Once these are set, you cannot change them within the running program.
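While the limits can't be changed at runtime, a program can at least inspect them through the Runtime API (a minimal sketch):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx (or the JVM default); it never changes
        // while the JVM is running.
        System.out.println("max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
        // totalMemory() is the heap currently reserved; it can grow up to maxMemory().
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        // freeMemory() is the unused space within totalMemory().
        System.out.println("free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```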
The consensus may indeed be that this is not possible, but we should be looking at the JVM source to see how it can be controlled ergonomically. It would be very nice for a JVMTI agent to be able to adjust the heap/perm/tenured/new/etc. sizing online, at runtime.
What would this do? It would allow agents to infer sizing adjustments based upon performance or footprint goals, which will be important when moving JVMs into the cloud.
You can use the -mx option on startup (also known as -Xmx). Set it to the maximum size you should ever need; there is no reason to set it any higher than that.
However, a workaround is to have main() check the maximum heap size and, if it is not as desired, relaunch the JVM with different settings: start another Java process and let the current one exit.
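That workaround can be sketched as follows (the class name, the 512MB requirement, and the relaunch flags are all hypothetical choices for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class Relauncher {
    // Hypothetical requirement: at least 512 MB of max heap.
    static final long REQUIRED_MAX = 512L * 1024 * 1024;

    // Build the command line that relaunches the given main class with -Xmx512m.
    static List<String> relaunchCommand(String mainClass) {
        List<String> cmd = new ArrayList<>();
        cmd.add(System.getProperty("java.home") + "/bin/java");
        cmd.add("-Xmx512m");
        cmd.add("-cp");
        cmd.add(System.getProperty("java.class.path"));
        cmd.add(mainClass);
        return cmd;
    }

    public static void main(String[] args) throws Exception {
        if (Runtime.getRuntime().maxMemory() < REQUIRED_MAX) {
            // Start a fresh JVM with a bigger heap, then let this one die.
            new ProcessBuilder(relaunchCommand("Relauncher")).inheritIO().start();
            return;
        }
        // ... the real work happens here, with the desired heap limit ...
    }
}
```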
I asked myself the same question. And unlike the answers above, there is something I can do about increasing my app's max JVM heap size: if the app is a web server running in cluster mode, I can start a new instance with a changed min/max heap size and then shut down the initial instance. That is especially simple in GlassFish, where the management instance is separated from the nodeAgent (clustered app-server instance) JVM.
Since many JVM applications are web apps, I think this is worth preserving here.
If I understand your question correctly, you're trying to change the heap size at runtime. I don't see any reason why this should be possible. Set the heap size at startup using the -Xmx JVM option. I also advise you to set the -Xms option only if you absolutely need to. This option sets the initial amount of heap memory allocated to the JVM.
You should know how your application behaves in terms of memory. Set the value of -Xmx wisely. If your app is some kind of server app you can set a higher value; otherwise, balance your choice against other apps that may be running on client machines and, of course, the available memory.
Related
The JVM -Xmx argument lets one set the max heap size for the JVM to some value. But, is there a way to make that value dynamic? In other words, I want to tell the JVM "look, if you need it, just keep taking RAM from the system until the system is out."
Two-part reason for asking:
First, the app in question can use a really wide range of RAM depending on what the user is doing, so the conceptual min and max values are pretty far apart. Second, it would seem that the JVM reserves the max heap space from virtual memory at boot time. This particular app is run on a pretty wide variety of hardware, so picking a "one-size-fits-all" max heap space is hard: it has to be low enough to run on low-end hardware, but we'd really like to be able to take advantage of really beefy machines if they're available.
But, is there a way to make that value dynamic?
Literally, no. The max heap size is set at JVM launch time and cannot be increased.
In practice, you could just set the max heap size to as large as your platform will allow, and let the JVM grow the heap as it needs. There is an obvious risk in doing this; i.e. that your application will use all of the memory and cause the user's machine to grind to a halt. But that risk is implicit in your question.
EDIT
It is worth noting that there are various -XX... GC tuning options that allow you to tweak the way that the JVM expands the heap (up to the maximum).
Another possibility is to split your application into 2 parts. The first part of the application does all of the preparation necessary to determine the "size" of the problem. Then it works out an appropriate max heap size, and launches the memory hungry second part of the application in a new JVM.
This only works if the application can sensibly be partitioned as above.
This only works if it is possible to compute the problem size. In some cases, computing the problem size is tantamount to computing the result.
It is not clear that you will get better overall performance than if you just let the heap grow up to a maximum size.
It doesn't. It could, and it probably should:
-Xmx90% // 90% of physical memory
However, an implicit default of 100% is probably not a good idea.
A program written in a non-GC language manages its memory very diligently: it prunes any garbage as soon as possible. It makes sense to allow it any memory it requests, assuming it is responsible about prompt garbage disposal.
A GC language is different. It collects garbage only when necessary. As long as there's room, it doesn't care about garbage lingering around. If it could get all the memory it would like to have, it would get all the memory in the computer.
So a GC programmer doesn't have to worry about disposing of every piece of garbage any more, but he still has to have a general idea of the tolerable garbage/live-object ratio, and instruct the GC with -Xmx.
Basically, you can't adapt to various users' hardware using pure Java: that's when a little bit of shell/batch scripting can come in handy.
I do just that on OS X and Linux: I've got a little bash shell script that takes care of finding the correct JVM parameters depending on the hardware the application is run on and then calling the JVM.
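A minimal sketch of such a launcher script, which sizes -Xmx to roughly half of physical RAM (the jar name is a placeholder, the 50% ratio is an arbitrary choice, and the memory-probing commands are platform-specific):

```shell
#!/bin/sh
# Pick an -Xmx value based on the machine's physical memory.

case "$(uname -s)" in
  Darwin) total_kb=$(( $(sysctl -n hw.memsize) / 1024 )) ;;
  Linux)  total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo) ;;
  *)      total_kb=$(( 1024 * 1024 )) ;;   # unknown OS: assume 1 GB
esac

xmx_mb=$(( total_kb / 2048 ))   # half of physical RAM, in MB
echo "launching with -Xmx${xmx_mb}m"
# exec java -Xmx"${xmx_mb}m" -jar MyApp.jar "$@"
```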
Note that if you're providing a desktop Java application, then you may want to use something like izpack to provide your users an installer:
http://izpack.org
I don't know whether Java Web Start can be used to provide different JVM parameters depending on the user's config (probably not, and JWS is a poor fit anyway if you plan to provide a professional-looking desktop app).
There is a JDK Enhancement Proposal, JEP 8204088 "Dynamic Max Memory Limit":
https://bugs.openjdk.java.net/browse/JDK-8204088
http://openjdk.java.net/jeps/8204088
It suggests introducing CurrentMaxHeapSize:
To dynamically limit how large the committed memory (i.e. the heap size) can grow, a new dynamically user-defined variable is introduced: CurrentMaxHeapSize. This variable (defined in bytes) limits how large the heap can be expanded. It can be set at launch time and changed at runtime. Regardless of when it is defined, it must always have a value equal to or below MaxHeapSize (Xmx, the launch-time option that limits how large the heap can grow). Unlike MaxHeapSize, CurrentMaxHeapSize can be dynamically changed at runtime.

The expected usage is to set up the JVM with a very conservative Xmx value (which is shown to have a very small impact on memory footprint) and then control how large the heap is using the CurrentMaxHeapSize dynamic limit.
While there are no signs of this feature being actively worked on, it's a relatively new JEP (from 2018), so I would still keep it in mind.
The company Jelastic (jelastic.com) has made a working prototype of JEP 8204088 for the G1 garbage collector. See the description at http://mail.openjdk.java.net/pipermail/hotspot-gc-dev/2018-May/022077.html and the list of patches for OpenJDK at http://cr.openjdk.java.net/~tschatzl/jelastic/cmx/
I'm 100% aware of the pitfalls of the JVM and the advances/strides that have been made in the JVM world for it to understand the ergonomics/resources of a container. Also, yes, I know that Java by default tries to allocate 1/4 of the RAM available in the environment it's running in...
So I had some thoughts and questions. I have made a 50/50 rule: if my application needs 1GB of Xmx, then I create a 2GB container, which gives another 1GB of RAM for JVM overhead and any container/swap stuff (though I'm not sure how swap really works within a container).
So I was thinking: if my application needs 6GB of Xmx, do I really need to create a 12GB container, or can I get away with a 7GB or 8GB one? How much headroom do we need to give inside the container when it comes to RAM?
If your container is dedicated to the JVM, then you don't need to use a percentage.
You need to know how much java heap you need -- in this case 6GB -- and how much everything else needs, which it looks like you've already proven is less than 2GB.
Then just add them up. An 8GB container should be fine in this case. Note that stack and heap memory are separate, though, so if you need a large number of threads running simultaneously, don't forget to add 1MB of stack space for each one (or whatever you set with -Xss).
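The "just add them up" arithmetic can be sketched like this (the method name and all the numbers are illustrative assumptions matching the example above, not measurements):

```java
public class ContainerSizing {
    // Container size estimate: heap + non-heap/native overhead + thread stacks.
    static long containerMb(long heapMb, long nonHeapMb, int threads, long stackMbPerThread) {
        return heapMb + nonHeapMb + (long) threads * stackMbPerThread;
    }

    public static void main(String[] args) {
        // 6 GB heap, ~1.5 GB assumed JVM/native overhead, 500 threads at 1 MB stack each
        long mb = containerMb(6144, 1536, 500, 1);
        System.out.println("suggested container size: " + mb + " MB"); // 8180 MB, i.e. ~8 GB
    }
}
```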
ALSO, I really recommend setting the minimum and maximum Java heap size to the SAME value (6GB in this case). Then you're guaranteed there will be no surprises when Java tries to grow the heap.
This is a very complicated question. The heap is only a portion of a Java process, which also holds a lot of native resources not tracked by -Xms and -Xmx. Here is a very nice summary of what these might be.
Then there is the garbage-collector algorithm you use: the more freedom (extra space) it has, the better. This has to do with so-called GC barriers. In very simple terms, while a GC is executing, all heap manipulations are "intercepted" and tracked separately. If the allocation rate is high, the space needed to track these changes (while an active GC happens) tends to grow. The more space you have, the better the GC will perform.
Then it matters which Java version you are using (because -Xmx and -Xms might mean different things under different Java versions with respect to containers); for example, take a look here.
So there is no easy answer here. If you can afford 12GB of memory, do so. Memory is (usually) a lot cheaper than debugging any of the problems above.
Why does Java not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
Is it just a policy made by JVM developers, or is an advantage of .NET CLR's architecture over JVM's one? In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
Thanks
EDIT: I really think this is a bad design choice for Java. It is not safe to set Xmx as high as possible (e.g. 100 GB!). If a user needs to run my code on bigger data, he may run it on a system with more available RAM. Why should I, as the developer, set the maximum available memory of my program? I do not know how big the data will be!
The JVM increases the heap size when it needs to, up to the maximum heap size you set. It doesn't grab all the memory up front, as it would have to preallocate it on startup, and you might want to use some memory for something else, like thread stacks, shared libraries, off-heap memory, etc.
Why Java does not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
If you set the maximum heap size large enough, or use off-heap memory, it will. It just won't do this by default. One reason is that heap memory has to be in main memory and cannot be swapped out without killing the performance of your machine (if not killing your machine outright). This is not true of C programs, and expanding that far is worse than failing to expand.
If you have a JVM with a heap size 10% larger than main memory and you use that much, then as soon as you perform a GC, which has to touch every page more than once, you are likely to find you need to power-cycle the box.
Linux has a process killer for when resources run out; if it doesn't trigger, you might be lucky enough to be able to restart.
Is it just a policy made by JVM developers, or is an advantage of .NET CLR's architecture over JVM's one
A key feature of the JVM is that it is platform independent, so it has its own controls. A JVM running at the limit of your process space is likely to prevent your machine from working (through heavy swapping). I don't know how .NET avoids this.
In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
It does already, as I have said; it's just not a good idea to allow it to use too much memory.
It is the developer's decision how much heap memory should be allowed for the Java process. It is based on various factors like the project design, the platform it is going to run on, etc.
We can set the heap size with these properties:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set java thread stack size
As you can see, we set the initial heap size, and if the JVM later finds that more is needed, it can increase the heap up to the specified maximum limit. In fact, the size typically changes when a GC runs (though not necessarily). I had posted a question on similar grounds; you can refer to it. So increasing/decreasing the heap size is done by the JVM; all we have to do as developers is specify the limits based on our requirements.
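The growth from the initial toward the maximum heap size can be observed from inside a program (a minimal sketch; the exact before/after numbers depend on your -Xms/-Xmx settings and the GC in use):

```java
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    // Allocate roughly `mb` megabytes and return the heap size the JVM
    // has reserved afterwards (totalMemory), which may have grown toward -Xmx.
    static long allocateAndMeasure(int mb) {
        List<byte[]> chunks = new ArrayList<>();
        for (int i = 0; i < mb; i++) {
            chunks.add(new byte[1024 * 1024]);
        }
        long total = Runtime.getRuntime().totalMemory();
        chunks.clear(); // let the GC reclaim the test data
        return total;
    }

    public static void main(String[] args) {
        System.out.println("heap before: "
                + Runtime.getRuntime().totalMemory() / (1024 * 1024) + " MB");
        long total = allocateAndMeasure(64);
        System.out.println("heap after allocating 64 MB: "
                + total / (1024 * 1024) + " MB");
    }
}
```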
I cannot understand the Java memory usage. I have an application which is executed with maximum memory size set to 256M. Yet, at some point in time I can see that according to the task manager it takes up to 700MB!
Needless to say, all the rest of the applications are a bit unresponsive when this happens as they are probably swapped out.
It's JDK 1.6 on WinXP. Any ideas?
The memory configured is available to the application. It won't include:
the size of the JVM itself
the jars/libs loaded in
native libraries and related allocated memory
which will result in a much bigger image. Note that due to how the OS and the JVM work, that 700MB may be shared between multiple JVMs (due to shared binary images, shared libraries, etc.)
The amount you specify with -Xmx is only for the user-accessible heap: the space in which you create runtime objects dynamically.
The Java process will use a lot more space for its own needs, including the JVM itself, the program and other libraries, the constant pool, etc.
In addition, because of the way the garbage collection system works, there may be more memory allocated than what is currently in the heap - it just hasn't been reclaimed yet.
All that being said, setting your program to a maximal heap of 256MB is really lowballing it on a modern system. For heavy programs you can usually request at least 1GB of heap.
As you mentioned, one possible cause of slowness is that some of the memory allocated to Java gets swapped out to disk. In that case the program would indeed start churning the disk, so don't go overboard if you have little physical memory available. On Linux you can get page-fault stats for a process; I am sure there's a similar way on Windows.
The -Xmx option only limits the Java heap size. In addition to the heap, Java allocates memory for other things, including a stack for each thread (set by -Xss; the default is platform-dependent, typically a few hundred kB to 1MB), the PermGen space, etc.
So, depending on how many threads you launch, the number of classes your application loads, and some other factors, you may use a lot more memory than expected.
Also, as pointed out, the Windows task manager may take the virtual memory into account.
You mean the heap, right? As far as I know there are two things to take care of: the Xms option, which sets the initial Java heap size, and the Xmx option, which sets the maximum Java heap size. If the heap memory exceeds the Xmx value, an OutOfMemoryError should be thrown.
What about the virtual pages it's taking up? I think Windows shows you the full aggregated set of everything.
I know how to set the Java heap size in Tomcat and Eclipse. My question is: why? Was there an arbitrary limit set on the initial heap back when Java was first introduced so the VM wouldn't grow over a certain size? It seems that with most machines today having large amounts of memory available, this isn't something we should have to deal with.
Thanks,
Tom
Even now, the heap doesn't grow without limit.
When the oldest generation is full, should you expand it or just GC? Or should you only expand it if a GC doesn't free any memory?
.NET takes the approach you'd like: you can't tell it to only use a certain amount of heap. Sometimes it feels like that's a better idea, but other times it's nice to be able to have two processes on the same machine and know that neither of them will be able to hog the whole of the memory...
I glanced by this the other day, but I'm not sure if this is what you want: -XX:+AggressiveHeap. According to Sun:
This option instructs the JVM to push memory use to the limit: the overall heap is more than 3850MB, the allocation area of each thread is 256K, the memory management policy defers collection as long as possible, and (beginning with J2SE 1.3.1_02) some GC activity is done in parallel. Because this option sets heap size, do not use the -Xms or -Xmx options in conjunction with -XX:+AggressiveHeap. Doing so will cause the options to override each other's settings for heap size.
I wasn't sure if this really meant what I thought it meant, though - that you could just let the JVM gobble up heap space until it is satisfied. However, it doesn't sound like it's a good option to use for most situations.
I would think that it's good to be able to provide a limit so that if you have a memory issue it doesn't gobble up all the system memory leaving you with only a reboot option.
Java is a cross-platform system. Some systems (like Unix and derivatives) have a ulimit command which allows you to limit how much memory a process can use; others don't. Plus, Java is sometimes run embedded, for example in a web browser. You don't want a broken applet to bring down your desktop (well, that was at least the idea, but applets never really caught on; that's another story). Essentially, this option is one of the key cornerstones of sandboxing.
So the VM developers needed a portable solution: they added an option to the VM which allows anyone (user, admin, web browser) to control how much RAM a VM can allocate at most. The needs of the various uses of Java are just too diverse for one size to fit all.
This becomes even more important today when you look at mobile devices. Your desktop has 2-8GB of RAM, but your mobile probably has much less. And for those devices you really don't want one bad app to bring down the device, because there might not even be a user around who could check.