I cannot understand Java's memory usage. I have an application which runs with the maximum memory size set to 256 MB. Yet, at some point I can see that, according to the task manager, it takes up to 700 MB!
Needless to say, all the rest of the applications are a bit unresponsive when this happens as they are probably swapped out.
It's JDK 1.6 on WinXP. Any ideas?
The memory you configure is what's available to the application. It won't include:
the size of the JVM itself
the jars/libraries loaded in
native libraries and the memory they allocate
which will result in a much bigger process image. Note that, due to how the OS and the JVM work, that 700 MB may be partly shared between multiple JVMs (shared binary images, shared libraries, etc.).
The amount you specify with -Xmx is only for the user-accessible heap - the space in which you create runtime objects dynamically.
The Java process will use a lot more space for its own needs, including the JVM itself, the program and other libraries, the constant pool, etc.
In addition, because of the way the garbage collection system works, there may be more memory allocated than what is currently in the heap - it just hasn't been reclaimed yet.
All that being said, setting your program to a maximal heap of 256MB is really lowballing it on a modern system. For heavy programs you can usually request at least 1GB of heap.
As you mentioned, one possible cause of slowness is that some of the memory allocated to Java gets swapped out to disk. In that case, the program would indeed start churning the disk, so don't go overboard if you have little physical memory available. On Linux, you can get page-fault stats for a process; I am sure there's a similar way on Windows.
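If you want to see what the heap itself is doing from inside the program, here is a minimal sketch using the standard Runtime API (the class name is made up for illustration); the gap between these figures and the task manager's number is the non-heap overhead described above:

// Sketch: print the heap limits the JVM is actually working with.
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap (-Xmx):   " + rt.maxMemory() / mb + " MB");
        System.out.println("committed heap:    " + rt.totalMemory() / mb + " MB");
        System.out.println("free of committed: " + rt.freeMemory() / mb + " MB");
    }
}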
The -Xmx option only limits the Java heap size. In addition to the heap, Java will allocate memory for other things, including a stack for each thread (sized with -Xss; the default is platform-dependent, typically several hundred KB to 1 MB), the permanent generation (PermGen), etc.
So, depending on how many threads you launch, the number of classes your application loads, and some other factors, you may use a lot more memory than expected.
Also, as pointed out, the Windows task manager may be reporting virtual memory, not just the heap.
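As a hedged illustration (the flag values are invented for the example, and -XX:MaxPermSize only applies to the pre-Java-8 JVMs discussed here), all of these knobs can be set explicitly at launch:

java -Xmx256m -Xss512k -XX:MaxPermSize=128m MyApp

Each flag caps a different pool: -Xmx the heap, -Xss each thread stack, -XX:MaxPermSize the permanent generation; the process as a whole will still be larger than their sum.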
You mean the heap, right? As far as I know there are two things to take care of: the -Xms option, which sets the initial Java heap size, and the -Xmx option, which sets the maximum Java heap size. If heap usage exceeds the -Xmx value, you should get an OutOfMemoryError.
What about the virtual pages it's taking up? I think Windows shows you everything aggregated.
Related
In the spirit of the question Java: Why does MaxPermSize exist?, I'd like to ask why the Oracle JVM uses a fixed upper limit for the size of its memory allocation pool.
The default is 1/4 of your physical RAM (with upper and lower limits); as a consequence, if you have a memory-hungry application you have to manually change the limit (parameter -Xmx), or your app will perform poorly, possibly even crash with an OutOfMemoryError.
Why does this fixed limit even exist? Why does the JVM not allocate memory as needed, like native programs do on most operating systems?
This would solve a whole class of common problems with Java software (just Google to see how many hints there are on the net on solving problems by setting -Xmx).
Edit:
Some answers point out that this will protect the rest of the system from a Java program with a run-away memory leak; without the limit, this would bring the whole system down by exhausting all memory. This is true. However, it is equally true for any other program, and modern OSes already let you limit the maximum memory for a program (Linux ulimit, Windows "Job Objects"). So this does not really answer the question, which is "Why does the JVM do it differently from most other programs / runtime environments?".
Why does this fixed limit even exist? Why does the JVM not allocate memory as needed, like native programs do on most operating systems?
The reason is NOT that the GC needs to know beforehand what the maximum heap size can be. The JVM is clearly capable of expanding its heap ... up to the maximum ... and I'm sure it would be a relatively small change to remove that maximum. (After all, other Java implementations do this.) And it would equally be possible to have a simple way to say "use as much memory as you like" to the JVM.
I'm sure that the real reason is to protect the host operating system against the effects of faulty Java applications using all available memory. Running with an unbounded heap is potentially dangerous.
Basically, many operating systems (e.g. Windows, Linux) suffer serious performance degradation if some application tries to use all available memory. On Linux for example, the system may thrash badly, resulting in everything on the system running incredibly slowly. In the worst case, the system won't be able to start new processes, and existing processes may start crashing when the operating system refuses their (legitimate) requests for more memory. Often, the only option is to reboot.
If the JVM ran with an unbounded heap by default, any time someone ran a Java program with a storage leak ... or that simply tried to use too much memory ... they would risk bringing down the entire operating system.
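To illustrate the run-away-leak scenario (this demo class is invented; run it with a small limit such as -Xmx64m), a bounded heap makes the program fail fast with an OutOfMemoryError instead of dragging the whole machine into swap:

// Deliberately leaky program: with a bounded heap it dies quickly and harmlessly.
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    public static void main(String[] args) {
        List<byte[]> leak = new ArrayList<byte[]>();
        while (true) {
            leak.add(new byte[1024 * 1024]); // hold on to 1 MB per iteration, forever
        }
    }
}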
In summary, having a default heap bound is a good thing because:
it protects the health of your system,
it encourages developers / users to think about memory usage by "hungry" applications, and
it potentially allows GC optimizations. (As suggested by other answers: it is plausible, but I cannot confirm this.)
EDIT
In response to the comments:
It doesn't really matter why Sun's JVMs live within a bounded heap, where other applications don't. They do, and advantages of doing so are (IMO) clear. Perhaps a more interesting question is why other managed languages don't put a bound on their heaps by default.
The -Xmx and ulimit approaches are qualitatively different. In the former case, the JVM has full knowledge of the limits it is running under and gets a chance to manage its memory usage accordingly. In the latter case, the first a typical C application knows about it is when a malloc call fails. The typical response is to exit with an error code (if the program checks the malloc result), or die with a segmentation fault. OK, a C application could in theory keep track of how much memory it has used and try to respond to an impending memory crisis. But it would be hard work.
The other thing that is different about Java and C/C++ applications is that the former tend to be both more complicated and longer running. In practice, this means that Java applications are more likely to suffer from slow leaks. In the C/C++ case, the fact that memory management is harder means that developers don't attempt to build single applications of that complexity. Rather, they are more likely to build (say) a complex service by having a listener process fork off child processes to do stuff ... and then exit. This naturally mitigates the effect of memory leaks in the child processes.
The idea of a JVM responding "adaptively" to requests from the OS to give memory back is interesting. But there is a BIG problem. In order to give a segment of memory back, the JVM first has to clear out any reachable objects in the segment. Typically that means running the garbage collector. But running the garbage collector is the last thing you want to do if the system is in a memory crisis ... because it is pretty much guaranteed to generate a burst of virtual memory paging.
Hm, I'll try summarizing the answers so far.
There is no technical reason why the JVM needs to have a hard limit for its heap size. It could have been implemented without one, and in fact many other dynamic languages do not have this.
Therefore, giving the JVM a heap size limit was simply a design decision by the implementors. Second-guessing why this was done is a bit difficult, and there may not be a single reason. The most likely reason is that it helps protect a system from a Java program with a memory leak, which might otherwise exhaust all RAM and cause other apps to crash or the system to thrash.
Sun could have omitted the feature and simply told people to use the OS-native resource limiting mechanisms, but they probably wanted to always have a limit, so they implemented it themselves.
At any rate, the JVM needs to be aware of any such limit (to adapt its GC strategy), so using an OS-native mechanism would not have saved much programming effort.
Also, there is one reason why such a built-in limit is more important for the JVM than for a "normal" program without GC (such as a C/C++ program):
Unlike a program with manual memory management, a program using GC does not really have a well-defined memory requirement, even with fixed input data. It only has a minimum requirement, i.e. the sum of the sizes of all objects that are actually live (reachable) at a given point in time. However, in practice a program will need additional memory to hold dead, but not yet GCed objects, because the GC cannot collect every object right away, as that would cause too much GC overhead. So GC only kicks in from time to time, and therefore some "breathing room" is required on the heap, where dead objects can await the GC.
This means that the memory required for a program using GC is really a compromise between saving memory and having good throughput (by letting the GC run less often). So in some cases it may make sense to set the heap limit lower than what the JVM would use if it could, to save RAM at the expense of performance. To do this, there needs to be a way to set a heap limit.
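As a hedged example of that trade-off (the program name and numbers are invented), running the same workload with a small and a large limit while logging GC activity makes the compromise visible:

java -Xmx64m -verbose:gc MyBatchJob
java -Xmx512m -verbose:gc MyBatchJob

With the smaller limit the process stays within 64 MB of heap but collects far more often; with the larger one it uses more RAM and collects less frequently.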
I think part of it has to do with the implementation of the Garbage Collector (GC). The GC is typically lazy, meaning it will only start really trying to reclaim memory internally when the heap is at its maximum size. If you didn't set an upper limit, the runtime would happily continue to inflate until it used every available bit of memory on your system.
That's because, from the application's perspective, it's more performant to take more resources than to make the effort to fully utilize the resources it already has. This tends to make sense for a lot of (if not most) uses of Java, namely a server setting where the application is literally the only thing that matters on the server. It tends to be slightly less ideal when you're trying to implement a client in Java, which will run amongst dozens of other applications at the same time.
Remember that with native programs, the programmer typically requests but also explicitly cleans up resources. That isn't typically true with environments that do automatic memory management.
It is due to the design of the JVM. Other JVMs (like the one from Microsoft and some IBM ones) can use all the memory available in the system if needed, without an arbitrary limit.
I believe it allows for GC-optimizations.
I think the upper limit for memory is linked to the fact that the JVM is a VM.
Just as any physical machine has a given (fixed) amount of RAM, so does the VM.
The maximum size makes the JVM easier for the operating system to manage and ensures some performance gains (less swapping).
Sun's JVM also runs on quite limited hardware (embedded ARM systems), where the management of resources is crucial.
One answer that no-one above gave is that the JVM uses both heap and non-heap memory pools. Putting an upper limit on the heap defines not only how much memory is available for the heap memory pools, but it also defines how much memory is available for NON-HEAP usages. I suppose that the JVM could just allocate non-heap at the top of virtual memory and heap at the bottom of virtual memory and grow both toward each other.
Non-heap memory includes the DLLs or SOs that comprise the JVM and any native code being used as well as compiled Java code, thread stacks, native objects, PermGen (meta-data about compiled classes), among other uses. I've seen Java programs crash because so much memory was given to the heap that the application ran out of non-heap memory. This is where I learned that it can be important to reserve memory for non-heap usages by not setting the heap to be too large.
This makes a much bigger difference in a 32-bit world where an application often has only 2GB of virtual address space than it does in a 64-bit world, of course.
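A hedged way to see the heap/non-heap split from inside a running program (the class name is invented) is the standard java.lang.management API, which reports the two kinds of pools separately:

// Sketch: compare heap vs. non-heap usage as reported by the JVM itself.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryPools {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        long mb = 1024 * 1024;
        System.out.println("heap used:     " + heap.getUsed() / mb + " MB");
        System.out.println("non-heap used: " + nonHeap.getUsed() / mb + " MB");
    }
}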
Would it not make more sense to separate the upper bound that triggers GC and the maximum that can be allocated ? Once the memory allocated hits the upper-bound, GC can kick in and release some memory to the free pool.
Sort of like how I clean the desk that I share with my co-worker. I have a large desk, and my threshold for how much junk I can tolerate on it is much less than the size of the desk. I don't need to fill up every available inch before I garbage-collect.
I could also return some of the desk space that I'm using to my co-worker, who is sharing my desk... I understand JVMs don't return memory back to the system after they've allocated it to themselves, but it doesn't have to be that way, does it?
It does allocate memory as needed, up to -Xmx ;)
One reason I can think of is that once the JVM allocates an amount of memory for its heap, it will never let it go. So if your heap has no upper bound, the JVM may just grab all the free memory on the system and then never let it go.
The upper bound also tells the JVM when it needs to do a full garbage collection. If your app is still under the upper bound, the JVM will postpone garbage collection and let the memory footprint of your application grow.
Native programs can die due to out-of-memory errors as well, since native applications also have a memory limit: the memory available on the system minus the memory already held by other applications.
The JVM also needs a contiguous block of system memory in order for garbage collection to be performed efficiently.
EDIT
The JVM will apparently let some memory go, but it is rare with the default configuration.
I'm 100% aware of the pitfalls of the JVM and the advances/strides that have been made in the JVM world for it to understand the ergonomics/resources of a container. Also, yes, I know that Java by default tries to allocate 1/4 of the RAM available in the environment it's running in...
So I had some thoughts and questions. I have made a 50/50 rule: if my application needs 1GB of Xmx, then I create a 2GB container, which gives another 1GB of RAM for the JVM overhead and any container/swap stuff (though I'm not sure how swap really works within a container).
So I was wondering: if my application needs 6GB of Xmx, do I really need to create a 12GB container, or can I get away with a 7GB or 8GB container? How much headroom do we need to give inside the container when it comes to RAM?
If your container is dedicated to the JVM, then you don't need to use a percentage.
You need to know how much java heap you need -- in this case 6GB -- and how much everything else needs, which it looks like you've already proven is less than 2GB.
Then just add them up. An 8GB container should be fine in this case. Note that stack and heap memory are separate, though, so if you need a large number of threads running simultaneously, don't forget to add 1MB of stack space for each one (or whatever you set with -Xss).
ALSO, I really recommend setting the minimum and maximum Java heap size to the SAME value -- 6GB in this case. Then you're guaranteed that there will be no surprises when Java tries to grow the heap.
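As a rough sketch of that sizing (the 6GB/8GB numbers come from the question; everything else is illustrative), the container request and the JVM flags would line up something like this:

java -Xms6g -Xmx6g -Xss1m -jar app.jar

in an 8GB container, leaving roughly 2GB for Metaspace/PermGen, the code cache, thread stacks, native buffers and the OS.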
This is a very complicated question. The heap is only a portion of the Java process, which also has a lot of native resources that are not tracked by -Xms and -Xmx. Here is a very nice summary of what these might be.
Then there is the garbage collection algorithm you use: the more freedom (extra space) it has, the better. This has to do with so-called GC barriers. In very simple words, while a GC is executing, all heap manipulations are "intercepted" and tracked separately. If the allocation rate is high, the space needed to track these changes (while an active GC happens) tends to grow. The more space you have, the better the GC will perform.
Then it matters which Java version you are using (because -Xmx and -Xms might mean different things under different Java versions with respect to containers); for example, take a look here.
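For example (hedged: these flags exist only on container-aware JVMs, roughly JDK 10+ and late JDK 8 updates), the heap can be sized as a percentage of the container's memory limit instead of a fixed -Xmx:

java -XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0 -jar app.jar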
So there is no easy answer here. If you can afford 12GB of memory - do so. Memory is a lot cheaper (usually) than debugging any of the problems above.
In Java the virtual machine pre-allocates a memory heap which cannot be expanded at runtime. The developer can increase the size of the heap with the -Xmx switch when the VM loads, but there is no way to increase the maximum size of the heap at runtime. Why is this?
Fragmentation is a massive problem in memory allocation, as is memory starvation. It's a lot simpler, and less error-prone, if you can allocate and reserve the memory you need, especially in a server environment. By pre-allocating memory, you also have a higher probability of having most of your memory contiguously allocated (not guaranteed, thank you #mttdbrd), which could be faster to access.
Going back to when Java first started out, installations with more than 1GB of RAM were pretty much unheard of; instead, we had to work with machines that had as little as 256MB of RAM, sometimes even less! Couple that with how slow RAM was, and it made much more sense to be able to read and write to hopefully contiguously allocated blocks. You are also not constantly hammering the OS to give you more RAM and then releasing it again, freeing up (back then) precious CPU cycles.
In that environment, it's very easy to run out of memory suddenly, so it made a lot of sense to be able to allocate what you MIGHT need and make sure you would have it when the time comes.
These days, I guess with RAM being so much more accessible, it makes a lot less sense, although, when I look at my servers and how memory is allocated, I love the fact that all my Java applications have nice, mostly contiguously allocated blocks of memory when compared to some of the other applications that are all over the place.
That is also why you can't up the heap at runtime, there's no way to guarantee that you will have a contiguous allocation any more.
There is no reason per the JVM specification why the heap size must be specified ahead of time except that it was the choice of the implementors. The specification states: "A Java Virtual Machine implementation may provide the programmer or the user control over the initial size of the heap, as well as, if the heap can be dynamically expanded or contracted, control over the maximum and minimum heap size."
The other answers here are just wrong: "The heap may be of a fixed size or may be expanded as required by the computation and may be contracted if a larger heap becomes unnecessary."
Source: The Java Virtual Machine Specification, Java SE 7 Edition. Section 2.5.3, "Heap." That's page 13 in the printed edition.
By the time your code starts running, the JVM has already been created and configured. Besides, this limitation guarantees that the program will not take all available system resources and break the normal functioning of other applications on a server, no matter how bad its code is. ;)
Why does Java not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
Is it just a policy made by JVM developers, or is it an advantage of the .NET CLR's architecture over the JVM's? In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
Thanks
EDIT: I really think it is a bad design choice for Java. It is not safe to set Xmx as high as possible (e.g. 100 GB!). If a user needs to run my code on bigger data, he may run it on a system with more available RAM. Why should I, as the developer, set the maximum available memory of my program? I do not know how big the data will be!
The JVM increases the heap size when it needs to, up to the maximum heap size you set. It doesn't take all the memory, as it would have to preallocate this on startup, and you might want to use some memory for something else, like thread stacks, shared libraries, off-heap memory, etc.
Why does Java not expand the heap size until it hits the OS-imposed process memory limit, in the same way .NET CLR does?
If you set the maximum heap size large enough, or use off-heap memory, it will. It just won't do this by default. One reason is that heap memory has to be in main memory and cannot be swapped out without killing the performance of your machine (if not killing your machine). This is not true of C programs, and expanding too far is worse than failing to expand.
If you have a JVM with a heap size of 10% more than main memory and you use that much, as soon as you perform a GC, which has to touch every page more than once, you are likely to find you need to power cycle the box.
Linux has a process killer for when resources run out, and if this doesn't trigger you might be lucky enough to be able to restart.
Is it just a policy made by JVM developers, or is it an advantage of the .NET CLR's architecture over the JVM's?
A key feature of the JVM is that it is platform-independent, so it has its own control. A JVM running at the limit of your process space is likely to prevent your machine from working (through heavy swapping); I don't know how .NET avoids this.
In other words, if Oracle engineers want to implement automatic heap expansion for the JVM, are they able to do that?
It already does, as I have said; it's just not a good idea to allow it to use too much memory.
It is the developer's decision how much heap memory to allow for the Java process. It is based on various factors like the project design, the platform it is going to run on, etc.
We can set the heap size with these properties:
-Xms<size> set initial Java heap size
-Xmx<size> set maximum Java heap size
-Xss<size> set Java thread stack size
As you can see, we set the initial heap size, and if the JVM later finds that more is needed, it can increase the heap size up to the specified maximum. In fact, the size typically changes when a GC runs (though that is not mandated). I had posted a question on similar grounds; you can refer to it. So growing and shrinking the heap is done by the JVM; all we have to do as developers is specify the limits based on our requirements.
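A minimal sketch of that behaviour (class name and sizes invented): run it with something like -Xms16m -Xmx512m and watch the committed heap reported by Runtime.totalMemory() grow toward the maximum as allocations accumulate:

// Sketch: watch the committed heap expand toward -Xmx as memory is allocated.
import java.util.ArrayList;
import java.util.List;

public class HeapGrowth {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        List<byte[]> blocks = new ArrayList<byte[]>();
        for (int i = 0; i < 50; i++) {
            blocks.add(new byte[4 * 1024 * 1024]); // allocate 4 MB per step
            System.out.println("committed: " + rt.totalMemory() / (1024 * 1024)
                    + " MB, max: " + rt.maxMemory() / (1024 * 1024) + " MB");
        }
    }
}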
Is there a way to set heap size from a running Java program?
No.
What you can do with an app that has very variable heap requirements is to set your max heap size very high with -Xmx and tune -XX:MaxHeapFreeRatio and -XX:MinHeapFreeRatio so that the app will not hang on to a lot of memory when the heap shrinks (it does that with default settings).
But note that this may cause performance problems when the memory actually used by the app varies both strongly and quickly - in that case you're better off having it hang on to all the memory rather than give it back to the OS only to claim it again a second later. You might also want to fiddle with the GC options to ensure that the GC doesn't leave too many unclaimed objects lying around, which it tends to do when there's a lot of room for the heap to grow, and which would defeat the goal of wanting the heap size to adjust to the app's needs.
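A hedged example of such a configuration (values are illustrative only):

java -Xms64m -Xmx4g -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 MyApp

The JVM then grows the heap when less than 20% of it is free after a GC and shrinks it when more than 40% is free, which is what lets the footprint roughly track what the app actually needs.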
According to http://www.dreamincode.net/forums/showtopic96263.htm, you can't do this at runtime, but you can spawn another process with a different heap size.
You can tweak those settings when you start your application but once the JVM is up and running those values cannot be changed. Something like this:
java -Xms32m -Xmx512m FooBar
will set the minimum heap size to 32MB and the maximum heap size to 512MB. Once these are set, you cannot change them within the running program.
The consensus may indeed be that this is not possible, but we should be looking at the JVM source to see how it can be ergonomically controlled. It would be very nice for a JVMTI agent to be able to adjust the heap/perm/tenured/new/etc. sizing online, at runtime.
What would this do? It would allow agents to infer sizing adjustments based upon performance or footprint goals, which will be important when moving JVMs into the cloud.
You can use the -mx option on startup (also known as -Xmx). This sets the maximum heap size, and you shouldn't need to set it to more than the maximum size you will ever need.
However, a workaround is to have main() check the maximum size and restart the JVM if the maximum size is not as desired, i.e. start another Java process and exit.
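A minimal sketch of that workaround (the class name and the 1 GB target are invented); the program checks Runtime.maxMemory() and, if it was started with too small a limit, relaunches itself with a bigger -Xmx and exits:

// Sketch: relaunch the JVM with a larger -Xmx if the current limit is too small.
public class Relauncher {
    public static void main(String[] args) throws Exception {
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        if (maxMb < 1024 && args.length == 0) {   // too little heap on the first run
            String javaBin = System.getProperty("java.home") + "/bin/java";
            new ProcessBuilder(javaBin, "-Xmx1024m",
                    "-cp", System.getProperty("java.class.path"),
                    "Relauncher", "relaunched")
                    .inheritIO().start().waitFor();
            return;                               // the original process ends here
        }
        System.out.println("Running with a max heap of " + maxMb + " MB");
        // ... real work goes here ...
    }
}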
I asked myself the same question. And unlike the answers above, there is something I can do about my app increasing the JVM's max heap size. If the app is a web server running in cluster mode, I could start a new instance with a changed min/max heap size and then shut down the initial instance. That is especially simple in GlassFish, where the management instance is separate from the nodeAgent (clustered app-server instance) JVM.
Since many JVM applications are web apps, I think this is worth preserving here.
If I understand your question correctly, you're trying to change the heap size at runtime. I don't see any reason why this should be possible. Set the heap size at startup using the -Xmx JVM option. I also advise you to set the -Xms option only if you absolutely need to. This option sets the initial amount of heap memory that is allocated for the JVM.
You should know how your application behaves in terms of memory. Set the value of -Xmx wisely. If your app is some kind of server app you can set a higher value; otherwise balance your choice against the other apps that may run on client machines and, of course, the available memory.