I am running a server with the following attributes:
Windows Server 2008 R2 Standard - 64-bit
4 GB RAM
I am trying to set the heap size to 3 GB for an application, using the flags -Xmx3G -Xms3G. Running with these flags results in the following error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
I have been experimenting with the setting to find my ceiling, and 1568 MB is the highest value that works. What am I missing?
How much physical memory is available on your system (out of the original 4 GB)? It sounds like your system doesn't have 3 GB of physical memory available when the VM starts up.
Remember that the JVM needs more memory than is allocated to the heap -- there are other data structures as well (thread stacks, etc.) that also need memory. So the settings you are providing attempt to use more than 3 GB of memory.
Also, are you using a 64-bit JVM? The practical limit for heap size on a 32-bit VM is 1.4 to 1.6 gigabytes, according to this document.
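If in doubt, it is easy to check from inside the process which JVM you are actually running and how much heap it really got. A minimal sketch (sun.arch.data.model is a Sun/Oracle-specific property, so treat its absence as unknown):

// Minimal sketch: report JVM bitness and the effective max heap.
public class JvmCheck {
    public static void main(String[] args) {
        System.out.println("os.arch = " + System.getProperty("os.arch"));
        System.out.println("data model = " + System.getProperty("sun.arch.data.model", "unknown") + " bit");
        System.out.println("max heap = " + (Runtime.getRuntime().maxMemory() / (1024 * 1024)) + " MB");
    }
}

Running this with the same flags as the application shows immediately whether the 64-bit VM is the one being picked up from the PATH.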
Java requires contiguous virtual memory for the heap at startup. On Windows, 32-bit applications run in a 32-bit emulated environment, so you don't get much more contiguous memory than you would on a 32-bit OS. Compare Solaris, where you get over 3 GB of virtual memory for 32-bit Java.
I suggest you use the 64-bit version of Java as this will make use of all the memory you have. You still need to have free memory but the larger address space doesn't suffer from fragmentation.
BTW: The heap space is only part of the memory used; you also need memory for shared libraries, direct memory, GUI components, etc.
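To make that last point concrete, here is a small sketch showing that direct buffers live outside the -Xmx heap: the 64 MB direct allocation barely moves the heap counters (on HotSpot, direct memory is capped separately, by -XX:MaxDirectMemorySize):

import java.nio.ByteBuffer;

// Sketch: a direct buffer is allocated outside the Java heap,
// so it does not count against -Xmx.
public class DirectMemoryDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long heapBefore = rt.totalMemory() - rt.freeMemory();
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB off-heap
        long heapAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("heap used before: " + heapBefore + " bytes, after: " + heapAfter + " bytes");
        System.out.println("direct buffer capacity: " + direct.capacity() + " bytes");
    }
}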
It seems you don't have 3 GB of physical memory available. Here is an interesting article on Java heap size setting errors.
Related
I am working on a Java application and I have set the following configuration in the VM options:
-Xms and -Xmx options are set to 1024m
-XX:MaxPermSize=128m
Hardware: 32-bit Windows 7 system with 2 GB RAM.
I often encounter the Java "out of swap space" error. What could be the reason? Please help me.
The reason is that your operating system is not configured with enough swap space for the job mix that you are running. Swap space is an area on disc where the operating system puts copies of memory pages when there are more virtual memory pages than physical memory pages.
So what has happened is that your JVM has asked for more virtual memory than the operating system can give it.
(Updated to include Peter's comments)
Some possible fixes:
Add more physical memory, assuming that the hardware and OS allow this. (In this case, the OS should allow it ...)
Configure the system with more swap space.
Kill some of the other non-essential applications and services running on the machine.
Change the Java application's JVM options to reduce the heap size.
The Java release notes, under "HotSpot VM", say this about this exact error:
If you see this symptom, consider increasing the available swap space
by allocating more of your disk for virtual memory and/or by limiting
the number of applications you run simultaneously. You can also
decrease your usage of memory by reducing the value of the -Xmx flag,
which limits the size of the Java object heap.
In other words, your machine is doing too many other things using up memory to be able to supply the 1GB you're telling the Java VM that it should be able to use.
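If you want to watch this happen rather than guess, a crude but illustrative sketch is to keep allocating until the JVM can no longer get memory; run it with the same -Xms/-Xmx settings as the application:

import java.util.ArrayList;
import java.util.List;

// Crude sketch: grab 10 MB chunks until allocation fails, then
// report how much memory the JVM actually managed to obtain.
public class AllocProbe {
    public static void main(String[] args) {
        List<byte[]> chunks = new ArrayList<byte[]>();
        try {
            while (true) {
                chunks.add(new byte[10 * 1024 * 1024]);
            }
        } catch (OutOfMemoryError e) {
            int got = chunks.size();
            chunks.clear(); // release the chunks so printing is safe
            System.out.println("obtained ~" + (got * 10) + " MB before failing");
        }
    }
}

On a machine that is short on swap, either the VM fails to start at all (when -Xms cannot be committed) or the probe stops well before reaching -Xmx.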
We have a 32-bit JVM running under 64-bit RHEL5 on a box which has plenty of memory (32 GB). For various reasons, this process requires a pretty large managed heap and permgen space; currently, it runs with the following VM arguments:
-Xmx2200M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled
I have started seeing JVM crashes recently because it - seemingly - ran out of native memory (it could not create native threads, or failed to allocate native memory, etc.). These crashes were not (directly) related to the state of the managed heap, as when those crashes happened the managed heap was ~50-70% full.
I know that the memory reserved for the managed process is close to 2.5 G which leaves not more than 0.5G for the JVM itself, BUT
- I don't understand why 0.5 isn't enough for the JVM, even if there is constant GCing going on
- the real question is this: when I connect to the process using jconsole, then it says that (currently)
Committed virtual memory:
3,211,180 kbytes
which is more than 3 GB. I can imagine that for some reason the JVM thinks it has 3,211,180 kbytes (3.06 GB) of memory, but when it tries to go over 3 GB the memory allocation fails.
Any ideas on
a) why this happens
b) how it can be avoided
Thanks.
Mate
There is a lot of overhead in a typical VM that is not counted in the VM's own accounting, because it is essentially taken by the native parts of the process. For example, the .so files mapped in to provide native-level code for system libraries are not counted in the base VM accounting. Your typical shared library is mapped in at the top GB of memory, so if you try to allocate memory into that region you will be denied, because it would overlap the shared libraries' memory region. Memory allocation on most OSes is performed by a simple bar that is raised when you ask for more memory; when raising the bar would conflict with other uses of the address space, the allocation simply fails. Most of the details that follow are about this.
You need to avoid needing so much memory in a 32-bit process; that is the fundamental challenge. It is trivial to get a 64-bit VM that lets you use far more memory than would otherwise be accessible - in this situation, it is simply the usable option.
If you are using a 32-bit process, there is a high probability that you are encountering the effective address-space limit of a 32-bit process. For Windows, this is a maximum of about 3 GB - anything above this is reserved for I/O space and the kernel. You can move this boundary, but doing so has a tendency to break applications/drivers that are designed for the 32-bit OS.
For Linux, you end up with ~3 GB of usable addressable RAM per process; the rest is used up by things like the kernel and mapped-in shared libraries. The limit is referred to as the 'address space limit', and I presume it can be tuned.
How to avoid it? For the most part, you can't; it's a physical limitation of the 32-bit address space and of needing the kernel/IO in the same address space as the process on a 32-bit OS.
With 64-bit OSes you have (most of) the entire 64-bit address space to play with, which is vastly more than you can use.
When you start a JVM, it allocates its maximum size immediately. How much of that memory is actually used doesn't really matter. Your application can address about 3 GB, of which about 2.3 GB you have allocated to heap and perm gen. The rest is available for shared libraries (typically around 200 MB) and thread stacks.
Worrying about why you can't use the full 3 GB of address space isn't very useful when the solution is relatively trivial (use a 64-bit JVM). I am assuming you don't have any shared libraries which are only available in 32-bit. However, if you do have additional shared libraries, they can easily be using hundreds of MB.
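You can read the same figures jconsole reports from inside the process. A minimal sketch using the standard management beans (note that "non-heap" here covers perm gen and the code cache, not thread stacks or mapped shared libraries):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Sketch: print init/used/committed/max for heap and non-heap,
// the same numbers jconsole displays on its Memory tab.
public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("heap:     " + mem.getHeapMemoryUsage());
        System.out.println("non-heap: " + mem.getNonHeapMemoryUsage());
    }
}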
I am using 32-bit Windows 7 with Eclipse, and I have 4 GB RAM.
I want to give my Java application a maximum heap size of around 3 GB, but the most I can allocate is about 1.5 GB through the VM arguments (currently -Xmx1056m).
What should I do? If I install 64-bit Windows 7, will I then be able to allocate a 3 GB heap to my app?
A regular 32-bit Windows process can only address 2 GB of memory, even if you have more memory available. You can find the memory limits for different Windows versions here.
Since the VM needs memory for more things than just the heap, the max heap size will be somewhat less than the maximum memory available to the process. Usually, you can tweak the heap up to around 1.6 GB for a 32-bit Windows VM.
You need a 64-bit OS and a 64-bit VM to allocate that much RAM.
I don't have the link right now that describes the JVM memory management process. The limitation you have stumbled upon is a limitation of how Java performs garbage collection: the Java memory heap must be a contiguous block, and the garbage collection algorithms were optimized around this design constraint so they can perform efficiently. On a 32-bit operating system, you are not in control of which memory addresses device drivers are loaded into. On Windows, the OS will only relocate a device driver's base address stored in the DLL if it conflicts with already-loaded code. Other operating systems may relocate all device drivers on load so they live in a contiguous block near the kernel.
Basically, the limitation is the same for 64-bit and 32-bit operating systems; it's just that on a 64-bit OS there are many more address ranges to choose from, which is why a 64-bit OS is able to find a larger contiguous block of memory addresses than a 32-bit OS. Make sure you use a 64-bit JVM to match the OS.
EDIT: Additionally, the 32-bit JVM has a max heap size limit of 2 GB.
REFERENCE:
http://publib.boulder.ibm.com/infocenter/javasdk/tools/index.jsp?topic=/com.ibm.java.doc.igaa/_1vg00014884d287-11c3fb28dae-7ff6_1001.html
Java maximum memory on Windows XP
http://forums.sun.com/thread.jspa?messageID=2715152#2715152
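If you want to find the actual ceiling on a given machine empirically, a sketch of one approach is below. It assumes java is on the PATH, and relies on "java -version" exiting with status 0 only if the VM (and therefore the contiguous heap reservation) initialized successfully; the 64 MB / 8 GB bracket is an assumption you may need to widen:

import java.io.InputStream;

// Sketch: binary-search the largest -Xmx (in MB) at which the VM
// still starts on this machine.
public class MaxHeapProbe {
    static boolean starts(int mb) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("java", "-Xmx" + mb + "m", "-version");
        pb.redirectErrorStream(true);
        Process p = pb.start();
        InputStream in = p.getInputStream();
        while (in.read() != -1) { /* discard the version banner */ }
        return p.waitFor() == 0; // non-zero means the heap could not be reserved
    }

    public static void main(String[] args) throws Exception {
        int lo = 64, hi = 8192; // assumed bracket: 64 MB works, 8 GB fails
        while (lo + 1 < hi) {
            int mid = (lo + hi) / 2;
            if (starts(mid)) lo = mid; else hi = mid;
        }
        System.out.println("largest working -Xmx is about " + lo + "m");
    }
}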
What you will need is not only a 64-bit OS and a 64-bit VM, but also more memory.
On a 32-bit Windows system the virtual address space is split, with 2 GB for kernel operations and 2 GB for user applications. So you're screwed.
There's one possible but very unlikely workaround: you can enable the /3GB switch to raise this limitation and have the system allocate 1 GB of virtual address space for kernel operations and 3 GB for user applications (if they are linked /LARGEADDRESSAWARE).
Unfortunately, the 32-bit Sun/Oracle HotSpot JVM isn't LARGEADDRESSAWARE (that I know of), and other 32-bit JVMs likely aren't either.
But think about it: even if you were able to do that, you would use all the memory available to your system. Nothing would be left for other programs after you've allocated your 3 GB of heap for your JVM. Your system would be swapping to disk all the time. It would be unusable.
Just get a 64-bit OS with more RAM. That's all there is for you, short of finding ways to have your program use less memory.
I have defined -Xmx1300m (1.3 GB) in the Java VM parameters and my Eclipse does not allow more than this; when running the application I get the exception below:
Exception in thread "Thread-3" java.lang.OutOfMemoryError: Java heap space
What can I do?
You can set the maximum memory Eclipse itself uses with -mx1300m or the like. This limitation will be because you are running 32-bit Java on Windows. On a 64-bit OS, you won't have this problem.
However, it's the maximum memory size you set for each application run in Eclipse which matters. What have you set in your run options in Eclipse?
Your question is very unclear:
Are you running the application in a new JVM?
Did you set the -Xmx / -Xms parameters in the launcher for the child JVM?
If the answer to either of those questions is "no", then try doing ... both. (In particular, if you don't set at least -Xmx for the child JVM, you'll get the default heap size which is relatively small.)
If the answer to both of those questions is "yes", then the problem is that you are running into the limits of your hardware and/or operating system configuration:
On a typical 32-bit Windows, a user process can only address a total of 2**31 bytes of virtual memory, and some of that will be used by the JVM binaries, native libraries and various non-heap memory allocations. (On a 32-bit Linux, I believe you can have up to 2**31 + 2**30.) The "fix" for this is to use a 64-bit OS and a 64-bit JVM.
In addition, a JVM is limited in the amount of memory that it can request by the resources of the OS's virtual memory subsystem. This is typically bounded by the sum of the available RAM and the size of the disc files / partitions used for paging. The "fix" for this is to increase the size of the paging file / partition. Adding more RAM would probably be a good idea too.
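On the earlier point about launching a child JVM: the heap flags have to be passed to that child process explicitly, since the parent's settings are not inherited. A hypothetical sketch ("MyApp" is a placeholder for the real main class, and the flag values are examples):

// Hypothetical sketch: pass -Xms/-Xmx to the child JVM itself.
public class Launcher {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-Xms256m", "-Xmx1024m",
                "-cp", System.getProperty("java.class.path"), "MyApp");
        pb.redirectErrorStream(true);
        Process child = pb.start();
        // Echo the child's output so startup failures such as
        // "Could not reserve enough space for object heap" are visible.
        int c;
        while ((c = child.getInputStream().read()) != -1) {
            System.out.write(c);
        }
        System.out.flush();
        System.exit(child.waitFor());
    }
}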
You may want to look at the AggressiveHeap option: http://java.sun.com/docs/hotspot/gc1.4.2/#4.2.2.%20AggressiveHeap|outline
It solved a similar issue for me.
On a 64-bit Windows machine with 12GB of RAM and 33GB of Virtual Memory (per Task Manager), I'm able to run Java (1.6.0_03-b05) with an impossible -Xmx setting of 3.5TB but it fails with 35TB. What's the logic behind when it works and when it fails? The error at 35TB seems to imply that it's trying to reserve space at startup. Why would it do that for -Xmx (as opposed to -Xms)?
C:\temp>java -Xmx3500g ostest
os.arch=amd64
13781729280 Bytes RAM
C:\temp>java -Xmx35000g ostest
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
On Solaris (4 GB RAM, Java 1.5.0_16), I pretty much gave up at 1 PB trying to find how high I can set -Xmx. I don't understand the logic for when it will error out on the -Xmx setting.
devsun1.mgo:/export/home/mgo> java -d64 -Xmx1000000g ostest
os.arch=sparcv9
4294967296 Bytes RAM
At least with the Sun 64-bit VM 1.6.0_17 for Windows, ObjectStartArray::initialize will allocate 1 byte for each 512 bytes of heap on VM startup. Starting the VM with 35TB heap will cause the VM to allocate 70GB immediately and hence fail on your system.
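The arithmetic is easy to check: at 1 byte of side table per 512 bytes of heap, a 35 TB heap needs 70 GB up front, well beyond the 12 GB RAM + 33 GB virtual memory reported above, while 3.5 TB needs only about 7 GB, which fits. A tiny sketch of the calculation:

// Sketch of the arithmetic behind the ObjectStartArray side table.
public class SideTableMath {
    public static void main(String[] args) {
        double gb = 1024.0 * 1024 * 1024;
        double tb = 1024 * gb;
        System.out.println("35 TB heap needs " + (35 * tb / 512 / gb) + " GB up front");   // 70.0
        System.out.println("3.5 TB heap needs " + (3.5 * tb / 512 / gb) + " GB up front"); // 7.0
    }
}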
The 32-bit VM (and, I suppose, the 64-bit VM too) from Sun does not take account of available physical memory when calculating the maximum heap; it is only limited by the 2 GB of addressable memory on Windows and Linux (or 4 GB on Solaris), or by possibly failing to allocate enough memory at startup for the management area.
If you think about it, checking the sanity of the max heap value against available physical memory does not make much sense. X GB of physical memory does not mean that X GB is available to the VM when required, it can just as well have been used by other processes, so the VM needs a way to cope with the situation that more heap is required than available from the OS anyway. If the VM is not broken, OutOfMemoryErrors are thrown if memory cannot be allocated from the OS, just as if the max heap size has been reached.
According to this thread on Sun's java forums (the OP has 16GB of physical memory):
You could specify -Xmx20g, but if the total of the memory needed by all the processes on your machine ever exceeds the physical memory on your machine, you are likely to end up paging. Some applications can survive running in paged memory, but the JVM isn't one of them. Your code might run okay, but, for example, garbage collections will be abysmally slow.
UPDATE: I googled a bit further and, according to the Frequently Asked Questions About the Java HotSpot VM and more precisely How large a heap can I create using a 64-bit VM?
On 64-bit VMs, you have 64 bits of addressability to work with, resulting in a maximum Java heap size limited only by the amount of physical memory and swap space your system provides. See also Why can't I get a larger heap with the 32-bit JVM?
I don't know why you are able to start a JVM with a heap >45GB. This is a bit confusing...
Just to reinforce Pascal's answer: be very careful on Windows when specifying a high max memory size. I was working on a server project that required as much physical memory as possible, but once you are over physical RAM, "abysmal performance" is not a good description of what happens; "hung machine" might be better.
What happens (at least, this is my evaluation of it after days of examining logs and re-running tests) is this: Windows runs out of RAM and asks all apps to free up what they can. When it asks Java, Java kicks off a GC. The GC touches all of memory, causing anything that has been swapped out to be swapped back in. This in turn causes Windows to run out of memory. Windows then sends a message to all apps asking them to free up what they can... (recurse indefinitely)
This may not ACTUALLY be what is going on, but the fact that a Java GC touches very old memory at times makes it incompatible with paging.