Java process on Windows using less memory than specified in -Xms?

I'm starting my server with the command "java -xms 1280m -xmx 1280m". On Linux machines, this works fine and I see the process using almost the same amount of memory. On Windows machines, however, I see the java process using much less than 1280m - around 500-600m. I gathered this data from the Windows Task Manager, if that matters. The two Windows machines I checked are both Windows 2003 servers and have 2GB and 3GB of RAM respectively.
I always thought that specifying the initial heap size with -xms would force Java to use at least that much memory. Am I wrong? Or is this a peculiarity of Java on Windows?

Look closer. The Task Manager is often misleading: by default it does not show how much memory a process has allocated. Rather, what is shown as "memory used" is the amount of physical memory currently resident for that process.
In the View menu, choose "Select Columns" and add "Virtual Memory Size". There's your memory. Your application apparently never actively uses more than 500-600m, so the rest never gets paged in.

The Windows Task Manager was designed for end users, not for programmers. The latter usually prefer Process Explorer (procexp.exe) from the Sysinternals suite. That, combined with vmmap.exe, will show you exactly what is going on.

Finally back at a computer and ran a couple of quick tests.
On my Windows XP machine, running java -xms gives the output "Unrecognised option".
When running java -Xms I get an "Invalid initial heap size", which is correct since I'm not giving any value, but it accepts and recognises the option.
So it seems my comment was valid and you'll need to fix the casing in your command.

In addition to what Kevin D said about capitalization, note that 32-bit Windows systems generally have an upper bound on the maximum heap size. It varies based on a lot of factors, but I've often seen it right around the 1280m you are trying. I doubt that is the issue here, but it could be related.
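A quick, hedged way to check both the option casing and whether the requested heap can actually be reserved on that box is to start the JVM with the intended settings and have it exit immediately:
rem prints the JVM version and exits; startup fails up front if the heap cannot be reserved
java -Xms1280m -Xmx1280m -version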

64-bit JVM limited to 300GB of memory?

I am attempting to run a Java application on a cluster computing environment (IBM LSF running CentOS release 6.2 Final) that can provide me with up to 1TB of RAM space.
I could create a JVM with up to 300GB of maximum memory (Xmx), although I need more than that (I can provide details, if requested).
However, it seems to be impossible to create a JVM with more than 300GB of maximum memory using the Xmx option. To be more specific, I get the classic error message:
Error occurred during initialization of VM.
Could not reserve enough space for object heap.
The details of my (64-bit) JVM are below:
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
I've also tried with a Java 7 64-bit JVM but I've had exactly the same problem.
Moreover, I tried to create a JVM to run a HelloWorld.jar, but still JVM creation fails if you ask for more than -Xmx300G, so I don't think it has anything to do with the specific application.
Does anyone have any idea why I cannot create a JVM with more than 300G of max memory?
Can anyone please suggest a solution/workaround?
I can think of a couple of possible explanations:
Other applications on your system are using so much memory that there isn't 300GB available right now.
There could be a resource limit on the per-process memory size. You can check this using ulimit. (Note that according to this bug, you will get the error message if the per-process resource limit stops the JVM allocating the heap regions.)
It is also possible that this is an "overcommit" issue; e.g. if your application is running in a virtual machine and the system as a whole cannot meet the demand because there is too much competition from other virtual machines.
A couple of the other ideas suggested are (IMO) unlikely:
Switching the JRE is unlikely to make any difference. I've never heard of or seen arbitrary memory limits in specific 64-bit JVMs.
It is unlikely to be due to not having enough contiguous memory. Certainly contiguous physical memory is not required. The only possibility might be contiguous space on the swap device, but I don't recall that being an issue for typical Linux OSes.
Can anyone please suggest a solution/workaround?
Check the ulimit (see the example commands after this list).
Write a tiny C program that attempts to malloc lots of memory and see how much that can allocate before it fails.
Ask the system (or hypervisor) administrator for help.
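As a concrete starting point for the ulimit checks mentioned above, something like the following (run as the user that starts the JVM) shows the per-process limits that typically block a large heap reservation; which limit actually matters depends on your setup:
# show all per-process limits for the current user
ulimit -a
# the ones most likely to block a 300GB reservation (values in kB, or "unlimited")
ulimit -v      # max virtual memory (address space)
ulimit -d      # max data segment size
ulimit -m      # max resident set size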
(edited, see added section on swap space)
SHMMAX and SHMALL
Since you are using CentOS, you may have run into a similar issue with the SHMMAX and SHMALL kernel settings as described here for configuring the Oracle DB. Under that same link is an example calculation for getting and setting the correct SHMALL value.
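For reference, a rough sketch of how you would inspect and raise those settings with sysctl; the value is illustrative for a ~350GB segment, and whether SHMMAX/SHMALL matters for the JVM heap at all depends on whether it is backed by SysV shared memory (e.g. when using large pages):
# current values: shmmax is in bytes, shmall is in pages
sysctl kernel.shmmax kernel.shmall
# example: allow a single shared memory segment of about 350GB
sudo sysctl -w kernel.shmmax=375809638400
sudo sysctl -w kernel.shmall=91750400        # 350GB / 4kB page size
# add the same lines to /etc/sysctl.conf to make them persistent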
Contiguous memory
Some users have reported that not enough contiguous memory is available; others have said it is irrelevant.
I am not certain whether the JVM on CentOS requires a contiguous block of memory. According to SAS, fragmented memory can prevent your JVM from starting up with a large maximum (Xmx) or initial (Xms) memory setting, but other claims on the internet say it doesn't matter. I tried to prove or disprove that claim on my 48GB Windows workstation and managed to start the JVM with an initial and maximum setting of 40GB. I am pretty sure that no contiguous block of that size was available, but JVMs on different OSes may behave differently, because memory management differs per OS (i.e., Windows typically hides the physical addresses from individual processes).
Finding the largest contiguous memory block
Use /proc/meminfo to find the largest contiguous memory block available; see the value under VmallocChunk. Here's a guide and explanation of all the values. If the value you see there is smaller than 300GB, try a value that falls just under VmallocChunk.
However, this number is usually higher than the physically available memory (because it is a virtual memory value), so it may give you a false positive. It is the amount you can reserve, but once you start using it, it may require swapping. You should therefore also check the MemFree and Inactive values. Conversely, you can also look at the whole list and see which values do not exceed 300GB.
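A quick way to pull just those values out of /proc/meminfo (field names as on a stock CentOS 6 kernel):
# largest free virtually-contiguous chunk, plus free and inactive physical memory
grep -E 'VmallocChunk|MemFree|Inactive' /proc/meminfo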
Other tuning options you can check for 64 bit JVM
I am not sure why you seem to hit a memory limit at 300GB. For a moment I thought you might have hit a maximum number of pages. With the default page size of 4kB, 300GB gives 78,643,200 pages, which doesn't look like a well-known magic number. If, for instance, 2^24 were the maximum, then 16,777,216 pages, or 64GB, would be your theoretical allocatable maximum.
However, suppose for the sake of argument that you need larger pages (which is, as it turns out, better for the performance of large-memory Java applications); you should consult this manpage on JBoss, which explains how to use -XX:+UseLargePages and set kernel.shmmax (there it is again), vm.nr_hugepages and vm.hugetlb_shm_group (not sure the latter is required).
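A sketch of what that setup might look like, assuming 2MB huge pages (300GB / 2MB = 153,600 pages); the page count and the group value are illustrative and need to fit your system:
# reserve 153,600 huge pages of 2MB each (= 300GB)
sudo sysctl -w vm.nr_hugepages=153600
# allow the group of the user running the JVM to use hugetlb shared memory
sudo sysctl -w vm.hugetlb_shm_group=$(id -g)
# then start the JVM with large pages enabled
java -XX:+UseLargePages -Xms300g -Xmx300g -version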
Stress your system
Others have suggested this already as well. To find out whether the problem lies with the JVM or with the OS, you should stress-test it. One tool you could use is Stresslinux. In this tutorial you'll find some options you can use. Of particular interest is the following command:
stress --vm 2 --vm-bytes 300G --timeout 30s --verbose
If that command fails or locks up your system, you know that the OS is limiting the use of that amount of memory. If it succeeds, we can try to tweak the JVM so that it can use the available memory.
EDIT Apr6: check swap space
It is not uncommon for systems with very large amounts of internal memory to use little or no swap space. For many applications this may not be a problem, but the JVM requires the available swap space to be larger than the requested memory size. According to this bug report, the JVM will try to increase the swap space itself; however, as some answers in this SO thread suggest, the JVM may not always be capable of doing so.
Hence: check the currently available swap space with cat /proc/swaps or free and, if it is smaller than 300GB, follow the instructions on this CentOS manpage to increase the swap space for your system.
Note 1: we can deduce from bug report #4719001 that a contiguous block of available swap space is not a necessity. But if you are unsure, remove all swap space and recreate it, which should remove any fragmentation.
Note 2: I have seen several posts like this one reporting 0MB of swap space and still being able to run the JVM. That is probably because the JVM increases the swap space itself. It still doesn't hurt to increase the swap space by hand to find out whether it fixes your issue.
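If you do decide to add swap by hand, the usual recipe on CentOS looks roughly like this (the size is illustrative):
# see what is currently configured
swapon -s          # or: cat /proc/swaps, or: free -g
# create, protect and enable a 64GB swap file
sudo dd if=/dev/zero of=/swapfile bs=1M count=65536
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile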
Premature conclusion
I realize that none of the above is an out-of-the-box answer to your question. I hope it gives you some pointers, though, to what you can try to get your JVM working. You might also try other JVMs if the problem turns out to be a limit of the JVM you are currently using, but from what I have read so far, no such limit should be imposed for 64-bit JVMs.
That you get the error right on initialization of the JVM leads me to believe that the problem is not with the JVM but with the OS not being able to comply with the reservation of the 300GB of memory.
My own tests showed that the JVM can reserve all of the virtual memory and doesn't care about the amount of physical memory available. It would be odd if the virtual memory were lower than the physical memory, but the VmallocChunk value should give you a hint in that direction (it is usually much larger).
If you have a look at the FAQ section of the Java HotSpot VM, it's mentioned that on 64-bit VMs there are 64 address bits to work with, and hence the maximum Java heap size depends on the amount of physical memory and swap space present on the system.
If you calculate theoretically, you could address 2^64 = 18,446,744,073,709,551,616 bytes (16 exabytes), but the limitations above apply.
You have to use the -Xmx option to define the maximum heap size for the JVM. By default, Java uses 64MB scaled up by roughly 30%, i.e. about 83.2MB, on 64-bit JVMs.
I tried the command below on my machine and it seemed to work fine.
java -Xmx500g com.test.TestClass
I also tried to define the maximum heap in terabytes, but that doesn't work.
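To see which maximum the VM actually settles on, and to test a specific value without running a real application, something like this works (assuming a HotSpot-based JVM that supports -XX:+PrintFlagsFinal):
# print the heap size the JVM picked by default (MaxHeapSize is in bytes)
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize
# try a specific maximum; the command fails immediately if the heap cannot be reserved
java -Xmx500g -version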
Run ulimit -a as the user the JVM process runs as and verify that your kernel isn't limiting your maximum memory size. You may need to edit /etc/security/limits.conf.
According to this discussion, LSF does not pool node memory into a single shared space. You are using something else for that. Read that something's documentation, because it is possible it cannot do what you are asking it to do. In particular, it may not be able to allocate a single contiguous region of memory that spans all the nodes. Usually that's not necessary, as an application will make many calls to malloc. But the JVM, to simplify things for itself, wants to allocate (or reserve) a single contiguous region for the entire heap by effectively calling malloc just once. Or it could be something else related to whatever you are using to emulate a giant shared memory machine.

JVM allocates way more than necessary?

My Java heap is allocating around 123 MB. I need this to be less. I have a 1 GB limit and both programs running are servers; one runs at 953 MB. The server JAR I am trying to run should only take up 10 MB, or less. How can I make Ubuntu respond the same as the other OSes I have tested the JAR on? My code can be found on GitHub.
Java Version: JDK/JRE-7
Out-of-the-box Java on *nix can look a little scary when you just look at it via top. The java executable often puts up huge numbers under the VIRT column, like 900m. Why is my small Java program using 900m of RAM?
Actually, it's probably not using 900m of RAM. The JVM has told the OS "I might use this much memory... be prepared". But it's probably not actually using anywhere near that much physical RAM -- and if it's a small program, it'll never come anywhere near that. Any physical RAM that java is not actually using is still freely available to other processes on the system.
For a more accurate picture of how much physical RAM the java process is using, look under top's RES column. Though, a full discussion of *nix memory management and profiling Java is probably outside the scope of this answer. I'd encourage you to try Googling the topic and developing specific questions based on the material you find.
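As a rough sketch, this is how you could compare the reserved (virtual) and resident (physical) figures for your java processes on Linux:
# VSZ = virtual size (what VIRT shows), RSS = resident physical memory (what RES shows), both in kB
ps -o pid,vsz,rss,args -C java
# or watch them live in top for just the java processes
top -p $(pgrep -d, java)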
Most of the time your Java programs (and other programs running alongside them) will do just fine with Java's default memory settings. Sometimes you need to limit (or increase) the maximum amount of heap memory the JVM is allowed to allocate. This is the most commonly tuned Java memory setting, and it is usually set with the -Xmx command-line argument. You can read more about it here and here.
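For example, a minimal sketch of capping the heap for a small server jar (the jar name and sizes are made up for illustration):
# start the server with a 16MB initial heap and a 64MB cap
java -Xms16m -Xmx64m -jar yourserver.jar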
Sometimes it can be a little bit tricky figuring out where to modify java's command-line options if your Java program is being magically started for you, e.g., as a system service, or part of some larger script. Googling Xmx will probably get you started on the conventional way of modifying java arguments for that product.
For example Google search: ubuntu tomcat Xmx
Gives links that point us in the direction of /etc/default/tomcat6.

How to set Java program memory limit on HP-UX (-Xmx doesn't work!)?

In our scenario, we launch 20+ Java processes to handle our business. We find that each process eats 500M+ of memory, so in total we consume several GB of memory on the server, and the customer complains that their server becomes slow once we launch our processes.
I ran a test: even the simplest "HelloWorld" program on HP-UX eats 500M of memory! If I set -Xmx for it, it seems it can't be cut down to less than 320M. We actually hope that each of our processes consumes just 64M of memory.
So, does anyone know how to set the memory limit for a Java program to 64M-128M on HP-UX (Java 6)?
This looks relevant, I'd say. It discusses memory management and Java on HP-UX.
How are you measuring the amount of memory the Java processes are using? top and process lists aren't always telling you what you might think. It might be worth running jvisualvm (part of the Java 6 JDK) to see what the memory breakdown is. If you are running a trivial hello world program, then something is going wrong if it really is using this much memory.
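If the HP-UX JDK ships the standard HotSpot serviceability tools (treat this as an assumption; it usually does for Java 6), you can get a per-process heap breakdown from the command line; the pid below is a placeholder:
# show configured vs. actually-used heap for a running JVM
jmap -heap 12345
# or attach the GUI profiler bundled with the JDK
jvisualvm --openpid 12345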

Java using too much memory on Linux?

I was testing the amount of memory Java uses on Linux. When just starting up an application that does absolutely NOTHING, it already reports that 11 MB is in use. When doing the same on a Windows machine, about 6 MB is in use. These were measured with the top command and the Windows Task Manager. The VM I use on Linux is 1.6.0_11, and the HotSpot VM is Server 11.2. Starting the application using -client did not change anything.
Why does java take this much memory? How can I reduce this?
EDIT: I measure memory using the Windows Task Manager and, on Linux, by opening a terminal and typing top.
Also, I am only interested in how to reduce this, or whether I even CAN reduce it. I'll decide for myself whether a couple of megs is a lot or not. It's just that the 5 MB difference between Windows and Linux is strange, and I want to know if I can achieve the same on Linux.
If you think 11MB is "too much" memory... you'd better avoid using Java entirely. Seriously, the JVM needs to do quite a lot of stuff (bytecode verification, GC, loading all the essential classes), and in an age where average desktop machines have 4GB of RAM, keeping the base JVM overhead (and memory use in general) very low is simply not a design priority.
If you need your app to run on an embedded system (pretty much the only case where 11 MB might legitimately be considered "too much"), then there are special JVMs designed for such systems that use less RAM - but at the cost of lacking many of the features and/or performance of mainstream JVMs.
You can control the heap size; otherwise default values will be used. Running java -X gives you an explanation of the meaning of these switches.
e.g.
JAVA_OPTS="-Xms6m -Xmx6m"
java ${JAVA_OPTS} MyClass
The question you might really be asking is, "Do the Windows Task Manager and Linux top report memory in the same way?" I'm sure there are others who can answer this better than I can, but I suspect you may not be doing an apples-to-apples comparison.
Try using the jconsole application on each respective machine to do a more granular inspection. You'll find jconsole in your JDK under the bin directory.
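For example (the pid is a placeholder), jconsole can attach directly to a local JVM:
# attach to a locally running JVM and inspect heap, non-heap and class loading figures
jconsole 12345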
There is also a very extensive discussion of java memory management at http://www.ibm.com/developerworks/linux/library/j-nativememory-linux/
The short answer is that how memory is allocated is a more complex question than a single figure at the top of a simplified end-user system utility can answer.
Both top and Task Manager report how much memory has been allocated to a process, not how much the process is actually using, so I would say it's not an apples-to-apples comparison. Regardless, in the age of gigs of memory, what's a couple of megs here or there on startup?
Linux and Windows are radically different operating systems and use RAM very differently. Windows kind of allocates as you go, and Linux caches more at once, and prepares for the future, so that the next operations are smooth.
This explanation is not quite right, but it's close enough for you.

From what Linux kernel/libc version is Java Runtime.exec() safe with regards to memory?

At work, one of our target platforms is a resource-constrained mini-server running Linux (kernel 2.6.13, custom distribution based on an old Fedora Core). The application is written in Java (Sun JDK 1.6_04). The Linux OOM killer is configured to kill processes when memory usage exceeds 160MB. Even during high load our application never goes over 120MB, and together with some other native processes that are active we stay well within the OOM limit.
However, it turns out that the Java Runtime.getRuntime().exec() method, the canonical way to execute external processes from Java, has a particularly unfortunate implementation on Linux that causes spawned child processes to (temporarily) require the same amount of memory as the parent process since the address space is copied. The net result is that our application gets killed by the OOM killer as soon as we do Runtime.getRuntime().exec().
We currently work around this by having a separate native program do all external command execution and we communicate with that program over a socket. This is less than optimal.
After posting about this problem online, I got some feedback indicating that this should not occur on "newer" versions of Linux, since they implement the POSIX fork() method using copy-on-write, presumably meaning it will only copy pages when they need to be modified, instead of copying the entire address space immediately.
My questions are:
Is this true?
Is this something in the kernel, the libc implementation or somewhere else entirely?
From what version of the kernel/libc/whatever is copy-on-write for fork() available?
This is pretty much the way *nix (and Linux) has worked since the dawn of time (or at least the dawn of MMUs).
To create a new process on *nixes you call fork(). fork() creates a copy of the calling process with all its memory mappings, file descriptors, etc. The memory mappings are done copy-on-write so (in optimal cases) no memory is actually copied, only the mappings. A following exec() call replaces the current memory mapping with that of the new executable. So, fork()/exec() is the way you create a new process and that's what the JVM uses.
The caveat is that with huge processes on a busy system, the parent might continue to run for a little while before the child exec()'s, causing a huge amount of memory to be copied because of the copy-on-write. In JVMs, memory can be moved around a lot by the garbage collector, which produces even more copying.
The "workaround" is to do what you've already done: create an external lightweight process that takes care of spawning new processes - or use a more lightweight approach than fork/exec to spawn processes (which Linux does not have - and which would anyway require a change in the JVM itself). POSIX specifies the posix_spawn() function, which in theory can be implemented without copying the memory mapping of the calling process - but on Linux it isn't.
Well, I personally doubt that this is true, since Linux's fork() has been done via copy-on-write since God knows when (at least the 2.2.x kernels had it, and that was somewhere in the 1990s).
Since the OOM killer is believed to be a rather crude instrument which is known to misfire (e.g., it does not necessarily kill the process that actually allocated most of the memory) and which should be used only as a last resort, it is not clear to me why you have it configured to fire at 160M.
If you want to impose a limit on memory allocation, then ulimit is your friend, not the OOM killer.
My advice is to leave the OOM killer alone (or disable it altogether), configure ulimits, and forget about this problem.
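For instance, a sketch of limiting address space instead of relying on the OOM killer (the user name and values, in kB, are illustrative):
# for the current shell and its children: cap virtual memory at ~160MB
ulimit -v 163840
# or persistently for a given user, in /etc/security/limits.conf:
#   appuser  hard  as  163840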
Yes, this absolutely is the case even with newer versions of Linux (we're on 64-bit Red Hat 5.2). I had been having a problem with slow-running subprocesses for about 18 months and could never figure it out until I read your question and ran a test to verify it.
We have a 32 GB box with 16 cores, and if we run the JVM with settings like -Xms4g and -Xmx8g and run subprocesses using Runtime.exec() with 16 threads, we are not able to run our process faster than about 20 process calls per second.
Try this with the simple "date" command in Linux about 10,000 times. If you add profiling code to watch what is happening, it starts off quickly but slows down over time.
After reading your question, I decided to try lowering my memory settings to -Xms128m and -Xmx128m. Now our process runs at about 80 process calls per second. The JVM memory settings were all I changed.
It doesn't seem to be sucking up memory in such a way that I ever ran out of memory, even when I tried it with 32 threads. It's just that the extra memory has to be allocated in some way, which causes a heavy startup (and maybe shutdown) cost.
Anyway, it seems like there should be a setting to disable this behavior in Linux, or maybe even in the JVM.
1: Yes.
2: This is divided into two steps: a system call like fork() is wrapped by glibc before it reaches the kernel. The kernel part of the system call is in kernel/fork.c.
3: I don't know. But I would bet that your kernel has it.
The OOM killer kicks in when low memory is exhausted on 32-bit boxes. I've never had an issue with this, but there are ways to keep the OOM killer at bay. This problem could be some OOM configuration issue.
Since you are using a Java application, you should consider moving to 64-bit Linux. That should definitely fix it. Most 32-bit apps can run on a 64-bit kernel with no issues, as long as the relevant libraries are installed.
You could also try the PAE kernel for 32-bit Fedora.
