I made a server-side application that uses 18 MB of non-heap memory and around 6 MB of heap out of a 30 MB maximum. I set the heap limits with -Xms and -Xmx. The problem is that when I run the program on an Ubuntu server it takes around 170 MB instead of 18 + 30, or at most 100 MB. Does anyone know how to keep the VM under 100 MB?
The JVM uses memory beyond the heap: thread stacks, shared libraries and other native allocations. The shared libraries can be relatively large, but they don't use real memory unless they are actually used. If you run several JVMs, the shared libraries are shared between them, although you cannot see this in the per-process information.
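If you want to see where the resident memory of the Java process actually goes on Ubuntu, one option is to dump the process memory map and compare the mapped size against the RSS column; shared libraries that are mapped but barely touched show up as large mappings with small RSS. The pid below is a placeholder:

pmap -x <java-pid>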
In a modern PC, 1 GB of memory costs around $100, so shaving off every last MB may not be as important as it used to be.
In response to your comment:

"I have made some tests with JConsole and VisualVM: -Xmx 40 MB, -Xms 25 MB. The problem is that I am restricted to 512 MB since it's a VPS and I can't pay for more right now. The other thing is that with 100 MB each I could run at least 3 processes."
The problem is that you are going about it the wrong way. Don't try to get your VM super small so you can run 3 VMs. Combine everything into 1 VM. If you have 512 MB of memory, then make 1 VM with 256 MB of heap and let it do everything. You can have tens or hundreds of threads in a single VM. In most cases, this will perform better and use less total memory than trying to run many small VMs.
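As a rough illustration only (the class name and values are placeholders to be tuned, not recommendations), a single combined server on a 512 MB VPS might be launched along these lines, with a smaller per-thread stack to keep the non-heap side down:

java -server -Xms64m -Xmx256m -Xss256k com.example.CombinedServer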
Related
I am attempting to run a Java application on a cluster computing environment (IBM LSF running CentOS release 6.2 Final) that can provide me with up to 1TB of RAM space.
I could create a JVM with up to 300GB of maximum memory (Xmx), although I need more than that (I can provide details, if requested).
However, it seems to be impossible to create a JVM with more than 300GB of maximum memory using the Xmx option. To be more specific, I get the classic error message:
Error occurred during initialization of VM.
Could not reserve enough space for object heap.
The details of my (64-bit) JVM are below:
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
I've also tried with a Java 7 64-bit JVM but I've had exactly the same problem.
Moreover, I tried to create a JVM to run a HelloWorld.jar, but still JVM creation fails if you ask for more than -Xmx300G, so I don't think it has anything to do with the specific application.
Does anyone have any idea why I cannot create a JVM with more than 300G of max memory?
Can anyone please suggest a solution/workaround?
I can think of a couple of possible explanations:
Other applications on your system are using so much memory that there isn't 300 GB available right now.
There could be a resource limit on the per-process memory size. You can check this using ulimit. (Note that according to this bug, you will get the error message if the per-process resource limit stops the JVM allocating the heap regions.)
It is also possible that this is an "over-commit" issue; e.g. if your application is running in a virtual machine and the system as a whole cannot meet the demand because there is too much competition from other virtual machines.
A couple of the other ideas suggested are (IMO) unlikely:
Switching the JRE is unlikely to make any difference. I've never heard of or seen arbitrary memory limits in specific 64-bit JVMs.
It is unlikely to be due to not having enough contiguous memory. Certainly contiguous physical memory is not required. The only possibility might be contiguous space on the swap device, but I don't recall that being an issue for typical Linux OSes.
Can anyone please suggest a solution/workaround?
Check the ulimit.
Write a tiny C program that attempts to malloc lots of memory and see how much it can allocate before it fails (a minimal sketch follows this list).
Ask the system (or hypervisor) administrator for help.
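For the C-program suggestion above, a minimal sketch (the file name probe.c is hypothetical; note that with Linux over-commit enabled, malloc can succeed even when the memory cannot actually be backed, so treat the result as the amount that can be reserved, which is what the JVM needs at startup):

/* probe.c - see how much memory a single process can reserve with malloc.
   Build with: gcc -O2 -o probe probe.c */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t GB = 1024UL * 1024UL * 1024UL;
    size_t largest = 0;
    /* try reservations in 10 GB steps, up to 1 TB */
    for (size_t n = 10; n <= 1024; n += 10) {
        void *p = malloc(n * GB);
        if (p == NULL) {
            printf("malloc of %zu GB failed\n", n);
            break;
        }
        largest = n;
        free(p);
    }
    printf("largest successful reservation: %zu GB\n", largest);
    return 0;
}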
(edited, see added section on swap space)
SHMMAX and SHMALL
Since you are using CentOS, you may have run into a similar issue with the SHMMAX and SHMALL kernel settings, as described here for configuring the Oracle DB. That same link includes an example calculation for getting and setting the correct SHMALL value.
Contiguous memory
Some users have reported that not enough contiguous memory is available; others have said that it is irrelevant.
I am not certain whether the JVM on CentOS requires a contiguous block of memory. According to SAS, fragmented memory can prevent your JVM from starting up with a large -Xmx or -Xms setting, but other claims on the internet say it doesn't matter. I tried to prove or disprove that claim on my 48 GB Windows workstation and managed to start the JVM with an initial and maximum setting of 40 GB. I am pretty sure that no contiguous block of that size was available, but JVMs may behave differently on different OSes, because memory management differs per OS (e.g., Windows typically hides the physical addresses from individual processes).
Finding the largest contiguous memory block
Use /proc/meminfo to find the largest contiguous memory block available; see the value under VmallocChunk. Here's a guide and explanation of all the values. If the value you see there is smaller than 300 GB, try a value that falls just under VmallocChunk.
However, this number is usually higher than the physically available memory (because it is the available virtual memory), so it may give you a false positive. It is the value you can reserve, but once you start using it, it may require swapping. You should therefore also check the MemFree and Inactive values. Alternatively, look at the whole list and see which values do not exceed 300 GB.
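For example, a quick way to pull just those fields (note that the field is spelled VmallocChunk in /proc/meminfo):

grep -E 'MemFree|Inactive|Vmalloc' /proc/meminfo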
Other tuning options you can check for a 64-bit JVM
I am not sure why you seem to hit a memory limit at 300 GB. For a moment I thought you might have hit a maximum number of pages. With the default page size of 4 kB, 300 GB gives 78,643,200 pages, which doesn't look like a well-known magic number. If, for instance, 2^24 were the maximum, then 16,777,216 pages, or 64 GB, would be your theoretical allocatable maximum.
However, suppose for the sake of argument that you need larger pages (which is, as it turns out, better for the performance of large-memory Java applications). In that case you should consult this manpage on JBoss, which explains how to use -XX:+UseLargePages and set kernel.shmmax (there it is again), vm.nr_hugepages and vm.hugetlb_shm_group (not sure the latter is required).
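A hedged sketch of what that configuration might look like, assuming 2 MB huge pages (a 300 GB heap needs roughly 300 * 1024 / 2 = 153,600 pages, so the count below includes some headroom; the exact values are placeholders, and the JBoss page remains the authoritative reference):

sysctl -w vm.nr_hugepages=160000
sysctl -w kernel.shmmax=322122547200   # 300 GB in bytes
java -XX:+UseLargePages -Xmx300g -version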
Stress your system
Others have suggested this already as well. To find out whether the problem lies with the JVM or with the OS, you should stress-test the machine. One tool you could use is Stresslinux. In this tutorial you'll find some options you can use. Of particular interest is the following command:
stress --vm 2 --vm-bytes 300G --timeout 30s --verbose
If that command fails, or locks your system, you know that the OS is limiting the use of that amount of memory. If it succeeds, we should try to tweak the JVM such that it can use the available memory.
EDIT Apr6: check swap space
It is not uncommon for systems with very large amounts of internal memory to use little or no swap space. For many applications this may not be a problem, but the JVM requires the available swap space to be larger than the requested memory size. According to this bug report, the JVM will try to increase the swap space itself; however, as some answers in this SO thread suggested, the JVM may not always be capable of doing so.
Hence: check the currently available swap space with cat /proc/swaps or free and, if it is smaller than 300 GB, follow the instructions on this CentOS manpage to increase the swap space for your system.
Note 1: we can deduce from bug report #4719001 that a contiguous block of available swap space is not a necessity. But if you are unsure, remove all swap space and recreate it, which should remove any fragmentation.
Note 2: I have seen several posts like this one reporting 0 MB of swap space and still being able to run the JVM. That is probably because the JVM increases the swap space itself. It still doesn't hurt to try increasing the swap space by hand to find out whether that fixes your issue.
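If you do decide to add swap by hand, a typical sequence on CentOS looks like the following (the path and the 64 GB size are placeholders; backing a 300 GB heap would need a much larger file or a dedicated partition):

dd if=/dev/zero of=/swapfile bs=1M count=65536
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -g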
Premature conclusion
I realize that none of the above is an out-of-the-box answer to your question, but I hope it gives you some pointers to what you can try to get your JVM working. You might also try other JVMs if the problem turns out to be a limit of the JVM you are currently using, but from what I have read so far, no such limit should be imposed by 64-bit JVMs.
That you get the error right at initialization of the JVM leads me to believe that the problem is not with the JVM, but with the OS not being able to satisfy the reservation of 300 GB of memory.
My own tests showed that the JVM can access all of the virtual memory and doesn't care about the amount of physical memory available. It would be odd if the virtual memory were lower than the physical memory, but the VmallocChunk value should give you a hint in that direction (it is usually much larger).
If you have a look at the FAQ section of the Java HotSpot VM, it's mentioned that on 64-bit VMs there are 64 address bits to work with, and hence the maximum Java heap size depends on the amount of physical memory and swap space present on the system.
Theoretically, 64 address bits give you 18,446,744,073,709,551,616 bytes (16 EiB) of addressable memory, but the limitations above apply in practice.
You have to use the -Xmx option to define the maximum heap size for the JVM. By default, Java uses 64 MB + 30% = 83.2 MB as the maximum heap on 64-bit JVMs.
I tried the command below on my machine and it seemed to work fine.
java -Xmx500g com.test.TestClass
I also tried to define the maximum heap in terabytes, but that didn't work.
Run ulimit -a as the JVM process's user and verify that your kernel isn't limiting your maximum memory size. You may need to edit /etc/security/limits.conf.
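If the address-space limit does turn out to be the problem, raising it could look like the example below (the user name is a placeholder; "as" is the address-space limit, in KB, documented in limits.conf(5)):

# /etc/security/limits.conf
lsfuser  soft  as  unlimited
lsfuser  hard  as  unlimited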
According to this discussion, LSF does not pool node memory into a single shared space. You are using something else for that. Read that something's documentation, because it is possible it cannot do what you are asking it to do. In particular, it may not be able to allocate a single contiguous region of memory that spans all the nodes. Usually that's not necessary, as an application will make many calls to malloc. But the JVM, to simplify things for itself, wants to allocate (or reserve) a single contiguous region for the entire heap by effectively calling malloc just once. Or it could be something else related to whatever you are using to emulate a giant shared memory machine.
I have a Java SE desktop application which uses a lot of memory (1.1 GB would be desired). All target machines (Win 7, Win Vista) have plenty of physical memory (at least 4 GB, most of them have more). There is also enough free memory.
Now, when the machines have some uptime and a lot of programs were started and terminated, the memory becomes fragmented (this is what I assume). This leads to the following error when the JVM is started:
JVM creation failed
Error occurred during initialization of VM
Could not reserve enough space for object heap
Even closing all running programs doesn't help in such a situation (despite Task Manager and other tools reporting enough free memory). The only thing that helps is to reboot the machine and fire up the Java application as one of the first programs launched.
As far as I've investigated, the Oracle VM requires one contiguous chunk of memory.
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
I start my JVM with the following arguments:
-J-client -J-Xss2m -J-Xms512m -J-Xmx1100m -J-XX:PermSize=64m -J-Dsun.zip.disableMemoryMapping=true
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
Use an OS which doesn't get fragmented virtual memory, e.g. 64-bit Windows or any version of UNIX.
BTW, it is hard for me to imagine how this is possible in the first place, but I know it to be the case. Each process has its own virtual memory, so its arrangement of virtual memory shouldn't depend on anything which is already running or has run before.
I believe it might be a holdover from the MS-DOS TSR days. Loaded shared libraries are given absolute addresses in memory (added at the end of the signed address space, 2 GB; the high half is reserved for the OS and the last 512 MB for the BIOS), meaning they must use the same address range in every program they are used in. Over time, the maximum address is determined by the lowest shared library loaded or used (I don't know which one, but I suspect the lowest loaded).
Using Oracle Java 1.7.0_05 on Ubuntu Linux 3.2.0-25-virtual, on an Amazon EC2 instance with 7.5 GB of memory, we start three instances of Java, each using the switch -Xmx2000m.
We use the default Ubuntu EC2 AMI configuration of no swap space.
After running these instances for some weeks, one of them freezes, possibly because it is out of memory. But my question isn't about finding our memory leak.
When we try to restart the app, Java gives us a message that it cannot allocate the 2000 MB of memory. We solved the problem by rebooting the server.
In other words, 2000 + 2000 + 2000 > 7500?
We have seen this issue twice, and I'm sorry to report we don't have good diagnostics. How could we run out of space with only two remaining Java processes, each using a max of 2000 MB? How should we proceed to diagnose this problem the next time it occurs? I wish I had "free -h" output, taken while we could not start the program, to show here.
TIA.
-Xmx sets the maximum size of the JVM heap, not the maximum size of the Java process, which allocates more memory besides the heap available to the application: its own memory, the permanent generation, whatever is allocated inside JNI libraries, etc.
There may be other processes using memory, and therefore the JVM cannot be started with 2 GB. If you really need that much memory for each of 3 Java processes and you only have 7.5 GB in total, you might want to change your EC2 configuration to have more memory. You're only leaving 1.5 GB for everything else, including the kernel, Oracle, etc.
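For the next time it happens, a few standard Linux commands captured while the restart is failing would show where the memory went (these are generic tools, not specific to this setup):

free -m
ps aux --sort=-rss | head -n 15
grep -Ei 'memfree|cached|commit' /proc/meminfo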
My Linux server needs to be able to handle 30+ Eclipse instances for developers. I did a quick test running 10 Eclipse instances. The Java process associated with each Eclipse instance starts at around 200 MB of RSS and increases to around 550 MB as more projects are loaded.
But the Java process doesn't seem to release memory after closing/deleting all projects within the Eclipse instances; I still see it using over 550 MB of RSS.
How can I change the Eclipse or Java settings so that the memory footprint is reduced when developers close projects or are idle for a while?
Thanks
You may want to experiment with these (and other) JVM tuning options to make the JVM less reluctant to return memory to the OS:
-XX:MaxHeapFreeRatio Maximum percentage of heap free after GC to avoid shrinking. Default is 70.
-XX:MinHeapFreeRatio Minimum percentage of heap free after GC to avoid expansion. Default is 40.
However, I suspect that you won't see the Eclipse process shrink to anywhere near its initial size, since Eclipse is a huge, complex application that probably lazy-loads (but does not unload, once used) a lot of classes and associated data structures.
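If you do experiment with those flags, one way to check whether the heap capacity actually shrinks after a GC is to watch it with jstat (the pid is a placeholder; 5000 is the sampling interval in milliseconds):

jstat -gccapacity <eclipse-java-pid> 5000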
I've never seen Java release memory.
I don't think you will get any value out of trying to get it to release memory with Eclipse. I've watched that little memory counter for YEARS and never once seen the allocated memory drop.
You might try one of these.
After each session, exit the JVM and restart.
Set your -Xmx lower.
Separate your instances into categories with high -Xmx and low -Xmx and let the user determine which one he wants.
As a side thought, if it really mattered to you, you MIGHT be able to run multiple Eclipse instances under one VM. It would probably be WAY too much work (man-weeks to man-years), but if you could get it right you could reduce overhead by roughly 150-200 MB per instance. The disadvantage would be that a VM crash (pretty rare these days) would kill everyone.
Testing this theory would be a matter of calling eclipse's main from within an existing JVM and trying to get it to display somewhere useful. The rest of the man-year is spent trying to figure out where they used evil static variables or singletons and changing them to something else.
Switch the Java to use the G1 garbage collector with the HeapFreeRatio parameters. Use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now, when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, the memory will be released back to the operating system.
I would suggest checking on garbage collection; setting the right options or even forcing GC periodically might increase the time until Eclipse's memory usage grows high.
The following link might be useful: http://www.eclipsezone.com/eclipse/forums/t93757.html
We have Glassfish application server running in Linux servers.
Each Glassfish installation hosts 3 domains. Each domain has a JVM configuration such as -Xms1g and -Xmx2g. That means that if all three domains are running at maximum heap, the server should be able to allocate a total of 6 GB to the JVMs.
With that math, each of our servers has 8 GB of RAM (a 2 GB buffer).
First of all, is this a good approach? I did not think so, because when we analyzed memory utilization on this server over the past few months, it was only up to 1 GB.
Now there are requests to add an additional domain to these servers. Does that mean adding an additional 2 GB of RAM just to be safe, or, based on the trend, continuing with whatever memory the server has?
A few rules of thumb:
You really want the sum of your -Xmx values plus the RAM needed for everything else that must run on the box constantly (including the OS) to be lower than the physical RAM available (a worked example for this setup follows these points). Otherwise, something will be swapped to disk, and when that "something" needs to run, things will slow down dramatically. If things go constantly in and out of swap, nothing will perform.
You might be able to get by with a lower -Xmx on some of your application servers (your question seems to imply some of your JVMs have more RAM allocated than they need). I believe Tomcat can start with as little as 64 MB of -Xmx, but many applications will run out of memory pretty quickly with that setup. Tuning your memory allocation and the garbage collector might be worth it. Frankly, I'm not very up to date with GC performance (I haven't had to tune anything to get decent performance in 4-5 years), so I don't know how valuable that will be. GC pauses used to be bad, and bigger heaps meant longer pauses.
Any memory you spare will not be "wasted". The OS will use it for disk cache and that might be a benefit.
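As a rough worked example for the setup in the question (the per-JVM overhead figure is an assumption): three domains at -Xmx2g account for 6 GB of potential heap; allow a few hundred MB of non-heap overhead per JVM plus roughly 1 GB for the OS and you are already close to the 8 GB of physical RAM. A fourth domain at -Xmx2g would push the sum of the -Xmx values alone to 8 GB, so by the first rule of thumb you would either add RAM or lower the -Xmx of the existing domains, which the observed ~1 GB utilization suggests is feasible.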
In any case, the answer is normally: you need to run tests. Being able to run stress tests is really invaluable, and I suggest you spend some time writing one and running it. It will allow you to make educated decisions in this matter.