Java -Xmx option on Linux not limiting memory consumption - java

Using Oracle Java 1.7.0_05 on Ubuntu Linux 3.2.0-25-virtual, on an Amazon EC2 instance with 7.5 GB of memory, we start three instances of Java, each using the switch -Xmx2000m.
We use the default Ubuntu EC2 AMI configuration of no swap space.
After running these instances for some weeks, one of them freezes -- possibly out of memory. But my question isn't about finding our memory leak.
When we try to restart the app, Java gives us a message that it cannot allocate the 2000 MB of memory. We solved the problem by rebooting the server.
In other words, 2000 + 2000 + 2000 > 7500?
We have seen this issue twice, and I'm sorry to report we don't have good diagnostics. How could we run out of space with only two remaining Java processes, each using a max of 2000 MB? How should we proceed to diagnose this problem the next time it occurs? I wish I had a "free -h" output, taken while we could not start the program, to show here.
TIA.

-Xmx sets the maximum size of the JVM heap, not the maximum size of the Java process, which allocates more memory beyond the heap available to the application: the JVM's own memory, the permanent generation, memory allocated inside JNI libraries, thread stacks, etc.
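As a concrete illustration, the heap ceiling and the separately accounted non-heap areas can be queried from inside a running JVM via the standard java.lang.management API; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapVsProcess {
    public static void main(String[] args) {
        // -Xmx caps only this value: the maximum Java heap size.
        long heapMax = Runtime.getRuntime().maxMemory();

        // Non-heap areas (PermGen/metaspace, code cache, ...) are
        // accounted separately and are NOT limited by -Xmx.
        MemoryMXBean mx = ManagementFactory.getMemoryMXBean();
        MemoryUsage nonHeap = mx.getNonHeapMemoryUsage();

        System.out.printf("heap max:      %d MB%n", heapMax / (1024 * 1024));
        System.out.printf("non-heap used: %d MB%n", nonHeap.getUsed() / (1024 * 1024));
        // Thread stacks (-Xss), JNI allocations and direct buffers add
        // further native memory on top of both of these figures.
    }
}
```

Run with -Xmx2000m, this prints a heap max of roughly 2000 MB, while the process footprint seen in top will still be larger.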

There may be other processes using memory, which is why the JVM cannot be started with 2G. If you really need that much memory for each of 3 Java processes and you only have 7.5 GB total, you might want to change your EC2 configuration to have more memory. You're leaving only 1.5 GB for everything else, including the kernel, Oracle, etc.

Related

Java OutofMemory Error in Ubuntu Even if enough memory available

I have a VPS with 20GB RAM running Ubuntu. I am trying to allocate 10GB RAM as the maximum heap for Java using JAVA_TOOL_OPTIONS, but I couldn't. Please see the attached screenshots. They show available memory as 17GB. It works when I try to set it to 7GB, but a heap error occurs whenever it is > 7GB. I have already installed GlassFish and allocated 3GB to its cluster, and that works fine. So why am I not able to allocate more than 7GB when I have 17GB RAM free?
(Screenshots attached: top output, ulimits, java -version, overcommit memory settings.)
My Hardware is Virtual Hosted. Below is the configuration
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
If I had to guess, you don't have a contiguous block of RAM that's 7GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested &lt;size&gt; bytes for &lt;reason&gt;. Out of swap space?
For more information, see the evaluation section of bug 4697804.
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ GB.
Why is it taking that much swap space? What needs to be done to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" is their total virtual memory footprint, and some of that will be code segments which may be shared.
They mostly seem to have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket, and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
This is true. But is this a normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am confused with Resident memory, virtual memory and physical memory now.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason why I wasn't able to allocate more than 5G is because of the fact that privvmpages is set to 5G.
We can get that information on Linux with the command "cat /proc/user_beancounters".
Also, on a VPS the hosting provider will not allow us to customize this value. We have to go for a larger virtual server or a dedicated server to increase this limit.
This was the root cause. However, Stephen's and Robin's explanations of virtual memory and RES memory were spot on. Thanks, guys.

JVM heap size issues

I have just started R&D on JVM heap size and observed some strange behavior.
My system RAM size is 4 GB
OS is 64-bit windows 7
Java version is 1.7
Here is the observations:
I wrote a sample main program which starts and immediately goes into a wait state.
When I run the program from Eclipse with -Xms1024m -Xmx1024m parameters, I can run it 3 times in parallel from Eclipse, i.e. only 3 parallel processes, as my RAM is only 4GB. This is the expected behavior.
Then I reran the same program with -Xms512m -Xmx512m parameters, and I was able to run it 19 times in parallel from Eclipse, even though my RAM is 4GB. HOW?
I used the VisualVM tool to cross-check, and I can see 19 process IDs, each allocated 512m, even though my RAM size is 4GB. But HOW?
I have googled it and gone through a lot of Oracle documentation about memory management and optimization, but those articles did not answer my question.
Thanks in advance.
Thanks,
Baji
I wonder why you couldn't start more than 3 processes with -Xmx1024M -Xms1024M. I tested it on my 4GB Linux system, and I could start at least 6 such processes; I didn't try to start more.
Anyway, this is due to the fact that the program doesn't actually use that memory if you just start it with -Xmx1024M -Xms1024M. These just specify the maximum size of the heap. If you don't allocate that amount of data, this memory won't actually get used.
"top" shows the virtual set size to be more than 1024M for such processes, but the process isn't actually using that memory. It has just allocated address space but never touched it.
The actual memory in your PC and the memory that is allocated need not match each other. In fact, your operating system moves data from RAM onto the HDD when your RAM is full. This is called swapping / RAM paging.
That's why Windows has the swap file on the HDD and Unix has a swap partition.

java.lang.OutOfMemoryError: Java heap space in every 2-3 Hours

In our application we have both Apache Server (for the front end only) and JBoss 4.2 (for the business / backend end). We are using Ubuntu 12 as the server OS. Our application repeatedly throws java.lang.OutOfMemoryError: "Java heap space". (It throws OOMEs for an hour or so, then goes back to working normally for the next 2-3 hours, and then the pattern repeats.) Our Java memory settings are
-Xms512m -Xmx1024m
Our server physically has 6 GB of RAM. Please advise: do we need to increase the Java heap size? If yes, what would be the ideal size considering the physical 6GB of RAM?
Are you sure you don't have memory leaks? Also, if you are using a memory-hungry API like POI for documents or iText for PDFs, make sure your code keeps the memory footprint low. You can use a profiler to see what exactly is happening. If you still need to increase the heap, increase it step by step until it reaches an appropriate value,
like
-Xms512m -Xmx1024m
then
-Xms512m -Xmx2048m
so on ...
I would check whether you have a memory leak e.g. are there objects building up and not being freed.
You can do that with a profiler e.g. visualvm or jmap -histo:live might be enough.
If you don't have a memory leak and the memory usage is valid, I would try increasing the maximum to the largest amount of memory you would want the JVM to use, e.g. perhaps 4 GB.
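A toy illustration of the kind of leak worth ruling out first (the class and sizes here are made up for demonstration): a static collection that only ever grows keeps every entry strongly reachable, so the GC can never reclaim it, and the heap fills up over hours of traffic exactly like the recurring OOME pattern described above:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // A "cache" that is only ever appended to is a classic heap leak:
    // every entry stays strongly reachable forever.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB retained per "request"
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("retained entries: " + CACHE.size());
        // prints: retained entries: 1000
        // In jmap -histo:live output, a leak like this shows up as
        // byte[] counts climbing steadily instead of plateauing
        // after each GC cycle.
    }
}
```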

How do you deal with Java applications on the client requiring a lot of memory ("-J-Xmx"?

I have a Java SE desktop application which uses a lot of memory (1.1 GB would be desired). All target machines (Win 7, Win Vista) have plenty of physical memory (at least 4GB, most of them more). There is also enough free memory.
Now, when the machines have some uptime and a lot of programs were started and terminated, the memory becomes fragmented (this is what I assume). This leads to the following error when the JVM is started:
JVM creation failed
Error occurred during initialization of VM
Could not reserve enough space for object heap
Even closing all running programs doesn't help in such a situation (despite Task Manager and other tools reporting enough free memory). The only thing that helps is to reboot the machine and fire up the Java application as one of the first programs launched.
As far as I've investigated, the Oracle VM requires one contiguous chunk of memory.
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
I start my JVM with the following arguments:
-J-client -J-Xss2m -J-Xms512m -J-Xmx1100m -J-XX:PermSize=64m -J-Dsun.zip.disableMemoryMapping=true
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
Use an OS which doesn't get fragmented virtual memory. e.g. 64-bit windows or any version of UNIX.
BTW, it is hard for me to imagine how this is possible in the first place, but I know it to be the case. Each process has its own virtual memory, so its arrangement of virtual memory shouldn't depend on anything which is already running or has run before.
I believe it might be a hangover from the MS-DOS TSR days. Loaded shared libraries are given absolute addresses in memory (added at the end of the signed address space, 2 GB; the high half is reserved for the OS and the last 512 MB for the BIOS), meaning they must use the same address range in every program they are used in. Over time the maximum address is determined by the lowest shared library loaded or used (I don't know which, but I suspect the lowest loaded).

Java Memory Occupation

I made a server-side application that uses 18MB of non-heap and around 6MB of heap out of a max of 30MB. I set the heap maximum with -Xms and -Xmx. The problem is that when I run the program on Ubuntu Server it takes around 170MB instead of 18+30, or at least 100MB at most. Does anyone know how to keep the VM to only 100MB?
The JVM uses the heap plus other memory, like thread stacks and shared libraries. The shared libraries can be relatively large, but they don't use real memory unless they are actually used. If you run multiple JVMs, the shared libraries are shared between the processes, but you cannot see this in the process information.
In a modern PC, 1 GB of memory costs around $100, so shaving off every last MB may not be seen as being as important as it used to be.
In response to your comment:
I have made some tests with JConsole and VisualVM: Xmx 40MB, Xms 25. The problem is that I am restricted to 512MB since it's a VPS and I can't pay for more right now. The other thing is that with 100MB each I could have at least 3 processes running.
The problem is, you are going about it the wrong way. Don't try and get your VM super small so you can run 3 VMs. Combine everything into 1 VM. If you have 512 memory, then make 1 VM with 256MB of heap, and let it do everything. You can have 10s or 100s of threads in a single VM. In most cases, this will perform better and use less total memory than trying to run many small VMs.
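A sketch of that consolidation, using a plain ExecutorService (the worker bodies here are placeholders): three workers run as threads sharing one heap, one metaspace and one set of JVM data structures, instead of paying the fixed per-JVM overhead three times:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class OneJvmManyWorkers {
    public static void main(String[] args) throws InterruptedException {
        // Instead of three ~100 MB JVMs, one JVM hosts three workers
        // as threads; each extra thread costs roughly a stack (-Xss),
        // not a whole JVM.
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "worker " + id + " running in " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

On a 512MB VPS, this single process with e.g. -Xmx256m usually leaves far more headroom than three separate JVMs would.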
