JVM heap size issues - java

I have just started doing some R&D on JVM heap sizes and observed some strange behavior.
My system RAM size is 4 GB.
The OS is 64-bit Windows 7.
The Java version is 1.7.
Here are my observations:
I wrote a sample main program which starts and immediately goes into a wait state (a minimal sketch is shown below).
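Roughly (a minimal reconstruction, not my exact code):

public class WaitingMain {
    public static void main(String[] args) throws InterruptedException {
        // start and immediately go into a wait state, forever
        synchronized (WaitingMain.class) {
            WaitingMain.class.wait();
        }
    }
}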
When I run the program from Eclipse with the -Xms1024m -Xmx1024m parameters, I can run it only 3 times in parallel, i.e. only 3 parallel processes, since my RAM is only 4 GB. This is the behavior I expected.
Then I reran the same program with the -Xms512m -Xmx512m parameters, and I was able to run it 19 times in parallel from Eclipse, even though my RAM is only 4 GB. HOW?
I used the VisualVM tool to cross-check and I can see 19 process IDs, each allocated 512 MB, even though my RAM size is 4 GB. How is that possible?
I have googled it and gone through a lot of Oracle documentation about memory management and optimization, but those articles did not answer my question.
Thanks in advance.
Thanks,
Baji

I wonder why you couldn't start more than 3 processes with -Xmx1024M -Xms1024M. I tested it on my 4GB Linux system, and I could start at least 6 such processes; I didn't try to start more.
Anyway, this is because the program doesn't actually use that memory if you just start it with -Xmx1024M -Xms1024M. Those options only specify how large the heap may get; if you never allocate that amount of data, the memory never actually gets used.
"top" shows the virtual size to be more than 1024M for such processes, but they aren't actually using that memory: the address space has been reserved but never touched.

The physical memory in your PC and the memory that gets allocated do not have to match. In fact, your operating system moves data from RAM onto the HDD when your RAM is full. This is called swapping / paging.
That's why Windows has a swap file on the HDD and Unix has a swap partition.

Related

Java OutOfMemoryError in Ubuntu even though enough memory is available

I have a VPS with 20 GB RAM running Ubuntu. I am trying to allocate 10 GB of RAM as the maximum heap for Java using JAVA_TOOL_OPTIONS, but I can't. Please see the attached screenshots; they show available memory as 17 GB. It works when I set the heap to 7 GB, but the heap error occurs whenever it is > 7 GB. I have already installed GlassFish and allocated 3 GB to its cluster, and that works fine. So why am I not able to allocate more than 7 GB when I have 17 GB of RAM free?
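For reference, the heap setting was passed roughly like this (paraphrased; the exact value only appears in the screenshots):
export JAVA_TOOL_OPTIONS="-Xmx10g"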
(Screenshots attached: top, ulimit output, java -version, overcommit memory settings.)
My hardware is virtually hosted. Below is the configuration:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
If I had to guess, you don't have a contiguous block of RAM that's 7GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested bytes
For more information, see the evaluation section of bug 4697804.
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ Gb.
Why is it taking that much swap space? What needs to be done to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" figure is their total virtual memory footprint (9.1 + 5.4 + 8.9 + 6 + 6.9 already comes to more than 36 GB), and some of that will be code segments which may be shared.
They mostly seem to have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
This is true. But is this a normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am now confused about resident memory, virtual memory and physical memory.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason I wasn't able to allocate more than 5G is that privvmpages is set to 5G.
On Linux we can get that information with the command "cat /proc/user_beancounters".
Also, on a VPS the hosting provider will not let us customize this value; we would have to move to a larger virtual or dedicated server to increase this limit.
This was the root cause. However, Stephen's and Robin's explanations of virtual memory and RES memory were spot on. Thanks guys.

"Insufficient memory" on Jenkins server startup

First time user of Jenkins here, and having a bit of trouble getting it started. From the Linux shell I run a command like:
java -Xms512m -Xmx512m -jar jenkins.war
and consistently get an error like:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# An error report file with more information is saved as:
# /home/twilliams/.jenkins/hs_err_pid36290.log
First, the basics:
Jenkins 1.631
Running via the jetty embedded in the war file
OpenJDK 1.7.0_51
Oracle Linux (3.8.13-55.1.5.el6uek.x86_64)
386 GB ram
40 cores
I get the same problem with a number of other configurations as well: using Java Hotspot 1.8.0_60, running through Apache Tomcat, and using all sorts of different values for -Xms/-Xmx/-Xss and similar options.
I've done a fair bit of research and think I know what the problem is, but am at a loss as how to solve it. I suspect that I'm running into the virtual memory overcommit issue mentioned here; the relevant bits from ulimit:
--($:)-- ulimit -a
...
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 8388608
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) 8388608
...
If I double the virtual memory limit as root, I can start Jenkins, but I'd rather not run Jenkins as the root user.
Another workaround: a soon-to-be-decommissioned machine with 48 GB ram and 24 cores can start Jenkins without issue, though (I suspect) just barely: according to htop, its virtual memory footprint is just over 8 GB. I suspect, as a result, that the memory overcommit issue is scaling with the number of processors on the machine, presumably the result of Jenkins starting a number of threads proportional to the number of processors present on the host machine. I roughly captured the thread count via ps -eLf | grep jenkins | wc -l and found that the thread count spikes at around 114 on the 40 core machine, and 84 on the 24 core machine.
Does this explanation seem sound? Provided it does...
Is there any way to configure Jenkins to reduce the number of threads it spawns at startup? I tried the arguments discussed here but, as advertised, they didn't seem to have any effect.
Are there any VMs available that don't suffer from the overcommit issue, or some other configuration option to address it?
The sanest option at this point may be to just run Jenkins in a virtualized environment to limit the resources at its disposal to something reasonable, but at this point I'm interested in this problem on an intellectual level and want to know how to get this recalcitrant configuration to behave.
Edit
Here's a snippet from the hs_err log file, which guided my initial investigation:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
Here are a couple of command lines I tried, all with the same result.
Starting with a pitiful amount of heap space:
java -Xms2m -Xmx2m -Xss228k -jar jenkins.war
Starting with significantly more heap space:
java -Xms2048m -Xmx2048m -Xss1m -XX:ReservedCodeCacheSize=512m -jar jenkins.war
Room to grow:
java -Xms2m -Xmx1024m -Xss228k -jar jenkins.war
A number of other configurations were tried as well. Ultimately I don't think that the problem is heap exhaustion here; rather, the JVM is trying to reserve more virtual memory for itself (in which to store the heap, thread stacks, etc.) than the ulimit settings allow. Presumably this is the result of the overcommit issue linked earlier, such that if Jenkins is spawning 120 threads, it is erroneously trying to reserve 120x as much VM space as the master process originally occupied.
Having done what I could with the other options suggested in that log, I'm trying to figure out how to reduce the thread count in Jenkins to test the thread VM overcommit theory.
Edit #2
Per Michał Grzejszczak, this is an issue with the glibc distributed with Red Hat Enterprise Linux 6, as discussed here. The issue can be worked around by explicitly setting the environment variable MALLOC_ARENA_MAX, in my case export MALLOC_ARENA_MAX=2. Without this variable set, glibc will apparently create up to (8 x CPU core count) malloc arenas, each reserving 64M of virtual memory. My 40-core case would therefore have required northward of 10 gigs of virtual RAM (8 x 40 x 64M is about 20 GB), exceeding by itself the ulimit on my machine of 8 gigs. Setting this to 2 reduces the virtual memory consumption to around 128 megs.
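In other words, the fix amounts to setting the variable before launching Jenkins, e.g.:
export MALLOC_ARENA_MAX=2
java -Xms512m -Xmx512m -jar jenkins.war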
Jenkins' memory footprint is related more to the number and size of the projects it is managing than to the number of CPUs or the available memory. Jenkins should run fine on 1 GB of heap memory unless you have gigantic projects on it.
You may have misconfigured the JVM though. The -Xmx and -Xms parameters govern the heap space the JVM can use: -Xmx is the limit for heap memory and -Xms is the starting heap size. The heap is a single memory area for the entire JVM, and you can easily monitor it with tools like JConsole or VisualVM.
On the other hand, -Xss is not related to the heap. It is the stack size allotted to every thread in the JVM process. As Java programs tend to create numerous threads, setting this parameter too big can prevent your program from launching; typically this value is in the range of 512 KB, and entering 512m instead makes it impossible for the JVM to start (see the sketch below). Make sure your settings do not contain any such mistakes (and post your memory config too).
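To see why an oversized -Xss matters, here is a rough sketch of mine (not from the original answer) that keeps starting idle threads until the JVM can no longer reserve stack space; the larger the -Xss, the sooner it fails. It is an experiment, not production code, so run it with care:

public class ThreadStackDemo {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park so the stack stays reserved
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            // typically "unable to create new native thread": native stack space, not heap
            System.out.println("Created " + count + " threads before: " + e.getMessage());
        }
    }
}

Compare, for example, java -Xss512k ThreadStackDemo with java -Xss8m ThreadStackDemo.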

java.lang.OutOfMemoryError: Java heap space in every 2-3 Hours

In our application we have both Apache Server (for the front end only) and JBoss 4.2 (for the business/backend). We are using Ubuntu 12 as the server OS. Our application repeatedly throws java.lang.OutOfMemoryError: Java heap space. (It throws OOMEs for an hour or so, then goes back to working normally for the next 2-3 hours, and then the pattern repeats.) Our Java memory settings are
-Xms512m -Xmx1024m
Our server has 6 GB of RAM physically. Please advise whether we need to increase the Java heap size and, if so, what the ideal size would be given 6 GB of physical RAM.
Are you sure you don't have memory leaks? Also, if you are using memory-hungry APIs such as POI for Office documents or iText for PDFs, make sure your code keeps the memory footprint low. You can use a profiler to see what exactly is happening. If you still need to increase the heap, increase it step by step until it reaches an appropriate value,
like
-Xms512m -Xmx1024m
then
-Xms512m -Xmx2048m
so on ...
I would check whether you have a memory leak, e.g. whether there are objects building up and not being freed.
You can do that with a profiler, e.g. VisualVM, or jmap -histo:live might be enough.
If you don't have a memory leak and the memory usage is legitimate, I would try increasing the maximum to the largest amount of memory you would want the JVM to use, e.g. perhaps 4 GB.
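For illustration, a minimal sketch of mine (not from the question) of the kind of leak that a profiler or jmap -histo:live would reveal: objects accumulating in a static collection that is never cleared, so the heap grows until an OutOfMemoryError.

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    private static final List<byte[]> CACHE = new ArrayList<byte[]>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new byte[1024 * 1024]); // 1 MB per iteration, never released
        }
    }
}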

setting up 64bit windows 7 for large JRE7 heap size

I have been trying to run Java with a 4 GB max and min heap size on a 64-bit Windows 7 machine, but when I check Task Manager I only see about 2 GB for java.exe. I read that there are Windows restrictions as well. How do I set up Windows 7 and JRE7 x64 so that I can run Java with a 4 GB heap size?
Thanks.
What parameters are you using?
http://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html
For example related to your case:
-Xms : sets initial Java heap size
-Xmx : sets maximum Java heap size
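For example, to request a fixed 4 GB heap up front you would launch with something like the following (yourapp.jar is just a placeholder):
java -Xms4096m -Xmx4096m -jar yourapp.jar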
If you are using the -Xmx parameter, then you will see 4G in your Task Manager ONLY IF your application really needs it. On the other hand, if you are using the -Xms parameter (in which case you also need to set -Xmx to an equal or larger value), then you should expect to see that value in Task Manager. So, only if you set -Xms4096M and -Xmx4096M and the JVM fails to start do you have an issue; if it starts normally, you do not have a problem.
Also, regardless of how much RAM you have (not an issue in your case since we are only talking about 4G), even the different editions of Windows 7 (64-bit) have different physical memory limits.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366778%28v=vs.85%29.aspx#physical_memory_limits_windows_7
A couple of times in my past experience, I have observed that a 64-bit OS can also have a 32-bit JVM installed, which leads to confusion.
Apart from this, I don't think you will be able to allocate the full 4 GB to the JVM; there are other programs that consume memory as well.
You can think of allocating 4 GB only if you are running on a machine with more than 6 GB.
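One quick way to rule out the 32-bit-JVM confusion mentioned above is to print the JVM's data model. This is a small sketch of my own; note that sun.arch.data.model is a Sun/Oracle-specific property and may not be set on other JVMs:

public class JvmBitness {
    public static void main(String[] args) {
        String model = System.getProperty("sun.arch.data.model");
        System.out.println("os.arch    = " + System.getProperty("os.arch"));
        System.out.println("data model = " + (model != null ? model + "-bit" : "unknown on this JVM"));
    }
}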

Java -Xmx option on Linux not limiting memory consumption

Using Oracle Java 1.7.0_05 on Ubuntu Linux 3.2.0-25-virtual, on an Amazon EC2 instance with 7.5 GB of memory, we start three instances of java, each using the switch -Xmx2000m.
We use the default Ubuntu EC2 AMI configuration of no swap space.
After running these instances for some weeks, one of them freezes -- possibly out of memory. But my question isn't about finding our memory leak.
When we try to restart the app, java gives us a message that it cannot allocate the 2000 mb of memory. We solved the problem by rebooting the server.
In other words, 2000 + 2000 + 2000 > 7500?
We have seen this issue twice, and I'm sorry to report we don't have good diagnostics. How could we run out of space with only two remaining java processes each using a max of 2000 mb? How should we proceed to diagnose this problem the next time it occurs? I wish I had a "free -h" output, taken while we cannot start the program, to show here.
TIA.
-Xmx sets the maximum size of the JVM heap, not the maximum size of the Java process. The process allocates more memory besides the heap available to the application: the JVM's own memory, the permanent generation, thread stacks, whatever is allocated inside JNI libraries, etc.
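As a small illustration (my own sketch, not from the answer), the JVM itself reports heap and non-heap memory separately through the standard management API, which makes the distinction easy to see:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapVsNonHeap {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap:     " + mem.getHeapMemoryUsage());    // bounded by -Xmx
        System.out.println("Non-heap: " + mem.getNonHeapMemoryUsage()); // permgen, code cache, ...
    }
}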
There may be other processes using memory, which is why the JVM cannot be started with 2G. If you really need that much memory for each of the 3 Java processes and you only have 7.5 GB in total, you might want to change your EC2 configuration to have more memory. You're only leaving 1.5 GB for everything else, including the kernel, Oracle, etc.
