I am trying to run the java command on a Linux server. It was running well, but today when I tried to run java I got this error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
My memory situation is:
root@vps [~]# free -m
              total       used       free
Mem:           8192        226       7965
-/+ buf:                   226       7965
Swap:             0          0          0
How can I solve this problem?
The machine did not have enough memory at that time to service the JVM's request for memory to start the program. I expect that you have 8 GB of memory in the machine and that you are using a 64-bit JVM.
I would suggest you add some swap space to the system to let it handle spikes in memory usage, and then figure out where the spike came from.
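If you have no swap configured (as the free output above suggests), a minimal sketch for adding a 2 GB swap file looks like this; the size and the path /swapfile are just placeholders, and you would add an /etc/fstab entry to make it permanent:
fallocate -l 2G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
free -m                      # the Swap line should now show 2048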
Which VM are you using? What is the maximum memory size you are trying to use?
If you are using a 32-bit JVM on Windows and requesting close to the maximum memory it can address on your system, the launch can be affected by memory fragmentation. You may have a similar problem.
Related
I have two Linux machines (both are VMs), one having 12GB of memory and the other having 8GB of memory.
I tried to start the same Java program on both machines with the maximum possible max heap size (using the -Xmx flag). These are the results I got:
12GB machine: 9460MB
8GB machine: 4790MB
If I specify a max heap size beyond these limits, I get the error below.
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
I checked the free memory on the two systems (using the free command) and got the following:
12GB machine: approximately 3GB free.
8GB machine: approximately 4GB free.
My question is: what determines the maximum max heap size a Java program can be started with without hitting the above error? (The system had sufficient free memory to allocate 1073741824 bytes when the program reported that error.)
I have found interesting comments in a JDK bug report. (The bug is filed against JDK 9, not 8; it says the bug was fixed in an 8.x version but does not give the minor build number.)
If virtual memory has been limited with "ulimit -v" and the server has a lot of RAM, then the JVM cannot start without extra command line arguments to the GC.
# After "ulimit -v", the JVM does not start with the default command line.
$ ulimit -S -v 4194304
$ java -version
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
Comments:
The problem seems to be that we must specify MALLOC_ARENA_MAX.
If I set the environment variable MALLOC_ARENA_MAX=4, then the jvm can start without any extra arguments.
I guess that this is not something that can be fixed from the jvm. If so we can close this bug.
When using "UseConcMarkSweepGC", the command line above does not work.
I have tried to add MaxMetaspaceSize=128m, but it does not help.
I am sure there is an argument that makes it work, but I have not found one.
Configuring the GC with limited virtual memory is not very user friendly.
Change the parameters as per your requirements and try this:
ulimit -S -v 4194304
java -XX:MaxHeapSize=512m -XX:InitialHeapSize=512m -XX:CompressedClassSpaceSize=64m -XX:MaxMetaspaceSize=128m -XX:+UseConcMarkSweepGC -version
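Alternatively, going by the bug comments above, capping the glibc malloc arenas lets the JVM start with its default settings under the same limit (a sketch, assuming a glibc-based Linux):
ulimit -S -v 4194304
export MALLOC_ARENA_MAX=4
java -version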
The memory you have available is the combination of free RAM plus swap space. It also depends on whether the system has overcommit enabled — if so, the kernel will allow programs to allocate more memory than is actually available (within reasonable limits), since programs often allocate more than they're really going to use.
Note that overcommit is enabled by default. To disable it, write 2 into /proc/sys/vm/overcommit_memory. (Strangely, a value of 0 does not mean "no overcommit".) But it's a good idea to read the overcommit documentation first.
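For example, a quick sketch of checking and changing the setting (requires root, and it is worth reading the kernel's overcommit-accounting documentation before flipping it on a live system):
cat /proc/sys/vm/overcommit_memory     # 0 = heuristic overcommit (default), 1 = always overcommit, 2 = don't overcommit
sysctl -w vm.overcommit_memory=2       # or: echo 2 > /proc/sys/vm/overcommit_memory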
I did some experiments with the clues given by ravindra and found that the maximum max heap size has a direct relationship with the total virtual memory available in the system.
Total virtual memory in the system can be found (in KB) with:
ulimit -v
Total virtual memory can be altered with:
ulimit -v <new amount in KB>
The maximum possible max heap size was approximately 2 GB less than the virtual memory limit. If you specify unlimited virtual memory using ulimit -v unlimited, you can specify any large value for the max heap size.
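A rough sketch of that experiment (the roughly 2 GB of overhead is what I observed; it will vary with JVM version, GC, metaspace and code cache settings):
ulimit -S -v 8388608     # cap this shell at 8 GB of virtual memory
java -Xmx6g -version     # starts: heap plus JVM overhead fits under the limit
java -Xmx7g -version     # likely fails with the metaspace/heap reservation error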
I have a VPS with 20GB RAM running Ubuntu. I am trying to allocate 10GB RAM as the maximum heap for Java using JAVA_TOOL_OPTIONS, but I can't. Please see the attached screenshots. They show available memory as 17GB. It works when I set it to 7GB, but the heap error occurs whenever it is > 7GB. I have already installed GlassFish and allocated 3GB to its cluster, and that works fine. So why am I not able to allocate more than 7GB when I have 17GB RAM free?
(Screenshots attached: top output, ulimit settings, java -version, and overcommit memory settings.)
My hardware is virtually hosted. Below is the configuration:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 16
Vendor ID: GenuineIntel
CPU family: 6
Model: 26
Stepping: 5
CPU MHz: 2266.802
BogoMIPS: 4533.60
Virtualization: VT-x
If I had to guess, you don't have a contiguous block of RAM that's 7GB, which does seem weird, but without knowing more about your VM's allocation it's hard to say.
Here's what Oracle has to say on the matter (http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_oom):
The VM prints "OutOfMemoryError" and exits. Increasing max heap size
doesn't help. What's going on?
The Java HotSpot VM cannot expand its heap size if memory is
completely allocated and no swap space is available. This can occur,
for example, when several applications are running simultaneously.
When this happens, the VM will exit after printing a message similar
to the following.
Exception java.lang.OutOfMemoryError: requested bytes
For more information, see the evaluation section of bug 4697804.
I think you may be out of swap space. When I add up the memory in the "virt" column, it comes to 40+ GB.
Why is it taking that much swap space? What needs to be done to fix this?
Well, according to top you are running:
Glassfish - 9.1G
MySQL daemon - 5.4G
Hudson - 8.9G
Nexus - 6G
Glassfish - 6.9G (2nd instance)
and sundry other stuff. The "virt" is their total virtual memory footprint, and some of that will be code segments which may be shared.
They mostly seem to have a small "res" (resident memory) at the moment, which is why there is so much free RAM. However, if a few of them sprang into life at the same time, the demand for RAM would skyrocket and the system might start to thrash.
My recommendation would be to move the Hudson and Nexus services to a separate VM. Or if that is not possible, increase the size of your swap space ... and hope that you don't thrash.
This is true. But is this a normal behaviour?
Yes.
is this how memory allocation works?
Yes. This is indeed how virtual memory works.
I am confused with Resident memory, virtual memory and physical memory now.
Rather than explain it in detail, I suggest you start by reading the Wikipedia page on virtual memory.
The reason I wasn't able to allocate more than 5G is that privvmpages is set to 5G.
We can get that information on Linux with the command "cat /proc/user_beancounters".
Also, on a VPS the hosting provider will not allow us to customize this value. We have to go for either a larger virtual server or a dedicated server to increase this limit.
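For example, a quick way to see the limit and how close you are to it (assuming an OpenVZ-style container where this file exists; the values are in 4 KB pages):
grep privvmpages /proc/user_beancounters    # columns: held, maxheld, barrier, limit, failcnt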
This was the root cause. However, Stephen's and Robin's explanations of virtual memory and RES memory were spot on. Thanks, guys.
First-time user of Jenkins here, having a bit of trouble getting it started. From the Linux shell I run a command like:
java -Xms512m -Xmx512m -jar jenkins.war
and consistently get an error like:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# An error report file with more information is saved as:
# /home/twilliams/.jenkins/hs_err_pid36290.log
First, the basics:
Jenkins 1.631
Running via the jetty embedded in the war file
OpenJDK 1.7.0_51
Oracle Linux (3.8.13-55.1.5.el6uek.x86_64)
386 GB ram
40 cores
I get the same problem with a number of other configurations as well: using Java Hotspot 1.8.0_60, running through Apache Tomcat, and using all sorts of different values for -Xms/-Xmx/-Xss and similar options.
I've done a fair bit of research and think I know what the problem is, but am at a loss as how to solve it. I suspect that I'm running into the virtual memory overcommit issue mentioned here; the relevant bits from ulimit:
--($:)-- ulimit -a
...
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 8388608
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) 8388608
...
If I double the virtual memory limit as root, I can start Jenkins, but I'd rather not run Jenkins as the root user.
Another workaround: a soon-to-be-decommissioned machine with 48 GB ram and 24 cores can start Jenkins without issue, though (I suspect) just barely: according to htop, its virtual memory footprint is just over 8 GB. I suspect, as a result, that the memory overcommit issue is scaling with the number of processors on the machine, presumably the result of Jenkins starting a number of threads proportional to the number of processors present on the host machine. I roughly captured the thread count via ps -eLf | grep jenkins | wc -l and found that the thread count spikes at around 114 on the 40 core machine, and 84 on the 24 core machine.
Does this explanation seem sound? Provided it does...
Is there any way to configure Jenkins to reduce the number of threads it spawns at startup? I tried the arguments discussed here but, as advertised, they didn't seem to have any effect.
Are there any VMs available that don't suffer from the overcommit issue, or some other configuration option to address it?
The sanest option at this point may be to just run Jenkins in a virtualized environment to limit the resources at its disposal to something reasonable, but at this point I'm interested in this problem on an intellectual level and want to know how to get this recalcitrant configuration to behave.
Edit
Here's a snippet from the hs_error.log file, which guided my initial investigation:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
Here are a couple of command lines I tried, all with the same result.
Starting with a pitiful amount of heap space:
java -Xms2m -Xmx2m -Xss228k -jar jenkins.war
Starting with significantly more heap space:
java -Xms2048m -Xmx2048m -Xss1m -XX:ReservedCodeCacheSize=512m -jar jenkins.war
Room to grow:
java -Xms2m -Xmx1024m -Xss228k -jar jenkins.war
A number of other configurations were tried as well. Ultimately I don't think the problem is heap exhaustion here - it's that the JVM is trying to reserve more virtual memory for itself (in which to store the heap, thread stacks, etc.) than the ulimit settings allow. Presumably this is the result of the overcommit issue linked earlier, such that if Jenkins is spawning 120 threads, it's erroneously trying to reserve 120x as much VM space as the master process originally occupied.
Having done what I could with the other options suggested in that log, I'm trying to figure out how to reduce the thread count in Jenkins to test the thread VM overcommit theory.
Edit #2
Per Michał Grzejszczak, this is an issue with the glibc distributed with Red Hat Enterprise Linux 6, as discussed here. The issue can be worked around by explicitly setting the environment variable MALLOC_ARENA_MAX, in my case export MALLOC_ARENA_MAX=2. Without this variable set, glibc will apparently create up to (8 x CPU core count) malloc arenas, each reserving 64 MB of virtual memory. My 40-core case would have required northward of 10 gigs of virtual RAM for the arenas alone, exceeding (by itself) the ulimit on my machine of 8 gigs. Setting this to 2 reduces the arena consumption to around 128 MB.
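In practice that just means exporting the variable before launching Jenkins, e.g. (a sketch using the heap settings from my original command line):
export MALLOC_ARENA_MAX=2
java -Xms512m -Xmx512m -jar jenkins.war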
Jenkins' memory footprint is related more to the number and size of the projects it is managing than to the number of CPUs or available memory. Jenkins should run fine on 1GB of heap memory unless you have gigantic projects on it.
You may have misconfigured the JVM though. The -Xmx and -Xms parameters govern the heap space the JVM can use: -Xmx is the limit for heap memory and -Xms is its starting value. The heap is a single memory area for the entire JVM. You can easily monitor it with tools like JConsole or VisualVM.
On the other hand, -Xss is not related to the heap. It is the stack size for every thread in this JVM process. As Java programs tend to create numerous threads, setting this parameter too big can prevent your program from launching. Typically this value is in the range of 512 KB; passing 512m here instead makes it impossible for the JVM to start. Make sure your settings do not contain any such mistakes (and post your memory config too).
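For illustration only, a command line along those lines (the sizes are assumptions, not recommendations for your setup) would be:
java -Xms256m -Xmx1g -Xss512k -jar jenkins.war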
I know this is a common question/problem. I'm wondering where to get started with it.
Running Java on Windows Server 2008, we have 65GB of memory and it shows 25GB free. (Currently a couple of guys are running processes.)
systeminfo | grep -i memory
shows:
Total Physical Memory:      65,536 MB
Available Physical Memory:  26,512 MB
Virtual Memory: Max Size:   69,630 MB
Virtual Memory: Available:     299 MB
Virtual Memory: In Use:     69,331 MB
Really just wondering how I go about solving this problem.
Where do I start?
What does it mean that more virtual memory is being used than physical memory, and is this why java won't start?
Does java want to use virtual memory rather than physical memory?
java -version
gives me:
Error occurred during initialization of VM
Could not reserve enough space for object heap
More specific questions:
Why doesn't the JVM want to use the free physical memory?
How much memory does a java command (like java -version) want to use if you don't specify the -Xms parameter?
Would simply assigning more virtual memory be a good solution to the problem?
I got the same issue. From the analysis, we found that the machine had low swap space.
Please increase the swap space and verify.
As I discovered when I had a similar problem (though with a lot less memory on the system -- see Cannot run a 64-bit JVM in 64-bit Windows 7 with a large heap size), on Windows the JVM will try to allocate a contiguous block of memory.
So my bet is that while you have enough total memory, you don't have enough contiguous memory.
At least to see the java version, run:
java -Xmx64m -version
This should show you the version. Then you can try increasing -Xmx and see at what value it fails.
I have a Jetty server that I use for websocket connections for an app I am working on. The only issue is that Jetty is consuming way too much virtual memory (12.5GB of virtual memory) and around 650MB RES.
My issue is that, as mentioned above, most of the memory (around 12GB) is not heap, so analyzing it and understanding what is happening is harder.
Do you have any tips on how to understand where the 12GB consumption is coming from and how to figure out memory leaks or any other issues with the server?
I wanted to clarify what I mean by virtual memory (because my understanding could be wrong). Virtual memory is "VIRT" when I run top. Here is what I get:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
-------------------------------------------------------------
9442 root 20 0 12.6g 603m 10m S 0 1.3 1:50.06 java
Thanks!
Please paste the JVM Options you use on startup. You can adjust the maximum memory used by the JVM with the -Xmx option as already mentioned.
Your application has been using only 603MB of resident memory, so it doesn't look like it should concern you. You can get some detailed information about memory usage by using "jmap", enabling JMX and connecting via JConsole, or using a profiler. If you want to stay in *nix land you can also try "free" if your OS supports it.
In your case Jetty is NOT occupying 12.5 gig of memory. It's occupying 603MB. Google for "virtual memory linux", for example, and you should get plenty of information about the difference between virtual and resident memory.
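For instance, a couple of quick checks, assuming the Jetty PID of 9442 from the top output above (jmap -heap is available on JDK 8 and earlier; GC.heap_info needs a newer JDK):
jmap -heap 9442          # heap configuration and current usage
jcmd 9442 GC.heap_info   # similar summary via jcmd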
Virtual memory has next to no cost in a 64-bit environment so I am not sure what the concern is. The resident memory is 650 MB or a mere 1.3% of MEM. It's not clear it is using much memory.
The default maximum heap size is 1/4 of the main memory for 64-bit JVMs. If you have 48 GB of memory, you might find the default heap size is 12 GB, and with some shared libraries, threads, etc. this can result in a virtual memory size of 12.5 GB. This doesn't mean you have a memory leak, or that you even have a problem, but if you would prefer you can reduce the maximum heap size.
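To see which default your JVM actually picked, one option is to dump the final flag values (the grep pattern is just illustrative):
java -XX:+PrintFlagsFinal -version | grep -i maxheapsize
If it is bigger than you want, start Jetty with an explicit -Xmx, e.g. -Xmx1g.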
BTW: You can buy 32 GB for less than $200. If you are running low on memory, I would buy some more.