First time user of Jenkins here, and having a bit of trouble getting it started. From the Linux shell I run a command like:
java -Xms512m -Xmx512m -jar jenkins.war
and consistently get an error like:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# An error report file with more information is saved as:
# /home/twilliams/.jenkins/hs_err_pid36290.log
First, the basics:
Jenkins 1.631
Running via the jetty embedded in the war file
OpenJDK 1.7.0_51
Oracle Linux (3.8.13-55.1.5.el6uek.x86_64)
386 GB ram
40 cores
I get the same problem with a number of other configurations as well: using Java Hotspot 1.8.0_60, running through Apache Tomcat, and using all sorts of different values for -Xms/-Xmx/-Xss and similar options.
I've done a fair bit of research and think I know what the problem is, but am at a loss as to how to solve it. I suspect that I'm running into the virtual memory overcommit issue mentioned here; the relevant bits from ulimit:
--($:)-- ulimit -a
...
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) 8388608
stack size (kbytes, -s) 8192
virtual memory (kbytes, -v) 8388608
...
If I double the virtual memory limit as root, I can start Jenkins, but I'd rather not run Jenkins as the root user.
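For reference, the root workaround looks roughly like this; the 16 GB value is just an illustrative figure (about double the current limit), not my exact setting:
ulimit -S -v 16777216                      # raise the soft virtual memory limit (value is in KB)
java -Xms512m -Xmx512m -jar jenkins.war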
Another workaround: a soon-to-be-decommissioned machine with 48 GB of RAM and 24 cores can start Jenkins without issue, though (I suspect) only just: according to htop, its virtual memory footprint is just over 8 GB. I suspect, as a result, that the overcommit issue scales with the number of processors on the machine, presumably because Jenkins starts a number of threads proportional to the number of processors on the host. I roughly captured the thread count via ps -eLf | grep jenkins | wc -l and found that it peaks at around 114 on the 40-core machine and 84 on the 24-core machine.
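For what it's worth, the counting looked roughly like this; the '[j]enkins' pattern and the jenkins.war match are approximations of the actual process name:
ps -eLf | grep '[j]enkins' | wc -l           # one line per lightweight process (thread)
ps -o nlwp= -p "$(pgrep -o -f jenkins.war)"  # or ask ps for the thread count of the (oldest) matching process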
Does this explanation seem sound? Provided it does...
Is there any way to configure Jenkins to reduce the number of threads it spawns at startup? I tried the arguments discussed here but, as advertised, they didn't seem to have any effect.
Are there any VMs available that don't suffer from the overcommit issue, or some other configuration option to address it?
The sanest option at this point may be to just run Jenkins in a virtualized environment to limit the resources at its disposal to something reasonable, but at this point I'm interested in this problem on an intellectual level and want to know how to get this recalcitrant configuration to behave.
Edit
Here's a snippet from the hs_err_pid log file, which guided my initial investigation:
# There is insufficient memory for the Java Runtime Environment to continue.
# pthread_getattr_np
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
Here are a couple of command lines I tried, all with the same result.
Starting with a pitiful amount of heap space:
java -Xms2m -Xmx2m -Xss228k -jar jenkins.war
Starting with significantly more heap space:
java -Xms2048m -Xmx2048m -Xss1m -XX:ReservedCodeCacheSize=512m -jar jenkins.war
Room to grow:
java -Xms2m -Xmx1024m -Xss228k -jar jenkins.war
A number of other configurations were tried as well. Ultimately I don't think the problem is heap exhaustion - it's that the JVM is trying to reserve more virtual memory for itself (in which to store the heap, thread stacks, etc.) than the ulimit settings allow. Presumably this is the result of the overcommit issue linked earlier, such that if Jenkins is spawning 120 threads, it erroneously tries to reserve 120x as much VM space as the master process originally occupied.
Having done what I could with the other options suggested in that log, I'm trying to figure out how to reduce the thread count in Jenkins to test the thread VM overcommit theory.
Edit #2
Per Michał Grzejszczak, this is an issue with the glibc distributed with Red Hat Enterprise Linux 6, as discussed here. The issue can be worked around by explicitly setting the environment variable MALLOC_ARENA_MAX, in my case export MALLOC_ARENA_MAX=2. Without this variable set, glibc will apparently create up to (8 x cpu core count) malloc arenas, each reserving 64M of virtual memory. My 40 core case would have reserved on the order of 8 x 40 x 64M, or roughly 20 gigs of virtual memory, far exceeding (by itself) the ulimit on my machine of 8 gigs. Setting the variable to 2 reduces VM consumption to around 128 megs.
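In script form, the fix is just a matter of exporting the variable before launching Jenkins; the heap flags below are my usual ones and not part of the workaround itself:
export MALLOC_ARENA_MAX=2                  # cap glibc malloc arenas (default heuristic is 8 per core on 64-bit)
java -Xms512m -Xmx512m -jar jenkins.war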
Jenkins' memory footprint is related more to the number and size of the projects it manages than to the number of CPUs or available memory. Jenkins should run fine with 1GB of heap memory unless you have gigantic projects on it.
You may have misconfigured the JVM, though. The -Xmx and -Xms parameters govern the heap space the JVM can use: -Xmx is the limit on heap memory, -Xms is the starting heap size. The heap is a single memory area for the entire JVM, and you can easily monitor it with tools like JConsole or VisualVM.
On the other hand, -Xss is not related to the heap: it is the stack size for every thread in the JVM process. As Java programs tend to create numerous threads, setting this parameter too large can prevent your program from launching. Typically this value is around 512kb; entering 512m instead makes it impossible for the JVM to start. Make sure your settings do not contain any such mistakes (and post your memory config too).
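As a rough sketch only (the values here are placeholders, not a recommendation specific to your setup), a command line with sane heap and stack settings would look like:
java -Xms512m -Xmx1024m -Xss512k -jar jenkins.war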
Related
I have two Linux machines (both VMs), one with 12GB of memory and the other with 8GB.
I tried to start the same Java program on both machines with the largest max heap size possible (using the -Xmx flag). These are the results I got:
12GB machine: 9460MB
8GB machine: 4790MB
If I specify a max heap size beyond these limits, I get the error below:
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
I checked the free memory on the two systems (using the free command) and got the following:
12GB machine: approximately 3GB free.
8GB machine: approximately 4GB free.
My question is: what determines the largest max heap size a Java program can be started with without running into the above error? (The system had sufficient memory to allocate 1073741824 bytes when the program gave that error.)
I found some interesting comments on a JDK bug report. (The bug is filed against JDK 9, not 8; it says the bug was fixed in an 8.x version but does not give the minor build number.)
If virtual memory has been limited with "ulimit -v" and the server has a lot of RAM, then the JVM cannot start without extra command line arguments to the GC.
// After "ulimit -v" The jvm does not start with default command line.
$ ulimit -S -v 4194304
$ java -version
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
Comments:
The problem seems to be that we must specify MALLOC_ARENA_MAX.
If I set the environment variable MALLOC_ARENA_MAX=4, then the jvm can start without any extra arguments.
I guess that this is not something that can be fixed from the jvm. If so we can close this bug.
When using "UseConcMarkSweepGC" then the command line above does not work.
I have tried to add MaxMetaspaceSize=128m, but it does not help.
I am sure there are an argument that makes it work, but I have not found one.
Configuring the GC with limited virtual memory is not very user friendly.
Change the parameters as per your requirements and try this:
ulimit -S -v 4194304
java -XX:MaxHeapSize=512m -XX:InitialHeapSize=512m -XX:CompressedClassSpaceSize=64m -XX:MaxMetaspaceSize=128m -XX:+UseConcMarkSweepGC -version
The memory you have available is the combination of free RAM plus swap space. It also depends on whether the system has overcommit enabled — if so, the kernel will allow programs to allocate more memory than is actually available (within reasonable limits), since programs often allocate more than they're really going to use.
Note that overcommit is enabled by default. To disable it, write 2 into /proc/sys/vm/overcommit_memory. (Strangely, a value of 0 does not mean "no overcommit".) But it's a good idea to read the overcommit documentation first.
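For example, run as root (both forms set the same kernel setting):
sysctl -w vm.overcommit_memory=2           # disable overcommit
echo 2 > /proc/sys/vm/overcommit_memory    # equivalent, via the proc interface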
I did some experiments with the clues given by ravindra and found that the largest possible max heap size has a direct relationship with the virtual memory limit of the process.
The virtual memory limit (in KB) can be shown with:
ulimit -v
and altered with:
ulimit -v <new amount in KB>
The largest max heap size possible was approximately 2GB less than the virtual memory limit. If you specify unlimited virtual memory using ulimit -v unlimited, you can specify any large value for the max heap size.
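As an illustration of that rule of thumb (the specific values below are assumptions derived from it, not measurements I recorded):
ulimit -S -v 8388608     # limit virtual memory to 8 GB (value is in KB)
java -Xmx6g -version     # roughly 2 GB below the limit: should start
java -Xmx7g -version     # too close to the limit: likely fails with the metaspace error above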
I have just started doing R&D on JVM heap size and have observed some strange behavior.
My system RAM size is 4 GB
OS is 64-bit windows 7
Java version is 1.7
Here are the observations:
I wrote a sample main program which starts and immediately goes into a wait state.
When I run the program from Eclipse with -Xms1024m -Xmx1024m, I can run only 3 parallel processes, which is expected behavior since my RAM is only 4GB.
I then reran it with -Xms512m -Xmx512m and was able to run 19 parallel processes from Eclipse, even though my RAM is 4GB. HOW?
I used VisualVM to cross-check and can see 19 process IDs, each allocated 512m, even though my RAM size is 4GB - but HOW?
I have googled it and gone through a lot of Oracle documentation about memory management and optimization, but those articles did not answer my question.
Thanks in advance.
Thanks,
Baji
I wonder why you couldn't start more than 3 processes with -Xmx1024M -Xms1024M. I tested it on my 4GB Linux system, and I could start at least 6 such processes; I didn't try to start more.
Anyway, this is because the program doesn't actually use that memory when you just start it with -Xmx1024M -Xms1024M: -Xmx only specifies the maximum size of the heap (and -Xms its starting size). If you don't allocate that amount of data, the memory never actually gets used.
"top" shows the virtual size (VSZ) to be more than 1024M for such processes, but the process isn't actually using that memory. It has just reserved address space it never touches.
The physical memory in your PC and the memory that is allocated do not have to match. In fact your operating system moves data from RAM onto the hard disk when RAM is full; this is called swapping, or paging.
That's why Windows has a swap file on the disk and Unix typically has a swap partition.
I know this is a common question/problem. I'm wondering where to get started with it.
Running Java on Windows Server 2008, we have 65GB of memory and it shows 25GB free. (A couple of people are currently running processes.)
systeminfo | grep -i memory
shows:
Total Physical Memory: 65,536 MB
Available Physical Memory: 26,512 MB
Virtual Memory: Max Size: 69,630 MB
Virtual Memory: Available: 299 MB
Virtual Memory: In Use: 69,331 MB
Really just wondering how I go about solving this problem.
Where do I start?
What does it mean that more virtual memory is being used than physical memory, and is this why Java won't start?
Does Java want to use virtual memory rather than physical memory?
java -version
gives me:
Error occurred during initialization of VM
Could not reserve enough space for object heap
More specific questions:
Why doesn't the JVM want to use the free physical memory?
How much memory does a java command (like java -version) try to use if you don't specify any -Xms/-Xmx parameters?
Would simply assigning more virtual memory be a good solution to the problem?
I got the same issue. From the analysis, we found that the machine had low swap space.
Please increase the swap space and verify.
As I discovered when I had a similar problem (though with a lot less memory on the system -- see Cannot run a 64-bit JVM in 64-bit Windows 7 with a large heap size), on Windows the JVM will try to allocate a contiguous block of memory.
So my bet is that while you have enough total memory, you don't have enough contiguous memory.
To at least see the Java version, run:
java -Xmx64m -version
This should show you the version if needed. Then you can try increasing -Xmx and see at what value it fails.
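For example, a quick probe along those lines (the list of sizes is arbitrary):
for size in 64 128 256 512 1024 2048 4096; do
  java -Xmx${size}m -version > /dev/null 2>&1 && echo "-Xmx${size}m ok" || { echo "-Xmx${size}m failed"; break; }
done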
I am trying to run the java command on a Linux server. It was running well, but today when I tried to run java I got this error:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
My memory space is:
root#vps [~]# free -m
total used free
Mem: 8192 226 7965
-/+ buf: 226 7965
Swap: 0 0 0
How can I solve this problem?
The machine did not have enough memory at that time to service the JVM's request for memory to start the program. I expect that you have 8 GB of memory in the machine and that you are using a 64-bit JVM.
I would suggest you add some swap space to the system to let it handle spikes in memory usage, and then figure out where the spike came from.
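On Linux, adding a swap file usually looks something like this; the 2 GB size and the /swapfile path are just examples:
dd if=/dev/zero of=/swapfile bs=1M count=2048   # create a 2 GB file
chmod 600 /swapfile                             # restrict permissions as mkswap recommends
mkswap /swapfile                                # format it as swap
swapon /swapfile                                # enable it
free -m                                         # verify the new swap shows up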
Which VM are you using? What is the maximum memory size you are trying to use?
If you are using a 32-bit JVM on Windows and you are using close to the maximum it can access on your system, you can be affected by memory fragmentation. You may have a similar problem.
I have a system which cannot provide more than 1.5 GB for a Java process. Thus I need an exact way to specify the Java process settings, covering every kind of memory inside Java and any possible fork.
One specific java process and system to illustrate my problem:
My current environment is java 1.6.0_18 under Ubuntu Linux 9.10.
I start a large Java server process with the following JVM options:
"-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
Now, "top" command reports that the process uses 1.6gb memory...
Questions:
1 - How is the maximal space used by the Java process calculated? Please provide an exact formula if possible.
(Something like: max heap + max perm + stack + JVM space = maximal space)
2 - What is the infamous fork behavior under Linux in my case? Will a forked JVM occupy an extra 1.6 GB (resulting in 3.2 GB of used memory in total)?
3 - Which options must be used to absolutely ensure that no more than 1.5 GB is used at any time?
thank you
#rancidfishbreath: "ulimit" will ensure that Java cannot take more than the specified amount of memory. My purpose is to ensure that Java never even tries to do that.
top reports 1.6GB because PermSize is on top of the maximum heap size. In your case you set MaxPermSize to 512m and Xmx to 1024m, which amounts to 1536m. Just like in other languages, an absolutely precise number cannot be calculated unless you know precisely how many threads are started, how many file handles are used, and so on. The stack size per thread depends on the OS and JDK version; in your case it is 1024k (if it is a 64-bit machine), so 10 threads use an extra 10240k, since stacks are not allocated from the heap (Xmx). Most applications that behave nicely work perfectly when you set a lower stack size and MaxPermSize. Try setting ThreadStackSize to 128k and, if you get a StackOverflowError (i.e. if you do lots of deep recursion), increase it in small steps until the problem disappears.
So my answer is essentially that you cannot control down to the MB how much the Java process will use, but you can come fairly close by setting e.g. -Xmx1024m -XX:MaxPermSize=384m -XX:ThreadStackSize=128k -XX:+UseCompressedOops. Even if you have lots of threads you will still have plenty of headroom before you reach 1.5GB. UseCompressedOops tells the VM to use narrow pointers even when running on a 64-bit JVM, thus saving some memory.
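A back-of-envelope check with those flags (the thread count and the overhead figure are assumptions for illustration, not measurements):
1024 MB   max heap              (-Xmx1024m)
 384 MB   max perm gen          (-XX:MaxPermSize=384m)
 ~13 MB   thread stacks         (assuming ~100 threads x 128 KB, -XX:ThreadStackSize=128k)
 ~50 MB   JVM/native overhead   (code cache, GC structures; rough guess)
which comes to roughly 1.45 GB, under the 1.5 GB (1536 MB) budget.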
At a high level, the JVM address space is divided into three main parts:
kernel space: ~1GB, depending on the platform; on Windows it is more than 1GB
Java heap: specified by the user with -Xmx, -XX:MaxPermSize, etc.
The rest of the virtual address space goes to the JVM's native usage: the malloc/calloc done by the JVM, and the native thread stacks for the Java threads plus the additional JVM-internal threads for GC, etc.
So on a 32-bit system you have (4GB minus 1-1.25GB of kernel space) roughly 2.75GB to play with, and you can set your Java/native heap accordingly. But generally you should keep at least 500MB for the JVM's native heap, otherwise there is a chance of a native OOM. So there is a trade-off here based on your application's Java heap utilization.
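A rough 32-bit budget under those assumptions (all numbers approximate; the example flag values are mine, not prescribed):
4 GB address space - 1 to 1.25 GB kernel space ≈ 2.75 to 3 GB usable
e.g. -Xmx1792m -XX:MaxPermSize=256m reserves about 2 GB of that, leaving roughly 0.75 to 1 GB
for the native heap, thread stacks and the JVM itself (above the suggested 500MB floor).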