My problem is: I have an executable JAR file on an Ubuntu Linux server which starts 41 threads. Now I want to start a second JAR file which creates a similar number of threads, and it doesn't work. I get the error:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
Even when I just enter java -version I get this error.
I looked at the resource usage, and the machine only uses 10% of the CPU cores and 2 of 8GB RAM.
When I enter ulimit -a I get 62987 processes per user,
and when I look in /proc/sys/kernel/pid_max I see 32768.
I don't know what to do. Can someone help me?
There is not enough detail in your question to give a definite answer or solution.
The problem is almost certainly not an OS-imposed limit on the number of threads. It is most likely memory related.
You say that 2GB out of 8GB of RAM is in use, but you don't say how you are getting that figure. There are many different ways of measuring memory usage and they mean different things.
When a JVM starts a new thread, it goes to the operating system and asks for a block of memory to hold the thread stack. The default thread stack size is platform specific, but it is typically 512KB to 1MB. This can be modified by a JVM command line option, or by the application using a Thread constructor that takes a stack size parameter. Note that the stack segment is NOT allocated in the Java heap.
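For illustration, here is a minimal sketch of that second option, using the standard four-argument Thread constructor; the class name and the 256KB figure are mine, and the JVM treats the requested size only as a hint:

public class SmallStackDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("worker running");
        // The fourth argument is the requested stack size in bytes; the JVM
        // may round it up or, on some platforms, ignore it entirely.
        Thread t = new Thread(null, task, "small-stack-worker", 256 * 1024);
        t.start();
    }
}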
So here are some of the possible explanations.
One possibility is that you are running a 32-bit JVM. On a Linux platform, that limits you to a 4GB address space, and architectural issues will reduce the actually usable space to less than that. If you hit this limit, the OS will refuse the JVM's request for a stack segment. (Check that you have a 64-bit Java installation and that you haven't given the -d32 command line option.)
A second possibility is that you don't have enough swap space. The OS will only allocate a memory segment if it has enough physical RAM and swap space (page file space) to accommodate the segment. If it figures out that there isn't enough space to hold all of the pages for all of the applications currently running, it will refuse a JVM's request for a stack segment.
A third possibility is that you have configured your JVM with a really large heap, and that is reserving all of the available virtual memory at the system level.
A fourth possibility is that you have accidentally configured a non-default (larger) stack size using an -Xss option.
A final possibility is that you are actually running more threads than the 41 you think you are.
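To check that last possibility, you can make the application report how many threads its JVM really has; the sketch below uses the standard ThreadMXBean (the class name is mine). From outside the process, jstack <pid> or ls /proc/<pid>/task | wc -l gives the same information:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadTally {
    // Call this from anywhere inside the running application; it counts all
    // live threads, including the JVM's own internal ones.
    public static void report() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        System.out.println("live threads: " + mx.getThreadCount()
                + ", peak since start: " + mx.getPeakThreadCount());
    }
}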
Related
I'd like to run a .jar on an Apache server. (My host provides cPanel support and so on...)
When I try to run it with:
java -jar "JAR_FILE_PATH"
after a while I get the error:
java.lang.OutOfMemoryError: unable to create new native thread
I have also tried to run with heap settings from -Xmx16m to -Xmx2G, but I got the same error.
Maybe there are some commands I could use to configure the defaults, but I am still a noob; this is the first time... :)
Does anyone have any idea?
OutOfMemoryError: unable to create new native thread means you don't have enough native memory to spawn a thread. Note that this is a completely different thing from heap space.
The most typical way to get that error is to run out of stack space, address space, or max user processes. Almost all such cases are related to creating too many threads.
Stack space on Linux can be checked via ulimit -s. Each Java thread consumes a certain amount of stack. It can be configured via -Xss or -XX:ThreadStackSize=...
You can check the default stack size for your platform via java -XX:+PrintFlagsFinal -version | grep ThreadStackSize. You can squeeze in more threads by reducing ThreadStackSize, at the cost of StackOverflowError if your application does in fact require deep stacks.
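For example, a run with each stack capped at 256KB instead of the platform default would look like this (app.jar is a placeholder name, not from the question):
java -Xss256k -jar app.jar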
Address space is an issue if you use a 32-bit JVM. As per your screenshot, you are using a 32-bit JVM, so you are limited to a 4GB total address space. That includes everything (heap, non-heap, stacks, native memory, etc.). That is, the more you allocate for the heap (-Xmx), the fewer threads you can create.
The workaround there is to either use a 64-bit JVM or reduce the heap size, etc.
Max user processes on Linux can be monitored via ulimit -u. If you have the "default" limit of 1000 processes, you can easily hit it, as each thread counts toward that limit (AFAIK). The solution there is to increase the limit.
In general, you likely want to add -XX:+HeapDumpOnOutOfMemoryError, collect a heap dump, and check the number of threads in use. If the number of threads is sane (e.g. fewer than 100), then you want to check the configuration (items 1-3 above). If the number of threads is insane (e.g. well over 100), you likely have a defect to find and fix.
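Putting that advice together, a diagnostic run might look something like this (a sketch only; the jar name and dump path are placeholders, not from the question):
java -Xss256k -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar app.jar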
I am attempting to run a Java application on a cluster computing environment (IBM LSF running CentOS release 6.2 Final) that can provide me with up to 1TB of RAM space.
I could create a JVM with up to 300GB of maximum memory (Xmx), although I need more than that (I can provide details, if requested).
However, it seems to be impossible to create a JVM with more than 300GB of maximum memory using the Xmx option. To be more specific, I get the classic error message:
Error occurred during initialization of VM.
Could not reserve enough space for object heap.
The details of my (64-bit) JVM are below:
OpenJDK Runtime Environment (IcedTea6 1.10.6) (rhel-1.43.1.10.6.el6_2-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b11, mixed mode)
I've also tried with a Java 7 64-bit JVM but I've had exactly the same problem.
Moreover, I tried to create a JVM just to run a HelloWorld.jar, but JVM creation still fails if you ask for more than -Xmx300G, so I don't think it has anything to do with the specific application.
Does anyone have any idea why I cannot create a JVM with more than 300G of max memory?
Can anyone please suggest a solution/workaround?
I can think of a couple of possible explanations:
Other applications on your system are using so much memory that there isn't 300GB available right now.
There could be a resource limit on the per-process memory size. You can check this using ulimit. (Note that according to this bug, you will get the error message if the per-process resource limit stops the JVM from allocating the heap regions.)
It is also possible that this is an "overcommit" issue; e.g. if your application is running in a virtual machine and the system as a whole cannot meet the demand because there is too much competition from other virtual machines.
A couple of the other ideas suggested are (IMO) unlikely:
Switching the JRE is unlikely to make any difference. I've never heard of or seen arbitrary memory limits in specific 64-bit JVMs.
It is unlikely to be due to not having enough contiguous memory. Certainly contiguous physical memory is not required. The only possibility might be contiguous space on the swap device, but I don't recall that being an issue for typical Linux OSes.
Can anyone please suggest a solution/workaround?
Check the ulimit.
Write a tiny C program that attempts to malloc lots of memory and see how much it can allocate before it fails (a Java stand-in is sketched after this list).
Ask the system (or hypervisor) administrator for help.
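If you would rather stay in Java than write the C program suggested above, the sketch below uses direct ByteBuffers as a rough stand-in, since they are backed by native allocations. Run it with -XX:MaxDirectMemorySize set well above the size you are probing (e.g. java -XX:MaxDirectMemorySize=400g NativeProbe) so the JVM-side cap doesn't mask the OS limit; the class name and chunk size are mine, and this is a diagnostic sketch, not an exact equivalent of malloc:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class NativeProbe {
    public static void main(String[] args) {
        // Hold references so the buffers are not garbage collected.
        List<ByteBuffer> hold = new ArrayList<>();
        long gb = 0;
        try {
            while (true) {
                // Direct buffers live outside the Java heap and are zero-filled
                // on allocation, so this exercises committed native memory.
                hold.add(ByteBuffer.allocateDirect(1 << 30)); // 1GB per chunk
                gb++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("native allocation failed after " + gb + " GB");
        }
    }
}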
(edited, see added section on swap space)
SHMMAX and SHMALL
Since you are using CentOS, you may have run into a similar issue about the SHMMAX and SHMALL kernel setting as described here for configuring the Oracle DB. Under that same link is an example calculation for getting and setting the correct SHMALL setting.
Contiguous memory
Certain users have reported that not enough contiguous memory is available; others have said it is irrelevant.
I am not certain whether the JVM on CentOS requires a contiguous block of memory. According to SAS, fragmented memory can prevent your JVM from starting up with a large max (Xmx) or start (Xms) memory setting, but other claims on the internet say it doesn't matter. I tried to prove or disprove that claim on my 48GB Windows workstation, and managed to start the JVM with an initial and max setting of 40GB. I am pretty sure that no contiguous block of that size was available, but JVMs on different OSes may behave differently, because memory management can differ per OS (i.e., Windows typically hides the physical addresses from individual processes).
Finding the largest contiguous memory block
Use /proc/meminfo to find the largest contiguous memory block available; see the value under VmallocChunk. Here's a guide and explanation of all values. If the value you see there is smaller than 300GB, try a value that falls just under VmallocChunk.
However, this number is usually higher than the physically available memory (because it is the virtual memory value), so it may give you a false positive. It is the value you can reserve, but once you start using it, it may require swapping. You should therefore also check the MemFree and Inactive values. Conversely, you can also look at the whole list and see which values do not surpass 300GB.
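For a quick look at exactly those fields, something like the following works on typical kernels (the field names match /proc/meminfo):
grep -E 'MemFree|Inactive|Vmalloc' /proc/meminfo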
Other tuning options you can check for 64 bit JVM
I am not sure why you seem to hit a memory limit at 300GB. For a moment I thought you might have hit a maximum number of pages. With the default of 4kB, 300GB gives 78,643,200 pages, which doesn't look like a well-known magical number. If, for instance, 2^24 were the maximum, then 16,777,216 pages, or 64GB, would be your theoretical allocatable maximum.
However, suppose for the sake of argument that you need larger pages (which, as it turns out, is better for the performance of large-memory Java applications); you should consult this manpage on JBoss, which explains how to use -XX:+UseLargePages and set kernel.shmmax (there it is again), vm.nr_hugepages, and vm.huge_tlb_shm_group (I am not sure the latter is required).
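For a sense of the numbers involved (illustrative values, not a tuned configuration; YourMainClass is a placeholder): with 2MB huge pages, a 300GB heap needs 300 * 1024 / 2 = 153,600 pages, so the setup would be roughly:
sysctl -w vm.nr_hugepages=153600
java -XX:+UseLargePages -Xmx300g YourMainClass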
Stress your system
Others have suggested this already as well. To find out whether the problem lies with the JVM or with the OS, you should stress test it. One tool you could use is Stresslinux. In this tutorial, you can find some options. Of particular interest is the following command:
stress --vm 2 --vm-bytes 300G --timeout 30s --verbose
If that command fails, or locks your system, you know that the OS is limiting the use of that amount of memory. If it succeeds, we should try to tweak the JVM such that it can use the available memory.
EDIT Apr6: check swap space
It is not uncommon for systems with very large amounts of internal memory to use little or no swap space. For many applications this may not be a problem, but the JVM requires the available swap space to be larger than the requested memory size. According to this bug report, the JVM will try to increase the swap space itself; however, as some answers in this SO thread suggest, the JVM may not always be capable of doing so.
Hence: check the currently available swap space with cat /proc/swaps or free and, if it is smaller than 300GB, follow the instructions on this CentOS manpage to increase the swap space for your system.
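On a typical CentOS system, adding swap by hand comes down to something like the following (the size and path are placeholder values, and some filesystems require dd rather than fallocate to create the file):
fallocate -l 64G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile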
Note 1: we can deduce from bug report #4719001 that a contiguous block of available swap space is not a necessity. But if you are unsure, remove all swap space and recreate it, which should remove any fragmentation.
Note 2: I have seen several posts like this one reporting 0MB swap space and still being able to run the JVM. That is probably because the JVM increases the swap space itself. It still doesn't hurt to increase the swap space by hand to find out whether that fixes your issue.
Premature conclusion
I realize that none of the above is an out-of-the-box answer to your question. I hope it gives you some pointers, though, as to what you can try to get your JVM working. You might also try other JVMs if the problem turns out to be a limit of the JVM you are currently using, but from what I have read so far, no such limit should exist for 64-bit JVMs.
That you get the error right at initialization of the JVM leads me to believe that the problem is not with the JVM but with the OS not being able to satisfy the reservation of 300GB of memory.
My own tests showed that the JVM can access all virtual memory and doesn't care about the amount of physical memory available. It would be odd if the virtual memory were lower than the physical memory, but the VmallocChunk value should give you a hint in that direction (it is usually much larger).
If you have a look at the FAQ section of the Java HotSpot VM, it's mentioned that on 64-bit VMs there are 64 address bits to work with, and hence the maximum Java heap size is limited by the amount of physical memory and swap space present on the system.
Calculated theoretically, you could address 2^64 = 18,446,744,073,709,551,616 bytes (16 exabytes), but the limitations above apply in practice.
You have to use the -Xmx option to define the maximum heap size for the JVM. By default, Java uses 64MB + 30% = 83.2MB as the maximum heap size on 64-bit JVMs.
I tried below command on my machine and it looked to work fine.
java -Xmx500g com.test.TestClass
I also tried to define the maximum heap in terabytes, but it doesn't work.
Run ulimit -a as the JVM process's user and verify that your kernel isn't limiting your max memory size. You may need to edit /etc/security/limits.conf.
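For example, entries along these lines in /etc/security/limits.conf would lift the address-space limit for one user (youruser is a placeholder; "as" is the address-space item):
youruser soft as unlimited
youruser hard as unlimited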
According to this discussion, LSF does not pool node memory into a single shared space. You are using something else for that. Read that something's documentation, because it may not be able to do what you are asking it to do. In particular, it may not be able to allocate a single contiguous region of memory that spans all the nodes. Usually that's not necessary, as an application will make many calls to malloc; but the JVM, to simplify things for itself, wants to allocate (or reserve) a single contiguous region for the entire heap by effectively calling malloc just once. Or it could be something else related to whatever you are using to emulate a giant shared-memory machine.
I have a Java SE desktop application which uses a lot of memory (1.1GB would be desired). All target machines (Win 7, Win Vista) have plenty of physical memory (at least 4GB, most of them have more). There is also enough free memory.
Now, when the machines have some uptime and a lot of programs were started and terminated, the memory becomes fragmented (this is what I assume). This leads to the following error when the JVM is started:
JVM creation failed
Error occurred during initialization of VM
Could not reserve enough space for object heap
Even closing all running programs doesn't help in such a situation (even though Task Manager and other tools report enough free memory). The only thing that helps is to reboot the machine and fire up the Java application as one of the first programs launched.
As far as I've investigated, the Oracle VM requires one contiguous chunk of memory.
Is there any other way to assign about 1.1GB of heap to my Java application when this amount is available but may be fragmented?
I start my JVM with the following arguments:
-J-client -J-Xss2m -J-Xms512m -J-Xmx1100m -J-XX:PermSize=64m -J-Dsun.zip.disableMemoryMapping=true
Is there any other way to assign about 1.1GB of heap to my Java application when this amount is available but may be fragmented?
Use an OS which doesn't get fragmented virtual memory, e.g. 64-bit Windows or any version of UNIX.
BTW, it is hard for me to imagine how this is possible in the first place, but I know it to be the case. Each process has its own virtual address space, so its arrangement of virtual memory shouldn't depend on anything which is already running or has run before.
I believe it might be a holdover from the MS-DOS TSR days. Shared libraries are loaded at absolute addresses (placed at the end of the signed 2GB address space; the high half is reserved for the OS and the last 512MB for the BIOS), meaning they must use the same address range in every program that uses them. Over time the maximum usable address is determined by the lowest-placed shared library loaded or used (I don't know which, but I suspect the lowest loaded).
I wrote a thread pool class following http://www.informit.com/articles/article.aspx?p=30483&seqNum=5
Environment: Windows 7, 4 CPUs.
I executed my program with 70,000 threads on Windows 7 under JDK 1.5, and it went through successfully, with no VM arguments used.
I tried to execute the same code with 5,000 threads on Linux Enterprise Edition running under VirtualBox with 4GB base memory, with the VM arguments -Xms512m -Xmx1024m. It executes until 2,156 threads and then throws an exception:
Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at testthreadpool.ThreadPool.<init>(ThreadPool.java:38)
at testthreadpool.TestThreadPool.main(TestThreadPool.java:16)
But the same code runs perfectly on Windows 7.
May I know why this error occurs? Does this Java code need 1GB of memory to run just 5,000 threads?
My actual requirement is to hold a ThreadPool with 10,000 worker threads.
My actual requirement is to hold a ThreadPool with 10,000 worker threads.
I think you need to revisit your requirement. That is in no way a good idea, and it is catastrophic for performance.
As #Yann points out, using 10,000 threads is a really bad idea ... unless you have a machine with thousands of cores. You should take a serious look at your application design.
In the short term, try tuning the default thread stack size with the -Xss... JVM parameter. Also note that stacks are not allocated in heap memory, so your -Xms512m -Xmx1024m option is not reserving space for stacks. On the contrary, it is reserving space that then cannot be used for stacks.
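To put numbers on that (assuming the typical 64-bit Linux default of 1MB per thread stack): 5,000 threads need roughly 5,000 * 1MB ≈ 5GB of address space for stacks alone, and 10,000 threads roughly 10GB; at -Xss256k the same 10,000 threads would need about 2.5GB, which is why reducing the stack size lets you create more threads before this error appears.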
Finally, there may be other things (other than memory for thread stacks) that will limit the number of threads that your application can create.
Threads require a stack, which has to have an initial size. For threads, the initial stack size is by default the stack size resource limit, as shown by ulimit -s, but it can be changed by a call to pthread_attr_setstacksize(). (See this other SO question.)
Are you on 64-bit?
Don't expect a 32-bit machine to be able to run lots of threads. You may also wish to tweak the stack size. Starting lots of threads uses lots of memory for stacks, and you can't get around that unless you can tolerate smaller stacks.
Checking on x86_64, Linux seems to default to 8MB stacks, which means that 1,000 threads take 8GB of stack, so you really want to be careful with that.
I have a system which cannot provide more than 1.5GB for a Java process. Thus I need an exact way to specify Java process settings, including all kinds of memory inside Java and possible forks.
One specific java process and system to illustrate my problem:
My current environment is java 1.6.0_18 under Ubuntu Linux 9.10.
I start a large Java server process with the following JVM options:
"-Xms512m -Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m"
Now, the top command reports that the process uses 1.6GB of memory...
Questions:
1 - How is the maximal space used by the Java process calculated? Please provide an exact formula if possible.
(Something like: max heap + max perm + stacks + JVM space = maximal space.)
2 - What is the infamous fork behavior under Linux in my case? Will the forked JVM occupy an extra 1.6GB (resulting in a total of 3.2GB of used memory)?
3 - Which options must be used to absolutely ensure that no more than 1.5GB is used at any time?
thank you
#rancidfishbreath: ulimit will ensure that Java cannot take more than the specified amount of memory. My purpose is to ensure that Java doesn't ever try to do that.
top reports 1.6GB because PermSize is on top of the maximum heap size: you set MaxPermSize to 512m and Xmx to 1024m, which amounts to 1536m. Just like in other languages, an absolutely precise number cannot be calculated unless you know precisely how many threads are started, how many file handles are used, etc. The stack size per thread depends on the OS and JDK version; in your case it is 1024k per thread (on a 64-bit machine). So if you have 10 threads, you use 10240k extra, as the stacks are not allocated from the heap (Xmx). Most applications that behave nicely work perfectly with a lower stack size and MaxPermSize. Try setting ThreadStackSize to 128k and, if you get a StackOverflowError (i.e. if you do lots of deep recursion), increase it in small steps until the problem disappears.
So my answer is essentially that you cannot control down to the MB how much the Java process will use, but you can come fairly close by setting e.g. -Xmx1024m -XX:MaxPermSize=384m -XX:ThreadStackSize=128k -XX:+UseCompressedOops. Even if you have lots of threads, you will still have plenty of headroom before you reach 1.5GB. UseCompressedOops tells the VM to use narrow pointers even when running on a 64-bit JVM, thus saving some memory.
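As a rough worked example of how those settings add up (the thread count is hypothetical): 1024MB of heap + 384MB of perm + 50 threads * 128KB ≈ 6MB of stacks comes to about 1414MB, leaving roughly 120MB of headroom under the 1.5GB ceiling for JVM internals such as GC structures, the JIT code cache, and native malloc usage.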
At a high level, the JVM address space is divided into three main parts:
Kernel space: ~1GB; this also depends on the platform, and on Windows it is more than 1GB.
Java heap: the heap specified by the user using -Xmx, -XX:MaxPermSize, etc.
The rest of the virtual address space goes to the JVM's native usage: it accommodates the malloc/calloc calls done by the JVM and the native thread stacks, both for the Java threads and for the additional JVM-internal native threads used for GC, etc.
So on a 32-bit system you have (4GB minus 1-1.25GB of kernel space) ~2.75GB to play with, and you can set your Java/native heap accordingly. Generally, though, you should keep at least 500MB for the JVM native heap, otherwise there is a chance that you get a native OOM. So you need to make a trade-off here based on your application's Java heap utilization.