We are getting "java.lang.OutOfMemoryError : unable to create new native Thread" on 8GB RAM VM after 32k threads (ps -eLF| grep -c java)
However, "top" and "free -m" shows 50% free memory available. JDk is 64 bit and tried with both HotSpot and JRockit.Server has Linux 2.6.18
We also tried OS stack size (ulimit -s) tweaking and max process(ulimit -u) limits, limit.conf increase but all in vain.
Also we tried almost all possible of heap size combinations, keeping it low, high etc.
The script we use to run the application is
/opt/jrockit-jdk1.6/bin/java -Xms512m -Xmx512m -Xss128k -jar JavaNatSimulator.jar /opt/tools/jnatclients/natSimulator.properties
We have tried editing /etc/security/limits.conf and ulimit, but the result is still the same.
[root@jboss02 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 72192
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 72192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
This is not a memory problem, even though the exception name strongly suggests so, but an operating system resource problem: you are running out of native threads, i.e. the number of threads the operating system will allow your JVM to use.
This is an uncommon problem, because you rarely need that many. Do you have a lot of unconditional thread spawning where the threads should finish but don't?
You might consider rewriting your code to use Callables/Runnables under the control of an Executor, if at all possible. There are plenty of standard executors with various behaviors which your code can easily control.
(There are many reasons why the number of threads is limited, but they vary from operating system to operating system.)
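As an illustration, here is a minimal sketch of what that could look like; the pool size, class name and task body are placeholders, not taken from the question:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PooledSimulator {
    public static void main(String[] args) throws InterruptedException {
        // A bounded pool reuses a fixed number of native threads
        // instead of spawning one thread per task.
        ExecutorService pool = Executors.newFixedThreadPool(200);

        for (int i = 0; i < 32_000; i++) {
            final int taskId = i;
            pool.submit(() -> {
                // ... the work that previously ran in its own Thread ...
                System.out.println("task " + taskId + " on " + Thread.currentThread().getName());
            });
        }

        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for the submitted tasks to finish
    }
}

This way the native thread count stays bounded by the pool size no matter how many tasks are submitted.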
I encountered the same issue during a load test; the reason is that the JVM is unable to create any more Java threads. Below is the relevant JVM source code:
if (native_thread->osthread() == NULL) {
  // No one should hold a reference to the 'native_thread'.
  delete native_thread;
  if (JvmtiExport::should_post_resource_exhausted()) {
    JvmtiExport::post_resource_exhausted(
      JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR | JVMTI_RESOURCE_EXHAUSTED_THREADS,
      "unable to create new native thread");
  }
  THROW_MSG(vmSymbols::java_lang_OutOfMemoryError(),
            "unable to create new native thread");
}
Thread::start(native_thread);
Root cause: the JVM throws this exception when either JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR (resources, i.e. memory, exhausted) or JVMTI_RESOURCE_EXHAUSTED_THREADS (threads exhausted) occurs.
In my case JBoss was creating too many threads to serve the requests, but all of those threads were blocked. Because of this, the JVM ran out of threads as well as memory (each blocked thread holds on to memory that is never released).
Analyzing the Java thread dumps, I observed that nearly 61K threads were blocked by one of our methods, which was causing the issue. Below is a portion of the thread dump:
"SimpleAsyncTaskExecutor-16562" #38070 prio=5 os_prio=0 tid=0x00007f9985440000 nid=0x2ca6 waiting for monitor entry [0x00007f9d58c2d000]
java.lang.Thread.State: BLOCKED (on object monitor)
If the JVM is started via systemd, there might be a per-process tasks limit (tasks actually means threads here) on some Linux distributions.
You can check this by running systemctl status for the service and looking for a Tasks limit. If there is one, you can remove it by editing /etc/systemd/system.conf and adding: DefaultTasksMax=infinity
It's likely that your OS does not allow the number of threads you're trying to create, or you're hitting some limit in the JVM. Especially if it's such a round number as 32k, a limit of one kind or another is a very likely culprit.
Are you sure you truly need 32k threads? Most modern languages have some kind of support for pools of reusable threads - I'm sure Java has something in place too (like ExecutorService, as user Jesper mentioned). Perhaps you could request threads from such a pool, instead of manually creating new ones.
I would also recommend looking at the thread stack size and seeing whether you can get more threads created. The default thread stack size for JRockit 1.5/1.6 is 1 MB for a 64-bit VM on Linux. 32K threads require a significant amount of physical and virtual memory to honor this requirement.
Try reducing the stack size to 512 KB as a starting point and see if it helps your application create more threads. I also recommend exploring horizontal scaling, e.g. splitting your application processing across more physical or virtual machines.
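As a rough illustration of the scale involved: if the 1 MB default stack applied, 32,768 threads would reserve roughly 32 GB of stack space, and even at 512 KB per thread that is still roughly 16 GB of virtual memory to account for.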
When using a 64-bit VM, the true limit will depend on the OS physical and virtual memory availability and OS tuning parameters such as ulimit. I also recommend the following article as a reference:
OutOfMemoryError: unable to create new native thread – Problem Demystified
I had the same problem due to ghost Java processes that didn't show up when using top in bash. They prevented the JVM from spawning more threads.
For me, it was resolved by listing all Java processes with jps (just execute jps in your shell) and killing each ghost process separately with kill -9 <pid>.
This might help in some scenarios.
This error can surface for one of the following two reasons:
There is no room in memory to accommodate new threads.
The number of threads exceeds the operating system limit.
I doubt that the number of threads has exceeded the limit for the Java process, so chances are the issue is caused by memory.
One point to consider is that thread stacks are not created within the JVM heap; they are allocated outside it. So if there is little room left in RAM after the JVM heap allocation, the application will run into "java.lang.OutOfMemoryError: unable to create new native thread".
A possible solution is to reduce the heap size or increase the overall RAM.
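As a minimal sketch of that point, a throwaway program like the one below keeps spawning parked threads until the OS refuses; the failure occurs even though the Java heap stays almost empty (the class name is illustrative, and it should only be run on a disposable machine):

import java.util.concurrent.locks.LockSupport;

public class NativeThreadOomDemo {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                // Each thread parks forever, so its native stack stays allocated
                // outside the Java heap until the OS/JVM limit is hit.
                Thread t = new Thread(() -> LockSupport.park());
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Failed after " + count + " threads: " + e.getMessage());
        }
    }
}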
You have a chance to face java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks the OS for a new thread and the underlying OS cannot allocate one. The exact limit for native threads is very platform-dependent, so it is recommended to find out those limits by running a test similar to the example in the link below. In general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:
1. A new Java thread is requested by an application running inside the JVM.
2. The JVM native code proxies the request to create a new native thread to the OS.
3. The OS tries to create a new native thread, which requires memory to be allocated to the thread.
4. The OS refuses the native memory allocation, either because the 32-bit Java process size has depleted its memory address space (e.g. the 2-4 GB process size limit has been hit) or because the virtual memory of the OS has been fully depleted.
5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.
Reference: https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread
To find which processes are creating threads try:
ps huH
I normally redirect the output to a file and analyse the file offline (checking whether the thread count for each process is as expected or not).
I had the same problem on a CentOS/Red Hat machine. You are reaching a thread limit: for the user, for the process, or an overall system limit.
In my case there was a limit on the number of threads a user can have, which is shown in the line saying max user processes in the output of:
ulimit -a
You can see how many threads are running using this command
$ ps -elfT | wc -l
To get how many threads your process is running (you can get your process pid using top or ps aux):
$ ps -p <PROCESS_PID> -lfT | wc -l
The /proc/sys/kernel/threads-max file provides a system-wide limit for the number of threads. The root user can change that value.
To change the max user processes limit for the current shell (in this case to 4096):
$ ulimit -u 4096
You can find more info for Red Hat/CentOS here: http://www.mastertheboss.com/jboss-server/jboss-monitoring/how-to-solve-javalangoutofmemoryerror-unable-to-create-new-native-thread
If your Hadoop job is failing because of OutOfMemory errors on the nodes, you can tweak the maximum number of maps and reducers and the JVM opts for each. mapred.child.java.opts (the default is -Xmx200m) usually has to be increased based on your data nodes' specific hardware.
This link might be helpful... pls check
Your JBoss configuration has some issues:
/opt/jrockit-jdk1.6/bin/java -Xms512m -Xmx512m
Xms and Xmx limit your JBoss memory usage to the configured value, so of the 8 GB you have, the server is only using 512 MB plus some extra for its own purposes. Increase that number, remember to leave some memory free for the OS and other things running there, and maybe you will get it running despite the unsavoury code.
Fixing the code would be nice too, if you can.
I had this same issue and it turned out to be improper usage of a Java API: I was initializing a builder inside a batch-processing method when it was not supposed to be initialized more than once.
Basically I was doing something like:
for (batch in batches) {
process_batch(batch)
}
def process_batch(batch) {
var client = TransportClient.builder().build()
client.processList(batch)
}
when I should have done this:
for (batch in batches) {
var client = TransportClient.builder().build()
process_batch(batch, client)
}
def process_batch(batch, client) {
client.processList(batch)
}
Are you starting your Java app via systemd? Then this is for you!
I recently stumbled over DefaultTasksMax [1], which for some reason was limited to 60 on my machine - not enough for my new Keycloak installation.
Keycloak crashes with java.lang.OutOfMemoryError: unable to create new native Thread as soon as it hits that limit of 60 (ps -elfT | grep keycloak | wc -l).
Solution
1. Look up your systemd setting
systemctl show --property DefaultTasksMax
In my case, this printed 60.
2. Provide a higher value
editor /etc/systemd/system.conf
Edit:
DefaultTasksMax=128
You can also set a similar value, TasksMax, in your unit file. See [2].
3. Reload, Check, Restart
systemctl daemon-reload
systemctl show --property DefaultTasksMax
systemctl start keycloak
[1] https://www.freedesktop.org/software/systemd/man/systemd-system.conf.html
[2] https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
First of all, I wouldn't blame the OS/VM so much; rather the developer who wrote the code that creates so many threads.
Basically, somewhere in your code (or in a third-party library) a lot of threads are being created without control.
Carefully review the stack traces/code and control the number of threads that get created. Normally your app shouldn't need a large number of threads; if it does, that's a different problem.
My problem is that I have an executable jar file on an Ubuntu Linux server which starts 41 threads. Now I want to start a second jar file which creates a similar number of threads, and it doesn't work. I get the error:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
Even when I try to enter java -version I get this error.
I looked at my resource usage and it only uses 10% of the cores and 2 of 8 GB of RAM.
When I enter ulimit -a I get 62987 processes per user,
and when I look in /proc/sys/kernel/pid_max I get 32768.
I don't know what I should do. Can someone help me?
There is not enough detail in your question to give a definite answer or solution.
The problem is almost certainly not an OS imposed limit on the number of threads. It is most likely memory related.
You say that 2GB out of 8GB of RAM is in use, but you don't say how you are getting that figure. There are many different ways of measuring memory usage and they mean different things.
When a JVM starts a new thread, it goes to the operating system and asks for a block of memory to hold the thread stack. The default thread stack size is platform-specific, but it is typically 1 MB. It can be modified by a JVM command-line option, or by the application using a Thread constructor that has a stack-size parameter. Note that the stack segment is NOT allocated in the Java heap.
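For reference, here is a small sketch of both ways to pick a smaller stack; MyApp, the class name and the 256 KB value are placeholders, and the JVM treats the constructor argument as a hint it may round or ignore:

// JVM-wide default for new threads, set on the command line:
//   java -Xss256k MyApp

public class SmallStackThread {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> System.out.println("running on " + Thread.currentThread().getName());

        // Thread(ThreadGroup, Runnable, String name, long stackSize):
        // stackSize is a hint in bytes; 0 means "use the platform default".
        Thread t = new Thread(null, work, "small-stack-thread", 256 * 1024);
        t.start();
        t.join();
    }
}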
So here are some of the possible explanations.
One possibility is that you are running a 32-bit JVM. On a Linux platform, that will limit you to a 4 GB address space, and architectural issues will limit the JVM to even less actual usable space. If you hit this limit, the OS will refuse the JVM's request for a stack segment. (Check that you have a 64-bit Java installation and that you haven't given the -d32 command-line option.)
A second possibility is that you don't have enough swap space. The OS will only allocate a memory segment if it has enough physical RAM and swap space (page file space) to accommodate the segment. If it figures out that there isn't enough space to hold all of the pages for all of the applications currently running, it will refuse a JVM's request for a stack segment.
A third possibility is that you have configured your JVM with a really large heap, and that is reserving all of the available virtual memory at the system level.
A fourth possibility is that you have accidentally configured a non-default stack size using an -Xss option.
A final possibility is that you are actually running more than the 41 threads that you think.
This is a tricky one and is a little hard to explain but I will give it a shot to see if anyone out there has had a similar issue + fix.
Quick background:
Running a large Java Spring app on Tomcat in a Docker container. The other containers are simple: one for a JMS queue and the other for MySQL. I run on Windows and have given Docker as much CPU as I have (and memory too). I have set JAVA_OPTS for Catalina to max out memory, as well as memory limits in my docker-compose, but the issue seems to be CPU-related.
When the app is idling it normally sits around 103% CPU (8 cores, 800% max). There is a process we use which (using a thread pool) runs some workers to go out and run some code. On my local host (no Docker in between) it runs very fast, spitting out logs at a good clip.
Problem:
When running in Docker and watching docker stats -a, I can see the CPU start to ramp up when this process begins. Meanwhile, in the logs, everything is flying by as expected while the CPU grows and grows. It seems to get close to 700% and then it kind of dies, but not quite: when it hits this threshold I see the CPU drop drastically down to < 5%, where it stays for a little while. At this point the logs stop printing, so I assume nothing is happening. Eventually it kicks back in, goes back to ~120% and continues its process like nothing happened, sometimes re-spiking to ~400%.
What I have tried
I have played around with the memory settings with no success, but it seems more like a CPU issue. I know Java in Docker is a bit wonky, but I have given it all the room I can on my beefy dev box, where this process runs locally without a hitch. I find it odd that the CPU spikes and then dies, but the container itself doesn't die or reset. Has anyone seen a similar issue, or does anyone know some ways to further attack this CPU issue with Docker?
Thanks.
There is an issue with resource allocation for JVMs running in containers, which occurs because the JVM refers to the overall system metrics instead of the container metrics. In Java 7 and 8, JVM ergonomics apply the system's (instance's) metrics, such as the number of cores and the memory, instead of the Docker-allocated resources (cores and memory). As a result, the JVM initializes a number of parameters based on the instance's core count and memory, as below.
JVM memory footprint
- Perm/Metaspace
- JIT bytecode
- Heap size (JVM ergonomics: ¼ of instance memory)
CPU
- Number of JIT compiler threads
- Number of garbage collection threads
- Number of threads in the common fork-join pool
Therefore, the containers tend to become unresponsive due to high CPU, or get terminated by an OOM kill. The reason is that the container's cgroups and namespaces, which are meant to limit memory and CPU, are ignored by the JVM, so the JVM tends to size itself against the whole instance's resources instead of the Docker-allocated resources.
Example
Assume two containers are running on a 4-core instance with 8 GB of memory, and that at Docker initialization each container is given 1 GB of memory and 2048 CPU shares as a hard limit. Each container sees all 4 cores, and each JVM allocates memory, JIT compiler threads and GC threads according to those stats. The JVM sees the overall number of cores on that instance (4) and uses that value to initialize the default thread counts we saw earlier. Accordingly, the JVM metrics of the two containers will be as below.
- 4 JIT compiler threads × 2 containers
- 4 garbage collection threads × 2 containers
- 2 GB heap size × 2 containers (¼ of the full instance memory instead of the Docker-allocated memory)
In terms of Memory
As per the above example, the JVM will gradually increase its heap usage because it sees a 2 GB max heap size, which is a quarter of the instance memory (8 GB). Once the memory usage of a container reaches the hard limit of 1 GB, the container will be terminated by an OOM kill.
In terms of CPU
As per the above example, each JVM has been initialized with 4 garbage collection threads and 4 JIT compiler threads, but Docker allocates only 2048 CPU shares. This leads to high CPU, more context switching and an unresponsive container, and finally the container is terminated due to high CPU.
Solution
Basically, there are two OS-level mechanisms, namely cgroups and namespaces, which handle this kind of situation. However, Java 7 and 8 do not honor cgroups and namespaces; releases after JDK 8u131 can enable the cgroup memory limit via JVM parameters (-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap). However, this only provides a solution for the memory issue and does not address the CPU-set issue.
With OpenJDK 9, the JVM automatically detects CPU sets. Especially in orchestration, you can also manually override the default CPU-related thread counts to match the CPU allocation of the container by using JVM flags (-XX:ParallelGCThreads, -XX:ConcGCThreads).
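A quick way to verify what your JVM actually sees is to print its detected resources from inside the container; a minimal sketch (the class name is illustrative):

public class JvmSizing {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // On JDKs without cgroup awareness these report the host values,
        // which is what drives the oversized GC/JIT thread counts and heap defaults.
        System.out.println("availableProcessors = " + rt.availableProcessors());
        System.out.println("maxMemory (heap)    = " + (rt.maxMemory() / (1024 * 1024)) + " MB");
    }
}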
I'd like to run a .jar on an Apache server. (My host provides cPanel support and so on...)
When I try to run it with:
java -jar "JAR_FILE_PATH"
after a while I get the error:
java.lang.OutOfMemoryError: unable to create new native thread
I have also tried running with -Xmx values from 16m up to 2G, but I got the same error.
Maybe there are some commands I could configure the defaults with, but I am still a noob; this is my first time. :)
Does anyone have any idea?
OutOfMemoryError: unable to create new native thread means you do not have enough native memory to spawn a thread. Note that this is a completely different thing from heap space.
The most typical causes of the error are running out of stack space, address space or max user processes. Almost all of the cases are related to creating too many threads.
Stack space on Linux can be checked via ulimit -s. Each Java thread consumes a certain amount of stack; this can be configured via -Xss or -XX:ThreadStackSize=...
You can check the default stack size for your platform via java -XX:+PrintFlagsFinal -version | grep ThreadStackSize. You can squeeze in more threads by reducing ThreadStackSize, at the cost of a StackOverflowError in case your application does require deep stacks.
Address space is an issue in case you use a 32-bit JVM. As per your screenshot, you are using a 32-bit JVM, so you are limited to 4 GB of total address space. That includes everything (heap, non-heap, stack, native memory, etc.). That is, the more you allocate for the heap (-Xmx), the fewer threads you can create.
The workaround there is either to use a 64-bit JVM, or to reduce the heap size, etc.
Max user processes on Linux can be monitored via ulimit -u. In case you have the "default" limit of 1000 processes, you can easily hit it, as each thread counts toward that limit AFAIK. The solution there is to increase the limit.
In general, you likely want to add -XX:+HeapDumpOnOutOfMemoryError, collect a heap dump, and check the number of threads in use. If the number of threads is sane (e.g. fewer than 100), then you want to check the configuration (points 1-3 above). In case the number of threads is insane (e.g. over 100), you might want to find the defect and fix it.
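If you prefer to watch the thread count from inside the JVM rather than from a heap dump, a small sketch using the standard ThreadMXBean (the class name and polling interval are illustrative) could look like this:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            // Live threads (daemon + non-daemon) and the peak since JVM start.
            System.out.println("live=" + threads.getThreadCount()
                    + " peak=" + threads.getPeakThreadCount());
            Thread.sleep(5_000);
        }
    }
}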
Hello, I am facing the following error. From what I have read by googling, I am running out of native memory. Any help in resolving this will be highly appreciated. Please note that I am using 32-bit Windows 7.
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.start(JIoEndpoint.java:478)
You are running out of threads, which is not directly related to available memory.
There is an upper limit to the number of threads you can create in Java on a given platform (usually given by the operating system).
My guess would be that this message shows after a while and you have a servlet that does not finish correctly.
http://docs.oracle.com/javase/1.3/docs/tooldocs/solaris/java.html
Look at the java -Xmx option; you may need to increase the heap size.
So if you are running out of memory, which is limiting the number of threads, then you can adjust the stack space associated with each thread with the following JVM option:
-XX:ThreadStackSize=128k
The default stack size is 512k or 1024k (I think) depending on whether you are running a 32-bit or 64-bit JVM.
If you are instead hitting a limit on the number of threads, then you may need to use ExecutorService thread pools or other mechanisms to run many jobs on fewer threads.
Here's a good link for more information: What is the limit to the number of threads you can create?