jWebSocket java.lang.OutOfMemoryError: unable to create new native thread

I'm trying to run the jWebSocket server on CentOS 5.8 (a 1and1 VPS). Just after starting the server and a few requests from the client (reloading the web page), I get this error:
Exception in thread "jWebSocket TCP-Connector 01.33719.16" java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:691)
at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943)
at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1555)
at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:333)
at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:546)
at java.util.concurrent.ScheduledThreadPoolExecutor.submit(ScheduledThreadPoolExecutor.java:646)
at org.jwebsocket.tcp.TimeoutOutputStreamNIOWriter.sendPacket(TimeoutOutputStreamNIOWriter.java:215)
at org.jwebsocket.tcp.TCPConnector.sendPacket(TCPConnector.java:279)
at org.jwebsocket.server.BaseServer.sendPacket(BaseServer.java:186)
at org.jwebsocket.server.TokenServer.sendPacketData(TokenServer.java:405)
at org.jwebsocket.server.TokenServer.sendTokenData(TokenServer.java:388)
at org.jwebsocket.server.TokenServer.sendToken(TokenServer.java:312)
at org.jwebsocket.plugins.TokenPlugIn.sendToken(TokenPlugIn.java:174)
at org.jwebsocket.plugins.system.SystemPlugIn.sendWelcome(SystemPlugIn.java:397)
at org.jwebsocket.plugins.system.SystemPlugIn.connectorStarted(SystemPlugIn.java:261)
at org.jwebsocket.plugins.BasePlugInChain.connectorStarted(BasePlugInChain.java:126)
at org.jwebsocket.server.TokenServer.connectorStarted(TokenServer.java:170)
at org.jwebsocket.engines.BaseEngine.connectorStarted(BaseEngine.java:93)
at org.jwebsocket.tcp.TCPEngine.connectorStarted(TCPEngine.java:320)
at org.jwebsocket.tcp.TCPConnector$ClientProcessor.run(TCPConnector.java:502)
at java.lang.Thread.run(Thread.java:722)
But when I run jWebSocket on my own computer, everything works fine. I also made my own virtual server using VirtualBox with a fresh CentOS 5.8 install, and it works there too.
I noticed that Java on the 1and1 VPS uses a lot of memory, ~1 GB (about 10 times more than on my computer or in VirtualBox). On the 1and1 VPS I have 2 GB of RAM (and that is where I get the error), while in VirtualBox jWebSocket runs just fine with only 512 MB of RAM.
What could be the cause of this out-of-memory error? Please share any suggestions; I don't know what else to try.

Check this link: http://devgrok.blogspot.sk/2012/03/resolving-outofmemoryerror-unable-to.html
It sounds like they had to increase the Linux per-user process limit.
At least the similarity with your problem is that you get the same exception, and both systems run Linux :)

It is a problem with 1and1 (http://www.1and1.pl/) virtual hosting. They use Parallels Virtuozzo Containers with so-called User Beancounters, or UBC parameters. This is a set of limits, and it was the cause of my problems. You can see these limits in /proc/user_beancounters. When one of the limits, called "numproc", is reached, my application can't create new threads.
EDIT
At that time jWebSocket was designed in such a way that it created one thread for every incoming token and did not destroy the thread when it was no longer needed.
I'm not using jWebSocket any more; I replaced it with a WebSocket server written in PHP.
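For reference, a minimal check (a sketch only; it assumes a Virtuozzo/OpenVZ-style container that exposes /proc/user_beancounters, and reading that file may require root) which prints the numproc row so the held value and failcnt can be compared against the barrier/limit:

// Sketch: print the header and the "numproc" row of /proc/user_beancounters.
// Columns are: uid  resource  held  maxheld  barrier  limit  failcnt
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class NumprocCheck {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader("/proc/user_beancounters"));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains("resource") || line.contains("numproc")) {
                    System.out.println(line.trim());
                }
            }
        } finally {
            reader.close();
        }
    }
}

A non-zero failcnt in the numproc row means the container has already refused at least one process or thread creation because of that limit.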

Related

JVM only using half the cores on a server

I have a number of Java processes using OpenJDK 11 running on Windows Server 2019. The server has two physical processors and 36 total cores; it is an HP machine. When I start my processes, I see work allocated in Task Manager across all the cores. This is good. However, after the processes run for some period of time (not a consistent amount of time), the machine begins to utilize only half the cores.
I am working off a few theories:
The JDK has some problem that is preventing it from consistently accessing all the cores.
Something with Windows Server 2019 is causing a problem, limiting Java from accessing all the cores.
There is a thermal management problem and one processor is getting too hot and the OS is directing all the processing to the other processor.
There is some issue with hyper-threading and the 'logical' processors that is causing the process to not be able to utilize all the cores.
I've tried searching for JDK issues and haven't found anything like this mentioned. I went down to the server and while it's running a little warm, it didn't appear excessively hot. I have not yet tried disabling hyper-threading. I have tried a number of parameters to force the JVM to use all the cores and indeed the process initially does use all the cores; I can see the activity in Task Manager.
Anyone have any thoughts? This is a really baffling problem and I'd appreciate any ideas.
UPDATE: I am able to make it use the other processor by using Task Manager to assign one of the java.exe processes to the other processor. This also works from the java invocation on the command line, with an argument specifying which socket to use.
That said, this feels like a hack. I don't see why I should have to manually assign a socket to each of my Java processes; that job should be left to the OS. I'm still not sure exactly where the problem is, whether it's the OS or something else.
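One thing worth checking from inside the JVM is how many logical processors it actually sees. On Windows machines with more than 64 logical processors (36 cores with hyper-threading gives 72), the OS splits them into processor groups, and a process can end up confined to a single group, which would match the "half the cores" symptom. A quick sketch (the class name is just illustrative):

// Sketch: print how many logical processors this JVM sees. If the number is
// roughly half the machine's logical CPU count, the process is likely being
// scheduled within a single Windows processor group.
public class CpuCountCheck {
    public static void main(String[] args) {
        System.out.println("Available processors: "
                + Runtime.getRuntime().availableProcessors());
    }
}

If the printed number is about half of the machine's logical CPU count, the manual affinity assignment described in the update is effectively working around the processor-group assignment.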

High CPU on tomcat8 irrespective of war deployed [duplicate]

This question already has an answer here:
tomcat8 at 100% cpu when trying https on port 80
(1 answer)
Closed 5 years ago.
I have started experiencing a strange issue with my tomcat8 server. I use it exclusively to run two applications - libresonic (a music streaming app) and guacamole (a remote desktop gateway).
I am experiencing the tomcat process taking 100% of available CPU after the server has been running for a few hours with either application deployed. In order to troubleshoot I have done the following:
Spun up a vanilla Debian 8.6 Virtual Machine using KVM and installed:
Tomcat8
jdk-8 - 1.8.0_111
If I leave the Tomcat instance running with no applications deployed, the server and CPU usage remain idle.
If I deploy one of the applications (it doesn't matter which one), after a few hours the CPU usage climbs to 100%. Killing and restarting the tomcat server causes the CPU usage to drop, and then climb back to 100% after a few hours
Note that memory usage remains steady with plenty of free memory, so I don't believe this is a GC issue. Nothing related to memory is reported in the logs.
Catalina.out does not report any errors
I have taken thread dumps during the periods of high CPU while each application is deployed. Other than being able to identify the threads that are in a runnable state and consuming CPU, I cannot establish the root cause or come up with ideas to rectify the issue.
Can someone help? Threaddumps are linked below
Download threaddumps
Perhaps it's related to this case (https://bz.apache.org/bugzilla/show_bug.cgi?id=57544). I actually have the same symptoms with Tomcat 8.0.14. What is your Tomcat version?
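A general technique for working out which dumped thread is the one burning CPU (not specific to this Tomcat issue): take the id of the hot thread from top -H or ps -eLf, convert it to hex, and look for the matching nid=0x... entry in the thread dump. A tiny helper for the conversion (the class name is just illustrative):

// Sketch: convert a native thread id (decimal, as shown by top -H) to the
// hexadecimal form used in the "nid=0x..." field of a Java thread dump.
public class NidConverter {
    public static void main(String[] args) {
        if (args.length != 1) {
            System.err.println("usage: java NidConverter <decimal thread id>");
            return;
        }
        long tid = Long.parseLong(args[0]); // e.g. 12345 from top -H
        System.out.println("nid=0x" + Long.toHexString(tid));
    }
}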

Java process hangs, no thread dump could be taken

I am facing a strange case. I'd be glad if you could share your comments.
We have a solution running on Java 1.6.0_85, and sometimes the Java process hangs in production. The solution runs on a Linux server.
I investigated the GC logs; there is no Full GC, and pause times also look reasonable.
Then we tried to take a thread dump when the case happens; however kill -3, ./jstack and ./jstack -F do not work. No thread dump could be taken. What could be the reason for that? Any ideas on investigating the issue?
BR
-emre
After a while it was understood that the issue occurred due to pstack and gdb commands which were executed on the java process for operational purposes. Somehow pstack and gdb suspend the java process, and therefore we were not able to take a thread or heap dump.
We're using jConsole with the topthreads plugin to analyze such cases. The plugin uses JMX to check the thread runtimes and displays each thread's CPU usage since the start of tracking as well as its current stack trace.
To connect to our servers from a local machine we use tunnels in PuTTY, i.e. we first connect to the server via PuTTY and then point jConsole at a local port which is tunneled to the server.
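For a rough, programmatic approximation of what such a JMX-based tool reports (a sketch, not the plugin's actual code), the standard ThreadMXBean API can list per-thread CPU time and stack traces from inside the running JVM:

// Sketch: list all live threads with their accumulated CPU time and the top
// few stack frames, similar in spirit to what a "top threads" view shows.
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class TopThreadsSketch {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (mx.isThreadCpuTimeSupported() && !mx.isThreadCpuTimeEnabled()) {
            mx.setThreadCpuTimeEnabled(true);
        }
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo info = mx.getThreadInfo(id, 5); // top 5 stack frames
            if (info == null) {
                continue; // thread already terminated
            }
            long cpuNanos = mx.getThreadCpuTime(id); // -1 if not supported
            System.out.printf("%-40s cpu=%d ms state=%s%n",
                    info.getThreadName(), cpuNanos / 1000000L, info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}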

OutOfMemoryError on OpenShift

I have a Tomcat Java application running on OpenShift (1 small gear) that consists of two main parts: A cron job that runs every minute, parses information from the web and saves it into a MongoDB database, and some servlets to access that data.
After deploying the app, it runs fine, but sooner or later the server will stop and I cannot access the servlets anymore (the HTTP request takes very long, and if it finishes, it returns a Proxy Error). I can only force stop the app using the rhc command line and restart it.
When I look at the jbossews.log file, I see multiple occurrences of this error:
Exception in thread "http-bio-127.5.35.129-8080-Acceptor-0" java.lang.OutOfMemoryError:
unable to create new native thread
Is there anything I can do to prevent this error without needing to upgrade to a larger gear with more memory?
From your description I understand that there is some memory leak issue with your app. That may be because you are not stopping your threads.
Sometimes a thread will not stop automatically, and then we need to stop it explicitly.
I guess it's not a memory problem but an OS resource problem: you are running out of native threads, i.e. the maximum number of threads your JVM can have.
You can increase it this way:
ulimit -s newvalue
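To see where the limit actually sits on a given gear or host, a throwaway probe that creates idle daemon threads until thread creation fails can help (a diagnostic sketch only; run it in a test environment rather than the production container, since it deliberately exhausts the limit):

// Diagnostic sketch: keep creating parked daemon threads until the JVM can no
// longer get a native thread, then report how many were created. Useful for
// comparing the effective thread limit on different hosts or containers.
public class ThreadLimitProbe {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // keep the thread alive but idle
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Failed after creating " + count + " threads: " + e);
        }
    }
}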

Tomcat java threads spinning on futex() calls

I have a simple 3-tier setup of an Apache server which sends requests to a Tomcat server, which queries a (MySQL) database to generate HTML results. I'm finding that as soon as Tomcat is started, there are threads in the java process that are spinning away making futex() calls. After a few dozen web requests, the threads trying to serve requests get caught in the same futex() loops, and it stops answering all requests -- they time out on the client side.
I have tried this in Tomcat 6 and Tomcat 7. I have tried it with Oracle's Java 1.7.0_45 and OpenJDK 1.6.0. The VM is a 64-bit Red Hat 6 system, and I have tried both their released 2.6.32-358.23.2 kernel and their 2.6.32-431.3.1 kernel; all combinations show these system calls in strace and eventually lock up.
futex(an addr, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, {a timestamp}, ffffffff)
= -1 ETIMEDOUT (Connection timed out)
futex(an addr, FUTEX_WAKE_PRIVATE, 1) = 0
The JVM does this with the default memory settings, and also if I increase the available memory to 3 GB (of the 4 GB on the machine). I ran with a GC logger; GC printed a few minor collections and was not doing one when the lockup occurred. This machine was created in January 2014, so it is not in any "leap second" situation.
So my questions would be: why is Java making all of these futex() calls in a fast loop even when the JVM should be "idle"? are they normal? should they be getting the timeout? and is there a known fix?
Thank you for any insights.
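As far as the futex() calls themselves go: timed waits inside the JVM (Object.wait with a timeout, Thread.sleep, parked pool threads) are implemented with futex on Linux, so FUTEX_WAIT calls that end in ETIMEDOUT are normal even for an otherwise idle JVM; the problem is when they spin continuously instead of blocking for the full timeout. A minimal program to strace for comparison (a sketch, with illustrative names):

// Sketch for comparison only: a thread doing short timed waits. Running this
// under "strace -f -e trace=futex java FutexIdleDemo" should show periodic
// timed futex waits ending in ETIMEDOUT, the normal signature of a timed wait,
// as opposed to a tight loop that never actually blocks.
public class FutexIdleDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    synchronized (LOCK) {
                        try {
                            LOCK.wait(500); // timed wait -> futex call with a timeout
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            }
        });
        waiter.setDaemon(true);
        waiter.start();
        Thread.sleep(60000); // keep the JVM alive for a minute of tracing
    }
}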
I have the same problem, and I suspect the "leap second" issue caused it. My Java processes had a high CPU load for a long time. I got the file "leap-a-day.c" from http://marc.info/?t=134138331900001&r=1&w=2 and ran it as "./leap-a-day -s", and the CPU load suddenly became low; I don't know why. My OS is Red Hat AS 6.5.
