OutOfMemoryError caused by Apache Axis2 - java

I asked this before but got no response - maybe it was too long - so I'm rephrasing the question:
About 3 days after starting an application that uses Apache Axis2 v1.5.4, OutOfMemoryErrors start to occur (heap size = 2048 MB), either degrading the application server (WAS v7.0.0.7) performance or stopping the logical server (the process still exists).
For certain reasons I have to put a 1-second timeout on the web service invocation; at peak times, timeouts occur (either while establishing the connection or while reading).
Looking at the javacores and heapdumps produced by the server, it seems that there are hung Axis2 threads:
"Axis2 Task" TID:0x00000000E4076200, j9thread_t:0x0000000122C2B100, state:P, prio=5.
at sun/misc/Unsafe.park(Native Method)
at java/util/concurrent/locks/LockSupport.park(LockSupport.java:173)
at java/util/concurrent/SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:433)
at java/util/concurrent/SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:334)
at java/util/concurrent/SynchronousQueue.take(SynchronousQueue.java:868)
at java/util/concurrent/ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java/lang/Thread.run(Thread.java:735)
How can I ensure that Axis2 threads are terminated whether or not a response was returned, i.e. even when an exception occurred?

I'd recommend that you point Visual VM 1.3.2, with all plugins installed, at your application. It'll show you what's happening in your generational heap memory and all the threads that are started. I can't give you the answer, but Visual VM will make the process more transparent.
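Not from the answer above, but one client-side check worth making: if the invocations go through Axis2's ServiceClient, releasing the transport in a finally block (so it runs whether the call returned or threw) and bounding the connect/read timeouts keeps connections and "Axis2 Task" worker threads from piling up. A minimal sketch under those assumptions; the endpoint URL, class name and payload are placeholders, not taken from the question:
import org.apache.axiom.om.OMElement;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;
import org.apache.axis2.transport.http.HTTPConstants;

public class Axis2CallSketch {

    // Placeholder endpoint; not taken from the question.
    private static final String ENDPOINT = "http://example.com/services/SomeService";

    public OMElement invoke(OMElement payload) throws Exception {
        ServiceClient client = new ServiceClient();
        try {
            Options options = new Options();
            options.setTo(new EndpointReference(ENDPOINT));
            // Bound both connection establishment and reading, matching the 1-second limit above.
            options.setProperty(HTTPConstants.CONNECTION_TIMEOUT, Integer.valueOf(1000));
            options.setProperty(HTTPConstants.SO_TIMEOUT, Integer.valueOf(1000));
            options.setTimeOutInMilliSeconds(1000);
            client.setOptions(options);

            return client.sendReceive(payload);
        } finally {
            // Runs whether sendReceive returned or threw: release the HTTP
            // connection and the client's resources so they are not leaked.
            client.cleanupTransport();
            client.cleanup();
        }
    }
}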

Related

Standalone tomcat 9 spikes CPU to 50% every 10 seconds while my web application is idle

I am using Tomcat 9.0.0.M22 with jdk1.8.0_131 on Windows Server 2012 R2 and I have a Spring Boot web application deployed on it. The issue is that every 10 seconds the Commons Daemon service runner spikes the CPU to 50%, even though my deployed web application is idle, then drops back to 0%, and this behavior keeps repeating every 10 seconds.
In my application I don't have any job that runs every 10 seconds, and when I run my web application on Tomcat from Eclipse I don't notice the same behavior, so I am guessing that this is a Tomcat built-in thread.
Do a series of thread dumps as soon as the CPU load spikes.
You can use jdk/bin/jvisualvm to connect to your Tomcat and repeatedly press the thread dump button at the upper right of the Threads tab, or, if you prefer the command line (e.g. via a script), you can also use jdk/bin/jcmd <pid-of-your-tomcat> Thread.print >> dumps.txt
Each dump shows all threads existing at that moment and a stack trace for each thread showing what is being executed.
This should give you some hints about what is creating that load.
Without more information this is just guessing, but this could be the garbage collector trying to do its job every ten seconds and not being able to evict anything because it is all still needed. You could try increasing the memory for Tomcat (-Xmx).
With only that much info it's pretty tough; a couple of points you can think of:
As #jorg pointed out, you can take thread dumps that will give you insights into any blocking threads.
You said it's working fine on your local system; that doesn't necessarily mean the code is optimal for the server platform. Double-check configurations, maxThreads, etc.
Optimize the server JVM by eliminating any excessive garbage collection. Starting the JVM with a larger maximum heap (-Xmx) will decrease the frequency with which garbage collection occurs.
Existing monitoring tools (jVisualVM) can support your analysis.
I was able to stop this behavior completely by setting reloadable="false" in context.xml.

Wildfly 8 CPU, Memory issue

While performing load testing on an application in WildFly 8.0, both memory and CPU usage spiked. After stopping the test, both memory and CPU went down to 50%, but the server fails to accept any request, even from the machine hosting the server, and the WildFly console has the same problem.
Monitoring the objects created in the server through VisualVM gave no clue, so is this an issue with the WildFly 8.0 version? We wonder why the application server doesn't accept any requests even after resource consumption went below 50%.
First, check the log files. Look for any unexplained exceptions. (OOME's in particular can lead to lockups.)
Next, use jstack or similar to get a dump of the thread stacks. Check that the listener thread is still alive and that there are idle worker threads ready to process requests (see the sketch after this answer).
There are a variety of things that can cause lockups under heavy load. Common syndromes include:
OOMEs causing threads to die, leaving data structures locked, or other threads waiting for notify events or similar that are never going to arrive.
Synchronization or similar problems triggered by the load.
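As a hypothetical illustration of the thread-stack check above, the same information jstack prints can be pulled programmatically with the standard ThreadMXBean API. Note this only inspects the JVM it runs in, so it would have to run inside the server (e.g. from a diagnostic servlet) or be adapted to a remote JMX connection; the class name is illustrative:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCheck {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();

        // Report monitor/synchronizer deadlocks first, if any.
        long[] deadlocked = mx.findDeadlockedThreads();
        if (deadlocked != null) {
            for (ThreadInfo info : mx.getThreadInfo(deadlocked, Integer.MAX_VALUE)) {
                System.out.println("DEADLOCKED: " + info);
            }
        }

        // Then list every thread with its state and top stack frame, to see
        // whether the listener thread is alive and workers are idle or stuck.
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            StackTraceElement[] stack = info.getStackTrace();
            System.out.printf("%-40s %-15s %s%n",
                    info.getThreadName(), info.getThreadState(),
                    stack.length > 0 ? stack[0] : "(no frames)");
        }
    }
}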

Tomcat java threads spinning on futex() calls

I have a simple 3-tier setup of an Apache server which sends requests to a Tomcat server, which queries a (MySQL) database to generate HTML results. I'm finding that as soon as Tomcat is started, there are threads in the java process that are spinning away making futex() calls. After a few dozen web requests, the threads trying to serve requests get caught in the same futex() loops, and it stops answering all requests -- they time out on the client side.
I have tried this in Tomcat 6 and Tomcat 7. I have tried it with Oracle's Java 1.7.0_45 and OpenJDK 1.6.0. This VM is a 64 bit Redhat 6 system, and I have tried with their released 2.6.32-358.23.2 kernel and their 2.6.32-431.3.1 kernel, and all combinations are showing these system calls in strace, and eventually locking up.
futex(an addr, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, {a timestamp}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
futex(an addr, FUTEX_WAKE_PRIVATE, 1) = 0
The JVM does this with the default memory, or even if I increase the available mem to 3GB (of the 4GB on the machine). I ran with a GC logger, and GC printed a few minor collections, and was not doing one when the lockup occurred. This machine was created in Jan 2014, so is not in any "leap second" situations.
So my questions would be: why is Java making all of these futex() calls in a fast loop even when the JVM should be "idle"? are they normal? should they be getting the timeout? and is there a known fix?
Thank you for any insights.
I have the same problem, and I suspect it was caused by the "leap second" issue. My Java processes had a high CPU load for a long time. I got the file "leap-a-day.c" from http://marc.info/?t=134138331900001&r=1&w=2, ran it as "./leap-a-day -s", and the CPU load suddenly became low; I don't know why. My OS is Red Hat AS 6.5.
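Some background that neither post states explicitly, offered as general JVM behaviour rather than a diagnosis: on Linux the JVM implements its timed waits (Object.wait with a timeout, LockSupport.parkNanos, internal GC/JIT housekeeping threads) on top of futex, so even an "idle" JVM makes periodic futex() calls that return ETIMEDOUT; that ETIMEDOUT is the normal way a timed wait expires, not an error. A tiny sketch (hypothetical, not from either post) that reproduces a similar strace pattern when run under strace -f:
import java.util.concurrent.locks.LockSupport;

public class TimedParkDemo {
    public static void main(String[] args) {
        // Each timed park typically shows up in strace as a futex wait
        // (e.g. FUTEX_WAIT_BITSET_PRIVATE) returning -1 ETIMEDOUT when the
        // 50 ms expires -- the normal way a timed wait ends, not an error.
        for (int i = 0; i < 100; i++) {
            LockSupport.parkNanos(50000000L); // 50 ms, similar to JVM housekeeping intervals
        }
    }
}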

Configuring Jetty for high request volume

In our application we need to handle request volumes in excess of 5,000 requests per second. We've been told that this is feasible with Jetty in our type of application (where we must expose a JSON-HTTP API to a remote system, which will then initiate inbound requests and connections to us).
We receive several thousand inbound HTTP connections, each of which is persistent and lasts about 30 seconds. The remote server then fires requests at us as quickly as we can respond to them on each of these connections. After 30 seconds the connection is closed and another is opened. We must respond in less than 100ms (including network transit time).
Our server is running in EC2 with 8GB of RAM, 4GB of which is allocated to our Java VM (past research suggested that you should not allocate more than half the available RAM to the JVM).
Here is how we currently initialize Jetty based on various tips we've read around the web:
Server server = new Server();
SelectChannelConnector connector = new SelectChannelConnector();
connector.setPort(config.listenPort);
connector.setThreadPool(new QueuedThreadPool(5120));
connector.setMaxIdleTime(600000);
connector.setRequestBufferSize(10000);
server.setConnectors(new Connector[] { connector });
server.setHandler(this);
server.start();
Note that we originally had just 512 threads in our thread pool; we tried increasing it to 5120, but this didn't noticeably help.
We find with this setup we struggle to handle more than 300 requests per second. We don't think the problem is our handler as it is just doing some quick calculations, and a Gson serialization/deserialization.
When we manually make an HTTP request of our own while it's trying to handle this load, we find that it can take several seconds before it begins to respond.
We are using Jetty version 7.0.0.pre5.
Any suggestions, either for a solution, or techniques to isolate the bottleneck, would be appreciated.
First, Jetty 7.0.0.pre5 is VERY old. Jetty 9 is now out, and has many performance optimisations.
Download a newer version of the 7.x line at
https://www.eclipse.org/jetty/previousversions.html
The following advice is documented at:
Eclipse.org / Jetty - HowTo: High Load
Eclipse.org / Jetty - HowTo: Garbage Collection
Lies, Damned Lies, and Benchmarks
Be sure you read them.
Next, the thread pool size is for handling accepted requests; 512 is high, and 5120 is ridiculous.
Pick a number higher than 50, and less than 500.
If you have a Linux based EC2 node, be sure you configure the networking for maximum benefit at the OS level. (See the document titled "High Load" in the above mentioned list for details)
Be sure you are using a recent JRE/JDK, such as Oracle Java 1.6u38 or 1.7u10. Also, if you have a 64 bit OS, use the 64 bit JRE/JDK.
Set your acceptor count, SelectChannelConnector.setAcceptors(int), to a value between 1 and (number_of_cpu_cores - 1).
Lastly, set up optimized garbage collection, and turn on GC logging to see if the problems you are having are with Jetty or with Java's GC. If you see via the GC logging that there are massive GC "stop the world" events taking lots of time, then you know one more cause for your performance issues.
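Pulling that advice together, here is a minimal sketch that keeps the question's Jetty 7-style API (SelectChannelConnector; the equivalent settings moved around in Jetty 9). The class name, port, handler, and exact pool sizes are illustrative placeholders:
import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.nio.SelectChannelConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class TunedJettyServer {
    public static Server build(int port, Handler handler) throws Exception {
        Server server = new Server();

        // Bounded pool: above 50, well under 500, per the advice above.
        QueuedThreadPool pool = new QueuedThreadPool();
        pool.setMinThreads(50);
        pool.setMaxThreads(256);
        server.setThreadPool(pool);

        SelectChannelConnector connector = new SelectChannelConnector();
        connector.setPort(port);
        // Acceptors between 1 and (number_of_cpu_cores - 1).
        int cores = Runtime.getRuntime().availableProcessors();
        connector.setAcceptors(Math.max(1, cores - 1));
        // Idle timeout covering the ~30 s persistent connections described in the question.
        connector.setMaxIdleTime(60000);

        server.setConnectors(new Connector[] { connector });
        server.setHandler(handler);
        server.start();
        return server;
    }
}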

Configure Tomcat for multiple simultaneous SOAP requests

I'm very much a Tomcat newbie, so I'm guessing that the answer to this is pretty straightforward, but Google is not being friendly to me today.
I have a Java web application mounted on Apache Tomcat. Whilst the application has a front page (for diagnostic purposes), the application is really all about a SOAP interface. No client will ever need to look up the server's web page. The clients send SOAP requests to the server, which parses the requests and then looks up results in a database. The results are then passed back to the clients, again over SOAP.
In its default configuration, Tomcat appears to queue requests. My experiment consisted of installing the client on two separate machines pointing at the same server and running a search at exactly the same time (well, one was 0.11 seconds after the other, but you get the picture).
How do I configure the number of concurrent request threads?
My ideal configuration would be to have X request threads, each of which recycles itself (i.e. calls destructor and constructor and recycles its memory allocation) every Y minutes, or after Z requests, whichever is the sooner. I'm told that one can configure IIS to do this (although I also have no experience with IIS), but how would you do this with Tomcat?
I'd like to be able to recycle threads because Tomcat seems to be grabbing memory when a request comes in and not releasing it, which means that I get occasional (but not consistent) Java Heap Space errors when we are approaching the memory limit (which I have already configured to be 1GB on a 2GB server). I'm not 100% sure if this is due to a memory leak in my application, or just that the tools that I'm using use a lot of memory.
Any advice would be gratefully appreciated.
Thanks,
Rik
Tomcat, by default, can handle up to 150 concurrent HTTP requests - this is totally configurable and obviously varies depending on your server spec and application.
However, if your app has to handle 'bursts' of connections, I'd recommend looking into Tomcat's min and max "spare" threads. These are threads actively waiting for a connection. If there aren't enough waiting threads, Tomcat has to allocate more (which incurs a slight overhead), so you might see a delay.
Also, have a look at my answer to this question which covers how to configure the connector:
Tomcat HTTP Connector Threads
In addition, look at basic JVM tuning - especially in relation to heap allocation overhead and GC pause times.
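For illustration only (the question is about a standalone install, where the same attributes go on the <Connector> element in conf/server.xml), here is a minimal sketch setting the thread-related connector attributes discussed above through Tomcat's embedded API; the class name, port, and values are placeholders:
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;

public class TunedTomcat {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080); // placeholder port

        Connector connector = tomcat.getConnector();
        // Upper bound on concurrent request-processing threads.
        connector.setProperty("maxThreads", "150");
        // Idle "spare" threads kept ready so bursts don't pay thread-creation cost.
        connector.setProperty("minSpareThreads", "25");
        // Connections the OS may queue once every processing thread is busy.
        connector.setProperty("acceptCount", "100");

        // Deploy your webapp/context here before starting.
        tomcat.start();
        tomcat.getServer().await();
    }
}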
