Wildfly 8 CPU, Memory issue - java

While performing load testing on an application in WildFly 8.0, both memory and CPU usage spiked. After we stopped the test, both dropped back to around 50%, but the server still fails to accept any request, even from the machine hosting it; the WildFly console has the same problem.
Monitoring the objects created in the server through VisualVM gave us no clue, so is this an issue with the WildFly 8.0 version? We don't understand why the application server refuses requests even after resource consumption went below 50%.

First, check the log files. Look for any unexplained exceptions. (OOMEs in particular can lead to lockups.)
Next, use jstack or similar to get a dump of the thread stacks. Check that the listener thread is still alive and that there are idle worker threads ready to process requests.
There are a variety of things that can cause lockups under heavy load. Common syndromes include:
OOMEs causing threads to die, leaving data structures locked or other threads waiting for notify events (or similar) that are never going to arrive.
Synchronization or similar problems triggered by the load.
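If taking a dump from outside the process is awkward, the same information is available programmatically. A minimal sketch using the standard ThreadMXBean API (any filtering by thread name would depend on whatever names your container uses):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadStateDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // Dump every live thread with its stack and the locks it holds or waits on
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                System.out.printf("%s state=%s%n", info.getThreadName(), info.getThreadState());
                for (StackTraceElement frame : info.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }

Threads that stay BLOCKED or WAITING on the same monitor across several dumps are the ones worth examining first.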

Related

Server Overloaded

I am running a Tomcat/Spring server on my local machine, and I'm running another program which sends POST requests to the server one after the other in quick succession. Each request causes an update on a remote Postgresql database, and finishes (flushes a response code) before the next one is sent.
After about 100 requests, the server starts taking longer to respond. After about 200 requests, the server stops responding and consumes all available CPU until I manually kill it, either in Eclipse or in the Windows Task Manager.
At one point the server spat out an error about garbage collection, but I haven't seen it the last few times I've tried this. When I saw this garbage collection error, I added a 5-second pause every 20 requests to try to give the server time for garbage collection, but it doesn't seem to have helped.
How can I track down and resolve the cause of this server overload?
Did you monitor memory usage? Here are some ways to track down the problem:
Track its memory usage over time in Task Manager.
Try taking thread dumps from time to time.
Use VisualVM to track CPU usage.
See if DB connections are getting closed.
There could be many more things that can cause it but visualVM can be your tool to start diagnosing these issues.
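On the connection point, a frequent cause of this symptom is request handlers that never return connections, so every request leaks one until the pool or the database runs out. A minimal sketch of the safe pattern with plain JDBC against PostgreSQL (the URL, credentials, and table are placeholders; with Spring, the same applies to connections taken from the DataSource):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class SafeUpdate {
        public void updateRow(int id, String value) throws SQLException {
            // try-with-resources closes the statement and connection even when
            // the update throws, so repeated requests cannot exhaust connections.
            try (Connection con = DriverManager.getConnection(
                         "jdbc:postgresql://localhost:5432/mydb", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "UPDATE items SET value = ? WHERE id = ?")) {
                ps.setString(1, value);
                ps.setInt(2, id);
                ps.executeUpdate();
            }
        }
    }

If a thread dump taken around the 100-200 request mark shows most handlers waiting inside the driver or the pool, leaked or saturated connections are the likely culprit.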

How to improve a Java application working with BasicDataSource object?

We have an issue with a server at work and I'm trying to understand what is happening. It's a Java application that runs on a Linux server; the application receives information from a TCP socket, analyses it, and after the analysis writes to the database.
Sometimes the number of packets is very high and the Java application needs to write to the database many times per second (around 100 to 500 times).
I tried to reproduce the issue on my own computer and looked at how the application behaves with JProfiler.
The memory usage seems to always go up; is it a memory leak (sorry, I'm not a Java programmer, I'm a C++ programmer)?
After 133 minutes
After 158 minutes
I have many locked threads; does that mean the application is not programmed correctly?
Are there too many connections to the database (the application uses the BasicDataSource class for a connection pool)?
The program has no FIFO to manage the database writes for the information continually arriving from the TCP port. My questions are (remember that I'm not a Java programmer and I don't know whether this is the way a Java application should work or whether the program could be written more efficiently):
Do you think something is wrong with the code, in that it does not correctly manage the writes, reads, and updates on the database and so consumes too much memory and CPU time, or is this just the way the BasicDataSource class works?
How do you think I can improve this (if you think it is an issue): by creating a FIFO and removing the part of the code that creates too many threads? Or are those threads not the application's own threads but BasicDataSource's threads?
There are several areas to dig into, but first I would try and find what is actually blocking the threads in question. I'll assume everything before the app is being looked at as well, so this is from the app down.
I know the graphs show free memory, but they are just a point in time, so I can't see a trend. GC logging is available; I haven't used JProfiler much, though, so I am not sure how to point you to it in that tool. I know in DynaTrace I can see GC events and their duration, as well as any other blocking events and their root cause. If that isn't available, there are command-line switches to log GC activity so you can see its duration and frequency. That is one area that could block.
I would also look at how many connections you have in your pool. If there are 100-500 writes per second trying to get through and they are stacking up because you don't have enough connections to work them, that could be a problem as well. The image shows all transactions but doesn't speak to the pool size. Transactions blocked with nowhere to go could lead to your memory jumps as well.
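For reference, pool limits on BasicDataSource are configured on the data source itself. A minimal sketch, assuming Apache Commons DBCP 2 (on DBCP 1.x the corresponding setters are setMaxActive and setMaxWait); the URL and numbers are placeholders to tune against your own measurements:

    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolConfig {
        public static BasicDataSource createDataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:postgresql://dbhost:5432/mydb"); // placeholder URL/driver
            ds.setUsername("app");
            ds.setPassword("secret");
            ds.setMaxTotal(50);        // upper bound on open connections
            ds.setMaxIdle(10);         // connections kept warm between bursts
            ds.setMaxWaitMillis(2000); // fail fast instead of queueing forever
            return ds;
        }
    }

If threads are routinely blocked inside getConnection(), either the pool is too small or the database itself is too slow, which is the next thing to check.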
There is also the flip side: the database can't handle the traffic and is pegged, and that is what is blocking the connections, so you would want to monitor that end of things and see if it is a possible cause of the blocking.
There is also the chance that the blocking is occurring from the SQL being run as well, waiting for page locks to be released, etc.
Lots of areas to look at, but I would address and verify one layer at a time starting with the app and working down.
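On the FIFO idea raised in the question, decoupling the TCP reader from the database writers with a bounded queue is a common pattern worth considering. This is a rough sketch only; the Packet type and writeToDatabase method are hypothetical stand-ins for the application's own code:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class WriteBuffer {
        // Bounded queue: if the database falls behind, the TCP reader blocks on
        // put() instead of the JVM accumulating unbounded work in memory.
        private final BlockingQueue<Packet> queue = new ArrayBlockingQueue<>(10_000);

        public void start(int writerThreads) {
            for (int i = 0; i < writerThreads; i++) {
                Thread writer = new Thread(() -> {
                    try {
                        while (true) {
                            writeToDatabase(queue.take()); // blocks until a packet arrives
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                writer.setDaemon(true);
                writer.start();
            }
        }

        // Called by the TCP reader thread.
        public void enqueue(Packet p) throws InterruptedException {
            queue.put(p);
        }

        private void writeToDatabase(Packet p) {
            // hypothetical: run the INSERT/UPDATE here using a pooled connection
        }

        static class Packet { /* hypothetical payload */ }
    }

A handful of writer threads draining one queue is usually enough to keep the connection pool busy without spawning a thread per incoming packet.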

XDBC app server showing only 2 active threads over MarkLogic XDBC admin console

Our multi-threaded Java application uses the Java XCC library. In the MarkLogic admin console, under the status tab, only 2 threads are shown as active while the application is running; that is the most probable reason for the bottleneck in our project. Please advise what is wrong here.
To effectively run xcc requests in parallel you need to make sure you are using separate Sessions for each thread. See:
https://docs.marklogic.com/javadoc/xcc/com/marklogic/xcc/Session.html
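As a rough illustration of one Session per thread (the connection URI and query are placeholders; this assumes the standard ContentSource/Session API linked above):

    import java.net.URI;
    import com.marklogic.xcc.ContentSource;
    import com.marklogic.xcc.ContentSourceFactory;
    import com.marklogic.xcc.ResultSequence;
    import com.marklogic.xcc.Session;

    public class ParallelXcc {
        public static void main(String[] args) throws Exception {
            // One ContentSource can be shared; Sessions should not be shared across threads.
            ContentSource cs = ContentSourceFactory.newContentSource(
                    new URI("xcc://user:password@localhost:8000/Documents")); // placeholder
            for (int i = 0; i < 8; i++) {
                new Thread(() -> {
                    Session session = cs.newSession();
                    try {
                        ResultSequence rs = session.submitRequest(
                                session.newAdhocQuery("fn:current-dateTime()"));
                        System.out.println(rs.asString());
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        session.close();
                    }
                }).start();
            }
        }
    }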
Having only 2 active threads running is not necessarily a sign of a problem; it's possible that your requests are being processed as fast as you issue them and read the responses. If your queries are fast enough, there is no need for more threads. Without more information about your queries, response times, and server load, it's not possible to say whether there is a bottleneck or not. How many threads are you running? Compare the response time as you increase the number of threads. Check that you have sufficient network I/O so that your requests are not bottlenecked in the network layer.
I suggest profiling your queries and using the Performance History console to see if the server is running at high utilization. Try increasing the number of client threads, possibly running them from different servers.

Need help analyze Java Thread dump

I am using the Samurai tool to analyse thread dumps. It looks like there are many blocked threads, but I have no clue how to derive anything from the thread dump.
I have an SQL query in my Java application, which runs on WebLogic, that takes an enormous amount of time to complete. Running this query by clicking the button in my Java application several times hangs my JVM.
Thread dumps can be found # : http://www.megafileupload.com/en/file/379103/biserver2-txt.html
Can you help me understand what the thread dump says?
The amount of data you provide is a bit overwhelming, so let me just give you a hint on how to proceed. For the analysis I use the open-source ThreadLogic application, based on TDA. It takes a few seconds to parse 3 MiB worth of data, but it nicely shows the 22 different stack trace dumps in the one file:
Drilling down reveals a really disturbing list of warnings and alerts.
I don't have time to examine all of them, but here is a list of those marked as FATAL (keep in mind that false positives are also to be expected):
Wait for SLSB Beans
Description: Waiting for Stateless Session Bean (SLSB) instance from the SLSB Free pool
Advice: Beans all in use, free pool size insufficient
DEADLOCK
Description: Circular Lock Dependency Detected leading to Deadlock
Advice: Deadlock detected with a circular dependency in locks; blocked threads will not recover without a server restart. Fix the order of locking and/or try to avoid locks, or change the order of locking at the code level. Report with an SR for server/product code. (A programmatic check for this is sketched after this list.)
Finalizer Thread Blocked
Description: Finalizer Thread Blocked
Advice: Check if the Finalizer Thread is blocked for a lock which can lead to wasted memory waiting to be reclaimed from Finalizer Queue
WLS Unicast Clustering unhealthy
Description: Unicast messaging among Cluster members is not healthy
Advice: Unicast group members are unable to communicate properly, apply latest Unicast related patches and enable Message Ordering or switch to Multicast
WLS Muxer is processing server requests
Description: WLS Muxer is handling subsystem requests
Advice: WLS Server health is unhealthy as some subsystems are overwhelmed with requests, which is leading to the Muxer threads directly handling requests instead of dispatching to the relevant subsystems. There is likely a bug here.
Stuck Thread
Description: Thread is Stuck, request taking very long time to finish
Advice: Check why the thread or call is taking so long. Is it blocked on an unavailable or bad resource, or contending for a lock? Can be ignored if it is doing repeated work in a loop (like adapter threads polling for events in an infinite loop).
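On the DEADLOCK finding above, the JVM itself can confirm whether a genuine lock cycle exists, which helps separate real deadlocks from threads that are merely stuck. A minimal sketch using the standard ThreadMXBean API, to be run inside the affected JVM (for example from a diagnostic servlet or a JMX client):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // IDs of threads deadlocked on monitors or ownable synchronizers,
            // or null when no cycle exists.
            long[] ids = mx.findDeadlockedThreads();
            if (ids == null) {
                System.out.println("No deadlock detected");
                return;
            }
            for (ThreadInfo info : mx.getThreadInfo(ids, Integer.MAX_VALUE)) {
                System.out.printf("%s blocked on %s held by %s%n",
                        info.getThreadName(), info.getLockName(), info.getLockOwnerName());
            }
        }
    }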
The issue was with WLDF logging information to a log file. Once that was disabled, performance improved enormously. I am not a fan of ThreadLogic as a tool for thread dump analysis; it reports a circular deadlock whenever you have stuck threads, no matter how different the underlying issue is.
Thread dumps are a snapshot of all threads running in the application at a given moment. A thread dump will contain hundreds or thousands of application threads, and it would be hard to scroll through every single line of the stack trace in every single thread. A Call Stack Tree consolidates all the threads' stack traces into one single tree and gives you one single view, which makes thread dump navigation much simpler and easier. Below is a sample call stack tree generated by fastThread.io.
Fig 1: Call stack Tree
You can keep drilling down to see code execution path. Fig 2 shows the drilled down version of a particular branch in the Call Stack Tree diagram.
Fig 2: Drilled down Call Stack Tree

java.lang.OutOfMemoryError caused by Apache Axis2

I asked this before but got no response - maybe it was too long - so I'm rephrasing the question:
About 3 days after starting an application that uses Apache Axis2 v1.5.4, java.lang.OutOfMemoryError starts to occur (heap size = 2048 MB), resulting either in degraded application server (WAS v7.0.0.7) performance or in the logical server stopping (the process still exists).
For various reasons I have to put a 1-second timeout on the web service invocation; at peak times, timeouts occur (either on connection establishment or on reading).
Looking in the javacores and the heapdumps thrown by the server:
It seems that there are hung Axis2 threads:
"Axis2 Task" TID:0x00000000E4076200, j9thread_t:0x0000000122C2B100, state:P, prio=5.
at sun/misc/Unsafe.park(Native Method)
at java/util/concurrent/locks/LockSupport.park(LockSupport.java:173)
at java/util/concurrent/SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:433)
at java/util/concurrent/SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:334)
at java/util/concurrent/SynchronousQueue.take(SynchronousQueue.java:868)
at java/util/concurrent/ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java/lang/Thread.run(Thread.java:735)
How can I ensure that Axis2 threads are terminated whether or not a response was returned, i.e. when an exception occurred?
I'd recommend that you point Visual VM 1.3.2, with all plugins installed, at your application. It'll show you what's happening in your generational heap memory and all the threads that are started. I can't give you the answer, but Visual VM will make the process more transparent.
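If the leaked resources turn out to be on the Axis2 client side, one common mitigation (a sketch under assumptions, not necessarily the fix for this exact case) is to set explicit timeouts and release the transport in a finally block so that a timed-out or failed call cannot keep its resources alive. This assumes the standard ServiceClient/Options API; the endpoint is a placeholder:

    import org.apache.axiom.om.OMElement;
    import org.apache.axis2.addressing.EndpointReference;
    import org.apache.axis2.client.Options;
    import org.apache.axis2.client.ServiceClient;

    public class Axis2CallWithCleanup {
        public OMElement invoke(OMElement payload) throws Exception {
            ServiceClient client = new ServiceClient();
            try {
                Options options = new Options();
                options.setTo(new EndpointReference("http://example.com/service")); // placeholder
                options.setTimeOutInMilliSeconds(1000); // matches the 1-second budget in the question
                client.setOptions(options);
                return client.sendReceive(payload);
            } finally {
                // Release the underlying HTTP connection and per-call resources
                // whether the call succeeded, timed out, or threw.
                client.cleanupTransport();
                client.cleanup();
            }
        }
    }

Reusing a single ConfigurationContext across ServiceClient instances, rather than creating a fresh one per call, is also often cited as a way to avoid Axis2 spawning new "Axis2 Task" pools that are never shut down.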
