Server Overloaded - Java

I am running a Tomcat/Spring server on my local machine, and another program that sends POST requests to the server one after another in quick succession. Each request triggers an update on a remote PostgreSQL database and finishes (flushes a response code) before the next one is sent.
After about 100 requests, the server starts taking longer to respond. After about 200 requests, it stops responding and consumes all available CPU until I kill it manually, either in Eclipse or in the Windows Task Manager.
At one point the server spat out an error about garbage collection, but I haven't seen it the last few times I've tried this. When I saw this garbage collection error, I added a 5-second pause every 20 requests to try to give the server time for garbage collection, but it doesn't seem to have helped.
How can I track down and resolve the cause of this server overload?

Have you monitored memory usage? Here are some ways to track down the problem:
Track its memory usage over time in Task Manager.
Take thread dumps from time to time.
Use VisualVM to track CPU usage.
Check whether DB connections are being closed.
Many other things could cause this, but VisualVM is a good tool to start diagnosing these issues.
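
On the last point: if connections or statements are leaked on each request, the pool or the database eventually runs out of them and the server grinds to a halt, which would be consistent with the symptoms described. A minimal sketch of the usual fix, using try-with-resources so the JDBC objects are always released (the DataSource wiring and the SQL here are placeholders, not the actual application code):

    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OrderDao {

        private final DataSource dataSource; // e.g. the pool Tomcat/Spring is configured with

        public OrderDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public void updateStatus(long id, String status) throws SQLException {
            // try-with-resources closes the statement and returns the connection
            // to the pool even if the update throws
            String sql = "UPDATE orders SET status = ? WHERE id = ?";
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, status);
                ps.setLong(2, id);
                ps.executeUpdate();
            }
        }
    }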

Related

Wildfly 8 CPU, Memory issue

While performing load testing on an application in WildFly 8.0, both memory and CPU usage spiked. After the testing stopped, both went back down to about 50%, but the server no longer accepts any requests, even from the machine hosting it; the WildFly console has the same problem.
Monitoring the objects created in the server through VisualVM gave us no clue, so is this an issue with WildFly 8.0? We wonder why the application server doesn't accept any requests even after resource consumption dropped below 50%.
First, check the log files and look for any unexplained exceptions. (OOMEs in particular can lead to lockups.)
Next, use jstack or similar to get a dump of the thread stacks. Check that the listener thread is still alive and that there are idle worker threads ready to process requests.
There are a variety of things that can cause lockups under heavy load. Common syndromes include:
OOMEs causing threads to die, leaving data structures locked or leaving other threads waiting for notify events that are never going to arrive.
Synchronization or similar problems triggered by the load.
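
Besides running jstack against the process from the outside, the same information can be captured from inside the JVM, which is handy if you want to log stacks automatically when requests start backing up. A rough sketch using the standard ThreadMXBean API (where and when you call it is up to you):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDumper {

        /** Prints every thread's state and stack, including held monitors/locks. */
        public static void dumpAllThreads() {
            ThreadMXBean bean = ManagementFactory.getThreadMXBean();
            for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
                System.out.print(info); // ThreadInfo.toString() includes the stack trace
            }
            // findDeadlockedThreads() returns null when no deadlock is detected
            long[] deadlocked = bean.findDeadlockedThreads();
            if (deadlocked != null) {
                System.out.println("Deadlock detected, thread ids: "
                        + java.util.Arrays.toString(deadlocked));
            }
        }
    }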

How can I kill a Java program that is left hanging (in Windows)?

Suppose that in Java I have a simple Echo Client/Server pair. I start up the Server process, but I never actually start the Client process.
What I'd like to do is have a third program (call it "Parent") that automatically kills the Server program after it has been idle for 30 seconds.
Could I use PowerShell to do this, or do I need C or some other language?
Yes, as long as you have a good way to:
1) uniquely identify the server process
2) determine that the server is idle
If there will never be two (or more) instances of the server process, then you can identify the process by name (make sure your process has a unique name!)
Determining that the server is idle may be tricky (hence the comments suggesting the server stop itself rather than resorting to a "parent" process). However, if the server is memory- or CPU-intensive when it is active, you may be able to use that to distinguish idle from busy. You can use get-process (gps) to check the process's current CPU and memory use. The trick is knowing how long it has been idle when it currently looks idle: to do this reliably you need to poll with gps more frequently than the server takes to process a request. Otherwise you might poll before the server gets busy, miss the busy period, and poll again once it is idle, concluding that it was idle the whole time.
You can avoid the dilemma above by having the server change something when it knows it has been idle for 30 seconds, like the window title. (But if you're doing that why not just have the server terminate itself?)
Once the PS script determines the server is idle, get-process -name yourServerProcess | stop-process will stop the server process. Specify "yourServerProcess" without the .EXE at the end. If you get a permissions error, run PS as administrator.
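
If you do take the suggestion above and let the server terminate itself, a small watchdog inside the Java server is enough. A rough sketch (the 30-second timeout and the touch() call sites are assumptions about how your Echo server is structured):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    /** Exits the JVM once no activity has been recorded for the given timeout. */
    public class IdleWatchdog {

        private final long idleTimeoutMillis;
        private volatile long lastActivity = System.currentTimeMillis();

        public IdleWatchdog(long idleTimeoutMillis) {
            this.idleTimeoutMillis = idleTimeoutMillis;
        }

        /** Call this whenever the server accepts a connection or handles a request. */
        public void touch() {
            lastActivity = System.currentTimeMillis();
        }

        public void start() {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "idle-watchdog");
                t.setDaemon(true); // don't keep the JVM alive on its own
                return t;
            });
            scheduler.scheduleAtFixedRate(() -> {
                if (System.currentTimeMillis() - lastActivity > idleTimeoutMillis) {
                    System.err.println("Idle for too long, shutting down.");
                    System.exit(0);
                }
            }, 1, 1, TimeUnit.SECONDS);
        }
    }

Usage would be along the lines of: create new IdleWatchdog(TimeUnit.SECONDS.toMillis(30)) at startup, call start(), and call touch() from the server loop on every accepted connection.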

How to improve a Java application working with BasicDataSource object?

We have an issue with one of our servers at work and I'm trying to understand what is happening. It's a Java application that runs on a Linux server; the application receives information from a TCP socket, analyses it, and then writes the results to the database.
Sometimes there are so many packets that the Java application needs to write to the database many times per second (roughly 100 to 500 times).
I tried to reproduce the issue on my own computer and watched how the application works with JProfiler.
Memory usage seems to keep going up; is this a memory leak? (Sorry, I'm not a Java programmer, I'm a C++ programmer.)
(JProfiler memory screenshots captured after 133 minutes and after 158 minutes.)
I see many blocked threads; does that mean the application was not programmed correctly?
Are there too many connections to the database (the application uses the BasicDataSource class for its connection pool)?
The program has no FIFO to buffer database writes for the information continually arriving from the TCP port. My questions are (remember that I'm not a Java programmer and I don't know whether this is how a Java application should work or whether the program could be written more efficiently):
Do you think something is wrong with code that isn't correctly managing writes, reads and updates on the database and is consuming too much memory and CPU time, or is this just how BasicDataSource works?
How do you think I can improve this (if you think it's an issue)? By creating a FIFO and removing the part of the code that creates too many threads? Or are those threads not the application's own threads but BasicDataSource's threads?
There are several areas to dig into, but first I would try to find what is actually blocking the threads in question. I'll assume everything in front of the app is being looked at as well, so this is from the app down.
I know the graphs show free memory, but they are just points in time, so I can't see a trend. GC logging is available; I haven't used JProfiler much, so I'm not sure where to find it in that tool (in DynaTrace I can see GC events and their duration, as well as other blocking events and their root causes). If it isn't available there, there are command-line switches (e.g. -verbose:gc, or -Xlog:gc on newer JVMs) to log GC activity and see its duration and frequency. That is one area that could block.
I would also look at how many connections you have in your pool. If 100-500 requests per second are trying to write and they are stacking up because you don't have enough connections to service them, that could be a problem as well. The image shows all the transactions but says nothing about the pool size. Transactions blocked with nowhere to go could explain your memory jumps too.
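
If the pool does turn out to be undersized, BasicDataSource lets you set the limits explicitly. A minimal sketch, assuming Apache Commons DBCP 2 (in DBCP 1.x the setter is setMaxActive rather than setMaxTotal; the URL and numbers are placeholders to tune against your actual load):

    import org.apache.commons.dbcp2.BasicDataSource;

    public class PoolConfig {

        public static BasicDataSource createDataSource() {
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl("jdbc:postgresql://dbhost:5432/appdb"); // placeholder, use your real JDBC URL
            ds.setUsername("app");
            ds.setPassword("secret");

            ds.setInitialSize(10);      // connections opened at startup
            ds.setMaxTotal(50);         // hard cap on concurrent connections
            ds.setMaxIdle(20);          // connections kept around when load drops
            ds.setMaxWaitMillis(5000);  // fail fast instead of queueing forever

            return ds;
        }
    }

Note that raising the cap only helps if the database can actually service that many concurrent writes; otherwise you just move the queueing into the pool.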
There is also the flip side: the database may not be able to handle the traffic and is pegged, and that is what is blocking the connections, so you would want to monitor that end of things and see whether it is a possible cause of the blocking.
The blocking could also come from the SQL being run, e.g. waiting for page locks to be released.
There are lots of areas to look at, but I would address and verify one layer at a time, starting with the app and working down.
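
If, after working through those layers, the fix turns out to be the FIFO the questioner mentions, the usual Java shape is a bounded BlockingQueue between the TCP reader and a small fixed pool of database writers. A rough sketch with hypothetical names (Packet and PacketDao are stand-ins for the real classes):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class WritePipeline {

        // Bounded queue: if the DB falls behind, the TCP reader blocks in put()
        // instead of the JVM accumulating unbounded work (and memory).
        private final BlockingQueue<Packet> queue = new ArrayBlockingQueue<>(10_000);
        private final ExecutorService writers = Executors.newFixedThreadPool(8);

        public void start(PacketDao dao) {
            for (int i = 0; i < 8; i++) {
                writers.submit(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            dao.write(queue.take()); // blocks until a packet is available
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }

        /** Called from the TCP-reading thread for every analysed packet. */
        public void enqueue(Packet packet) throws InterruptedException {
            queue.put(packet); // blocks when the queue is full (back-pressure)
        }

        // Stand-ins for the application's real types.
        interface Packet { }
        interface PacketDao { void write(Packet p); }
    }

The bounded queue gives you back-pressure: when the database can't keep up, the TCP reader slows down instead of memory growing without limit.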

XDBC app server showing only 2 active threads over MarkLogic XDBC admin console

Our multi-threaded Java application uses the Java XCC library. In the MarkLogic admin console, under the Status tab, only 2 threads are shown as active while the application is running; that is the most likely cause of the bottleneck in our project. Please advise what is wrong here.
To run XCC requests in parallel effectively, you need to make sure you are using a separate Session for each thread. See:
https://docs.marklogic.com/javadoc/xcc/com/marklogic/xcc/Session.html
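A rough sketch of what that looks like, assuming the standard XCC API from the Javadoc linked above (the connection URI and the query are placeholders):

    import com.marklogic.xcc.ContentSource;
    import com.marklogic.xcc.ContentSourceFactory;
    import com.marklogic.xcc.ResultSequence;
    import com.marklogic.xcc.Session;

    import java.net.URI;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelXcc {

        public static void main(String[] args) throws Exception {
            // The ContentSource is shared across threads; Sessions are not,
            // so each task opens (and closes) its own Session.
            ContentSource source = ContentSourceFactory.newContentSource(
                    new URI("xcc://user:password@localhost:8010/")); // placeholder URI

            ExecutorService pool = Executors.newFixedThreadPool(16);
            for (int i = 0; i < 100; i++) {
                pool.submit(() -> {
                    Session session = source.newSession();
                    try {
                        ResultSequence rs = session.submitRequest(
                                session.newAdhocQuery("xdmp:random()")); // placeholder query
                        System.out.println(rs.asString());
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        session.close();
                    }
                });
            }
            pool.shutdown();
        }
    }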
Having only 2 active threads is not necessarily a sign of a problem; it's possible that your requests are being processed as fast as you issue them and read the responses. If your queries are fast enough, there is no need for more threads. Without more information about your queries, response times and server load, it's not possible to say whether there is a bottleneck. How many threads are you running? Compare the response time as you increase the number of threads. Check that you have sufficient network I/O so that your requests are not bottlenecked in the network layer.
I suggest profiling your queries and using the Performance History console to see if the server is running at high utilization. Try increasing the number of client threads, possibly running them from different servers.

Can't find reason for progressive slow down in API response time despite extensive profiling, any ideas?

Our application exposes a JSON-RPC API, using Jetty and Google Gson to parse/generate the JSON code.
A remote system opens several thousand persistent HTTP connections to our application and starts sending API requests at a rate of about 50 per second. Our application generates each response using CPU only (i.e. no disk or database access). Our app is running on an EC2 virtual machine.
When our app first starts, its typical response time is 1-2 ms, but over the course of several hours this increases steadily until it eventually reaches 80 ms, which is too slow for our application. A graph of response time in nanoseconds shows a steady increase.
I have used the YourKit profiler to capture CPU snapshots shortly after startup and again later when the app is significantly slower. The problem is that no single method seems to account for the slowdown; everything just gets slower over time.
The number of threads and memory usage don't seem to increase either, so I'm at a loss as to the likely cause of the slowdown.
Does anyone have any ideas as to what the cause might be, or a suggestion as to a more effective way to isolate the problem?
