Load testing IronMQ - Java

I'm doing some load testing of IronMQ, sending 500 messages and afterwards consuming them.
So far I'm able to send about 16 messages per second and consume (read/delete) about 5 messages per second, using ironAWSEUWest from my local machine.
I use version 0.0.18 of the Java client SDK.
Output:
[l-1) thread #0 - dataset://foo] dataset://foo?produceDelay=5 INFO Sent: 100 messages so far. Last group took: 6066 millis which is: 16,485 messages per second. average: 16,485
[l-1) thread #0 - dataset://foo] dataset://foo?produceDelay=5 INFO Sent: 200 messages so far. Last group took: 6504 millis which is: 15,375 messages per second. average: 15,911
[l-1) thread #0 - dataset://foo] dataset://foo?produceDelay=5 INFO Sent: 300 messages so far. Last group took: 6560 millis which is: 15,244 messages per second. average: 15,682
[thread #1 - ironmq://testqueue] dataset://foo?produceDelay=5 INFO Received: 100 messages so far. Last group took: 17128 millis which is: 5,838 messages per second. average: 5,838
[l-1) thread #0 - dataset://foo] dataset://foo?produceDelay=5 INFO Sent: 400 messages so far. Last group took: 6415 millis which is: 15,588 messages per second. average: 15,659
[l-1) thread #0 - dataset://foo] dataset://foo?produceDelay=5 INFO Sent: 500 messages so far. Last group took: 7089 millis which is: 14,106 messages per second. average: 15,321
[thread #1 - ironmq://testqueue] dataset://foo?produceDelay=5 INFO Received: 200 messages so far. Last group took: 17957 millis which is: 5,569 messages per second. average: 5,7
[thread #1 - ironmq://testqueue] dataset://foo?produceDelay=5 INFO Received: 300 messages so far. Last group took: 18281 millis which is: 5,47 messages per second. average: 5,622
[thread #1 - ironmq://testqueue] dataset://foo?produceDelay=5 INFO Received: 400 messages so far. Last group took: 18206 millis which is: 5,493 messages per second. average: 5,589
[thread #1 - ironmq://testqueue] dataset://foo?produceDelay=5 INFO Received: 500 messages so far. Last group took: 18136 millis which is: 5,514 messages per second. average: 5,574
Is this the expected throughput?
When I turn up the load to 1000 messages, I get sporadic errors when reading a batch of 100 messages at a time and afterwards deleting them one at a time.
[thread #1 - ironmq://testqueue] IronMQConsumer WARN Error occurred during delete of object with messageid : 6033017857819101120. This exception is ignored.. Exchange[Message: <hello>229]. Caused by: [io.iron.ironmq.HTTPException - Message not found]
io.iron.ironmq.HTTPException: Message not found
at io.iron.ironmq.Client.singleRequest(Client.java:194)[ironmq-0.0.18.jar:]
at io.iron.ironmq.Client.request(Client.java:132)[ironmq-0.0.18.jar:]
at io.iron.ironmq.Client.delete(Client.java:105)[ironmq-0.0.18.jar:]
at io.iron.ironmq.Queue.deleteMessage(Queue.java:141)[ironmq-0.0.18.jar:]
It seems that the delete method can fail under load.
The test is part of a Camel component for IronMQ that can be found here: https://github.com/pax95/camel-ironmq
The load test is here: https://github.com/pax95/camel-ironmq/blob/master/src/test/java/org/apache/camel/component/ironmq/integrationtest/LoadTest.java
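The consume path in the test is roughly equivalent to the simplified sketch below (assuming the 0.0.18 client's batch get and per-message delete; the credentials and the getMessages() accessor are placeholders/assumptions, the real test reads its configuration elsewhere):

import io.iron.ironmq.Client;
import io.iron.ironmq.Message;
import io.iron.ironmq.Messages;
import io.iron.ironmq.Queue;

public class ConsumeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder credentials; the real test takes project id/token from configuration.
        Client client = new Client("PROJECT_ID", "TOKEN");
        Queue queue = client.queue("testqueue");

        // Read a batch of up to 100 messages, then delete them one at a time.
        // The per-message delete is where the sporadic "Message not found"
        // errors show up under load.
        Messages batch = queue.get(100);
        for (Message msg : batch.getMessages()) {
            // process msg.getBody() here ...
            queue.deleteMessage(msg);
        }
    }
}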

Network latency will have a lot to do with the message rates you can achieve. From outside an AWS data center, you'll generally see an additional 50-75 ms per operation. If you use concurrent threads, you'll get greater throughput. Also, our public clusters sometimes slow down due to load, which is why our "Production" plan customers move to Pro clusters that are much faster.
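For example, something like this rough sketch fans the sends out over several threads (the thread count, credentials, and queue name are only illustrative):

import io.iron.ironmq.Client;
import io.iron.ironmq.Queue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelProduceSketch {
    public static void main(String[] args) throws Exception {
        Client client = new Client("PROJECT_ID", "TOKEN"); // placeholders
        Queue queue = client.queue("testqueue");

        // Each push is an independent HTTP request, so with 50-75 ms of network
        // latency per call, total throughput scales roughly with the number of
        // requests in flight.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 500; i++) {
            final int n = i;
            pool.submit(() -> queue.push("hello " + n));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}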
That said, we've got a very big update coming to all our clusters that will increase performance and throughput significantly. You can actually download an installable version here: http://www.iron.io/mq-enterprise.
Chad

Related

Restricting the input queue size in Micronaut (Netty)

When the application has started but is not yet warmed up (the JIT needs time), it cannot handle the expected RPS.
The problem is the incoming queue. While the IO threads process requests, many requests accumulate in the queue that the GC cannot clean up. Once the survivor generation overflows, the GC starts performing major pauses, which slows down request execution even more, and after some time the application dies with an OOM.
My application has a self-warming readiness probe (3k random requests).
I tried to configure the thread counts and queue size:
application.yml
micronaut:
  server:
    port: 8080
    netty:
      parent:
        threads: 2
      worker:
        threads: 2
  executors:
    io:
      n-threads: 1
      parallelism: 1
      type: FIXED
    scheduled:
      n-threads: 1
      parallelism: 1
      corePoolSize: 1
And some system properties:
System.setProperty("io.netty.eventLoop.maxPendingTasks", "16")
System.setProperty("io.netty.eventexecutor.maxPendingTasks", "16")
System.setProperty("io.netty.eventLoopThreads", "1")
But the queue keeps filling up.
I want to find some way to restrict the input queue size in Micronaut so that the application does not fail under high load.
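For illustration, one way to bound the amount of queued work at the application layer is to shed excess requests with a server filter before they pile up behind the IO threads. A rough sketch, assuming Micronaut 3.x with Reactor on the classpath (the class name and limit are illustrative):

import io.micronaut.core.async.publisher.Publishers;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.MutableHttpResponse;
import io.micronaut.http.annotation.Filter;
import io.micronaut.http.filter.HttpServerFilter;
import io.micronaut.http.filter.ServerFilterChain;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import java.util.concurrent.atomic.AtomicInteger;

@Filter("/**")
public class LoadSheddingFilter implements HttpServerFilter {

    // Illustrative limit: tune to what the service can handle while still cold.
    private static final int MAX_IN_FLIGHT = 64;
    private final AtomicInteger inFlight = new AtomicInteger();

    @Override
    public Publisher<MutableHttpResponse<?>> doFilter(HttpRequest<?> request, ServerFilterChain chain) {
        if (inFlight.incrementAndGet() > MAX_IN_FLIGHT) {
            inFlight.decrementAndGet();
            // Reject early with 503 instead of letting the request sit in a
            // queue and hold on to memory the GC cannot reclaim.
            return Publishers.just(HttpResponse.status(HttpStatus.SERVICE_UNAVAILABLE));
        }
        return Flux.from(chain.proceed(request))
                   .doFinally(signal -> inFlight.decrementAndGet());
    }
}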

Concurrency in Spring Boot

I have a Spring Boot application with embedded Jetty, and its configuration is:
jetty's minThread: 50
jetty's maxThread: 500
jetty's maxQueueSize: 25000 (I changed default queue to LinkedBlockingQueue)
I didn't change acceptors and selectors (since I don't believe in hard-coding those values).
With the above configuration, I am getting the JMeter test results below:
Concurrent Users: 60
summary = 183571 in 00:01:54 = 1611.9/s Avg: 36 Min: 3 Max: 1062 Err: 0 (0.00%)
Concurrent Users: 75
summary = 496619 in 00:05:00 = 1654.6/s Avg: 45 Min: 3 Max: 1169 Err: 0 (0.00%)
If I increase the number of concurrent users, I don't see any improvement. I want to increase concurrency. How can I achieve this?
===========================================================================
Update on 29-March-2019
I spent more effort on improving the business logic, but still without much improvement. Then I decided to build a hello-world Spring Boot project, i.e.:
spring-boot (1.5.9)
jetty 9.4.15
a REST controller with a GET endpoint
Code below:
@GetMapping
public String index() {
    return "Greetings from Spring Boot!";
}
Then I tried to benchmark it using ApacheBench.
75 concurrent users:
ab -t 120 -n 1000000 -c 75 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 75
Time taken for tests: 37.184 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 26893.28 [#/sec] (mean)
Time per request: 2.789 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3755.61 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 23.5 0 3006
Processing: 0 2 7.8 1 404
Waiting: 0 2 7.8 1 404
Total: 0 3 24.9 2 3007
100 concurrent users:
ab -t 120 -n 1000000 -c 100 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 100
Time taken for tests: 36.708 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27241.77 [#/sec] (mean)
Time per request: 3.671 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3804.27 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 35.7 1 3007
Processing: 0 2 9.4 1 405
Waiting: 0 2 9.4 1 405
Total: 0 4 37.0 2 3009
500 concurrent users:
ab -t 120 -n 1000000 -c 500 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 500
Time taken for tests: 36.222 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27607.83 [#/sec] (mean)
Time per request: 18.111 [ms] (mean)
Time per request: 0.036 [ms] (mean, across all concurrent requests)
Transfer rate: 3855.39 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 14 126.2 1 7015
Processing: 0 4 22.3 1 811
Waiting: 0 3 22.3 1 810
Total: 0 18 129.2 2 7018
1000 concurrent users:
ab -t 120 -n 1000000 -c 1000 http://10.93.243.87:9000/home/
Server Software:
Server Hostname: 10.93.243.87
Server Port: 9000
Document Path: /home/
Document Length: 27 bytes
Concurrency Level: 1000
Time taken for tests: 36.534 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 143000000 bytes
HTML transferred: 27000000 bytes
Requests per second: 27372.09 [#/sec] (mean)
Time per request: 36.534 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 3822.47 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 30 190.8 1 7015
Processing: 0 6 31.4 2 1613
Waiting: 0 5 31.4 1 1613
Total: 0 36 195.5 2 7018
From the above test runs, I achieved ~27K requests per second with just 75 users, but increasing the number of users also increases the latency. We can also clearly see the connect time increasing.
My requirement is for the application to support 40k concurrent users (assume each uses their own browser) and for each request to finish within 250 milliseconds.
Please help me with this.
You can try increasing or decreasing the number of Jetty threads, but the application's performance will depend on the application logic. If your current bottleneck is a database query, you will see hardly any improvement from tuning the HTTP layer, especially when testing over a local network.
Find the bottleneck in your application, attempt to improve it, and then measure again to confirm it's better. Repeat these three steps until you achieve the desired performance. Do not tune performance blindly; it's a waste of time.
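If you do want to adjust the Jetty thread pool programmatically, the sketch below shows one way in Spring Boot 2.x with embedded Jetty (for the 1.5.x line used in the question, the older JettyEmbeddedServletContainerFactory customizer plays the same role); the thread counts are illustrative:

import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.springframework.boot.web.embedded.jetty.JettyServletWebServerFactory;
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JettyConfig {

    @Bean
    public WebServerFactoryCustomizer<JettyServletWebServerFactory> jettyThreadPoolCustomizer() {
        return factory -> factory.addServerCustomizers(server -> {
            // Resize the connector thread pool; the numbers are illustrative
            // and should be driven by measurement, not guesswork.
            QueuedThreadPool threadPool = (QueuedThreadPool) server.getThreadPool();
            threadPool.setMinThreads(50);
            threadPool.setMaxThreads(500);
        });
    }
}

Newer Spring Boot versions also expose equivalent server.jetty.* configuration properties, which may be simpler than the programmatic route.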

Embedded Jetty timeout under load

I have an Akka (Java) application with a camel-jetty consumer. Under fairly light load (about 10 TPS), our client starts seeing HTTP 503 errors. I tried to reproduce the problem in our lab, and it seems Jetty can't handle overlapping HTTP requests. Below is the output from ApacheBench (ab).
ab sends 10 requests using a single thread (i.e. one request at a time):
ab -n 10 -c 1 -p bad.txt http://192.168.20.103:8899/pim
Benchmarking 192.168.20.103 (be patient).....done
Server Software: Jetty(8.1.16.v20140903)
Server Hostname: 192.168.20.103
Server Port: 8899
Document Path: /pim
Document Length: 33 bytes
Concurrency Level: 1
Time taken for tests: 0.61265 seconds
Complete requests: 10
Failed requests: 0
Requests per second: 163.23 [#/sec] (mean)
Time per request: 6.126 [ms] (mean)
Time per request: 6.126 [ms] (mean, across all concurrent requests)
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 1.0 1 2
Processing: 3 4 1.8 5 7
Waiting: 2 4 1.8 5 7
Total: 3 5 1.9 6 8
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 6
80% 8
90% 8
95% 8
98% 8
99% 8
100% 8 (longest request)
ab sends 10 requests using two threads (up to 2 requests at the same time):
ab -n 10 -c 2 -p bad.txt http://192.168.20.103:8899/pim
Benchmarking 192.168.20.103 (be patient).....done
Server Software: Jetty(8.1.16.v20140903)
Server Hostname: 192.168.20.103
Server Port: 8899
Document Path: /pim
Document Length: 33 bytes
Concurrency Level: 2
Time taken for tests: 30.24549 seconds
Complete requests: 10
Failed requests: 1
(Connect: 0, Length: 1, Exceptions: 0)
// omitted for clarity
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.9 1 2
Processing: 3 3005 9492.9 4 30023
Waiting: 2 3005 9492.7 3 30022
Total: 3 3006 9493.0 5 30024
Percentage of the requests served within a certain time (ms)
50% 5
66% 5
75% 7
80% 7
90% 30024
95% 30024
98% 30024
99% 30024
100% 30024 (longest request)
I don't believe Jetty is this bad; hopefully it's just a configuration issue. This is the setting for my Camel consumer URI:
"jetty:http://0.0.0.0:8899/pim?replyTimeout=70000&autoAck=false"
I am using Akka 2.3.12 and camel-jetty 2.15.2.
Jetty is certainly not that bad and should be able to handle tens of thousands of connections with many thousands of TPS.
It's hard to diagnose from what you have said, other than that Jetty does not send 503s when it is under load... unless perhaps the Denial of Service protection filter is deployed? (ab would look like a DoS attack... which it basically is, and it is not a great load generator for benchmarking.)
So you need to track down who/what is sending that 503 and why.
It was my bad code: the sender (client) info was overwritten by overlapping requests. The 503 error was sent due to the Jetty continuation timeout.
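As a footnote, the ~30 s maximums in the ab output line up with camel-jetty's default continuation timeout of 30000 ms, which fires when a reply never comes back. A rough sketch of raising it on the endpoint (the value and route body are illustrative; fixing the overlapping-request bug is still the real solution):

import org.apache.camel.builder.RouteBuilder;

public class PimRoute extends RouteBuilder {
    @Override
    public void configure() {
        // continuationTimeout controls how long Jetty keeps a suspended request
        // alive before answering with an error; raising it only hides the
        // underlying bug of sender info being overwritten across requests.
        from("jetty:http://0.0.0.0:8899/pim"
                + "?replyTimeout=70000&autoAck=false&continuationTimeout=60000")
            .log("received ${body}");
    }
}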

Qpid benchmark: questions related to a Qpid benchmark

I am trying to benchmark Qpid with the following use case:
Default Qpid configs are used (e.g. 2 GB max memory); the broker and client are on the same machine.
I have 1 connection and 256 sessions per connection; each session has a producer and a consumer, so there are 256 producers and 256 consumers.
All the producers/consumers are created before they start producing/consuming messages. Each producer/consumer is a thread and they run in parallel.
Consumers start consuming first (they block in .receive()). All consumers are durable subscribers.
Producers then start producing; each producer produces only 1 message, so 256 messages are produced in total.
A fanout exchange is used (topic.fanout=fanout://amq.fanout//fanOutTopic), and there are 256 consumers, so each consumer receives 256 messages and 256*256 messages are received in total.
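In JMS terms, each producer/consumer pair looks roughly like the sketch below (the JNDI names, client ID, and subscription name are illustrative; a typical Qpid JMS setup binds the connection factory and the fanout destination via a jndi.properties file):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class SessionPairSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative JNDI lookups pointing at the fanout exchange.
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("qpidConnectionFactory");
        Topic topic = (Topic) ctx.lookup("fanOutTopic");

        Connection connection = factory.createConnection();
        connection.setClientID("bench-client"); // needed for durable subscribers
        connection.start();

        // The benchmark creates 256 sessions like this, each owning one
        // producer and one durable consumer, each driven by its own thread.
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createDurableSubscriber(topic, "sub-0");
        MessageProducer producer = session.createProducer(topic);

        producer.send(session.createTextMessage("hello"));
        TextMessage received = (TextMessage) consumer.receive(60_000);
        System.out.println("round trip: " + (received != null ? received.getText() : "timed out"));

        connection.close();
    }
}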
Following are the response times (RTs) for the messages. Response time is defined as the difference between the time a message is sent to the broker and the time it is received at the consumer:
min: 144.0 ms
max: 350454.0 ms
average: 151933.02 ms
stddev: 113347.89 ms
95th percentile: 330559.0 ms
Is there anything I am doing fundamentally wrong? I am worried about the average response time of ~152 seconds. Is this expected from Qpid? I see a pattern here: as the test runs, the RTs increase linearly over time.
Thank you,
Siva.

How to improve JBoss response time

The environment is JBoss 5.1.0, with a RESTEasy web service deployed and load tested at about 10 transactions per second.
I need to improve the response time of the RESTEasy web service, so I changed the worker thread count to 100 in
jboss-5.1.0.GA/server/default/deploy/jbossweb.sar/server.xml
<!-- connectionTimeout and keepAliveTimeout should be greater than the
Heartbeat period between SIPServer and XS.
(Here 60s for an Heartbeat less than 60s) -->
<Connector protocol="${http.connector.class}"
port="${xs.http.port}" address="${jboss.bind.address}"
allowTrace="false" enableLookups="false"
SSLEnabled="false"
connectionTimeout="60000" keepAliveTimeout="60000"
maxKeepAliveRequests="-1" maxThreads="100" compression="off" />
But the admin console shows the result is not good; the max processing time is still greater than 1000 ms.
Max threads: 100
Current thread count: 4
Current thread busy: 4
Max processing time: 1061 ms
Processing time: 55.7 s
Request count: 657
Error count: 1
Bytes received: 0.08 MB
Bytes sent: 0.86 MB
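For context on those numbers: 55.7 s of cumulative processing time over 657 requests is roughly 85 ms per request on average, so only the worst-case request (1061 ms) exceeds one second; the concern seems to be the worst-case tail rather than the average.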
The question is: how do I improve the response time of the JBoss web service?
