Writing netty performance tests - java

So I have a netty-based websockets client that I am using for performance tests. My idea is that I can use it to simulate 100, 1,000, etc. simultaneous connections.
I've determined that my current approach is not working--the test harness is simply not creating enough websocket connections, although it bumps along happily, thinks it's still connected, and so on. But my server simply does not show the correct number of connections when I use this test harness. Most likely this is happening because I am using various objects from the netty library across multiple threads at once, and they don't handle that very well. ClientBootstrap, for example.
This is what I am doing per-thread. Can you tell me where I am going wrong, so that I can fix my test harness?
public void run() {
    try {
        // Client bootstrap. There is one of these (and one channel factory) per thread.
        // Is that part of the problem?
        ClientBootstrap bootstrap = new ClientBootstrap(
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
        // Set up the SSL engine.
        final SSLEngine engine = createServerContext().createSSLEngine();
        engine.setUseClientMode(true);
        // There is a new handshaker per thread, too. They all go to the same URI.
        final WebSocketClientHandshaker handshaker = new WebSocketClientHandshakerFactory()
                .newHandshaker(uri, WebSocketVersion.V08, null, false, null);
        // Set up the pipeline factory and pipeline.
        bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
            @Override
            public ChannelPipeline getPipeline() throws Exception {
                ChannelPipeline pipeline = Channels.pipeline();
                pipeline.addLast("ssl", new SslHandler(engine));
                pipeline.addLast("encoder", new HttpRequestEncoder());
                pipeline.addLast("decoder", new HttpResponseDecoder());
                // WebSocketClientHandler code not included; it's just a custom handler
                // that sends requests via websockets.
                pipeline.addLast("ws-handler", new WebSocketClientHandler(handshaker));
                return pipeline;
            }
        });
        // Connect the websockets preflight over HTTP.
        ChannelFuture future = bootstrap.connect(
                new InetSocketAddress(uri.getHost(), uri.getPort()));
        future.sync();
        // Do the websockets handshake.
        Channel ch = future.getChannel();
        ChannelFuture handshakeFuture = handshaker.handshake(ch);
        handshakeFuture.syncUninterruptibly();
        // I had to add this sleep. Sync should have meant that the call above didn't
        // return until the handshake was complete... but that was a lie. So I sleep
        // for one second to work around that.
        Thread.sleep(1000);
        if (!handshakeFuture.isSuccess())
            System.out.println("WHOAH error");
        // Send a message to the server.
        ch.write(new TextWebSocketFrame("Foo"));
        // Wait for the notification to close. shutdownNow is an AtomicBoolean which is
        // set to true when all my threads have been started up and a certain amount of
        // time has passed.
        while (!getShutdownNow().get())
            Thread.sleep(2000);
        // Send close; wait for the response.
        ch.write(new CloseWebSocketFrame());
        ch.getCloseFuture().awaitUninterruptibly();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
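A minimal sketch of one likely fix, assuming Netty 3.x (this is an assumption, not a confirmed answer): each NioClientSocketChannelFactory owns its own boss/worker thread pools and selector threads, so building one per connection thread burns threads and file descriptors quickly. Sharing a single factory and bootstrap across all worker threads, and creating only per-connection state (SSLEngine, handshaker) inside getPipeline(), avoids that. It reuses the uri field and createServerContext() helper from the code above:

// Shared across ALL worker threads: one channel factory, one bootstrap.
final ClientBootstrap bootstrap = new ClientBootstrap(
        new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
        // Per-connection state is created here, so nothing is shared between channels.
        SSLEngine engine = createServerContext().createSSLEngine();
        engine.setUseClientMode(true);
        WebSocketClientHandshaker handshaker = new WebSocketClientHandshakerFactory()
                .newHandshaker(uri, WebSocketVersion.V08, null, false, null);
        ChannelPipeline pipeline = Channels.pipeline();
        pipeline.addLast("ssl", new SslHandler(engine));
        pipeline.addLast("encoder", new HttpRequestEncoder());
        pipeline.addLast("decoder", new HttpResponseDecoder());
        pipeline.addLast("ws-handler", new WebSocketClientHandler(handshaker));
        return pipeline;
    }
});
// Each simulated-client thread then only connects:
ChannelFuture future = bootstrap.connect(new InetSocketAddress(uri.getHost(), uri.getPort()));

Note that with a shared pipeline factory the per-connection handshaker is no longer visible to the worker thread, so the handshake would have to be driven from inside WebSocketClientHandler (for example, on channelConnected).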

Related

Vertx services not accepting messages continuously when running on local JVM over a finite set of data when deployed as separate fat-jars

I am getting started with vertx and was trying out point-to-point messaging on the event bus. I have 2 services, both created as separate maven projects and deployed as fat-jars:
1) Read from a file and send the content as a message over an address - ContentParserService.java
2) Read the message and reply to the incoming message - PingService.java
Both these services are deployed as separate jars, in a microservice fashion.
The code is as follows: ContentParserService.java
@Override
public void start(Future<Void> startFuture) throws Exception {
    super.start(startFuture);
    // Reference to the event bus running on the JVM
    EventBus eventBus = vertx.eventBus();
    // Read the file using the normal Java mechanism
    try {
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(
                ClassLoader.getSystemResourceAsStream(config().getString("filename"))));
        bufferedReader.readLine(); // read (skip) the first line
        String line = null;
        while ((line = bufferedReader.readLine()) != null) {
            String[] data = line.split(",");
            // Create the RealEstate object
            RealEstateTransaction realEstateData = createTransactionObject(data);
            // Construct the message JSON
            JsonObject messageJSON = constructMessageJson(realEstateData);
            // Send the message to the PING address over the event bus
            eventBus.send("PING", Json.encode(messageJSON), reply -> {
                if (reply.succeeded())
                    System.out.println("Received Reply: " + reply.result().body());
                else {
                    System.out.println("No reply");
                }
            });
        }
    } catch (IOException e) {
        startFuture.fail(e.getMessage());
    }
}
The code is as follows: PingService.java
@Override
public void start(Future<Void> startFuture) throws Exception {
    super.start(startFuture);
    System.out.println("Referencing event bus");
    // Reference to the event bus running on the JVM
    EventBus eventBus = vertx.eventBus();
    System.out.println("Creating HttpServer");
    // Create an HTTP server to handle incoming requests
    HttpServer httpServer = vertx.createHttpServer();
    System.out.println("Creating Router");
    // Create a router for routing to the appropriate endpoint
    Router router = Router.router(vertx);
    System.out.println("Starting to consume messages sent over the event bus");
    // Consume incoming messages on the PING address
    eventBus.consumer("PING", event -> {
        System.out.println("Received message: " + event.body());
        event.reply("Received at PING address");
    });
    System.out.println("Receiver ready and receiving messages");
}
I run both services on the same machine, each with its own java -jar command. What I observed: when I deploy the ContentParserService jar first, it immediately starts and sends messages over the event bus, but by the time I start the PingService jar, it cannot receive any of the messages already sent over the event bus, because PingService is a separate fat-jar and a microservice in itself. The file I am reading is a finite-length CSV of around 200 entries. This case would work if I bundled both services into a single fat jar.
How can I make services in different fat jars able to send messages to each other in my case?
This case works when both verticles are in the same jar only because there is no network delay. But your use of the EventBus is incorrect: it doesn't persist messages, so it cannot replay them. You should start sending messages only when the other side is ready to receive them.
You need to reverse the dependency. In your ContentParserService, register for some "ready" event, then start your while loop only when you get it:
vertx.eventBus().consumer("ready", (message) -> {
while ((line = bufferedReader.readLine()) != null) {
...
}
});
Now, what will happen if ContentParserService is actually slower and misses the "ready" event? Use vertx.setPeriodic() for that. So you start your PingService, and periodically tell ContentParserService that you're ready to receive some messages.
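A minimal sketch of that handshake, assuming Vert.x 3.x (the "ready" address is arbitrary, and readFileAndSendMessages() is a placeholder for the read-and-send loop from the question):

// PingService side: register the PING consumer first, then announce readiness every second.
vertx.eventBus().consumer("PING", event -> event.reply("Received at PING address"));
vertx.setPeriodic(1000, timerId ->
        vertx.eventBus().publish("ready", "PingService is listening"));

// ContentParserService side: start reading only on the first announcement.
MessageConsumer<Object> readyConsumer = vertx.eventBus().consumer("ready");
readyConsumer.handler(message -> {
    readyConsumer.unregister();     // react only to the first "ready" message
    readFileAndSendMessages();      // placeholder for the while loop above
});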
Or, as an option, just don't use the EventBus at all between your services, and switch to something with persistence, like RabbitMQ or Kafka.

HTTP/2 priority & dependency test with Jetty

Priority & Dependency:
Here I made a simple test, but the results do not look right.
I made 100 requests in a for loop on the same connection (the request URL is the same; I wonder whether that part influences the results).
If the index is i, then my request's stream_id is i, while the stream it depends on has stream_id 100+i. If our assumption is right, the request can never get a response, because no stream with an id from 101 to 200 exists.
But the results show no difference between setting the dependency and not. I got the response data frames one by one, without any timeout or waiting.
I also ran some related tests, the idea being to send the stream that depends on another stream first, and the stream it depends on later. But the result was the same.
I am still trying to understand these results. Can anyone help me? Many thanks.
Code here:
public void run() throws Exception
{
    host = "google.com";
    port = 443;
    // client init
    HTTP2Client client = new HTTP2Client();
    SslContextFactory sslContextFactory = new SslContextFactory(true);
    client.addBean(sslContextFactory);
    client.start();
    // connection init
    FuturePromise<Session> sessionPromise = new FuturePromise<>();
    client.connect(sslContextFactory, new InetSocketAddress(host, port),
            new ServerSessionListener.Adapter(), sessionPromise);
    Session session = sessionPromise.get(10, TimeUnit.SECONDS);
    // headers init
    HttpFields requestFields = new HttpFields();
    requestFields.put("User-Agent", client.getClass().getName() + "/" + Jetty.VERSION);
    final Phaser phaser = new Phaser(2);
    // multiple requests on one connection
    for (int i = 0; i < 100; i++)
    {
        MetaData.Request metaData = new MetaData.Request("GET",
                new HttpURI("https://" + host + ":" + port + "/"),
                HttpVersion.HTTP_2, requestFields);
        PriorityFrame testPriorityFrame = new PriorityFrame(i, 100 + i, 4, true);
        HeadersFrame headersFrame = new HeadersFrame(0, metaData, testPriorityFrame, true);
        // listen for header/data/push frames
        session.newStream(headersFrame, new Promise.Adapter<Stream>(), new Stream.Listener.Adapter()
        {
            @Override
            public void onHeaders(Stream stream, HeadersFrame frame)
            {
                System.err.println(frame + " headId:" + frame.getStreamId());
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public void onData(Stream stream, DataFrame frame, Callback callback)
            {
                System.err.println(frame + " streamid:" + frame.getStreamId());
                callback.succeeded();
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public Stream.Listener onPush(Stream stream, PushPromiseFrame frame)
            {
                System.err.println(frame + " pushid:" + frame.getStreamId());
                phaser.register();
                return this;
            }
        });
    }
    phaser.awaitAdvanceInterruptibly(phaser.arrive(), 5, TimeUnit.SECONDS);
    client.stop();
}
The Jetty project has not implemented HTTP/2 request prioritization (yet).
We are discussing whether it is at all useful for a server, whose concern is to write back the responses as quickly as it can.
Having one client change its mind about the priority of its requests, or make a request knowing that it really wanted another request served first, is a lot of work for the server, which in the meantime has to serve the other 10,000 clients connected to it.
By the time the server has recomputed the priority tree for the dependent requests, it could probably have served the requests already.
By the time the client realizes that it has to change the priority of a request, the whole response for it could already be in flight.
Having said that, we are certainly interested in real world use cases where request prioritization performed by the server yields a real performance improvement. We just have not seen it yet.
I would love to hear why you are interested in request prioritization and how you are leveraging it. Your answer could be a drive for the Jetty project to implement HTTP/2 priorities.

Apache MINA server closes active UDP "session" after 60s

My client-server app uses Apache MINA on both the client and server sides. Sending data via UDP works fine, but after a minute the server closes the connection (or, in MINA terms, the "session") and stops answering.
The strange part is that the connection is active the whole time: the client sends data every 1000 ms, and the server answers with the same data. I've found MINA's mechanism for destroying inactive sessions, ExpiringMap, which has a default session time-to-live of public static final int DEFAULT_TIME_TO_LIVE = 60; but I haven't found a way to change it or, better, to refresh the time-to-live of a live session.
IMHO the time-to-live should be refreshed automatically with every incoming packet, but I couldn't find out why my server isn't doing that. Should I explicitly say that I don't want the session destroyed yet, or what?
My code is quite similar to MINA's tutorials:
SERVER
IoAcceptor acceptor = new NioDatagramAcceptor();
try {
    acceptor.setHandler(new UDPHandler());
    acceptor.bind(new InetSocketAddress(RelayConfig.getInstance().getUdpPort()));
    acceptor.getSessionConfig().setReadBufferSize(2048);
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, IDLE_PERIOD);
    System.out.println("RELAY [" + RelayConfig.getInstance().getId() + "]: initialized!");
} catch (IOException e) {
    System.out.println("RELAY [" + RelayConfig.getInstance().getId() + "]: failed: " + e.getLocalizedMessage());
    //e.printStackTrace();
}
CLIENT
NioDatagramConnector connector = new NioDatagramConnector();
connector.getSessionConfig().setUseReadOperation(true);
handler = new UDPHandler();
connector.setHandler(handler);
connector.getSessionConfig().setReadBufferSize(2048);
// try to connect to the server!
try {
    System.out.println("Connecting to " + relayIP + ":" + port);
    ConnectFuture future = connector.connect(new InetSocketAddress(relayIP, port));
    future.addListener(new IoFutureListener<IoFuture>() {
        public void operationComplete(IoFuture future) {
            ConnectFuture connFuture = (ConnectFuture) future;
            if (connFuture.isConnected()) {
                UDPClient.setSession(future.getSession());
                Timer timer = new Timer("MyTimerTask", true);
                // My message is written here every 1000 ms
                timer.scheduleAtFixedRate(new MyTimerTask(), 1000, 1000);
            } else {
                log.error("Not connected...exiting");
            }
        }
    });
    future.awaitUninterruptibly();
} catch (RuntimeIoException e) {
    System.err.println("Failed to connect.");
    e.printStackTrace();
    System.exit(1);
} catch (IllegalArgumentException e) {
    System.err.println("Failed to connect. Illegal argument! Terminating program!");
    e.printStackTrace();
    System.exit(1);
}
For any additional info please write in comments.
EDIT: Unfortunately I don't have access to that server any more, but problem was not solved back then. If there's anybody else who has the same problem and solved it, let us know.
I did some research and found the link below. You may need to explicitly set the disconnect option to false, but there is also an option to reset the timeout. A timeout of 30000 is 30 seconds, 60000 is 60 seconds, and so on. These solutions are from MINA 2; it was not clear whether you were using that or an older version. From this you should be able to add the call that applies a specific set of options when you open the UDP port.
MINA2 Documentation
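One concrete way to change the 60-second default (an assumption based on the MINA 2 API, not something the linked docs spell out for this exact case) is to install your own ExpiringSessionRecycler on the datagram acceptor, since the default recycler is what expires idle UDP "sessions" after DEFAULT_TIME_TO_LIVE seconds:

// Declare the acceptor as NioDatagramAcceptor (not IoAcceptor) to reach setSessionRecycler().
NioDatagramAcceptor acceptor = new NioDatagramAcceptor();
// Keep idle UDP "sessions" alive for 10 minutes instead of the default 60 seconds.
acceptor.setSessionRecycler(new ExpiringSessionRecycler(600));
acceptor.setHandler(new UDPHandler());
acceptor.bind(new InetSocketAddress(RelayConfig.getInstance().getUdpPort()));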

Connection Reset with Jersey Client

I am seeing a lot of connection resets in production. There could be multiple causes, but I wanted to make sure there are no connection leaks coming from the code. I am using the Jersey client:
this.client = ApacheHttpClient.create();
client.resource("/stores/" + storeId).type(MediaType.APPLICATION_JSON_TYPE).put(ClientResponse.class, indexableStore);
Originally I was instantiating the client as this.client = Client.create(), and we changed it to ApacheHttpClient.create(). I am not calling close() on the response, but I am assuming ApacheHttpClient does that internally, since HttpClient's executeMethod gets invoked and handles all the boilerplate for us. Could there be a potential connection leak in the way the code is written?
As you said, a connection reset can have many possible causes. One possibility is that the server timed out while processing the request, which is why the client sees a connection reset. The comments section of the answered question here discusses possible causes of connection resets in detail. One possible solution is to configure HttpClient to retry the request in case of failure. You could set an HttpMethodRetryHandler like the one below to do so (reference). You may need to adapt the code based on the exception you receive.
HttpMethodRetryHandler retryHandler = new HttpMethodRetryHandler()
{
    public boolean retryMethod(
        final HttpMethod method,
        final IOException exception,
        int executionCount)
    {
        if (executionCount >= 5)
        {
            // Do not retry if over max retry count
            return false;
        }
        if (exception instanceof NoHttpResponseException)
        {
            // Retry if the server dropped the connection on us
            return true;
        }
        if (!method.isRequestSent())
        {
            // Retry if the request has not been fully sent, or
            // if it's OK to retry methods that have been sent
            return true;
        }
        // otherwise do not retry
        return false;
    }
};
ApacheHttpClient client = ApacheHttpClient.create();
HttpClient hc = client.getClientHandler().getHttpClient();
hc.getParams().setParameter(HttpMethodParams.RETRY_HANDLER, retryHandler);
client.resource("/stores/" + storeId).type(MediaType.APPLICATION_JSON_TYPE).put(ClientResponse.class, indexableStore);
