Java Netty load testing issues

I wrote a server that accepts connections and bombards them with messages (~100 bytes) using a text protocol, and my implementation can push about 400K msg/sec over loopback with a third-party client. I picked Netty for this task, on SUSE 11 RealTime with JRockit RTS.
But when I started developing my own client based on Netty, I hit a drastic throughput reduction (down from 400K to 1.3K msg/sec). The client code is pretty straightforward. Could you please give advice or show examples of how to write a much more effective client? I actually care more about latency, but I started with throughput tests, and I don't think 1.5K msg/sec on loopback is normal.
P.S. The client's only purpose is to receive messages from the server and very seldom send heartbeats.
Client.java
public class Client {

    private static ClientBootstrap bootstrap;
    private static Channel connector;

    public static boolean start() {
        ChannelFactory factory =
                new NioClientSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool());
        ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

        bootstrap = new ClientBootstrap(factory);
        // pass the execution handler to the pipeline factory so it actually ends up in the pipeline
        bootstrap.setPipelineFactory(new ClientPipelineFactory(executionHandler));
        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);
        bootstrap.setOption("receiveBufferSize", 1048576);

        ChannelFuture future = bootstrap
                .connect(new InetSocketAddress("localhost", 9013));
        if (!future.awaitUninterruptibly().isSuccess()) {
            System.out.println("--- CLIENT - Failed to connect to server at " +
                    "localhost:9013.");
            bootstrap.releaseExternalResources();
            return false;
        }
        connector = future.getChannel();
        return connector.isConnected();
    }

    public static void main(String[] args) {
        boolean started = start();
        if (started)
            System.out.println("Client connected to the server");
    }
}
ClientPipelineFactory.java
public class ClientPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public ClientPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}
MessageHandler.java
public class MessageHandler extends SimpleChannelHandler {

    private static final long NANOS_IN_SEC = 1000000000L;

    long max_msg = 10000;
    long cur_msg = 0;
    long startTime = System.nanoTime();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        cur_msg++;
        if (cur_msg == max_msg) {
            System.out.println("Throughput (msg/sec) : "
                    + max_msg * NANOS_IN_SEC / (System.nanoTime() - startTime));
            cur_msg = 0;
            startTime = System.nanoTime();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}
Update. On the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable.
Update 2. I added an OrderedMemoryAwareThreadPoolExecutor (via an ExecutionHandler) to the pipeline, but throughput is still very low (about 4K msg/sec).
Fixed. I put the execution handler in front of the whole pipeline stack (as sketched below) and it worked out!
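For reference, a minimal sketch of what the reworked ClientPipelineFactory could look like, with the ExecutionHandler registered first so that decoding and handler work run on the OrderedMemoryAwareThreadPoolExecutor instead of the NIO worker thread (this assumes the same Netty 3.x classes as above; handler names are arbitrary):

public class ClientPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public ClientPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // executor first: everything downstream of it runs on the
        // OrderedMemoryAwareThreadPoolExecutor instead of the NIO worker
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}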

If the server is sending messages of a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize the reads:
bootstrap.setOption("receiveBufferSizePredictorFactory",
        new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
According to the code segment you posted, the client's NIO worker thread is doing everything in the pipeline, so it will be busy with decoding and with executing the message handlers. You have to add an execution handler.
You said the channel is becoming unwritable on the server side, so you may have to adjust the watermark sizes in the server bootstrap (an example of setting them follows after the util class below). You can also periodically monitor the write buffer size (write queue size) and verify that the channel becomes unwritable because messages cannot be written to the network fast enough. That can be done with a util class like the one below.
package org.jboss.netty.channel.socket.nio;

import org.jboss.netty.channel.Channel;

public final class NioChannelUtil {
    public static long getWriteTaskQueueCount(Channel channel) {
        // lives in this package so it can read the package-private writeBufferSize field
        NioSocketChannel nioChannel = (NioSocketChannel) channel;
        return nioChannel.writeBufferSize.get();
    }
}
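For the watermark adjustment itself, this is roughly what the server-side bootstrap options could look like (Netty 3.x; the serverBootstrap name and the 64 KiB / 32 KiB values are illustrative, to be tuned for your own message rate):

// raise the write-buffer watermarks of the accepted (child) channels so the
// channel tolerates a larger backlog before isWritable() flips to false
serverBootstrap.setOption("child.writeBufferHighWaterMark", 64 * 1024);
serverBootstrap.setOption("child.writeBufferLowWaterMark", 32 * 1024);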

Related

Netty as High Performant Http Server to Handle ~2-3 Million Requests/Sec

We are trying to solve the problem of handling a huge volume of HTTP POST requests. Using a Netty server, I was able to handle only ~50K requests/sec, which is too low.
My question is: how do I tune this server so it can handle more than 1.5 million requests/second?
Netty4 Server
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    ServerBootstrap b = new ServerBootstrap();
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.group(bossGroup, workerGroup)
            .channel(NioServerSocketChannel.class)
            .handler(new LoggingHandler(LogLevel.INFO))
            .childHandler(new HttpServerInitializer(sslCtx));
    Channel ch = b.bind(PORT).sync().channel();
    System.err.println("Open your web browser and navigate to " +
            (SSL ? "https" : "http") + "://127.0.0.1:" + PORT + '/');
    ch.closeFuture().sync();
} finally {
    bossGroup.shutdownGracefully();
    workerGroup.shutdownGracefully();
}
Initializer
public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {

    private final SslContext sslCtx;

    public HttpServerInitializer(SslContext sslCtx) {
        this.sslCtx = sslCtx;
    }

    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        if (sslCtx != null) {
            p.addLast(sslCtx.newHandler(ch.alloc()));
        }
        p.addLast(new HttpServerCodec());
        p.addLast("aggregator", new HttpObjectAggregator(Integer.MAX_VALUE));
        p.addLast(new HttpServerHandler());
    }
}
Handler
public class HttpServerHandler extends ChannelInboundHandlerAdapter {

    private static final String CONTENT = "SUCCESS";

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            HttpRequest req = (HttpRequest) msg;
            final FullHttpRequest fReq = (FullHttpRequest) req;
            Charset utf8 = CharsetUtil.UTF_8;
            final ByteBuf buf = fReq.content();
            String in = buf.toString(utf8);
            System.out.println(" In ==> " + in);
            buf.release();
            if (HttpHeaders.is100ContinueExpected(req)) {
                ctx.write(new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE));
            }
            in = null;
            if (HttpHeaders.is100ContinueExpected(req)) {
                ctx.write(new DefaultFullHttpResponse(HTTP_1_1, CONTINUE));
            }
            boolean keepAlive = HttpHeaders.isKeepAlive(req);
            FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK,
                    Unpooled.wrappedBuffer(CONTENT.getBytes()));
            response.headers().set(CONTENT_TYPE, "text/plain");
            response.headers().set(CONTENT_LENGTH, response.content().readableBytes());
            if (!keepAlive) {
                ctx.write(response).addListener(ChannelFutureListener.CLOSE);
            } else {
                response.headers().set(CONNECTION, Values.KEEP_ALIVE);
                ctx.write(response);
            }
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Your question is very generic. However, I'll try to give you an answer regarding Netty optimizations and improvements to your code.
Your code issues:
System.out.println(" In ==> "+in); - you shouldn't use this in a high-load concurrent handler. Why? Because the code inside the println method is synchronized and thus penalizes your performance;
You do two class casts, to HttpRequest and to FullHttpRequest. You can use just the last one;
Netty-specific issues in your code:
You should use the epoll transport (if your server runs on Linux). It gives roughly +30% out of the box; How to.
You should add the native OpenSSL bindings. They give roughly +20%. How to.
EventLoopGroup bossGroup = new NioEventLoopGroup(); - you need to set the sizes of the bossGroup and workerGroup correctly, depending on your test scenarios. You didn't provide any info about your test cases, so I can't advise you here;
new HttpObjectAggregator(Integer.MAX_VALUE) - you actually don't need this handler in your code, so for better performance you may remove it;
new HttpServerHandler() - you don't need to create this handler for every channel. As it doesn't hold any state, it can be shared across all pipelines. Search for @Sharable in Netty;
new LoggingHandler(LogLevel.INFO) - you don't need this handler for high-load tests, as it logs a lot. Add your own logging where necessary;
buf.toString( utf8 ) - this is very wrong. You convert the incoming bytes to a String, but this doesn't make sense as the data is already decoded by Netty's HttpServerCodec. You are doing double work here;
Unpooled.wrappedBuffer(CONTENT.getBytes()) - you wrap a constant message on every request and thus do unnecessary work on every request. You can create the ByteBuf only once and use retain()/duplicate(), depending on how you do it;
ctx.write(response) - you may consider using ctx.write(response, ctx.voidPromise()) in order to allocate less;
This is not all, but fixing the above issues would be a good start; a consolidated sketch follows below.
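To make several of these points concrete, here is a minimal sketch (not your actual code) that combines them: epoll transport, no LoggingHandler, no HttpObjectAggregator, a single @Sharable handler instance, a response buffer created once, and ctx.write(..., ctx.voidPromise()). Imports are omitted like in the snippets above; the port, group sizes and the unreleasable-buffer approach are illustrative assumptions to adapt and verify for your own workload:

public final class TunedHttpServer {

    @ChannelHandler.Sharable
    static final class SharedHttpServerHandler extends ChannelInboundHandlerAdapter {
        // response body built once; an unreleasable duplicate is written per request
        private static final ByteBuf CONTENT =
                Unpooled.unreleasableBuffer(Unpooled.copiedBuffer("SUCCESS", CharsetUtil.UTF_8));

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            if (msg instanceof HttpRequest) {
                FullHttpResponse response = new DefaultFullHttpResponse(
                        HttpVersion.HTTP_1_1, HttpResponseStatus.OK, CONTENT.duplicate());
                response.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/plain");
                response.headers().set(HttpHeaders.Names.CONTENT_LENGTH, CONTENT.readableBytes());
                ctx.write(response, ctx.voidPromise()); // void promise: no per-write ChannelFuture allocation
            }
            ReferenceCountUtil.release(msg); // no aggregator, so release the HTTP parts ourselves
        }

        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        final SharedHttpServerHandler sharedHandler = new SharedHttpServerHandler(); // one instance for all channels
        EventLoopGroup bossGroup = new EpollEventLoopGroup(1);  // epoll transport, Linux only
        EventLoopGroup workerGroup = new EpollEventLoopGroup(); // defaults to 2 * cores
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.option(ChannelOption.SO_BACKLOG, 1024);
            b.group(bossGroup, workerGroup)
                    .channel(EpollServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // no LoggingHandler, no HttpObjectAggregator
                            ch.pipeline().addLast(new HttpServerCodec());
                            ch.pipeline().addLast(sharedHandler);
                        }
                    });
            Channel ch = b.bind(8080).sync().channel();
            ch.closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}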

Paho client limit?

I'm running some performance tests of a producer/broker/consumer setup using the Mosquitto broker and command-line clients, and the Paho client. I got some strange results:
Deployment notes:
3 machines: producer, broker, consumer
Producers: 6 Python scripts calling mosquitto_pub as fast as they can. See below.
Consumer: a simple Java client, shown below, subscribing to all topics.
The hardware specifics have not shown a significant difference.
1) Mosquitto receives around 1459.5055 messages/s but sends only 973.9596666666667. The subscribers get just 485.5458333333333.
2) No matter how many instances of the Paho client are created, the performance does not improve. E.g. if you run 6 producers on one topic and 2 consumers on two topics, you get 485.5458333333333. But if you add 6 producers to the other topic (I already checked that the total number of messages increases), the total performance stays the same and the per-topic rate is divided by two.
3) If you run precisely the same test with two separate Java applications, the performance does not drop. Each application gets the maximum performance.
In no case does the CPU or memory reach any limit.
Producers.py
from datetime import datetime
import os, sys, json

arg = sys.argv
host = "broker"
n = 1
if len(arg) > 1:
    n = int(arg[1])
while True:
    payload = {"id": str(n), "Time": datetime.now().strftime("%Y-%m-%dT%H:%M:%S.00Z"), "ResultValue": 1.0, "ResultType": "integer", "Datastream": {"id": str(n)}}
    os.system("mosquitto_pub -h " + host + " -t " + "/" + str(payload["id"]) + " -m " + str(json.dumps(json.dumps(payload))))
Consumer.java
package eu.linksmart.testing;

import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

import java.util.UUID;

public class Application implements MqttCallback {

    static int id = 0;

    long acc = 0;
    int i = 0;
    long start = System.nanoTime();

    public Application() {
        id++;
    }

    public static void main(String[] args) {
        try {
            Application app = new Application();
            create("1", new Application());
            create("2", new Application());
            while (true)
                try {
                    Thread.sleep(30000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
        } catch (MqttException e) {
            e.printStackTrace();
        }
    }

    static void create(String id, Application app) throws MqttException {
        MqttClient mqttClient = new MqttClient("tcp://broker:1883", UUID.randomUUID().toString(), new MemoryPersistence());
        mqttClient.connect();
        mqttClient.subscribe("/" + id + "/#", 1);
        mqttClient.setCallback(app);
    }

    @Override
    public void connectionLost(Throwable throwable) {
    }

    @Override
    public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
        i++;
        acc = (System.nanoTime() - start);
        if (acc / 1000000 > 1000) {
            start = System.nanoTime();
            System.out.println(String.valueOf((i * 1000000000.0) / acc));
            acc = 0;
            i = 0;
        }
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
    }
}
E.g. running the producer for topic 1 as:
python Producers.py 1&
What limits the Paho client inside a Java application?
Well, after a lot of debugging I found out what the problem was.
The topic $SYS/broker/load/messages/received/1min was reporting more messages than I was sending. It is probably counting protocol messages as messages: even when idle, this topic reports 3.22 with one subscriber. So I thought I was sending 1459.5055 msg/sec, as reported by Mosquitto, but I was really sending just 485.5458333333333.
So do not trust this topic for the application message rate!
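If you want to cross-check those $SYS counters yourself, a minimal sketch using the same Paho client as above (broker host and client id are placeholders):

MqttClient sysClient = new MqttClient("tcp://broker:1883", "sys-monitor", new MemoryPersistence());
sysClient.setCallback(new MqttCallback() {
    @Override
    public void connectionLost(Throwable t) { }

    @Override
    public void messageArrived(String topic, MqttMessage message) {
        // prints the broker's own 1-minute load averages, which include protocol traffic
        System.out.println(topic + " = " + new String(message.getPayload()));
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) { }
});
sysClient.connect();
sysClient.subscribe("$SYS/broker/load/messages/received/1min");
sysClient.subscribe("$SYS/broker/load/messages/sent/1min");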

Netty 4 read/write in handler multiple times

I'm new to Netty, and I decided to start with 4.0.0 because I thought it should be better since it's newer. My server application should receive data from GPS devices, and the process is like this: first I receive 2 bytes, which are the length of the device IMEI, then I receive the IMEI of that length, and then I should send 0x01 to the device if I want to accept data from it. After my answer, the device sends me GPS data using the AVL protocol. Right now my server works without Netty, and I want to change it to work with Netty.
This is what I have done:
I have created a server class like this:
public class BusDataReceiverServer {

    private final int port;
    private final Logger LOG = LoggerFactory.getLogger(BusDataReceiverServer.class);

    public BusDataReceiverServer(int port) {
        this.port = port;
    }

    public void run() throws Exception {
        LOG.info("running thread");
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new BusDataReceiverInitializer());
            b.bind(port).sync().channel().closeFuture().sync();
        } catch (Exception ex) {
            LOG.info(ex.getMessage());
        } finally {
            LOG.info("thread closed");
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) throws Exception {
        new BusDataReceiverServer(3129).run();
    }
}
and created an initializer class:
public class BusDataReceiverInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel socketChannel) throws Exception {
        ChannelPipeline pipeline = socketChannel.pipeline();
        pipeline.addLast("imeiDecoder", new ImeiDecoder());
        pipeline.addLast("busDataDecoder", new BusDataDecoder());
        pipeline.addLast("encoder", new ResponceEncoder());
        pipeline.addLast("imeiHandler", new ImeiReceiverServerHandler());
        pipeline.addLast("busDataHandler", new BusDataReceiverServerHandler());
    }
}
Then I created the decoders, the encoder, and the two handlers. My ImeiDecoder, my encoder, and the ImeiReceiverServerHandler are working. This is my ImeiReceiverServerHandler:
public class ImeiReceiverServerHandler extends ChannelInboundHandlerAdapter {

    private final Logger LOG = LoggerFactory.getLogger(ImeiReceiverServerHandler.class);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageList<Object> msgs) throws Exception {
        MessageList<String> imeis = msgs.cast();
        String imei = imeis.get(0);
        ctx.write(Constants.BUS_DATA_ACCEPT);
        ctx.fireMessageReceived(msgs);
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        super.channelInactive(ctx);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
    }
}
Now, after accepting, I don't understand how to continue receiving the GPS data and forward it to the BusDataReceiverServerHandler.
If anyone could help me with this or point me to useful documentation, I would be very grateful. If it is possible to do this with Netty 3 instead, I would be thankful for that as well.
I have not used Netty 4, so I am not sure if my answer will be 100% accurate or the best way to do things in Netty 4, but what you need to do is track the state of your connection / client session in order to know when to forward messages to your second handler.
E.g.
private enum HandlerState { INITIAL, IMEI_RECEIVED }

private HandlerState state = HandlerState.INITIAL;

@Override
public void messageReceived(ChannelHandlerContext ctx, MessageList<Object> msgs) throws Exception {
    if (state == HandlerState.INITIAL) {
        MessageList<String> imeis = msgs.cast();
        String imei = imeis.get(0);
        ctx.write(Constants.BUS_DATA_ACCEPT);
        state = HandlerState.IMEI_RECEIVED;
    } else {
        // Forward message to next handler...
        // Not sure exactly how this is done in Netty 4
        // Maybe: ctx.fireMessageReceived(msgs);
        // Or maybe it is:
        // ctx.nextInboundMessageBuffer().add(msg);
        // ctx.fireInboundBufferUpdated();
        // I believe you could also remove the IMEI handler from the
        // pipeline instead of having it keep state, if it is not going
        // to do anything further.
    }
}
So either track state in the handler, or remove the handler from the pipeline once it has finished, if it will not be used further; a sketch of the remove-from-the-pipeline variant follows below. When tracking state, you can either keep the state in the handler itself (as shown above), or keep the state variables in the context / attribute map (however that is done in Netty 4).
The reason not to keep the state in the handler itself would be if you were going to make the handler shareable (one instance used across multiple channels). It is not necessary to do this, but there could be some resource savings if you have a large number of concurrent channels.
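For completeness, a minimal sketch of the remove-from-the-pipeline variant, written against the final Netty 4 API (channelRead instead of the MessageList-based messageReceived used above), so treat it as an illustration rather than a drop-in replacement:

public class ImeiReceiverServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        String imei = (String) msg;               // assumed to be produced by the ImeiDecoder upstream
        ctx.writeAndFlush(Constants.BUS_DATA_ACCEPT);

        // The IMEI handshake is done; remove this handler so that all further
        // messages go straight to the next handler (BusDataReceiverServerHandler).
        ctx.pipeline().remove(this);
    }
}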

Netty 4 - Outbound message at head of pipeline discarded

I am using Netty 4 RC1. I initialize my pipeline at the client side:
public class NodeClientInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel sc) throws Exception {
        // Frame encoding and decoding
        sc.pipeline()
                .addLast("logger", new LoggingHandler(LogLevel.DEBUG))
                // Business logic
                .addLast("handler", new NodeClientHandler());
    }
}
NodeClientHandler has the following relevant code:
public class NodeClientHandler extends ChannelInboundByteHandlerAdapter {

    private void sendInitialInformation(ChannelHandlerContext c) {
        c.write(0x05);
    }

    @Override
    public void channelActive(ChannelHandlerContext c) throws Exception {
        sendInitialInformation(c);
    }
}
I connect to the server using:
public void connect(final InetSocketAddress addr) {
    Bootstrap bootstrap = new Bootstrap();
    ChannelFuture cf = null;
    try {
        // set up the pipeline
        bootstrap.group(new NioEventLoopGroup())
                .channel(NioSocketChannel.class)
                .handler(new NodeClientInitializer());
        // connect
        bootstrap.remoteAddress(addr);
        cf = bootstrap.connect();
        cf.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture op) throws Exception {
                logger.info("Connect to {}", addr.toString());
            }
        });
        cf.channel().closeFuture().syncUninterruptibly();
    } finally {
        bootstrap.shutdown();
    }
}
So, what I basically want to do is send some initial information from the client to the server after the channel is active (i.e. the connect was successful). However, when doing the c.write() I get the following warning and no packet is sent:
WARNING: Discarded 1 outbound message(s) that reached at the head of the pipeline. Please check your pipeline configuration.
I know there is no outbound handler in my pipeline, but I didn't think I needed one (at this point), and I thought Netty would take care of transporting the ByteBuffer over to the server. What am I doing wrong here in the pipeline configuration?
By default, Netty only handles messages of type ByteBuf when you write to the Channel, so you need to wrap your data in a ByteBuf. See also the Unpooled class with its static helpers for creating ByteBuf instances.
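Applied to the sendInitialInformation() method above, the fix could look like this (the single 0x05 byte comes from your code; on newer Netty 4 releases you would use writeAndFlush() instead of a plain write()):

private void sendInitialInformation(ChannelHandlerContext c) {
    // wrap the byte in a ByteBuf; writing the boxed Integer 0x05 is what gets discarded
    ByteBuf buf = Unpooled.buffer(1).writeByte(0x05);
    c.write(buf);
}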

Netty Camel samples

I'm a newbie to Netty.
I'm looking for some samples (preferably, but not necessarily, using the Camel Netty component and Spring).
Specifically, a sample Netty app that consumes TCP messages.
Also, how can I write a JUnit test for this Netty app?
Thanks,
Dar
I assume you still want to integrate with Camel. I would first look at the Camel documentation. Once that frustrates you, you will need to start experimenting. I have one example where I created a Camel Processor as a Netty server. The Netty component works such that a from endpoint is a server which consumes and a to endpoint is a client which produces. I needed a to endpoint that was a server, and the component did not support that, so I simply implemented a Camel Processor as a Spring bean that starts a Netty server when it is initialized. The JBoss Netty documentation and samples are very good, though; it is worthwhile to step through them.
Here is my slimmed-down example. It is a server that sends a message to all connected clients. If you are new to Netty, I highly suggest going through the samples linked above:
public class NettyServer implements Processor {

    private final ChannelGroup channelGroup = new DefaultChannelGroup();
    private NioServerSocketChannelFactory serverSocketChannelFactory = null;
    private final ExecutorService executor = Executors.newCachedThreadPool();
    private String listenAddress = "0.0.0.0"; // overridden by spring-osgi value
    private int listenPort = 51501;           // overridden by spring-osgi value

    @Override
    public void process(Exchange exchange) throws Exception {
        byte[] bytes = (byte[]) exchange.getIn().getBody();
        // send over the wire
        sendMessage(bytes);
    }

    public synchronized void sendMessage(byte[] message) {
        ChannelBuffer cb = ChannelBuffers.copiedBuffer(message);
        // writes to all clients connected.
        this.channelGroup.write(cb);
    }

    private class NettyServerHandler extends SimpleChannelUpstreamHandler {
        @Override
        public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
            super.channelOpen(ctx, e);
            // add the client to the group.
            NettyServer.this.channelGroup.add(e.getChannel());
        }

        @Override
        public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
            super.channelConnected(ctx, e);
            // do something here when a client connects.
        }

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
            // do something when a message is received...
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
            // log the exception.
        }
    }

    private class PublishSocketServerPipelineFactory implements ChannelPipelineFactory {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
            // need to set the handler.
            return Channels.pipeline(new NettyServerHandler());
        }
    }

    // called by spring to start the server
    public void init() {
        try {
            this.serverSocketChannelFactory = new NioServerSocketChannelFactory(this.executor, this.executor);
            final ServerBootstrap serverBootstrap = new ServerBootstrap(this.serverSocketChannelFactory);
            serverBootstrap.setPipelineFactory(new PublishSocketServerPipelineFactory());
            serverBootstrap.setOption("reuseAddress", true);
            final InetSocketAddress listenSocketAddress = new InetSocketAddress(this.listenAddress, this.listenPort);
            this.channelGroup.add(serverBootstrap.bind(listenSocketAddress));
        } catch (Exception e) {
        }
    }

    // called by spring to shut down the server.
    public void destroy() {
        try {
            this.channelGroup.close();
            this.serverSocketChannelFactory.releaseExternalResources();
            this.executor.shutdown();
        } catch (Exception e) {
        }
    }

    // injected by spring
    public void setListenAddress(String listenAddress) {
        this.listenAddress = listenAddress;
    }

    // injected by spring
    public void setListenPort(int listenPort) {
        this.listenPort = listenPort;
    }
}
The Camel release ships with a lot of examples, but there is no simple one for the Netty component.
The Netty component can be used to set up a socket server that consumes messages and produces responses back to the client. After some searching on the web, I created my own tutorial using the Netty component in Camel as a simple Camel-Netty hello-world example (a minimal route sketch follows the list) that shows:
Using the Netty component in Camel to receive TCP messages
Using a POJO class to process the received message and create a response
Sending the response back to the client.
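A minimal sketch of what such a route could look like, assuming the camel-netty component on Camel 2.x; the port, the textline/sync options and the bean name are illustrative choices, not taken from the tutorial:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelNettyHelloWorld {

    // plain POJO used to build the response
    public static class HelloBean {
        public String sayHello(String body) {
            return "Hello " + body;
        }
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // consume text lines over TCP and send the bean's return value back as the response
                from("netty:tcp://0.0.0.0:5150?textline=true&sync=true")
                        .bean(HelloBean.class, "sayHello");
            }
        });
        context.start();
        Thread.sleep(60 * 1000); // keep the server running for a minute
        context.stop();
    }
}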
