Netty server buffer reads multiple messages at once - java

1. Current problem
1 Netty client, 1 Netty server
In the Netty client, 3 different Kafka threads send messages to the Netty server through 1 ChannelHandler.
On the client side, the handler appears to write each thread's message individually.
But on the server side, Netty appears to read the messages from all 3 Kafka threads in a single read.
I verified this by logging:
(client-side and server-side log screenshots omitted)
2. Code
Netty client code
public class ClientHandler extends ChannelInboundHandlerAdapter {
    ...
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        log.debug("channelActive");
        this.ctx = ctx; // keep the context so the Kafka threads can write through it
    }

    public void sendMessage(String message) {
        log.info("Sent message: {}", message);
        ByteBuf messageBuffer = Unpooled.buffer();
        messageBuffer.writeBytes(message.getBytes());
        ctx.writeAndFlush(messageBuffer);
    }
}
Netty server code
public class NettyServerHandler extends ChannelInboundHandlerAdapter {
    ...
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        NettyServer.ServerStatus.increaseAndGetTotalReadMsg();
        ByteBuf byteBuf = (ByteBuf) msg;
        log.info(byteBuf.toString(Charset.defaultCharset()));
        byteBuf.release(); // release the buffer since it is not passed further down the pipeline
    }
}
This post says that writeAndFlush is thread-safe, but it doesn't seem to be. Is this a normal case or not?

Related

GRPC reconnection and keep the connection

StreamObserver<RouteSummary> responseObserver = new StreamObserver<RouteSummary>() {
    @Override
    public void onNext(RouteSummary summary) {
        LOGGER.info("Action.");
    }

    @Override
    public void onError(Throwable t) {
        LOGGER.error("Error.");
    }

    @Override
    public void onCompleted() {
        LOGGER.info("Completed.");
    }
};
There is a gRPC connection with client-side streaming. It has started, but if the gRPC client is restarted,
how can I keep getting responses from where they left off?
Client-side streaming means request streaming. Assuming you meant server-side streaming: if the client is restarted, it has basically lost the messages in the previous response stream.
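Resuming where the stream left off therefore needs application-level support. A minimal sketch of one way to do it, assuming a hypothetical server-streaming RPC listRoutes whose request carries a lastSeenIndex field that the client persists between runs (none of these names are part of the routeguide API):

// Hypothetical sketch: the client checkpoints the last index it saw and
// sends it on reconnect so the server can replay from that point.
RouteRequest request = RouteRequest.newBuilder()
        .setLastSeenIndex(loadCheckpoint()) // persisted across restarts
        .build();
stub.listRoutes(request, new StreamObserver<RouteSummary>() {
    @Override
    public void onNext(RouteSummary summary) {
        saveCheckpoint(summary.getIndex()); // hypothetical field
        LOGGER.info("Action.");
    }

    @Override
    public void onError(Throwable t) {
        LOGGER.error("Error.");
    }

    @Override
    public void onCompleted() {
        LOGGER.info("Completed.");
    }
});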

Is it possible to catch netty exception in Camel?

It seems to me that Netty has its own exception handlers and they don't propagate exceptions (i.e. IOException) back to the Camel route. Is there any way to know that the client has disconnected?
Answering my own question.
My problem was releasing clients that would otherwise just wait forever for some kind of response from Netty, mostly when connections were closed by remote hosts while the pipeline was being processed.
What needs to be done is to add a custom handler to the pipeline that either extends ChannelDuplexHandler and overrides the connect and write methods, or extends SimpleChannelInboundHandler and overrides channelInactive. I used ChannelDuplexHandler.
public class ExceptionHandler extends ChannelDuplexHandler {
    private final NettyProducer producer;

    public ExceptionHandler(NettyProducer producer) {
        this.producer = producer;
    }

    @Override
    public void connect(ChannelHandlerContext ctx, SocketAddress remoteAddress, SocketAddress localAddress,
                        ChannelPromise promise) throws Exception {
        ctx.connect(remoteAddress, localAddress, promise)
                .addListener(future -> {
                    if (!future.isSuccess()) {
                        // no need to do anything here, camel will manage it on its own
                    }
                });
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        ctx.write(msg, promise).addListener(future -> {
            if (!future.isSuccess()) {
                reportStatusBackToCamel(ctx);
            }
        });
    }

    private void reportStatusBackToCamel(ChannelHandlerContext ctx) {
        NettyCamelState nettyCamelState = producer.getCorrelationManager().getState(ctx, ctx.channel(),
                new IOException());
        Exchange exchange = nettyCamelState.getExchange();
        AsyncCallback callback = nettyCamelState.getCallback();
        exchange.setException(new RuntimeException("Client disconnected"));
        callback.done(false);
    }
}
In the case of SimpleChannelInboundHandler, just put the exchange handling into the channelInactive method.
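A minimal sketch of that variant (the class name InactiveAwareHandler is made up here; reportStatusBackToCamel is the same method as in the ExceptionHandler above):

public class InactiveAwareHandler extends SimpleChannelInboundHandler<Object> {
    private final NettyProducer producer;

    public InactiveAwareHandler(NettyProducer producer) {
        this.producer = producer;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // fail the pending exchange when the remote host closes the connection
        reportStatusBackToCamel(ctx);
        super.channelInactive(ctx);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
        // retain before passing along, since SimpleChannelInboundHandler
        // releases the message after this method returns
        ctx.fireChannelRead(ReferenceCountUtil.retain(msg));
    }

    // reportStatusBackToCamel(ctx) identical to the ChannelDuplexHandler version
}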
In your ClientInitializerFactory, in initChannel, you add the ExceptionHandler to the pipeline:
pipeline.addLast(new ExceptionHandler(producer));
The producer is given to you on application startup. If you need additional Spring-injected beans, as I did, you simply end up having a couple of constructors in your factory class: one @Autowired (with your injected fields) calling the other, which sets the additional producer field.
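A hedged sketch of that constructor arrangement (MyClientInitializerFactory and MyBean are made-up names; createPipelineFactory is how camel-netty hands you the producer):

public class MyClientInitializerFactory extends ClientInitializerFactory {
    private final MyBean myBean;       // Spring-injected dependency
    private NettyProducer producer;    // supplied by Camel at startup

    @Autowired
    public MyClientInitializerFactory(MyBean myBean) {
        this.myBean = myBean;
    }

    private MyClientInitializerFactory(MyBean myBean, NettyProducer producer) {
        this(myBean);
        this.producer = producer;
    }

    @Override
    public ClientInitializerFactory createPipelineFactory(NettyProducer producer) {
        // Camel calls this with the producer; return a copy that carries it
        return new MyClientInitializerFactory(myBean, producer);
    }

    @Override
    protected void initChannel(Channel channel) throws Exception {
        ChannelPipeline pipeline = channel.pipeline();
        // ... codecs ...
        pipeline.addLast(new ExceptionHandler(producer));
    }
}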

Send greeting to a newly connected client

I'm writing a TCP server with Netty and want to send a greeting to every newly connected client. As of now I intend to do that with a ChannelInitializer:
ServerBootstrap b;
//...
b.channel(NioServerSocketChannel.class)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     protected void initChannel(SocketChannel ch) {
         ch.pipeline(). //...
         ch.writeAndFlush(Unpooled.copiedBuffer("Hi there!", CharsetUtil.UTF_8));
     }
 });
Since everything in Netty is asynchronous, I'm not sure if this is the right way to send a greeting when the connection succeeds. Can someone suggest a recommended way?
You should do this via a ChannelInboundHandlerAdapter once the channelActive callback is executed.
Something like:
public class GreetingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.writeAndFlush(Unpooled.copiedBuffer("Hi there!", CharsetUtil.UTF_8));
    }
}
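The handler still has to be registered; a minimal sketch of wiring it into the child pipeline (reusing the ServerBootstrap b from the question):

b.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        // channelActive fires once the connection is established,
        // so each client receives the greeting exactly once
        ch.pipeline().addLast(new GreetingHandler());
    }
});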

Netty: Why different packets are connected together as an request in the server?

Here's the only handler in the Netty client; I sent 3 packets to the server.
@Sharable
public class ClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        ctx.writeAndFlush(Unpooled.copiedBuffer("1", CharsetUtil.UTF_8));
        ctx.writeAndFlush(Unpooled.copiedBuffer("2", CharsetUtil.UTF_8));
        ctx.writeAndFlush(Unpooled.copiedBuffer("3", CharsetUtil.UTF_8))
                .addListener(ChannelFutureListener.CLOSE);
    }
}
In the server handler I just print what arrives. I expected three separate prints of 1, 2, and 3, but actually got 123. What happened? Aren't they different packets?
@Sharable
public class ServerHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf in) {
        System.out.println(in.toString(CharsetUtil.UTF_8));
    }
}
The TCP/IP protocol (which you are probably using in your server) is stream-based. That means the buffer of a stream-based transport is not a queue of packets but a queue of bytes, so it is up to the OS whether your data is sent as separate packets or as one packet with all your data combined.
You have 3 options: add some separator between messages, send fixed-length packets, or prepend the packet size to each message; a sketch of the last option follows below.
There are more details in the Netty documentation.
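A minimal sketch of the length-prefix option using Netty's built-in codecs, assuming frames of up to 1024 bytes with a 4-byte length field (this goes into each side's ChannelInitializer, where ch is the channel being initialized):

// Server pipeline: strip the 4-byte length prefix and emit one ByteBuf per message
ch.pipeline()
  .addLast(new LengthFieldBasedFrameDecoder(1024, 0, 4, 0, 4))
  .addLast(new ServerHandler());

// Client pipeline: prepend a 4-byte length field to every outgoing buffer
ch.pipeline()
  .addLast(new LengthFieldPrepender(4))
  .addLast(new ClientHandler());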

How to offload blocking operation to a worker Verticle using websockets and async request

I am implementing WebSockets using Vert.x 3.
The scenario is simple: open a socket from the client, do some 'blocking' work in a worker verticle, and when it finishes, respond with the answer to the client (via the open socket).
Please tell me if I am doing it right:
I created VertxWebsocketServerVerticle. As soon as the websocket is opened and a request comes in from the client, I use the eventBus to pass the message to
EventBusReceiverVerticle, where I do the blocking operation.
How do I actually send the response back to VertxWebsocketServerVerticle and from there back to the client?
code:
Main class:
public static void main(String[] args) throws InterruptedException {
    Vertx vertx = Vertx.vertx();
    vertx.deployVerticle(new EventBusReceiverVerticle("R1"), new DeploymentOptions().setWorker(true));
    vertx.deployVerticle(new VertxWebsocketServerVerticle());
}
VertxWebsocketServerVerticle:
public class VertxWebsocketServerVerticle extends AbstractVerticle {

    public void start() {
        vertx.createHttpServer().websocketHandler(webSocketHandler -> {
            System.out.println("Connected!");
            Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
            webSocketHandler.writeFinalBinaryFrame(buff);
            webSocketHandler.handler(buffer -> {
                String inputString = buffer.getString(0, buffer.length());
                System.out.println("inputString=" + inputString);
                vertx.executeBlocking(future -> {
                    vertx.eventBus().send("anAddress", inputString, event -> System.out.printf("got back from reply"));
                    future.complete();
                }, res -> {
                    if (res.succeeded()) {
                        webSocketHandler.writeFinalTextFrame("output=" + inputString + "_result");
                    }
                });
            });
        }).listen(8080);
    }

    @Override
    public void stop() throws Exception {
        super.stop();
    }
}
EventBusReceiverVerticle :
public class EventBusReceiverVerticle extends AbstractVerticle {

    private String name = null;

    public EventBusReceiverVerticle(String name) {
        this.name = name;
    }

    public void start(Future<Void> startFuture) {
        vertx.eventBus().consumer("anAddress", message -> {
            System.out.println(this.name + " received message: " + message.body());
            try {
                // doing some looong work..
                Thread.sleep(10000);
                System.out.printf("finished waiting\n");
                startFuture.complete();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
    }
}
I always get:
WARNING: Message reply handler timed out as no reply was received - it will be removed
github project at: https://github.com/IdanFridman/VertxAndWebSockets
thank you,
ray.
Since you are blocking your websocket handler until it receives a reply for the message sent to the EventBus, and that reply will not, in fact, arrive before the 10s delay you set up elapses, you will certainly get that warning: the event bus reply handler times out because a message was sent but no response was received before the timeout delay.
I don't know whether you are just experimenting with the Vert.x toolkit or trying to fulfill some requirement, but you certainly have to adapt your code to the Vert.x spirit:
First, you should not block until a message is received in your websocket handler; keep in mind that everything is asynchronous when it comes to Vert.x.
Second, to sleep for some time, use the Vert.x way, vertx.setTimer(...), rather than Thread.sleep(delay).
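A minimal sketch of the receiver rewritten along those lines, replying to the message instead of completing the start future, plus the matching send in the websocket handler (the 10s delay is kept from the original):

public class EventBusReceiverVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.eventBus().consumer("anAddress", message -> {
            // simulate the long work without blocking the event loop, then
            // reply so the sender's reply handler fires instead of timing out
            vertx.setTimer(10000, timerId -> message.reply(message.body() + "_result"));
        });
    }
}

// in VertxWebsocketServerVerticle: no executeBlocking needed; write the
// frame when the reply arrives
vertx.eventBus().send("anAddress", inputString, reply -> {
    if (reply.succeeded()) {
        webSocketHandler.writeFinalTextFrame("output=" + reply.result().body());
    }
});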
