Sharing state between threads using LocalChannel in Netty 4 (Java)

I'm building a POC with Netty 4, just a basic client/server setup. My question is how to effectively share state across threads within the server process itself. Let me explain...
Server
The server is nothing fancy; standard logic for accepting remote connections using boss and workers event loop groups (code not shown).
Where it gets interesting
The server supports pluggable modules that provide metrics and group management services. The modules run autonomously as child threads, periodically generating information the server needs to function properly. This is where I'm unsure what to do: how do I get the information produced by the module threads back to the server process in a "Netty" way?
Naive Approach
Local VM Channel
Looking at the LocalEcho example, it appears LocalChannel and LocalServerChannel provide in-VM (in-memory) communication. I was expecting such channels to be easier to set up, but here goes:
//in-memory server
EventLoopGroup serverGroup = new DefaultEventLoopGroup();
ServerBootstrap svm = new ServerBootstrap();
svm.group(serverGroup)
   .channel(LocalServerChannel.class)
   .handler(new ChannelInitializer<LocalServerChannel>() {
       @Override
       public void initChannel(LocalServerChannel ch) throws Exception {
           ch.pipeline().addLast(new LoggingHandler(LogLevel.INFO));
       }
   })
   .childHandler(new ChannelInitializer<LocalChannel>() {
       @Override
       public void initChannel(LocalChannel ch) throws Exception {
           ch.pipeline().addLast(new EventsFromModule_Handler());
       }
   });
//start server vm
svm.bind(addr).sync();
//in-memory client
EventLoopGroup clientGroup = new NioEventLoopGroup();
Bootstrap cvm = new Bootstrap();
cvm.group(clientGroup)
   .channel(LocalChannel.class)
   .handler(new ChannelInitializer<LocalChannel>() {
       @Override
       public void initChannel(LocalChannel ch) throws Exception {
           ch.pipeline().addLast(new DefaultModuleHandler());
       }
   });
Connect Modules
Here's where I create and fork the modules. First I get a channel from the client VM and pass it to the module...
Channel mch = cvm.connect(addr).sync().channel();
ModuleFactory.fork(
    new DiscoveryService(mch)
);
...now the module has a dedicated channel for publishing events; those events are handled by EventsFromModule_Handler, bridging the gap between the modules and the server process.
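For illustration, here is a rough sketch of what I imagine EventsFromModule_Handler looking like; ModuleEvent and MetricsRegistry are placeholder names I made up, not real classes:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

//rough sketch only; ModuleEvent and MetricsRegistry are placeholder names
public class EventsFromModule_Handler extends SimpleChannelInboundHandler<ModuleEvent> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ModuleEvent evt) {
        //each module event arrives as a normal inbound message on the local channel;
        //hand it off to whatever thread-safe structure the server reads from
        MetricsRegistry.instance().update(evt);
    }
}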
Question
I'm very new to Netty, but is using LocalChannels a valid approach in this context?
Am I doing something entirely wrong?

Related

Netty filtering reverse proxy

I'm trying to code a high-performance reverse proxy server using Netty 4.1.
I based my code on a Java adaptation of Feng-Zihao/protox and the Netty Proxy Example.
I first had some trouble handling 100-Continue, but adding the HttpObjectAggregator to my pipeline mostly solved that.
serverBootstrap
    .group(bossGroup, workerGroup)
    .channel(NioServerSocketChannel.class)
    .handler(new LoggingHandler(LogLevel.DEBUG))
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) throws Exception {
            ch.pipeline().addLast(new LoggingHandler(LogLevel.DEBUG));
            ch.pipeline().addLast(new HttpRequestDecoder());
            ch.pipeline().addLast(new HttpResponseEncoder());
            ch.pipeline().addLast(new HttpObjectAggregator(1048576));
            ch.pipeline().addLast(new FrontendHandler());
        }
    })
    // .option(ChannelOption.SO_REUSEADDR, true)
    // .option(ChannelOption.SO_BACKLOG, 128)
    // .childOption(ChannelOption.SO_KEEPALIVE, true)
    .childOption(ChannelOption.AUTO_READ, false)
    .bind(port).sync();
On the client side, the request hangs indefinitely.
The thing is, AUTO_READ being false seems to prevent the HttpObjectAggregator from doing its work, and my FrontendHandler only ever receives the channelActive event, never channelRead.
It seems, though, that I need it to avoid a race condition between the reads and the remote peer connection.
FYI, my end goal is to decide whether or not to forward the request based on a filter (probably a new handler right before my FrontendHandler) that needs to read the full HTTP content.
Am I missing something here?
Turn on auto read when your outbound channel becomes active, and have your FrontendHandler turn it off while processing each message. Then turn it on again when you are ready to handle another message.
This will let HttpObjectAggregator keep reading as many messages as it needs to in order to create a FullHttpMessage, and then stop sending it messages while your FrontendHandler is processing or waiting on some client write to invoke a listener.
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ctx.channel().config().setAutoRead(false);
    ...
    // This call should probably be in some event listener
    ctx.channel().config().setAutoRead(true);
}
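For the first part ("turn on auto read when your outbound channel becomes active"), here is a minimal sketch, assuming the frontend handler hands its inbound channel to a backend handler when it opens the outbound connection; BackendHandler and its constructor are my assumptions, not code from the question:
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BackendHandler extends ChannelInboundHandlerAdapter {

    private final Channel inboundChannel; // the client-facing channel, handed over by FrontendHandler

    public BackendHandler(Channel inboundChannel) {
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // the connection to the remote server is up, so it is now safe to let
        // the client-facing pipeline start reading (and aggregating) requests
        inboundChannel.config().setAutoRead(true);
    }
}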

How to Create Context in Netty

Good morning,
In the com.sun.net.httpserver version of HTTP servers we used to do this to create contexts and different handlers:
server = HttpServer.create(new InetSocketAddress(PORT), 0);
server.createContext("/##getPersonalImage", new PersonalImageHandler());
server.createContext("/##getProfile", new ProfileGetter());
and then you could reach it by typing
127.0.0.1:15000/##getProfile
But in Netty, I think I have searched through all the examples, etc., and I haven't seen contexts being created like this. Is this some sort of deprecated method or what?
Could you please help me achieve this sort of context in Netty too? Thanks in advance.
Netty works in this fashion.
You have a server and/or client to set up, and when you set the server up you can add handlers by adding a ChannelInitializer. You can also add or remove handlers on the fly, but this is not always recommended as it can be costly.
When you need to pass data in or out that is not related to the network data you read, you can take several approaches, such as extending the handlers and adding fields you can read or write, or using channel attributes.
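As a rough illustration of the channel-attribute approach (the "session" key name and the Session type are placeholders I made up):
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.AttributeKey;

public class SessionHandler extends ChannelInboundHandlerAdapter {

    // one shared key; the "session" name and Session type are placeholders
    private static final AttributeKey<Session> SESSION_KEY = AttributeKey.valueOf("session");

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // attach per-connection state to the channel so any handler in the
        // pipeline (or code outside it) can get at it later
        ctx.channel().attr(SESSION_KEY).set(new Session());
        ctx.fireChannelActive();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        Session session = ctx.channel().attr(SESSION_KEY).get();
        // ... use or update the session, then pass the message along
        ctx.fireChannelRead(msg);
    }
}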
The Netty tutorials and examples are definitely helpful when building things out. I will comment on their example below and hope that is helpful.
From their User Guide
Channels
Client Code
package io.netty.example.time;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap();
            b.group(workerGroup);
            b.channel(NioSocketChannel.class);
            b.option(ChannelOption.SO_KEEPALIVE, true);
            //** This is the ChannelInitializer - the Channel is basically the nexus for
            //** communication; you add handlers to its pipeline in the order data is handled
            b.handler(new ChannelInitializer<SocketChannel>() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    //** Here we add the TimeClientHandler to the SocketChannel's pipeline;
                    //** there are many ways to add handlers
                    ch.pipeline().addLast(new TimeClientHandler());
                }
            });
            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)
            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
Handler Code
package io.netty.example.time;

import java.util.Date;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // Here we are getting the message
        try {
            //** You will always have to write the decoding logic unless there is already a
            //** Netty handler for it, and even then you will probably need a business-logic
            //** handler specific to your application(s)
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L;
            System.out.println(new Date(currentTimeMillis));
            ctx.close(); //** You can close a connection on a Channel or a ChannelHandlerContext
        } finally {
            m.release(); //** Here you have to release the buffer
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
So if you want to be able to reach into a handler, you can add your own fields when you construct it. For the attribute approach, see the ChannelHandler javadoc for examples.
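As for the createContext-style dispatch you asked about, a common pattern is a single handler, placed after the HTTP codec and an HttpObjectAggregator, that routes on the request URI. A rough sketch, assuming Netty 4.1 HTTP classes; the handler and method names are made up for illustration:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;

public class ContextRoutingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        // dispatch on the URI, much like createContext("/##getProfile", ...) did
        String uri = request.uri();
        if (uri.startsWith("/##getProfile")) {
            // handleProfile(ctx, request);        // placeholder for your ProfileGetter logic
        } else if (uri.startsWith("/##getPersonalImage")) {
            // handlePersonalImage(ctx, request);  // placeholder for your PersonalImageHandler logic
        } else {
            // write a 404 response, etc.
        }
    }
}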
EDIT: Well, yes, there are some Netty handlers for IP-specific filtering, but I am not sure about them specifically. I am also not sure what you're trying to do, as I don't know the other library you mentioned.
To give you an idea of how I use Netty, which may help: I have an MMO-style game. When a client connects, it is over TCP with SSL; in the handler I create a Session class that tracks all their information. I then prompt the client, through my own network protocol, to open another connection to the server using TCP without SSL, and I add that to their Session. Then I negotiate whether they can receive UDP and, if so, I build a specific UDP handler for them and attach it to the Session. Each Session has its own instance of the handlers, which allows me to read and write from one channel to another and handle that person; each Session also references each of the handlers, channels, and connection data. I also have a file server built on HTTP and a POST server built in Netty; the client is plain Java, hence I used a web server to avoid initial dependencies.

Cannot communicate using event bus for verticles running on different machines

We were trying to establish communication between verticles using the event bus. We tried the simplest ping-pong communication example:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.eventbus.EventBus;

public class Sender extends AbstractVerticle {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            res.result().deployVerticle(new Sender());
        });
    }

    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        vertx.setPeriodic(1000, v -> {
            eb.send("ping-address", "ping!", reply -> {
                if (reply.succeeded()) {
                    System.out.println("Received reply: " + reply.result().body());
                } else {
                    System.out.println("No reply");
                }
            });
        });
    }
}
Similarly, we wrote the receiver; it is essentially the mirror image of the sender.
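Roughly, it looks like this (following the standard ping-pong example; this is a sketch, not our exact code):
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.eventbus.EventBus;

public class Receiver extends AbstractVerticle {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            res.result().deployVerticle(new Receiver());
        });
    }

    @Override
    public void start() throws Exception {
        EventBus eb = vertx.eventBus();
        // listen on the same address the sender publishes to and reply to each ping
        eb.consumer("ping-address", message -> {
            System.out.println("Received message: " + message.body());
            message.reply("pong!");
        });
    }
}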
Communication is successful if both the sender and receiver are run on the same machine, but when they are run on different machines, communication fails.
Also, this does not seem to be an issue with the Hazelcast cluster manager (which we used), because Hazelcast correctly discovers the other peer on both machines (this is evident from Hazelcast's console logs).
Members [2] {
Member [192.168.43.12]:5701
Member [192.168.43.84]:5701 this
}
Also, the firewall is not enabled on either machine, and we were able to establish communication between the same machines using Hazelcast alone (without Vert.x); it worked perfectly (for example, this).
So the issue is probably with Vert.x.
Did you try setting setClustered(true) on VertxOptions? I was testing this example code and it works fine for me:
public static void main(String[] args) {
    VertxOptions op = new VertxOptions();
    op.setClustered(true);
    Vertx.clusteredVertx(op, e -> {
        if (e.succeeded()) {
            HelloWorldVerticle hwv = new HelloWorldVerticle();
            e.result().deployVerticle(hwv);
        } else {
            e.cause().printStackTrace();
        }
    });
}
Hazelcast communication is different from Vert.x communication.
From the documentation
"Cluster managers do not handle the event bus inter-node transport,
this is done directly by Vert.x with TCP connections."
When deploying, you can set the event bus to be in clustered mode.
From this section of the documentation:
The event bus doesn’t just exist in a single Vert.x instance. By
clustering different Vert.x instances together on your network they
can form a single, distributed event bus.
With respect to clustering the event bus, the documentation says
The EventBusOptions also lets you specify whether or not the event
bus is clustered, the port and host.
When used in containers, you can also configure the public host and
port:
The code snippet is:
VertxOptions options = new VertxOptions()
        .setEventBusOptions(new EventBusOptions()
                .setClusterPublicHost("whatever")
                .setClusterPublicPort(1234)
        );

Vertx.clusteredVertx(options, res -> {
    // check if deployment was successful
});
Another useful link is this.

Configuring a Netty 4 server to handle 20,000 client connections

I'm using Netty 4.0 to write a TCP server that may have a load of 20k clients simultaneously, but my server cannot withstand that many connections.
This is my code.
private void initServer() {
    EventLoopGroup bossGroup = new NioEventLoopGroup(100);
    EventLoopGroup workerGroup = new NioEventLoopGroup(1000);
    EventExecutorGroup eegHandle = new DefaultEventExecutorGroup(1000);
    EventExecutorGroup eegDecode = new DefaultEventExecutorGroup(1000);
    EventExecutorGroup eegEncode = new DefaultEventExecutorGroup(1000);
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(bossGroup, workerGroup);
        bootstrap.channel(NioServerSocketChannel.class);
        bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline pipeLine = ch.pipeline();
                //check the idle time of each connection
                pipeLine.addLast("Idle event", new IdleStateHandler(timeIdleRead, timeIdleWrite, 0));
                //add idle handler to handle idle connections
                pipeLine.addLast("Idle handler", new ServerHandleIdleTime(logger));
                //add decode handler to decode messages received from clients
                pipeLine.addLast(eegDecode, new ServerMsgDecoder());
                //add business handler to process business logic
                pipeLine.addLast(eegHandle, new ServerHandleSimple());
                //add encode handler to encode messages sent to clients
                pipeLine.addFirst(eegEncode, new ServerMsgEncoder());
            }
        });
        bootstrap.option(ChannelOption.SO_BACKLOG, 200);
        bootstrap.childOption(ChannelOption.SO_KEEPALIVE, false);
        // bootstrap.option(ChannelOption.SO_TIMEOUT, 10000);
        ChannelFuture channelFuture = bootstrap.bind(host, port);
        channelFuture.sync();
        channelFuture.channel().closeFuture().sync();
    } catch (Exception e) {
        logger.error("", e);
    } finally {
        workerGroup.shutdownGracefully();
        bossGroup.shutdownGracefully();
    }
}
Should I use three EventExecutorGroups for the three handlers like that?
Is nThreads = 1000 for the workerGroup sufficient for 20k connections?
I would appreciate everyone's help in reconfiguring the server.
Thank you!
NIO does not imply having as many threads as connected clients, but rather as many as the number of active clients (those with an incoming or outgoing message), at least if you want them handled in parallel.
Otherwise, if maximum parallelism is not required, each message will wait a little (each thread moves from one channel to another as soon as possible). In general, that's not an issue.
However, I think 1000 is already quite a lot, though it really depends on your needs. 1000 threads for 20K clients means that at any given time at most 5% of the clients are really active (even if all are connected). If your clients are connected AND really active, 1000 could be too few. If your clients send, say, 1 message per second and each message takes 100 ms to process, that implies on average 10% concurrent messages (20,000 × 1/s × 0.1 s ≈ 2,000), so 2000 threads. You have to figure it out yourself.
Regarding your EventExecutorGroups, it depends on your encoder, decoder, and business handlers:
If one of them is blocking or time-consuming, then an EventExecutorGroup might be needed for it.
If not, then there is no need for such an EventExecutorGroup.
You might also try the following (a sketch putting these suggestions together follows after the list):
Increasing, if necessary, the number of threads to the number of probable active (or concurrent) messages in the codec and business handlers, and also in the workerGroup.
Using the same EventExecutorGroup for several handlers.
Setting ChannelOption.SO_REUSEADDR to true in your bootstrap.
The bossGroup should in general be at most (2 x number of cores) + 1 (so with 6 cores you could set it to 13); there should be no need to increase it to 100.
Making the order of your handlers more aligned with usual practice:
IdleStateHandler
ServerHandleIdleTime
ServerMsgDecoder
ServerMsgEncoder // currently in first position in your code due to addFirst
ServerHandleSimple
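Putting those suggestions together, a rough sketch of how the setup could look; the thread-pool sizes are only illustrative, and the handler classes, timeIdleRead/timeIdleWrite, and logger are the ones from your question:
EventLoopGroup bossGroup = new NioEventLoopGroup();        // defaults are plenty for accepting connections
EventLoopGroup workerGroup = new NioEventLoopGroup();      // defaults to 2 x cores
// one shared executor group for the potentially blocking codec and business handlers
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(200); // size it to your expected concurrent messages

ServerBootstrap bootstrap = new ServerBootstrap()
        .group(bossGroup, workerGroup)
        .channel(NioServerSocketChannel.class)
        .option(ChannelOption.SO_REUSEADDR, true)
        .option(ChannelOption.SO_BACKLOG, 200)
        .childHandler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) throws Exception {
                ChannelPipeline p = ch.pipeline();
                p.addLast("idleEvent", new IdleStateHandler(timeIdleRead, timeIdleWrite, 0));
                p.addLast("idleHandler", new ServerHandleIdleTime(logger));
                p.addLast(blockingGroup, "decoder", new ServerMsgDecoder());
                p.addLast(blockingGroup, "encoder", new ServerMsgEncoder());
                p.addLast(blockingGroup, "business", new ServerHandleSimple());
            }
        });

bootstrap.bind(host, port).sync();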

How does the Netty threading model work in the case of many client connections?

I intend to use Netty in an upcoming project. This project will act as both client and server. In particular, it will establish and maintain many connections to various servers while at the same time serving its own clients.
Now, the documentation for NioServerSocketChannelFactory specifies the threading model for the server side of things fairly well: each bound listen port requires a dedicated boss thread throughout the process's lifetime, while connected clients are handled in a non-blocking fashion on worker threads. Specifically, one worker thread is able to handle multiple connected clients.
However, the documentation for NioClientSocketChannelFactory is less specific. This also seems to utilize both boss and worker threads. However, the documentation states:
One NioClientSocketChannelFactory has one boss thread. It makes a connection attempt on request. Once a connection attempt succeeds, the boss thread passes the connected Channel to one of the worker threads that the NioClientSocketChannelFactory manages.
Worker threads seem to function in the same way as for the server case too.
My question is, does this mean that there will be one dedicated boss thread for each connection from my program to an external server? How will this scale if I establish hundreds, or thousands of such connections?
As a side note, are there any adverse side effects of re-using a single Executor (cached thread pool) as both the bossExecutor and workerExecutor for a ChannelFactory? What about also re-using it between different client and/or server ChannelFactory instances? This is somewhat discussed here, but I do not find those answers specific enough. Could anyone elaborate on this?
This is not a real answer to your question regarding how the Netty client thread model works, but you can use the same NioClientSocketChannelFactory to create a single ClientBootstrap with multiple ChannelPipelineFactorys and, in turn, make a large number of connections. Take a look at the example below.
public static void main(String[] args) {
    String host = "localhost";
    int port = 8090;
    ChannelFactory factory = new NioClientSocketChannelFactory(
            Executors.newCachedThreadPool(), Executors.newCachedThreadPool());

    MyHandler handler1 = new MyHandler();
    PipelineFactory factory1 = new PipelineFactory(handler1);

    AnotherHandler handler2 = new AnotherHandler();
    PipelineFactory factory2 = new PipelineFactory(handler2);

    ClientBootstrap bootstrap = new ClientBootstrap(factory);

    // At the client side the option is tcpNoDelay; at the server it is child.tcpNoDelay
    bootstrap.setOption("tcpNoDelay", true);
    bootstrap.setOption("keepAlive", true);

    for (int i = 1; i <= 50; i++) {
        if (i % 2 == 0) {
            bootstrap.setPipelineFactory(factory1);
        } else {
            bootstrap.setPipelineFactory(factory2);
        }
        ChannelFuture future = bootstrap.connect(new InetSocketAddress(host, port));
        future.addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                future.getChannel().write("SUCCESS");
            }
        });
    }
}
It also shows how different pipeline factories can be set for different connections, so based on the connection you make you can tweak your encoders/decoders in the channel pipeline.
I am not sure your question has been answered. Here's my answer: there is a single boss thread that simultaneously manages all the pending CONNECTs in your app. It uses NIO to process all the current connects in a single (boss) thread, and then hands each successfully connected channel off to one of the workers.
Your question mainly concerns performance. Single threads scale very well on the client.
Oh, and nabble has been closed. You can still browse the archive there.
