Good morning,
In the com.sun.net.httpserver version of HTTP servers, we used to do this to create contexts and different handlers:
server = HttpServer.create(new InetSocketAddress(PORT), 0);
server.createContext("/##getPersonalImage", new PersonalImageHandler());
server.createContext("/##getProfile", new ProfileGetter());
and then you could reach it by typing
127.0.0.1:15000/##getProfile
But in Netty, I think I have searched everything in the examples etc., and I haven't seen contexts created like this. Is this some sort of deprecated method, or what?
Could you please help me achieve this sort of context in Netty too? Thanks in advance.
Netty works in this fashion:
You set up the server and/or client, and when you set the server up you add handlers via a ChannelInitializer. You can also add or remove handlers on the fly, but this is not always recommended as it can be costly.
When you need to pass data around that is not network related, or not related to the network data you read, you can take several approaches, such as extending the handlers and adding a field through which you can access or store data, or using ChannelAttributes.
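For the ChannelAttributes approach, here is a minimal sketch (the "session" key name and the `Session` class are illustrative, not part of Netty):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.AttributeKey;

public class SessionHandler extends ChannelInboundHandlerAdapter {

    // AttributeKey is Netty's way to attach arbitrary state to a Channel;
    // the "session" name and the Session type below are illustrative.
    public static final AttributeKey<Session> SESSION_KEY = AttributeKey.valueOf("session");

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Store per-connection data that later handlers can read back.
        ctx.channel().attr(SESSION_KEY).set(new Session());
        ctx.fireChannelActive();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Session session = ctx.channel().attr(SESSION_KEY).get();
        // ... use the session while handling the message ...
        ctx.fireChannelRead(msg);
    }

    // Minimal illustrative per-connection state holder.
    public static class Session { }
}
```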
Their tutorials and examples are definitely helpful when building things out. I will comment on their example and explain; I hope that is helpful.
From their User Guide
Channels
Client Code
package io.netty.example.time;

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class TimeClient {
    public static void main(String[] args) throws Exception {
        String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap();
            b.group(workerGroup);
            b.channel(NioSocketChannel.class);
            b.option(ChannelOption.SO_KEEPALIVE, true);
            b.handler(new ChannelInitializer<SocketChannel>() { //** This is the ChannelInitializer - the Channel is basically the nexus for communications; you add handlers to the channel in the order in which data is handled
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new TimeClientHandler()); //** Here we add TimeClientHandler (seen below) to the SocketChannel; there are many ways to add handlers
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync();

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
Handler Code
package io.netty.example.time;

import java.util.Date;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class TimeClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf m = (ByteBuf) msg; // Here we get the message
        try {
            long currentTimeMillis = (m.readUnsignedInt() - 2208988800L) * 1000L; //** We always have to write the logic ourselves unless there is already a Netty handler for it, and even then you may (or probably will) have to implement a business-logic handler specific to your application(s)
            System.out.println(new Date(currentTimeMillis));
            ctx.close(); //** You can close a connection on a Channel or a ChannelHandlerContext
        } finally {
            m.release(); //** Here you have to release the buffer
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
So if you want to be able to reach in, you can add your own fields when you construct the handler. For the attributes method, see ChannelHandler for examples: ChannelHandler
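To answer the original question more directly: to get something close to `createContext`-style dispatch, you typically put an HTTP codec in the pipeline and switch on the request URI in one inbound handler. A sketch, assuming Netty 4.1 with `HttpServerCodec` and `HttpObjectAggregator` already installed earlier in the pipeline; the `handleProfile`/`handlePersonalImage` methods are made-up placeholders:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.FullHttpRequest;

public class ContextRoutingHandler extends SimpleChannelInboundHandler<FullHttpRequest> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest request) {
        // Dispatch on the URI, much like HttpServer.createContext() did.
        String uri = request.uri();
        if (uri.startsWith("/##getProfile")) {
            handleProfile(ctx, request);        // illustrative method
        } else if (uri.startsWith("/##getPersonalImage")) {
            handlePersonalImage(ctx, request);  // illustrative method
        } else {
            ctx.close(); // or write back a 404 response
        }
    }

    private void handleProfile(ChannelHandlerContext ctx, FullHttpRequest request) { /* ... */ }

    private void handlePersonalImage(ChannelHandlerContext ctx, FullHttpRequest request) { /* ... */ }
}
```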
EDIT: Will, yes, there are some Netty handlers for IP-specific filtering, but I am not sure about them specifically. I am also not sure what you are trying to do, as I do not know the other library you mentioned. Giving you an idea of how I use Netty may help. I have an MMO-style game. When a client connects, it is over TCP with SSL; in the handler I create a Session class that tracks all their information. Then, through my own network protocol, I prompt the client to open another connection to the server using TCP without SSL, and I add that to their Session. Then I negotiate whether they can receive UDP, and if so I build out a specific UDP handler for them and attach it to the Session. Each Session has its own instance of the handlers, which allows me to read and write from one channel to another and/or handle that person; each Session also references each of the handlers, channels, and connection data. I also have a file server built on HTTP and a POST server built in Netty; the client uses native Java, hence I used a web server to avoid initial dependencies.
Related
To convince some people to switch from old-school tech, I need to build a chat demo application that manages more than 10K concurrent connections using Java (like the Node.js stuff).
I have tested Netty 5.0, which is awesome but requires a lot of work; on the other hand, Jetty 9.3 is great but slow compared to other competitors.
After some searching I found the Vert.x 3 toolkit, which is based on Netty and comes with a plethora of great tools (no need to reinvent the wheel). I have seen the examples in Git, and I was able to build a websocket server, etc.
public void start() throws Exception {
    vertx.createHttpServer().websocketHandler(new Handler<ServerWebSocket>() {
        @Override
        public void handle(ServerWebSocket e) {
            // business stuff, in the old style, not yet lambdas
        }
    }).listen(port);
}
Being new to the Vert.x world, I could not figure out how to manage connected users with it. The old-fashioned way is to use something like:
HashMap<UUID, ServerWebSocket> connectedUsers;
When a connection is established, I check whether it exists; if not, I add it as a new entry, and then I send, broadcast, retrieve, etc. through the collection.
My question is: does Vert.x 3 have something to track connections and remove those who left (ping-pong), broadcast, etc., or should I implement that from scratch using cookies, sessions, ...?
I could not find any real example using Vert.x 3.
Basically, the scope of the websocketHandler represents a connection. In your example this is your anonymous class. I created a little websocket chat example where I use the Vert.x event bus to distribute the messages to all the clients.
In the start method of the server we handle the websocket connections. You can implement the closeHandler to monitor client disconnections. There are also handlers for exceptions, ping-pong, etc. You can identify a specific connection using the textHandlerID, but you also have access to the remote address.
public void start() throws Exception {
    vertx.createHttpServer().websocketHandler(handler -> {
        System.out.println("client connected: " + handler.textHandlerID());
        vertx.eventBus().consumer(CHAT_CHANNEL, message -> {
            handler.writeTextMessage((String) message.body());
        });
        handler.textMessageHandler(message -> {
            vertx.eventBus().publish(CHAT_CHANNEL, message);
        });
        handler.closeHandler(message -> {
            System.out.println("client disconnected " + handler.textHandlerID());
        });
    }).listen(8080);
}
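One caveat with the example above: the event bus consumer registered for each connection should be unregistered when the client leaves, otherwise it keeps receiving messages for a closed socket. A possible variant, as a sketch (the `CHAT_CHANNEL` address value is illustrative):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.eventbus.MessageConsumer;

public class ChatServerVerticle extends AbstractVerticle {

    private static final String CHAT_CHANNEL = "chat.channel"; // illustrative address

    @Override
    public void start() {
        vertx.createHttpServer().websocketHandler(socket -> {
            // Register one event-bus consumer per connection...
            MessageConsumer<String> consumer = vertx.eventBus().consumer(CHAT_CHANNEL,
                    message -> socket.writeTextMessage(message.body()));
            socket.textMessageHandler(message -> vertx.eventBus().publish(CHAT_CHANNEL, message));
            // ...and unregister it on close, so closed sockets stop receiving messages.
            socket.closeHandler(v -> consumer.unregister());
        }).listen(8080);
    }
}
```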
The client example is also written in Java. It just prints all the received messages on the websocket connection to the console. After connection it sends a message.
public void start() throws Exception {
    HttpClient client = vertx.createHttpClient();
    client.websocket(8080, "localhost", "", websocket -> {
        websocket.handler(data -> System.out.println(data.toString("ISO-8859-1")));
        websocket.writeTextMessage(NAME + ": hello from client");
    });
}
I'm trying to write a non-blocking proxy with Netty 4.1. I have a "FrontHandler" which handles incoming connections, and a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxy example (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:

@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            // ...

meaning that the incoming message is only written if the outbound client connection is already ready. This is obviously not ideal in an HTTP proxy case, so I am thinking about the best way to handle it.
I am wondering if disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then re-enable autoRead on the child socket in the channelActive event of the backend handler. However, I am not sure how many messages I would get in the handler for each read() invocation (using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger read() again and re-enable autoRead on the channel).
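A sketch of that idea, assuming the frontend was bootstrapped with `.childOption(ChannelOption.AUTO_READ, false)`; the `BackendHandler` name and the constructor shape are illustrative. Note this only controls when reads happen, not how many messages a decoder emits per read:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative backend handler: holds the frontend channel and re-enables
// reads on it once this outbound connection becomes active.
public class BackendHandler extends ChannelInboundHandlerAdapter {

    private final Channel inboundChannel;

    public BackendHandler(Channel inboundChannel) {
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // The frontend was bootstrapped with AUTO_READ=false, so nothing
        // was consumed from it until now; start reading and forwarding.
        inboundChannel.config().setAutoRead(true);
        inboundChannel.read();
    }
}
```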
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        Channel c = future.get();
        if (!currentBackend.compareAndSet(null, c)) {
            pool.release(c);
            throw new IllegalStateException();
        }
    });
}
and then do the copying from input to output through that promise, e.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
    doInBackend(c -> {
        c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                future.channel().read();
            } else {
                pool.release(c);
                frontCtx.close();
            }
        });
    });
}

private void doInBackend(Consumer<Channel> action) {
    Channel c = currentBackend.get();
    if (c == null) {
        backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
    } else {
        action.accept(c);
    }
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed on the right thread... right now I'm using:

backendPromise = group.next().<Channel>newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?

(where group is the same EventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to have the "else { }" optimization in the "doInBackend" method, which avoids the Promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
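That solution isn't shown, but the chaining idea can be sketched with plain `java.util.concurrent` types: append every forwarding action to a chain that starts with the future resolving the backend channel, so writes run in arrival order once the backend is ready (the `WriteChain` class is illustrative, not from the asker's code):

```java
import java.util.concurrent.CompletableFuture;

// Illustrative: every frontend message is appended to a chain rooted at the
// future that resolves when the backend is ready, preserving arrival order.
public class WriteChain {

    private CompletableFuture<Void> chain;

    public WriteChain(CompletableFuture<Void> backendReady) {
        this.chain = backendReady;
    }

    // Append an action (e.g. a writeAndFlush) to run after all previous ones.
    public synchronized void enqueue(Runnable writeAction) {
        chain = chain.thenRun(writeAction);
    }
}
```

Nothing runs until the root future completes; after that, each enqueued action fires as soon as its predecessor finishes.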
I have worked on a similar proxy application based on the MQTT protocol, basically used to create a real-time chat application. The application I had to design, however, was asynchronous in nature, so I naturally did not face any such problem: if
outboundChannel.isActive() == false
then I can simply keep the messages in a queue or a persistent DB and process them once the outboundChannel is up. However, since you are talking about an HTTP application, the application is synchronous in nature, meaning the client cannot keep sending packets until the outboundChannel is up and running. So the option you suggest is that a packet will only be read once the channel is active, and you can handle message reads manually by disabling auto-read in ChannelConfig.
However, what I would like to suggest is that you check whether the outboundChannel is active. If it is, send the packet forward for processing. If it is not, reject the packet by sending back an error response, similar to a 404.
Along with this, you should configure your client to keep retrying sending packets at certain intervals, and handle accordingly whatever needs to be done if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern. You should let Netty handle that for you in the most efficient way.
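A sketch of the reject-early suggestion, assuming Netty 4.1 (503 Service Unavailable arguably fits "backend not ready" even better than 404; the class and method names are made up):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class BackendGate {

    // Illustrative: if the outbound channel is not active, answer immediately
    // with 503 and close, instead of buffering the request.
    static void rejectIfBackendDown(ChannelHandlerContext ctx, Channel outboundChannel) {
        if (outboundChannel == null || !outboundChannel.isActive()) {
            FullHttpResponse response = new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE);
            HttpUtil.setContentLength(response, 0);
            ctx.writeAndFlush(response).addListener(ChannelFutureListener.CLOSE);
        }
    }
}
```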
I'm building a POC with Netty 4, just a basic client/server setup; my question is how to effectively share state across threads within the server process itself, let me explain...
Server
The server is nothing fancy; standard logic for accepting remote connections using boss and workers event loop groups (code not shown).
Where it gets interesting
The server supports pluggable modules that provide metrics and group-management services. The modules run autonomously as child threads, periodically generating information important for the server to function properly. This is where I'm unsure what to do: how do I get the info produced by the module threads to the server process in a "Netty" way?
Naive Approach
Local VM Channel
Looking at the LocalEcho example, it appears LocalChannel and LocalServerChannel provide in-VM (in-memory) communication. I was expecting such channels to be easier to set up, but here goes:
// in-memory server
EventLoopGroup serverGroup = new DefaultEventLoopGroup();
ServerBootstrap svm = new ServerBootstrap();
svm.group(serverGroup)
   .channel(LocalServerChannel.class)
   .handler(new ChannelInitializer<LocalServerChannel>() {
       @Override
       public void initChannel(LocalServerChannel ch) throws Exception {
           ch.pipeline().addLast(new LoggingHandler(LogLevel.INFO));
       }
   })
   .childHandler(new ChannelInitializer<LocalChannel>() {
       @Override
       public void initChannel(LocalChannel ch) throws Exception {
           ch.pipeline().addLast(new EventsFromModule_Handler());
       }
   });

// start server vm
svm.bind(addr).sync();

// in-memory client
EventLoopGroup clientGroup = new NioEventLoopGroup();
Bootstrap cvm = new Bootstrap();
cvm.group(clientGroup)
   .channel(LocalChannel.class)
   .handler(new ChannelInitializer<LocalChannel>() {
       @Override
       public void initChannel(LocalChannel ch) throws Exception {
           ch.pipeline().addLast(new DefaultModuleHandler());
       }
   });
Connect Modules
Here's where I create and fork the modules; first I get a channel from the client VM and pass it to the module:

Channel mch = cvm.connect(addr).sync().channel();

ModuleFactory.fork(new DiscoveryService(mch));
...now the module has a dedicated channel for generating events; such events will be handled by EventsFromModule_Handler(), bridging the gap between modules and the (server) process.
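For completeness, a minimal sketch of what such a module might look like (the class shape and payload are illustrative; note that over a LocalChannel, messages are passed as in-VM objects, so no encoder is needed for this to work):

```java
import io.netty.channel.Channel;

// Illustrative module: a Runnable that owns its LocalChannel endpoint and
// periodically writes events toward the server's handler.
public class DiscoveryService implements Runnable {

    private final Channel channel;

    public DiscoveryService(Channel channel) {
        this.channel = channel;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            // Over a LocalChannel the object is delivered as-is in-VM.
            channel.writeAndFlush("discovery-event"); // illustrative payload
            try {
                Thread.sleep(5000); // illustrative reporting interval
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}
```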
Question
I'm very new to Netty, but is using LocalChannels in this context a valid approach?
Am I doing something entirely wrong?
I want to use a single instance of Netty to serve both websocket (Socket.IO) and raw TCP connections. What I am doing now is to have ONLY a RoutingHandler at the start, which checks the first byte; if it is '[', it removes the RoutingHandler and adds the TCP handlers to the channel pipeline; otherwise, it adds the websocket handlers. The code looks like:
public class RoutingHandler extends SimpleChannelInboundHandler<ByteBuf> {

    private final ServerContext context;

    public RoutingHandler(final ServerContext context) {
        this.context = context;
    }

    @Override
    protected void channelRead0(final ChannelHandlerContext ctx, final ByteBuf in) throws Exception {
        if (in.isReadable()) {
            ctx.pipeline().remove(this);
            final byte firstByte = in.readByte();
            in.readerIndex(0);
            if (firstByte == 0x5B) { // '['
                this.context.routeChannelToTcp(ctx.channel());
            } else {
                // websocket
                this.context.routeChannelToSocketIO(ctx.channel());
            }
            ctx.pipeline().fireChannelActive();
            final byte[] copy = new byte[in.readableBytes()];
            in.readBytes(copy);
            ctx.pipeline().fireChannelRead(Unpooled.wrappedBuffer(copy));
        }
    }
}
The code seems to work, but it does not seem like the best way to do it. In particular, I am kind of hacking the channel lifecycle by manually calling fireChannelActive(), because adding extra handlers does not trigger the active event again, so some initialization code is not run.
Is there anything wrong with my solution? What is a better way to do it?
Thanks
This is referred to as port unification. There is a good example of it here; although it demonstrates switching between TCP and HTTP (with SSL and/or gzip detection) rather than websockets, the principles are the same.
Basically, you read in the first 5 bytes to sniff the protocol (more or less as you did) and, once the protocol is identified, modify the handlers in the pipeline accordingly.
Since you need to initiate a websocket through HTTP anyway, the example should work for you if you add the websocket upgrade procedure as outlined in this example.
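A sketch of the sniffing step as a ByteToMessageDecoder, in the spirit of Netty's port-unification example (the `switchTo*` methods are placeholders). This also avoids the manual copy/fireChannelRead step from the question, because ByteToMessageDecoder forwards any bytes still buffered when the handler is removed:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

// Illustrative sniffing decoder: peek at the buffer without consuming it,
// install the right handlers, then remove itself from the pipeline.
public class ProtocolSniffer extends ByteToMessageDecoder {

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 5) {
            return; // wait until there is enough data to decide
        }
        if (in.getByte(in.readerIndex()) == '[') { // getByte() peeks, it does not consume
            switchToTcp(ctx);   // placeholder: add the raw-TCP handlers
        } else {
            switchToHttp(ctx);  // placeholder: add HttpServerCodec + websocket upgrade
        }
        // Removing the decoder forwards the buffered bytes to the new handlers.
        ctx.pipeline().remove(this);
    }

    private void switchToTcp(ChannelHandlerContext ctx) { /* ... */ }

    private void switchToHttp(ChannelHandlerContext ctx) { /* ... */ }
}
```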
To see this in action, take a look at the following game server, which does exactly this. It is much the same approach mentioned in Nicholas's answer.
The relevant files are ProtocolMux and LoginProtocol.
I have a ServerBootstrap configured with a fairly standard Http-Codec ChannelInitializer.
On shutdown, my server waits for a grace period during which it can still handle incoming requests. My server supports keep-alive, but on shutdown I want to make sure every HttpResponse sent closes the connection with the HTTP header "Connection: close", and that the channel is closed after the write. This is only necessary on server shutdown.
I have a ChannelHandler to support that:
@ChannelHandler.Sharable
public class CloseConnectionHandler extends ChannelOutboundHandlerAdapter {

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (msg instanceof HttpResponse) { // other outbound messages (e.g. HttpContent) pass through untouched
            HttpResponse response = (HttpResponse) msg;
            if (isKeepAlive(response)) {
                setKeepAlive(response, false);
                promise.addListener(ChannelFutureListener.CLOSE);
            }
        }
        ctx.write(msg, promise);
    }
}
I keep track of all connected clients using a ChannelGroup, so at shutdown I can dynamically modify the pipeline of each client to include my CloseConnectionHandler; this works fine.
However, new connections during the grace period have their pipelines configured by the original ServerBootstrap ChannelInitializer, and I can't see a way of dynamically re-configuring that.
As a workaround, I can have the CloseConnectionHandler configured in the standard pipeline and turned off with a boolean, only activating it on shutdown. But I'd rather avoid that if possible; it seems a bit unnecessary.
There is currently no way to "replace" the initializer at run-time, so using a flag etc. would be the best bet.
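A sketch of that flag approach, assuming Netty 4.1's HttpUtil (in Netty 4.0 the keep-alive helpers live on HttpHeaders instead); the class name is illustrative:

```java
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.http.HttpResponse;
import io.netty.handler.codec.http.HttpUtil;

// Illustrative flag-controlled variant: installed in every pipeline from the
// start, it does nothing until the server flips the flag during shutdown.
@ChannelHandler.Sharable
public class CloseOnShutdownHandler extends ChannelOutboundHandlerAdapter {

    private volatile boolean closing;

    public void startClosing() {
        closing = true; // call once when the shutdown grace period begins
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (closing && msg instanceof HttpResponse) {
            HttpResponse response = (HttpResponse) msg;
            if (HttpUtil.isKeepAlive(response)) {
                HttpUtil.setKeepAlive(response, false); // sets "Connection: close"
                promise.addListener(ChannelFutureListener.CLOSE);
            }
        }
        ctx.write(msg, promise);
    }
}
```

Because the handler is `@Sharable`, a single instance can be installed by the ChannelInitializer into every pipeline, and one `startClosing()` call affects all connections, including those accepted during the grace period.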