I'm building a TCP Server using Netty.
Is there any way to persist the connected client's session data while its channel exists?
For example, when a client connects to the server, I need to create a class instance for it and reuse that instance in different ways whenever the client sends messages.
Something like the code below:
// this is called when the client connects to the server
public void channelActive(final ChannelHandlerContext ctx) {
    ctx.pipeline().get(SslHandler.class).handshakeFuture().addListener(
        new GenericFutureListener<Future<Channel>>() {
            public void operationComplete(Future<Channel> future) throws Exception {
                // I need to create the class instance when the
                // client connects to the server
                ClientData clientData = new ClientData(ctx.channel());
                channels.add(ctx.channel());
            }
        }
    );
}
// this is called when the server receives a message from the connected client
public void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
    if ("update".equals(msg)) {
        // then I need to retrieve the data created
        // in the channelActive method.
        clientData().update();
    }
}
While browsing for solutions, I found a few examples where the developer used a cache service (like Memcached or Redis) to store and retrieve the data related to the connected client.
But I would like to solve this without depending on an external process.
Is there any way to achieve this? Any advice on the subject would be appreciated.
Thank you
You should use AttributeMap.attr(AttributeKey<T> key), which ChannelHandlerContext inherits from AttributeMap:
Storing stateful information
AttributeMap.attr(AttributeKey) allow you to store and access stateful information that is related with a handler and its context. Please refer to ChannelHandler to learn various recommended ways to manage stateful information. [1]
[1] http://netty.io/4.0/api/io/netty/channel/ChannelHandlerContext.html
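For example, a minimal sketch of that approach for the ClientData case above (Netty 4.x; the handler and key names are made up, and the SSL handshake listener is omitted for brevity):
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.AttributeKey;

public class ClientDataHandler extends SimpleChannelInboundHandler<String> {

    // one key per piece of per-connection state you want to attach to the channel
    private static final AttributeKey<ClientData> CLIENT_DATA =
            AttributeKey.valueOf("clientData");

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // store the per-connection state on the channel itself
        ctx.channel().attr(CLIENT_DATA).set(new ClientData(ctx.channel()));
        ctx.fireChannelActive();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        if ("update".equals(msg)) {
            // retrieve the state created in channelActive
            ClientData clientData = ctx.channel().attr(CLIENT_DATA).get();
            clientData.update();
        }
    }
}
Because the data is attached to the Channel rather than to a particular handler, any handler in the pipeline can read it back while the connection is open, and it goes away with the channel once the client disconnects.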
Current Solution
I have a Java server (Tomcat) setup issue that I'm hoping someone can provide some guidance on. Currently my web application runs on a single server with a Java backend on Tomcat 8.5. To handle WebSocket connections, I keep a Map of all the javax.websocket.Session objects passed to the onOpen() method.
#ServerEndpoint("/status")
public class StatusMessenger
{
private static ConcurrentHashMap<String, Session> sessions = new ConcurrentHashMap();
#OnOpen
public void onOpen(Session session) throws Exception
{
String sessionId = session.getRequestParameterMap().get("sessionId").get(0);
sessions.put(session.getId(), session);
}
My application only broadcasts messages to all users, so the broadcast() in my code simply loops through sessions.values() and sends the message through each javax.websocket.Session.
public static void broadcast(String event, String message)
{
    for (Session session: sessions.values())
    {
        // send the message
    }
}
I'm not even sure that's the correct way to handle Websockets in Tomcat, but it's worked for me for years, so I assume it's acceptable.
The Problem
I now want to horizontally scale out my application on AWS to multiple servers. For the most part my application is stateless, and I store the regular HTTP session information in the database. My problem is this static Map of javax.websocket.Session: it's not stateless, and there's a different Map on each server, each holding only its own sessions.
In my application, the server code in certain situations needs to broadcast a message to all the users. These events may happen on any server in this multi-server setup. The event triggers the broadcast() method, which loops through the javax.websocket.Sessions. However, it will only loop through the sessions in its own Map.
How do I get the multi-server application to broadcast this message to all websocket connections stored across all the servers in the setup? The application works fine on a single server (obviously) because there's only one list of websocket sessions. In other words, how do I write a stateless application that needs to store the websocket connections so it can communicate with them later?
I found 2 alternative solutions for this:
1. In my load balancer I put a rule to route all paths under /{my websocket server path} to one server, so that all the Sessions end up on the same server.
2. Use a 3rd-party web push service like Pusher (http://pusher.com); a rough server-side sketch is below.
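For option 2, the server-side trigger might look roughly like this, assuming the pusher-http-java library (the credentials, channel and event names are placeholders):
import com.pusher.rest.Pusher;
import java.util.Collections;

public class PushBroadcaster {

    // placeholder credentials from the Pusher dashboard
    private static final Pusher PUSHER = new Pusher("APP_ID", "API_KEY", "API_SECRET");

    public static void broadcast(String event, String message) {
        // every server instance can call this; Pusher delivers to all connected browsers,
        // so no server has to hold the websocket sessions itself
        PUSHER.trigger("status", event, Collections.singletonMap("message", message));
    }
}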
I need:
A client, which communicates with the front end, which in turn communicates with 3 file servers.
How should I go about doing this? It needs to use RMI, since this is for distributed systems.
I also need to monitor all three file servers.
From what I understand, I need to establish an RMI registry, but how do I register three concurrent servers within one registry?
Okay, so am I right in thinking I'd have the following: a server interface, a server implementation, a master server which creates the three servers (with unique names), and finally a client?
The 'master server' needs to create a Registry on its own localhost, bind itself to the Registry so the slave servers can find it, and export a remote interface that lets the servers register themselves with it.
The master server must do the binding to this Registry on behalf of the slaves, as you can't bind to a remote Registry. But in fact the slaves don't need to be bound to the Registry at all, only registered with the master.
The master needs to export a second remote interface that provides the API to the client, which provides the upload API and whose implementation performs the balancing act. I would keep this interface separate from the interface used by the slaves, both for security reasons and for simplicity: you don't need clients trying to be slaves, or worrying about what the slave-relevant methods in the remote interface are.
All these servers and registries can run on port 1099.
The slaves are presumably multiple instances of the same service, so they all use a common remote interface. This interface provides the upload-to-slave API, and it also needs to allow each slave to provide the knowledge about how full each slave is, possibly as a return value from the upload method, or else as a query method.
Quick sketch:
public interface UploadMaster extends Remote
{
    void upload(String name, byte[] contents) throws IOException, RemoteException;
}

public interface LoadBalancingMaster extends Remote
{
    void register(Slave slave) throws RemoteException;
    void unregister(Slave slave) throws RemoteException;
}

public interface Slave extends Remote
{
    /** @return the number of files now uploaded to this slave. */
    int upload(String name, byte[] contents) throws IOException, RemoteException;
    int getFileCount() throws RemoteException;
}
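A rough skeleton of the master tying these interfaces together might look like this (a sketch only; the class name, the single combined implementation, and the least-loaded selection are illustrative choices, not part of the original answer):
import java.io.IOException;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class Master implements UploadMaster, LoadBalancingMaster {

    private final List<Slave> slaves = new CopyOnWriteArrayList<>();

    public void register(Slave slave) throws RemoteException {
        slaves.add(slave);
    }

    public void unregister(Slave slave) throws RemoteException {
        slaves.remove(slave);
    }

    public void upload(String name, byte[] contents) throws IOException, RemoteException {
        // naive balancing: send the file to the slave currently holding the fewest files
        Slave target = null;
        int fewest = Integer.MAX_VALUE;
        for (Slave slave : slaves) {
            int count = slave.getFileCount();
            if (count < fewest) {
                fewest = count;
                target = slave;
            }
        }
        if (target == null) {
            throw new RemoteException("No slaves registered");
        }
        target.upload(name, contents);
    }

    public static void main(String[] args) throws Exception {
        Master master = new Master();
        // create a local registry on 1099, export the master and publish its stub
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("master", UnicastRemoteObject.exportObject(master, 0));
    }
}
Each slave would then look up "master" in the registry, export itself with UnicastRemoteObject.exportObject, and call register(...) with its own stub.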
I hope this is homework. RMI is a poor choice for file transfer, as it bundles up the entire argument list into memory at both ends, rather than providing a streaming interface.
I am required to make a client-server application as my project submission for university finals.
I have figured out how I would write the server, but I am a bit confused by the situation I am facing.
The server is supposed to support only one defined protocol (represented by a Protocol interface) and to serve only clients that speak that protocol. To test the functionality of the server, I have written an implementation that supports HTTP so that I can quickly test the server from a browser, but there is one thing that is really confusing me.
I have defined the server as:
public interface Server {
    // Methods...
    public void start() throws Exception;
    public Protocol getProtocol();
}
The base implementation of server does this:
public class StandardServer implements Server {
    /* Implementations */
    public synchronized final void start() throws Exception {
        try {
            while (true) {
                Socket socket = serverSocket.accept();
                // Use the protocol to handle the request
                getProtocol().handshake(socket);
            }
        } catch (IOException ex) {
            logger.error(ex);
        }
    }
}
I am not sure whether it is really necessary to do it this way, as I am certain that there are better approaches.
What I have considered so far is:
Synchronize the getProtocol() method.
Make the implementation of Protocol a thread and then use it to handle requests.
Spawn a thread when the client connects and pass in the protocol object to that thread (roughly sketched below).
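A minimal sketch of that third option, assuming the Protocol.handshake(Socket) call used above and the Server interface as defined (the class name and pool size are arbitrary):
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadedServer implements Server {

    private final ServerSocket serverSocket;
    private final Protocol protocol;
    // bounded pool instead of one raw thread per client
    private final ExecutorService workers = Executors.newFixedThreadPool(50);

    public ThreadedServer(ServerSocket serverSocket, Protocol protocol) {
        this.serverSocket = serverSocket;
        this.protocol = protocol;
    }

    public void start() throws Exception {
        while (true) {
            final Socket socket = serverSocket.accept();
            // hand the connection off so the accept loop stays free
            workers.submit(() -> {
                try {
                    protocol.handshake(socket);
                } catch (Exception ex) {
                    // log and drop this connection only
                }
            });
        }
    }

    public Protocol getProtocol() {
        return protocol;
    }
}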
What would be good ways to do this, considering that the server would be getting a decent number of requests per second?
Any source code help/reference would be highly appreciated.
P.S:
I am not implementing an HTTP Server.
There would be multiple implementations of Server
I would like to make a kind of logging proxy in netty. The goal is to be able to have a web browser make HTTP requests to a netty server, have them be passed on to a back-end web server, but also be able to take certain actions based on HTTP specific things.
There are a couple of useful Netty examples, HexDumpProxy (which does the proxying part, agnostic to the protocol), and I've taken just a bit of code from HttpSnoopServerHandler.
My code looks like this right now:
HexDumpProxyInboundHandler can be found at http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/proxy/HexDumpProxyInboundHandler.html
//in HexDumpProxyPipelineFactory
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = pipeline(); // Note the static import.
    p.addLast("handler", new HexDumpProxyInboundHandler(cf, remoteHost, remotePort));
    p.addLast("decoder", new HttpRequestDecoder());
    p.addLast("handler2", new HttpSnoopServerHandler());
    return p;
}
//HttpSnoopServerHandler
public class HttpSnoopServerHandler extends SimpleChannelUpstreamHandler {
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        System.out.println(request.getUri());
        //going to do things based on the URI
    }
}
Unfortunately messageReceived in HttpSnoopServerHandler never gets called - it seems like HexDumpProxyInboundHandler consumes all the events.
How can I have two handlers, where one of them requires a decoder but the other doesn't (I'd rather have HexDumpProxy as it is, where it doesn't need to understand HTTP, it just proxies all connections, but my HttpSnoopHandler needs to have HttpRequestDecoder in front of it)?
I've not tried it but you could extend HexDumpProxyInboundHandler and override messageReceived with something like
super.messageReceived(ctx, e);
ctx.sendUpstream(e);
Alternatively you could modify HexDumpProxyInboundHandler directly so that the last thing messageReceived does is call super.messageReceived(ctx, e).
This would only work for inbound data from the client. Data from the service you're proxying would still be passed through without your code seeing it.
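Put together, the first suggestion might look roughly like this (a sketch against the Netty 3.2 example classes; the subclass name is made up):
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.socket.ClientSocketChannelFactory;

public class SnoopingHexDumpProxyInboundHandler extends HexDumpProxyInboundHandler {

    public SnoopingHexDumpProxyInboundHandler(ClientSocketChannelFactory cf,
                                              String remoteHost, int remotePort) {
        super(cf, remoteHost, remotePort);
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        // let the proxy forward the raw bytes to the back-end as before
        super.messageReceived(ctx, e);
        // then pass the same event on so HttpRequestDecoder/HttpSnoopServerHandler see it
        ctx.sendUpstream(e);
    }
}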
I have an RMI client that connects to some RMI server just to let it know it can use this new client.
Can I pass directly some Remote object so that:
serverRemoteObject.registerClient(theClientRemoteObjectTheServerShouldUse);
will actually give the server an object it can use without having to connect to my client itself?
The following question says it is possible, but no real example was given:
Is it possible to use RMI bidirectional between two classes?
Andrew
Yes, you can. This is exactly how callbacks work with RMI. You send an object across to the server, and when the server invokes a method on that object, it is executed in the "client" JVM as opposed to on the server. Look into the UnicastRemoteObject.exportObject method to export any object that implements a Remote interface as a remote object whose stub can be passed to your server.
interface UpdateListener extends Remote {
    public void handleUpdate(Object update) throws RemoteException;
}

class UpdateListenerImpl implements UpdateListener {
    public void handleUpdate(Object update) throws RemoteException {
        // do something
    }
}

// somewhere in your client code
final UpdateListener listener = new UpdateListenerImpl();
UnicastRemoteObject.exportObject(listener, 0);
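On the server side, the registerClient call from the question could then be used roughly like this (a sketch; the ClientRegistry names are illustrative, not an existing API):
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// hypothetical server-side remote interface corresponding to serverRemoteObject above
interface ClientRegistry extends Remote {
    void registerClient(UpdateListener client) throws RemoteException;
}

class ClientRegistryImpl implements ClientRegistry {

    private final List<UpdateListener> clients = new CopyOnWriteArrayList<>();

    public void registerClient(UpdateListener client) throws RemoteException {
        // "client" is a stub; calls on it travel back to the client JVM
        clients.add(client);
    }

    void broadcast(Object update) {
        for (UpdateListener client : clients) {
            try {
                client.handleUpdate(update);   // executes in the client's JVM
            } catch (RemoteException e) {
                clients.remove(client);        // drop clients that are no longer reachable
            }
        }
    }
}
The object the server receives is a stub, so each handleUpdate call goes back over RMI to the client without the server opening anything itself; the update argument just needs to be serializable.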