How to ignore messages from disconnected channel - java

I'm implementing a simple Netty server for a multiplayer game, mostly to figure out how Netty works.
I test the server via telnet. What I've done is broadcast the messages to all channels, and that works smoothly. I also remove channels from the map on the close event, which is fine.
The problem is that if one of the clients disconnects unexpectedly, messageReceived is called for that disconnected channel before the close callback fires.
How can I properly ignore a message that comes from a disconnected client?
I use a StringBuffer in messageReceived, but in that case StringBuffer.toString() is not a proper string either. In the end the disconnected channel broadcasts a pointless message to the other channels and to itself, and when the receiver channel is itself, the write throws a "Connection reset by peer" exception,
which is normal because that channel is no longer available at that moment.
Here is the code:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    System.out.println();
    System.out.println("------------------");
    Channel current = e.getChannel();
    System.out.println("SenderChannel:" + current.getId());
    if (!current.isOpen())
        System.out.println("Not Open");
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    StringBuffer sbs = new StringBuffer();
    while (buf.readable()) {
        sbs.append((char) buf.readByte());
    }
    String s = sbs.toString();
    System.out.println(s);

    String you = "You:" + s;
    String other = "Other:" + s;
    byte[] uResponse = you.getBytes();
    byte[] otherResponse = other.getBytes();

    Iterator iterator = channelList.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry pairs = (Map.Entry) iterator.next();
        Integer key = (Integer) pairs.getKey();
        Channel c = (Channel) pairs.getValue();
        System.out.println("ReceiverChannel:" + c.getId());
        if (key != current.getId())
            c.write(ChannelBuffers.wrappedBuffer(otherResponse));
        else
            c.write(ChannelBuffers.wrappedBuffer(uResponse));
    }
}

@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    channelList.remove(ch.getId());
    System.out.println();
    System.out.println("*****************");
    System.out.println("DisconnectEvent:" + ch.getId());
    System.out.println("*****************");
    System.out.println();
    ch.close();
}

You can't solve the problem in the manner that you would like. If there's a network problem then technically the sender could disconnect at any time, for example:
- as soon as the thread enters messageReceived
- while you're iterating through channelList
- while you're iterating through channelList, but after you've echoed back to the sender
- after you've broadcast the message
Netty can't raise the disconnected event while messageReceived is processing because you're running in the thread that will raise the event (unless you have a non-ordered execution handler in your pipeline). The correct solution really depends on your application. If the broadcast results in all the other receivers responding it's probably better / easier to have the server suppress any messages destined for a client that's no longer connected.
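A minimal sketch of that suppression, assuming the channelList map and the current/uResponse/otherResponse variables from the question's code; the check only narrows the race, it cannot eliminate it, so a failed write still has to be tolerated:

Iterator<Map.Entry<Integer, Channel>> it = channelList.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<Integer, Channel> entry = it.next();
    Channel c = entry.getValue();
    if (!c.isConnected()) {
        it.remove();   // drop the stale channel instead of broadcasting to it
        continue;
    }
    byte[] response = entry.getKey().equals(current.getId()) ? uResponse : otherResponse;
    c.write(ChannelBuffers.wrappedBuffer(response));
}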
Also, if you're really going to use strings then take a look at StringEncoder / StringDecoder. There's no guarantee in your code that the message event buffer contains a complete string.
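For example, a Netty 3.x pipeline along these lines frames the stream on line endings so each messageReceived sees one complete String (GameServerHandler is just a placeholder for your own handler):

import org.jboss.netty.channel.*;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.codec.string.StringDecoder;
import org.jboss.netty.handler.codec.string.StringEncoder;
import org.jboss.netty.util.CharsetUtil;

public class ChatPipelineFactory implements ChannelPipelineFactory {
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = Channels.pipeline();
        // split the byte stream on line endings so no partial strings reach the handler
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
        pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
        pipeline.addLast("handler", new GameServerHandler()); // placeholder handler name
        return pipeline;
    }
}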

Just put a try/catch around each send. If one of them fails, close the corresponding channel.
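In Netty 3 the write itself is asynchronous, so a plain try/catch around write() may not see the failure; a rough equivalent of this suggestion is to attach a listener to the write future and close the channel there:

ChannelFuture future = c.write(ChannelBuffers.wrappedBuffer(otherResponse));
future.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture f) {
        if (!f.isSuccess()) {
            // the send failed (e.g. connection reset), so drop this channel;
            // channelDisconnected will then remove it from the map
            f.getChannel().close();
        }
    }
});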

If this is for a multiplayer game server, it might be better to use an existing Netty game server solution like java game server. Disconnects become events which get sent to the session, and since it is event driven, you could write your own handler to decide whether or not to receive any more events on the same session. Since events are queued in FIFO order, if a disconnect happens you need not go ahead with subsequent broadcasts.

I am not a Java developer, but from the socket point of view this data was already in the buffer, or already sent, before the user disconnected. So while you are receiving, the user is still connected, and by the time receiving completes the user has already disconnected. So I think the best way to prevent this is to check whether the user is still connected after each receive.
In C# I personally use this code to check if the user is still connected:
if (client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Receive(checkConn, SocketFlags.Peek) == 0)
        return false;
}
return true;
I am not sure about Java and Netty (and whether your connection is TCP), but this is what I use, and it should be easy to convert to Java.
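Plain java.net sockets don't expose an equivalent of SocketFlags.Peek, but with a non-blocking NIO SocketChannel you can get a similar check; a rough sketch (not a drop-in translation, and unlike a peek, any byte it reads is consumed):

static boolean stillConnected(SocketChannel channel) throws IOException {
    ByteBuffer probe = ByteBuffer.allocate(1);
    int n = channel.read(probe);   // non-blocking: returns immediately
    if (n == -1) {
        return false;              // the peer has closed the connection
    }
    // a byte read here is consumed and must be handed
    // to the normal message-processing path by the caller
    return true;
}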

Related

Java Server - Sending packets out incorrectly?

I currently have a TCP server built in Java, and I'm sending messages/packets out to clients using their socket's OutputStream:
// Send all player's information to everyone else
outerPlayerIter = players.iterator();
while (outerPlayerIter.hasNext()) {
    Player outerPlayer = outerPlayerIter.next();
    Iterator<Player> innerPlayerIter = players.iterator();
    while (innerPlayerIter.hasNext()) {
        Player innerPlayer = innerPlayerIter.next();
        boolean isYou = false;
        if (innerPlayer.equals(outerPlayer)) isYou = true;

        // Send innerPlayer's info to outerPlayer
        Thread.sleep(100);
        dataBuffer.clearBuffer();
        dataBuffer.writeByte(Msgs.mm_toclient.MES_SENDPLAYERINFO);
        dataBuffer.writeBool(isYou);
        dataBuffer.writeBool(innerPlayer.getIsHost());
        dataBuffer.writeString(innerPlayer.getName());
        dataBuffer.writeString(innerPlayer.getPublicIP().getHostAddress());
        dataBuffer.writeShort((short) innerPlayer.getUdpPort());

        outerPlayer.getSocket().getOutputStream().write(dataBuffer.getByteArray());
        outerPlayer.getSocket().getOutputStream().flush();
    }
}
However, sometimes the clients don't appear to receive all the messages. It seems I can't send multiple messages out over one socket at virtually the same time.
One way to temporarily fix this was to sleep before sending another packet out, but I'm not sure why that should be needed.
Am I doing something wrong in how I'm writing the packets out to be sent? What can be fixed so that multiple packets sent in quick succession are received correctly, without sleeping?
It might be that the client closes the socket too quickly, before the communication has actually finished. Could you try bumping up the Thread.sleep value, or, if you use any kind of timing on the client side, try bumping that up as well.

How to keep data with each channel on NIO Server

I have a Java NIO server which receives data from clients.
When a channel is ready for reading, i.e. key.isReadable() returns true, read(key) is called to read the data.
Currently I am using a single read buffer for all channels. In read() I clear the buffer, read into it, and finally copy it into a byte array, assuming that I will get all the data in one shot.
But suppose I do not get the complete data in one shot (I have special characters at the end of the data so I can detect the end).
Problem:
How do I keep this partial data with the channel, or how do I deal with the partial-read problem in general?
I read somewhere that attachments are not good.
Take a look at the Reactor pattern. Here is a link to a basic implementation by Professor Doug Lea:
http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
The idea is to have a single reactor thread which blocks on the Selector call. Once there are IO events ready, the reactor thread dispatches the events to the appropriate handlers.
In the PDF above, there is an inner class Acceptor within Reactor which accepts new connections.
The author uses a single handler for read and write events and maintains the state of this handler. I prefer to have separate handlers for reads and writes, but this is not as easy to work with as the 'state machine' approach. There can be only one attachment per event, so some kind of injection is needed to switch read/write handlers.
To maintain state between subsequent reads/writes you will have to do a couple of things:
- Introduce a custom protocol which tells you when a message has been fully read
- Have a timeout or cleanup mechanism for stale connections
- Maintain client-specific sessions
So, you can do something like this:
public class Reactor implements Runnable {

    Selector selector = Selector.open();
    ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();

    public Reactor(int port) throws IOException {
        serverSocketChannel.socket().bind(new InetSocketAddress(port));
        serverSocketChannel.configureBlocking(false);
        // let Reactor handle new connection events
        registerAcceptor();
    }

    /**
     * Registers Acceptor as handler for new client connections.
     *
     * @throws ClosedChannelException
     */
    private void registerAcceptor() throws ClosedChannelException {
        SelectionKey selectionKey0 = serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
        selectionKey0.attach(new Acceptor());
    }

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            startReactorLoop();
        }
    }

    private void startReactorLoop() {
        try {
            // wait for new events for each registered or new client
            selector.select();
            // get selection keys for pending events
            Set<SelectionKey> selectedKeys = selector.selectedKeys();
            Iterator<SelectionKey> selectedKeysIterator = selectedKeys.iterator();
            while (selectedKeysIterator.hasNext()) {
                // dispatch event to the handler attached to the given key
                dispatch(selectedKeysIterator.next());
                // remove dispatched key from the collection
                selectedKeysIterator.remove();
            }
        } catch (IOException e) {
            // TODO add handling of this exception
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey interestedEvent) {
        if (interestedEvent.attachment() != null) {
            EventHandler handler = (EventHandler) interestedEvent.attachment();
            handler.processEvent();
        }
    }

    private class Acceptor implements EventHandler {

        @Override
        public void processEvent() {
            try {
                SocketChannel clientConnection = serverSocketChannel.accept();
                if (clientConnection != null) {
                    registerChannel(clientConnection);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /**
         * Save the Channel - key association - in a Map perhaps.
         * This is required for subsequent/partial reads/writes.
         */
        private void registerChannel(SocketChannel clientChannel) {
            // notify injection mechanism of new connection (so it can activate a Read Handler)
        }
    }
}
Once the read event is handled, notify the injection mechanism that the write handler can be injected.
New instances of the read and write handlers are created by the injection mechanism once, when a new connection becomes available. The injection mechanism switches handlers as needed. Lookup of the handlers for each Channel is done from the Map that is filled at connection acceptance by the registerChannel() method.
Read and write handlers have ByteBuffer instances, and since each Socket Channel has its own pair of handlers, you can now maintain state between partial reads and writes.
Two tips to improve performance:
- Try to do the first read immediately when the connection is accepted. Only if you don't read enough data, as defined by the header in your custom protocol, register the channel's interest in read events.
- Try to do the write first without registering interest in write events, and only if you don't write all the data, register interest in write.
This will reduce the number of Selector wakeups.
Something like this:
SocketChannel socketChannel;
byte[] outData;
final static int MAX_OUTPUT = 1024;
ByteBuffer output = ByteBuffer.allocate(MAX_OUTPUT);

// if the message was not written fully
if (socketChannel.write(output) < messageSize()) {
    // register interest for write events
    SelectionKey selectionKey = socketChannel.register(selector, SelectionKey.OP_WRITE);
    selectionKey.attach(writeHandler);
    selector.wakeup();
}
Finally, there should be a timed task which checks whether connections are still alive / whether SelectionKeys have been cancelled. If a client breaks the TCP connection, the server will usually not know about it. As a result, a number of event handlers will stay in memory, bound as attachments to stale connections, which results in a memory leak.
This is the reason why you may hear that attachments are not good, but the issue can be dealt with.
Two simple ways to deal with it (a sketch of the second follows):
- TCP keep-alive could be enabled
- a periodic task could check the timestamp of the last activity on the given channel; if it has been idle for too long, the server should terminate the connection
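A sketch of that periodic check, assuming a lastActivity map that the read handler updates on every successful read (the names and the 30-second limit are illustrative):

final Map<SocketChannel, Long> lastActivity = new ConcurrentHashMap<SocketChannel, Long>();
ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
cleaner.scheduleAtFixedRate(new Runnable() {
    public void run() {
        long now = System.currentTimeMillis();
        for (Map.Entry<SocketChannel, Long> e : lastActivity.entrySet()) {
            if (now - e.getValue() > 30000L) {   // idle too long: assume the client is gone
                try {
                    e.getKey().close();          // closing the channel also cancels its SelectionKey
                } catch (IOException ignored) {
                }
                lastActivity.remove(e.getKey());
            }
        }
    }
}, 30, 30, TimeUnit.SECONDS);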
There's an ancient and very inaccurate NIO blog from someone at Amazon where it is wrongly asserted that key attachments are memory leaks. Complete and utter BS. Not even logical. This is also the one where he asserts you need all kinds of supplementary queues. Never had to do that yet, in about 13 years of NIO.
What you need is a ByteBuffer per channel, or possibly two, one for read and one for write. You can store a single one as the attachment itself: if you want two, or have other data to store, you need to define yourself a Session class that contains both buffers and whatever else you want to associate with the channel, for example client credentials, and use the Session object as the attachment.
You really can't get very far in NIO with a single buffer for all channels.
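For example, a Session object stored as the key attachment might look like this (serverChannel and selector are assumed to already exist; the field names are illustrative):

class Session {
    final ByteBuffer readBuffer  = ByteBuffer.allocate(8192);
    final ByteBuffer writeBuffer = ByteBuffer.allocate(8192);
    String clientId;   // whatever per-connection state you need, e.g. credentials
}

// when accepting a connection:
SocketChannel client = serverChannel.accept();
client.configureBlocking(false);
SelectionKey key = client.register(selector, SelectionKey.OP_READ, new Session());

// when the key becomes readable:
Session session = (Session) key.attachment();
int n = ((SocketChannel) key.channel()).read(session.readBuffer);  // partial data accumulates here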

Mock XMPP Server with Mina works only part of the time

I've created a mock XMPP server that processes PLAIN authentication stanzas. I'm able to use Pidgin and go through the entire session creation, to the point where Pidgin thinks the user is on an actual XMPP server and is sending regular pings.
However, it seems like not all messages are processed correctly, and when I do get a successful login, it was just luck. I'm talking maybe 1/10th of the time I actually get connected. The other times it seems like Pidgin missed a message, or I dumped messages too fast onto the transport.
If I enable Pidgin's XMPP Console plugin, the first connection is ALWAYS successful, but a second user fails to make it through, typically dying when Pidgin requests Service Discovery.
My Mina code is something like this:
try
{
    int PORT = 20600;
    IoAcceptor acceptor = null;
    acceptor = new NioSocketAcceptor();
    acceptor.getFilterChain().addFirst("codec", new ProtocolCodecFilter(new ProtocolCodecFactoryImpl()));
    acceptor.getFilterChain().addLast("executor", new ExecutorFilter(IoEventType.MESSAGE_RECEIVED));
    acceptor.setHandler(new SimpleServerHandler());
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
    acceptor.bind(new InetSocketAddress(PORT));
}
catch (Exception ex)
{
    System.out.println(ex.getMessage());
}
and the SimpleServerHandler is responsible for message/stanza processing and session creation. The messageReceived function looks like:
@Override
public void messageReceived(IoSession session, Object msg) throws Exception
{
    String str = msg.toString();
    System.out.println("MESSAGE: " + str);
    process(session, str);
}
and finally, process is in charge of parsing the message out and writing the response. I do use synchronized on my write:
public void sessionWrite(IoSession session, String buf)
{
    synchronized (session)
    {
        WriteFuture future = session.write(buf);
    }
}
I have omitted my processing code for brevity, but it simply looks for certain pieces of data, crafts a response and calls sessionWrite(...)
My question is, will this pattern work? And if not, should I consider shoving received messages in a Queue and simply processing the Queue from say a Timer?
It turns out, Pidgin would send two IQ stanzas, but I wasn't handling them correctly. My decoder now determines the end of a stanza and only writes a stanza to the buffer I read from.
Works like a dream now!
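For reference, a simplified sketch of that kind of accumulating decoder using MINA's CumulativeProtocolDecoder; the "</iq>" scan is only illustrative, since real XMPP framing needs an XML-aware parser:

import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class StanzaDecoder extends CumulativeProtocolDecoder {
    @Override
    protected boolean doDecode(IoSession session, IoBuffer in, ProtocolDecoderOutput out) throws Exception {
        int start = in.position();
        byte[] bytes = new byte[in.remaining()];
        in.get(bytes);
        String buffered = new String(bytes, "UTF-8");
        int end = buffered.indexOf("</iq>");                    // illustrative end-of-stanza marker
        if (end < 0) {
            in.position(start);                                 // stanza not complete yet: wait for more bytes
            return false;
        }
        String stanza = buffered.substring(0, end + "</iq>".length());
        out.write(stanza);                                      // emit exactly one complete stanza
        in.position(start + stanza.getBytes("UTF-8").length);   // leave trailing bytes for the next pass
        return true;                                            // consumed something; MINA calls doDecode again
    }
}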

Selector.select() starts an infinite loop

I have a minimal JMS provider, which sends topic messages over UDP and queue messages over TCP.
I use a single selector to handle UDP and TCP selection keys (registering both SocketChannels and DatagramChannels).
My problem is: if I only send and receive UDP packets, everything goes well, but as soon as I start writing on a TCP socket (using Selector.wakeup() to have the selector do the actual writing), the selector enters an infinite loop, returning an empty selection key set, and eating 100% CPU.
The code of the main loop (somewhat simplified) is:
public void run() {
    while (!isInterrupted()) {
        try {
            selector.select();
        } catch (final IOException ex) {
            break;
        }
        final Iterator<SelectionKey> selKeys = selector.selectedKeys().iterator();
        while (selKeys.hasNext()) {
            final SelectionKey key = selKeys.next();
            selKeys.remove();
            if (key.isValid()) {
                if (key.isReadable()) {
                    this.read(key);
                }
                if (key.isConnectable()) {
                    this.connect(key);
                }
                if (key.isAcceptable()) {
                    this.accept(key);
                }
                if (key.isWritable()) {
                    this.write(key);
                    key.cancel();
                }
            }
        }
        synchronized (waitingToWrite) {
            for (final SelectableChannel channel : waitingToWrite) {
                try {
                    channel.register(selector, SelectionKey.OP_WRITE);
                } catch (ClosedChannelException ex) {
                    // TODO: reopen
                }
            }
            waitingToWrite.clear();
        }
    }
}
And for a UDP send (TCP send is similar):
public void udpSend(final String xmlString) throws IOException {
    synchronized (outbox) {
        outbox.add(xmlString);
    }
    synchronized (waitingToWrite) {
        waitingToWrite.add(dataOutChannel);
    }
    selector.wakeup();
}
So, what's wrong here? Should I use 2 different selectors to handle UDP and TCP packets?
I suggest you check the return value of the select() method.
try {
    if (selector.select() == 0) continue;
} catch (final IOException ex) {
    break;
}
Did you try debugging to see where the loop is?
Edit:
I recommend that instead of calling remove() on the iterator, you call selectedKeys.clear() after you iterate over them. It is possible that the implementation of the iterator does not remove the key from the underlying set.
Check that you are not registering OP_CONNECT on a connected channel.
Problem went away after upgrading to Java 1.6.0_22.
You are probably getting an IOException and ignoring it in the empty catch block. Never do that. And just continuing after an IOException is practically never the correct action. The only exception to that rule I can think of offhand is a SocketTimeoutException, and you're in non-blocking mode so you won't be getting those, and you don't get them on selectors anyway. I would want to see the content of your methods that handle connect, accept, read, and write.
Modify your design to have one thread per incoming network connection.
The selector should be used when you're using one thread to process incoming messages on multiple TCP sockets. You register each socket with the selector and then select(), which blocks until there is data available on one of them. Then you loop through each key and process the waiting data. This is the method I've always used when writing C code, and it will work, but I don't think it's the best way to do it in Java.
Java has great native thread support, which C does not. I think it makes a lot more sense to have one thread per TCP socket and not to use selectors at all. If you just do a read operation on a socket, the thread will block until data arrives or the socket is closed. This is effectively the same thing as selecting with only one registered channel.
If you want to get this to work with only one thread, you should use the selector only for TCP sockets where you want incoming connections. This way, the only time the call to select() will return is when there is incoming data waiting on one of the sockets. That thread will be asleep at all other times, and no other operation will wake it up.
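A minimal sketch of that thread-per-connection approach (port and handle() are placeholders):

ServerSocket server = new ServerSocket(port);
while (true) {
    final Socket client = server.accept();         // blocks until a client connects
    new Thread(new Runnable() {
        public void run() {
            try {
                InputStream in = client.getInputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) { // blocks until data arrives or the peer closes
                    handle(buf, n);                // placeholder for your message processing
                }
            } catch (IOException e) {
                // connection dropped
            } finally {
                try { client.close(); } catch (IOException ignored) {}
            }
        }
    }).start();
}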

Java Sockets and Dropped Connections

What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message; but unfortunately Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using simple Java Socket connections. The relevant code is:
Socket socket = socket(); // returns an reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if a connection is dropped while a message is being sent, or right before, Socket.getOutputStream().write() still finishes successfully. I expect that's because the TCP send window isn't exhausted yet.
Is there a way that I can tell for sure whether a packet actually got onto the network or not? I experimented with the following two solutions:
- Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read that fails if the connection was dropped, but otherwise hangs for 250ms.
- Set the TCP send buffer size (e.g. Socket.setSendBufferSize()) to the message's binary size.
Both methods work, but they significantly degrade the quality of service; throughput drops from 100 messages/second to about 10 messages/second at most.
Any suggestions?
UPDATE:
Challenged by multiple answers questioning whether the described behavior is possible, I constructed "unit" tests of it. Check out the test cases at Gist 273786.
Both unit tests have two threads, a server and a client. The server closes its socket while the client is sending data, and no IOException is thrown. Here is the main method:
public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while the server has already
                // closed the connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Sent");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();

    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample in the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine whether a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response or a graceful socket shutdown. End of story - there really is no other way, and this is not Java specific - it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is you catch all IOExceptions (any of them indicate an error) and you perform a graceful socket shutdown:
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shutdown its socket
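A sketch of that handshake, assuming payload holds the bytes you just sent:

socket.getOutputStream().write(payload);
socket.getOutputStream().flush();
socket.shutdownOutput();                    // half-close: tells the peer we are done sending
InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // drain (and optionally inspect) whatever the peer still sends
}
socket.close();                             // read() returned -1: the peer has also shut down cleanly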
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means changing the packet layout (see the sketch below).
This way Apple will not drop a connection if an error happens, but will write a feedback code to the socket.
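A rough sketch of writing that enhanced (command = 1) frame; the field sizes are recalled from the legacy APNs binary interface, so double-check them against Apple's documentation, and identifier, expiry, deviceToken and payload are assumed to be defined:

ByteArrayOutputStream bytes = new ByteArrayOutputStream();
DataOutputStream frame = new DataOutputStream(bytes);
frame.writeByte(1);                    // command: "enhanced" notification
frame.writeInt(identifier);            // arbitrary id, echoed back in error responses
frame.writeInt(expiry);                // epoch seconds after which APNs may discard the message
frame.writeShort(deviceToken.length);  // device token length (32 bytes)
frame.write(deviceToken);
frame.writeShort(payload.length);      // JSON payload length
frame.write(payload);
socket.getOutputStream().write(bytes.toByteArray());
socket.getOutputStream().flush();
// on failure Apple writes back a short error response (command, status code and the
// identifier of the failed message) before closing, so you can tell which message was rejected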
If you're sending information to Apple using the TCP/IP protocol, you have to be receiving acknowledgements. However you stated:
"Apple doesn't return any acknowledgement whatsoever"
What do you mean by this? TCP/IP guarantees delivery, therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and you break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem, or is not handled properly by the device, then there is a problem. The solution is to fix the device's handling of duplicate push notifications: there's nothing you can do on the pushing side.
Comment Clarification/Question
OK. The first part of what you understand is your answer to the second part. Only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occur, an exception is raised). If there is no exception, the data you just tried to send is sent, guaranteed by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received where no exception is raised? The answer should be no.
Examples
Nice examples; this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
Looks like you are correct that without a protocol for confirmation there is no guarantee the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more of a convenience for the customer, but cannot be used to guarantee delivery and must be combined with other methods. I read this from the following source.
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
Seems like the answer is no on whether or not you can tell for sure. You may be able to use a packet sniffer like Wireshark to tell if it was sent, but this still won't guarantee it was received and sent to the device due to the nature of the service.
