I've created a mock XMPP server that handles SASL PLAIN authentication stanzas. I'm able to use Pidgin and go through the entire session creation, to the point where Pidgin thinks the user is on an actual XMPP server and is sending regular pings.
However, it seems like not all messages are processed correctly, and when I do get a successful login, it's just luck. Maybe one time in ten I actually get connected. The other times it seems like Pidgin missed a message, or I dumped messages onto the transport too fast.
If I enable Pidgin's XMPP Console plugin, the first connection is ALWAYS successful, but a second user fails to make it through, typically dying when Pidgin requests Service Discovery.
My Mina code is something like this:
try
{
    int PORT = 20600;
    IoAcceptor acceptor = new NioSocketAcceptor();
    acceptor.getFilterChain().addFirst("codec", new ProtocolCodecFilter(new ProtocolCodecFactoryImpl()));
    acceptor.getFilterChain().addLast("executor", new ExecutorFilter(IoEventType.MESSAGE_RECEIVED));
    acceptor.setHandler(new SimpleServerHandler());
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
    acceptor.bind(new InetSocketAddress(PORT));
}
catch (Exception ex)
{
    System.out.println(ex.getMessage());
}
and the SimpleServerHandler is responsible for message/stanza processing and session creation. The messageReceived function looks like:
@Override
public void messageReceived(IoSession session, Object msg) throws Exception
{
    String str = msg.toString();
    System.out.println("MESSAGE: " + str);
    process(session, str);
}
and finally, process is in charge of parsing the message out and writing the response. I do use synchronized on my write:
public void sessionWrite(IoSession session, String buf)
{
    synchronized (session)
    {
        WriteFuture future = session.write(buf);
    }
}
I have omitted my processing code for brevity, but it simply looks for certain pieces of data, crafts a response, and calls sessionWrite(...).
My question is: will this pattern work? And if not, should I consider shoving received messages into a queue and simply processing that queue from, say, a Timer?
It turns out Pidgin would send two IQ stanzas, but I wasn't handling them correctly. My decoder now determines the end of a stanza and only writes a complete stanza to the buffer I read from.
Works like a dream now!
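For anyone hitting the same issue, here is a minimal sketch of that kind of framing decoder (MINA 2's CumulativeProtocolDecoder). The depth counting is deliberately naive: it ignores quoting, CDATA, the XML declaration, and the never-closed <stream:stream> header, all of which a real XMPP framer has to handle:
import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.CumulativeProtocolDecoder;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;

public class StanzaDecoder extends CumulativeProtocolDecoder {
    @Override
    protected boolean doDecode(IoSession session, IoBuffer in,
                               ProtocolDecoderOutput out) throws Exception {
        int depth = 0;
        for (int i = in.position(); i < in.limit(); i++) {
            char c = (char) in.get(i);
            if (c == '<') {
                // "</foo>" closes an element, "<foo ...>" opens one
                boolean closing = i + 1 < in.limit() && in.get(i + 1) == '/';
                depth += closing ? -1 : 1;
            } else if (c == '/' && i + 1 < in.limit() && in.get(i + 1) == '>') {
                depth--; // "<foo/>": self-closing element
            } else if (c == '>' && depth == 0) {
                // a complete top-level stanza ends at i: hand exactly one off
                byte[] stanza = new byte[i + 1 - in.position()];
                in.get(stanza);
                out.write(new String(stanza, "UTF-8"));
                return true; // there may be another stanza left in the buffer
            }
        }
        return false; // incomplete stanza: wait for more bytes
    }
}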
I've read the Netty guide, but it does not explain much about ChannelFuture, and I find ChannelFuture a complex idea to apply.
What I am trying to do is write more messages to a context after its initial response, different from the typical request/response flow. I need a flow like this:
1. Client sends a request to the server (Netty).
2. Server sends a response with ctx.writeAndFlush(msg);
3. Server sends some more messages to that ctx after step 2 is complete.
The problem is that if I do something like this, the second write will not be sent:
ctx.writeAndFlush(response);
Message newMsg = createMessage();
ctx.writeAndFlush(newMsg); //will not send to client
Then I tried using ChannelFuture. It works, but I am not sure if it is logically correct:
ChannelFuture msgIsSent = ctx.writeAndFlush(response);
if (msgIsSent.isDone())
{
    Message newMsg = createMessage();
    ctx.writeAndFlush(newMsg); // this works
}
or should I use a ChannelFutureListener() instead?
ChannelFuture msgIsSent = ctx.writeAndFlush(response);
msgIsSent.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future)
    {
        Message newMsg = createMessage();
        ctx.writeAndFlush(newMsg);
    }
});
Will this also work?
Which one is the best-practice approach? Are there any potential problems with using method 2?
Of course, this also depends on your "protocol" (meaning, for instance, that if you use HTTP, sending two answers for the same request is not supported by the HTTP protocol). But let's say your protocol allows you to send multiple response parts:
Netty adds the messages to send to the pipeline, respecting their order.
So in your first example, I'm a bit surprised it does not work:
ctx.writeAndFlush(response);
Message newMsg = createMessage();
ctx.writeAndFlush(newMsg); // should send the message
However, this could be down to your protocol. For instance, the following could happen:
1. response is in the message queue to send
2. flush not yet done
3. newMsg is in the message queue to send
4. flush now comes, but the protocol does not support two messages at once, so only the first one is sent
So if your protocol requires the first message to be fully sent before the next one, then you have to wait for the first write to complete, doing something like:
ctx.writeAndFlush(response).addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) {
        if (future.isSuccess()) {
            Message newMsg = createMessage();
            ctx.writeAndFlush(newMsg);
        } else {
            // an error occurred; do perhaps something else
        }
    }
});
So your last proposal is the one (I just didn't create a ChannelFuture variable but added the listener directly on the result of writeAndFlush; the two are equivalent). Just take care that operationComplete does not mean the operation was a success: inside the listener the future is always done, so check isSuccess(), not isDone().
Try this:
ctx.channel().writeAndFlush(response);
Message newMsg = createMessage();
ctx.channel().writeAndFlush(newMsg);
Channel.write() always starts from the tail of the ChannelPipeline.
ChannelHandlerContext.write() starts from the current position of the ChannelHandler.
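A small sketch to make that concrete (Netty 4; the handler layout is made up for the illustration, and writing the same message twice is only to show the two paths):
ChannelPipeline p = ch.pipeline();
p.addLast("A", new ChannelOutboundHandlerAdapter()); // nearer the head (socket side)
p.addLast("in", new ChannelInboundHandlerAdapter() {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.write(msg);           // enters the outbound path at "in", so only "A" sees it
        ctx.channel().write(msg); // enters at the tail, so "B" sees it first, then "A"
    }
});
p.addLast("B", new ChannelOutboundHandlerAdapter()); // nearer the tail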
#2 looks better, but make sure to test whether the operation was successful; if not, use future.cause() to access the exception. Not that it will change the functionality, but you can shorten the code by adding the listener directly on the result of the write call, i.e. you don't need to declare the future variable since it will be provided in the callback.
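For example, with a Java 8 lambda the shortened form might look like this (the cast gives the lambda its target type):
ctx.writeAndFlush(response).addListener((ChannelFutureListener) future -> {
    if (future.isSuccess()) {
        ctx.writeAndFlush(createMessage());
    } else {
        future.cause().printStackTrace(); // the write failed; log it and react
    }
});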
I'm using this program to test a PULL socket with a ROUTER. I create/bind a ROUTER, then connect a PULL socket with an identity to it; the ROUTER then sends a message addressed specifically to that client using its identity (basic ZeroMQ enveloping).
Test Program
public static void main(String[] o) {
    ZContext routerCtx = new ZContext();
    Socket rtr = routerCtx.createSocket(ZMQ.ROUTER);
    rtr.setRouterMandatory(true);
    rtr.bind("tcp://*:5500");

    ZContext clientCtx = new ZContext();
    Socket client1 = clientCtx.createSocket(ZMQ.PULL);
    client1.setIdentity("client1".getBytes());
    client1.connect("tcp://localhost:5500");

    try {
        //Thread.currentThread().sleep(2000);
        rtr.sendMore("client1");
        rtr.sendMore("");
        rtr.send("Hello!");
        System.out.println(client1.recvStr());
        System.out.println("Client Received: " + client1.recvStr());
    } catch (Exception e1) {
        System.out.println("Could not send to client1: " + e1.getMessage());
    }

    routerCtx.destroy();
    clientCtx.destroy();
}
Results
The expected result is for it to print "Client Received: Hello!", but instead the ROUTER throws an exception consistent with an unaddressable message. I'm using setRouterMandatory(true) to throw that exception under such circumstances; however, the client explicitly sets an identity and the server sends to that identity, so I don't understand why the exception is raised.
Temporary Fix
If I add a slight delay by uncommenting Thread.currentThread().sleep(2000);, the message is delivered successfully, but I despise using sleeps and waits: they create messy and brittle code, and more importantly, they don't answer the "why?"
Questions
Why is this happening? It was my understanding that "late joining" applied only to PUB/SUB sockets.
Is PULL with ROUTER an invalid socket combination? I'm using it for a chat program, and aside from this issue, it works great.
Why is this happening?
You have a race condition. The client1.connect call starts the connection process, but there is no guarantee the connection is actually established when you call rtr.sendMore("client1"). Your sleep() workaround pretty much proves this.
Changing PULL to DEALER is a step in the right direction, because a DEALER can both send and receive. To avoid the need for sleeps and waits, you have to change your protocol. A simple change to the code above would be to have the DEALER connect and then immediately send a "HELLO" message to the ROUTER (it could even be an empty message). The ROUTER code must be redesigned so that it does nothing until it receives a HELLO message from the DEALER. Once you have received that HELLO message, you know the connection is successfully established and you can safely send your chat messages.
Also, this protocol eliminates the need for your router to know the client id in advance. Instead you can extract it from the HELLO message. A message from a DEALER to a ROUTER is guaranteed to be a multi-part message and the first part is the client ID.
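A sketch of that handshake, reusing the names from the test program above (the PULL socket becomes a DEALER; error handling omitted):
Socket client1 = clientCtx.createSocket(ZMQ.DEALER);
client1.setIdentity("client1".getBytes());
client1.connect("tcp://localhost:5500");
client1.send("HELLO");               // announce ourselves to the ROUTER

// Router side: do nothing until the handshake arrives.
String clientId = rtr.recvStr();     // frame 1: the DEALER's identity
String hello = rtr.recvStr();        // frame 2: "HELLO"

// The connection is now provably up, so routed sends are safe.
rtr.sendMore(clientId);
rtr.send("Hello!");
System.out.println("Client Received: " + client1.recvStr());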
I'm implementing a simple Netty server for a multiplayer game; I'm just trying to figure out Netty.
I test the server via telnet. What I've done is broadcast the messages to all channels, and that works smoothly. I also remove channels from the map on the close event, which works fine.
The problem is that if one of the clients disconnects unexpectedly, the messageReceived callback is invoked before the closed callback, with the disconnected channel as the sender.
How can I properly ignore a message that comes from a disconnected client?
I use a StringBuffer in messageReceived, but in this case StringBuffer.toString() doesn't produce a proper string either. In the end the disconnected channel broadcasts a pointless message to the other channels and to itself, and when the receiver channel is itself, a Connection reset by peer exception is thrown,
which is normal, because the channel itself is no longer available at that moment.
Here is the code:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    System.out.println();
    System.out.println("------------------");
    Channel current = e.getChannel();
    System.out.println("SenderChannel:" + current.getId());
    if (!current.isOpen())
        System.out.println("Not Open");

    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    StringBuffer sbs = new StringBuffer();
    while (buf.readable()) {
        sbs.append((char) buf.readByte());
    }
    String s = sbs.toString();
    System.out.println(s);

    String you = "You:" + s;
    String other = "Other:" + s;
    byte[] uResponse = you.getBytes();
    byte[] otherResponse = other.getBytes();

    Iterator<Map.Entry<Integer, Channel>> iterator = channelList.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<Integer, Channel> pairs = iterator.next();
        Integer key = pairs.getKey();
        Channel c = pairs.getValue();
        System.out.println("ReceiverChannel:" + c.getId());
        if (!key.equals(current.getId()))   // equals(), not ==, for boxed Integers
            c.write(ChannelBuffers.wrappedBuffer(otherResponse));
        else
            c.write(ChannelBuffers.wrappedBuffer(uResponse));
    }
}

@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    channelList.remove(ch.getId());
    System.out.println();
    System.out.println("*****************");
    System.out.println("DisconnectEvent:" + ch.getId());
    System.out.println("*****************");
    System.out.println();
    ch.close();
}
You can't solve the problem in the manner that you would like. If there's a network problem, then technically the sender could disconnect at any time, for example:
- as soon as the thread enters messageReceived
- while you're iterating through channelList
- while you're iterating through channelList, but after you've echoed back to the sender
- after you've broadcast the message
Netty can't raise the disconnected event while messageReceived is processing because you're running in the thread that will raise the event (unless you have a non-ordered execution handler in your pipeline). The correct solution really depends on your application. If the broadcast results in all the other receivers responding it's probably better / easier to have the server suppress any messages destined for a client that's no longer connected.
Also, if you're really going to use strings then take a look at StringEncoder / StringDecoder. There's no guarantee in your code that the message event buffer contains a complete string.
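For example, a pipeline along these lines would hand messageReceived one complete String per message (Netty 3 API; line-delimited framing is an assumption about your protocol, and MyGameHandler stands in for the handler above):
ChannelPipeline pipeline = Channels.pipeline();
// split the byte stream into frames at newlines, capped at 8192 bytes
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8)); // frame -> String
pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8)); // String -> bytes
pipeline.addLast("handler", new MyGameHandler());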
Just put a try/catch around each send. If one of them fails, close the corresponding channel.
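Since a Netty write is asynchronous, an I/O failure usually surfaces on the ChannelFuture rather than as an exception thrown from write(); the same idea expressed with a listener (Netty 3, matching the code above) would be:
c.write(ChannelBuffers.wrappedBuffer(otherResponse))
 .addListener(ChannelFutureListener.CLOSE_ON_FAILURE); // close the channel if this write fails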
If this is for a multiplayer game server, it might be better to use an existing Netty game server solution like java game server. Disconnects become events which get sent to the session and since it is event driven, you could write your own handler to decide whether or not to receive anymore events on the same session. Since events are queued in a FIFO order, if disconnect happens then you need not go ahead with subsequent broadcasts.
I am not a Java developer, but from a socket point of view this data was buffered or sent before the user disconnected. So while you are receiving, the user is still connected, and exactly when receiving completes, the user has already disconnected. So I think the best way to prevent this is to check whether the user is still connected after each receive.
In C# I personally use this code to check if the user is still connected:
if (client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Receive(checkConn, SocketFlags.Peek) == 0)
        return false;
}
return true;
I am not sure about Java and Netty (and whether your connection is TCP), but this is what I use, and it should be easy to convert to Java.
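For what it's worth, a rough Java approximation of that check might look like this (a sketch; java.net.Socket has no MSG_PEEK equivalent, so a PushbackInputStream is used to return the probe byte):
import java.io.IOException;
import java.io.PushbackInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

static boolean isStillConnected(Socket socket, PushbackInputStream in) throws IOException {
    socket.setSoTimeout(1);            // probe without blocking for long
    try {
        int b = in.read();             // -1 once the peer has closed its side
        if (b == -1) {
            return false;
        }
        in.unread(b);                  // put the probed byte back for the real reader
    } catch (SocketTimeoutException e) {
        // nothing pending right now; the connection is presumed alive
    } finally {
        socket.setSoTimeout(0);        // restore blocking reads
    }
    return true;
}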
I am using Apache MINA on the server side. I have a client written with traditional blocking I/O. Here's the CLIENT side code that sends data to the server:
class SomeClass extends Thread
{
    Socket socket;

    // Constructor
    SomeClass()
    {
        socket = ... // assign the socket field
    }

    public void run()
    {
        try
        {
            int j = 0;
            OutputStream oStrm = socket.getOutputStream();
            while (j++ < 10)
            {
                System.out.println("CLIENT[" + clientNo + "] Send Message =>" + requests[clientNo][j]);
                byte[] byteSendBuffer = (requests[clientNo][j]).getBytes();
                oStrm.write(byteSendBuffer);
                oStrm.flush();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
The above thread is run, say, 20 times, so 20 sockets are created, and on each socket many messages are sent. With a server written using blocking IO socket classes I'm able to retrieve the data perfectly.
The problem comes with the Apache MINA based server, which uses a BUFFER! I am not able to get individual messages.
How do I get individual messages (given that I'm not able to change anything in the client, AND the length of individual messages is not known)?
Server Side Code
Socket Creation
public static void main(String[] args) throws IOException, SQLException {
    System.out.println(Charset.defaultCharset().name());
    Charset charset = Charset.forName("UTF-8");
    IoAcceptor acceptor = new NioSocketAcceptor();
    acceptor.getFilterChain().addLast("codec",
            new ProtocolCodecFilter(charset.newEncoder(), charset.newDecoder()));
    acceptor.setHandler(new TimeServerHandler());
    acceptor.getSessionConfig().setReadBufferSize(64);
    acceptor.getSessionConfig().setIdleTime(IdleStatus.BOTH_IDLE, 10);
    acceptor.bind(new InetSocketAddress(PORT));
}
Handler Code
public void messageReceived(IoSession session, Object message) throws Exception {
    IoBuffer bf = (IoBuffer) message; // cast to the public IoBuffer API
    Charset charset = Charset.forName("UTF-8");
    CharsetDecoder decoder = charset.newDecoder();
    String outString = bf.getString(decoder);
}
How do I get individual messages
You don't. There is no such thing as a message in TCP. It is a byte-stream protocol. There are no message boundaries and there is no guarantee that one read equals one write at the other end.
(given that I'm not able to change anything in the client, AND the length of individual messages is not known)
You are going to have to parse the messages to find where they stop, according to the definition of the application protocol. If that isn't possible because, say, the protocol is ambiguous, the client will have to be junked. However, since you can't change the client, it presumably already works with an existing system, so the guy before you had the same problem and solved it somehow.
MINA is actually a very elaborate framework for solving your problem in an elegant way. Its basic concept is a filter chain, in which a series of filters is applied to each incoming message.
You should implement a protocol decoder (implementing MessageDecoder) and register it in your MINA filter chain. That decoder should parse byte buffers into the object representation of your choice.
Then you can register a message handler that handles complete messages.
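A skeleton of that approach (MINA 2's demux package). Since the question doesn't state a framing rule, newline-terminated messages are assumed here purely for illustration; substitute whatever rule your application protocol actually defines:
import java.nio.charset.Charset;
import org.apache.mina.core.buffer.IoBuffer;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolDecoderOutput;
import org.apache.mina.filter.codec.demux.MessageDecoder;
import org.apache.mina.filter.codec.demux.MessageDecoderResult;

public class LineMessageDecoder implements MessageDecoder {
    private static final Charset UTF8 = Charset.forName("UTF-8");

    // Is at least one complete message visible in the buffer yet?
    public MessageDecoderResult decodable(IoSession session, IoBuffer in) {
        return indexOfNewline(in) >= 0 ? MessageDecoderResult.OK
                                       : MessageDecoderResult.NEED_DATA;
    }

    // Consume exactly one message and emit it downstream as a String.
    public MessageDecoderResult decode(IoSession session, IoBuffer in,
                                       ProtocolDecoderOutput out) throws Exception {
        int nl = indexOfNewline(in);
        byte[] line = new byte[nl - in.position()];
        in.get(line);
        in.get(); // skip the '\n' delimiter itself
        out.write(new String(line, UTF8));
        return MessageDecoderResult.OK;
    }

    public void finishDecode(IoSession session, ProtocolDecoderOutput out) { }

    private static int indexOfNewline(IoBuffer in) {
        for (int i = in.position(); i < in.limit(); i++) {
            if (in.get(i) == '\n') return i;
        }
        return -1;
    }
}
Register it via a DemuxingProtocolCodecFactory (factory.addMessageDecoder(LineMessageDecoder.class)) wrapped in a ProtocolCodecFilter, along with a matching encoder for the write side; messageReceived will then be handed one complete message at a time.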
What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message, but unfortunately Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using simple Java Socket connections. The relevant code is:
Socket socket = socket(); // returns a reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if the connection is dropped while a message is being sent, or right before, Socket.getOutputStream().write() still finishes successfully. I expect that's because the TCP window isn't exhausted yet.
Is there a way I can tell for sure whether a packet actually got onto the network or not? I experimented with the following two solutions:
1. Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read that fails when the connection was dropped, but otherwise hangs for 250ms.
2. Set the TCP send buffer size (e.g. via Socket.setSendBufferSize()) to the message's binary size.
Both methods work, but they significantly degrade the quality of service: throughput goes from 100 messages/second to about 10 messages/second at most.
Any suggestions?
UPDATE:
Since multiple answers challenged whether the described behavior is possible, I constructed "unit" tests of it. Check out the test cases at Gist 273786.
Both tests have two threads, a server and a client. The server closes the socket while the client is sending data, yet no IOException is thrown. Here is the main method:
public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while the server has already
                // closed the connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Finished writing");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();

    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample at the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine if a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response or a graceful socket shutdown. End of story - there really is no other way, and this is not Java specific - it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is to catch all IOExceptions (any of them indicates an error) and to perform a graceful socket shutdown:
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shut down its socket
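A sketch of those two steps in plain java.net code:
socket.shutdownOutput();                // step 1: send FIN; we will write no more
InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // step 2: drain whatever the peer still sends; read() returning -1
    // means the peer has also shut down its side
}
socket.close();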
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means you change your packet layout to the enhanced binary format.
This way, when an error happens, Apple will write an error/feedback code to the socket instead of just silently dropping the connection.
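From memory of the legacy APNs binary interface (worth verifying against Apple's documentation), the enhanced packet looks roughly like this; identifier, expiry, deviceToken, and payload are placeholders:
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
DataOutputStream packet = new DataOutputStream(bytes);
packet.writeByte(1);                   // command: 1 = enhanced format
packet.writeInt(identifier);           // your id, echoed back in error responses
packet.writeInt(expiry);               // epoch seconds after which Apple may discard it
packet.writeShort(deviceToken.length); // device token length (32)
packet.write(deviceToken);
packet.writeShort(payload.length);     // payload length
packet.write(payload);                 // the JSON payload bytes
// On an error Apple writes back a 6-byte response before closing:
// command (8), a status code, and the identifier of the failed notification.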
If you're sending information to Apple using the TCP/IP protocol, you have to be receiving acknowledgements. However, you stated:
Apple doesn't return any acknowledgement whatsoever
What do you mean by this? TCP/IP guarantees delivery, therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and you break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem, or is not handled properly by the device, then the fix is in the device's handling of duplicate push notifications: there's nothing you can do on the pushing side.
Comment Clarification/Question
OK. The first part of what you understand is the answer to the second part. Only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occurs an exception is raised). If there is no exception, the data you just tried to send was sent, guaranteed by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received, with no exception raised? The answer should be no.
Examples
Nice examples; this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
It looks like you are correct that without a protocol for confirmation there is no guarantee the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more of a convenience for the customer; it cannot be used to guarantee delivery and must be combined with other methods. I read this in the following source.
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
It seems the answer is no on whether you can tell for sure. You may be able to use a packet sniffer like Wireshark to tell whether it was sent, but this still won't guarantee it was received and delivered to the device, due to the nature of the service.