I have a minimal JMS provider, which sends topic messages over UDP and queue messages over TCP.
I use a single selector to handle UDP and TCP selection keys (registering both SocketChannels and DatagramChannels).
My problem is: if I only send and receive UDP packets, everything works. But as soon as I start writing to a TCP socket (using Selector.wakeup() to have the selector thread do the actual writing), the selector enters an infinite loop: it returns an empty selected-key set and eats 100% CPU.
The code of the main loop (somewhat simplified) is:
public void run() {
    while (!isInterrupted()) {
        try {
            selector.select();
        } catch (final IOException ex) {
            break;
        }
        final Iterator<SelectionKey> selKeys = selector.selectedKeys().iterator();
        while (selKeys.hasNext()) {
            final SelectionKey key = selKeys.next();
            selKeys.remove();
            if (key.isValid()) {
                if (key.isReadable()) {
                    this.read(key);
                }
                if (key.isConnectable()) {
                    this.connect(key);
                }
                if (key.isAcceptable()) {
                    this.accept(key);
                }
                if (key.isWritable()) {
                    this.write(key);
                    key.cancel();
                }
            }
        }
        synchronized (waitingToWrite) {
            for (final SelectableChannel channel : waitingToWrite) {
                try {
                    channel.register(selector, SelectionKey.OP_WRITE);
                } catch (ClosedChannelException ex) {
                    // TODO: reopen
                }
            }
            waitingToWrite.clear();
        }
    }
}
And for a UDP send (TCP send is similar):
public void udpSend(final String xmlString) throws IOException {
    synchronized (outbox) {
        outbox.add(xmlString);
    }
    synchronized (waitingToWrite) {
        waitingToWrite.add(dataOutChannel);
    }
    selector.wakeup();
}
So, what's wrong here? Should I use 2 different selectors to handle UDP and TCP packets?
I suggest you check the return value of the select() method.
try {
    if (selector.select() == 0) continue;
} catch (final IOException ex) {
    break;
}
Did you try debugging to see where the loop is?
Edit:
I recommend that instead of calling remove() on the iterator, you call selectedKeys().clear() after you iterate over them. It is possible that the iterator's implementation does not remove the key from the underlying set.
Check that you are not registering OP_CONNECT on a connected channel.
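For what it's worth, old JDKs had the well-known Linux epoll bug that produces exactly this symptom: select() returns 0 immediately, in a tight loop. The usual workaround (the same idea Netty uses) is to count consecutive empty returns and rebuild the selector when they pile up. A sketch only, with rebuildSelector() as a helper you would write yourself and an arbitrary threshold:

private Selector selector; // no longer final: it may be replaced

public void run() {
    int emptySelects = 0;
    while (!isInterrupted()) {
        try {
            if (selector.select() == 0) {
                // either a wakeup() call or the epoll spin bug
                if (++emptySelects > 512) { // threshold is arbitrary
                    selector = rebuildSelector(selector);
                    emptySelects = 0;
                }
                continue;
            }
            emptySelects = 0;
            // ... process selectedKeys() as in your loop ...
        } catch (IOException ex) {
            break;
        }
    }
}

private Selector rebuildSelector(Selector old) throws IOException {
    // re-register every valid key on a fresh selector, then drop the old one
    Selector fresh = Selector.open();
    for (SelectionKey key : old.keys()) {
        if (key.isValid() && key.interestOps() != 0) {
            key.channel().register(fresh, key.interestOps(), key.attachment());
        }
    }
    old.close();
    return fresh;
}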
Problem went away after upgrading to Java 1.6.0_22.
You are probably getting an IOException and ignoring it in the empty catch block. Never do that. And just continuing after an IOException is practically never the correct action. The only exception to that rule I can think of offhand is a SocketTimeoutException, and you're in non-blocking mode so you won't be getting those, and you don't get them on selectors anyway. I would want to see the content of your methods that handle connect, accept, read, and write.
Modify your design to have one thread per incoming network connection.
The selector should be used when you're using one thread to process incoming messages on multiple TCP sockets. You register each socket with the selector and then select(), which blocks until there is data available on one of them. Then you loop through each key and process the waiting data. This is the method I've always used when writing C code, and it will work, but I don't think it's the best way to do it in Java.
Java has great native thread support, which C does not. I think it makes a lot more sense to have one thread per TCP socket and not to use selectors at all. If you just do a read operation on a socket, the thread will block until data arrives or the socket is closed. This is effectively the same thing as selecting with only one registered channel.
If you want to get this to work with only one thread, you should use the selector only for TCP sockets where you want incoming connections. This way, the only time the call to select() will return is when there is incoming data waiting on one of the sockets. That thread will be asleep at all other times, and no other operation will wake it up.
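A rough sketch of that thread-per-connection shape (PORT, running and handle() are placeholders, not code from the question):

ServerSocket serverSocket = new ServerSocket(PORT);
while (running) {
    final Socket client = serverSocket.accept(); // blocks until a client connects
    new Thread(new Runnable() {
        public void run() {
            try {
                InputStream in = client.getInputStream();
                byte[] buf = new byte[1024];
                int n;
                while ((n = in.read(buf)) != -1) { // blocks until data arrives or EOF
                    handle(buf, n);                // your message processing
                }
            } catch (IOException e) {
                // connection dropped; fall through to cleanup
            } finally {
                try { client.close(); } catch (IOException ignored) {}
            }
        }
    }).start();
}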
Related
I have a Java NIO server which receives data from clients.
When a channel is ready for read, i.e. key.isReadable() returns true, read(key) is called to read data.
Currently I am using a single read buffer for all channels. In the read() method, I clear the buffer, read into it, and finally put the contents into a byte array, assuming that I will get all the data in one shot.
But let's say I do not get the complete data in one shot (I have special characters at the end of the data to detect message boundaries).
Problem:
So how do I keep this partial data with the channel, and how do I deal with the partial-read problem in general?
I read somewhere that attachments are not good.
Take a look at the Reactor pattern. Here is a link to a basic implementation by Professor Doug Lea:
http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
The idea is to have a single reactor thread which blocks on the Selector call. Once there are IO events ready, the reactor thread dispatches them to the appropriate handlers.
In the PDF above, there is an inner class Acceptor within Reactor which accepts new connections.
The author uses a single handler for read and write events and maintains the state of this handler. I prefer to have separate handlers for reads and writes, but this is not as easy to work with as the 'state machine' approach. There can be only one attachment per event, so some kind of injection mechanism is needed to switch read/write handlers.
To maintain state between subsequent reads/writes you will have to do a couple of things:
Introduce a custom protocol which tells you when a message has been fully read
Have a timeout or cleanup mechanism for stale connections
Maintain client-specific sessions
So, you can do something like this:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Set;

public class Reactor implements Runnable {

    /** Contract shared by all handlers the dispatcher can invoke. */
    interface EventHandler {
        void processEvent();
    }

    private final Selector selector;
    private final ServerSocketChannel serverSocketChannel;

    public Reactor(int port) throws IOException {
        // open() can throw IOException, so do it here rather than in field initializers
        selector = Selector.open();
        serverSocketChannel = ServerSocketChannel.open();
        serverSocketChannel.socket().bind(new InetSocketAddress(port));
        serverSocketChannel.configureBlocking(false);
        // let Reactor handle new connection events
        registerAcceptor();
    }

    /**
     * Registers Acceptor as handler for new client connections.
     *
     * @throws ClosedChannelException
     */
    private void registerAcceptor() throws ClosedChannelException {
        SelectionKey selectionKey0 = serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
        selectionKey0.attach(new Acceptor());
    }

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            startReactorLoop();
        }
    }

    private void startReactorLoop() {
        try {
            // wait for new events on each registered or new client
            selector.select();
            // get selection keys for pending events
            Set<SelectionKey> selectedKeys = selector.selectedKeys();
            Iterator<SelectionKey> selectedKeysIterator = selectedKeys.iterator();
            while (selectedKeysIterator.hasNext()) {
                // dispatch the event to the handler attached to the given key
                dispatch(selectedKeysIterator.next());
                // remove the dispatched key from the collection
                selectedKeysIterator.remove();
            }
        } catch (IOException e) {
            // TODO add handling of this exception
            e.printStackTrace();
        }
    }

    private void dispatch(SelectionKey interestedEvent) {
        if (interestedEvent.attachment() != null) {
            EventHandler handler = (EventHandler) interestedEvent.attachment();
            handler.processEvent();
        }
    }

    private class Acceptor implements EventHandler {

        @Override
        public void processEvent() {
            try {
                SocketChannel clientConnection = serverSocketChannel.accept();
                if (clientConnection != null) {
                    registerChannel(clientConnection);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

        /**
         * Save the Channel - key association (in a Map, perhaps).
         * This is required for subsequent/partial reads/writes.
         */
        private void registerChannel(SocketChannel clientChannel) {
            // notify the injection mechanism of the new connection
            // (so it can activate the read handler)
        }
    }
}
Once the read event has been handled, notify the injection mechanism that the write handler can be injected.
New instances of the read and write handlers are created by the injection mechanism once, when a new connection is available. The injection mechanism then switches handlers as needed. Lookup of the handlers for each channel is done from the Map that is filled at connection acceptance by the registerChannel() method.
Read and write handlers have ByteBuffer instances, and since each SocketChannel has its own pair of handlers, you can now maintain state between partial reads and writes.
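Reusing the EventHandler interface from the code above, a read handler along these lines is one possibility; messageComplete() and onMessage() are placeholders for your end-of-message check and your message processing:

// One instance (and one buffer) per channel, so a partial read survives until
// the next read event for that channel.
class ReadHandler implements EventHandler {
    private final SocketChannel channel;
    private final ByteBuffer buffer = ByteBuffer.allocate(4096);

    ReadHandler(SocketChannel channel) {
        this.channel = channel;
    }

    public void processEvent() {
        try {
            int n = channel.read(buffer); // appends after any earlier partial read
            if (n == -1) {
                channel.close(); // client went away
                return;
            }
            if (messageComplete(buffer)) { // e.g. scan for your special end characters
                buffer.flip();
                onMessage(buffer); // hand the complete message off
                buffer.clear();    // ready for the next message
            }
            // otherwise: keep the bytes in the buffer and wait for the next event
        } catch (IOException e) {
            // drop the connection; the cleanup task will cancel the key
        }
    }
}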
Two tips to improve performance:
Try to do the first read immediately, when the connection is accepted. Only if you don't read enough data (as defined by the header in your custom protocol) register the channel's interest in read events. (A read-side sketch follows the write example below.)
Try to do the write first, without registering interest in write events, and only if you don't write all the data, register interest in OP_WRITE.
This will reduce the number of Selector wakeups.
Something like this:
SocketChannel socketChannel;
final static int MAX_OUTPUT = 1024;
ByteBuffer output = ByteBuffer.allocate(MAX_OUTPUT);
// ... encode the outgoing message into 'output' and flip() it ...

// if the message was not written fully
if (socketChannel.write(output) < messageSize()) { // messageSize() comes from your protocol
    // register interest for write events
    SelectionKey selectionKey = socketChannel.register(selector, SelectionKey.OP_WRITE);
    selectionKey.attach(writeHandler);
    selector.wakeup();
}
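And the read-side counterpart of the first tip might look like this (again a sketch; messageComplete() and readHandler stand in for your own protocol check and handler):

// Read optimistically right after accept(); only register OP_READ if the
// first read did not deliver a complete message.
SocketChannel clientChannel = serverSocketChannel.accept();
clientChannel.configureBlocking(false);
ByteBuffer buffer = ByteBuffer.allocate(4096);
clientChannel.read(buffer); // may legitimately read 0 bytes
if (!messageComplete(buffer)) {
    SelectionKey key = clientChannel.register(selector, SelectionKey.OP_READ);
    key.attach(readHandler); // handler keeps working on this buffer
}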
Finally, there should be a timed task which checks whether connections are still alive and whether their SelectionKeys have been cancelled. If a client breaks the TCP connection, the server will usually not know about it. As a result, a number of event handlers will remain in memory, bound as attachments to stale connections, which results in a memory leak.
This is the reason why you may hear that attachments are not good, but the issue can be dealt with.
To deal with this, here are two simple ways:
TCP keep-alive could be enabled
a periodic task could check the timestamp of the last activity on the given channel; if it has been idle for too long, the server should terminate the connection (a sketch of this follows)
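That second option could be as simple as the following sketch, run from the reactor thread itself (the selector's key set is not thread-safe), assuming a hypothetical SessionHandler attachment that records its last-activity timestamp:

private void closeIdleConnections(long idleTimeoutMillis) {
    long now = System.currentTimeMillis();
    for (SelectionKey key : selector.keys()) {
        Object attachment = key.attachment();
        if (attachment instanceof SessionHandler) { // hypothetical handler type
            SessionHandler handler = (SessionHandler) attachment;
            if (now - handler.lastActivityMillis() > idleTimeoutMillis) {
                try { key.channel().close(); } catch (IOException ignored) {}
                key.cancel(); // the handler becomes unreachable, so no leak
            }
        }
    }
}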
There's an ancient and very inaccurate NIO blog from someone at Amazon where it is wrongly asserted that key attachments are memory leaks. Complete and utter BS. Not even logical. This is also the one where he asserts you need all kinds of supplementary queues. Never had to do that yet, in about 13 years of NIO.
What you need is a ByteBuffer per channel, or possibly two, one for read and one for write. You can store a single one as the attachment itself: if you want two, or have other data to store, you need to define yourself a Session class that contains both buffers and whatever else you want to associate with the channel, for example client credentials, and use the Session object as the attachment.
You really can't get very far in NIO with a single buffer for all channels.
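A sketch of that Session idea (all names here are illustrative, not a fixed API):

class Session {
    final ByteBuffer readBuffer = ByteBuffer.allocate(8192);
    final ByteBuffer writeBuffer = ByteBuffer.allocate(8192);
    String clientId;         // e.g. credentials, once authenticated
    long lastActivityMillis; // handy for idle-connection cleanup
}

// at accept time, hang a fresh Session off the key:
SocketChannel channel = serverSocketChannel.accept();
channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_READ, new Session());

// in the read handler, everything for this client is in one place:
Session session = (Session) key.attachment();
((SocketChannel) key.channel()).read(session.readBuffer);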
I'm developing a client (Java)/server(C++) application using TCP sockets.
The protocol I use is composed of messages beginning with 2 bytes defining the type of the message content.
So basically, the receiving thread waits in a loop for data to be received. But I want to use a timeout on the socket, to be notified that the other host is taking too long to send data.
receivingSocket.setSoTimeout(durationInMilliseconds);
DataInputStream in = new DataInputStream(receivingSocket.getInputStream());
boolean success = false;
short value = 0;
do {
    try {
        // throws a SocketTimeoutException on timeout, i.e. when fewer than
        // 2 bytes became available from the socket in time
        value = in.readShort();
        success = true;
    } catch (SocketTimeoutException e) {
        // do something if it happens too often; otherwise go on with the loop
    } catch (IOException e) {
        // abort connection in case of other problems
    }
} while (!success);
Now, what happens if the receiving thread calls in.readShort() at a point where the socket has only one byte available in its buffer? Does this byte remain in the socket's buffer, or is it lost? In the first case, I could read it the next time I call readShort(); otherwise it seems lost for good.
readShort() here is an example; my question also stands for readInt(), etc.
Thanks for your help.
It isn't specified. I believe the way the implementation works is that the partial data is lost, but in any case nothing written down says otherwise, so you just have to assume the worst.
However, in practice this is very unlikely to happen, provided you observe common sense at the sender.
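If you want to be safe regardless of what the implementation does, one option is to read the value a byte at a time: a timeout can then only fall between whole bytes you have already stored, so nothing is lost. A sketch (big-endian, to match readShort()):

byte[] header = new byte[2];
int got = 0;
while (got < 2) {
    try {
        int b = in.read(); // times out per byte, not per short
        if (b == -1) throw new EOFException("peer closed connection");
        header[got++] = (byte) b;
    } catch (SocketTimeoutException e) {
        // nothing lost: 'got' bytes are already saved in header[]; retry
    }
}
short value = (short) (((header[0] & 0xFF) << 8) | (header[1] & 0xFF));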
Explanation
I'm revisiting the project I used to teach myself Java.
In this project I want to be able to stop the server from accepting new clients and then perform a few 'cleanup' operations before exiting the JVM.
In that project I used the following style for a client accept/handle loop:
// Exit loop by changing running to false and waiting up to 2 seconds
ServerSocket serverSocket = new ServerSocket(123);
serverSocket.setSoTimeout(2000);
Socket client;
while (running) { // 'running' is a private static boolean
    try {
        client = serverSocket.accept();
        createComms(client); // handles connection in a new thread
    } catch (IOException ex) {
        // do nothing
    }
}
In this approach a SocketTimeoutException will be thrown every 2 seconds, if there are no clients connecting, and I don't like relying on exceptions for normal operation unless it's necessary.
I've been experimenting with the following style to try and minimise relying on Exceptions for normal operation:
// Exit loop by calling serverSocket.close()
ServerSocket serverSocket = new ServerSocket(123);
Socket client;
try {
    while ((client = serverSocket.accept()) != null) {
        createComms(client); // handles connection in a new thread
    }
} catch (IOException ex) {
    // do nothing
}
In this case my intention is that an Exception will only be thrown when I call serverSocket.close() or if something goes wrong.
Question
Is there any significant difference in the two approaches, or are they both viable solutions?
I'm totally self-taught, so I have no idea if I've re-invented the wheel for no reason or if I've come up with something good.
I've been lurking on SO for a while, this is the first time I've not been able to find what I need already.
Please feel free to suggest completely different approaches =3
The problem with the second approach is that the server will die if an exception occurs in the while loop.
The first approach is better, though you might want to add exception logging, e.g. using Log4j.
while (running) {
    try {
        client = serverSocket.accept();
        createComms(client);
    } catch (IOException ex) {
        // Log errors
        LOG.warn(ex, ex);
    }
}
Non-blocking IO is what you're looking for. Instead of blocking until a SocketChannel (the non-blocking alternative to Socket) is returned, accept() will return null if there is currently no connection to accept.
This will allow you to remove the timeout, since nothing will be blocking.
You could also register a Selector, which informs you when there is a connection to accept or when there is data to read. I have a small example of that here, as well as a non-blocking ServerSocket that doesn't use a selector.
EDIT: In case something goes wrong with my link, here is the example of non-blocking IO, without a selector, accepting a connection:
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

class Server {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.socket().bind(new InetSocketAddress(123)); // bind, or accept() throws NotYetBoundException
        ssc.configureBlocking(false);
        while (true) {
            SocketChannel sc = ssc.accept(); // returns immediately; null if nothing is pending
            if (sc != null) {
                // handle channel
            }
        }
    }
}
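The selector-based variant mentioned above might look like this (a sketch, reusing port 123 from the question; running is a placeholder flag):

Selector selector = Selector.open();
ServerSocketChannel ssc = ServerSocketChannel.open();
ssc.socket().bind(new InetSocketAddress(123));
ssc.configureBlocking(false);
ssc.register(selector, SelectionKey.OP_ACCEPT);
while (running) {
    selector.select(); // blocks until a registered event is ready (or wakeup())
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
            SocketChannel sc = ssc.accept(); // non-null here: accept was signalled
            // hand the channel off, or register it for OP_READ
        }
    }
}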
The second approach is better (for the reasons you mentioned: relying on exceptions in normal program flow is not good practice), although your code suggests that serverSocket.accept() can return null, which it cannot. The method can throw all kinds of exceptions though (see the API docs). You might want to catch those exceptions: a server should not go down without a very good reason.
I have been using the second approach with good success, but added some more code to make it more stable/reliable: see my take on it here (unit tests here). One of the 'cleanup' tasks to consider is to give some time to the threads that are handling the client communications so that these threads can finish or properly inform the client the connection will be closed. This prevents situations where the client is not sure if the server completed an important task before the connection was suddenly lost/closed.
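For the 'cleanup' operations, here is a sketch of a shutdown sequence, assuming the client handlers run on an ExecutorService (clientExecutor and running are hypothetical fields, not from the question):

public void shutdown() {
    running = false;
    try { serverSocket.close(); } catch (IOException ignored) {} // unblocks accept()
    clientExecutor.shutdown(); // no new handler tasks
    try {
        // give in-flight handlers time to tell their clients the connection is closing
        if (!clientExecutor.awaitTermination(5, TimeUnit.SECONDS)) {
            clientExecutor.shutdownNow();
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}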
I'm implementing a simple Netty server for a multiplayer game. I'm just trying to figure out Netty.
I test the server via telnet, broadcasting the messages to all channels, and it works smoothly. I also remove channels from the map on the close event, which works fine.
But the problem is: if one of the clients disconnects unexpectedly, messageReceived is called before the close callback, and the sender is the disconnected channel.
How can I properly ignore a message that comes from a disconnected client?
I use a StringBuffer in messageReceived, but in this case StringBuffer.toString() does not yield a proper string either. In the end the disconnected channel broadcasts a pointless message to the other channels and to itself, and when the receiver channel is itself, it throws a 'Connection reset by peer' exception,
which is normal, because the channel itself is no longer available at that moment.
Here is the code:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    System.out.println();
    System.out.println("------------------");
    Channel current = e.getChannel();
    System.out.println("SenderChannel:" + current.getId());
    if (!current.isOpen())
        System.out.println("Not Open");
    ChannelBuffer buf = (ChannelBuffer) e.getMessage();
    StringBuffer sbs = new StringBuffer();
    while (buf.readable()) {
        sbs.append((char) buf.readByte());
    }
    String s = sbs.toString();
    System.out.println(s);
    String you = "You:" + s;
    String other = "Other:" + s;
    byte[] uResponse = you.getBytes();
    byte[] otherResponse = other.getBytes();
    Iterator<Map.Entry<Integer, Channel>> iterator = channelList.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<Integer, Channel> pairs = iterator.next();
        Integer key = pairs.getKey();
        Channel c = pairs.getValue();
        System.out.println("ReceiverChannel:" + c.getId());
        if (!key.equals(current.getId())) // equals(), not !=, for boxed Integers
            c.write(ChannelBuffers.wrappedBuffer(otherResponse));
        else
            c.write(ChannelBuffers.wrappedBuffer(uResponse));
    }
}
@Override
public void channelDisconnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    channelList.remove(ch.getId());
    System.out.println();
    System.out.println("*****************");
    System.out.println("DisconnectEvent:" + ch.getId());
    System.out.println("*****************");
    System.out.println();
    ch.close();
}
You can't solve the problem in the manner that you would like. If there's a network problem then technically the sender could disconnect at any time, for example
as soon as the thread enters messageReceived
while you're iterating through channelList
while you're iterating through channelList but after you've echoed back to the sender
after you've broadcast the message
Netty can't raise the disconnected event while messageReceived is processing because you're running in the thread that will raise the event (unless you have a non-ordered execution handler in your pipeline). The correct solution really depends on your application. If the broadcast results in all the other receivers responding it's probably better / easier to have the server suppress any messages destined for a client that's no longer connected.
Also, if you're really going to use strings then take a look at StringEncoder / StringDecoder. There's no guarantee in your code that the message event buffer contains a complete string.
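For example, a pipeline along these lines (a sketch; YourServerHandler stands in for the handler from the question) hands messageReceived() one complete, decoded string per event:

bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
    public ChannelPipeline getPipeline() {
        ChannelPipeline pipeline = Channels.pipeline();
        // split the stream on line endings, capped at 8192 bytes per frame
        pipeline.addLast("framer",
                new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
        pipeline.addLast("decoder", new StringDecoder());
        pipeline.addLast("encoder", new StringEncoder());
        pipeline.addLast("handler", new YourServerHandler()); // your existing handler
        return pipeline;
    }
});
// messageReceived() can then simply do: String s = (String) e.getMessage();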
Just put a try/catch around each send. If one of them fails, close the corresponding channel.
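Note that in Netty 3 write() is asynchronous, so a send failure usually surfaces on the ChannelFuture rather than as an immediate exception; the closest equivalent of a try/catch around each send is a future listener that closes the channel on failure, sketched here:

ChannelFuture future = c.write(ChannelBuffers.wrappedBuffer(otherResponse));
future.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture f) {
        if (!f.isSuccess()) {
            f.getChannel().close(); // receiver is gone; channelDisconnected will prune the map
        }
    }
});
// or simply: future.addListener(ChannelFutureListener.CLOSE_ON_FAILURE);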
If this is for a multiplayer game server, it might be better to use an existing Netty game server solution like java game server. Disconnects become events which get sent to the session and since it is event driven, you could write your own handler to decide whether or not to receive anymore events on the same session. Since events are queued in a FIFO order, if disconnect happens then you need not go ahead with subsequent broadcasts.
I am not a Java developer, but from a socket point of view this data was already in the buffer, or sent, before the user disconnected. So while you are receiving, the user is still connected, and exactly as receiving completes the user has already disconnected. So I think the best way to prevent this is to check whether the user is still connected after each receive.
In C# I personally use this code to check if the user is still connected:
if (client.Poll(0, SelectMode.SelectRead))
{
    byte[] checkConn = new byte[1];
    if (client.Receive(checkConn, SocketFlags.Peek) == 0)
        return false;
}
return true;
I am not sure about Java and Netty (and whether your connection is TCP), but this is what I use, and it should be easy to convert to Java.
This question has no doubt been asked in various forms in the past, but not so much for a specific scenario.
What is the most correct way to stop a Thread that is blocking while waiting to receive a network message over UDP?
For example, say I have the following Thread:
public class ClientDiscoveryEngine extends Thread {

    private final int PORT;
    private DatagramSocket socket;

    public ClientDiscoveryEngine(final int portNumber) {
        PORT = portNumber;
    }

    @Override
    public void run() {
        try {
            socket = new DatagramSocket(PORT);
            while (true) {
                final byte[] data = new byte[256];
                final DatagramPacket packet = new DatagramPacket(data, data.length);
                socket.receive(packet);
            }
        } catch (SocketException e) {
            // do stuff 1
        } catch (IOException e) {
            // do stuff 2
        }
    }
}
Now, would the more correct way be using the interrupt() method? For example adding the following method:
@Override
public void interrupt() {
    super.interrupt();
    // flip some state?
}
My only concern is: isn't socket.receive() a non-interruptible blocking method? One way I have thought of would be to implement interrupt() as above, call socket.close() in it, and then cater for that in the run() method in the catch block for SocketException. Or maybe, instead of while (true), use some state that gets flipped in the interrupt() method. Is this the best way, or is there a more elegant one?
Thanks
The receive method doesn't seem to be interruptible. You could close the socket; the javadoc says:
"Any thread currently blocked in receive(java.net.DatagramPacket) upon this socket will throw a SocketException."
You could also use setSoTimeout to make the receive method block only for a small amount of time. After the method has returned, your thread can check if it has been interrupted, and retry to receive again for this small amount of time.
Read this answer Interrupting a thread that waits on a blocking action?
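A sketch of that setSoTimeout() approach, where process() stands in for your packet handling:

socket.setSoTimeout(500);
while (!Thread.currentThread().isInterrupted()) {
    try {
        final byte[] data = new byte[256];
        final DatagramPacket packet = new DatagramPacket(data, data.length);
        socket.receive(packet); // wakes up at least every 500 ms
        process(packet);        // placeholder for your handling
    } catch (SocketTimeoutException e) {
        // nothing received in this slice; loop around and re-check the flag
    } catch (IOException e) {
        break; // socket closed or broken; exit the loop
    }
}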
To stop a thread in Java, you should use neither interrupt() nor stop(). The best way, as you suggested at the end of your question, is to have the loop inside run() controlled by a flag that you can raise as needed.
Here is an old link about this:
http://download.oracle.com/javase/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html
Other ways of stopping a thread are deprecated and don't provide as much control as this one. Also, this may have changed a bit with executor services; I haven't had time to learn much about them yet.
Also, if you want to keep your thread from blocking in some IO state waiting on a socket, you should give your socket connection and read timeouts (method setSoTimeout()).
This is one of the easier ones. If it's blocked on a UDP socket, send the socket a UDP message that instructs the receiving thread to 'stop'.
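A sketch of that trick, assuming the while (true) in the receive loop is replaced by a check of a volatile running flag, and PORT is the port the question's socket listens on:

public void stopReceiver() throws IOException {
    running = false; // volatile flag checked by the receive loop
    DatagramSocket sender = new DatagramSocket();
    byte[] stop = "STOP".getBytes(); // payload is arbitrary; the flag does the real work
    sender.send(new DatagramPacket(stop, stop.length,
            InetAddress.getByName("127.0.0.1"), PORT));
    sender.close();
}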