I've pasted a server-side code snippet below. This server code works under normal circumstances; however, the following scenario manages to break it.
Server and client are on the same machine. I tried both the loopback address and the actual IP address; it makes no difference.
Scenario
Server is online. Client makes a request (WritableByteChannel.write(ByteBuffer src) returns 12 bytes, which is the correct size, but as research revealed, that only means the 12 bytes were written to the TCP send buffer).
Server program is turned off. Client notices that the channel is closed on the remote side and closes it on its own side; it doesn't make any requests.
Server is online again.
Client tries to make a request, but fails, because the channel is closed/invalid and can't be reused (even though server is online again).
Client checks server's online status, gets positive result, connects again and immediately makes another request.
Server accepts the client (code below), then enters the if clause with the key.isReadable() condition, but fails on the read, which indicates end-of-stream.
It would be too complex to create an SSCCE; please comment if important information is missing or this is too abstract, and I'll provide further information.
Question
How can a freshly created/accepted channel fail on the read operation?
What am I missing? What steps can I undertake to prevent this?
I already tried Wireshark, but I can't capture any packets on the designated TCP port, even when the communication is actually working.
Problem/Additional Info
It's possible to capture packets into a .pcap file with RawCap.
The problem was the way the client checked the server status. I've added the method below.
Code snippets
Snippet 1
while (online)
{
if (selector.select(5000) == 0)
continue;
Iterator<SelectionKey> it = selector.selectedKeys().iterator();
while (it.hasNext())
{
SelectionKey key = it.next();
it.remove();
if (key.isAcceptable())
{
log.log(Level.INFO, "Starting ACCEPT!");
ServerSocketChannel serverSocketChannel = (ServerSocketChannel) key.channel();
SocketChannel channel = serverSocketChannel.accept();
channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_READ);
log.log(Level.INFO, "{0} connected to port {1}!",
new Object[] {channel.socket().getInetAddress().getHostAddress(), isa.getPort()});
}
boolean accepted = false;
if (key.isReadable())
{
log.log(Level.INFO, "Starting READ!");
SocketChannel channel = (SocketChannel) key.channel();
bb.clear();
bb.limit(Header.LENGTH);
try
{
NioUtil.read(channel, bb); // server fails here!
}
catch (IOException e)
{
channel.close();
throw e;
}
bb.flip();
Snippet 2
public static ByteBuffer read(ReadableByteChannel channel, ByteBuffer bb) throws IOException
{
while (bb.remaining() > 0)
{
int read = 0;
try
{
read = channel.read(bb);
}
catch (IOException e)
{
log.log(Level.WARNING, "Error during blocking read!", e);
throw e;
}
// this causes the problem... or indicates it
if (read == -1)
{
log.log(Level.WARNING, "Error during blocking read! Reached end of stream!");
throw new ClosedChannelException();
}
}
return bb;
}
Snippet 3
@Override
public boolean isServerOnline()
{
String host = address.getProperty(PropertyKeys.SOCKET_SERVER_HOST);
int port = Integer.parseInt(address.getProperty(PropertyKeys.SOCKET_SERVER_PORT));
boolean _online = true;
try
{
InetSocketAddress addr = new InetSocketAddress(InetAddress.getByName(host), port);
SocketChannel _channel = SocketChannel.open();
_channel.connect(addr);
_channel.close();
}
catch (Exception e)
{
_online = false;
}
return _online;
}
Solution
The problem was not the method that checks whether the service is available/the server is online. The problem was the second point EJP mentioned.
Specific input was expected by the server, and it was left in an inconsistent state when that condition was not met.
I've added some fallback measures, and now the reconnect process - including the check method - is working fine.
Clearly the client must have closed the connection. That's the only way read() returns -1.
Notes:
You're throwing the inappropriate ClosedChannelException when read() returns -1. That exception is thrown by NIO when you've already closed the channel and continue to use it. It has nothing to do with end of stream, and shouldn't be used for that. If you must throw something, throw EOFException.
You also shouldn't loop the way you are. You should only loop while read() is returning a positive number. At present you are starving the select loop while trying to read data that may never arrive.
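For illustration, a minimal sketch along those lines might look like this (the method name readAvailable and its boolean return convention are mine, not from the question's NioUtil):
public static boolean readAvailable(ReadableByteChannel channel, ByteBuffer bb) throws IOException {
    // Drain whatever is currently available into the buffer.
    // Returns true when the buffer is full, false when more data must arrive later;
    // throws EOFException if the peer has closed the connection (read() == -1).
    while (bb.hasRemaining()) {
        int read = channel.read(bb);
        if (read == -1) {
            throw new EOFException("Peer closed the connection");
        }
        if (read == 0) {
            return false; // nothing more right now; wait for the next OP_READ
        }
    }
    return true; // buffer filled completely
}
The caller leaves the key registered for OP_READ and simply retries when the selector fires again, so the select loop is never starved waiting for data that hasn't arrived.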
Related
I'm setting up a simple program to test starting a server, and I'm getting a silent failure state. My client seems to think it has sent, while my server doesn't think it's receiving. The two are managing the initial connection; it's just sending things after that where it's failing.
I've cut things down to the core of where it's currently failing I think.
Here's part of the Client code
public void Client (int port, String ip)
{
try {
sock = new Socket(ip, port);
System.out.println("Found the server.");
streamInput = new DataInputStream(sock.getInputStream());
// sends output to the socket
streamOutput = new DataOutputStream(
sock.getOutputStream());
streamOutput.writeChars("Client Begining Conversation");
System.out.println(streamInput.readUTF());
}
catch (UnknownHostException u) {
System.out.println(u);
return;
}
catch (IOException i) {
System.out.println(i);
return;
}
}
public static void main(String[] args) throws IOException {
// create the frame
try {
ClientGui main = new ClientGui();
main.Client(8000,"127.0.0.1");
main.show(true);
} catch (Exception e) {e.printStackTrace();}
Here's the server code.
public Server(int port) throws Exception
{
ServerSocket gameServer = new ServerSocket(port);
Socket gameSocket = gameServer.accept();
System.out.println("Client has connected");
// to send data to the client
PrintStream dataOutput
= new PrintStream(gameSocket.getOutputStream());
// to read data coming from the client
BufferedReader reader = new BufferedReader( new InputStreamReader(
gameSocket.getInputStream()
));
//play logic
Play(reader,dataOutput);
public void Play(BufferedReader reader, PrintStream dataOutput) throws Exception
{
String received, textSent;
System.out.println("Waiting for response.");
received = reader.readLine();
System.out.println("Client has responded");
//continue until 'Exit' is sent
while (received != "Exit" || received != "exit") {
System.out.println(received);
textSent = received + "recieved";
// send to client
dataOutput.println(textSent);
}
}
My client gets to here -
Found the server.
and my server gets to here -
Trying to start server.
Client has connected
Waiting for response.
At which point, it just hangs forever, each side waiting for the other. It doesn't throw an error, it just... waits until I force it closed.
So it appears that I'm either doing something wrong when I send with "streamOutput.writeChars" in my client, or I'm doing something wrong when I receive with my server with "reader.readLine();", but I can't figure out what.
Or I could be doing something more fundamentally wrong.
The problem is that reader.readLine() doesn’t return until it sees a new line character, but streamOutput.writeChars("Client Begining Conversation") doesn’t send one.
More generally, mixing a DataOutputStream on the client with a BufferedReader on the server won’t work reliably, as the latter expects plain text, while the former produces formatted binary data. For example, the character encoding might not match. The same applies to communication in the opposite direction with PrintStream and DataInputStream. It’s best to pick either a text based or binary protocol and then be consistent about the pair of classes used on both the client and server.
In the case of a text protocol, an explicit character encoding should be defined, as the default can vary between platforms. As a learning exercise, it might not matter, but it’s a good practice to be explicit about specifying a character encoding whenever handling networked communication. UTF-8 is a good choice unless there’s a specific reason to use another one.
In addition, it is generally preferred to use PrintWriter instead of PrintStream for text output in new code. Read this answer for an explanation.
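For example, a consistent text-based pairing might look roughly like this (a sketch only; the message strings are placeholders, and ip, port, and gameSocket stand in for the question's variables):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Client side: matching text classes with an explicit charset, newline-terminated messages.
Socket sock = new Socket(ip, port);
PrintWriter out = new PrintWriter(
        new OutputStreamWriter(sock.getOutputStream(), StandardCharsets.UTF_8), true);
BufferedReader in = new BufferedReader(
        new InputStreamReader(sock.getInputStream(), StandardCharsets.UTF_8));
out.println("hello from client");   // println() supplies the newline that readLine() waits for
String reply = in.readLine();

// Server side: mirror the same classes and charset.
BufferedReader reader = new BufferedReader(
        new InputStreamReader(gameSocket.getInputStream(), StandardCharsets.UTF_8));
PrintWriter writer = new PrintWriter(
        new OutputStreamWriter(gameSocket.getOutputStream(), StandardCharsets.UTF_8), true);
String received = reader.readLine();
writer.println(received + " received");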
I'm using a function to read bytes from a non-blocking SocketChannel (the socket obtained from accept()) and from a blocking SocketChannel (client side). I'm implementing a server that uses a selector to handle multiple clients, and I'm using the loopback address so everything runs on my laptop. I wrote this:
while((r = socketChannel.read(ackBuf)) != -1) {
System.out.println(name3d+" r: "+r);
}
and I expected that when the end of the content in the channel was reached, read() would return -1, but that is not what happens.
read(), in non-blocking mode, also returns 0 if nothing is ready to read at the moment but might be soon (if I understand correctly), so if I change the code to
while((r = socketChannel.read(ackBuf)) > 0) {
System.out.println(name3d+" r: "+r);
}
I won't read anything even if something becomes ready a moment later.
How can I distinguish whether I got 0 because the data is not ready yet or because the stream has ended?
In the following snippet I test the read a second time after a sleep, but I'm sure that is not a reliable way to do what I want.
int times = 0;
while((r = socketChannel.read(ackBuf)) != -1 && times<2) {
if (r == 0)
try {
Thread.sleep(500);
times++;
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println(name3d+" r: "+r);
}
" if I got 0 because is not ready or because it is ended?" Do you mean the message or the totality of the communication?
For the message, you should use a communication protocol (like json or http) for the communication, I think you should get a SocketException... You would if you using blocking and the person on the other end closed the connection... (I've written to a lot of people on SO about how SocketException is your friend)
--- edit ---
Looking over the documentation for Channel, it looks like you should get an IOException of some kind (SocketException is a subclass of IOException) if/when the channel is closed.
The non-blocking SocketChannel is used a bit differently.
You first wait for the selection key to tell you that there is data, and
then you read that data from the channel.
See this code draft:
Selector selector = Selector.open();
SocketChannel sc = SocketChannel.open();
sc.configureBlocking(false);
sc.connect(addr); // non-blocking connect; a complete client would also handle OP_CONNECT / finishConnect()
sc.register(selector, SelectionKey.OP_READ);
while (true) {
    // select() can block!
    if (selector.select() == 0) {
        continue;
    }
    Iterator<SelectionKey> iterator = selector.selectedKeys().iterator();
    while (iterator.hasNext()) {
        SelectionKey key = iterator.next();
        iterator.remove();
        if (key.isReadable()) {
            // retrieve the channel that is ready for reading
            SocketChannel channel = (SocketChannel) key.channel();
            ByteBuffer bb = ByteBuffer.allocate(1024);
            channel.read(bb);
            System.out.println("Message received!");
        }
    }
}
I am developing a server that is connected to many clients. I need to know when a client is disconnecting from the server. So each client sends a specific character to the server. If the character is not received within two seconds, I should disconnect that client (releasing the resources allocated for this client).
This is the main code of my server:
public EchoServer(int port) throws IOException {
this.port = port;
hostAddress = InetAddress.getByName("127.0.0.1");
selector = initSelector();
loop();
}
private Selector initSelector() throws IOException {
Selector socketSelector = SelectorProvider.provider().openSelector();
ServerSocketChannel serverChannel = ServerSocketChannel.open();
serverChannel.configureBlocking(false);
InetSocketAddress isa = new InetSocketAddress(hostAddress, port);
serverChannel.socket().bind(isa);
serverChannel.register(socketSelector, SelectionKey.OP_ACCEPT);
return socketSelector;
}
private void loop() {
for (;true;) {
try {
selector.select();
Iterator<SelectionKey> selectedKeys = selector.selectedKeys()
.iterator();
while (selectedKeys.hasNext()) {
SelectionKey key = selectedKeys.next();
selectedKeys.remove();
if (!key.isValid()) {
continue;
}
// Check what event is available and deal with it
if (key.isAcceptable()) {
accept(key);
} else if (key.isReadable()) {
read(key);
} else if (key.isWritable()) {
write(key);
}
}
Thread.sleep(1000);
timestamp++;
} catch (Exception e) {
e.printStackTrace();
System.exit(1);
}
}
}
The first question is whether the way I recognize online clients (sending a specific message every second) is a good approach or not.
If it is, how can I detect which SelectionKey is related to which client, and then how can I disconnect that client from the server?
The first question is whether the way I recognize online clients (sending a specific message every second) is a good approach or not.
Not in the case of an echo server. In many cases such as this, all you need is to recognize end of stream and connection failure appropriately.
how can I detect which SelectionKey is related to which client
The SelectionKey has a channel, the channel has a socket, and the Socket has a remote IP address:port. That's all you need.
and then how can I disconnect that client from the server?
Close the channel when you get -1 from the read() method, or any IOException when reading or writing.
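A rough sketch of what such a read handler could look like (illustrative only; readBuffer is assumed to be a ByteBuffer field, and this method is not taken from the question's EchoServer):
private void read(SelectionKey key) {
    SocketChannel channel = (SocketChannel) key.channel();
    SocketAddress client = channel.socket().getRemoteSocketAddress(); // IP:port identifies the client
    readBuffer.clear();
    try {
        int numRead = channel.read(readBuffer);
        if (numRead == -1) {
            // orderly close from the client: cancel the key and close the channel
            key.cancel();
            channel.close();
            System.out.println("Client " + client + " disconnected");
        }
    } catch (IOException e) {
        // connection failure: same cleanup
        key.cancel();
        try { channel.close(); } catch (IOException ignored) {}
        System.out.println("Client " + client + " dropped: " + e);
    }
}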
whether the way I recognize online clients (sending a specific message every second) is a good approach or not?
Yes, it is called a heartbeat.
how can I detect which SelectionKey is related to which client and then how can I disconnect that client from the server?
You can attach an object which has all the information needed regarding a channel. You include this object when you register the channel.
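For example (a sketch; ClientInfo is a hypothetical per-client state class, not something from the question's code):
// Attach per-client state when registering the accepted channel...
SocketChannel client = serverChannel.accept();
client.configureBlocking(false);
ClientInfo info = new ClientInfo(client.socket().getRemoteSocketAddress());
client.register(selector, SelectionKey.OP_READ, info);

// ...and retrieve it later from the key, e.g. to update the last-heartbeat time
// or to clean up when the client times out.
ClientInfo state = (ClientInfo) key.attachment();
state.lastHeartbeat = System.currentTimeMillis();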
I'm writing a simple TCP client/server program pair in Java, and the server must disconnect if the client hasn't sent anything in 10 seconds. socket.setSoTimeout() gets me that, and the server disconnects just fine. The problem is - how can I get the client to determine if the server is closed? Currently I'm using DataOutputStream for writing to the server, and some answers here on SO suggest that writing to a closed socket will throw an IOException, but that doesn't happen.
What kind of writer object should I use to send arbitrary byte blocks to the server, that would throw an exception or otherwise indicate that the connection has been closed remotely?
Edit: here's the client code. This is a test function that reads one file from the file system and sends it to the server. It sends it in chunks, and pauses for some time between each chunk.
public static void sendFileWithTimeout(String file, String address, int dataPacketSize, int timeout) {
Socket connectionToServer = null;
DataOutputStream outStream = null;
FileInputStream inStream = null;
try {
connectionToServer = new Socket(address, 2233);
outStream = new DataOutputStream(connectionToServer.getOutputStream());
Path fileObject = Paths.get(file);
outStream.writeUTF(fileObject.getFileName().toString());
byte[] data = new byte[dataPacketSize];
inStream = new FileInputStream(fileObject.toFile());
boolean fileFinished = false;
while (!fileFinished) {
int bytesRead = inStream.read(data);
if (bytesRead == -1) {
fileFinished = true;
} else {
outStream.write(data, 0, bytesRead);
System.out.println("Thread " + Thread.currentThread().getName() + " wrote " + bytesRead + " bytes.");
Thread.sleep(timeout);
}
}
} catch (IOException | InterruptedException e) {
System.out.println("Something something.");
throw new RuntimeException("Problem sending data to server.", e);
} finally {
TCPUtil.silentCloseObject(inStream);
TCPUtil.silentCloseObject(outStream);
TCPUtil.silentCloseObject(connectionToServer);
}
}
I'd expect the outStream.write to throw an IOException when it tries to write to a closed server, but nothing.
I'd expect the outStream.write to throw an IOException when it tries to write to a closed server, but nothing.
It won't do that the first time, because of the socket send buffer. If you keep writing, it will eventually throw an IOException: 'connection reset'. If you don't have data to get to that point, you will never find out that the peer has closed.
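A quick way to see this in practice is a throwaway loop like the following (hypothetical test code, not part of the original program), which keeps writing after the server is gone until the buffered data and the peer's reset catch up with it:
// Writes keep "succeeding" into the local send buffer for a while after the
// peer has closed; only once the reset comes back does write() actually fail.
byte[] chunk = new byte[8192];
try {
    while (true) {
        outStream.write(chunk);   // may succeed several times after the server is gone
        outStream.flush();
    }
} catch (IOException e) {
    // typically "Connection reset" / "Broken pipe", but only after enough writes
    System.out.println("Write finally failed: " + e.getMessage());
}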
I think you need to flush and close your streams after writing, like outStream.flush(); outStream.close(); inStream.close();
Remember that ServerSocket.setSoTimeout() is different from the client-side function with the same name.
For the server, this function only throws a SocketTimeoutException for you to catch if the timeout expires, but the server socket itself remains.
For the client, setSoTimeout() relates to the 'read timeout' for stream reading.
In your case, you should show the server code that closes the connected socket after catching the SocketTimeoutException => make sure the server closes the socket associated with that specific client. If it does, on the client side, your code line:
throw new RuntimeException("Problem sending data to server.", e);
will be called.
[Update]
I noticed that you stated you set the timeout for the accepted socket on the server side to 10 seconds (= 10,000 milliseconds); did your client complete all of the file sending within that period? If it did, the exception never occurs.
[Suggest]
For probing, just comment out your code that reads the file content to send to the server, and try replacing it with several lines that write to the output stream:
outStream.writeUTF("ONE");
outStream.writeUTF("TWO");
outStream.writeUTF("TREE");
Then you can come to the conclusion.
I have a Java TCP server which, when a client connects to it, outputs a message to the client every 30 seconds. It is a strict requirement that the client does not send any messages to the server, and that the server does not send any data other than the 30-second interval messages to the client.
When I disconnect the client, the server will not realise this until the next time it tries to write to the client. So it can take up to 30 seconds for the server to recognise the disconnect.
What I want to do is check for the disconnect every few seconds without having to wait, but I am not sure how to do this given that a) the server does not receive from the client and b) the server cannot send any other data. Would anyone please be able to shed some light on this? Thanks.
Even though your server doesn't "receive" from the client, a non-blocking read on the client socket will tell you that either there's nothing to be read (as you expect), or that the client has disconnected.
If you're using NIO you can simply use a non-blocking Selector loop (with non-blocking sockets) and only write on your 30 second marks. If a SelectionKey is readable and the read on the SocketChannel returns -1 you know the client has disconnected.
EDIT: Another approach with blocking is simply to select with a 30 second timeout. Any client disconnects will cause the select to return and you'll know which ones those are via the read set. The additional thing you'd need to do there is track how long you were blocked in the select to figure out when to do your writes on the 30 second mark (Setting the timeout for the next select to the delta).
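A minimal sketch of that selector-based variant might look like this (assuming a selector with a single accepted SocketChannel already registered for OP_READ; sendPeriodicMessage() is a placeholder, not a real API):
long nextWriteAt = System.currentTimeMillis() + 30_000;
ByteBuffer buf = ByteBuffer.allocate(1024);
while (true) {
    // sleep at most until the next 30-second mark
    long timeout = Math.max(1, nextWriteAt - System.currentTimeMillis());
    selector.select(timeout);
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isReadable()) {
            SocketChannel ch = (SocketChannel) key.channel();
            buf.clear();
            if (ch.read(buf) == -1) {      // client disconnected
                key.cancel();
                ch.close();
            }
        }
    }
    if (System.currentTimeMillis() >= nextWriteAt) {
        sendPeriodicMessage();             // write the 30-second message here
        nextWriteAt += 30_000;
    }
}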
Big Edit: After talking to Myn below, here's a complete example:
public static void main(String[] args) throws IOException {
ServerSocket serverSocket = null;
try {
serverSocket = new ServerSocket(4444);
} catch (IOException e) {
System.err.println("Could not listen on port: 4444.");
System.exit(1);
}
Socket clientSocket = null;
try {
clientSocket = serverSocket.accept();
} catch (IOException e) {
System.err.println("Accept failed.");
System.exit(1);
}
// Set a 1 second timeout on the socket
clientSocket.setSoTimeout(1000);
PrintWriter out = new PrintWriter(clientSocket.getOutputStream(), true);
BufferedReader in = new BufferedReader(
new InputStreamReader(
clientSocket.getInputStream()));
long myNextOutputTime = System.currentTimeMillis() + 30000;
String inputLine = null;
boolean connected = true;
while (connected)
{
try {
inputLine = in.readLine();
if (inputLine == null)
{
System.out.println("Client Disconnected!");
connected = false;
}
}
catch(java.net.SocketTimeoutException e)
{
System.out.println("Timed out trying to read from socket");
}
if (connected && (System.currentTimeMillis() - myNextOutputTime > 0))
{
out.println("My Message to the client");
myNextOutputTime += 30000;
}
}
out.close();
in.close();
clientSocket.close();
serverSocket.close();
}
Worth noting here is that the PrintWriter really moves you far away from the actual socket, and you're not going to catch the socket disconnect on the write (it will never throw an exception; you have to check it manually with checkError()). You could change to using a BufferedWriter instead (which requires calling flush() to push the output) and handle it like the BufferedReader to catch a disconnect on the write.
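For example, with the PrintWriter from the code above, you'd have to do something like this after each write (checkError() flushes the stream and reports whether it has swallowed an IOException):
out.println("My Message to the client");
if (out.checkError()) {   // PrintWriter never throws; this is the only way to notice a dead peer
    System.out.println("Client Disconnected (detected on write)!");
    connected = false;
}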
If you are managing multiple clients then I guess you would be using non-blocking sockets (if not, consider using them). You can use a Selector to monitor all the connected sockets to check whether they are readable or writable or there is some error on that socket. When a client disconnects, your Selector will mark that socket and will return.
For more help, Google "socket select function".