Java writing to closed output stream not throwing IOException - java

Here is the flow of my client/server.
Socket is created in main thread.
Socket passes to Thread 1.
Client sends data to server
Server responds to client
Server closes input stream, output stream, and socket by invoking close()
Socket returned to main thread, and then passed to Thread 2.
Client writes data to server - no exception, no errors, server gets no data
Client attempts to read data - no exceptions, no errors
How can I detect the problem that socket was closed?
try {
    os = new DataOutputStream(this.socket.getOutputStream());
    os.write(data);
    os.flush();
    System.out.println("Written " + data.length + " bytes");
} catch (IOException e) {
    System.out.println("Client failed to write to stream");
    System.out.println(e.getMessage());
    e.printStackTrace();
}
The exception is never thrown. It says Written 60 bytes. Any ideas?
UPDATE
Here is the way that I read the response. I wait for data, read the first 4 bytes (which give the length of the response), and keep reading until I have read the specified length. This loop never ends because no data ever comes in.
is = new DataInputStream(this.socket.getInputStream());
while (true) {
    while (is.available() > 0) {
        bos.write(is.read());
    }
    if (contentLength == 0 && bos.size() > 3) {
        byte[] bytes = bos.toByteArray();
        byte[] size = Arrays.copyOf(bytes, 4);
        contentLength = ByteBuffer.wrap(size).getInt();
    }
    if (bos.size() - 4 < contentLength) {
        continue;
    }
    break;
}

When the server closes its end of the TCP connection, it sends a FIN packet to tell the client. The client program sees this as the InputStream reaching end of stream. If the client then writes to the closed connection, the server responds with an RST packet to signal the error, and once the RST has been received, a further write makes the API throw a SocketException with the message "connection reset".
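In code terms, that close shows up on the read side: a minimal illustrative check (stream name assumed) is

int b = is.read();   // blocks until data arrives, the peer closes, or an error occurs
if (b == -1) {
    // the server sent FIN and closed its end: end of stream
}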
Your problem is with detecting the end of the input here:
while (is.available() > 0) {
    bos.write(is.read());
}
The available method probably doesn't do what you think it does. If you want to read 4 bytes, read 4 bytes:
byte[] bytes = new byte[4];
is.readFully(bytes); // throws EOFException on end of input
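For the response format described in the question (a 4-byte length followed by the payload), a minimal sketch of the whole read could look like this; it is not the original code, and the variable names are illustrative:

DataInputStream is = new DataInputStream(this.socket.getInputStream());
int contentLength = is.readInt();      // reads the same 4 big-endian bytes as the ByteBuffer approach
byte[] body = new byte[contentLength];
is.readFully(body);                    // blocks until the whole payload has arrived

Both calls throw EOFException if the server has already closed the connection, which is exactly the signal the question is looking for.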

It says that it wrote 60 bytes because it did. The server just isn't listening anymore. You need to get the input stream to see if anything is there. If nothing comes back, then you know that the connection didn't work.
UPDATE
Use a timer to determine if the server is responding.
long currentTime = System.currentTimeMillis();
while (System.currentTimeMillis() - currentTime < x) { // x = milliseconds you want to wait
    // (try to read from server)
}
This way, if it takes longer than x milliseconds, the loop times out and you can do some error handling after it.
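An alternative that is not part of the original answer, but is the built-in way to get the same effect, is the socket read timeout: a blocking read then throws SocketTimeoutException if nothing arrives within x milliseconds.

socket.setSoTimeout(x);          // x = milliseconds you are willing to wait
try {
    int first = is.read();       // blocks for at most x ms
    if (first == -1) {
        // server closed the connection
    }
} catch (SocketTimeoutException e) {
    // nothing arrived within x ms: treat the server as unresponsive
}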

Related

Send message from Java client to Python server using sockets and vice versa

I am trying to send and receive data using a Python server and a Java client. First, the Java client sends a JSON string to the Python server. After the string is received, the Python server sends a JSON back to the client. After the client receives the JSON from the server, it again sends a JSON string to the server (the client sends the same message every time). This is a repeating process.
ISSUE: When I run both the Python server and the Java client, the Python server receives the message sent by the Java client and sends back the JSON. But on the client side, the message sent by the server is never received.
Server.py
import socket
import threading
import json
import numpy
HEADER_INITIAL = 25
PORT = 1234
SERVER = socket.gethostbyname(socket.gethostname())
ADDR = (SERVER, PORT)
FORMAT = 'utf-8'
def handle_client(self, conn, addr):
    print(f"[NEW CONNECTION] {addr} connected.")
    connected = True
    while connected:
        msg = conn.recv(HEADER_INITIAL).decode(FORMAT)
        if msg:
            print("[DATA] RECEIVED" + str(msg))
            x = {
                "Sentence": "This is a value"
            }
            y = json.dumps(x)
            conn.send(y.encode(FORMAT))
            conn.send("\n".encode(FORMAT))
    conn.close()
Client.java
try (Socket socket = new Socket(Address, Port)) {
    InputStream input = socket.getInputStream();
    InputStreamReader reader = new InputStreamReader(input);
    OutputStream output = socket.getOutputStream();
    PrintWriter writer = new PrintWriter(output, true);

    int character;
    StringBuilder data = new StringBuilder();

    while (true) {
        Thread.sleep(4000);
        String strJson = "{'message':'Hello World'}";
        JSONObject jsonObj = new JSONObject(strJson);
        writer.println(jsonObj.toString());

        while ((character = reader.read()) != -1) {
            data.append((char) character);
        }
        System.out.println(data);
    }
} catch (UnknownHostException ex) {
    System.out.println("Server not found: " + ex.getMessage());
} catch (IOException ex) {
    System.out.println("I/O error: " + ex.getMessage());
}
UPDATE
Here is the debug output.
I first started the server and then the client. Initially the server receives the {'message':'Hello World'} value sent by the client, and the server sends the value of the x variable back to the client. After that the server receives nothing more from the client, but the client prints the value of x continuously (System.out.println(data);). I tried to send dynamic values from the server to the client, but the client only ever prints the value the server sent the first time.
You don't provide any debugging output so it's difficult to be 100% sure this is the entire cause. However, it seems pretty evident that this section of your client code isn't correct:
while((character = reader.read()) != -1) {
data.append((char) character);
}
System.out.println(data);
The server is holding the connection open forever (nothing ever sets connected to false). And so in the loop above, the character returned by reader.read will never be -1 because -1 is only returned at "end of stream". End of stream will only occur when the server closes its socket -- or is otherwise disconnected.
You should add a check for the newline to break out of the read loop:
if (character == '\n')
break;
or you could add it to the while condition:
while ((character = reader.read()) != -1 && character != '\n') {
...
Your code overall lacks appropriate handling of possible exceptional conditions. For example, if the client disconnects, your server will never exit its loop. It will call recv, get back an empty string (signifying "end of file" on the connection), and so will correctly bypass sending a response, but it will then simply go back and execute recv again, get an empty string again, and so forth forever.
Also, your python code makes the implicit assumption that the recv returns exactly the single string that was sent by the client, which is not guaranteed. If the client sends a 20 character string for example, it's possible that the first server recv call returns the first 10 characters, and the next call returns the rest.
(In practice, given the sleep in the client side code, that's unlikely to be a problem in this snippet of code, but one should program defensively because in a real production program, there will inevitably be a race or edge case that will do exactly this and it will cause the client and server to get out of sync and be difficult to debug.)
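Putting that together, a minimal sketch of the fixed client read loop (assuming the server keeps terminating each JSON message with \n, as Server.py does; exception handling omitted as in the question) could be:

BufferedReader reader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
PrintWriter writer = new PrintWriter(socket.getOutputStream(), true);

while (true) {
    Thread.sleep(4000);
    writer.println(new JSONObject("{'message':'Hello World'}").toString());

    String line = reader.readLine();   // returns one \n-terminated message, without the newline
    if (line == null) {                // null means the server closed the socket
        break;
    }
    System.out.println(line);
}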

TCP detect disconnected server from client

I'm writing a simple TCP client/server program pair in Java, and the server must disconnect if the client hasn't sent anything in 10 seconds. socket.setSoTimeout() gets me that, and the server disconnects just fine. The problem is - how can I get the client to determine if the server is closed? Currently I'm using DataOutputStream for writing to the server, and some answers here on SO suggest that writing to a closed socket will throw an IOException, but that doesn't happen.
What kind of writer object should I use to send arbitrary byte blocks to the server, that would throw an exception or otherwise indicate that the connection has been closed remotely?
Edit: here's the client code. This is a test function that reads one file from the file system and sends it to the server. It sends it in chunks, and pauses for some time between each chunk.
public static void sendFileWithTimeout(String file, String address, int dataPacketSize, int timeout) {
    Socket connectionToServer = null;
    DataOutputStream outStream = null;
    FileInputStream inStream = null;
    try {
        connectionToServer = new Socket(address, 2233);
        outStream = new DataOutputStream(connectionToServer.getOutputStream());

        Path fileObject = Paths.get(file);
        outStream.writeUTF(fileObject.getFileName().toString());

        byte[] data = new byte[dataPacketSize];
        inStream = new FileInputStream(fileObject.toFile());
        boolean fileFinished = false;
        while (!fileFinished) {
            int bytesRead = inStream.read(data);
            if (bytesRead == -1) {
                fileFinished = true;
            } else {
                outStream.write(data, 0, bytesRead);
                System.out.println("Thread " + Thread.currentThread().getName() + " wrote " + bytesRead + " bytes.");
                Thread.sleep(timeout);
            }
        }
    } catch (IOException | InterruptedException e) {
        System.out.println("Something something.");
        throw new RuntimeException("Problem sending data to server.", e);
    } finally {
        TCPUtil.silentCloseObject(inStream);
        TCPUtil.silentCloseObject(outStream);
        TCPUtil.silentCloseObject(connectionToServer);
    }
}
I'd expect the outStream.write to throw an IOException when it tries to write to a closed server, but nothing.
I'd expect the outStream.write to throw an IOException when it tries to write to a closed server, but nothing.
It won't do that the first time, because of the socket send buffer. If you keep writing, it will eventually throw an IOException: 'connection reset'. If you don't have data to get to that point, you will never find out that the peer has closed.
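A quick way to see that behaviour (a hypothetical probe, not code from the question) is to keep writing small chunks after the server has closed; the first writes disappear into the send buffer and a later one fails:

byte[] chunk = new byte[1024];
try {
    for (int i = 0; i < 1000; i++) {
        outStream.write(chunk);
        outStream.flush();
    }
    System.out.println("close was never detected");
} catch (IOException e) {
    System.out.println("detected after some writes: " + e.getMessage()); // typically "Connection reset"
}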
I think you need to flush and close your streams after writing, like: outStream.flush(); outStream.close(); inStream.close();
Remember that ServerSocket.setSoTimeout() is different from the client-side method with the same name.
For the server, this function only throws a SocketTimeoutException for you to catch when the timeout expires; the server socket itself remains open.
For the client, setSoTimeout() sets the read timeout for reading from the stream.
In your case, you should show the server code that closes the connected socket after catching SocketTimeoutException, to be sure the server really closed the socket associated with that client. If it did, then on the client side your code line:
throw new RuntimeException("Problem sending data to server.", e);
will be called.
[Update]
I noticed that you set the timeout for the accepted socket on the server side to 10 seconds (10,000 milliseconds). Did your client finish sending the whole file within that period? If it did, the exception will never occur.
[Suggest]
For probing, comment out the code that reads the file content and sends it to the server, and replace it with a few writes to the output stream:
outStream.writeUTF("ONE");
outStream.writeUTF("TWO");
outStream.writeUTF("TREE");
Then you can come to the conclusion.

InputStream receive method blocking

I am stuck with the following problem. I have created a connection to a remote echo server. The following method is used for receiving the bytes received from the server:
public byte[] receive() {
    byte[] resultBuff = new byte[0];
    byte[] buff = new byte[4096];
    try {
        InputStream in = socket.getInputStream();
        int k = -1;
        while ((k = in.read(buff, 0, buff.length)) != -1) {
            System.out.println(k);
            byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
            System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
            System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
            resultBuff = tbuff; // call the temp buffer your result buff
            String test = new String(resultBuff);
            System.out.println(test);
        }
        System.out.println(resultBuff.length + " bytes read.");
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return resultBuff;
}
I am able to get the following response from the server:
Connection to MSRG Echo server established
The problem is that the loop gets stuck on the second iteration, at in.read(). I understand that this is due to the server not sending any EOF information and the like.
I am not sure which of the following two solutions is correct and in which way to implement it:
Each message coming from the server will be read by a new execution of the receive() method. How do I prevent the in.read() method from blocking?
The loop inside the receive() method should be kept alive until application exit. This means that my implementation is currently using in.read() wrong. In which way should this be implemented.
The key to this question is your use of the word 'message'. There are no messages in TCP, only a byte stream. If you want messages you must implement them yourself: read a byte at a time until you have a complete message, process it, rinse and repeat. You can amortize the cost of the single-byte reads by using a BufferedInputStream.
But there are no messages in an echo server. Your read and accumulate strategy is therefore inappropriate. Just echo immediately whatever you received.
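If you do need message semantics on top of the stream, a sketch along the lines the answer describes (single-byte reads through a BufferedInputStream until a delimiter, here assumed to be '\n'; process() is a placeholder for your own handling) might be:

BufferedInputStream in = new BufferedInputStream(socket.getInputStream());
ByteArrayOutputStream message = new ByteArrayOutputStream();
int b;
while ((b = in.read()) != -1) {        // single-byte reads are cheap thanks to the buffer
    if (b == '\n') {                   // assumed message delimiter
        process(message.toByteArray());
        message.reset();
    } else {
        message.write(b);
    }
}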

How can I make sure I received whole file through socket stream?

OK, so I'm making a Java program that has a server and a client, and I'm sending a ZIP file from the server to the client. I almost have sending the file down, but on receiving I've found some inconsistency: my code isn't always getting the full archive. I'm guessing it's terminating before the stream has delivered the full thing. Here's the code for the client:
public void run(String[] args) {
    try {
        clientSocket = new Socket("jacob-custom-pc", 4444);
        out = new PrintWriter(clientSocket.getOutputStream(), true);
        in = new BufferedInputStream(clientSocket.getInputStream());
        BufferedReader inRead = new BufferedReader(new InputStreamReader(in));
        int size = 0;
        while (true) {
            if (in.available() > 0) {
                byte[] array = new byte[in.available()];
                in.read(array);
                System.out.println(array.length);
                System.out.println("recieved file!");
                FileOutputStream fileOut = new FileOutputStream("out.zip");
                fileOut.write(array);
                fileOut.close();
                break;
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
        System.exit(-1);
    }
}
So how can I be sure the full archive is there before it writes the file?
On the sending side, write the file size before you start writing the file. On the reading side, read the file size so you know how many bytes to expect. Then call read until you have got everything you expect. With network sockets it may take more than one call to read to get everything that was sent. This is especially true as your data gets larger.
HTTP sends a Content-Length: x header, in bytes, followed by a newline. This is elegant; if the connection is broken, the read can then time out instead of waiting forever.
You are using a TCP socket. The ZIP file is probably larger than the network MTU, so it will be split up into multiple packets and reassembled at the other side. Still, something like this might happen:
client connects
server starts sending. The ZIP file is bigger than the MTU and therefore split up into multiple packets.
client busy-waits in the while (true) until it gets the first packets.
client notices that data has arrived (in.available() > 0)
client reads all available data, writes it to the file and exits
the last packets arrive
So as you can see: Unless the client machine is crazily slow and the network is crazily fast and has a huge MTU, your code simply won't receive the entire file by design. That's how you built it.
A different approach: Prefix the data with the length.
Socket clientSocket = new Socket("jacob-custom-pc", 4444);
DataInputStream dataReader = new DataInputStream(clientSocket.getInputStream());
FileOutputStream out = new FileOutputStream("out.zip");

long size = dataReader.readLong();
long chunks = size / 1024;
int lastChunk = (int) (size - (chunks * 1024));
byte[] buf = new byte[1024];

for (long i = 0; i < chunks; i++) {
    dataReader.readFully(buf);          // readFully, because read() may return fewer bytes than the buffer holds
    out.write(buf);
}
dataReader.readFully(buf, 0, lastChunk);
out.write(buf, 0, lastChunk);
And the server uses DataOutputStream to send the size of the file before the actual file. I didn't test this, but it should work.
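The matching server side (a sketch, not from the original answer; the file name is illustrative) writes the length first with DataOutputStream:

File file = new File("archive.zip");                          // illustrative file name
DataOutputStream dataWriter = new DataOutputStream(socket.getOutputStream());
dataWriter.writeLong(file.length());                          // length prefix, read back with readLong()

try (FileInputStream fileIn = new FileInputStream(file)) {
    byte[] buf = new byte[1024];
    int count;
    while ((count = fileIn.read(buf)) > 0) {
        dataWriter.write(buf, 0, count);
    }
}
dataWriter.flush();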
How can I make sure I received whole file through socket stream?
By fixing your code. You are using InputStream.available() as a test for end of stream. That's not what it's for. Change your copy loop to this, which is also a whole lot simpler:
while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}
Use with any buffer size greater than zero, typically 8192.
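In context (a sketch; the stream names are assumed), that loop copies the socket straight into the file until the sender closes its end:

InputStream in = clientSocket.getInputStream();
FileOutputStream out = new FileOutputStream("out.zip");
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0) {
    out.write(buffer, 0, count);
}
out.close();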
in.available() just tells you how much data can be consumed by in.read() at this moment without blocking (waiting); it does not indicate the end of the stream. More data may arrive at your PC at any time, in another TCP/IP packet. Normally you never use in.available(); in.read() suffices for reading the stream entirely. The pattern for reading an input stream is
byte[] buf = new byte[8192]; // any reasonable size
int size;
while ((size = in.read(buf)) != -1)
    process(buf, size);
// end of stream has been reached
This way you will read the stream entirely, until its end.
Update: If you want to read multiple files, then chunk your stream into "packets" and prefix each one with an integer size. You then read until size bytes have been received instead of until in.read() returns -1.
Update 2: In any case, never use in.available() to demarcate chunks of data. Doing so assumes there is a reliable time gap between incoming pieces of data, which you can only count on in real-time systems. Windows, Java and TCP/IP are all layers that give no real-time guarantees.
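A sketch of that size-prefixed reading (illustrative only, assuming a DataInputStream and an int length prefix written by the sender) might be:

DataInputStream in = new DataInputStream(socket.getInputStream());
int size = in.readInt();       // the integer size prefix
byte[] chunk = new byte[size];
in.readFully(chunk);           // keeps reading until exactly 'size' bytes have arrived
process(chunk, size);          // process() as in the pattern above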

Is Socket.getInputStream().read(byte[]) guaranteed to not block after at least some data is read?

The JavaDoc for the class InputStream says the following:
Reads up to len bytes of data from the input stream into an array of bytes. An attempt is made to read as many as len bytes, but a smaller number may be read. The number of bytes actually read is returned as an integer. This method blocks until input data is available, end of file is detected, or an exception is thrown.
This corresponds to my experience as well. See for instance the example code below:
Client:
Socket socket = new Socket("localhost", PORT);
OutputStream out = socket.getOutputStream();
byte[] b = { 0, 0 };
Thread.sleep(5000);
out.write(b);
Thread.sleep(5000);
out.write(b);
Server:
ServerSocket server = new ServerSocket(PORT);
Socket socket = server.accept();
InputStream in = socket.getInputStream();
byte[] buffer = new byte[4];
System.out.println(in.read(buffer));
System.out.println(in.read(buffer));
Output:
2 // Two bytes read five seconds after Client is started.
2 // Two bytes read ten seconds after Client is started.
The first call to read(buffer) blocks until input data is available. However the method returns after two bytes are read, even though there is still room in the byte buffer, which corresponds with the JavaDoc stating that 'An attempt is made to read as many as len bytes, but a smaller number may be read'. However, is it guaranteed that the method will not block once at least one byte of data is read when the input stream comes from a socket?
The reason I ask is that I saw the following code in the small Java web server NanoHTTPD, and I wondered if an HTTP request smaller than 8K bytes (which most requests are) could potentially make the thread block indefinitely unless there is a guarantee that it won't block once some data is read.
InputStream is = mySocket.getInputStream();
// Read the first 8192 bytes. The full header should fit in here.
byte[] buf = new byte[8192];
int rlen = is.read(buf, 0, bufsize);
Edit:
Let me try to illustrate once more with a relatively similar code example. EJP says that the method blocks until either EOS is signalled or at least one byte of data has arrived, in which case it reads however many bytes of data have arrived, without blocking again, and returns that number, which corresponds to the JavaDoc for method read(byte[], int, int) in the class InputStream. However, if one actually looks at the source code it is clear that the method indeed blocks until the buffer is full. I've tested it by using the same Client as above and copying the InputStream-code to a static method in my server example.
public static void main(String[] args) throws Exception {
    ServerSocket server = new ServerSocket(PORT);
    Socket socket = server.accept();
    InputStream in = socket.getInputStream();
    byte[] buffer = new byte[4];
    System.out.println(read(in, buffer, 0, buffer.length));
}

public static int read(InputStream in, byte b[], int off, int len) throws IOException {
    if (b == null) {
        throw new NullPointerException();
    } else if (off < 0 || len < 0 || len > b.length - off) {
        throw new IndexOutOfBoundsException();
    } else if (len == 0) {
        return 0;
    }

    int c = in.read();
    if (c == -1) {
        return -1;
    }
    b[off] = (byte) c;

    int i = 1;
    try {
        for (; i < len; i++) {
            c = in.read();
            if (c == -1) {
                break;
            }
            b[off + i] = (byte) c;
        }
    } catch (IOException ee) {
    }
    return i;
}
This code will have as its output:
4 // Four bytes read ten seconds after Client is started.
Now clearly there is data available after 5 seconds, however the method still blocks trying to fill the entire buffer. This doesn't seem to be the case with the input stream that Socket.getInputStream() returns, but is it guaranteed that it will never block once data is available, like the JavaDoc says but not like the source code shows?
However, is it guaranteed that the method will not block once at least one byte of data is read when the input stream comes from a socket?
I don't think this question means anything. The method blocks until either EOS is signalled or at least one byte of data has arrived, in which case it reads however many bytes of data have arrived, without blocking again, and returns that number.
I saw the following code in the small Java web server NanoHTTPD
The code is wrong. It makes the invalid assumption that the entire header will be delivered in the first read. I would expect to see a loop here, that loops until a blank line is detected.
I wondered if a HTTP Request smaller than 8k bytes (which most requests are) potientially could make the thread block indefinitely unless there is a guarantee that it won't block once some data is read.
Again I don't think this means anything. The method will block until at least one byte has arrived, or EOS. Period.
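A sketch of the kind of loop the answer has in mind, reading until the blank line that ends the HTTP header (simplified and purely illustrative, not NanoHTTPD's actual code):

InputStream is = mySocket.getInputStream();
ByteArrayOutputStream header = new ByteArrayOutputStream();
int prev = -1, cur;
while ((cur = is.read()) != -1) {
    header.write(cur);
    if (cur == '\n' && prev == '\n') {   // an empty line (CR LF CR LF, CRs ignored here) ends the header
        break;
    }
    if (cur != '\r') {
        prev = cur;
    }
}
byte[] headerBytes = header.toByteArray();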
