Java Detect Closed Stream - java

I have a general socket implementation consisting of an OutputStream and an InputStream.
After I do some work, I am closing the OutputStream.
When this is done, my InputStream's read() method returns -1 for an infinite amount of time, instead of throwing an exception like I had anticipated.
I am now unsure of the safest route to take, so I have a few questions:
Am I safe to assume that -1 is only returned when the stream is closed?
Is there no way to recreate the IO exception that occurs when the connection is forcefully broken?
Should I send a packet that will tell my InputStream that it should close, instead of the previous two methods?
Thanks!

The -1 is the expected behavior at the end of a stream. See InputStream.read():
Reads the next byte of data from the input stream. The value byte is returned as an int in the range 0 to 255. If no byte is available because the end of the stream has been reached, the value -1 is returned. This method blocks until input data is available, the end of the stream is detected, or an exception is thrown.
You should still catch IOException for unexpected events of course.
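For instance, here is a minimal sketch of a read loop that treats -1 as the normal end of the stream while still catching IOException for genuine failures (the method and variable names are just illustrative):

import java.io.IOException;
import java.io.InputStream;

// Sketch: read until end-of-stream (-1); let IOException signal real failures.
static void readAll(InputStream in) {
    try {
        int b;
        while ((b = in.read()) != -1) {
            // handle the byte b ...
        }
        // -1 means the peer has closed its end: a normal end of data, not an error
    } catch (IOException e) {
        // a genuinely broken connection (e.g. connection reset) surfaces here
        e.printStackTrace();
    }
}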

Am I safe to assume that -1 is only returned when the stream is closed?
Yes.
You should not assume things like this. You should read the javadoc and implement according to how the API is specified to behave. Especially if you want your code to be robust (or "safe" as you put it).
Having said that, this is more or less what the javadoc says in this case. (One could quibble that EOF and "stream has been closed" don't necessarily mean the same thing ... and that closing the stream by calling InputStream.close() or Socket.close() locally will have a different effect. However, neither of these are directly relevant to your use-case.)
Is there no way to recreate the IO exception that occurs when the connection is forcefully broken?
No. For a start, no exception is normally thrown in the first place, so there is typically nothing to "recreate". Second, the information in the original exception (if there ever was one) is gone.
Should I send a packet that will tell my InputStream that it should close instead of the previous two methods?
No. The best method is to test the result of the read call. You need to test it anyway, since you cannot assume that the read(byte[]) method (or whatever) will have returned the number of bytes you actually asked for.
I suppose that throwing an application specific exception would be OK under some circumstances.
But remember the general principle that exceptions should not be used for normal flow control.
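To illustrate the point about testing the result, here is a hedged sketch of a read(byte[]) loop that copes with short reads by checking the return value of every call (the method name readFully is made up for this example):

import java.io.IOException;
import java.io.InputStream;

// Sketch: try to read exactly 'length' bytes, handling short reads and end-of-stream.
static int readFully(InputStream in, byte[] buf, int length) throws IOException {
    int total = 0;
    while (total < length) {
        int n = in.read(buf, total, length - total);
        if (n == -1) {
            break;          // end of stream reached before 'length' bytes arrived
        }
        total += n;         // a single read may return fewer bytes than requested
    }
    return total;           // the caller decides what a short count means
}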
One of the other answers suggests creating a proxy InputStream that throws some exception instead of returning -1.
IMO, that is a bad idea. You end up with a proxy class that claims to be an InputStream, but violates the contract of the read methods. That could lead to trouble if the proxy was passed to something that expected a properly implemented InputStream.
Second, InputStream is an abstract class not an interface, so Java's dynamic proxy mechanism won't work. (For example, the newProxyInstance method requires a list of interfaces, not classes.)

According to the InputStream javadoc, read() returns:
the next byte of data, or -1 if the end of the stream is reached.
So you are safe to assume that, and it's better to rely on what's specified in the API than to try to recreate an exception, because the exceptions thrown could be implementation-dependent.

Also, closing the OutputStream of a socket closes the socket itself.
This is what the JavaDoc for Socket says:
public OutputStream getOutputStream() throws IOException
Returns an output stream for this socket.
If this socket has an associated channel then the resulting output stream delegates all of its operations to the channel. If the channel is in non-blocking mode then the output stream's write operations will throw an IllegalBlockingModeException.
Closing the returned OutputStream will close the associated socket.
Returns:
an output stream for writing bytes to this socket.
Throws:
IOException - if an I/O error occurs when creating the output stream or if the socket is not connected.
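A small sketch illustrating that behaviour (the host name and port here are placeholders):

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

// Sketch: closing the stream from Socket.getOutputStream() also closes the socket.
static void demo() throws IOException {
    Socket socket = new Socket("example.com", 12345);   // placeholder host and port
    OutputStream out = socket.getOutputStream();
    out.write(42);
    out.close();                             // per the javadoc, this closes the socket too
    System.out.println(socket.isClosed());   // prints: true
}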
Not sure that this is what you actually want to do.

Is there no way to recreate the IO exception that occurs when the connection is forcefully broken?
I'll answer this one. InputStream is only an interface. If you really want the implementation to throw an exception on EOF, provide your own small wrapper, override the read() methods, and throw an exception on a -1 result.
The easiest (least coding) way would be to use a Dynamic Proxy:
InputStream pxy = (InputStream) java.lang.reflect.Proxy.newProxyInstance(
        obj.getClass().getClassLoader(),
        new Class[]{ InputStream.class },
        new ThrowOnEOFProxy(obj));
where ThrowOnEOFProxy would check the method name, call it, and if the result is -1, throw an IOException("EOF").
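For reference, a hedged sketch of the "small wrapper" variant mentioned above, as a FilterInputStream subclass (the class name EOFThrowingInputStream is made up for this example); unlike the dynamic proxy it does not run into the problem that InputStream is not an interface:

import java.io.EOFException;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: a wrapper that throws on end-of-stream instead of returning -1.
// Note that this deliberately violates the InputStream contract, as discussed above.
class EOFThrowingInputStream extends FilterInputStream {
    EOFThrowingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b == -1) {
            throw new EOFException("EOF");
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n == -1) {
            throw new EOFException("EOF");
        }
        return n;
    }
}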

Related

JVM not killed on SIGPIPE

What is the reason for the JVM handling SIGPIPE the way it does?
I would've expected
java Foo | head -10
with
import java.util.stream.Stream;

public class Foo {
    public static void main(String[] args) {
        Stream.iterate(0, n -> n + 1).forEach(System.out::println);
    }
}
to cause the process to be killed when writing the 11th line; however, that is not the case. Instead, it seems that only an error flag is being set on the PrintStream, which can be checked through System.out.checkError().
What happens is that the SIGPIPE (broken pipe) condition results in an IOException.
For most OutputStream and Writer classes, this exception propagates through the "write" method, and has to be handled by the caller.
However, when you are writing to System.out, you are using a PrintStream, and that class by design takes care of the IOException for you. As the javadoc says:
A PrintStream adds functionality to another output stream, namely the ability to print representations of various data values conveniently. Two other features are provided as well. Unlike other output streams, a PrintStream never throws an IOException; instead, exceptional situations merely set an internal flag that can be tested via the checkError method.
What is the reason for the JVM handling SIGPIPE the way it does?
The above explains what is happening. The "why" is ... I guess ... that the designers wanted to make PrintStream easy to use for typical use cases of System.out where the caller doesn't want to deal with a possible IOException on every call.
Unfortunately, there is no elegant solution to this:
You could just call checkError ...
You should be able to get hold of the FileDescriptor.out object, and wrap it in a new FileOutputStream object ... and use that instead of System.out.
Note that there are no strong guarantees that the Java app will only write 10 lines of output in java Foo | head -10. It is quite possible for the app to write ahead many lines, and to only "see" the pipe closed after head has gotten around to reading the first 10 of them. This applies whether you use System.out (and checkError) or wrap FileDescriptor.out.
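A hedged sketch of the second option, wrapping FileDescriptor.out so that a broken pipe surfaces as an IOException instead of being swallowed by PrintStream:

import java.io.BufferedOutputStream;
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class Foo {
    public static void main(String[] args) throws IOException {
        // Write to stdout through a plain OutputStream, so IOException propagates.
        OutputStream out = new BufferedOutputStream(new FileOutputStream(FileDescriptor.out));
        for (int n = 0; ; n++) {
            out.write((n + System.lineSeparator()).getBytes(StandardCharsets.UTF_8));
            out.flush();   // flush each line so the broken pipe is detected promptly
        }
        // When head closes the pipe, write() or flush() eventually throws
        // IOException ("Broken pipe"), which terminates the program.
    }
}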

How do I check if ObjectInputStream has something to read? [duplicate]

I'm using an ObjectInputStream to call readObject for reading in serialized Objects. I would like to avoid having this method block, so I'm looking to use something like InputStream.available().
InputStream.available() will tell you there are bytes available and that read() will not block. Is there an equivalent method for serialization that will tell you if there are Objects available and readObject will not block?
No. Although you could use the ObjectInputStream in another thread and check to see whether that has an object available. Generally polling isn't a great idea, particularly with the poor guarantees of InputStream.available.
The Java serialization API was not designed to support an available() function. If you implement your own object reader/writer functions, you can read any amount of data off the stream you like, and there is no reporting method.
So readObject() does not know how much data it will read, so it does not know how many objects are available.
As the other post suggested, your best bet is to move the reading into a separate thread.
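As a rough illustration of that suggestion, here is a sketch that does the blocking readObject() calls on a separate thread and hands completed objects to the rest of the program through a queue (the class and method names are made up for this example):

import java.io.EOFException;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: a reader thread blocks on readObject(); the consumer polls the queue instead.
class ObjectReaderThread extends Thread {
    private final ObjectInputStream in;
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();

    ObjectReaderThread(ObjectInputStream in) {
        this.in = in;
    }

    @Override
    public void run() {
        try {
            while (true) {
                queue.put(in.readObject());   // blocks here, not in the caller
            }
        } catch (EOFException e) {
            // end of stream: stop reading
        } catch (IOException | ClassNotFoundException | InterruptedException e) {
            e.printStackTrace();
        }
    }

    // Non-blocking check from the consumer side: returns null if nothing is ready yet.
    Object pollObject() {
        return queue.poll();
    }
}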
I have an idea that by adding another InputStream into the chain one can make availability information readable by the client:
HACK!
InputStream is = ... // where we actually read the data
BufferedInputStream bis = new BufferedInputStream(is);
ObjectInputStream ois = new ObjectInputStream(bis);

if (bis.available() > N) {
    Object o = ois.readObject();
}
The tricky point is value of N. It should be big enough to cover both serialization header and object data. If those are varying wildly, no luck.
The BufferedInputStream approach works for me, and why not just check if (bis.available() > 0) instead of using an N value? That works perfectly for me.
I think ObjectInputStream.readObject blocks (i.e. waits) when there is no input to be read. So if there is any input at all in the stream, i.e. if (bis.available() > 0), ObjectInputStream.readObject will not block. Keep in mind that ObjectInputStream.readObject might throw a ClassNotFoundException, and that isn't a problem at all to me.

ObjectInputStream available() method doesn't work as expected (Java)

I've been trying to figure out why a method I've written to read objects from a file didn't work and realized that the available() method of ObjectInputStream gave 0 even though the file wasn't fully read.
The method did work after I used the FileInputStream's available() method instead to determine the EOF.
Why doesn't the method work for ObjectInputStream while it works for FileInputStream?
Here's the code:
public static void getArrFromFile() throws IOException, ClassNotFoundException {
    Product p;
    FileInputStream in = new FileInputStream(fName);
    ObjectInputStream input = new ObjectInputStream(in);
    while (in.available() > 0) {
        p = (Product) input.readObject();
        if (p.getPrice() > 3000)
            System.out.println(p);
    }
    input.close();
}
P.S.
I've read that I should use the EOFException instead of available() for this, but I just want to know why this doesn't work.
Thanks a lot!!!
Because, as the javadoc says, available() returns an estimation of the number of bytes that can be read without blocking. The base InputStream implementation always returns 0, because this is a valid estimation. But whatever it returns, the fact that it returns 0 doesn't mean that there is nothing left to read. It only means that the stream can't guarantee that at least one byte can be read without blocking.
Although this is not documented clearly, I have realized from experience that it has to do with dynamic data. If your class only contains statically sized data, then available() is able to estimate the size. If there is dynamic data in your object, like lists etc., then it is not possible to make that estimation.
The available() method just tells how many bytes can be read without blocking. It's not very useful in regular code, but people see the name and erroneously think it does something else.
So in short: don't use available(), it's not the right method to use. Streams indicate ending differently, such as returning -1 or in ObjectInputStream's case, throwing an EOFException.
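For instance, a minimal sketch of reading every object from a file by relying on EOFException rather than available() (Product and fName are taken from the question's code):

import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;

// Sketch: read objects until EOFException signals the end of the stream.
public static void getArrFromFile() throws IOException, ClassNotFoundException {
    FileInputStream in = new FileInputStream(fName);        // fName as in the question
    ObjectInputStream input = new ObjectInputStream(in);
    try {
        while (true) {
            Product p = (Product) input.readObject();        // throws EOFException at EOF
            if (p.getPrice() > 3000)
                System.out.println(p);
        }
    } catch (EOFException e) {
        // normal termination: all objects have been read
    } finally {
        input.close();
    }
}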
I found a trick! If you still want to use available() to read all objects to the end, you can write an integer (e.g. out.writeInt(0)) before writing each object (e.g. out.writeObject(obj)) to the file, and also read that integer before reading each object. That way available() can see the bytes left in the file and the read won't crash! Hope it helps you!
Use the available() method of the InputStream instead of the ObjectInputStream. Then, if there is any data, read it as an object.
Something like:
if (inputStreamObject.available() > 0) {
    Object anyName = objectInputStreamObject.readObject();
}
You can get the inputStreamObject directly from the Socket.
I used it and the problem was solved.

How does InputStream.available() work? [duplicate]

Directly from the API:
public int available() throws IOException
Returns an estimate of the number of bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
Note that while some implementations of InputStream will return the total number of bytes in the stream, many will not. It is never correct to use the return value of this method to allocate a buffer intended to hold all data in this stream.
A subclass' implementation of this method may choose to throw an IOException if this input stream has been closed by invoking the close() method.
The available method for class InputStream always returns 0.
This method should be overridden by subclasses.
I cannot quite grasp how this method could possibly be used. Can anybody give a real-life example of it?
Thanks in advance.
I've been searching for a real life example for this for 20+ years.
How it works depends on the stream. For some streams, it doesn't work at all. For buffered streams, it works by returning the amount unread in the buffer plus the available() of the nested stream. For sockets and files, it executes a system call.
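For what it's worth, here is a hedged sketch of one plausible use: polling standard input without blocking, so a loop can keep doing background work until the user types something (purely illustrative):

import java.io.IOException;

// Sketch: poll System.in with available() so the loop never blocks on read().
public class PollStdin {
    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            if (System.in.available() > 0) {     // at least one byte can be read without blocking
                int b = System.in.read();
                System.out.println("got byte: " + b);
            } else {
                // nothing to read yet: do other work, then check again
                Thread.sleep(100);
            }
        }
    }
}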

Suggestion for API design about stream

I need to design an API method which takes an OutputStream as a parameter.
Is it a good practice to close the stream inside the API method or let the caller close it?
void test(OutputStream os) throws IOException {
    os.close();  // ???
}
I think it should be symmetric.
If you do not open that stream (which is likely to be your case), you should not close it, either, in general.
Unless the purpose of the API is to "finish up the stream", you should let the caller close. He had it first, he was responsible for it, and he may decide that he wants to write some stuff to the stream that your API didn't originally envision. Keep your functionality separated; it's more composable.
Let the user close it. Since you are taking an OutputStream as an argument, we can assume the caller has already created and opened it, so closing it in your method would not be good. If, on the other hand, your method created and opened the stream itself, there would be no need to take it as an argument, and then you could close it in your method.
Different use-cases require different patterns, for example, depending on whether the caller needs to read from or write to the stream after the call has completed.
The key API design rule is that the API should specify whether it is the caller or called method's responsibility to close the stream.
Having said that, it is generally simpler and safer if the code that opens a stream is also responsible for closing it.
Consider the case where methodA is supposed to open a stream and pass it to methodB, but an exception is thrown between the stream being opened and methodB entering the try / finally statement that is ultimately responsible for closing it. You need to code it something like the following to ensure that streams don't leak:
public void methodA() throws IOException {
    InputStream myStream = new FileInputStream(...);
    try {
        // do stuff with stream
        methodB(myStream);
    } finally {
        myStream.close();
    }
}

/**
 * @param myStream this method is responsible for closing myStream.
 */
public void methodB(InputStream myStream) throws IOException {
    try {
        // do more stuff with myStream
    } finally {
        myStream.close();
    }
}
This won't leak an open stream as a result of exceptions (or errors!) thrown in either methodA or methodB. (It works for the standard stream types because the Closable API specifies that close has no effect when called on a stream that is already closed.)
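As a side note, on Java 7 and later the same "opener closes" pattern can be written more compactly with try-with-resources; a hedged sketch of methodA rewritten that way (the file name is a placeholder):

public void methodA() throws IOException {
    // try-with-resources closes the stream automatically, even if methodB throws
    try (InputStream myStream = new FileInputStream("placeholder.dat")) {
        // do stuff with stream
        methodB(myStream);
    }
}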
