What does InputStream.available() do in Java?
Directly from the API:
public int available()
throws IOException
Returns an estimate of the number of bytes that can be read (or
skipped over) from this input stream without blocking by the next
invocation of a method for this input stream. The next invocation
might be the same thread or another thread. A single read or skip of
this many bytes will not block, but may read or skip fewer bytes.
Note that while some implementations of InputStream will return the
total number of bytes in the stream, many will not. It is never
correct to use the return value of this method to allocate a buffer
intended to hold all data in this stream.
A subclass' implementation of this method may choose to throw an
IOException if this input stream has been closed by invoking the
close() method.
The available method for class InputStream always returns 0.
This method should be overridden by subclasses.
I cannot quite grasp how this method could actually be used. Can anybody give a real-life example of it?
Thanks in advance.
I've been searching for a real life example for this for 20+ years.
How it works depends on the stream. For some streams, it doesn't work at all. For buffered streams, it works by returning the amount unread in the buffer plus the available() of the nested stream. For sockets and files, it executes a system call.
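To make the per-stream difference concrete, here is a minimal sketch using ByteArrayInputStream, one of the few streams whose available() reports the exact number of unread bytes, because all the data is already in memory. The class and method names here are illustrative, not from the original question.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class AvailableDemo {
    // ByteArrayInputStream.available() returns the exact count of unread
    // bytes, since the whole stream lives in an in-memory array.
    static int remaining(byte[] data, int alreadyRead) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(data);
        for (int i = 0; i < alreadyRead; i++) {
            in.read(); // consume some bytes first
        }
        return in.available(); // bytes left in the in-memory buffer
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4, 5};
        System.out.println(remaining(data, 2)); // prints 3
    }
}
```

For a socket or file stream, the same call would instead be an estimate backed by a system call, and may legitimately return 0 even when more data will arrive later.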
Related
public void mark(int readAheadLimit)
throws IOException
The FilterReader class in the java.io package has a mark method. What is the use of the parameter in that method?
JavaDocs - Limit on the number of characters that may be read while still preserving the mark. After reading this many characters, attempting to reset the stream may fail
What does that mean? Examples and explanations are appreciated, both about a failing reset and about what that parameter does!
There is a better explanation in InputStream.mark. A reader that supports mark() should mirror this behavior, e.g. by delegating it to the underlying InputStream:
Marks the current position in this input stream. A subsequent call to the reset method repositions this stream at the last marked position so that subsequent reads re-read the same bytes.
The readlimit arguments tells this input stream to allow that many bytes to be read before the mark position gets invalidated.
The general contract of mark is that, if the method markSupported returns true, the stream somehow remembers all the bytes read after the call to mark and stands ready to supply those same bytes again if and whenever the method reset is called. However, the stream is not required to remember any data at all if more than readlimit bytes are read from the stream before reset is called.
Marking a closed stream should not have any effect on the stream.
The mark method of InputStream does nothing.
So, the parameter tells the mark() method how large the buffer for remembering elements needs to be. This allows it to allocate a buffer of appropriate size, if needed.
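A minimal sketch of the mark/reset pattern, using BufferedInputStream (which supports mark): peek at the first byte, then rewind so a later reader still sees the full stream. The method name peekFirstByte is my own, for illustration.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MarkResetDemo {
    // Peek at the first byte, then reset so a later reader sees everything.
    static int peekFirstByte(BufferedInputStream in) throws IOException {
        in.mark(1);         // remember this position; buffer at least 1 byte
        int first = in.read();
        in.reset();         // rewind to the marked position
        return first;
    }

    public static void main(String[] args) throws IOException {
        BufferedInputStream in = new BufferedInputStream(
                new ByteArrayInputStream(new byte[]{42, 7, 9}));
        System.out.println(peekFirstByte(in)); // prints 42
        System.out.println(in.read());         // prints 42 again: reset rewound
    }
}
```

Had we read more than the 1 byte passed to mark() before calling reset(), the stream would have been free to discard the buffered data and let reset() fail.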
I've been trying to figure out why a method I wrote to read objects from a file didn't work, and realized that the available() method of ObjectInputStream returned 0 even though the file wasn't fully read.
The method did work once I used the FileInputStream's available() method instead to detect the EOF.
Why doesn't available() work for ObjectInputStream while it works for FileInputStream?
Here's the code:
public static void getArrFromFile() throws IOException, ClassNotFoundException {
    Product p;
    FileInputStream in = new FileInputStream(fName);
    ObjectInputStream input = new ObjectInputStream(in);
    while (in.available() > 0) {
        p = (Product) input.readObject();
        if (p.getPrice() > 3000)
            System.out.println(p);
    }
    input.close();
}
P.S-
I've read that I should use the EOF exception instead of available() for this, but I just want to know why this doesn't work.
Thanks a lot!!!
Because, as the javadoc tells, available() returns an estimation of the number of bytes that can be read without blocking. The base InputStream implementation always returns 0, because this is a valid estimation. But whatever it returns, the fact that it returns 0 doesn't mean that there is nothing to read anymore. Only that the stream can't guarantee that at least one byte can be read without blocking.
Although this is not documented clearly, I have realized from experience that it has to do with dynamic data. If your class contains only fixed-size fields, then available() is able to estimate the size. If there is dynamic data in your object, like lists, then it is not possible to make that estimation.
The available() method just tells how many bytes can be read without blocking. It's not very useful in regular code, but people see the name and erroneously think it does something else.
So in short: don't use available(), it's not the right method to use. Streams indicate ending differently, such as returning -1 or in ObjectInputStream's case, throwing an EOFException.
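Here is a minimal sketch of the EOFException approach for ObjectInputStream. To keep it self-contained it serializes plain Strings rather than the asker's Product class; the readAll helper is my own name, not an API method.

```java
import java.io.*;
import java.util.ArrayList;
import java.util.List;

public class ReadUntilEof {
    // Read every object from the stream until EOFException signals the end.
    static List<Object> readAll(InputStream src) throws IOException, ClassNotFoundException {
        List<Object> result = new ArrayList<>();
        try (ObjectInputStream in = new ObjectInputStream(src)) {
            while (true) {
                result.add(in.readObject());
            }
        } catch (EOFException expected) {
            // normal termination: the stream has no more objects
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        // Build a serialized stream in memory so the example is runnable as-is.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject("first");
            out.writeObject("second");
        }
        System.out.println(readAll(new ByteArrayInputStream(buf.toByteArray())));
        // prints [first, second]
    }
}
```

Catching EOFException as the loop terminator is what the serialization format expects; filtering by price, as in the original code, would simply go inside the loop body.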
I found a trick! If you still want to use .available() to read all objects to the end, you can write an integer (e.g. out.writeInt(0)) before each object (e.g. out.writeObject(obj)) when writing the file, and also read that integer before reading each object. That way .available() sees bytes left in the file and the read won't crash. Hope it helps!
Use the available() method of the InputStream instead of the ObjectInputStream's. Then, if there is any data, read it as an object.
Something like:
if (inputStreamObject.available() > 0) {
    Object anyName = objectInputStreamObject.readObject();
}
You can get the inputStreamObject directly from the Socket.
I used it and the problem solved.
I suppose this is the input counterpart of this question which I asked some time back:
Can I use both PrintWriter and BufferedOutputStream on the same outputstream?
Q1) I need to read both String lines and byte [] from the same inputstream. So can I use the scanner wrapper to read lines first and then use the inputstream directly to read byte []? Will it cause any conflict?
Q2) If there are no more references to the scanner object and it gets garbage collected, will it automatically close the connection?
Q3) If the answer to the first question is yes and the answer to the second question is no, once I am done with the reading I only have to call inputstream.close() and not Scanner right? (Because by then I won't have a handle to the scanner object anymore)
For 1), you can always read bytes and convert them to String using the encoding of your choice. I'm pretty sure this is what all "readers" do under the hood.
For 2), no, Scanner class doesn't override the finalize method so I'm pretty sure it doesn't close the handle (and it really shouldn't). The section on finalizers in the Effective Java book has a detailed explanation on this topic.
For 3), closing the Scanner would automatically close the underlying stream. I'm pretty sure this is how almost all I/O classes handle the passed in file/resource handle.
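To illustrate the point in the first answer, here is a minimal sketch of reading raw bytes and decoding them with an explicit charset, which is essentially what Readers do internally. The decode helper name is my own; readAllBytes requires Java 9+.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class BytesToString {
    // Read raw bytes from the stream, then decode them with an explicit
    // charset -- the byte-level equivalent of what a Reader does for you.
    static String decode(InputStream in) throws IOException {
        byte[] bytes = in.readAllBytes(); // Java 9+
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(
                "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(decode(in)); // prints hello
    }
}
```

Working at the byte level like this sidesteps the buffering conflict entirely, because no wrapper ever reads ahead of you.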
Q1) Yes, it can cause a conflict: the Scanner buffers its input, so when you switch to a different stream some of the bytes you want may already have been consumed.
If you can use the Scanner to read bytes, that is a better option.
Q2) The connection will be closed when it is cleaned up.
Q3) You only need to close the input stream, as Scanner is a pure Java object (and an input). For buffered outputs you need to call flush() or close() to ensure unwritten data is sent.
I have a general socket implementation consisting of an OutputStream and an InputStream.
After I do some work, I am closing the OutputStream.
When this is done, my InputStream's read() method returns -1 for an infinite amount of time, instead of throwing an exception like I had anticipated.
I am now unsure of the safest route to take, so I have a few questions:
Am I safe to assume that -1 is only returned when the stream is closed?
Is there no way to recreate the IOException that occurs when the connection is forcefully broken?
Should I send a packet that tells my InputStream it should close, instead of relying on the previous two methods?
Thanks!
The -1 is the expected behavior at the end of a stream. See InputStream.read():
Reads the next byte of data from the input stream. The value byte is returned as an int in the range 0 to 255. If no byte is available because the end of the stream has been reached, the value -1 is returned. This method blocks until input data is available, the end of the stream is detected, or an exception is thrown.
You should still catch IOException for unexpected events of course.
Am I safe to assume that -1 is only returned when the stream is closed?
Yes.
You should not assume things like this. You should read the javadoc and implement according to how the API is specified to behave. Especially if you want your code to be robust (or "safe" as you put it.)
Having said that, this is more or less what the javadoc says in this case. (One could quibble that EOF and "stream has been closed" don't necessarily mean the same thing ... and that closing the stream by calling InputStream.close() or Socket.close() locally will have a different effect. However, neither of these are directly relevant to your use-case.)
Is there no way to recreate the IO exception that occurs when the connection is forcefully broken?
No. For a start, no exception is normally thrown in the first place, so there is typically nothing to "recreate". Second, the information in the original exception (if there ever was one) is gone.
Should I send a packet that will tell my InputStream that it should close instead of the previous two methods?
No. The best method is to test the result of the read call. You need to test it anyway, since you cannot assume that the read(byte[]) method (or whatever) will have returned the number of bytes you actually asked for.
I suppose that throwing an application specific exception would be OK under some circumstances.
But remember the general principle that exceptions should not be used for normal flow control.
One of the other answers suggests creating a proxy InputStream that throws some exception instead of returning -1.
IMO, that is a bad idea. You end up with a proxy class that claims to be an InputStream, but violates the contract of the read methods. That could lead to trouble if the proxy was passed to something that expected a properly implemented InputStream.
Second, InputStream is an abstract class not an interface, so Java's dynamic proxy mechanism won't work. (For example, the newProxyInstance method requires a list of interfaces, not classes.)
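The "test the result of the read call" advice above can be sketched as a small helper that handles both short reads and end-of-stream. The readFully name is illustrative (java.io.DataInputStream has a similar method that throws EOFException instead of returning a count).

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadLoop {
    // Fill buf as far as possible, handling short reads and end-of-stream.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n == -1) {
                break; // end of stream -- a normal condition, not an exception
            }
            total += n; // read(byte[]) may return fewer bytes than requested
        }
        return total; // may be less than buf.length if the stream ended early
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream(new byte[]{1, 2, 3});
        byte[] buf = new byte[8];
        System.out.println(readFully(in, buf)); // prints 3: stream ended early
    }
}
```

Note that the loop has to accumulate: a single read(byte[], int, int) call on a socket is allowed to return as little as one byte even when more data is on the way.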
According to the InputStream javadoc, read() returns:
the next byte of data, or -1 if the end of the stream is reached.
So you are safe to assume that and it's better to use what's specified in the API than try and recreate an exception because exceptions thrown could be implementation-dependent.
Also, closing the OutputStream obtained from a socket closes the socket itself.
This is what the JavaDoc for Socket says:
public OutputStream getOutputStream()
throws IOException
Returns an output stream for this socket.
If this socket has an associated channel then the resulting output stream delegates all of its operations to the channel. If the channel is in non-blocking mode then the output stream's write operations will throw an IllegalBlockingModeException.
Closing the returned OutputStream will close the associated socket.
Returns:
an output stream for writing bytes to this socket.
Throws:
IOException - if an I/O error occurs when creating the output stream
or if the socket is not connected.
Not sure that this is what you actually want to do.
Is there no way to recreate the IO exception that occurs when the connection is forcefully broken?
I'll answer this one. InputStream is only an interface. If you really want implementation to throw an exception on EOF, provide your own small wrapper, override read()s and throw an exception on -1 result.
The easiest (least coding) way would be to use a Dynamic Proxy:
InputStream pxy = (InputStream) java.lang.reflect.Proxy.newProxyInstance(
obj.getClass().getClassLoader(),
new Class[]{ InputStream.class },
new ThrowOnEOFProxy(obj));
where ThrowOnEOFProxy would check the method name, call it and if result is -1, throw IOException("EOF").
InputStreams and OutputStreams have existed since JDK 1.0, while their character counterparts, Readers and Writers, have existed since JDK 1.1.
Most concepts seem similar, with one exception: the base stream classes declare an abstract method that processes a single byte at a time, while the base reader/writer classes declare an abstract method that processes whole char arrays.
Thus, if I understand it correctly, every InputStream or OutputStream subclass is limited to processing single bytes (thereby performing at least one method call per byte!), while readers/writers only need one method call per array (buffer).
Isn't that a huge performance problem?
Can a stream be implemented as subclass of either InputStream or OutputStream, but nevertheless be based on byte-arrays?
Actually, subclasses of InputStream have to override the method that reads a single byte at a time, but they can also override other methods that read whole byte arrays. I think that's actually the case for most input/output streams.
So this isn't much of a performance problem, in my opinion, and yes, you can extend InputStream or OutputStream and still be based on byte arrays.
Single-byte reading is almost always a huge performance problem. But if you read the API docs of InputStream, you see that you HAVE TO override read(), but SHOULD also override read(byte[], int, int). Most code that uses any kind of InputStream calls the array-style method anyway, but the default implementation of that method just calls read() for every byte, hence the negative performance impact.
For OutputStream the same holds.
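A minimal sketch of such a subclass: it serves data from an in-memory array and overrides the bulk read so callers are not forced into byte-at-a-time calls. The class name ArrayBackedStream is my own (the JDK's ByteArrayInputStream does the same thing for real).

```java
import java.io.IOException;
import java.io.InputStream;

// An InputStream over an in-memory array that overrides the bulk read
// so one method call can hand back many bytes at once.
public class ArrayBackedStream extends InputStream {
    private final byte[] data;
    private int pos = 0;

    public ArrayBackedStream(byte[] data) {
        this.data = data;
    }

    @Override
    public int read() { // the mandatory single-byte method
        return pos < data.length ? (data[pos++] & 0xFF) : -1;
    }

    @Override
    public int read(byte[] b, int off, int len) { // bulk override for speed
        if (len == 0) return 0;
        if (pos >= data.length) return -1;
        int n = Math.min(len, data.length - pos);
        System.arraycopy(data, pos, b, off, n);
        pos += n;
        return n;
    }

    public static void main(String[] args) throws IOException {
        ArrayBackedStream in = new ArrayBackedStream(new byte[]{10, 20, 30});
        byte[] buf = new byte[3];
        System.out.println(in.read(buf, 0, 3)); // prints 3: one call, 3 bytes
    }
}
```

With only read() overridden, reading those three bytes through read(byte[], int, int) would have cost three method calls via the default implementation; here it costs one arraycopy.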
As Daniel said, you must override read(), since a client might use it directly, but you also should override read(byte[], int, int).
However, I doubt you should be concerned with performance, since the JVM can and will inline that method for you. Most of all, it doesn't seem like an issue to me.
Also, most readers use some underlying input stream behind the scene, so in any case those char-array based methods end up calling read(byte[],int,int) or even read() directly.
Note that Readers/Writers are for reading characters which can be made of more than one byte such as unicode characters. Streams on the other hand are more suitable when you're dealing with non-string (binary) data.
Apart from that, InputStream and OutputStream also have methods to read/write an entire array of bytes.
Performance-wise, if you wrap it in a BufferedInputStream, the JVM should be able to optimize the overhead of single-byte read() calls down to almost nothing, i.e. it's as fast as if you did the buffering yourself.
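A minimal sketch of that wrapping, assuming a small in-memory source for runnability: the single-byte reads below are served from BufferedInputStream's internal buffer, not from the underlying stream on every call. The sum helper is my own name, for illustration.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BufferedWrapDemo {
    // Single-byte reads through BufferedInputStream hit the in-memory buffer,
    // so the per-call cost of read() stays low.
    static long sum(InputStream raw) throws IOException {
        try (InputStream in = new BufferedInputStream(raw)) {
            long total = 0;
            int b;
            while ((b = in.read()) != -1) { // cheap: served from the buffer
                total += b;
            }
            return total;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(sum(new ByteArrayInputStream(
                new byte[]{1, 2, 3, 4}))); // prints 10
    }
}
```

In real code the wrapped stream would be a FileInputStream or socket stream, which is exactly where the buffering pays off.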