I am streaming binary data (a CSV file extracted from the database as a Clob) to the browser by calling response.getOutputStream() and would normally wrap the OutputStream in a BufferedOutputStream when copying the data.
Should I close the BufferedOutputStream or will this also close the underlying OutputStream?
[Extra question: Do I need to use a BufferedOutputStream in this case or is the response already buffered?]
Yes, it closes it. As for whether you should close it - are you expecting to write anything else to the response stream? If not, I think it's fine to close it. If you don't close it, you should obviously flush it instead - but I suspect you could figure that bit out for yourself :)
The behaviour is actually inherited from FilterOutputStream. The Javadocs for FilterOutputStream.close state:
The close method of FilterOutputStream calls its flush method, and then calls the close method of its underlying output stream.
As for whether you should buffer it - I'm not sure that this is well defined. It may be buried in the servlet spec somewhere - and it may even be configurable (sometimes you really don't want buffering, but if you can buffer the whole response it means you can serve a nicer error page if things go wrong after you've started writing).
Closing the BufferedOutputStream will also close the underlying OutputStream. You should close the BufferedOutputStream so that it flushes its contents before closing the underlying stream. See the implementation of FilterOutputStream.close() (from which BufferedOutputStream extends) to convince yourself.
I guess that whether or not the response stream given to your servlet is buffered or not depends on the implementation of your Servlet Container. FWIW I know that Tomcat does buffer its servlet response streams by default, in order to attempt to set the content-length HTTP header.
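A quick way to convince yourself of the close-propagation behaviour without reading the source (a minimal sketch; a ByteArrayOutputStream subclass stands in for the response stream so we can observe what happens):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Sketch: shows that closing a BufferedOutputStream flushes its buffer
// and closes the underlying stream, per FilterOutputStream.close().
public class CloseDemo {
    // Underlying stream that records whether close() was called.
    static class TrackingStream extends ByteArrayOutputStream {
        boolean closed = false;
        @Override
        public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    public static void main(String[] args) throws IOException {
        TrackingStream underlying = new TrackingStream();
        BufferedOutputStream buffered = new BufferedOutputStream(underlying, 8192);

        buffered.write("hello".getBytes());
        // The 8192-byte buffer is larger than the data, so nothing has
        // reached the underlying stream yet.
        System.out.println("before close: " + underlying.size()); // 0

        buffered.close();
        // close() flushed the 5 buffered bytes, then closed the underlying stream.
        System.out.println("after close: " + underlying.size()); // 5
        System.out.println("underlying closed: " + underlying.closed); // true
    }
}
```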
Related
I am reading a file being uploaded over the web and it is being sent through http and we are receiving through a BufferedInputStream. Sometimes we get a timeout exception in the middle of reading from the stream. My working theory is that the connection is being closed from the client before we can process the whole file. The file is in the order of mb.
Does this theory make sense? Does the client need to keep the connection open in order for the server to completely read bytes from the input stream?
No good news in that case: data loss will occur.
Does this theory make sense? Does the client need to keep the connection open in order for the server to completely read bytes from the input stream?
No.
As long as the BufferedInputStream has bytes in the buffer, any calls to read() / read(byte[]) / read(byte[], int, int) will simply give you data from the buffer and will never even touch the underlying inputstream.
As long as you don't touch said inputstream, it cannot just start throwing exceptions out of the clear blue sky. You need to call something on the actual socket inputstream (be it read, close, flush, write - something), in order to get an exception thrown.
What could happen is a mixed-mode operation. You call, for example:
var raw = socket.getInputStream();
var buffered = new BufferedInputStream(raw);
byte[] b = new byte[1000];
buffered.read(b); // actually reads 4000 bytes into buffer, giving 1000 of them.
// 3000 left in the buffer!
byte[] c = new byte[2000];
buffered.read(c); // works fine and never touches raw. Can't throw.
byte[] d = new byte[2000];
buffered.read(d); // 'mixed mode'
Here, in the 'mixed mode' situation, the first 1000 bytes are filled from the buffer, but then raw.available() is invoked (source: the actual source code of BufferedInputStream.java). If it returns a non-zero number, more data is fetched from raw directly; if it is 0, read() just returns. (read() is under no obligation to read ALL the requested bytes; it merely needs to [A] read at least 1, and [B] return how much it did read. Usually you want readNBytes instead.)
However, in.available() is allowed to throw. If it does, voila.
However, a normal TCP close would not cause TimeoutExceptions.
A much more likely scenario is the following: Your code is simply not processing the data fast enough. The sending entity at some point is just fed up with it all and refuses to continue to let you hog a file handle and just hangs up on you. If you're already using a buffer, perhaps there's some network equipment in between that is dog slow, or the server is configured with unrealistic expectations of how fast your network connections are.
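The short-read contract mentioned above (read() may return fewer bytes than asked for; readNBytes loops until it has them all) can be sketched with a hypothetical "trickling" stream that hands out at most two bytes per call, a bit like a slow socket:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: read(byte[]) may return fewer bytes than requested; readNBytes
// keeps calling read() until the request is satisfied or EOF is hit.
public class ShortReadDemo {
    // Hypothetical stream that hands out at most 2 bytes per read call.
    static class TricklingStream extends InputStream {
        private final ByteArrayInputStream data =
                new ByteArrayInputStream(new byte[100]);
        @Override
        public int read() throws IOException {
            return data.read();
        }
        @Override
        public int read(byte[] b, int off, int len) throws IOException {
            return data.read(b, off, Math.min(len, 2)); // trickle
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[50];
        // A plain read() call stops after the first short read: 2 bytes.
        int shortRead = new TricklingStream().read(buf);
        // readNBytes loops internally until it has all 50 bytes.
        int fullRead = new TricklingStream().readNBytes(new byte[50], 0, 50);
        System.out.println(shortRead + " vs " + fullRead); // 2 vs 50
    }
}
```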
When I saw descriptions about these two methods as below, I felt a bit confused about them.
protected void drain(): Similar to flush but does not propagate the flush to the underlying stream.
void flush(): Writes any buffered output bytes and flush through to the underlying stream.
For example:
FileOutputStream out = new FileOutputStream("test.txt");
ObjectOutputStream oout = new ObjectOutputStream(out);
Question 1:
If I call oout.flush(), it will force the output data in the ObjectOutputStream to be written to the underlying FileOutputStream, but it doesn't guarantee that these data will be subsequently written from FileOutputStream to the file "test.txt" in the disk since I don't call out.flush(), is this correct?
Question 2:
What if I call oout.drain()?
What executions will be done?
Question 1:
If I call oout.flush(), it will force the output data in the ObjectOutputStream to be written to the underlying FileOutputStream, but it doesn't guarantee that these data will be subsequently written from FileOutputStream to the file "test.txt" in the disk since I don't call out.flush(), am I right?
No, you are wrong. It does flush the FileOutputStream. However, as FileOutputStream doesn't buffer or flush, in fact there is no difference between drain() and flush() in this circumstance. If there had been a BufferedOutputStream around the FileOutputStream, there would have been a difference.
Question 2:
What if I call oout.drain()? What executions will be done?
It will flush the ObjectOutputStream but not the underlying stream, exactly as it says in the Javadoc.
You have somehow managed to get this completely back to front. I can't understand how: the Javadoc is quite clear. Also, as drain() is protected it is none of your business anyway.
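Since drain() is protected, flush() is all you can observe from outside. Here is a minimal sketch of the difference the answer describes, with a BufferedOutputStream in the middle; a ByteArrayOutputStream stands in for the FileOutputStream so the effect is visible:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

// Sketch: ObjectOutputStream.flush() propagates through to the underlying
// stream. The BufferedOutputStream in the middle makes that observable.
public class FlushDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream buffered = new BufferedOutputStream(sink, 8192);
        ObjectOutputStream oout = new ObjectOutputStream(buffered);

        oout.writeObject("hello");
        // Everything is still sitting in the two buffers; a bare drain()
        // would leave it stuck in the BufferedOutputStream.
        System.out.println("before flush: " + sink.size()); // 0

        oout.flush();
        // flush() drained the ObjectOutputStream's buffer AND flushed the
        // BufferedOutputStream, so the bytes reached the sink.
        System.out.println("after flush: " + sink.size() + " bytes");
    }
}
```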
Flush will write the data to test.txt. However, drain is similar to flush but does not propagate the flush to the underlying stream.
So of course we must try-catch-finally any Closeable resource.
But I came across some code which sins as follows:
java.util.Properties myProps = ... reads & loads (and doesn't close the stream!)
myProps.store(new FileOutputStream(myFilePath), null);
System.exit(0);
java.util.Properties.store() flushes the underlying stream (the FileOutputStream)
Will this be enough?
Can you think of a scenario where the file won't be written? assuming that the method passes and no exception is being thrown in 'store'
It is enough in this specific case, but it is nevertheless very bad practice. The FileOutputStream should be closed, not merely flushed.
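For reference, a minimal sketch of the corrected pattern. try-with-resources guarantees the output stream is closed (not merely flushed), even if store() throws; a temp file stands in for myFilePath here:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Sketch: store properties with a guaranteed close, then load them back
// to show the file really was written.
public class StoreDemo {
    public static void main(String[] args) throws Exception {
        Path myFilePath = Files.createTempFile("props", ".properties");

        Properties myProps = new Properties();
        myProps.setProperty("greeting", "hello");
        try (OutputStream out = Files.newOutputStream(myFilePath)) {
            myProps.store(out, null); // store() flushes; close() is guaranteed
        }

        Properties loaded = new Properties();
        try (InputStream in = Files.newInputStream(myFilePath)) {
            loaded.load(in);
        }
        System.out.println(loaded.getProperty("greeting")); // hello
        Files.delete(myFilePath);
    }
}
```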
If you don't want open file references I would close the streams. Flushing only makes sure that all changes are written to file.
Has anyone ever seen an exception thrown when calling the close method on any Closeable object?
An IOException will be thrown on close if the final flush fails. Possible causes include:
the file system is full, or the user is over quota,
hard disc errors,
a file system was forcibly unmounted,
a remote file system is unavailable due to networking or other problems,
(possibly) a character encoding error if writing to the file via an OutputStreamWriter or similar,
a device error if the "file" is a device file,
a lost connection if the closeable is a network stream,
a broken pipe if the closeable is a pipe to external process,
and so on.
I have certainly seen some of these. Others are unlikely.
However, if the data you are writing is important then you should allow for close failing. For example, if your application is writing out a critical file and the file system fills up, your application had better notice this before it replaces the old copy of the file with the truncated version.
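A minimal sketch of that failure mode: the FailingStream below simulates a full disk by throwing on every write, so the buffered write "succeeds" and the error only surfaces from close():

```java
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch: close() propagates a failure from its final flush. FailingStream
// is a stand-in for a full disk, dead socket, or broken pipe.
public class CloseFailureDemo {
    static class FailingStream extends OutputStream {
        @Override
        public void write(int b) throws IOException {
            throw new IOException("simulated: no space left on device");
        }
    }

    public static void main(String[] args) {
        BufferedOutputStream out = new BufferedOutputStream(new FailingStream());
        try {
            out.write("important data".getBytes()); // lands in the buffer; no error yet
            out.close(); // the flush inside close() hits the failure
            System.out.println("close succeeded");
        } catch (IOException e) {
            // This is the case the answer warns about: the write appeared to
            // succeed, but the data never reached the device.
            System.out.println("close failed: " + e.getMessage());
        }
    }
}
```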
Yes, it's not that rare, IMHO, if you are working with anything other than local disk files.
close() works if at that point your Closeable is still valid and open. Many things like pipes, remote files, etc., can die prematurely.
In addition, I have seen code that ignores errors on open and write and still tries to close (e.g., in a finally block).
Not in terms of file I/O, but in terms of sockets: close will raise an IOException when the other side has aborted the connection. For example, when you fire an HTTP request for a (large) webpage and then immediately navigate away by clicking another link (while the page isn't finished loading), the server side will get an IOException (or a subclass like ClientAbortException in Tomcat servers and clones) when the output stream of the HTTP response is flushed/closed.
Old post and long since answered but here's a real example:
The following code will throw an exception when bufferedWriter.close() is called. This happens because the BufferedWriter's underlying Writer (the FileWriter) has already been closed, and when a BufferedWriter closes, it first attempts to flush any data in its buffer to its underlying Writer.
File newFile = new File("newFile.txt");
FileWriter fileWriter = new FileWriter(newFile);
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
bufferedWriter.write("Hello World");
fileWriter.close();
bufferedWriter.close();
Note: If there's no data in the buffer [comment out the write() line or add a flush() call] then no exception will be generated
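The usual way to avoid this ordering bug is to close only the outermost wrapper, e.g. via try-with-resources (a minimal sketch; the temp file is just for the demo):

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: try-with-resources closes the BufferedWriter, which flushes its
// buffer and then closes the FileWriter for us, in the right order.
public class SafeCloseDemo {
    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("demo", ".txt");
        try (BufferedWriter writer = new BufferedWriter(new FileWriter(path.toFile()))) {
            writer.write("Hello World");
        } // close() flushes the buffer, then closes the FileWriter

        System.out.println(Files.readString(path)); // Hello World
        Files.delete(path);
    }
}
```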
I haven't, but it's possible. Imagine if there's an OutputStream that for some reason hasn't written to the file yet. Well, calling close() will flush out the data, but if the file is locked - then an IOException would be raised.
Try yanking a USB drive with an open file on it. If it doesn't give an exception I'd be pretty surprised.
I guess you could try to force this by unplugging the disk your file is on. But on any Closable? I think it would be easy to get something that uses a socket to throw an exception upon closing.
I have - in my unit tests against mocks ;)
I have a client connecting to my server. The client sends some messages to the server which I do not care about, and I do not want to waste time parsing them if I'm not going to use them. All the I/O I'm using is plain java.io, not NIO.
If I create the input stream and just never read from it, can that buffer fill up and cause problems? If so, is there something I can do or a property I can set to have it just throw away data that it sees?
Now what if the server doesn't create the input stream at all? Will that cause any problems on the client/sending side?
Please let me know.
Thanks,
jbu
When you accept a connection from a client, you get an InputStream. If you don't read from that stream, the client's data will buffer up. Eventually, the buffer will fill up and the client will block when it tries to write more data. If the client writes all of its data before reading a response from the server, you will end up with a pretty classic deadlock situation. If you really don't care about the data from the client, just read (or call skip) until EOF and drop the data. Alternatively, if it's not a standard request/response (like HTTP) protocol, fire up a new thread that continually reads the stream to keep it from getting backed up.
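A minimal sketch of the "drain it in a thread" option, with a ByteArrayInputStream standing in for socket.getInputStream():

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: a background "drainer" reads a stream to EOF and discards the
// data, keeping the sender from blocking on a full buffer.
public class DrainDemo {
    public static Thread drainInBackground(InputStream in) {
        Thread t = new Thread(() -> {
            byte[] scratch = new byte[8192];
            try {
                while (in.read(scratch) != -1) {
                    // discard: we never look at the bytes
                }
            } catch (IOException ignored) {
                // the peer hung up; nothing left to drain
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for a socket stream; a real server would pass
        // socket.getInputStream() here.
        InputStream in = new ByteArrayInputStream(new byte[1_000_000]);
        drainInBackground(in).join();
        System.out.println("drained, available = " + in.available()); // 0
    }
}
```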
If you get no useful data from the client, what's the point of allowing it to connect?
I'm not sure of the implications of never reading from a buffer in Java -- I'd guess that eventually the OS would stop accepting data on that socket, but I'm not sure there.
Why don't you just call the skip method of your InputStream occasionally with a large number, to ensure that you discard the data?
InputStream in = ...;
byte[] buffer = new byte[4096]; // or whatever
while (in.read(buffer) != -1) {
    // discard the data
}
If you accept the connection, you should read the data. To tell you the truth, I have never seen (nor could I foresee) a situation where this (a server that ignores all data) could be useful.
I think you get the InputStream once you accept the request, so if you don't acknowledge that request, the underlying framework (e.g. Tomcat) will drop it after some time has elapsed.
Regards.