Do I need to close a ByteArrayInputStream? - java

Short question,
I saw in some old code where a ByteArrayInputStream was created like:
new BufferedReader(new InputStreamReader(new ByteArrayInputStream(somebytes)));
And then the BufferedReader is used to read out somebytes line by line.
All working fine, but I noticed that the BufferedReader is never closed.
This is all working in a long-running WebSphere application, the somebytes are not terribly big (200 KB at most), it is only invoked a few times a week, and we're not experiencing any apparent memory leaks. So I expect that all the objects are successfully garbage collected.
I always (once) learned that input/output streams need to be closed, in a finally block. Are byte streams the exception to this rule?
kind regards
Jeroen.

You don't have to close a ByteArrayInputStream: the moment it is no longer referenced by any variable, the garbage collector will release the stream and somebytes (assuming, of course, that they aren't referenced somewhere else).
However, it is always good practice to close every stream. After all, the implementation creating the stream might change in the future, and instead of raw bytes you could be reading from a file. Also, static code analysis tools like PMD or FindBugs (see comments) will most likely complain.
If you are bored with closing the stream and being forced to handle an IOException that can never actually be thrown, you can use IOUtils:
IOUtils.closeQuietly(stream);
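For completeness, a minimal sketch of that pattern (it assumes org.apache.commons.io.IOUtils from Apache Commons IO is on the classpath; somebytes stands in for your data):
InputStream stream = new ByteArrayInputStream(somebytes);
try {
    // ... read from the stream ...
} finally {
    // closeQuietly swallows the checked IOException that close() declares
    IOUtils.closeQuietly(stream);
}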

It is always good practice to close your readers. However, not closing a ByteArrayInputStream has far less potential for harm, because you are not holding on to a file, just a byte array in memory.

As @TomaszNurkiewicz mentioned, it's always good to close an opened stream. Another good option is to let the try block do it itself, using try-with-resources:
try (InputStream inputStream = new ByteArrayInputStream(bytes);
     Workbook workBook = new XSSFWorkbook(inputStream)) {
    // use the workbook
}
Here both Workbook and InputStream implement the Closeable interface, so once the try block completes (normally or abruptly), both resources are guaranteed to be closed.

Resources need to be closed in a finally block (or the equivalent). But where you just have some bytes, no, it doesn't matter. When writing, though, be careful to flush in the happy case.
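To illustrate the flushing point, a small sketch (variable names are made up): a writer wrapped around a ByteArrayOutputStream buffers output, so without a flush the bytes may never reach the underlying array:
ByteArrayOutputStream bytesOut = new ByteArrayOutputStream();
Writer writer = new BufferedWriter(new OutputStreamWriter(bytesOut));
writer.write("hello");
writer.flush(); // without this, "hello" may still be sitting in the BufferedWriter's buffer
byte[] result = bytesOut.toByteArray();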

Related

Can I omit try-catch?

I want to fetch an HTML page and read it with BufferedReader. So I use try-with-resources to open it, and handle the IOException this way:
try (BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()))) {
    // ...
} catch (IOException e) {
    throw e;
}
Is this a good pattern, to catch and instantly rethrow? And what if I omit the try altogether and declare that the function throws IOException? Would there be any potential memory leak then?
Much appreciate any advice!
A catch block is not required in a try-with-resources statement. You could write the following, which would mean exactly the same as your original code:
try (BufferedReader reader = new BufferedReader(
new InputStreamReader(url.openStream()))) {
// do something
}
So, you can leave out the catch block, if all you ever do there is immediately throw the same exception again.
You do want the try block however, so that the BufferedReader and underlying stream(s) are automatically closed at the end of the block.
Is this a good pattern to catch and instantly throw?
No, catching and immediately re-throwing the same exception does not add anything useful.
To add to @Jesper's excellent answer: you do want to include the try block so that the BufferedReader will be closed right away. If you don't do this, it'll eventually be closed when the object is garbage collected, so it isn't technically a resource leak in the sense that the resources would eventually be reclaimed; however, the "eventually" part is potentially problematic because there are no guarantees as to exactly when that'll happen. Thus, the bigger issue is whether the delay could create race conditions around a resource that needs to be reused.
I'm not very familiar with the implementation details of that exact class, so this is somewhat speculative, but one example of an issue you can run into with some classes that perform network calls if you fail to return resources to the operating system promptly is port exhaustion.
By way of another example, if you are modifying a file, the file could remain locked until the GC happens to release the file lock by cleaning up the relevant object.
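Putting the advice together, a minimal sketch (the method name and reading logic are illustrative, not from the question): the method declares throws IOException, keeps the try-with-resources, and omits the catch entirely:
String fetchPage(URL url) throws IOException {
    StringBuilder page = new StringBuilder();
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(url.openStream()))) {
        String line;
        while ((line = reader.readLine()) != null) {
            page.append(line).append('\n');
        }
    } // the reader and underlying stream are closed here, even if readLine() throws
    return page.toString();
}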

Do I need to close InputStreamReader even if InputStream should remain open?

The InputStream is passed as a parameter from somewhere, where it will be further processed and then closed. So I don't want to close the InputStream here. Consider the following code:
void readInputStream(final InputStream inputStream) {
final BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream));
String line;
while ((line = bufferedReader.readLine()) != null) {
// do my thing
}
}
If I close the BufferedReader and/or the InputStreamReader, then the underlying InputStream will be closed as well, according to another Stackoverflow post.
My question: Do Readers need to be closed, even if the underlying InputStream is closed somewhere else? Can I get a memory leak by not closing Readers?
Do readers need to be closed, even if the underlying InputStream is closed somewhere else?
No, they absolutely don't need to be in that scenario. But it is generally a good idea to close them anyway.
Can I get a memory leak by not closing readers?
No, there is no memory leak, assuming that the Reader itself becomes unreachable once you have finished with it. And besides, a Reader doesn't typically use a lot of memory.
The more important question is whether you can get a resource leak by not closing a Reader. The answer is ... it depends.
If you can guarantee that the underlying InputStream will always be closed somewhere else in the application, then that takes care of possible resource leaks.
If you can't guarantee that, then there is a risk of a resource leak. The underlying OS-level file descriptors are a limited resource in (for example) Linux. If a JVM doesn't close them, they can run out, and certain system calls will start to fail unexpectedly.
But if you do close the Reader then the underlying InputStream will be closed.
Calling close() more than once on an InputStream is harmless, and costs almost nothing.
The only case where you shouldn't close the Reader is when it would be wrong to close the underlying InputStream. For example, if you close a SocketInputStream the rest of the application may not be able to reestablish the network connection. Likewise, the InputStream associated with System.in usually cannot be reopened.
In that case, it is actually safe to allow the Reader you create in your method to be garbage collected. Unlike InputStream, a typical Reader class doesn't override Object::finalize() to close its source of data.
@Pshemo brought up an important point about system design.
If you are accepting an InputStream as an argument, then it may be wrong to wrap it with a local Reader ... especially a BufferedReader. A BufferedReader is liable to read-ahead on the stream. If the stream is going to be used by the caller after your method returns, then any data that has been read into the buffer but not consumed by this method is liable to be lost.
A better idea would be for the caller to pass a Reader. Alternatively, this method should be documented as taking ownership of the InputStream. And in that case, it should always close() it.
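A sketch of those two alternatives (method names and parameter choices are illustrative):
// Variant 1: the caller supplies the Reader (ideally already buffered) and keeps ownership of it
void readLines(BufferedReader reader) throws IOException {
    String line;
    while ((line = reader.readLine()) != null) {
        // do my thing
    }
}

// Variant 2: this method is documented as taking ownership of the stream, so it always closes it
void consumeInputStream(InputStream inputStream) throws IOException {
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))) {
        String line;
        while ((line = reader.readLine()) != null) {
            // do my thing
        }
    }
}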
Yes, Readers need to be closed. Use a proxy, e.g. CloseShieldInputStream, to prevent the passed parameter from being closed.
void readInputStream(InputStream inputStream) throws IOException {
try (var bufferedReader = new BufferedReader(new InputStreamReader(
new CloseShieldInputStream(inputStream)))) {
String line;
while ((line = bufferedReader.readLine()) != null) {
// do my thing
}
}
}
JIC: similar to the input shield, Apache Commons I/O also provides an output shield, CloseShieldOutputStream, to solve the analogous problem of closing a wrapping output stream.
For more detailed considerations, please refer to the original answer. Credits to @stephen-c

Java Best Practices - Need to write to file constantly - BufferedWriter?

So I have a Java process that needs to constantly append a new line to a file every 100 milliseconds. I am currently using BufferedWriter for this, but from what I have read, the BufferedWriter object should always be .close()'d when it's finished.
If I did this, I would have to create a new BufferedWriter object every few milliseconds, which is not ideal. Are there any issues with creating one static BufferedWriter, and just .flush()'ing it after every write?
Finally, is BufferedWriter the best class to use for this, if performance is a concern? Are there any viable alternatives?
Thanks!
The BufferedWriter should be closed when it's finished. If you're doing something like logging, it's entirely acceptable to hold an open writer in the object responsible for the logging and then close it at the end of the run (or whenever you roll over to a new log file).
What you shouldn't do is simply open the writer and then discard the reference without closing it; this can leak resources, and in the case of something with buffering, might lose the last part of the output.
Are there any issues with creating one static BufferedWriter, and just .flush()'ing it after every write?
There is nothing wrong with a single, long-lived BufferedWriter. In fact it is a good idea. (Whether you use a static or something else is a different issue ... but that design decision does not impact on functionality and performance.)
Calling flush after each write is more questionable from a performance perspective. It will cause your application to make a lot of write syscalls ... which is expensive. I would only do that if you need the logging to be written immediately. The alternative is to flush on a timer ... or not flush at all and rely on the (final) close of the BufferedWriter to flush any outstanding data.
But either way, a long-lived BufferedWriter that you flush is likely to be better than creating, writing and closing lots of BufferedWriter objects.
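As a concrete sketch of that design (the class name and flush policy are made up; flush-per-write is shown for the case where each line must hit the file immediately):
import java.io.BufferedWriter;
import java.io.Closeable;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

class LineLogger implements Closeable {
    private final BufferedWriter writer;

    LineLogger(Path file) throws IOException {
        // open once, in append mode, and keep the writer for the life of the application
        this.writer = Files.newBufferedWriter(file,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    synchronized void log(String line) throws IOException {
        writer.write(line);
        writer.newLine();
        writer.flush(); // drop this (or flush on a timer) if immediate visibility isn't needed
    }

    @Override
    public void close() throws IOException {
        writer.close(); // flushes any remaining buffered data
    }
}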

Have you ever seen a Java File close() throw an exception?

Has anyone ever seen an exception thrown when calling close method on any closable object?
An IOException will be thrown on close if the final flush fails. Possible causes include:
the file system is full, or the user is over quota,
hard disc errors,
a file system was forcibly unmounted,
a remote file system is unavailable due to networking or other problems,
(possibly) a character encoding error if writing to the file via an OutputStreamWriter or similar,
a device error if the "file" is a device file,
a lost connection if the closeable is a network stream,
a broken pipe if the closeable is a pipe to external process,
and so on.
I have certainly seen some of these. Others are unlikely.
However, if the data you are writing is important, then you should allow for close failing. For example, if your application is writing out a critical file and the file system fills up, your application had better notice this before it replaces the old copy of the file with the truncated version.
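One common way to act on that advice is to write to a temporary file, let close() surface any final-flush failure, and only then replace the old copy. A minimal sketch (the paths and the data variable are illustrative):
Path target = Path.of("critical.dat");
Path temp = Path.of("critical.dat.tmp");
try (BufferedWriter writer = Files.newBufferedWriter(temp)) {
    writer.write(data);
} // close() throws here if the final flush fails, e.g. because the disk is full
// only reached if writing and closing both succeeded:
Files.move(temp, target, StandardCopyOption.REPLACE_EXISTING);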
Yes, it's not that rare, IMHO, if you are working with anything other than local disk files.
close() works only if, at that point, your closeable is still valid and open. Many things like pipes, remote files, etc., can die prematurely.
In addition, I have seen code that ignores errors on open and write and still tries to close (e.g., in a finally block).
Not in terms of file I/O, but in terms of sockets: close will raise an IOException when the other side has aborted the connection. For example, when you fire an HTTP request for a (large) webpage and then immediately navigate away by clicking another link (while the page isn't finished loading), the server side will get an IOException (or a subclass like ClientAbortException in Tomcat servers and clones) when the output stream of the HTTP response is flushed/closed.
Old post and long since answered but here's a real example:
The following code will throw an exception when bufferedWriter.close() is called. This happens because the BufferedWriter's underlying Writer (the FileWriter) has already been closed, and when a BufferedWriter closes, it first attempts to flush any data in its buffer to its underlying Writer.
File newFile = new File("newFile.txt");
FileWriter fileWriter = new FileWriter(newFile);
BufferedWriter bufferedWriter = new BufferedWriter(fileWriter);
bufferedWriter.write("Hello World");
fileWriter.close();     // closes the underlying writer first
bufferedWriter.close(); // throws: the buffered "Hello World" cannot be flushed to the closed FileWriter
Note: If there's no data in the buffer [comment out the write() line or add a flush() call] then no exception will be generated
I haven't, but it's possible. Imagine an OutputStream that for some reason hasn't written to the file yet. Calling close() will flush out the data, but if the file is locked, an IOException would be raised.
Try yanking a USB drive with an open file on it. If it doesn't give an exception I'd be pretty surprised.
I guess you could try to force this by unplugging the disk your file is on. But on any Closable? I think it would be easy to get something that uses a socket to throw an exception upon closing.
I have - in my unit tests against mocks ;)

FindBugs: "may fail to close stream" - is this valid in case of InputStream?

In my Java code, I start a new process, then obtain its input stream to read it:
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
FindBugs reports an error here:
may fail to close stream
Pattern id: OS_OPEN_STREAM, type: OS, category: BAD_PRACTICE
Must I close the InputStream of another process? And what's more, according to its Javadoc, InputStream#close() does nothing. So is this a false positive, or should I really close the input stream of the process when I'm done?
In this case, you want to close() the Reader, which will close its underlying streams. Yes, it's always good practice to close streams, even if at the moment you know the implementation you're looking at doesn't do anything (though, in fact, it does here!). What if that changed later?
FindBugs is only there to warn about possible errors; it can't always know for sure.
Finally, yes: your Java process owns the Process object you spawned and its streams. You most definitely need to close the input stream, and the output stream as well. Nobody else is using them, and it's important to do such things to avoid OS-related stream funny business.
InputStream is an abstract class - just because its implementation does nothing doesn't mean that the actual type of object returned by process.getInputStream() doesn't.
It's possible that failing to close the input stream in this particular case would do no harm - but I personally wouldn't count on it. Close it like you'd close any other input stream. Aside from anything else, that makes your code more robust in case you ever decide to change it to read from something else - it would be all too easy to (say) read from a file instead, and not notice that you're not closing the FileInputStream.
I think it's always a good practice to close all the streams you open, preferably in the finally block. Even if close() does nothing, as the Javadoc says, why not call it? It does no harm.
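For instance, a minimal sketch of reading a spawned process's output with try-with-resources (the command is illustrative); closing the Reader closes the underlying stream, which also satisfies FindBugs:
Process process = new ProcessBuilder("ls", "-l").start(); // example command
try (BufferedReader reader = new BufferedReader(
        new InputStreamReader(process.getInputStream()))) {
    String line;
    while ((line = reader.readLine()) != null) {
        System.out.println(line);
    }
}
process.waitFor(); // also handle/declare InterruptedException in real code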
