Java Best Practices - Need to write to file constantly - BufferedWriter?

So I have a Java process that needs to constantly append a new line to a file every 100 milliseconds. I am currently using BufferedWriter for this, but from what I have read, the BufferedWriter object should always be .close()'d when it's finished.
If I did this, I would have to create a new BufferedWriter object every few milliseconds, which is not ideal. Are there any issues with creating one static BufferedWriter, and just .flush()'ing it after every write?
Finally, is BufferedWriter the best class to use for this, if performance is a concern? Are there any viable alternatives?
Thanks!

The BufferedWriter should be closed when it's finished. If you're doing something like logging, it's entirely acceptable to hold an open writer in the object responsible for the logging and then close it at the end of the run (or whenever you roll over to a new log file).
What you shouldn't do is simply open the writer and then discard the reference without closing it; this can leak resources, and in the case of something with buffering, might lose the last part of the output.

Are there any issues with creating one static BufferedWriter, and just .flush()'ing it after every write?
There is nothing wrong with a single, long-lived BufferedWriter. In fact it is a good idea. (Whether you use a static field or something else is a different issue ... but that design decision does not impact functionality or performance.)
Calling flush after each write is more questionable from a performance perspective. It will cause your application to make a lot of write syscalls ... which is expensive. I would only do that if you need the logging to be written immediately. The alternative is to flush on a timer ... or not flush at all, and rely on the (final) close of the BufferedWriter to flush any outstanding data.
But either way, a long-lived BufferedWriter that you flush is likely to be better than creating, writing and closing lots of BufferedWriter objects.
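As a rough illustration of that advice, here is a minimal sketch of a long-lived writer that appends a line and flushes after each write; the class name, method names and append mode are assumptions made up for the example, not anything from the question.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class PeriodicLogger implements AutoCloseable {
    // One writer for the lifetime of the logger.
    private final BufferedWriter writer;

    public PeriodicLogger(String file) throws IOException {
        writer = new BufferedWriter(new FileWriter(file, true)); // true = append mode
    }

    // Called every 100 ms (or on whatever schedule the caller uses).
    public void writeLine(String line) throws IOException {
        writer.write(line);
        writer.newLine();
        writer.flush(); // forces a write syscall; drop this if per-line durability is not needed
    }

    @Override
    public void close() throws IOException {
        writer.close(); // flushes any remaining buffered data
    }
}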

Related

Reusing one BufferedWriter instance for one file along application lifetime

I have a multi-threaded application where some of my threads read data from a queue and write it to a file. The problem is that I am not sure whether I should create a new BufferedWriter instance every time one of my threads reads a value from the queue and writes it to the same file, or whether I can have just one BufferedWriter instance and flush() every time. One problem with the second solution is that I would have to detect when to close the BufferedWriter, without being able to use the Java 7 try-with-resources idiom for closing resources.
Does the second solution solve some performance issues?
What are best practices on this?
I'd recommend using one BufferedWriter for writing to the file that is shared by all threads. For the sake of performance, the BufferedWriter should be kept open until the application decides that there is no more output. (Opening and closing files is relatively expensive.)
You also need to have the application threads use some kind of locking (e.g. synchronize on the BufferedWriter) to ensure that they don't try to write at the same time. The BufferedWriter class is not thread-safe.
The try/finally or try-with-resources approach to managing file resources is important in cases where you are opening lots of files. If you are only dealing with one file, and it needs to be open for the entire duration of the application, then resource management is not so critical. (But you do need to make sure that you either close or flush before the application exits.)
But I think BufferedWriter is thread-safe, because the underlying implementation of the write() methods uses synchronized blocks
Well in that sense, yes it is. However, that assumes that when your thread writes data, it does it in a single write(...) call.
Note also that BufferedWriter is not specified to be thread-safe, even if it is thread-safe in the current implementation.
A BufferedWriter should only ever lead to one file/Writer/OutputStream; if you have many targets you will need many buffers. If you want to write to the same file from multiple threads you will need to synchronize somewhere; you can synchronize on the BufferedWriter itself if you don't have any higher-level construct than the character stream. If you synchronize on the BufferedWriter, you will not need to call flush() at the end of each access.
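A minimal sketch of that shared-writer arrangement, assuming all threads funnel their output through one method and synchronize on the writer itself (the class and method names are invented for the example):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class SharedFileLogger {
    private final BufferedWriter writer;

    public SharedFileLogger(String file) throws IOException {
        writer = new BufferedWriter(new FileWriter(file, true));
    }

    // Each queue-consuming thread calls this; the lock keeps lines from interleaving.
    public void writeLine(String line) throws IOException {
        synchronized (writer) {
            writer.write(line);
            writer.newLine();
        }
    }

    // Called once, when the application decides there is no more output.
    public void close() throws IOException {
        synchronized (writer) {
            writer.close(); // also flushes anything still buffered
        }
    }
}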

Flush on Java application exit

Suppose a Java application writes to a file using BufferedWriter API (and does not call flush after every write). I guess that if the application exits with System.exit the buffer is not flushed and so the file might be corrupted.
Suppose also that the application component, which decides to exit, is not aware of the component, which writes to the file.
What is the easiest and correct way to solve the "flush problem" ?
You may use the Runtime.addShutdownHook method, which can be used to add a JVM shutdown hook. This is basically an unstarted Thread, which is executed on shutdown of the Java Virtual Machine.
So if you have a handle to the file available in that thread, then you can try to close the stream and flush the output.
Note: Although this seems feasible, I believe there will be implementation challenges, such as making sure your file handle is not already stale when the shutdown hook is called. So the better approach is to close your streams gracefully using finally blocks in the code where the file operations are done.
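For illustration, a minimal, self-contained sketch of registering such a hook; the file name and the writes are made up for the example:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class ShutdownFlushExample {
    public static void main(String[] args) throws IOException {
        BufferedWriter writer = new BufferedWriter(new FileWriter("app.log", true)); // placeholder file name

        // The hook runs on normal JVM shutdown, including System.exit(...).
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                writer.close(); // close() flushes any buffered output first
            } catch (IOException e) {
                e.printStackTrace(); // little else can be done this late in shutdown
            }
        }));

        writer.write("some output");
        writer.newLine();
        System.exit(0); // the buffered line above is still flushed by the hook
    }
}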
You can add a shutdown hook, but you need to have a reference to each of these BufferedWriter or other Flushable or Closeable objects. You won't gain anything from it. You should perform close() and flush() directly in the code that is manipulating the object.
Think of the Information Expert GRASP pattern, the code manipulating the BufferedWriter is the place that has the information about when an operation is finished and should be flushed, so that's where that logic should go.
If some application component is calling System.exit when things aren't done, I would consider that an abnormal exit; it should not return 0 and therefore shouldn't guarantee that streams are flushed.

Do I need to close a ByteArrayInputStream?

Short question,
I saw in some old code where a ByteArrayInputStream was created like:
new BufferedReader(new InputStreamReader(new ByteArrayInputStream(somebytes)));
And then the BufferedReader is used to read out somebytes line by line.
All working fine, but I noticed that the BufferedReader is never closed.
This is all working in a long-running WebSphere application, the somebytes are not terribly big (200k at most), it is only invoked a few times a week and we're not experiencing any apparent memory leaks. So I expect that all the objects are successfully garbage collected.
I always (once) learned that input/output streams need to be closed, in a finally block. Are byte streams the exception to this rule?
kind regards
Jeroen.
You don't have to close a ByteArrayInputStream; the moment it is no longer referenced by any variable, the garbage collector will release the stream and somebytes (assuming, of course, they aren't referenced somewhere else).
However, it is always good practice to close every stream; in fact, the implementation creating the stream might change in the future, and instead of raw bytes you'll be reading a file. Also, static code analysis tools like PMD or FindBugs (see comments) will most likely complain.
If you are tired of closing the stream and being forced to handle the impossible IOException, you might use IOUtils:
IOUtils.closeQuietly(stream);
It is always good practice to close your readers. However, not closing a ByteArrayInputStream does not have as heavy a potential negative effect, because you are not accessing a file, just a byte array in memory.
As @TomaszNurkiewicz mentioned, it's always good to close an opened stream. Another good way is to let the try block do it itself, using try-with-resources:
try (InputStream inputStream = new ByteArrayInputStream(bytes);
     Workbook workBook = new XSSFWorkbook(inputStream)) {
    // use workBook here
}
Here Workbook and InputStream both implement the Closeable interface, so once the try block completes (normally or abruptly), the streams will be closed for sure.
Resources need to be closed in a finally block (or equivalent). But where you just have some bytes, no, it doesn't matter. Although when writing, be careful to flush in the happy case.
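For completeness, a small sketch of reading such a byte array line by line with try-with-resources, so the reader is closed regardless; the sample bytes and charset are assumptions for the example:

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ReadBytesLineByLine {
    public static void main(String[] args) throws IOException {
        byte[] somebytes = "first line\nsecond line\n".getBytes(StandardCharsets.UTF_8);

        // try-with-resources closes the reader (and the streams it wraps) even if reading fails.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new ByteArrayInputStream(somebytes), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}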

Better alternative for PipedReader/PipedWriter?

I need to have a buffered char stream, into which I write in one thread and from which I read in another thread. Right now I'm using PipedReader and PipedWriter for it, but those classes cause a performance problem: PipedReader does a wait(1000) when its internal buffer is empty, which causes my application to lag visibly.
Would there be some library which does the same thing as PipedReader/PipedWriter, but with better performance? Or will I have to implement my own wheels?
The problem was that when something is written to the PipedWriter, it does not automatically notify the PipedReader that there is some data to read. When one tries to read from the PipedReader and the buffer is empty, the PipedReader will loop and wait using a wait(1000) call until the buffer has some data.
The solution is to always call PipedWriter.flush() after writing something to the pipe. All that flush does is call notifyAll() on the reader.
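The original snippet from the question is not reproduced here, but the general shape of the fix is simply a flush after each write, roughly like this self-contained sketch:

import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;

public class PipedFlushExample {
    public static void main(String[] args) throws IOException {
        PipedWriter writer = new PipedWriter();
        PipedReader reader = new PipedReader(writer);

        Thread producer = new Thread(() -> {
            try {
                writer.write("hello");
                writer.flush(); // wakes the reader immediately instead of leaving it in wait(1000)
                writer.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        producer.start();

        int c;
        while ((c = reader.read()) != -1) {
            System.out.print((char) c);
        }
        reader.close();
    }
}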
(To me the PipedReader/PipedWriter implementation looks very much like a case of premature optimization - why not notifyAll on every write? Also, the readers wait in an active loop, waking up every second, instead of waking only when there is something to read. The code also contains some TODO comments noting that the reader/writer thread detection it does is not sophisticated enough.)
This same problem also appears in PipedOutputStream. In my current project calling flush() manually is not possible (I can't modify Commons IO's IOUtils.copy()), so I fixed it by creating low-latency wrappers for the pipe classes. They work much better than the original classes. :-)
It should be fairly easy to wrap a char stream API around BlockingQueue.
I must say, however, it seems quite perverse that PipedReader would use polling to wait for data. Is this documented somewhere, or did you discover it for yourself somehow?
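A rough sketch of the BlockingQueue idea suggested above, stripped down to the core (no end-of-stream signalling or error handling, and the class name is invented):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A crude char "pipe": one thread puts characters, another takes them.
public class QueueCharPipe {
    private final BlockingQueue<Character> queue = new ArrayBlockingQueue<>(8192);

    public void write(char c) throws InterruptedException {
        queue.put(c); // blocks if the buffer is full
    }

    public char read() throws InterruptedException {
        return queue.take(); // blocks until a character arrives, with no polling
    }
}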
@Esko Luontola, I've been reading through your code in the sbt package to try to understand what you are doing. It seems like you want to start up a Process and pass input to it, and have the result of the action be teed to different places. Is this at all correct?
I would try modifying the main loop in ReaderToWriterCopier so that instead of doing a read() - a blocking operation that apparently causes polling when a PipedReader is involved - you explicitly wait for the Writer to flush. The documentation is clear that flush causes any Readers to be notified.
I'm not sure how to run your code so I can't get deeper into it. Hope this helps.
I implemented something a little similar, and asked a question whether anyone else had any better thought out and tested code.

FindBugs: "may fail to close stream" - is this valid in case of InputStream?

In my Java code, I start a new process, then obtain its input stream to read it:
BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()));
FindBugs reports an error here:
may fail to close stream
Pattern id: OS_OPEN_STREAM, type: OS, category: BAD_PRACTICE
Must I close the InputStream of another process? And what's more, according to its Javadoc, InputStream#close() does nothing. So is this a false positive, or should I really close the input stream of the process when I'm done?
In this case, you want to close() the Reader, which will close its underlying streams. Yes, it's always good practice to close streams, even if at the moment you know the implementation you're looking at doesn't do anything (though, in fact, it does here!). What if that changed later?
FindBugs is only there to warn about possible errors; it can't always know for sure.
Finally, yes, your Java process owns the process (and Process object) you spawned. You most definitely need to close its input stream, and the output stream too. Nobody else is using them, and it's important to do such things to avoid OS-related stream funny business.
InputStream is an abstract class - just because its close() implementation does nothing doesn't mean that the close() of the actual type of object returned by process.getInputStream() does nothing.
It's possible that failing to close the input stream in this particular case would do no harm - but I personally wouldn't count on it. Close it like you'd close any other input stream. Aside from anything else, that makes your code more robust in case you ever decide to change it to read from something else - it would be all too easy to (say) read from a file instead, and not notice that you're not closing the FileInputStream.
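A minimal sketch of reading and closing a process's output with try-with-resources; the command is just an example:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ReadProcessOutput {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process process = new ProcessBuilder("ls", "-l").start(); // example command

        // Closing the reader also closes the process's underlying input stream.
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);
            }
        }
        process.waitFor();
    }
}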
I think it's always good practice to close all the streams you open, preferably in the finally block. Since close() does nothing here, as the Javadoc says, why not call it? It does no harm.
