Is there a clean way to give Java objects scope with destruction? - java

Imagine that MyOpenedFile is something wrapping file with opened streams. Then suppose this code:
// method in a Util class
static void safeClose(MyOpenedFile f) {
    if (f != null) {
        try {
            f.close();
        } catch (IOException ex) { /* add logging or console output */ }
    }
}
Actual method for the question:
void doSomeFileOperation(...) throws IOException, ... {
    MyOpenedFile f1 = null;
    MyOpenedFile f2 = null;
    try {
        /* method's "business logic" code begins */
        f1 = new MyOpenedFile(...);
        // do stuff
        f2 = new MyOpenedFile(...);
        // do stuff
        f1.close(); f1 = null;
        // do stuff with f1 closed
        f2.close(); f2 = null;
        // do stuff with f2 closed
        /* method's "business logic" code ends */
    } finally {
        Util.safeClose(f1); f1 = null;
        Util.safeClose(f2); f2 = null;
    }
}
Now this is quite messy and especially error-prone (some code in the finally block might be very hard to get called in unit tests, for example). In C++, for example, a destructor would take care of the cleanup (either called by a scoped pointer's destructor or invoked directly) and the code would be much cleaner.
So, is there better/nicer/cleaner way to wrap above piece of business logic code, so that any exceptions get propagated but both files f1 and f2 get closed (or at least close is attempted on both, even if it fails)?
Also answers pointing to any open source libraries such as Apache Commons, providing nice wrappers are welcome.

A File is just a wrapper around a String holding a file name, which may or may not refer to an existing file. It is stateless, so you don't need to close it.
A resource you do need to close is a FileInputStream or BufferedReader, and you can close these implicitly with ARM (try-with-resources) in Java 7:
try(BufferedReader br = new BufferedReader(new FileReader(file))) {
}
This will close br when the block exits.
http://www.oracle.com/technetwork/articles/java/trywithresources-401775.html

Take a look at The try-with-resources Statement which will close resources after the try-block ends.
The File class you use does not seem to be java.io.File, because java.io.File has no close() method. In that case, make sure your own class implements Closeable (which extends AutoCloseable) so it works with ARM.
try (FileInputStream f1 = new FileInputStream("test1.txt");
     FileInputStream f2 = new FileInputStream("test2.txt")) {
    // Some code
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}

You don't need to close File objects (which are representations of files on the file system), as mentioned here:
Do I need to close files I perform File.getName() on?
I assume you are asking more about the File Streams/Readers?
In which case java 7 has a nice new feature:
http://www.vineetmanohar.com/2011/03/java-7-try-with-auto-closable-resources/
If you are working on an older version of java I'd just keep it simple with this:
void doSomeFileOperation(...) throws IOException, ... {
    FileInputStream f1 = null;
    FileInputStream f2 = null;
    try {
        // do stuff
    } finally {
        Util.safeClose(f1);
        Util.safeClose(f2);
    }
}

An option that comes immediately to mind: separate the code that handles files from the code that does the processing. That way you can encapsulate the nasty code that does the open, close, and exception handling.
The other point is that the sample you have does a lot of extra, unneeded steps:
void doSomeFileOperation(...) throws IOException, ... {
    File f1 = null;
    File f2 = null;
    try {
        f1 = new File(...);
        f2 = new File(...);

        // callback to another class / method that does the real work
    } finally {
        Util.safeClose(f1);
        Util.safeClose(f2);
    }
}
You don't need to set the File instances to null; if you try to use them after closing, you'll get an exception anyway.
I do wonder what File object you're using, though. The standard File class in Java doesn't have a close() method.
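A sketch of that separation using a callback, so the open/close/exception handling lives in one place. The names FileWork and withOpenFile are made up for illustration, and InputStream stands in for whatever resource you actually open:
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical callback type: the caller supplies only the business logic.
interface FileWork {
    void run(InputStream in) throws IOException;
}

class FileTemplate {
    // The open/close plumbing is written once, here.
    static void withOpenFile(String path, FileWork work) throws IOException {
        FileInputStream in = new FileInputStream(path);
        try {
            work.run(in);
        } finally {
            try {
                in.close();
            } catch (IOException ignored) {
                // log if desired; don't mask an exception thrown by work.run()
            }
        }
    }
}
Callers then write something like FileTemplate.withOpenFile("data.txt", in -> { /* read from in */ }); and never touch close() themselves.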

Related

Close file for reading before start writing

I wrote a method which replaces some lines in a file (the replacement itself is not the purpose of this question). Everything works fine, but I'm wondering whether the file is closed for reading when I start writing to it. I'd like to ensure that my solution is safe. This is what I've done:
private void replaceCodeInTranslationFile(File file, String code) {
    if (file.exists()) {
        try (Stream<String> lines = Files.lines(Paths.get(file.getAbsolutePath()), Charset.defaultCharset())) {
            String output = this.getLinesWithUpdatedCode(lines, code);
            this.replaceFileWithContent(file, output); // is it safe?
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
Method replaceFileWithContent() looks like this:
private void replaceFileWithContent(File file, String content) throws IOException {
    try (FileOutputStream fileOut = new FileOutputStream(file.getAbsolutePath())) {
        fileOut.write(content.getBytes(Charset.defaultCharset()));
    }
}
I think try-with-resources closes the resource at the end of the statement, so this code could potentially be a source of problems. Am I correct?
Read/write lock implementations may be helpful for this kind of scenario, to ensure thread-safe operations.
Refer to http://tutorials.jenkov.com/java-concurrency/read-write-locks.html for more.
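A minimal sketch of that read/write-lock idea around the method from the question (the wrapper class and the guarded read method are assumptions for illustration; only the locking pattern is the point):
import java.io.File;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TranslationFileUpdater {
    // Readers share the read lock; the rewrite takes the exclusive write lock.
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    void replaceGuarded(File file, String code) {
        rwLock.writeLock().lock(); // no reader can observe a half-written file
        try {
            // replaceCodeInTranslationFile(file, code); // the method from the question
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    String readGuarded(File file) {
        rwLock.readLock().lock();
        try {
            return ""; // read and return the file content here
        } finally {
            rwLock.readLock().unlock();
        }
    }
}
Note that this only coordinates threads within one JVM; it does not protect the file against other processes.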

Is it necessary to close each nested OutputStream and Writer separately?

I am writing a piece of code:
OutputStream outputStream = new FileOutputStream(createdFile);
GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(gzipOutputStream));
Do I need to close every stream or writer like the following?
gzipOutputStream.close();
bw.close();
outputStream.close();
Or will just closing the last stream be fine?
bw.close();
Assuming all the streams get created okay, yes, just closing bw is fine with those stream implementations; but that's a big assumption.
I'd use try-with-resources (tutorial) so that any issues constructing the subsequent streams that throw exceptions don't leave the previous streams hanging, and so you don't have to rely on the stream implementation having the call to close the underlying stream:
try (
    OutputStream outputStream = new FileOutputStream(createdFile);
    GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
    OutputStreamWriter osw = new OutputStreamWriter(gzipOutputStream);
    BufferedWriter bw = new BufferedWriter(osw)
) {
    // ...
}
Note you no longer call close at all.
Important note: To have try-with-resources close them, you must assign the streams to variables as you open them; you cannot use nesting. If you use nesting, an exception during construction of one of the later streams (say, GZIPOutputStream) will leave any stream constructed by the nested calls inside it open. From JLS §14.20.3:
A try-with-resources statement is parameterized with variables (known as resources) that are initialized before execution of the try block and closed automatically, in the reverse order from which they were initialized, after execution of the try block.
Note the word "variables" (my emphasis).
E.g., don't do this:
// DON'T DO THIS
try (BufferedWriter bw = new BufferedWriter(
        new OutputStreamWriter(
            new GZIPOutputStream(
                new FileOutputStream(createdFile))))) {
    // ...
}
...because an exception from the GZIPOutputStream(OutputStream) constructor (which says it may throw IOException, and writes a header to the underlying stream) would leave the FileOutputStream open. Since some resources have constructors that may throw and others don't, it's a good habit to just list them separately.
We can double-check our interpretation of that JLS section with this program:
public class Example {
    private static class InnerMost implements AutoCloseable {
        public InnerMost() throws Exception {
            System.out.println("Constructing " + this.getClass().getName());
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
        }
    }

    private static class Middle implements AutoCloseable {
        private AutoCloseable c;

        public Middle(AutoCloseable c) {
            System.out.println("Constructing " + this.getClass().getName());
            this.c = c;
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
            c.close();
        }
    }

    private static class OuterMost implements AutoCloseable {
        private AutoCloseable c;

        public OuterMost(AutoCloseable c) throws Exception {
            System.out.println("Constructing " + this.getClass().getName());
            throw new Exception(this.getClass().getName() + " failed");
        }

        @Override
        public void close() throws Exception {
            System.out.println(this.getClass().getName() + " closed");
            c.close();
        }
    }

    public static final void main(String[] args) {
        // DON'T DO THIS
        try (OuterMost om = new OuterMost(
                new Middle(
                    new InnerMost()
                )
            )
        ) {
            System.out.println("In try block");
        }
        catch (Exception e) {
            System.out.println("In catch block");
        }
        finally {
            System.out.println("In finally block");
        }
        System.out.println("At end of main");
    }
}
...which has the output:
Constructing Example$InnerMost
Constructing Example$Middle
Constructing Example$OuterMost
In catch block
In finally block
At end of main
Note that there are no calls to close there.
If we fix main:
public static final void main(String[] args) {
    try (
        InnerMost im = new InnerMost();
        Middle m = new Middle(im);
        OuterMost om = new OuterMost(m)
    ) {
        System.out.println("In try block");
    }
    catch (Exception e) {
        System.out.println("In catch block");
    }
    finally {
        System.out.println("In finally block");
    }
    System.out.println("At end of main");
}
then we get the appropriate close calls:
Constructing Example$InnerMost
Constructing Example$Middle
Constructing Example$OuterMost
Example$Middle closed
Example$InnerMost closed
Example$InnerMost closed
In catch block
In finally block
At end of main
(Yes, two calls to InnerMost#close is correct; one is from Middle, the other from try-with-resources.)
You can close just the outermost stream. In fact, you don't need to retain references to all the wrapped streams, and you can use Java 7 try-with-resources:
try (BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(
        new GZIPOutputStream(new FileOutputStream(createdFile))))) {
    // write to the buffered writer
}
If you subscribe to YAGNI (you aren't gonna need it), you should only be adding code you actually need. You shouldn't be adding code you imagine you might need but that in reality doesn't do anything useful.
Take this example and imagine what could possibly go wrong if you didn't do this, and what the impact would be:
try (
    OutputStream outputStream = new FileOutputStream(createdFile);
    GZIPOutputStream gzipOutputStream = new GZIPOutputStream(outputStream);
    OutputStreamWriter osw = new OutputStreamWriter(gzipOutputStream);
    BufferedWriter bw = new BufferedWriter(osw)
) {
    // ...
}
Let's start with FileOutputStream, which calls open to do all the real work.
/**
 * Opens a file, with the specified name, for overwriting or appending.
 * @param name name of file to be opened
 * @param append whether the file is to be opened in append mode
 */
private native void open(String name, boolean append)
    throws FileNotFoundException;
If the file is not found, there is no underlying resource to close, so closing it won't make any difference. If the file exists but cannot be opened, it still throws a FileNotFoundException before any resource is acquired. So there is nothing to be gained by trying to close the resource from this line alone.
The reason you need to close the file is when the file is opened successfully, but you later get an error.
Let's look at the next stream, GZIPOutputStream.
There is code which can throw an exception:
private void writeHeader() throws IOException {
    out.write(new byte[] {
        (byte) GZIP_MAGIC,        // Magic number (short)
        (byte)(GZIP_MAGIC >> 8),  // Magic number (short)
        Deflater.DEFLATED,        // Compression method (CM)
        0,                        // Flags (FLG)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Modification time MTIME (int)
        0,                        // Extra flags (XFLG)
        0                         // Operating system (OS)
    });
}
This writes the header of the file. Now, it would be very unusual to be able to open a file for writing but not be able to write even the ten-byte header to it, but let's imagine this could happen and we don't close the file afterwards. What happens to a file if it is not closed?
You don't get any unflushed writes; they are discarded, and in this case there are no successfully written bytes in the stream, which isn't buffered at this point anyway. But a file which is not closed doesn't live forever; instead, FileOutputStream has
protected void finalize() throws IOException {
    if (fd != null) {
        if (fd == FileDescriptor.out || fd == FileDescriptor.err) {
            flush();
        } else {
            /* if fd is shared, the references in FileDescriptor
             * will ensure that finalizer is only called when
             * safe to do so. All references using the fd have
             * become unreachable. We can call close()
             */
            close();
        }
    }
}
If you don't close a file at all, it gets closed anyway, just not immediately (and, as I said, any data left in a buffer would be lost this way, but there is none at this point).
What is the consequence of not closing the file immediately? Under normal conditions, you potentially lose some data and you potentially run out of file descriptors. But if you have a system where you can create files but can't write anything to them, you have a bigger problem; i.e. it is hard to imagine why you would repeatedly try to create this file despite failing every time.
Neither OutputStreamWriter nor BufferedWriter throws IOException from its constructor, so it is not clear what problem they would cause. In the case of BufferedWriter you could get an OutOfMemoryError; that would immediately trigger a GC, which, as we have seen, will close the file anyway.
If all of the streams have been instantiated then closing only the outermost is just fine.
The documentation on Closeable interface states that close method:
Closes this stream and releases any system resources associated with it.
Releasing system resources includes closing any underlying streams.
It also states that:
If the stream is already closed then invoking this method has no effect.
So if you close them explicitly afterwards, nothing wrong will happen.
I'd rather use try(...) syntax (Java 7), e.g.
try (OutputStream outputStream = new FileOutputStream(createdFile)) {
...
}
It will be fine if you only close the last stream; the close call will be sent to the underlying streams too.
No, the topmost level Stream or reader will ensure that all underlying streams / readers are closed.
Check the close() method implementation of your topmost level stream.
In Java 7, there is the try-with-resources feature. You no longer need to explicitly close your streams; it takes care of that.

How to prevent InputStream.readObject() from throwing EOFException?

I serialize an object and save it as a file on my HDD. When I'm reading it back, it throws an EOFException, but only on some occasions. After a couple of hours of debugging I am not able to find the problem.
Here is my code:
public void serialize(MyClass myClass, String path) {
    FileOutputStream foStream = null;
    ObjectOutputStream ooStream = null;
    try {
        File file = new File(path);
        if (!file.exists()) {
            file.createNewFile();
        }
        foStream = new FileOutputStream(file);
        ooStream = new ObjectOutputStream(foStream);
        ooStream.writeObject(myClass);
    } catch (Throwable t) {
        log.error(t);
    } finally {
        if (ooStream != null) {
            try {
                ooStream.flush();
                ooStream.close();
            } catch (IOException e) {
                log.error(e);
            }
        }
    }
}
For getting Object:
public MyClass deSerialize(String path) {
    MyClass myClass = null;
    FileInputStream fiStream = null;
    ObjectInputStream oiStream = null;
    String errorMessage = "";
    try {
        File file = new File(path);
        if (!file.exists()) {
            return null;
        }
        fiStream = new FileInputStream(path);
        oiStream = new ObjectInputStream(fiStream);
        Object o = oiStream.readObject();
        myClass = (MyClass) o;
    } catch (Throwable t) {
        log.warn(t);
    } finally {
        if (oiStream != null) {
            try {
                oiStream.close();
            } catch (IOException e) {
                log.error(e);
            }
        }
    }
    return myClass;
}
Stacktrace:
java.io.EOFException
    at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2498)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1273)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
    at java.util.LinkedList.readObject(LinkedList.java:776)
    at sun.reflect.GeneratedMethodAccessor583.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:946)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1809)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1908)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1832)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1719)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1305)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:348)
Question:
Is my serialized object now corrupted and therefore rubbish?
This object is responsible for rendering the UI state saved by the user. When a user logs in, it should render the previously saved state of the UI. However, for some users the file cannot be deserialized.
EOFException means you are trying to read past the end of the file. Normally you don't have any way of knowing whether there are more objects to read, other than trying it, so you shouldn't regard EOFException as a problem in the first place. If it is thrown in a situation where you think you know there are more objects in the file, e.g. when you have prefixed an object count to the file, it indicates a problem with the code that wrote the file, or possibly corruption of the file itself. Another example is a zero-length file that shouldn't be zero length. Whatever the problem is, it can't be solved by the reading end; it is already too late.
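To make that concrete, here is a minimal sketch of the "try it and treat EOF as the end" pattern for a file that may hold several serialized objects in a row (the helper class and method name are made up for illustration):
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.ArrayList;
import java.util.List;

class ObjectFileReader {
    static List<Object> readAllObjects(String path) throws IOException, ClassNotFoundException {
        List<Object> result = new ArrayList<>();
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            while (true) {
                try {
                    result.add(in.readObject());
                } catch (EOFException end) {
                    break; // normal end of the data, not an error
                }
            }
        }
        return result;
    }
}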
I cannot see any problem with the writing and reading of the file.
So my best guess is that the problem is at the file level. For example:
you could be writing one file and reading a different one, or
you could be reading the file before the file write completes, or
something else could be clobbering the file in between the running of your write code and read code.
I suggest that you add some tracing code that uses File.length() to find out what the file size is after you've written it and before you read it.
A couple of other possibilities:
the writer and reader code are using different versions of MyClass (or a dependent class) with incompatible representations but the same serialVersionUID value (see the sketch after this list), or
you could be using custom readObject and writeObject methods that are incompatible.
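For the version-mismatch case above, a hedged sketch of pinning the version explicitly (the fields are placeholders; the point is only the serialVersionUID declaration):
import java.io.Serializable;

class MyClass implements Serializable {
    // Bump this value whenever the serialized form changes incompatibly,
    // so that old files fail fast with InvalidClassException instead of
    // being mis-read as the new layout.
    private static final long serialVersionUID = 1L;

    // ... fields that make up the saved UI state ...
}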
In my case, the EOFException was solved by ensuring the reads and writes to the file were thread-safe. As Stephen C answered above, if you try to write to a file that you are also trying to read from, say from another thread, you may be stepping on the ObjectInputStream, which will then throw an EOFException.

FFMPEG in Java issue

I have the following code in a java Web Service:
public boolean makeFile(String fileName, String audio)
{
    if (makeUserFolder())
    {
        File file = new File(getUserFolderPath() + fileName + amr);
        FileOutputStream fileOutputStream = null;
        try
        {
            file.createNewFile();
            fileOutputStream = new FileOutputStream(file);
            fileOutputStream.write(Base64.decode(audio));
            return true;
        }
        catch (FileNotFoundException ex)
        {
            return false;
        }
        catch (IOException ex)
        {
            return false;
        }
        finally {
            try {
                fileOutputStream.close();
                convertFile(fileName);
            } catch (IOException ex) {
                Logger.getLogger(FileUtils.class.getName()).log(Level.SEVERE, null, ex);
            }
        }
    }
    else
        return false;
}

public boolean convertFile(String fileName)
{
    Process ffmpeg;
    String filePath = this.userFolderPath + fileName;
    try {
        ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", filePath + amr, filePath + mp3);
        pb.redirectErrorStream();
        ffmpeg = pb.start();
    } catch (IOException ex) {
        return false;
    }
    return true;
}
It used to work, and now it simply won't execute the ffmpeg conversion for some reason. I thought it was a problem with my file, but after running the command from a terminal no errors are thrown or anything. I thought it was maybe a permissions issue, but all the permissions have been granted on the folder I'm saving the files to. I noticed that the input BufferedReader is being set to null after running the process; any idea what's happening?
First of all, a small nitpick with your code: when you create the FileOutputStream, you create it using a string rather than a File, even though you have already created the File; you might as well recycle that rather than force the FileOutputStream to instantiate the File itself.
Another small nitpick: when you write out the audio file, you should enclose that in a try block and close the output stream in a finally block. If you are allowed to add a new library to your project, you might use Guava, which has a method Files.write(byte[], File) that will take care of all the dirty resource management for you.
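For reference, a sketch of what that Guava route might look like, assuming Base64.decode returns a byte[] as in the question and the Guava dependency is available:
import java.io.File;
import java.io.IOException;
import com.google.common.io.Files; // Guava

class AudioFiles {
    // Guava opens, writes, and closes the file internally.
    static void saveAudio(byte[] decodedAudio, File target) throws IOException {
        Files.write(decodedAudio, target);
    }
}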
The only thing that I can see that looks like a definite bug is the fact that you are ignoring the error stream of ffmpeg. If you are blocking waiting for input on the stdout of ffmpeg, then it will not work.
The easiest way to take care of this bug is to use ProcessBuilder instead of Runtime.
ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", filePath + amr, filePath + mp3);
pb.redirectErrorStream(true); // redirects stderr into stdout, so both can be read from process.getInputStream()
ffmpeg = pb.start();
If you start it this way, then your current code will be able to read both input streams fully. It is possible that the stderr was hiding some error that you were not able to see due to not reading it.
If that was not your problem, I would recommend using absolute paths with ffmpeg...in other words:
int lastdot = file.getName().lastIndexOf('.');
File mp3file = new File(file.getParentFile(), file.getName().substring(0, lastdot) + ".mp3");
ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", file.getAbsolutePath(), mp3file.getAbsolutePath());
// ...
If that doesn't work, I would change ffmpeg to be an absolute path as well (in order to rule out path issues).
Edit: Further suggestions.
I would personally refactor the writing code into its own method, so that you can use it elsewhere if necessary. In other words:
public static boolean write(byte[] content, File to) {
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(to);
        fos.write(content);
    } catch (IOException io) {
        // logging code here
        return false;
    } finally {
        closeQuietly(fos);
    }
    return true;
}

public static void closeQuietly(Closeable toClose) {
    if (toClose == null) { return; }
    try {
        toClose.close();
    } catch (IOException e) {
        // logging code here
    }
}
The reason that I made the closeQuietly(Closeable) method is due to the fact that if you do not close it in that way, there is a possibility that an exception will be thrown by the close() method, and that exception will obscure the exception that was thrown originally. If you put these in a utility class (although looking at your code, I assume that the class that it is currently in is named FileUtils), then you will be able to use them throughout your application whenever you need to deal with file output.
This will allow you to rewrite the block as:
File file = new File(getUserFolderPath() + fileName + amr);
file.createNewFile();
write(Base64.decode(audio), file);
convertFile(fileName);
I don't know whether or not you should do this; however, if you want to be sure that the ffmpeg process has completed, you should call ffmpeg.waitFor(). If you do that, you should also examine ffmpeg.exitValue() to make sure that it completed successfully.
Another thing that you might want to do is once it has completed, write what it output to a log file so you have a record of what happened, just in case something happens.
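Putting those suggestions together, a hedged sketch of running the conversion with merged output, a drained pipe, and an exit-code check (paths and the logging are placeholders):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class FfmpegRunner {
    static boolean convert(String inputPath, String outputPath) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("ffmpeg", "-i", inputPath, outputPath);
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process ffmpeg = pb.start();

        StringBuilder log = new StringBuilder();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(ffmpeg.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                log.append(line).append('\n'); // keep ffmpeg's output for diagnostics
            }
        }

        int exit = ffmpeg.waitFor(); // block until the conversion finishes
        if (exit != 0) {
            System.err.println("ffmpeg failed with exit code " + exit + ":\n" + log);
        }
        return exit == 0;
    }
}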

Correct way to close nested streams and writers in Java [duplicate]

This question already has answers here:
Is it necessary to close each nested OutputStream and Writer separately?
(7 answers)
Closed 6 years ago.
Note: This question and most of its answers date to before the release of Java 7. Java 7 provides Automatic Resource Management functionality for doing this easily. If you are using Java 7 or later you should advance to the answer of Ross Johnson.
What is considered the best, most comprehensive way to close nested streams in Java? For example, consider the setup:
FileOutputStream fos = new FileOutputStream(...);
BufferedOutputStream bos = new BufferedOutputStream(fos);
ObjectOutputStream oos = new ObjectOutputStream(bos);
I understand the close operation needs to be insured (probably by using a finally clause). What I wonder about is, is it necessary to explicitly make sure the nested streams are closed, or is it enough to just make sure to close the outer stream (oos)?
One thing I notice, at least in this specific example, is that the inner streams only seem to throw FileNotFoundExceptions, which would seem to imply that there's not technically a need to worry about closing them if they fail.
Here's what a colleague wrote:
Technically, if it were implemented right, closing the outermost
stream (oos) should be enough. But the implementation seems flawed.
Example:
BufferedOutputStream inherits close() from FilterOutputStream, which defines it as:
public void close() throws IOException {
    try {
        flush();
    } catch (IOException ignored) {
    }
    out.close();
}
However, if flush() throws a runtime exception for some reason, then
out.close() will never be called. So it seems "safest" (but ugly) to
mostly worry about closing FOS, which is keeping the file open.
What is considered to be the hands-down best, when-you-absolutely-need-to-be-sure, approach to closing nested streams?
And are there any official Java/Sun docs that deal with this in fine detail?
When closing chained streams, you only need to close the outermost stream. Any errors will be propagated up the chain and be caught.
Refer to Java I/O Streams for details.
To address the issue
However, if flush() throws a runtime exception for some reason, then out.close() will never be called.
This isn't right. After you catch and ignore that exception, execution will pick back up after the catch block and the out.close() statement will be executed.
Your colleague makes a good point about the RuntimeException. If you absolutely need the stream to be closed, you can always try to close each one individually, from the outside in, stopping at the first exception.
In the Java 7 era, try-with-resources is certainly the way to go. As mentioned in several previous answers, the close request propagates from the outermost stream to the innermost stream. So a single close is all that is required.
try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(f))) {
// do something with ois
}
There is however a problem with this pattern. The try-with-resources is not aware of the inner FileInputStream, so if the ObjectInputStream constructor throws an exception, the FileInputStream is never closed (until the garbage collector gets to it). The solution is...
try (FileInputStream fis = new FileInputStream(f); ObjectInputStream ois = new ObjectInputStream(fis)) {
// do something with ois
}
This is not as elegant, but is more robust. Whether this is actually a problem will depend on what exceptions can be thrown during construction of the outer object(s). ObjectInputStream can throw IOException which may well get handled by an application without terminating. Many stream classes only throw unchecked exceptions, which may well result in termination of the application.
It is good practice to use Apache Commons IO to handle IO-related objects.
In the finally clause, use IOUtils:
IOUtils.closeQuietly(bWriter);
IOUtils.closeQuietly(oWritter);
Code snippet below.
BufferedWriter bWriter = null;
OutputStreamWriter oWritter = null;
try {
    oWritter = new OutputStreamWriter(httpConnection.getOutputStream(), "utf-8");
    bWriter = new BufferedWriter(oWritter);
    bWriter.write(xml);
}
finally {
    IOUtils.closeQuietly(bWriter);
    IOUtils.closeQuietly(oWritter);
}
I usually do the following. First, define a template-method-based class to deal with the try/catch mess:
import java.io.Closeable;
import java.io.IOException;
import java.util.LinkedList;
import java.util.List;

public abstract class AutoFileCloser {
    // the core action code that the implementer wants to run
    protected abstract void doWork() throws Throwable;

    // track a list of closeable thingies to close when finished
    private List<Closeable> closeables_ = new LinkedList<Closeable>();

    // give the implementer a way to track things to close
    // assumes this is called in order for nested closeables,
    // inner-most to outer-most
    protected final <T extends Closeable> T autoClose(T closeable) {
        closeables_.add(0, closeable);
        return closeable;
    }

    public AutoFileCloser() {
        // a variable to track a "meaningful" exception, in case
        // a close() throws an exception
        Throwable pending = null;

        try {
            doWork(); // do the real work
        } catch (Throwable throwable) {
            pending = throwable;
        } finally {
            // close the watched streams
            for (Closeable closeable : closeables_) {
                if (closeable != null) {
                    try {
                        closeable.close();
                    } catch (Throwable throwable) {
                        if (pending == null) {
                            pending = throwable;
                        }
                    }
                }
            }

            // if we had a pending exception, rethrow it
            // this is necessary b/c the close can throw an
            // exception, which would remove the pending
            // status of any exception thrown in the try block
            if (pending != null) {
                if (pending instanceof RuntimeException) {
                    throw (RuntimeException) pending;
                } else {
                    throw new RuntimeException(pending);
                }
            }
        }
    }
}
Note the "pending" exception -- this takes care of the case where an exception thrown during close would mask an exception we might really care about.
The finally tries to close from the outside of any decorated stream first, so if you had a BufferedWriter wrapping a FileWriter, we try to close the BufferedWriter first, and if that fails, still try to close the FileWriter itself. (Note that the definition of Closeable calls for close() to ignore the call if the stream is already closed.)
You can use the above class as follows:
try {
    // ...

    new AutoFileCloser() {
        @Override protected void doWork() throws Throwable {
            // declare variables for the readers and "watch" them
            FileReader fileReader =
                    autoClose(new FileReader("somefile"));
            BufferedReader bufferedReader =
                    autoClose(new BufferedReader(fileReader));

            // ... do something with bufferedReader

            // if you need more than one reader or writer
            FileWriter fileWriter =
                    autoClose(new FileWriter("someOtherFile"));
            BufferedWriter bufferedWriter =
                    autoClose(new BufferedWriter(fileWriter));

            // ... do something with bufferedWriter
        }
    };

    // .. other logic, maybe more AutoFileClosers
} catch (RuntimeException e) {
    // report or log the exception
}
Using this approach you never have to worry about the try/catch/finally to deal with closing files again.
If this is too heavy for your use, at least think about following the try/catch and the "pending" variable approach it uses.
The colleague raises an interesting point, and there are grounds for arguing either way.
Personally, I would ignore the RuntimeException, because an unchecked exception signifies a bug in the program. If the program is incorrect, fix it. You can't "handle" a bad program at runtime.
This is a surprisingly awkward question. (Even assuming the acquire; try { use; } finally { release; } code is correct.)
If the construction of the decorator fails, then you won't be closing the underlying stream. Therefore you do need to close the underlying stream explicitly, whether in the finally after use or, more difficult, after successfully handing the resource over to the decorator.
If an exception causes execution to fail, do you really want to flush?
Some decorators actually have resources themselves. The current Sun implementation of ZipInputStream for instance has non-Java heap memory allocated.
It has been claimed (IIRC) that two thirds of the resource uses in the Java library are implemented in a clearly incorrect manner.
Whilst BufferedOutputStream closes even on an IOException from flush, BufferedWriter closes correctly.
My advice: close resources as directly as possible and don't let them taint other code. OTOH, you can spend too much time on this issue; if an OutOfMemoryError is thrown it's nice to behave nicely, but other aspects of your program are probably a higher priority, and library code is probably broken in this situation anyway. But I'd always write:
final FileOutputStream rawOut = new FileOutputStream(file);
try {
    OutputStream out = new BufferedOutputStream(rawOut);
    ... write stuff out ...
    out.flush();
} finally {
    rawOut.close();
}
(Look: No catch!)
And perhaps use the Execute Around idiom.
The Java SE 7 try-with-resources doesn't seem to be mentioned. It eliminates needing to explicitly do a close completely, and I quite like the idea.
Unfortunately, for Android development this sweetness only becomes available when using Android Studio (I think) and targeting KitKat and above.
Also, you don't have to close all the nested streams.
Check this:
http://ckarthik17.blogspot.com/2011/02/closing-nested-streams.html
I close streams like this, without nesting try-catch in finally blocks:
public class StreamTest {
    public static void main(String[] args) {
        FileOutputStream fos = null;
        BufferedOutputStream bos = null;
        ObjectOutputStream oos = null;

        try {
            fos = new FileOutputStream(new File("..."));
            bos = new BufferedOutputStream(fos);
            oos = new ObjectOutputStream(bos);
        }
        catch (Exception e) {
        }
        finally {
            Stream.close(oos, bos, fos);
        }
    }
}

class Stream {
    public static void close(AutoCloseable... array) {
        for (AutoCloseable c : array) {
            try { c.close(); }
            catch (IOException e) {}
            catch (Exception e) {}
        }
    }
}
Sun's JavaDocs include RuntimeExceptions in their documentation, as shown by InputStream's read(byte[], int, int) method; documented as throwing NullPointerException and IndexOutOfBoundsException.
FilterOutputStream's flush() is only documented as throwing IOException, thus it doesn't actually throw any RuntimeExceptions. Any that could be thrown would most likely be wrapped in an IIOException.
It could still throw an Error, but there's not much you can do about those; Sun recommends that you don't try to catch them.
