I know it has been the point of many previous questions to close or not to close a ServletOutputStream like here: Should I close the servlet outputstream? or here: Should one call .close() on HttpServletResponse.getOutputStream()/.getWriter()? or with another focus here: Do I need to flush the servlet outputstream?
The general consensus seems to be not to close it, because strictly speaking you do not own it. (The HttpServletResponse owns it.)
But what about e.g. these constructs:
PrintWriter out = new PrintWriter( new OutputStreamWriter( resp.getOutputStream(), MY.ENCODING ) );
Now I'm clearly the owner of the PrintWriter, which has some additional buffers that at least need to be flushed (and which are flushed e.g. by closing it).
What is the general consensus here? Do I need to close the PrintWriter (or any other such construct, for that matter)?
EDIT: There are valid arguments for closing the stream, too. Notably, not wanting anything else to write to the stream afterwards. And meanwhile we have try-with-resources constructs, which might change the picture. See my other question here: Eclipse complaining over resource leak when not closing ServletOutputStream
This might change nothing (and my general feeling is still not to close the stream), but try-with-resources above all practically screams for code like:
try( Something out = new Something( resp.getOutputStream() ) ){
  out.print( "Foo" );
}
instead of
Something out = null;
try {
  out = new Something( resp.getOutputStream() );
  out.print( "Foo" );
} finally {
  if( out != null && out.isUnFlushedWhatever() ) out.flush();
}
The OutputStream is something you're not creating; you just obtain a reference to it with ServletResponse.getOutputStream(). Therefore, if you wrap something around it (e.g. an OutputStreamWriter or a ZipOutputStream), the wrapper stream or writer will just write to it.
It is implementation dependent whether closing a wrapper stream or writer closes the underlying stream, so you should not close it. But since in most cases the wrappers only use the underlying stream to write bytes, it is more than enough to flush the wrapper.
In cases where the wrapper needs some finalizing, it should be (and generally is) the wrapper's responsibility to provide this finalizing functionality in a separate method. For example ZipOutputStream provides a finish() method which finishes writing the contents of the ZIP output stream without closing the underlying stream.
Summarizing:
You should not close the wrapper, but check whether it provides a finalizing method that does not close the underlying stream; if it does, you should obviously call it.
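To make that concrete, here is a hedged sketch of both cases in a servlet (the class name and the doGet/doPost split are mine, and I'm assuming the javax.servlet API and UTF-8 rather than the question's MY.ENCODING): flush the PrintWriter wrapper so its buffer reaches the container's stream, and for a wrapper that needs finalizing, such as ZipOutputStream, call finish() instead of close().

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FlushNotCloseServlet extends HttpServlet {

  // Plain text: flush the wrapper so its buffer reaches the container's stream,
  // but leave closing to the container.
  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    PrintWriter out = new PrintWriter(
        new OutputStreamWriter(resp.getOutputStream(), StandardCharsets.UTF_8));
    out.print("Foo");
    out.flush();
  }

  // ZIP content: finish() writes the ZIP trailer without closing
  // the underlying ServletOutputStream.
  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {
    ZipOutputStream zip = new ZipOutputStream(resp.getOutputStream());
    zip.putNextEntry(new ZipEntry("foo.txt"));
    zip.write("Foo".getBytes(StandardCharsets.UTF_8));
    zip.closeEntry();
    zip.finish();
  }
}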
Related
I'm using an ObjectInputStream to call readObject for reading in serialized objects. I would like to avoid having this method block, so I'm looking to use something like InputStream.available().
InputStream.available() will tell you there are bytes available and that read() will not block. Is there an equivalent method for serialization that will tell you whether there are objects available and readObject() will not block?
No. Although you could use the ObjectInputStream in another thread and check whether that thread has an object available. Generally, polling isn't a great idea, particularly with the poor guarantees of InputStream.available().
The Java serialization API was not designed to support an available() function. If you implement your own object reader/writer functions, you can read any amount of data off the stream you like, and there is no reporting method.
So readObject() does not know how much data it will read, and therefore it cannot know how many objects are available.
As the other post suggested, your best bet is to move the reading into a separate thread.
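A minimal sketch of that approach (the class name, the queue, and the daemon setup are my own choices, not from the posts above): a dedicated thread blocks inside readObject() and hands completed objects to a queue, which the rest of the program can poll without blocking.

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ObjectReaderThread extends Thread {
  private final ObjectInputStream in;
  private final BlockingQueue<Object> ready = new LinkedBlockingQueue<>();

  public ObjectReaderThread(InputStream rawIn) throws IOException {
    this.in = new ObjectInputStream(rawIn);
    setDaemon(true);
  }

  @Override
  public void run() {
    try {
      while (true) {
        // blocks here, not in the caller (assumes only non-null objects are written)
        ready.put(in.readObject());
      }
    } catch (IOException | ClassNotFoundException | InterruptedException e) {
      // stream closed, unknown class, or shutdown: stop reading
    }
  }

  /** Non-blocking: returns the next deserialized object, or null if none is ready yet. */
  public Object poll() {
    return ready.poll();
  }
}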
I have an idea that by adding another InputStream into the chain one can make availability information readable by the client:
HACK!
InputStream is = ... // where we actually read the data
BufferedInputStream bis = new BufferedInputStream(is);
ObjectInputStream ois = new ObjectInputStream(bis);
if( bis.available() > N ) {
  Object o = ois.readObject();
}
The tricky point is value of N. It should be big enough to cover both serialization header and object data. If those are varying wildly, no luck.
The BufferedInputStream approach works for me. And why not just check if(bis.available() > 0) instead of using an N value? That works perfectly for me.
I think ObjectInputStream.readObject blocks (i.e. waits) when there is no input to read. So if there is any input at all in the stream, i.e. if(bis.available() > 0), ObjectInputStream.readObject will not block. Keep in mind that ObjectInputStream.readObject might throw a ClassNotFoundException, but that isn't a problem at all for me.
I may be overthinking this, but I just wrote the code:
try (InputStream in = ModelCodeGenerator.class.getClassLoader().getResourceAsStream("/model.java.txt"))
{
modelTemplate = new SimpleTemplate(CharStreams.toString(new InputStreamReader(in, "ascii")));
}
Which means the InputStreamReader is never closed (but in this case we know its close method just closes the underlying InputStream.)
One could write it as:
try (InputStreamReader reader = new InputStreamReader(...))
But this seems worse. If InputStreamReader throws for some reason, the InputStream won't ever be closed, right? This is a common problem in C++ with constructors that call other constructors. Exceptions can cause memory/resource leaks.
Is there a best practice here?
Which means the InputStreamReader is never closed
Eh? In your code it is... And it will certainly handle the .close() of your resource stream as well. See below for more details...
As @SotiriosDelimanolis mentions, however, you can declare more than one resource in the "resource block" of a try-with-resources statement.
You have another problem here: .getResourceAsStream() can return null; you may therefore have an NPE.
I'd do this if I were you:
final URL url = ModelCodeGenerator.class.getClassLoader()
    .getResource("/model.java.txt");
if (url == null)
  throw new IOException("resource not found");
try (
  final InputStream in = url.openStream();
  final Reader reader = new InputStreamReader(in, someCharsetOrDecoder);
) {
  // manipulate resources
}
There is a very important point to consider however...
Closeable does extend AutoCloseable, yes; in fact it only differs, "signature wise", by the exception thrown (IOException vs Exception). But there is a fundamental difference in behavior.
From the javadoc of AutoCloseable's .close() (emphasis mine):
Note that unlike the close method of Closeable, this close method is not required to be idempotent. In other words, calling this close method more than once may have some visible side effect, unlike Closeable.close which is required to have no effect if called more than once. However, implementers of this interface are strongly encouraged to make their close methods idempotent.
And indeed, the javadoc of Closeable is clear about this:
Closes this stream and releases any system resources associated with it. If the stream is already closed then invoking this method has no effect.
You have two very important points:
by contract, a Closeable also takes care of all resources associated with it; so, if you close a BufferedReader which wraps a Reader which wraps an InputStream, all three are closed;
should you call .close() more than once, there is no further side effect.
This also means, of course, that you can choose the paranoid option and keep a reference to all Closeable resources and close them all; beware however if you have AutoCloseable resources into the mix which are not Closeable!
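For illustration, a small sketch of what those two guarantees buy you in practice (the file name is a placeholder): both the stream and its wrapper are declared as resources, the reader's close() also closes the stream, and the second, automatic close of the stream is a harmless no-op because Closeable.close() is idempotent.

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ParanoidClose {
  public static void main(String[] args) throws IOException {
    // The "paranoid option": every Closeable in the chain is a declared resource.
    try (InputStream in = new FileInputStream("some-file.txt");
         BufferedReader reader = new BufferedReader(
             new InputStreamReader(in, StandardCharsets.UTF_8))) {
      System.out.println(reader.readLine());
    }
    // reader.close() already closed 'in'; the automatic in.close() that follows
    // is a no-op, as Closeable requires.
  }
}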
But this seems worse. If InputStreamReader throws for some reason, the
InputStream won't ever be closed, right?
That's right (although unlikely, the InputStreamReader constructor doesn't really do much).
The try-with-resources lets you declare as many resources as you'd like. Declare one for the wrapped resource, and another for the InputStreamReader.
try (InputStream in = ModelCodeGenerator.class
        .getClassLoader()
        .getResourceAsStream("/model.java.txt");
     InputStreamReader reader = new InputStreamReader(in)) {...}
Note that getResourceAsStream can potentially return null, which would cause the InputStreamReader constructor to throw a NullPointerException. If you want to deal with that differently, adapt how you retrieve the resource that's meant to be wrapped.
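For instance, one way to adapt it, sketched with the question's own SimpleTemplate, modelTemplate and Guava's CharStreams (the exception message is mine): check for null before wrapping, so a missing resource fails with a clear error instead of an NPE.

InputStream in = ModelCodeGenerator.class
    .getClassLoader()
    .getResourceAsStream("/model.java.txt");
if (in == null) {
  throw new FileNotFoundException("classpath resource /model.java.txt not found");
}
try (Reader reader = new InputStreamReader(in, "ascii")) {
  // closing the reader also closes the underlying stream
  modelTemplate = new SimpleTemplate(CharStreams.toString(reader));
}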
The tutorial linked above presents this example
try (
  java.util.zip.ZipFile zf =
    new java.util.zip.ZipFile(zipFileName);
  java.io.BufferedWriter writer =
    java.nio.file.Files.newBufferedWriter(outputFilePath, charset)
) {
with the explanation
In this example, the try-with-resources statement contains two
declarations that are separated by a semicolon: ZipFile and
BufferedWriter. When the block of code that directly follows it
terminates, either normally or because of an exception, the close
methods of the BufferedWriter and ZipFile objects are automatically
called in this order. Note that the close methods of resources are
called in the opposite order of their creation.
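The reverse close order is easy to see with two toy AutoCloseable resources (everything here is made up for demonstration):

public class CloseOrderDemo {
  static class Named implements AutoCloseable {
    private final String name;
    Named(String name) { this.name = name; System.out.println("open  " + name); }
    @Override public void close() { System.out.println("close " + name); }
  }

  public static void main(String[] args) {
    try (Named first = new Named("first");
         Named second = new Named("second")) {
      System.out.println("body");
    }
    // prints: open first, open second, body, close second, close first
  }
}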
In short I need to do two things with one stream.
I need to pass a stream through a method to see if the bytes of that stream are of a particular type.
I need to create a new class using that stream once that check is completed.
I'm very new to streams and I know that they are "one way streets." So I think I have a bad design in my code or something if I find myself needing to reuse a stream.
Here is a snippet of the logic:
byte[] header = new byte[1024];
// reads entire array or until EOF, whichever is first
bis.mark(header.length);
bis.read(header);
if(isFileType(header)) {
  bis.reset();
  _data.put(fileName, new MyClass(bis)); // Stream is now closed...
  methodForFinalBytes(bis);
} else {
  // Do other stuff;
}
It depends entirely on whether the InputStream implementation supports mark(). See http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html#markSupported(). Calling reset() on a stream that doesn't support mark() may throw an exception.
BufferedInputStream and ByteArrayInputStream support mark(), but others don't.
Generally, you can't reset an InputStream to get back to the start. There are, however the mark() / reset() methods, which make a stream remember the current position and you can rewind the stream to the marked position with reset().
Problem is, they are optional and may not be supported by the particular stream class in use. BufferedInputStream does support mark() / reset() (although within buffer limits). You can wrap your InputStream in a BufferedInputStream, immediately mark() and then run your detection code (but make sure it does not read ahead further than the buffer size; you can specify the buffer size in the BufferedInputStream constructor). Then call reset() and really read the stream.
EDIT: If you use ByteArrayInputStream anyway, that one supports mark/reset over its entire length (naturally).
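Putting that recipe together, a sketch (HEADER_SIZE and detectType are placeholders I made up, not an existing API): wrap the raw stream, mark() with a limit at least as large as what the detection code reads, sniff the header, then reset() and hand the very same stream on.

import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TypeSniffer {
  private static final int HEADER_SIZE = 1024; // must cover everything the detection reads

  public static InputStream sniffAndRewind(InputStream raw) throws IOException {
    BufferedInputStream in = new BufferedInputStream(raw, HEADER_SIZE);
    in.mark(HEADER_SIZE);                    // remember the start of the stream

    byte[] header = new byte[HEADER_SIZE];
    int n = in.read(header);                 // may read fewer bytes than requested
    boolean matches = detectType(header, n); // hypothetical detection routine

    in.reset();                              // rewind to the marked position
    return in;                               // callers read from the start again
  }

  private static boolean detectType(byte[] header, int length) {
    // placeholder: inspect the magic bytes here
    return length > 0;
  }
}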
I need to design an API method which takes an OutputStream as a parameter.
Is it a good practice to close the stream inside the API method or let the caller close it?
void test(OutputStream os) throws IOException {
  os.close(); //???
}
I think it should be symmetric.
If you do not open that stream (which is likely to be your case), you should not close it, either, in general.
Unless the purpose of the API is to "finish up the stream", you should let the caller close. He had it first, he was responsible for it, and he may decide that he wants to write some stuff to the stream that your API didn't originally envision. Keep your functionality separated; it's more composable.
Let the user close it. Since you take the OutputStream as an argument, we can assume the user has already created and opened it, so closing it in your method would not be good. And if you are instead creating and opening a new OutputStream inside your method, there is no need to take it as an argument, and in that case you can also close it in your method.
Different use-cases require different patterns, for example, depending on whether the caller needs to read from or write to the stream after the call has completed.
The key API design rule is that the API should specify whether it is the caller or called method's responsibility to close the stream.
Having said that, it is generally simpler and safer if the code that opens a stream is also responsible for closing it.
Consider the case where methodA is supposed to open a stream and pass it to methodB, but an exception is thrown between the stream being opened and methodB entering the try / finally statement that is ultimately responsible for closing it. You need to code it something like the following to ensure that streams don't leak:
public void methodA() throws IOException {
  InputStream myStream = new FileInputStream(...);
  try {
    // do stuff with stream
    methodB(myStream);
  } finally {
    myStream.close();
  }
}

/**
 * @param myStream this method is responsible for closing myStream.
 */
public void methodB(InputStream myStream) throws IOException {
  try {
    // do more stuff with myStream
  } finally {
    myStream.close();
  }
}
This won't leak an open stream as a result of exceptions (or errors!) thrown in either methodA or methodB. (It works for the standard stream types because the Closeable API specifies that close() has no effect when called on a stream that is already closed.)
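On Java 7 and later the same ownership contract can be written more compactly with try-with-resources; a sketch under the same assumptions (the file name is a placeholder), relying on exactly that idempotent close():

public void methodA() throws IOException {
  try (InputStream myStream = new FileInputStream("some-file.dat")) {
    // do stuff with stream
    methodB(myStream);
  } // closed here even if methodB already closed it; Closeable.close() is idempotent
}

/**
 * @param myStream this method is responsible for closing myStream.
 */
public void methodB(InputStream myStream) throws IOException {
  try (InputStream toClose = myStream) {
    // do more stuff with myStream
  }
}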
I have always been slightly confused with the amount of different IO implementations in Java, and now that I am completely stuck in my project development, I was taking my time to read up on useful stuff meanwhile.
I have realized that there is no newbie-friendly comparison (apart from the short explanation in the API docs for the Writer class) between the different subclasses of the Writer class. So I figured I'd fire away the question: what are those different subclasses good for?
For example, I usually use a FileWriter wrapped with a BufferedWriter for my output to files, but I have always been irritated by the fact that there is no println()-like method, and one has to use newLine() every second line (to make the output human readable). PrintWriter has the println() method but no constructor that supports appending, however...
I'd really appreciate if you could give me your two cents from your experience, or a nice guide/how-to you might have stumbled upon.
EDIT: Thanks for the replies everyone, I really appreciate the info passed on here. It's a bit unfortunate that the whole append() thing ended up being in focus, it merely meant it as an example. My question was mostly referring to the need and use of all the different implementations, which I guess was mentioned somewhat in a couple of the answers.
It's hard to pick one answer as accepted, since there are three really solid answers, each of which has contributed to my understanding of the problem. I am gonna have to go with Anon this time, as he's got the least amount of rep points (I presume he's new on SO). He has 15 answers, some of which are really well formulated, and 0 questions asked. Good contribution I'd say, and that is worth promoting.
That being said, ColinD and Jay also provided really good answers, and have pointed out interesting ideas. Especially Jay's comment about Java automatically wrapping a BufferedWriter was worth noting. Thanks again guys, really appreciated!
The java.io classes generally follow the Decorator pattern. So, while PrintWriter does not have the specific constructor you might want, it does have a constructor that takes another Writer, so you can do something like the following:
FileOutputStream fos = null;
try
{
  fos = new FileOutputStream("foo.txt");
  PrintWriter out = new PrintWriter(
      new BufferedWriter(
          new OutputStreamWriter(fos, "UTF-8")));
  // do what you want to do
  out.flush();
  out.close();
}
finally
{
  // quietly close the FileOutputStream (see Jakarta Commons IOUtils)
}
As a general usage note, you always want to wrap a low-level Writer (eg FileWriter or OutputStreamWriter) in a BufferedWriter, to minimize actual IO operations. However, this means that you need to explicitly flush and close the outermost Writer, to ensure that all content is written.
And then you need to close the low-level Writer in a finally block, to ensure that you don't leak resources.
Edit:
Looking at MForster's answer made me take another look at the API for FileWriter. And I realized that it doesn't take an explicit character set, which is a Very Bad Thing. So I've edited my code snippet to use a FileOutputStream wrapped by an OutputStreamWriter that takes an explicit character set.
FileWriter is generally not an acceptable class to use. It does not allow you to specify the Charset to use for writing, which means you are stuck with whatever the default charset of the platform you're running on happens to be. Needless to say, this makes it impossible to consistently use the same charset for reading and writing text files and can lead to corrupted data.
Rather than using FileWriter, you should be wrapping a FileOutputStream in an OutputStreamWriter. OutputStreamWriter does allow you to specify a charset:
File file = ...
OutputStream fileOut = new FileOutputStream(file);
Writer writer = new BufferedWriter(new OutputStreamWriter(fileOut, "UTF-8"));
To use PrintWriter with the above, just wrap the BufferedWriter in a PrintWriter:
PrintWriter printWriter = new PrintWriter(writer);
You could also just use the PrintWriter constructor that takes a File and the name of a charset:
PrintWriter printWriter = new PrintWriter(file, "UTF-8");
This works just fine for your particular situation, and actually does the exact same thing as the code above, but it's good to know how to build it by wrapping the various parts.
The other Writer types are mostly for specialized uses:
StringWriter is just a Writer that can be used to create a String (see the small sketch below). CharArrayWriter is the same for char[].
PipedWriter for piping to a PipedReader.
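For instance, StringWriter combines nicely with PrintWriter when you want println()-style formatting into a String (a small sketch; the variable names are mine):

StringWriter buffer = new StringWriter();
PrintWriter out = new PrintWriter(buffer);
out.println("Hello");
out.printf("%d + %d = %d%n", 1, 2, 3);
String result = buffer.toString(); // "Hello" and "1 + 2 = 3", each followed by a line separator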
Edit:
I noticed that you commented on another answer about the verbosity of creating a writer this way. Note that there are libraries such as Guava that help reduce the verbosity of common operations. Take, for example, writing a String to a file in a specific charset. With Guava you can just write:
Files.write(text, file, Charsets.UTF_8);
You can also create a BufferedWriter like this:
BufferedWriter writer = Files.newWriter(file, Charsets.UTF_8);
PrintWriter doesn't have a constructor that takes an "append" parameter, but FileWriter does. And it seems logical to me that that's where it belongs. PrintWriter doesn't know if you're writing to a file, a socket, the console, a string, etc. What would it mean to "append" on writes to a socket?
So the right way to do what you want is simply:
PrintWriter out=new PrintWriter(new BufferedWriter(new FileWriter(myfile, append)));
Interesting side note: If you wrap an OutputStream in a PrintWriter, Java automatically inserts a BufferedWriter in the middle. But if you wrap a Writer in a PrintWriter, it does not. So nothing is gained by saying:
PrintWriter out=new PrintWriter(new BufferedWriter(new OutputStreamWriter(new FileOutputStream(myfile))));
Just leave off the BufferedWriter and the OutputStreamWriter, you get them for free anyway. I have no idea if there is some good reason for the inconsistency.
It's true that you can't specify a character encoding in a FileWriter as ColinD notes. I don't know that that makes it "unacceptable". I almost always am perfectly happy to accept the default encoding. Maybe if you're using a language other than English this is an issue.
The need to wrap Writers or OutputStreams in layers was confusing to me when I first started using Java. But once you get the hang of it, it's no big deal. You just have to bend your mind into the right framework. Each writer has a function. Think of it like: I want to print to a file, so I need to wrap a FileWriter in a PrintWriter. Or, I want to convert an output stream to a writer, so I need an OutputStreamWriter. Etc.
Or maybe you just get used to the ones you use all the time. Figure it out once and remember how you did it.
You can create an appending PrintWriter like this:
OutputStream os = new FileOutputStream("/tmp/out", true);
PrintWriter writer = new PrintWriter(os);
Edit: Anon's post is right about both using a BufferedWriter in between and specifying the encoding.