My app needs to do the following:
Open a FileInputStream and obtain the underlying FileDescriptor (via getFD())
Create new FileInputStream objects based on the above FileDescriptor
So far I only needed one FileDescriptor, so I closed it by calling close() on the original stream (i.e. the stream whose getFD() I called). I need the FileDescriptor because some Android API methods take one as a parameter.
Now that I will have more FileInputStream objects at the same time, when will the FileDescriptor be closed? (My guess: when all FileInputStream objects are closed?)
I believe you are right. A small test shows that the FileDescriptor becomes invalid after its FileInputStream is closed. Note that, in the case of more than one FileInputStream for the same FileDescriptor, the FileDescriptor becomes invalid as soon as the first FileInputStream is closed; it does not matter whether you close fis1 first and then fis2 or the other way around:
FileInputStream fis1 = new FileInputStream("/tmp/adb.log");
FileDescriptor fd = fis1.getFD();
FileInputStream fis2 = new FileInputStream(fd); // second stream sharing the same FileDescriptor
System.out.println(fd.valid()); // true
fis1.close();
System.out.println(fd.valid()); // false: closing either stream invalidates the shared fd
fis2.close();
System.out.println(fd.valid()); // false
Output is:
true
false
false
Do not forget to close the stream in a finally block, to make sure it also gets closed in case of an I/O (read/write) error.
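For example, a try-with-resources block (Java 7+) takes care of that automatically; a minimal sketch reusing the /tmp/adb.log path from the snippet above:
try (FileInputStream fis = new FileInputStream("/tmp/adb.log")) {
    FileDescriptor fd = fis.getFD();
    // use fd and fis here; fis is closed even if an I/O error occurs
}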
Android's FileInputStream has a concept of fd ownership.
isFdOwner is true when you create the stream from a File, and false when you create it from an existing FileDescriptor. (There is actually a hidden constructor that lets you specify whether the new stream is also an owner, but 'false' works well in almost all situations.)
If you close a FileInputStream that was opened from a File, the underlying fd will be closed. Closing streams that were opened from an existing FileDescriptor does nothing to it.
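To illustrate the ownership rule described above (the path is hypothetical, and the comments describe the Android behavior claimed in this answer, which differs from the desktop JVM behavior shown earlier):
FileInputStream owner = new FileInputStream(new File("/sdcard/test.log")); // created from a File: owns the fd
FileDescriptor fd = owner.getFD();
FileInputStream borrower = new FileInputStream(fd); // created from an fd: not the owner
borrower.close(); // on Android this does not close the underlying fd
owner.close();    // the owner does close it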
I'm uploading a zip of Excel files as a multipart file, but when I create the Workbook object for the first file, the stream gets closed and I'm not able to read the next files.
It works with a single file in the zip but not with multiple files.
Can anybody help? TIA.
try {
    ZipInputStream zis = new ZipInputStream(multipartFile.getInputStream());
    ZipEntry zipEntry;
    while ((zipEntry = zis.getNextEntry()) != null) {
        XSSFWorkbook workbook = new XSSFWorkbook(zis);
        readWorkbook(workbook);
    }
    zis.close();
} catch (Exception e) {
    LOG.error(e);
}
The only option is to wrap it so you can intercept close() and prevent it from closing the zip. Something like:
var wrapper = new FilterInputStream(zis) {
    @Override public void close() {}
};
Then pass wrapper and not zis to new XSSFWorkbook.
NB: WARNING - your code is severely hampered, essentially buggy. You need to ensure you close that ZipInputStream at the very end, and you're not doing that now: the close() call you do have in your snippet won't be invoked if an exception occurs, so use try-with-resources to ensure closing always happens. Your exception handling is also bad; just logging an exception isn't good enough, you must end exception blocks with a hard exit, and throw new RuntimeException("uncaught", e); is the fallback option. (Many IDEs ship with e.printStackTrace() as the default catch body, which is a known, extremely poor default; update your IDE templates to fix it.)
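For what it's worth, here is a sketch of how the combined fix could look, assuming the same multipartFile, readWorkbook() and LOG from the question (the entryStream variable name is mine):
try (ZipInputStream zis = new ZipInputStream(multipartFile.getInputStream())) {
    ZipEntry zipEntry;
    while ((zipEntry = zis.getNextEntry()) != null) {
        // the wrapper swallows close(), so XSSFWorkbook cannot close the zip stream
        InputStream entryStream = new FilterInputStream(zis) {
            @Override public void close() { /* do nothing */ }
        };
        readWorkbook(new XSSFWorkbook(entryStream));
    }
} catch (Exception e) {
    LOG.error(e);
    throw new RuntimeException("uncaught", e);
}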
When reading from a zip file, you have an InputStream for the archive. You then traverse the entries, and for each of them you again have an InputStream to read from. Make sure you do not close the entries' InputStreams, as that will also close the archive stream.
In your case it may be that the constructor for XSSFWorkbook or the readWorkbook method is closing the stream.
I am new to programming and need help understanding the difference between two ways of creating a FileInputStream object for reading files. I have seen examples on the internet; some use the first one and others the second. I am confused about which is better and why?
FileInputStream file = new FileInputStream(new File(path));
FileInputStream file = new FileInputStream(path);
Both are fine. The second one calls the first implicitly.
public FileInputStream(String name) throws FileNotFoundException {
    this(name != null ? new File(name) : null);
}
If you have a reference to the file which should be read, use the former. Else, you should probably use the latter (if you only have the path).
Don't use either in 2015. Use Files.newInputStream() instead. In a try-with-resources statement, at that:
final Path path = Paths.get("path/to/file");
try (
    final InputStream in = Files.newInputStream(path);
) {
    // do stuff with "in"
}
More generally, don't use anything File in new code in 2015 if you can avoid it. JSR 203, aka NIO2, aka java.nio.file, is incomparably better than java.io.File. And it has been there since 2011.
The FileInputStream class has three constructors, described in the official documentation:
FileInputStream(File file)
Creates a FileInputStream by opening a connection to an actual file, the file named by the File object file in the file system.
FileInputStream(String name)
Creates a FileInputStream by opening a connection to an actual file, the file named by the path name name in the file system.
FileInputStream(FileDescriptor fdObj)
Creates a FileInputStream by using the file descriptor fdObj, which represents an existing connection to an actual file in the file system.
As you can see, there is no real difference: both actually open the file the same way. The first constructor calls
SecurityManager.checkRead(file.getPath())
and the second one calls the same checkRead() as
SecurityManager.checkRead(name)
If you use
FileInputStream file = new FileInputStream(new File(path));
creating the FileInputStream takes a bit more time, if I'm not mistaken, because this constructor does some extra checks with the security manager.
There is not much difference between the two, as
FileInputStream file = new FileInputStream(path)
implicitly calls the other:
public FileInputStream(String name) throws FileNotFoundException {
    this(name != null ? new File(name) : null);
}
But to make better use of the two available constructors: use the constructor taking a File argument when you already have a File object, so you avoid the creation of another File object that the String constructor would create implicitly.
Secondly, it is better to create the FileInputStream object only after checking that the file exists, which can be checked with file.exists(); that way you can avoid a FileNotFoundException.
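For example (a minimal sketch; note that the file can still disappear between the exists() check and the constructor call, so FileNotFoundException still has to be handled or declared):
File file = new File(path);
if (file.exists() && file.isFile()) {
    try (FileInputStream in = new FileInputStream(file)) {
        // read from in
    }
} else {
    // handle the missing file without relying on FileNotFoundException
}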
I want to create a zip archive in Java where each contained file is produced by serializing some objects. I have a problem with correctly closing the streams.
The code looks like this:
try (OutputStream os = new FileOutputStream(file);
     ZipOutputStream zos = new ZipOutputStream(os)) {
    ZipEntry ze;
    ObjectOutputStream oos;

    ze = new ZipEntry("file1");
    zos.putNextEntry(ze); // start first file in zip archive
    oos = new ObjectOutputStream(zos);
    oos.writeObject(obj1a);
    oos.writeObject(obj1b);
    // I want to close oos here without closing zos
    zos.closeEntry(); // end first file in zip archive

    ze = new ZipEntry("file2");
    zos.putNextEntry(ze); // start second file in zip archive
    oos = new ObjectOutputStream(zos);
    oos.writeObject(obj2a);
    oos.writeObject(obj2b);
    // And here again
    zos.closeEntry(); // end second file in zip archive
}
I know of course that I should close each stream after finishing using it, so I should close the ObjectOutputStreams in the indicated positions. However, closing the ObjectOutputStreams would also close the ZipOutputStream that I still need.
I do not want to omit the call to ObjectOutputStream.close(), because I do not want to rely on the fact that it currently does no more than flush() and reset().
I also cannot use a single ObjectOutputStream instance because then I miss the stream header that is written by the constructor (each single file in the zip archive would not be a full object serialization file, and I could not de-serialize them independently).
The same problem occurs when reading the file again.
The only way I see would be to wrap the ZipOutputStream in some kind of "CloseProtectionOutputStream" that would forward all methods except close() before giving it to the ObjectOutputStream. However, this seems rather hacky and I wonder if I missed a nicer solution in the API.
If your OutputStream wrapper throws an exception when it is closed more than once, it is not a hack. You can create one wrapper per zip entry.
From an architectural point of view, I think the ObjectOutputStream author should have provided an option to disable close() cascading. You are just working around his lacking API.
In this case, and for all the reasons you mentioned, I would simply not pipe my ObjectOutputStream into the ZipOutputStream. Instead, serialize to a byte[] and then write that straight into the ZipOutputStream. This way you are free to close the ObjectOutputStream, and each byte[] you produce will have the proper header from the serializer. One downside is that you wind up with a byte[] in memory that you didn't have before, but if you get rid of it right away (assuming we're not talking about millions of objects) the garbage collector shouldn't have a hard time cleaning up.
Just my two cents...
It at least sounds less hacky than a stream subclass that changes the close() behavior.
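A minimal sketch of that approach, reusing file and the obj1a/obj1b/obj2a/obj2b names from the question (the writeEntry helper is hypothetical):
try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(file))) {
    writeEntry(zos, "file1", obj1a, obj1b);
    writeEntry(zos, "file2", obj2a, obj2b);
}

// hypothetical helper: serializes the objects to an in-memory byte[] and writes it as one entry
static void writeEntry(ZipOutputStream zos, String name, Object... objects) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
        for (Object obj : objects) {
            oos.writeObject(obj);
        }
    } // closing oos only touches the in-memory buffer, not zos
    zos.putNextEntry(new ZipEntry(name));
    zos.write(buffer.toByteArray());
    zos.closeEntry();
}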
If you're intending to throw the ObjectOutputStream away anyway, then it should be sufficient to call flush() rather than close(), but as you say in the question the safest approach is probably to use a wrapper around the underlying ZipOutputStream that blocks the close() call. Apache commons-io has CloseShieldOutputStream for this purpose.
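With commons-io on the classpath, the question's loop body could then look roughly like this (org.apache.commons.io.output.CloseShieldOutputStream is the real class; everything else is taken from the question):
zos.putNextEntry(new ZipEntry("file1"));
ObjectOutputStream oos = new ObjectOutputStream(new CloseShieldOutputStream(zos));
oos.writeObject(obj1a);
oos.writeObject(obj1b);
oos.close();      // flushes the ObjectOutputStream; the shield swallows the close
zos.closeEntry(); // zos stays open for the next entry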
I have a ZipInputStream that contains a number of XML files that I want to apply a transform to. The following piece of code loads the XSLT and ZIP files and loops through the ZIP entries, attempting to apply the transform to each one. However it appears the transform function is closing the input stream after performing the transform, causing the getNextEntry() function to fail because the stream is closed.
Is there is a simple way around this problem (to keep the input stream open allowing the ZipInputStream to move to the next entry) or am I missing something more fundamental here?
TransformerFactory tFactory = TransformerFactory.newInstance();
Transformer transformer = tFactory.newTransformer(new StreamSource(xsltFileName));

FileInputStream fis = new FileInputStream(fileName);
ZipInputStream zis = new ZipInputStream(fis);
ZipEntry ze = null;
while ((ze = zis.getNextEntry()) != null)
{
    String newFileName = ze.getName();
    transformer.transform(new StreamSource(zis), new StreamResult(new FileOutputStream(newFileName)));
}
I have attempted to search for a solution but don't seem to be coming up with anything that makes sense. I'd appreciate any ideas or feedback.
One possible solution is to extend ZipInputStream (it's not final) and override the close method to do nothing. Of course you then need to make sure to close it yourself, which you can do with a second custom close method that simply calls super.close().
class MyZIS extends ZipInputStream {

    public MyZIS(InputStream in) {
        super(in);
    }

    @Override
    public void close() throws IOException {
        // do nothing, so the transformer cannot close the zip stream
    }

    public void myClose() throws IOException {
        super.close();
    }
}
Generally the accepted protocol is that "he who creates an input stream should close it after use" and it appears your XSLT processor (Xalan?) isn't following this convention. If that's the case, then a workaround (apart from moving to a different XSLT processor!) is to write a filter stream that wraps the ZipInputStream and passes on all calls to the underlying ZipInputStream, except for the close() call which it intercepts.
You should actually be using the ZipFile class for reading the zip archive. Then you get the input stream for a zip entry like this:
zipfile.getInputStream(zipEntry);
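A rough sketch of the transform loop built on java.util.zip.ZipFile instead of ZipInputStream, reusing fileName, xsltFileName and the transformer setup from the question:
Transformer transformer = TransformerFactory.newInstance()
        .newTransformer(new StreamSource(xsltFileName));

try (ZipFile zipFile = new ZipFile(fileName)) {
    Enumeration<? extends ZipEntry> entries = zipFile.entries();
    while (entries.hasMoreElements()) {
        ZipEntry entry = entries.nextElement();
        // each entry gets its own stream, so the transformer closing it is harmless
        try (InputStream in = zipFile.getInputStream(entry);
             OutputStream out = new FileOutputStream(entry.getName())) {
            transformer.transform(new StreamSource(in), new StreamResult(out));
        }
    }
}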
Perhaps what you need to do is read the zip entry into a temporary buffer, then use that as the source for the transformer. My understanding is that the transformer needs to read the entire stream to determine what the transform should be; therefore, even if it didn't close the input stream, the next read would hit EOF.
Perhaps something like this? (no optimization has been done)
byte[] bytes = new byte[(int) entry.getSize()];
zis.read(bytes); // note: a single read() is not guaranteed to fill the whole buffer
ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
transformer.transform(new StreamSource(bais), new StreamResult(new FileOutputStream(newFileName)));
There is a property you can set, IsStreamOwner; when this is false, the underlying stream will not be closed.
I have the following piece of code in a try/catch block
InputStream inputstream = conn.getInputStream();
InputStreamReader inputstreamreader = new InputStreamReader(inputstream);
BufferedReader bufferedreader = new BufferedReader(inputstreamreader);
My question is: when I close these streams in the finally block, do I have to close all 3 streams, or will closing just the BufferedReader also close the other streams?
By convention, wrapper streams (which wrap existing streams) close the underlying stream when they are closed, so you only have to close bufferedreader in your example. Also, it is usually harmless to close an already closed stream, so closing all 3 streams won't hurt.
Normally it is OK to just close the outermost stream, because by convention it must trigger close on the underlying streams.
So normally code looks like this:
BufferedReader in = null;
try {
    in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    ...
    in.close(); // when you care about Exception-Handling in case when closing fails
}
finally {
    IOUtils.closeQuietly(in); // ensure closing; Apache Commons IO
}
Nevertheless, there may be rare cases where a wrapper stream constructor raises an exception although the underlying stream has already been opened. In that case the above code won't close the underlying stream, because the outer constructor never completed and in is null, so the finally block does not close anything and the underlying stream is left open.
Since Java 7 you can do this:
try (OutputStream out1 = new ...; OutputStream out2 = new ...) {
    ...
    out1.close(); // if you want Exceptions-Handling; otherwise skip this
    out2.close(); // if you want Exceptions-Handling; otherwise skip this
} // out1 and out2 are auto-closed when leaving this block
In most cases you do not want explicit exception handling for errors raised while closing, so just skip these explicit close() calls.
Edit
Here's some code, for the non-believers, where it really matters to use this pattern. You may also like to read the Apache Commons IOUtils javadoc for the closeQuietly() method.
OutputStream out1 = null;
OutputStream out2 = null;
try {
    out1 = new ...;
    out2 = new ...;
    ...
    out1.close(); // can be skipped if we do not care about exception-handling while closing
    out2.close(); // can be skipped if we ...
}
finally {
    /*
     * I've some custom methods in my projects overloading these
     * closeQuietly() methods with a 2nd param taking a logger instance,
     * because usually I do not want to react on Exceptions during close
     * but want to see it in the logs when it happened.
     */
    IOUtils.closeQuietly(out1);
    IOUtils.closeQuietly(out2);
}
Using @Tom's "advice" will leave out1 open when the creation of out2 raises an exception. That advice comes from someone who calls this pattern a continual source of errors "for obvious reasons". Well, I may be blind, but it's not obvious to me. My pattern is idiot-safe in every use case I can think of, while Tom's pattern is error-prone.
Closing the outermost one is sufficient (i.e. the BufferedReader). Reading the source code of BufferedReader we can see that it closes the inner Reader when its own close method is called:
public void close() throws IOException {
    synchronized (lock) {
        if (in == null)
            return;
        in.close();
        in = null;
        cb = null;
    }
}
As a rule of thumb, you should close everything in the reverse order that you opened them.
I would close all of them in the reverse order from which you opened them, as if opening a reader pushed it onto a stack and closing popped it off.
In the end, after closing them all, the "reader stack" must be empty.
You only need to close the actual resource. You should close the resource even if constructing the decorators fails. For output, you should flush the outermost decorator object in the happy case.
Some complications:
Sometimes the decorators are different resources (some compression implementations use the C heap).
Closing decorators in sad cases actually causes flushes, with ensuing confusion such as not actually closing the underlying resource.
It looks like your underlying resource is a URLConnection, which doesn't have a disconnect/close method as such.
You may wish to consider using the Execute Around idiom so you don't have to duplicate this sort of thing.
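A minimal sketch of Execute Around for this case, assuming Java 8; the StreamAction interface and readFrom method names are mine, not from any library:
interface StreamAction<T> {
    T apply(BufferedReader reader) throws IOException;
}

class Streams {
    // opens the reader, runs the caller's logic, and always closes, all in one place
    static <T> T readFrom(URLConnection conn, StreamAction<T> action) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            return action.apply(reader);
        }
    }
}

// usage: callers supply only the reading logic, never the open/close boilerplate
String firstLine = Streams.readFrom(conn, reader -> reader.readLine());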