If I use FileSystems.getDefault().getPath(), it holds a resource at FileSystems.getDefault(). Static analysis tools such as SonarQube and Coverity report high-impact resource-leak issues for this.
If I replace it with Paths.get(), all such tools quietly accept the code, with no resource-leak error or warning.
Yet if we look at the implementation of Paths.get(), it literally calls FileSystems.getDefault().getPath().
My question is: how does Java handle the resource leak for Paths.get()? The code is exactly the same, but we have no reference to FileSystems.getDefault() to close explicitly.
Your tools are reporting a false positive. That's because the declared return type of FileSystems.getDefault() is FileSystem, which implements Closeable. That means that ideally you should close it. However, the default file system is an exception: its close() method even throws an UnsupportedOperationException. Your tools cannot make that distinction.
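You can see this for yourself with a small sketch (my own, not from the tools' documentation): calling close() on the default file system throws UnsupportedOperationException, which is exactly why the analyzers' demand to close it is a false positive.

```java
import java.io.IOException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;

public class DefaultFileSystemClose {
    public static void main(String[] args) {
        FileSystem fs = FileSystems.getDefault();
        try {
            fs.close(); // the default file system refuses to be closed
        } catch (UnsupportedOperationException e) {
            System.out.println("default file system cannot be closed");
        } catch (IOException e) {
            // declared by Closeable, but not reached for the default file system
        }
    }
}
```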
This happens more often than you'd think. Some examples I've seen too often:
The result of Objects.requireNonNull. The input is returned as-is, but if the input is AutoCloseable my IDE treats it as a new resource.
The input stream or reader from a servlet request, and the output stream or writer from a servlet response.
There are some cases where tools and IDEs can be smart. For instance, if I declare a variable as ByteArrayInputStream, ByteArrayOutputStream, StringReader or StringWriter, then my IDE knows they don't need to be closed. However, when I return these from a method as InputStream, OutputStream, Reader or Writer respectively, my IDE starts complaining if I don't close them.
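To illustrate that last point with a small sketch of my own: once the concrete in-memory type is hidden behind the general interface, a tool can no longer tell that close() is a no-op.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class NoOpClose {
    // Returned as OutputStream, so analyzers must assume it needs closing,
    // even though ByteArrayOutputStream.close() is documented to have no effect.
    static OutputStream newSink() {
        return new ByteArrayOutputStream();
    }

    public static void main(String[] args) throws IOException {
        OutputStream out = newSink(); // typically flagged if never closed
        out.write('x');
        out.close();                  // harmless no-op that silences the tool
        System.out.println("done");
    }
}
```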
If you know that it is a false positive, you can use @SuppressWarnings("resource") to ignore warnings from some tools. That can often be applied to single variables:
@SuppressWarnings("resource") FileSystem defaultFS = FileSystems.getDefault();
Path path = defaultFS.getPath("foo");
However, even then sometimes your tools will complain, and you have to suppress the resource warnings for the entire method. If that's the case, try to keep your method (and therefore the scope of the warning suppression) as small as possible; split off code to a new method if needed.
I saw this example on the web and in the book Effective Java (by Joshua Bloch).
try (BufferedWriter writer = new BufferedWriter(new FileWriter(fileName))) {
    writer.write(str); // do something with the file we've opened
} catch (IOException e) {
    // handle the exception
}
There is no problem with this example: the BufferedWriter will automatically get closed, and that, in turn, will close the FileWriter. However, if we declare two nested resources this way:
try (AutoCloseable res = new Impl2(new Impl1())) { ... }
I guess it might happen that new Impl1() completes fine but new Impl2() throws, and in that case Java would have no reference to the Impl1 instance in order to close it.
Shouldn't it be a better practice to always declare multiple resources independently (even if not required in this case), like this?
try (FileWriter fw = new FileWriter(fileName);
     BufferedWriter writer = new BufferedWriter(fw)) { ... }
After some searching I was able to find this article: https://dzone.com/articles/carefully-specify-multiple-resources-in-single-try
By definition, JLS 14.20.3 says a ResourceList consists of Resources separated by ;. Based on that we can conclude that a nested initialization like AutoCloseable res = new Impl2(new Impl1()) is a single resource. Because of that, the rules defined for try-with-resources with multiple resources won't apply here, the important ones being:
Resources are initialized in left-to-right order. If a resource fails to initialize (that is, its initializer expression throws an exception), then all resources initialized so far by the try-with-resources statement are closed. If all resources initialize successfully, the try block executes as normal and then all non-null resources of the try-with-resources statement are closed.
Resources are closed in the reverse order from that in which they were initialized. A resource is closed only if it initialized to a non-null value. An exception from the closing of one resource does not prevent the closing of other resources. Such an exception is suppressed if an exception was thrown previously by an initializer, the try block, or the closing of a resource.
What is more, Impl1#close() won't be called unless it is explicitly called inside Impl2#close().
In short, it is better to declare multiple resources as separate statements separated with ;, like so:
try (Impl1 impl1 = new Impl1();
     Impl2 impl2 = new Impl2(impl1))
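A runnable sketch of the difference (Impl1 and Impl2 here are stand-ins I wrote to demonstrate the rule, not the asker's classes): with the nested form, the inner resource leaks when the outer constructor throws; with separate declarations, try-with-resources closes it.

```java
public class TwrDemo {
    static class Impl1 implements AutoCloseable {
        static boolean closed = false;
        @Override public void close() { closed = true; }
    }

    static class Impl2 implements AutoCloseable {
        Impl2(Impl1 inner) { throw new IllegalStateException("Impl2 init failed"); }
        @Override public void close() { }
    }

    public static void main(String[] args) {
        // Nested form: new Impl1() succeeds, Impl2's constructor throws,
        // and nothing ever closes the Impl1 instance.
        try (AutoCloseable res = new Impl2(new Impl1())) {
        } catch (Exception e) { }
        System.out.println("nested form, Impl1 closed: " + Impl1.closed);

        // Separate form: a was fully initialized as its own resource,
        // so the statement closes it when b's initializer throws.
        Impl1.closed = false;
        try (Impl1 a = new Impl1();
             Impl2 b = new Impl2(a)) {
        } catch (Exception e) { }
        System.out.println("separate form, Impl1 closed: " + Impl1.closed);
    }
}
```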
If the exception happens in the try part, there is no resource leakage (assuming Impl1 is written well). Notice that a raised exception in Impl1() will not reach Impl2 as the constructor argument is evaluated before calling it.
try (AutoClosable res = new Impl2(new Impl1())) {
So it is fine to nest such wrapping constructors; it is better style as long as the code does not become too long.
One remark: FileWriter and FileReader are old utility classes that use the platform default encoding, which can differ per application installation. Prefer:
Path path = Paths.get(fileName);
try (BufferedWriter writer =
        Files.newBufferedWriter(path, StandardCharsets.UTF_8)) {
First of all, as a little sanity check: I could not find the example you provided in Joshua Bloch's Effective Java (3rd edition, 2018). If you are reading a previous edition, it would be better to get the latest one. If I have missed it, please point me to the particular page.
Now with respect to the question itself.
Let's begin with JLS §14.20.3, which says that Resources are separated by ;. This means that regardless of how long our decoration chain of object creations is (e.g. new Obj1(new Obj2(...new ObjK(...)))), it will be treated as one single resource, as it is one definition/statement.
Now that we know what constitutes a single resource, let me shed some light from my observations.
Reasons why it is better to define resources separately
JLS §14.20.3 also states that:
If a resource fails to initialize (that is, its initializer expression throws an exception), then all resources initialized so far by the try-with-resources statement are closed. If all resources initialize successfully, the try block executes as normal and then all non-null resources of the try-with-resources statement are closed.
Q: What does that mean for us?
A: If that single resource is a chain of objects passed into wrapping constructors, an exception thrown during the initialization phase will not close the embedded resources. At best, .close() is invoked on the outer (wrapping, enclosing) object once it has been fully constructed, so you might easily end up with a resource leak.
On the other hand, you can rest assured that all resources will be closed if you have defined them separately.
JLS §14.20.3 also states that:
An exception from the closing of one resource does not prevent the closing of other resources. Such an exception is suppressed if an exception was thrown previously by an initializer, the try block, or the closing of a resource.
Q: What does that mean for us?
A: If you have declared your resources separately, then it does not matter which one throws an exception and which does not: close() will be attempted on all of them.
Unfortunately the book you mentioned (at least the 3rd edition of it) does not cover the question you brought up here; however, I did some more research and found that Core Java (10th edition) by Cay S. Horstmann confirms the point I referred to above, in its §7.2.5:
When the block exits normally, or when there was an exception, the in.close() method is called, exactly as if you had used a finally block.
and
No matter how the block exits, both in and out are closed.
in and out are AutoCloseable objects in the examples given in the book.
Q: What does this mean?
A: It means that an exception thrown from any phase of one resource does not in any way affect how the other resources are closed.
Based on all of the above, I think it also depends on how the resources are implemented. E.g. if the enclosing resource, upon an invocation of its .close(), implements the logic to close the enclosed resource, then your concern:
I guess it might happen that new Impl1() performs fine, but new Impl2() crashes, and in this case Java would have no reference to the Impl1, in order to close it.
will not really be a problem in your particular case, as the container resource being closed will hold a reference to the contained resource and will eventually close it (or will simply implement its closure).
However, it is still a bad idea to construct resources by chaining them into wrapper constructors, for the several reasons I brought up.
P. S.
In any case, regardless of how the resources are implemented or how you chain them, everything (the JLS, the Core Java book, the OCP Java SE 8 book, and the points brought up here) indicates that it is always best to declare your resources separately.
The File class in Java contains methods that use boolean return values to indicate whether the operation succeeded. Users of these methods are required to check the return value every time they are called.
Below is a snippet of the documentation, taken from mkdir(), stating the requirement:
public boolean mkdir()
Creates the directory named by this file, assuming its parents exist. Use mkdirs if you also want to create missing parents.
Note that this method does not throw IOException on failure. Callers must check the return value.
There's also the case of createNewFile(), which (even more oddly) uses both a boolean return value and thrown exceptions to indicate success:
public boolean createNewFile() throws IOException
Creates a new, empty file on the file system according to the path information stored in this file. This method returns true if it creates a file, false if the file already existed. Note that it returns false even if the file is not a file (because it's a directory, say).
...
Note that this method does not throw IOException if the file already exists, even if it's not a regular file. Callers should always check the return value, and may additionally want to call isFile.
Now, this seems inconvenient at best, because the user has to anticipate two kinds of error scenario instead of just using a simple try-catch block.
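To make the two error channels concrete, here is a small sketch of what callers end up writing (the file name is my own placeholder):

```java
import java.io.File;
import java.io.IOException;

public class CreateNewFileDemo {
    public static void main(String[] args) {
        File f = new File("createnewfile-demo.txt"); // hypothetical file name
        try {
            // error channel 1: a boolean meaning "already existed"
            if (f.createNewFile()) {
                System.out.println("created");
            } else {
                System.out.println("already existed");
            }
        } catch (IOException e) {
            // error channel 2: an exception for genuine I/O failures
            System.out.println("I/O error: " + e.getMessage());
        } finally {
            f.delete();
        }
    }
}
```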
What's the reason behind this fuss?
Because that's the way they designed it, over twenty years ago. If you can get the developers out of their retirement homes and off their Zimmer frames you might get a better answer. Otherwise we are all just guessing.
However, you don't need to call these methods as often as some people here seem to think. For example, isFile()/exists()/delete()/createNewFile() are all redundant before new FileInputStream(...) or new FileOutputStream(...), which will throw exactly the exceptions you are looking for. Calling File.exists()/delete()/createNewFile() before either of these, or before the corresponding FileReader/FileWriter constructors, is worse than useless: it is a positive waste of time and space, doing work that the constructor (or rather the operating-system code invoked by the constructor) has to repeat. I doubt I have ever used File.createNewFile() in 20 years.
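A sketch of that point (the file name here is assumed not to exist): the constructor itself reports the condition that exists() would have checked.

```java
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class OpenWithoutExistsCheck {
    public static void main(String[] args) {
        // No exists() call needed: the constructor throws if the file is missing.
        try (FileInputStream in = new FileInputStream("no-such-file-12345.bin")) {
            System.out.println("opened");
        } catch (FileNotFoundException e) {
            System.out.println("file not found");
        } catch (IOException e) {
            System.out.println("other I/O error");
        }
    }
}
```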
I have a class called SQLProvider; it contains methods for both opening and closing a SQLite database connection. Using annotations or another approach, is it possible to flag a compiler warning if the open method is used without close also being called?
{
    SQLProvider provider = new SQLProvider();
    provider.open();
    // display a compiler warning unless provider.close() is also invoked in this code block
}
I am not sure it is the best approach, but you can make your class implement the Closeable interface. According to the Eclipse documentation, Eclipse will display a warning:
When enabled, the compiler will issue an error or a warning if a local variable holds a value of type 'java.lang.AutoCloseable' (compliance >= 1.7) or a value of type 'java.io.Closeable' (compliance <= 1.6) and if flow analysis shows that the method 'close()' is not invoked locally on that value.
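A minimal sketch of the idea (the internals of SQLProvider here are placeholders, since the real class wasn't shown): implementing AutoCloseable is what lets Eclipse's flow analysis track the variable, warn when close() is unreachable, and allow try-with-resources.

```java
public class SQLProvider implements AutoCloseable {
    private boolean open;

    public void open() {
        // real code would open the SQLite connection here
        open = true;
    }

    @Override
    public void close() {
        // real code would close the connection here
        open = false;
    }

    public boolean isOpen() { return open; }

    public static void main(String[] args) {
        // try-with-resources guarantees close(), so the analyzer stops warning
        try (SQLProvider provider = new SQLProvider()) {
            provider.open();
            System.out.println("open: " + provider.isOpen());
        }
        // here the provider has been closed automatically
    }
}
```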
There is currently no such facility in the standard Java toolchain.
In addition to the Eclipse compiler warnings (see New Idiot's answer), some static code analysers can warn you about this:
For PMD, the CloseResource design rule covers this (or at least some subcases ...)
FindBugs can also do this; e.g. FindBugs - "may fail to close stream" when using ObjectOutputStream
But the problem is that these warnings are more or less heuristic. They only understand certain standard idioms for reliably closing resources. If you do it some other way (which may still be provably reliable) they are likely to produce a false warning. This is not necessarily bad (because doing this in a non-standard way is liable to fool or puzzle future readers!). However, the possibility of false positives may tempt developers to ignore or (worse still) suppress the warnings.
There are a couple of additional issues:
These checks may give a false negative for resource objects that have a close() method without implementing Closeable or AutoCloseable.
These checks may give a false negative for Closeable / AutoCloseable resource objects where the close() operation is a no-op; for example, StringWriter.
These issues may explain why the standard Java toolchain doesn't support this.
I am using the getResponseBody() method of the org.apache.commons.httpclient.methods.PostMethod class. However, I am always getting a message written to the console at runtime:
WARNING: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
In the code I have to write the response to a byte array anyway, so it is the getResponseBody() method that I ought to use. But is there an easy way that I can suppress the warning message so I don't have to look at it at every run?
If it were a compiler error, I'd use the @SuppressWarnings annotation, but this isn't a compile-time issue; it happens at runtime. Also, I could use getResponseBodyAsStream to write to a ByteArrayOutputStream, but this seems like a hacky way to get around the warning (extra lines of code to do what getResponseBody() is already doing for me).
My guess is that the answer involves System.out or System.err manipulation, but is there a good way to do this?
If you want to cleanly stop this log entry, there's a tunable max that triggers the warning. I saw this looking at the code.
int limit = getParams().getIntParameter(HttpMethodParams.BUFFER_WARN_TRIGGER_LIMIT, 1024 * 1024);
if ((contentLength == -1) || (contentLength > limit)) {
    LOG.warn("Going to buffer response body of large or unknown size. "
            + "Using getResponseBodyAsStream instead is recommended.");
}
HttpMethodBase.setParams() looks like the place to set HttpMethodParams.BUFFER_WARN_TRIGGER_LIMIT to the desired value.
That warning happens when HttpClient has no idea about the length of the returned data. You should set the Content-Length header on your server end:
response.addHeader("Content-Type", "text/html; charset=utf-8");
response.addHeader("Content-Length", String.valueOf(output.getBytes("utf8").length));
After that, the warning should disappear.
I would recommend that you do as the warning suggests and use a stream rather than a byte array. If the response you're trying to push is particularly large (suppose it's a large file), you will load it all into memory, and that'd be a very bad thing.
You're really better off using streams.
That said, you might hack around it by replacing System.err or System.out temporarily. They're just PrintStream objects, and they're settable with the setOut and setErr methods.
PrintStream oldErr = System.err;
PrintStream newErr = new PrintStream(new ByteArrayOutputStream());
System.setErr(newErr);
// do your work
System.setErr(oldErr);
Edit:
I agree that it would be preferable to use streams, but as it is now, the target API where I need to put the response is a byte array. If necessary, we can do a refactor to the API that will allow it to take a stream; that would be better. The warning is definitely there for a reason.
If you can modify the API, do so. Stream processing is the best way to go in this case. If you can't, due to internal pressures or whatever, go @John M's route and raise BUFFER_WARN_TRIGGER_LIMIT; but make sure you have a known contentLength, or even that route will fail.
Is the library outputting through log4j? If so, editing log4j.properties to set the level for this class to ERROR would work, e.g.
log4j.logger.org.apache.commons.httpclient.methods.PostMethod=ERROR
This issue has already been debated on the ASF JIRA. There are two ways to resolve this:
Set a higher value for BUFFER_WARN_TRIGGER_LIMIT. A high enough value is more likely to suppress the warning; however, that defeats the purpose of the warning itself. If you are buffering a lot of data to be eventually parsed in one pass, you are better off reading the data from a stream into an array before working on the array.
Set the logging level to a higher value for HttpClient, if you are comfortable with ERROR or FATAL.
Assuming the warning is written to stderr, you can always suppress the warning by piping stderr to /dev/null, or whatever the equivalent on your system is.
To simply 'save' the stderr messages and then print them after completion of the main task:
PrintStream origErr = System.err;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PrintStream newErr = new PrintStream(baos);
System.setErr(newErr);
// ... do stuff ...
System.setErr(origErr);
System.err.print(baos);
I'm writing a test for a piece of code that has an IOException catch in it that I'm trying to cover. The try/catch looks something like this:
try {
oos = new ObjectOutputStream(new FileOutputStream(cacheFileName));
} catch (IOException e) {
LOGGER.error("Bad news!", e);
} finally {
The easiest way seems to make FileOutputStream throw a FileNotFoundException, but perhaps I'm going about this all the wrong way.
Anyone out there have any tips?
You could set cacheFileName to an invalid name or to one you know doesn't exist.
From your comment:
Yes, I suppose the question should really have been "How do I create a file that does not exist on both Linux and Windows?" On Windows, I can use new File("X:/"), where X: is a drive letter that does not exist. On Linux, this does not work because that is a valid file name.
Look at java.io.File.createTempFile. Use it to create the file and then delete it.
Probably pass it something like:
File tempFile = File.createTempFile(getClass().getName(),
        Long.toString(System.currentTimeMillis()));
tempFile.delete();
That should give you a unique name in a platform-independent manner that you can safely use without (much) fear of it existing.
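One caveat worth noting: a merely nonexistent file name is not enough to make new FileOutputStream(...) throw, because the constructor would simply create the file. A path that cannot be a writable file, such as an existing directory, does reliably throw FileNotFoundException. A runnable sketch of that variant (the temp-file dance is my own scaffolding):

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class TriggerIoException {
    public static void main(String[] args) throws IOException {
        // Build a unique path, then turn it into a directory so that
        // opening it as a file must fail.
        File dir = File.createTempFile("trigger-io", null);
        dir.delete();
        dir.mkdir(); // now a directory, not a regular file
        try {
            new FileOutputStream(dir).close();
            System.out.println("no exception");
        } catch (FileNotFoundException e) {
            System.out.println("FileNotFoundException thrown");
        } finally {
            dir.delete();
        }
    }
}
```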
There are two parts to any test: getting it to happen and measuring that you got the correct result.
Fault Injection
The easiest answer is the one that's already been mentioned, which is to set cacheFileName to a file that will never exist. This is likely the most practical answer in this situation.
However, to cause an arbitrary condition such as an IOException, what you really want is Fault Injection. This forces faults in your code without forcing you to instrument your source code. Here are a few methods for doing this:
Mock objects: You could use a factory method to create an overridden ObjectOutputStream or FileOutputStream. In test code the implementation would throw an IOException when you wanted to, and in production code would not modify the normal behavior.
Dependency Injection: In order to get your mock object in the right place you could use a framework such as Spring or Seam to "inject" the appropriate object into the class that's doing the work. You can see that these frameworks even have a priority for objects that will be injected, so that during unit testing you can override the production objects with test objects.
Aspect-Oriented Programming: Instead of changing the structure of your code at all, you can use AOP to inject the fault in the right place. For instance, using AspectJ you could define a Pointcut where you wanted the exception to be thrown from, and have the Advice throw the desired exception.
There are other answers to fault injection on Java; for instance a product called AProbe pioneered what could be called AOP in C long ago, and they also have a Java product.
Validation
Getting the exception thrown is a good start, but you also have to validate that you got the right result. Assuming that the code sample you have there is correct, you want to validate that you logged that exception. Someone above mentioned using a Mock object for your logger, which is a viable option. You can also use AOP here to catch the call to the logger.
I assume that the logger is log4j; to solve a similar problem, I implemented my own log4j appender which captures log4j output: I specifically capture only ERROR and FATAL, which are likely to be the interesting log messages in such a case. The appender is referenced in log4j.xml and is activated during the test run to capture error log output. This is essentially a mock object, but I didn't have to restructure all my code that got a log4j Logger.
I'm writing a test for a piece of code that has an IOException catch in it that I'm trying to cover.
I'm not entirely sure I understand your goal, but if you want to test if the exception is thrown, you can tell the test you expect it to throw the exception:
@Test(expected = IOException.class)
Your test will then fail if the exception is not thrown, and succeed if it is thrown (like, if the cacheFileName file does not exist).
A FileNotFoundException would obviously trigger the catch. The javadoc states the cases where it will be thrown.
You should also consider that the ObjectOutputStream constructor can throw an IOException, so you may want to cover this case in your tests.
Two easy ways would be either set cacheFileName to a non-existent file or set the specified file to read-only access.
-John
As the code is currently written, you could try to mock out the error() call on the LOGGER object and check to see if it gets called when you expect an IOException.
Your desire to test may have uncovered a fundamental problem with the code as written. An error is occurring, but there's no boolean, flag value, or special filename pattern that lets other sections of code determine whether writing to the file was successful. If this is contained in a function, maybe you could return a boolean or set an object-level variable.
cacheFileName = "thisFileShouldNeverExistAndIfItDoesYouAreScrewingUpMyTests";
Sure, you could take steps and jump through hoops to programmatically make sure that the file name will never, ever, ever exist, or you could use a string that will never exist in 99.99999% of cases.
I hope this is what you meant.
if (new File(cacheFileName).exists()) {
    oos = new ObjectOutputStream(new FileOutputStream(cacheFileName));
    // do your code here
} else {
    throw new FileNotFoundException("File doesn't exist!");
}