I started using javax.annotation, especially to warn the next developer who may be working with my code in the future.
But while I was using the javax.annotation @Nonnull annotation, a question came to my mind:
If you mark e.g. a parameter of a method with the @Nonnull annotation to say that it has to have a value,
do you still need to handle the case that the next developer who uses your code passes null to your function?
I found one con argument and one pro argument for still handling the special cases.
con: The code is cleaner, especially if you have multiple parameters that you mark with @Nonnull:
private void foo(@Nonnull Object o)
{
/*do something*/
}
vs
public void foo(Object o)
throws NullPointerException
{
if (o == null)
{
throw new NullPointerException("Given Object must have a value!");
}
/*do something*/
}
pro: It could cause unhandled errors if the next developer ignores the annotations.
This is an unsolved problem in the nullity annotation space. There are two viewpoints that sound identical but are, in fact, exact opposites. Given a parameter in void foo(@NonNull String param), what does that annotation imply?
It's compiler-checkable documentation that indicates you should not pass null as param here. It does not mean that it is impossible to do so, or that one ought to consider it impossible. Simply that one should not: it's compiler-checkable documentation that the method has no defined useful behaviour if you pass null here.
The compiler is extended to support these annotations by treating the whole thing as a single type: the type of param is @NonNull String, and the compiler knows what that means and will in fact ensure it. The type of the parameter is @NonNull String and therefore cannot be null, just like it can't be, say, an InputStream instance either.
Crucially, then, the latter means a null check is flagged as silly code, whereas the former means a lack of a null check is flagged as bad. Hence, opposites. The latter considers a null check a warnable offense (with something along the lines of 'param can never be null here'), for the same reason this is silly code:
void foo(String arg) {
if (!(arg instanceof String)) throw new IllegalArgumentException("arg");
}
That if clause cannot possibly fire. The mindset of various null-checker frameworks is identical here, and therefore they flag this as silly code:
void foo(@NonNull String arg) {
if (arg == null) throw new NullPointerException("arg");
}
The simple fact is, plenty of Java devs do not enable annotation-based nullity checking, and even if they did, there are at least 10 competing annotations, many of which mean completely different things and work completely differently. The vast majority of your callers will not be using a checking framework that works the way you think it should; therefore, the advice to remove the null check because it is silly is actively bad: you should add that null check. The linting tools that flag it down are misguided; they want to pretend to live in a world where every Java programmer on the planet uses their tool. That isn't true and is unlikely to ever become true; hence, they are wrong.
A few null-checking frameworks sort of live both lives and will allow you to test whether an argument marked as @NonNull is null, but only if the if body starts with throw; otherwise it's flagged.
To answer your questions:
You should null-check. After all, other developers who use your code may not get the nullity warnings from the null-check tool (either other team members working on the same code base but using slightly different tools and/or configurations of those tools, or, if your code is a library, another project using it, a more obvious route to a situation with different tools/configs). The best way to handle a null failure is a compile-time error. A close second is an exception that is clear about the problem and whose stack trace can be used to very quickly solve the bug. A distant third is random bizarreness that takes a while to debug. The explicit null check gives you a nice fallback: if for whatever reason the write-time tooling doesn't catch the problem, the check turns it into the second, still quite acceptable case: an exception at the point of failure that is clear about what happened and where to fix it.
Lombok's @NonNull annotation can generate that check for you, if you want. Now you have the best of both worlds: just a @NonNull annotation (no clutter) and yet a runtime exception if someone does pass null anyway. (Disclaimer: I'm one of the core contributors to Lombok.)
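For illustration, a minimal sketch of what that gives you (the exact message Lombok generates may vary by version):

import lombok.NonNull;

public class Example {
    public void foo(@NonNull Object o) {
        // Lombok inserts, roughly:
        //   if (o == null) throw new NullPointerException("o is marked non-null but is null");
        /* do something */
    }
}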
If your linting tool complains about a 'pointless null check' on the line if (param == null) throw new NullPointerException("param");, find the option in the linting tool to exclude if checks that result in throw statements. If the linting tool cannot be configured to ignore this case, do not use the linting tool; find a better one.
Note that modern JVMs will throw a NullPointerException with the name of the expression as the message if you dereference a null pointer, which may obviate the need for an explicit check. However, you're then dependent on that method always dereferencing that variable forevermore; if someone ever changes it and e.g. assigns the value to a field and returns, you now have a problem: it should have thrown the exception, in order to ensure the bug is found quickly and with an exception that explains what happened and where to fix it. Hence I wouldn't rely on the JVM feature for your NPEs.
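For example, a quick sketch of that JVM feature (JEP 358, 'Helpful NullPointerExceptions', on by default since JDK 15; the exact wording is JVM-dependent):

String s = null;
int n = s.length();
// throws something like:
// java.lang.NullPointerException: Cannot invoke "String.length()" because "s" is null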
Error messages should be as short as they can be without skimping on detail. They should also not end in punctuation, especially exclamation marks. Every exception tends to be noteworthy enough to warrant an exclamation mark, but it gets tedious to read them, so do not add them. In fact, the proper thing to throw is: throw new NullPointerException("o") - and you might want to rename that parameter to something more readable if you find o ugly. Parameters are mostly public API info (JVM-technically they are not, but javadoc does include them, which is the basis of API docs, so you should consider them public; therefore they should have clear names, which you can then reuse). That exception conveys all relevant information to a programmer: the nature of the problem (null was sent to code that does not know how to handle it), where it happened (the stack trace does that automatically), and the specifics (which thing was null). Your message is much longer and doesn't add anything. At best you can say your message might be understood by a non-coder, except this is both untrue (as if a stack trace is something random Joe Computeruser is going to understand) and irrelevant (it's not as if they could fix the problem even if they did know what it means). Using exception messages as UI output just doesn't work, so don't try.
You may want to adjust your style guide to allow braceless if statements provided that the if expression is simple (no && or ||), possibly with an additional rule that the single statement must be a control statement: break;, continue;, return (something);, or throw something;. This significantly improves readability when checking multiple parameters. The point of a style guide is to create legible code. Surely this:
if (param1 == null) throw new NullPointerException("param1");
if (param2 == null) throw new NullPointerException("param2");
is far more legible, especially considering this method has more lines than just those two, than this:
if (param1 == null) {
throw new NullPointerException("param1");
}
if (param2 == null) {
throw new NullPointerException("param2");
}
Style guides are just a tool. If your style guide leads to less productivity and harder-to-read code, the answer should be obvious: fix or replace the tool.
I have seen many times that using the functional API in Java is really verbose and error-prone when we have to deal with checked exceptions.
E.g: it's really convenient to write (and easier to read) code like
var obj = Objects.requireNonNullElseGet(something, Other::get);
Indeed, it also avoids improper multiple invocations of getters, as when you write
var obj = something.get() != null ? something.get() : other.get();
// ^^^^ first ^^^^ ^^^^ second ^^^^
BUT everything becomes a jungle when you have to deal with checked exceptions, and I have sometimes seen this really ugly code style:
try {
Objects.requireNonNullElseGet(obj, () -> {
try {
return invokeMethodWhichThrows();
} catch (Exception e) {
throw new RuntimeException(e);
}
});
} catch (RuntimeException r){
Throwable cause = r.getCause();
if(cause == null)
throw r;
else
throw cause;
}
whose only intent is to handle checked exceptions the way you would when writing code without lambdas. Now, I know that those cases can be better expressed with the ternary operator and a variable to hold the result of something.get(), but that's also the case for Objects.requireNonNullElse(a, b), which is there, in the java.util package of the JDK.
The same can be said for logging frameworks' methods which take Suppliers as parameters and evaluate them only if needed; BUT if you need to handle checked exceptions in those suppliers, you have to invoke the code yourself and explicitly check the log level:
if(LOGGER.isDebugEnabled())
LOGGER.debug("request from " + resolveIPOrThrow());
Similar reasoning applies to Futures, but let me move on.
My question is: why is Functional API in java not handling checked exceptions?
For example having something like a ThrowingSupplier interface, like the one below, can potentially fit the need of dealing with checked exceptions, guarantee type consistency and better code readability.
interface ThrowingSupplier<O, T extends Exception> {
O get() throws T;
}
Then we would need to duplicate the methods that use Suppliers with overloads that take ThrowingSuppliers and throw exceptions. But we as Java developers are used to this kind of duplication (as with Stream, IntStream, LongStream, or methods with overloads to handle int[], char[], long[], byte[], ...), so it's nothing too strange for us.
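For instance, a hypothetical overload (not in the JDK; the name mirrors java.util.Objects purely for illustration) might look like:

public static <O, T extends Exception> O requireNonNullElseGet(
        O obj, ThrowingSupplier<? extends O, T> supplier) throws T {
    return obj != null ? obj : Objects.requireNonNull(supplier.get());
}

// usage: the checked exception now propagates naturally, no try/catch wrapping
var obj = requireNonNullElseGet(something, () -> invokeMethodWhichThrows());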
I would really appreciate it if someone with deep knowledge of the JDK could explain why checked exceptions were excluded from the functional API, and whether there was a way to incorporate them.
This question can be interpreted as 'why did those who made this decision decide it this way?', which amounts to asking: "Please summarize 5 years of serious debate, specifically what Brian Goetz and co thought about it", which is impossible, unless your name is Brian Goetz. He does not answer questions on SO as far as I know. You can go spelunking in the archives of the lambda-dev mailing list if you want.
One could make an informed guess, though.
In-scope vs Beyond-scope
There are 3 transparencies that lambdas do not have:
Control flow.
Checked exceptions.
Mutable local variables.
Control flow transparency
Take this code, as an example:
private Map<String, PhoneNumber> phonebook = ...;
public PhoneNumber findPhoneNumberOf(String personName) {
phonebook.entrySet().stream().forEach(entry -> {
if (entry.getKey().equals(personName)) return entry.getValue();
});
return null;
}
This code is silly (why not just do a .get, or, if we must stream through the thing, why not use .filter and .findFirst?), but if you look past that, it doesn't even work: you cannot return from the method within that lambda. That return statement returns from the lambda (and is thus a compiler error; the lambda you pass to forEach must return void). You can't continue or break a loop that is outside the lambda from inside it, either.
Contrast to a for loop that can do it just fine:
for (var entry : phonebook.entrySet()) {
if (entry.getKey().equals(personName)) return entry.getValue();
}
return null;
This does exactly what you think, and works fine.
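And for reference, the .filter/.findFirst rewrite alluded to above would be something like:

public PhoneNumber findPhoneNumberOf(String personName) {
    return phonebook.entrySet().stream()
        .filter(entry -> entry.getKey().equals(personName))
        .map(Map.Entry::getValue)
        .findFirst()
        .orElse(null);
}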
Checked exception transparency
This is the one you are complaining about. This doesn't compile:
public void printFiles(Path... files) throws IOException {
Arrays.stream(files).forEach(p -> System.out.println(Files.readString(p)));
}
The fact that the surrounding context allows you to throw IOExceptions doesn't help: the above does not compile, because the 'can throw IOExceptions' status doesn't 'transfer' to the inside of the lambda.
There's a theme here: Rewrite it to a normal for loop and it compiles and works precisely the way you want to. So why, exactly, can't we make lambdas work the same way?
Mutable local variable transparency
This doesn't work:
int x = 0;
someList.stream().forEach(k -> x++);
System.out.println("Count: " + x);
You can neither modify local variables declared outside the lambda, nor even read them unless they are (effectively) final. Why not?
These are all GOOD things... depending on scope layering
So far it seems really stupid that lambdas aren't transparent in these 3 regards. But it turns into a good thing in a slightly different context. Imagine, instead of .stream().forEach, something a little bit different:
class DoubleNullException extends Exception {} // checked!
public class Example {
private TreeSet<String> words;
public Example() throws DoubleNullException {
int comparisonCount = 0;
this.words = new TreeSet<String>((a, b) -> {
    comparisonCount++; // mutates a local variable from inside the lambda
    if (a == null && b == null) throw new DoubleNullException(); // throws a checked exception outward
    return a.compareTo(b); // return added so the comparator is otherwise well-formed
});
System.out.println("Comparisons performed: " + comparisonCount);
}
}
Let's imagine the 3 transparencies did work. The above code makes use of two of them (it tries to mutate comparisonCount, and it tries to throw DoubleNullException from inside to outside).
The above code makes absolutely no sense. The compiler errors are very much desired. That comparator is not going to run until perhaps next week, in a completely different thread. It runs whenever you add the second element to the set, which is a field, so who knows who is going to do that, and which thread would do it. The constructor has long since ceased running; local vars are 'on the stack', and thus the local var has disappeared. Never mind that the printing would always print 'Comparisons performed: 0' here; the statement comparisonCount++ would be trying to increment a memory position that no longer holds that variable at all.
Even if we 'fix' this (the compiler realizes that a local is used in a lambda and hoists it onto the heap, which is what most other languages do), the code still makes no sense as a concept: that print statement wouldn't print the count you expect. Also, that comparator can be called from multiple threads, so... do we now allow volatile on our local vars? Quite the can of worms! In current Java, a local variable cannot possibly suffer from thread-concurrency synchronization issues, because it is not possible to share the variable itself with another thread (you can share the object the variable points at, not the variable itself).
The reason you ARE allowed to mess with (effectively) final locals is because you can just make a copy, and that's what the compiler does for you. Copies are fine - if nobody changes anything.
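For completeness, the standard workaround when you do want a count out of such code: keep the variable itself effectively final and mutate the object it points to (a sketch, using AtomicInteger from java.util.concurrent.atomic):

// the variable is effectively final; the object it points to mutates
AtomicInteger count = new AtomicInteger();
someList.forEach(k -> count.incrementAndGet());
System.out.println("Count: " + count.get());

// or, for this particular case, skip the mutation entirely:
long n = someList.stream().count();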
The exception similarly doesn't work: it's the code that calls thatSet.add(someElement) that would get the DoubleNullException. Consider somebody writing:
Example ex;
try {
ex = new Example();
} catch (DoubleNullException e) {
throw new WrappedEx(e);
}
ex.add(null); // assume Example exposes an add method that delegates to words.add
ex.add(null); // BOOM
The line with the remark (BOOM) would throw the DoubleNullException. It 'breaks' the checked exception rules: that line would compile (Set.add doesn't throw DoubleNullException), but it isn't in a context where throwing DoubleNullException is allowed. The catch block in the above snippet can never run.
See how it all falls apart, and nothing makes sense?
The key clue is: What happens to the lambda? Is it 'transported'?
For some situations, you hand a lambda straight to a method, and that method has a 'use it and lose it' mentality: it will run your lambda 0, 1, or many times, but the key is that it runs it right then and there, and once the method you handed the lambda to returns, that lambda is gone. The method did not store the lambda in a field or hand it to other code that stores it in a field, nor did it transport the lambda to another thread.
In such cases (the method is use-it-then-lose-it), the transparencies would certainly be handy and wouldn't "break" anything.
But when the method you hand the lambda to does transport it to a field (such as the constructor of TreeSet which stores the passed comparator in a field, so that future .add calls can call it), the transparencies break down and make no sense.
Lambdas in java are for both and therefore the lack of transparency (in all 3 regards) actually makes sense. It's just annoying when you have a use-it-then-lose-it situation.
POTENTIAL FUTURE JAVA FIX: I've championed it before, but so far it has fallen on mostly deaf ears. Next time I see Brian I might bring it up again. Imagine an annotation or other marker you can stick on the parameter of a method that says: "I shall use it or lose it". The compiler will then ensure you do not transport it (the only thing the compiler will let you do with that param is call .invoke() on it; you can't call anything else, nor can you assign it or hand it to anything else, unless you hand it to a method that has also marked that parameter as @UseItOrLoseIt). Then the compiler can make the transparency happen, with some tactical wrapping for control flow, and for checked exception flow simply by not complaining (checked exceptions are a figment of javac's imagination; the runtime does not have checked exceptions, which is why Scala, Kotlin, and other runs-on-the-JVM languages can do this).
Actually THEY CAN!
As your question ends with - you can actually write O get() throws T. So why do the various functional interfaces, such as Supplier, not do this?
Mostly because it's a pain. I'm honestly not sure why e.g. List's forEach is not defined as:
public <T extends Throwable> void forEach(ThrowingConsumer<? super E, ? extends T> consumer) throws T {
for (E elem : this) consumer.consume(elem);
}
This would work fine and compile (with ThrowingConsumer having the obvious impl), as would declaring the Consumer we have today with that <O, T extends Exception> part.
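That 'obvious impl' would be something along these lines:

@FunctionalInterface
interface ThrowingConsumer<E, T extends Throwable> {
    void consume(E elem) throws T;
}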
It's a bit of a hassle. The way lambdas 'work' is that the compiler has to infer from context which functional interface you are implementing, which notably includes having to bind all the generics. Adding exception binding to this mix makes it even harder. IDEs tend to get a little confused if you're in the middle of writing code in a 'throwing lambda' and start red-underlining rather a lot, and auto-complete and the like are no help, because the IDE can't be useful in that context until it knows which interface you're implementing.
Lambdas as a system were also designed to backwards-compatibly replace any existing usage of the concept, such as Swing's ActionListener. Such listeners couldn't throw either, so having the interfaces in the java.util.function package behave similarly is more familiar and arguably more Java-idiomatic.
The throws T solution would help but isn't a panacea. It solves, to an extent, the lack of checked exception transparency, but does nothing to solve either mutable local var transparency or control flow transparency. Perhaps the conclusion is simply: The benefits of doing it are more limited than you think, the costs are higher than you think. The cost/benefit analysis says: Bad idea, so it wasn't done.
I have an Android error handler which never returns (it logs a message and throws an Error exception). If I call my error handler from a method which normally returns a value, the Android Studio lint checker reports an error because no value is returned. Is there a way to tell Android Studio either that my error handler does not return, or that the point in the code after calling it is in fact unreachable?
Of course I could put in an unnecessary return statement returning a dummy value of the correct type, but this is inelegant and clutters up my app with an unreachable statement.
I can't find a code inspection to disable to prevent the error, but even if there is one to disable, that would stop it reporting really missing return statements.
Just to repeat, this is not a Java syntax issue. People have said that a Java method must return a value of the type declared. This is
(a) not relevant
(b) not true.
The correct statement is that a Java method, if it returns, must return a value of the declared type. This bit of code
public long getUnsignedLong(String columnName)
throws NumberFormatException, NoColumnException
{
String s = getString(columnName, "getUnsignedLong");
if ((s != null) && s.matches("^[0-9]+$")) {
return Long.parseLong(s);
}
else
{
throw(new NumberFormatException("Bad number " + s));
}
}
is perfectly valid Java, and AS does not complain about it. Indeed if I insert an unnecessary return like this
public long getUnsignedLong(String columnName)
throws NumberFormatException, NoColumnException
{
String s = getString(columnName, "getUnsignedLong");
if ((s != null) && s.matches("^[0-9]+$")) {
return Long.parseLong(s);
}
else
{
throw(new NumberFormatException("Bad number " + s));
}
return 0;
}
AS complains that it is unreachable.
My problem with throwing the exception is that if it actually happens, what my app's user sees is a popup window saying that the app has stopped and asking the user if they want to disable it. This isn't very helpful to the user and isn't very helpful to me when the user reports back to me that it has happened. So instead of throwing the exception I call my fatal error handler which looks like this:-
// Always invoked with fatal = true
// Report a fatal error.
// We send a message to the log file (if logging is enabled).
// If this thread is the UI thread, we display a Toast:
// otherwise we show a notification.
// Then we throw an Error exception which will cause Android to
// terminate the thread and display a (not so helpful) message.
public MyLog(Context context, boolean fatal, String small, String big) {
new Notifier(context, small, big);
new MyLog(context, big, false);
throw(new Error());
}
Yes, I know that argument fatal isn't referenced, but its presence arranges that this particular overload of my error handler is called, and you can see that it throws an exception and doesn't return.
My actual problem is that if I replace the throw in getUnsignedLong with a call to my fatal error handler, which doesn't return, AS complains at the end of getUnsignedLong that it is returning without a value. Well, it's wrong: that point is just as unreachable as it was before. I tried putting a contract in front of MyLog saying that it always fails, but this doesn't help, and pressing right-arrow at the AS error report doesn't offer any way of suppressing it. I could put in a dummy return statement or a dummy throw, but either of these would in fact be unreachable code, and I regard that as inelegant and unnecessary.
So my question stands as it was originally asked: how do I tell Android Studio that a method does not return?
Your problem seems to be a Java problem.
In Java, a non-void method must return a value of its declared type on every path that completes normally; your case is no different.
So the simple solution would be to return a dummy value, for the sake of the compiler.
The better (but harder) solution is to avoid having that kind of construction. If a method normally returns a value, the method will have a return, and in case of an error an exception can occur. An error handler should handle an error, which means it should not be on the default path.
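As a sketch of that restructuring (MyLog, Notifier, Context, and the arguments are taken from the question; having the handler return the Error instead of throwing it is an assumption about what satisfies the flow analysis): declare the handler to return a throwable and let the call site do the throwing, so the compiler sees the path end in a throw statement:

// the handler returns instead of throwing, so callers can write `throw ...`
public static Error fatalError(Context context, String small, String big) {
    new Notifier(context, small, big); // same side effects as before
    return new Error();
}

// call site: no dummy return needed, the compiler knows this path ends here
throw fatalError(context, "small text", "big text");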
Also, you may want to have a look here: unreachable-code-error-vs-dead-code-warning-in-java
If you worry about unit test coverage: when you do unit testing, there are always some parts of the code that you can't reach. That's one of the reasons why we almost never aim for 100% coverage.
A method is basically code that you want to access in different ways (it's easier to call one method on a single line than to repeat 50+ lines of code, etc.). When you create a method, you declare its return type as one of:
"void" (never returns a value),
"String" (returns a sequence of characters),
"int" (returns a whole number within the type's bounds),
"boolean" (returns one of two values: true or false),
and more.
In those methods you can do whatever you want, but make sure that in the end they return a value of the type specified in the declaration.
Example:
int y = 2;
boolean booleanMethod(){
y = 6;
return true; //or false, doesn't really matter in this case.
}
boolean trueOrFalse(){
if (y == 2) return true;
else return false;
}
//or
void method(int nr){
nr = 10;
}
Those are just some basic examples of methods in Java, because your problem is a Java language issue, not really an AS one.
AS complains that it is unreachable.
Android Studio is correct. Android Studio is correctly implementing the reachability rules in the Java Language Specification.
This is a Java semantics issue. (It is not a syntax issue, but let's not split hairs.)
Let's create a simple but complete example to illustrate why Android Studio is correct:
public class Test {
public int method1() throws Exception {
int result;
method2();
return result;
}
public boolean method2() throws Exception {
throw new Exception("bad stuff");
}
}
$ javac Test.java
Test.java:5: error: variable result might not have been initialized
return result;
^
1 error
(I don't have a copy of Android Studio to hand, but it would give a compilation error similar to javac's. Try it.)
Why is this an error? After all, by your reasoning, the return statement should be unreachable.
Here's the problem: that is NOT what the JLS says. In fact, JLS 14.21 says the following, among other things. (These statements are excerpts; I have added the numbers for clarity. "iff" is an abbreviation the JLS uses for "if and only if".)
"The block that is the body of a constructor, method, instance initializer, or static initializer is reachable."
"The first statement in a non-empty block that is not a switch block is reachable iff the block is reachable."
"A local variable declaration statement can complete normally iff it is reachable."
"Every other statement S in a non-empty block that is not a switch block is reachable iff the statement preceding S can complete normally."
"An expression statement can complete normally iff it is reachable."
Consider the body of method1.
By #1 - the block is reachable
By #2 - the declaration of result is reachable.
By #3 - the declaration can complete normally
By #4 - the call to method2() is reachable
By #5 - the call can return normally
By #4 - the return statement is reachable.
But it is also clear that if we reached the return statement, then result will not have been definitely initialized. (This is obviously true. And JLS 16 bears this out.)
OK so why did they specify Java this way?
In the general case, method1 and method2 can be in separate compilation units, e.g. classes A and B. That means the methods can be compiled at different times and only brought together at runtime. Now, if the compiler needed to analyze the body of B.method2 to determine whether the return in A.method1 is reachable, consider what happens if:
the code for B.method2 is modified and B recompiled after compiling A, or
a subclass C of B is loaded in which C.method2 returns normally.
In short, if we needed to take account of the flow within method2 when analyzing reachability in method1, we could not come to an answer at compile time.
Conclusions:
The JLS clearly says / means that the (my) example program is erroneous.
It is not a specification mistake.
If Android Studio (or javac) didn't call the example erroneous, then it wouldn't be a valid implementation of Java.
There are some patterns for checking whether a parameter to a method has been given a null value.
First, the classic one. It is common in self-made code and obvious to understand.
public void method1(String arg) {
if (arg == null) {
throw new NullPointerException("arg");
}
}
Second, you can use an existing framework. That code looks a little nicer because it only occupies a single line. The downside is that it potentially calls another method, which might make the code run a little slower, depending on the compiler.
public void method2(String arg) {
Assert.notNull(arg, "arg");
}
Third, you can try to call a method without side effects on the object. This may look odd at first, but it has fewer tokens than the above versions.
public void method3(String arg) {
arg.getClass();
}
I haven't seen the third pattern in wide use, and it feels almost as if I had invented it myself. I like it for its shortness, and because the compiler has a good chance of optimizing it away completely or converting it into a single machine instruction. I also compile my code with line number information, so if a NullPointerException is thrown, I can trace it back to the exact variable, since I have only one such check per line.
Which check do you prefer, and why?
Approach #3 (arg.getClass();) is clever, but unless this idiom sees widespread adoption, I'd prefer the clearer, more verbose methods over saving a few characters. I'm a "write once, read many" kind of programmer.
The other approaches are self-documenting: there's a message you can use to clarify what happened, and that message is of use both when reading the code and at run-time. arg.getClass(), as it stands, is not self-documenting. You could at least use a comment to clarify it for reviewers of the code:
arg.getClass(); // null check
But you still don't get the chance to put a specific message in the runtime exception like you can with the other methods.
Approach #1 vs #2 (null-check+NPE/IAE vs assert): I try to follow guidelines like this:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
Use assert to check parameters on private methods
assert param > 0;
Use null check + IllegalArgumentException to check parameters on public methods
if (param == null) throw new IllegalArgumentException("param cannot be null");
Use null check + NullPointerException where needed
if (getChild() == null) throw new NullPointerException("node must have children");
HOWEVER, since this question may be about catching potential null issues most efficiently, I have to mention that my preferred method for dealing with null is static analysis, e.g. type annotations (such as @NonNull) a la JSR-305. My favorite tool for checking them is:
The Checker Framework:
Custom pluggable types for Java
https://checkerframework.org/manual/#checker-guarantees
If it's my project (i.e. not a library with a public API) and if I can use the Checker Framework throughout:
I can document my intention more clearly in the API (e.g. this parameter may not be null (the default), but this one may be null (@Nullable); the method may return null; etc.). This annotation sits right at the declaration, rather than further away in the Javadoc, so it is much more likely to be maintained.
static analysis is more efficient than any runtime check
static analysis will flag potential logic flaws in advance (e.g. that I tried to pass a variable that may be null to a method that only accepts a non-null parameter) rather than depending on the issue occurring at runtime.
One other bonus is that the tool lets me put the annotations in a comment (e.g. /*@Nullable*/), so my library code can be compatible with both type-annotated and non-type-annotated projects (not that I have any of the latter).
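A minimal sketch of what that looks like under the Checker Framework's Nullness Checker (annotation package as per its manual):

import org.checkerframework.checker.nullness.qual.Nullable;

class Phonebook {
    // parameters and return values are treated as non-null by default
    void add(String name) { /* ... */ }

    // explicitly permits a null return; the checker forces callers to handle it
    @Nullable String lookup(String name) { return null; }
}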
In case the link goes dead again, here's the section from GeoTools Developer Guide:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
5.1.7 Use of Assertions, IllegalArgumentException and NPE
The Java language has for some years now made an assert keyword available; this keyword can be used to perform debug-only checks.
While there are several uses of this facility, a common one is to check method parameters on private (not public) methods. Other uses are post-conditions and invariants.
Reference: Programming With Assertions
Pre-conditions (like argument checks in private methods) are typically easy targets for assertions. Post-conditions and invariants are sometimes less straightforward but more valuable, since non-trivial conditions carry more risk of being broken.
Example 1: After a map projection in the referencing module, an assertion performs the inverse map projection and checks the result
with the original point (post-condition).
Example 2: In DirectPosition.equals(Object) implementations, if the result is true, then an assertion ensures that the hashCode() values are identical, as required by the Object contract.
Use Assert to check Parameters on Private methods
private double scale( int scaleDenominator ){
assert scaleDenominator > 0;
return 1 / (double) scaleDenominator;
}
You can enable assertions with the following command line parameter:
java -ea MyApp
You can turn only GeoTools assertions with the following command line parameter:
java -ea:org.geotools MyApp
You can disable assertions for a specific package as shown here:
java -ea:org.geotools -da:org.geotools.referencing MyApp
Use IllegalArgumentExceptions to check Parameters on Public Methods
The use of asserts on public methods is strictly discouraged, because the mistake being reported has been made in client code; be honest and tell them up front with an IllegalArgumentException when they have screwed up.
public double toScale( int scaleDenominator ){
if( scaleDenominator <= 0 ){
throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
}
return 1 / (double) scaleDenominator;
}
Use NullPointerException where needed
If possible perform your own null checks, throwing an IllegalArgumentException or NullPointerException with detailed information about what has gone wrong.
public double toScale( Integer scaleDenominator ){
if( scaleDenominator == null ){
throw new NullPointerException( "scaleDenominator must be provided");
}
if( scaleDenominator <= 0 ){
throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
}
return 1 / (double) scaleDenominator;
}
Aren't you optimizing a biiiiiiiiiiiiiiit too prematurely!?
I would just use the first. It's clear and concise.
I rarely work with Java, but I assume there's a way to have Assert only operate on debug builds, so that would be a no-no.
The third gives me the creeps, and I think I would immediately resort to violence if I ever saw it in code. It's completely unclear what it's doing.
You can use the Objects Utility Class.
public void method1(String arg) {
Objects.requireNonNull(arg);
}
see http://docs.oracle.com/javase/7/docs/api/java/util/Objects.html#requireNonNull%28T%29
You should not be throwing NullPointerException yourself. If you want a NullPointerException, just don't check the value; it will be thrown automatically when the parameter is null and you attempt to dereference it.
Check out the apache commons lang Validate and StringUtils classes.
Validate.notNull(variable) it will throw an IllegalArgumentException if "variable" is null.
Validate.notEmpty(variable) will throw an IllegalArgumentException if "variable" is empty (null or zero length).
Perhaps even better:
String trimmedValue = StringUtils.trimToEmpty(variable) will guarantee that "trimmedValue" is never null. If "variable" is null, "trimmedValue" will be the empty string ("").
In my opinion, there are three issues with the third method:
The intent is unclear to the casual reader.
Even though you have line number information, line numbers change. In a real production system, knowing that there was a problem in SomeClass at line 100 doesn't give you all the info you need. You also need to know the revision of the file in question and be able to get to that revision. All in all, a lot of hassle for what appears to be very little benefit.
It is not at all clear why you think the call to arg.getClass can be optimized away. It is a native method. Unless HotSpot is coded to have specific knowledge of the method for this exact eventuality, it'll probably leave the call alone since it can't know about any potential side-effects of the C code that gets called.
My preference is to use #1 whenever I feel there's a need for a null check. Having the variable name in the error message is great for quickly figuring out what exactly has gone wrong.
P.S. I don't think that optimizing the number of tokens in the source file is a very useful criterion.
The first method is my preference because it conveys the most intent. There are often shortcuts that can be taken in programming but my view is that shorter code is not always better code.
x == null is super fast; it can be a couple of CPU clocks (including the branch prediction, which is going to succeed). Assert.notNull will be inlined, so there's no difference there.
x.getClass() should not be faster than x == null even if it uses a trap (reason: x will be in some register, and checking a register against an immediate value is fast; the branch is going to be predicted properly as well).
Bottom line: unless you do something truly weird, it'll be optimized by the JVM.
The first option is the easiest one and also is the most clear.
It's not common in Java, but in C and C++, where the = operator can be included in an expression in the if statement and therefore lead to errors, it's often recommended to switch places between the variable and the constant, like this:
if (NULL == variable) {
...
}
instead of:
if (variable == NULL) {
...
}
preventing errors of the type:
if (variable = NULL) { // Assignment!
...
}
If you make the change, the compiler will find that kind of error for you.
While I agree with the general consensus of preferring to avoid the getClass() hack, it is worth noting that, as of OpenJDK version 1.8.0_121, javac will use the getClass() hack to insert null checks prior to creating lambda expressions. For example, consider:
public class NullCheck {
public static void main(String[] args) {
Object o = null;
Runnable r = o::hashCode;
}
}
After compiling this with javac, you can use javap to see the bytecode by running javap -c NullCheck. The output is (in part):
Compiled from "NullCheck.java"
public class NullCheck {
public NullCheck();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: aconst_null
1: astore_1
2: aload_1
3: dup
4: invokevirtual #2 // Method java/lang/Object.getClass:()Ljava/lang/Class;
7: pop
8: invokedynamic #3, 0 // InvokeDynamic #0:run:(Ljava/lang/Object;)Ljava/lang/Runnable;
13: astore_2
14: return
}
The instructions at offsets 3, 4 and 7 are basically invoking o.getClass() and discarding the result. If you run NullCheck, you'll get a NullPointerException thrown from line 4.
Whether this is something that the Java folks concluded was a necessary optimization, or it is just a cheap hack, I don't know. However, based on John Rose's comment at https://bugs.openjdk.java.net/browse/JDK-8042127?focusedCommentId=13612451&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13612451, I suspect that it may indeed be the case that the getClass() hack, which produces an implicit null check, may be ever so slightly more performant than its explicit counterpart. That said, I would avoid using it unless careful benchmarking showed that it made any appreciable difference.
(Interestingly, the Eclipse Compiler for Java (ECJ) does not include this null check, and running NullCheck as compiled by ECJ will not throw an NPE.)
I'd use the built-in Java assert mechanism.
assert arg != null;
The advantage of this over all the other methods is that it can be switched off.
I prefer methods 4, 5 or 6, with #4 being applied to public API methods and #5/#6 to internal methods, although #6 is also often applied to public methods.
/**
* Method 4.
* @param arg A String that should have some method called upon it. Will be ignored if
* null, empty or whitespace only.
*/
public void method4(String arg) {
// commons stringutils
if (StringUtils.isNotBlank(arg)) {
arg.trim();
}
}
/**
* Method 5.
* @param arg A String that should have some method called upon it. Shouldn't be null.
*/
public void method5(String arg) {
// Let NPE sort 'em out.
arg.trim();
}
/**
* Method 6.
* @param arg A String that should have some method called upon it. Shouldn't be null.
*/
public void method6(String arg) {
// use asserts, expecting asserts to be enabled during dev time, so that developers
// who refuse to read the documentation get slapped on the wrist for still passing
// null. Assert is a no-op if the -ea flag is not passed to the JVM, so zero overhead.
assert arg != null : "Arg cannot be null"; // insert insult here.
arg.trim();
}
The best solution for handling nulls is to not use nulls. Wrap third-party or library methods that may return null with null guards, replacing the value with something that makes sense (such as an empty string) but does nothing when used. Throw NPEs if a null really shouldn't be passed, especially in setter methods where the passed object doesn't get called right away.
There is no vote for this one, but I use a slight variation of #2, like
erStr += nullCheck(varName, errMsg); // returns a formatted error message, or "" if varName is non-null
Rationale: (1) I can loop over a bunch of arguments, (2) the nullCheck method is tucked away in a superclass, and (3) at the end of the loop:
if (erStr.length() > 0)
// Send out complete error message to client
else
// do stuff with variables
In the superclass method, your #3 looks nice, but I wouldn't throw an exception (what is the point? Somebody has to handle it, and a servlet container like Tomcat will just ignore it anyway).
First method. I would never use the second or the third method, not unless they are implemented efficiently by the underlying JVM. Otherwise, those two are just prime examples of premature optimization (with the third having a possible performance penalty: you don't want to be dealing with and accessing class meta-data in general access points).
The problem with NPEs is that they cross-cut many aspects of programming (and by aspects, I mean something deeper and more profound than AOP). It is a language design problem (not saying that the language is bad, but that it is one fundamental shortcoming of any language that allows null pointers or references).
As such, it is best to simply deal with it explicitly, as in the first method. All other methods are (failed) attempts to simplify a model of operations, an unavoidable complexity that exists in the underlying programming model.
It is a bullet that we cannot avoid biting. Dealing with it explicitly is, in the general case, the less painful route down the road.
I believe that the fourth and most useful pattern is to do nothing. Your code will throw a NullPointerException or some other exception a couple of lines later (if null is an illegal value) and will work fine if null is OK in this context.
I believe that you should perform a null check only if you have something to do about it; checking merely in order to throw an exception is irrelevant in most cases.
Just do not forget to mention in javadoc whether the parameter can be null.
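For instance, a minimal sketch of such a javadoc contract:

/**
 * Does something with the given argument.
 *
 * @param arg the input; may be null, in which case this call is a no-op
 */
public void doSomething(String arg) {
    if (arg == null) return;
    // ...
}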
Do you know how expensive exception throwing and handling in java is?
We had several discussions about the real cost of exceptions in our team. Some avoid them as often as possible, some say the loss of performance by using exceptions is overrated.
Today I found the following piece of code in our software:
private void doSomething()
{
try
{
doSomethingElse();
}
catch(DidNotWorkException e)
{
log("A Message");
}
goOn();
}
private void doSomethingElse()
{
if(isSoAndSo())
{
throw new DidNotWorkException();
}
goOnAgain();
}
How is the performance of this compared to
private void doSomething()
{
doSomethingElse();
goOn();
}
private void doSomethingElse()
{
if(isSoAndSo())
{
log("A Message");
return;
}
goOnAgain();
}
I don't want to discuss code aesthetic or anything, it's just about runtime behaviour!
Do you have real experiences/measurements?
Exceptions are not free... so they are expensive :-)
The book Effective Java covers this in good detail.
Item 39 Use exceptions only for exceptional conditions.
Item 40 Use exceptions for recoverable conditions
The author found that exceptions resulted in the code running 70 times slower for his test case, on his machine, with his particular VM and OS combo.
The slowest part of throwing an exception is filling in the stack trace.
If you pre-create your exception and re-use it, the JIT may optimize it down to "a machine level goto."
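A sketch of the pre-created, reused exception idea (names illustrative; note that the shared instance deliberately carries no stack trace, which is why this only suits control flow, never diagnostics):

public class DidNotWork extends RuntimeException {
    public static final DidNotWork INSTANCE = new DidNotWork();
    private DidNotWork() {
        super(null, null, false, false); // no message, no cause, no suppression, no writable stack trace
    }
}

// usage:
//   throw DidNotWork.INSTANCE;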
All that having been said, unless the code from your question is in a really tight loop, the difference will be negligible.
The slow part about exceptions is building the stack trace (in the constructor of java.lang.Throwable), which depends on stack depth. Throwing in itself is not slow.
Use exceptions to signal failures. The performance impact then is negligible and the stack trace helps to pin-point the failure's cause.
If you need exceptions for control flow (not recommended), and profiling shows that exceptions are the bottleneck, then create an Exception subclass that overrides fillInStackTrace() with an empty implementation. Alternatively (or additionally) instantiate only one exception, store it in a field and always throw the same instance.
The following demonstrates exceptions without stack traces by adding one simple override to the (albeit flawed) micro-benchmark in the accepted answer:
public class DidNotWorkException extends Exception {
public Throwable fillInStackTrace() {
return this;
}
}
Running it using the JVM in -server mode (version 1.6.0_24 on Windows 7) results in:
Exception:99ms
Boolean:12ms
Exception:92ms
Boolean:11ms
The difference is small enough to be ignorable in practice.
I haven't bothered to read up on exceptions, but doing a very quick test with some modified code of yours, I come to the conclusion that the exception case is quite a lot slower than the boolean case.
I got the following results:
Exception:20891ms
Boolean:62ms
From this code:
public class Test {
public static void main(String args[]) {
Test t = new Test();
t.testException();
t.testBoolean();
}
public void testException() {
long start = System.currentTimeMillis();
for(long i = 0; i <= 10000000L; ++i)
doSomethingException();
System.out.println("Exception:" + (System.currentTimeMillis()-start) + "ms");
}
public void testBoolean() {
long start = System.currentTimeMillis();
for(long i = 0; i <= 10000000L; ++i)
doSomething();
System.out.println("Boolean:" + (System.currentTimeMillis()-start) + "ms");
}
private void doSomethingException() {
try {
doSomethingElseException();
} catch(DidNotWorkException e) {
//Msg
}
}
private void doSomethingElseException() throws DidNotWorkException {
if(!isSoAndSo()) {
throw new DidNotWorkException();
}
}
private void doSomething() {
if(!doSomethingElse())
;//Msg
}
private boolean doSomethingElse() {
if(!isSoAndSo())
return false;
return true;
}
private boolean isSoAndSo() { return false; }
public class DidNotWorkException extends Exception {}
}
I foolishly didn't read my code well enough and previously had a bug in it (how embarrassing); if someone could triple-check this code I'd very much appreciate it, just in case I'm going senile.
My specification is:
Compiled and run on 1.5.0_16
Sun JVM
WinXP SP3
Intel Centrino Duo T7200 (2.00 GHz, 977 MHz)
2.00 GB Ram
In my opinion you should note that the non-exception methods don't log the error in doSomethingElse but instead return a boolean, so that the calling code can deal with a failure. If there are multiple places where this can fail, then logging an error inside, or throwing an exception, might be needed.
This is inherently JVM-specific, so you should not blindly trust whatever advice is given, but actually measure in your own situation. It shouldn't be hard to create a "throw a million exceptions and print the difference of System.currentTimeMillis" test to get a rough idea.
For the code snippet you list, I would personally require the original author to thoroughly document why he used exception throwing here, as it is not the "path of least surprise", which is crucial for maintaining the code later.
(Whenever you do something in a convoluted way, you cause unnecessary work for the reader, who must figure out why you did it like that instead of the usual way; in my opinion that work must be justified by the author carefully explaining why it was done like that, as there MUST be a reason.)
Exceptions are a very, very useful tool, but should only be used when necessary :)
I have no real measurements, but throwing an exception is more expensive.
OK, this is a link regarding the .NET framework, but I think the same applies to Java as well:
exceptions & performance
That said, you should not hesitate to use them when appropriate. That is: do not use them for flow control, but use them when something exceptional happened, something that you didn't expect to happen.
I think if we stick to using exceptions where they are needed (exceptional conditions), the benefits far outweigh any performance penalty you might be paying. I say might since the cost is really a function of the frequency with which exceptions are thrown in the running application.
In the example you give, it looks like the failure is not unexpected or catastrophic, so the method should really be returning a bool to signal its success status rather than using exceptions, which here would just be part of regular control flow.
In the few performance-improvement efforts I have been involved in, the cost of exceptions has been fairly low. You would spend far more time improving the complexity of common, highly repetitive operations.
Thank you for all the responses.
I finally followed Thorbjørn's suggestion and wrote a little test program, measuring the performance myself. The result: no difference between the two variants (in terms of performance).
Even though I didn't ask about code aesthetics, i.e. what the intention of exceptions was etc., most of you addressed that topic too. But in reality things are not always that clear... In the case under consideration, the code was born a long time ago, when the situation in which the exception is thrown seemed to be an exceptional one. Today the library is used differently, the behaviour and usage of the different applications have changed, test coverage is not very good, but the code still does its job, just a little bit too slowly (that's why I asked about performance!). In that situation, I think, there should be a good reason for changing from A to B, which, in my opinion, can't just be "that's not what exceptions were made for!".
It turned out that the logging ("A Message") is (compared to everything else happening) very expensive, so I think I'll get rid of that.
EDIT:
The test code is exactly like the one in the original post, called by a method testPerformance() in a loop which is surrounded by System.currentTimeMillis() calls to get the execution time... but:
I reviewed the test code, turned off everything else (the log statement), and looped 100 times more than before, and it turns out that you save 4.7 s per million calls when using B instead of A from the original post. As Ron said, fillInStackTrace is the most expensive part (+1 for that), and you can save nearly the same (4.5 s) if you override it (in case you don't need it, like me). All in all it's still a nearly-zero difference in my case, since the code is called 1000 times an hour and the measurements show I can save 4.5 ms in that time...
So, the first part of my answer above was a little misleading, but what I said about balancing the cost-benefit of a refactoring remains true.
I think you're asking this from slightly the wrong angle. Exceptions are designed to be used to signal exceptional cases, and as a program flow mechanism for those cases. So the question you should be asking is, does the "logic" of the code call for exceptions.
Exceptions are generally designed to perform well enough for the use for which they are intended. If they're used in such a way that they're a bottleneck, then above all that's probably an indication that they're being used for "the wrong thing", full stop; i.e. what you have underneath is a program design problem rather than a performance problem.
Conversely, if the exception appears to be being "used for the right thing", then that probably means it'll also perform OK.
Let's say an exception won't occur when trying to execute statements 1 and 2. Are there ANY performance differences between those two code samples?
If not, what if the DoSomething() method has to do a huuuge amount of work (loads of calls to other methods, etc.)?
1:
try
{
DoSomething();
}
catch (...)
{
...
}
2:
DoSomething();