I started using javax.annotation mainly to warn the next developer who might work with my code in the future.
But while I was using the javax.annotation @Nonnull annotation, a question came to mind:
If you mark, for example, a method parameter with the @Nonnull annotation to say that it has to have a value,
do you still need to handle the case that the next developer using your code passes null to your function?
I found one argument against and one argument for still handling that special case.
Against: the code is cleaner, especially if you have multiple parameters marked with @Nonnull:
private void foo(@Nonnull Object o)
{
    /* do something */
}

vs

public void foo(Object o)
    throws NullPointerException
{
    if (o == null)
    {
        throw new NullPointerException("Given Object must have a value!");
    }
    /* do something */
}
For: it could cause unhandled errors if the next developer ignores the annotations.
This is an unsolved problem in the nullity annotation space. There are two viewpoints that sound identical but in fact lead to exactly opposite conclusions. Given a parameter declared as void foo(@NonNull String param), what does that imply?
1. It's compiler-checkable documentation indicating that you should not pass null as param here. It does not mean that it is impossible to do so, or that one ought to consider it impossible. Simply that one should not: it's compiler-checkable documentation that the method has no defined useful behaviour if you pass null here.
2. The compiler is extended to treat the annotation as part of a single type - the type of param is @NonNull String - and the compiler knows what that means and will in fact enforce it. The type of the parameter is @NonNull String and therefore cannot be null, just like it can't be, say, an InputStream instance either.
Crucially, then, the latter means a null check is flagged as silly code, whereas the former means the lack of a null check is flagged as bad. Hence, opposites. The latter considers a null check a warnable offense (with something along the lines of "param can never be null here"), for the same reason this is silly code:
void foo(String arg) {
    if (!(arg instanceof String)) throw new IllegalArgumentException("arg");
}
That if clause cannot possibly fire. The mindset of the null-checker frameworks in the second camp is identical here, and they therefore flag this as silly code:
void foo(@NonNull String arg) {
    if (arg == null) throw new NullPointerException("arg");
}
The simple fact is, plenty of Java devs do not enable annotation-based nullity checking, and even if they did, there are at least ten competing annotations, many of which mean completely different things and work completely differently. The vast majority of users will not be running a checking framework that works the way you think it should; therefore, the advice to remove the null check because it is "silly" is actively harmful - you should add that null check. The linting tools that flag it are misguided; they pretend to live in a world where every Java programmer on the planet uses their tool. That isn't true and is unlikely to ever become true, hence, wrong.
A few null-checking frameworks sort of live in both worlds and will allow you to test whether an argument marked @NonNull is null, but only if the if body starts with a throw; otherwise it is flagged.
To answer your questions:
You should null-check. After all, other developers that use your code may not get the nullity warnings from the checking tool (either other team members working on the same code base but using slightly different tools and/or configurations of those tools, or, if your code is a library, another project that uses it - a more obvious route to a situation with different tools/configs). The best way to handle a null failure is a compile-time error. A close second is an exception that is clear about the problem and whose stack trace can be used to solve the bug very quickly. A distant third is random bizarreness that takes a while to debug. The explicit null check gives you a nice fallback: if for whatever reason the write-time tooling doesn't catch the problem, the check turns it into the second, still quite acceptable case - an exception at the point of failure that is clear about what happened and where to fix it.
Lombok's @NonNull annotation can generate the check for you, if you want. Now you have the best of both worlds: just a @NonNull annotation (no clutter) and yet a runtime exception if someone does pass null anyway (disclaimer: I'm one of the core contributors to Lombok).
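A minimal sketch of what that looks like, assuming Lombok is on the annotation processing path (FooService is an invented name for the example):
import lombok.NonNull;

public class FooService {

    // Lombok inserts a null check at the top of the method body at compile time;
    // by default it throws a NullPointerException that names the parameter.
    private void foo(@NonNull Object o) {
        /* do something */
    }
}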
If your linting tool complains about 'pointless null check' on the line if (param == null) throw new NullPointerException("param");, find the option in the linting tool to exclude if-checks that result in throw statements. If the linting tool cannot be configured to ignore this case, do not use the linting tool, find a better one.
Note that modern JVMs will throw a NullPointerException with the name of the expression as message if you dereference a null pointer, which may obviate the need to write an explicit check. However, now you're dependent on that method always dereferencing that variable forever more; if ever someone changes it and e.g. assigns it to a field and returns, now you have a problem: It should have thrown the exception, in order to ensure the bug is found quickly and with an exception that explains what happened and where to go and fix the problem. Hence I wouldn't rely on the JVM feature for your NPEs.
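A hypothetical sketch of that pitfall (class and method names are invented):
// Relying on the JVM's NPE at the point of dereference only works while the
// method actually dereferences the parameter.
public class Widget {
    private String name;

    public Widget(String name) {
        // Dereferences 'name' immediately: passing null fails right here.
        this.name = name.trim();
    }

    public void rename(String newName) {
        // Later refactor: the value is only stored, so a null slips through...
        this.name = newName;
    }

    public int nameLength() {
        // ...and the NPE surfaces here instead, far from the actual bug.
        return name.length();
    }
}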
Error messages should be as short as they can be without skimping on detail. They should also not end in punctuation, especially exclamation marks. Every exception tends to be noteworthy enough to warrant an exclamation mark, but it gets tedious to read them, so do not add them. In fact, the proper thing to throw is this: throw new NullPointerException("o") - and you might want to rename that parameter to something more readable if you find o ugly. Parameter names are mostly public API info (JVM-technically they are not, but javadoc includes them, and javadoc is the basis of your API docs), so you should consider them public and give them clear names, which you can then reuse in the exception. That exception conveys all the relevant information to a programmer: the nature of the problem (null was passed to code that does not know how to handle it), where it happened (the stack trace does that automatically), and the specifics (which thing was null). Your message is much longer and doesn't add anything more. At best you could say your message might be understood by a non-coder, except that is both not true (as if a stack trace is something random Joe Computeruser is going to understand) and irrelevant (it's not as if they could fix the problem even if they did know what it means). Using exception messages as UI output just doesn't work, so don't try.
You may want to adjust your style guide to allow braceless if statements provided that the if expression is simple (no && or ||), possibly with the additional rule that the single statement must be a control statement - break;, continue;, return (something);, or throw something;. This significantly improves readability when you have multiple parameters to check. The point of a style guide is to create legible code. Surely this:
if (param1 == null) throw new NullPointerException("param1");
if (param2 == null) throw new NullPointerException("param2");
is far more legible, especially considering this method has more lines than just those two, than this:
if (param1 == null) {
    throw new NullPointerException("param1");
}
if (param2 == null) {
    throw new NullPointerException("param2");
}
Style guides are just a tool. If your style guide is leading to less productivity and harder-to-read code, the answer should be obvious: fix or replace the tool.
Related
After checking the JavaDocs for a method I was thinking of using, requireNonNull, I stumbled across the first overload, the one with the single parameter (T obj).
However, what is the actual purpose of this particular method with this signature? All it does is throw an NPE, which I'm fairly sure (as I may be missing something obvious here) would be thrown anyway.
Throws:
NullPointerException - if obj is null
The latter overload actually makes sense in terms of debugging certain code; as the doc also states, it's primarily designed for parameter validation:
public static <T> T requireNonNull(T obj, String message)
Checks that the specified object reference is not null and throws a customized NullPointerException if it is.
Therefore I can print specific information along with the NPE to make debugging a hell of a lot easier.
With this in mind I highly doubt I would come across a situation where I'd rather just use the former instead. Please do enlighten me.
tl;dr - why would you ever use the overload which doesn't take a message?
A good principle when writing software is to catch errors as early as possible. The quicker you notice, for example, a bad value such as null being passed to a method, the easier it is to find out the cause and fix the problem.
If you pass null to a method that is not supposed to receive null, a NullPointerException will probably happen somewhere, as you already noticed. However, the exception might not happen until a few methods further down, and when it happens somewhere deep down, it will be more difficult to find the exact source of the error.
So, it's better when methods check their arguments up front and throw an exception as soon as they find an invalid value such as null.
edit - About the one-parameter version: even though you won't provide an error message, checking arguments and throwing an exception early will be more useful than letting the null pass down until an exception happens somewhere deeper down. The stack trace will point to the line where you used Objects.requireNonNull(...) and it should be obvious to you as a developer that that means you're not supposed to pass null. When you let a NullPointerException happen implicitly you don't know if the original programmer had the intent that the variable should not be null.
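For illustration, a small sketch of both overloads used for fail-fast constructor validation (Session and its fields are invented names):
import java.util.Objects;

public class Session {
    private final String user;
    private final String locale;

    public Session(String user, String locale) {
        // Two-argument overload: the custom message names the offending argument.
        this.user = Objects.requireNonNull(user, "user must not be null");
        // One-argument overload: no message, but the stack trace still points at
        // this exact line, which is usually enough to identify the bad parameter.
        this.locale = Objects.requireNonNull(locale);
    }
}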
It is a utility method - just a shortcut (and shortcut designers have their own ways of styling their shortcuts).
Why throw in the first place?
Security and debugging.
Security: to not allow any illegal value into a sensitive place (the inner algorithm can be sure about what it is working with).
Debugging: so the program dies fast when something unexpected happens.
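To make the "shortcut" point concrete, the one-argument overload behaves roughly like this hand-written check (a sketch of the idea, not the actual JDK source):
public static <T> T requireNonNull(T obj) {
    // Fail fast with an NPE at the point of validation, then hand the value back.
    if (obj == null) {
        throw new NullPointerException();
    }
    return obj;
}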
Question: Is there an implementation of the Elvis operator scheduled for any future Java release? Or is there any Library that brings it to Java?
I have read that
it was proposed for Java SE 7 but didn't make it into that release
http://www.oracle.com/technetwork/articles/java/java8-optional-2175753.html
I know Java 8 allows this
String name = computer.flatMap(Computer::getSoundcard)
.flatMap(Soundcard::getUSB)
.map(USB::getVersion)
.orElse("UNKNOWN");
but it's a bit too much for my taste. So if anyone could point me to a project/library that brings the Groovy-like/C#-like syntax for null checks to Java, it would be greatly appreciated.
Edit: By Elvis operator I mean this:
String version = computer?.getSoundcard()?.getUSB()?.getVersion();
or similar
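For reference, a sketch of the same Java 8 chain when computer is a plain (possibly null) reference; here the getters are assumed to return plain nullable values rather than Optionals, which is why map() is used instead of flatMap() (requires java.util.Optional):
String version = Optional.ofNullable(computer)
        .map(Computer::getSoundcard)
        .map(Soundcard::getUSB)
        .map(USB::getVersion)
        .orElse("UNKNOWN");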
No. There are no current or future plans to reconsider the null-safe operators in Java.
A long time ago, there was a function
public static Foo getFoo(Bar bar) {
    return bar.getFoo();
}
And people just couldn't agree what should happen if bar was null.
First of all, there would be people who would claim that violating the intent of the function should be penalized with a checked exception.
public static Foo getFoo(Bar bar) throws FooNotFoundException {
    if (bar == null) throw new FooNotFoundException();
    return bar.getFoo();
}
The caller of the function would then be forced to take this scenario in account. It would have to catch the exception or rethrow it. It would be so forceful that it would annoy people, and soon they would argue that this shouldn't be a checked exception, but a runtime exception, to make it less forceful.
public static Foo getFoo(Bar bar) {
    if (bar == null) throw new FooNotFoundException();
    return bar.getFoo();
}
Some would claim that there would be no point in throwing an exception, since Java would already throw one anyway: a NullPointerException.
Those who would fear unexpected exceptions would just make their code more robust and would add null checks at the start of their functions, which would simply return null.
public static Foo getFoo(Bar bar) {
    if (bar == null) return null;
    return bar.getFoo();
}
Soon people would be arguing whether it would be OK to return empty lists, or whether those empty lists should actually be null values as well. "Surely you don't want to check nullability before each and every iteration, do you?" the other side would argue. And soon they would be creating all kinds of constructs to avoid nullability entirely.
public static Foo getFoo(Bar bar) {
    if (bar == null) return Foo.Empty;
    return bar.getFoo();
}
Each approach would have a wide range of consequences, and those consequences would make it difficult to combine different approaches. They would result in coding rules where each individual rule was connected to the next. The way the rules supported each other would give the impression that each and every one of them was indisputable. And finally the coding rules would become like religions, with reasoning that only made sense inside the scope of the full ruleset.
Depending on your choice of ruleset you would have difficulties using certain frameworks. In the end, the empty-list-with-unchecked-exceptions religion became dominant. This religion can be summarized as follows:
you should avoid returning null values.
if a list is empty, you return it as is.
if you iterate a list, you never have to check for null.
a method should never throw a checked exception
checked exceptions should be wrapped in runtime exceptions.
strings shouldn't ever be null, instead they should be "".
And apparently this religion got so strong that it managed to influence the framework and language specification.
compiler optimizations for empty lists
an Optional class
value types
Some external libraries and editors would actually try to reunite the different teams by providing annotations (@Null and @NotNull). The IDE would then mark all violations for you. A simple but effective solution. Nevertheless, the JDK never included its own @Null or @NotNull; instead, each library had to ship its own.
And taking all of this in account, right now, it is very unlikely that there will ever be an elvis operator in java. If you want to code in java, you better forget about null.
Or to put it in the words of Tony Hoare (the inventor of null):
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
Personally, I think this makes absolutely no sense, given that every decent programming language has a null. Some even have multiple ones to indicate different kinds of nullability. After all, even in mathematics there are undefined values.
Anyway, if you don't have an elvis operator, you can still
Foo foo = bar == null ? null : bar.getFoo();
And this just fits perfectly in the spirit of java. After all, in 2021 java is a very explicit language.
There are some patterns for checking whether a parameter to a method has been given a null value.
First, the classic one. It is common in self-made code and obvious to understand.
public void method1(String arg) {
    if (arg == null) {
        throw new NullPointerException("arg");
    }
}
Second, you can use an existing framework. That code looks a little nicer because it only occupies a single line. The downside is that it potentially calls another method, which might make the code run a little slower, depending on the compiler.
public void method2(String arg) {
    Assert.notNull(arg, "arg");
}
Third, you can try to call a method without side effects on the object. This may look odd at first, but it has fewer tokens than the above versions.
public void method3(String arg) {
    arg.getClass();
}
I haven't seen the third pattern in wide use, and it feels almost as if I had invented it myself. I like it for its shortness, and because the compiler has a good chance of optimizing it away completely or converting it into a single machine instruction. I also compile my code with line number information, so if a NullPointerException is thrown, I can trace it back to the exact variable, since I have only one such check per line.
Which check do you prefer, and why?
Approach #3 (arg.getClass();) is clever, but unless this idiom sees widespread adoption, I'd prefer the clearer, more verbose methods over saving a few characters. I'm a "write once, read many" kind of programmer.
The other approaches are self-documenting: there's a message you can use to clarify what happened - a message that is useful when reading the code and also at run time. arg.getClass(), as it stands, is not self-documenting. You could at least use a comment to clarify it to reviewers of the code:
arg.getClass(); // null check
But you still don't get a chance to put a specific message in at runtime like you can with the other methods.
Approach #1 vs #2 (null-check+NPE/IAE vs assert): I try to follow guidelines like this:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
Use assert to check parameters on private methods
assert param > 0;
Use null check + IllegalArgumentException to check parameters on public methods
if (param == null) throw new IllegalArgumentException("param cannot be null");
Use null check + NullPointerException where needed
if (getChild() == null) throw new NullPointerException("node must have children");
HOWEVER, since this question may be about catching potential null issues most efficiently, I have to mention that my preferred method for dealing with null is using static analysis, e.g. type annotations (such as @NonNull) a la JSR-305. My favorite tool for checking them is:
The Checker Framework:
Custom pluggable types for Java
https://checkerframework.org/manual/#checker-guarantees
If it's my project (e.g. not a library with a public API) and if I can use the Checker Framework throughout:
I can document my intention more clearly in the API (e.g. this parameter may not be null (the default), but that one may be null (@Nullable), the method may return null, etc.). This annotation sits right at the declaration, rather than further away in the Javadoc, so it is much more likely to be maintained.
static analysis is more efficient than any runtime check
static analysis will flag potential logic flaws in advance (e.g. that I tried to pass a variable that may be null to a method that only accepts a non-null parameter) rather than depending on the issue occurring at runtime.
One other bonus is that the tool lets me put the annotations in a comment (e.g. /*@Nullable*/), so my library code can stay compatible with both type-annotated and non-type-annotated projects (not that I have any of the latter).
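A small sketch of how that intent reads with the Checker Framework's nullness annotations (UserLookup and findDisplayName are invented names; assumes the checker-qual annotations are on the classpath):
import org.checkerframework.checker.nullness.qual.NonNull;
import org.checkerframework.checker.nullness.qual.Nullable;

public class UserLookup {

    // Intent documented at the declaration: 'id' must not be null, the return
    // value may be. The Nullness Checker flags callers that pass a possibly-null
    // id, and flags any use of the @Nullable result without a null check.
    public @Nullable String findDisplayName(@NonNull String id) {
        return null; // placeholder body for the sketch
    }
}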
In case the link goes dead again, here's the section from GeoTools Developer Guide:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
5.1.7 Use of Assertions, IllegalArgumentException and NPE
The Java language has for a couple of years now made an assert keyword available; this keyword can be used to perform debug-only checks. While there are several uses of this facility, a common one is to check method parameters on private (not public) methods. Other uses are post-conditions and invariants.
Reference: Programming With Assertions
Pre-conditions (like argument checks in private methods) are typically easy targets for assertions. Post-conditions and invariants are sometimes less straightforward but more valuable, since non-trivial conditions have more risk of being broken.
Example 1: After a map projection in the referencing module, an assertion performs the inverse map projection and checks the result with the original point (post-condition).
Example 2: In DirectPosition.equals(Object) implementations, if the result is true, then the assertion ensures that the hashCode() values are identical, as required by the Object contract.
Use Assert to check Parameters on Private methods
private double scale( int scaleDenominator ){
    assert scaleDenominator > 0;
    return 1 / (double) scaleDenominator;
}
You can enable assertions with the following command line parameter:
java -ea MyApp
You can turn on only the GeoTools assertions with the following command line parameter:
java -ea:org.geotools MyApp
You can disable assertions for a specific package as shown here:
java -ea:org.geotools -da:org.geotools.referencing MyApp
Use IllegalArgumentExceptions to check Parameters on Public Methods
The use of asserts on public methods is strictly discouraged, because the mistake being reported has been made in client code - be honest and tell them up front with an IllegalArgumentException when they have screwed up.
public double toScale( int scaleDenominator ){
    if( scaleDenominator <= 0 ){
        throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
    }
    return 1 / (double) scaleDenominator;
}
Use NullPointerException where needed
If possible perform your own null checks, throwing an IllegalArgumentException or NullPointerException with detailed information about what has gone wrong.
public double toScale( Integer scaleDenominator ){
    if( scaleDenominator == null ){
        throw new NullPointerException( "scaleDenominator must be provided");
    }
    if( scaleDenominator <= 0 ){
        throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
    }
    return 1 / (double) scaleDenominator;
}
Aren't you optimizing a biiiiiiiiiiiiiiit too prematurely!?
I would just use the first. It's clear and concise.
I rarely work with Java, but I assume there's a way to have Assert only operate on debug builds, so that would be a no-no.
The third gives me the creeps, and I think I would immediately resort to violence if I ever saw it in code. It's completely unclear what it's doing.
You can use the Objects Utility Class.
public void method1(String arg) {
    Objects.requireNonNull(arg);
}
see http://docs.oracle.com/javase/7/docs/api/java/util/Objects.html#requireNonNull%28T%29
You should not be throwing NullPointerException yourself. If you want a NullPointerException, just don't check the value; it will be thrown automatically when the parameter is null and you attempt to dereference it.
Check out the apache commons lang Validate and StringUtils classes.
Validate.notNull(variable) will throw an IllegalArgumentException if "variable" is null.
Validate.notEmpty(variable) will throw an IllegalArgumentException if "variable" is empty (null or zero length).
Perhaps even better:
String trimmedValue = StringUtils.trimToEmpty(variable) will guarantee that "trimmedValue" is never null. If "variable" is null, "trimmedValue" will be the empty string ("").
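A short sketch of those calls in context (Greeter is an invented class; assumes commons-lang3 on the classpath, where the exception types thrown by Validate differ slightly from Commons Lang 2):
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.Validate;

public class Greeter {
    public String greet(String name) {
        // Fails fast if 'name' is null or empty (the exact exception type
        // depends on the Commons Lang version in use).
        Validate.notEmpty(name, "name must not be empty");

        // Normalizes instead of failing: a null input would become "" here.
        String trimmed = StringUtils.trimToEmpty(name);
        return "Hello, " + trimmed;
    }
}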
In my opinion, there are three issues with the third method:
The intent is unclear to the casual reader.
Even though you have line number information, line numbers change. In a real production system, knowing that there was a problem in SomeClass at line 100 doesn't give you all the info you need. You also need to know the revision of the file in question and be able to get to that revision. All in all, a lot of hassle for what appears to be very little benefit.
It is not at all clear why you think the call to arg.getClass can be optimized away. It is a native method. Unless HotSpot is coded to have specific knowledge of the method for this exact eventuality, it'll probably leave the call alone since it can't know about any potential side-effects of the C code that gets called.
My preference is to use #1 whenever I feel there's a need for a null check. Having the variable name in the error message is great for quickly figuring out what exactly has gone wrong.
P.S. I don't think that optimizing the number of tokens in the source file is a very useful criterion.
The first method is my preference because it conveys the most intent. There are often shortcuts that can be taken in programming but my view is that shorter code is not always better code.
x == null is super fast; it can be a couple of CPU clocks (including the branch prediction, which is going to succeed). Assert.notNull will be inlined, so there is no difference there.
x.getClass() should not be faster than x == null even if it uses a trap. (Reason: x will be in some register, and checking a register against an immediate value is fast; the branch is going to be predicted properly as well.)
Bottom line: unless you do something truly weird, it'd be optimized by the JVM.
The first option is the easiest one and also is the most clear.
It's not common in Java, but in C and C++, where the = operator can appear inside an expression in an if statement and therefore lead to errors, it's often recommended to switch the places of the variable and the constant, like this:
if (NULL == variable) {
...
}
instead of:
if (variable == NULL) {
...
}
preventing errors of the type:
if (variable = NULL) { // Assignment!
...
}
If you make the change, the compiler will find that kind of errors for you.
While I agree with the general consensus of preferring to avoid the getClass() hack, it is worth noting that, as of OpenJDK version 1.8.0_121, javac will use the getClass() hack to insert null checks prior to creating lambda expressions. For example, consider:
public class NullCheck {
    public static void main(String[] args) {
        Object o = null;
        Runnable r = o::hashCode;
    }
}
After compiling this with javac, you can use javap to see the bytecode by running javap -c NullCheck. The output is (in part):
Compiled from "NullCheck.java"
public class NullCheck {
public NullCheck();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: aconst_null
1: astore_1
2: aload_1
3: dup
4: invokevirtual #2 // Method java/lang/Object.getClass:()Ljava/lang/Class;
7: pop
8: invokedynamic #3, 0 // InvokeDynamic #0:run:(Ljava/lang/Object;)Ljava/lang/Runnable;
13: astore_2
14: return
}
The instruction set at "lines" 3, 4 and 7 are basically invoking o.getClass(), and discarding the result. If you run NullCheck, you'll get a NullPointerException thrown from line 4.
Whether this is something that the Java folks concluded was a necessary optimization, or it is just a cheap hack, I don't know. However, based on John Rose's comment at https://bugs.openjdk.java.net/browse/JDK-8042127?focusedCommentId=13612451&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13612451, I suspect that it may indeed be the case that the getClass() hack, which produces an implicit null check, may be ever so slightly more performant than its explicit counterpart. That said, I would avoid using it unless careful benchmarking showed that it made any appreciable difference.
(Interestingly, the Eclipse Compiler for Java (ECJ) does not include this null check, and running NullCheck as compiled by ECJ will not throw an NPE.)
I'd use the built-in Java assert mechanism.
assert arg != null;
The advantage of this over all the other methods is that it can be switched off.
I prefer method 4, 5 or 6, with #4 being applied to public API methods and 5 / 6 for internal methods, although #6 would be more frequently applied to public methods.
/**
 * Method 4.
 * @param arg A String that should have some method called upon it. Will be ignored if
 *            null, empty or whitespace only.
 */
public void method4(String arg) {
    // commons StringUtils
    if (StringUtils.isNotBlank(arg)) {
        arg.trim();
    }
}

/**
 * Method 5.
 * @param arg A String that should have some method called upon it. Shouldn't be null.
 */
public void method5(String arg) {
    // Let NPE sort 'em out.
    arg.trim();
}

/**
 * Method 6.
 * @param arg A String that should have some method called upon it. Shouldn't be null.
 */
public void method6(String arg) {
    // Use asserts; expect asserts to be enabled during dev time, so that developers
    // who refuse to read the documentation get slapped on the wrist for still passing
    // null. Assert is a no-op if the -ea param is not passed to the JVM, so 0 overhead.
    assert arg != null : "Arg cannot be null"; // insert insult here.
    arg.trim();
}
The best solution to handle nulls is to not use nulls. Wrap third-party or library methods that may return nulls with null guards, replacing the value with something that makes sense (such as an empty string) but does nothing when used. Throw NPE's if a null really shouldn't be passed, especially in setter methods where the passed object doesn't get called right away.
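A hypothetical sketch of such a null guard around a library call (ThirdPartyConfig and its lookup method are invented for the example):
// Callers of SafeConfig never see null; they get a harmless empty string instead.
public class SafeConfig {
    private final ThirdPartyConfig delegate;

    public SafeConfig(ThirdPartyConfig delegate) {
        this.delegate = delegate;
    }

    public String valueOf(String key) {
        String raw = delegate.lookup(key); // the library call may return null
        return raw != null ? raw : "";
    }
}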
There is no vote for this one, but I use a slight variation of #2, like
erStr += nullCheck(varName, errMsg); // returns a formatted error message
Rationale: (1) I can loop over a bunch of arguments, (2) the nullCheck method is tucked away in a superclass, and (3) at the end of the loop:
if (erStr.length() > 0)
    // send out the complete error message to the client
else
    // do stuff with the variables
In the superclass method, your #3 looks nice, but I wouldn't throw an exception (what is the point - somebody has to handle it, and as a servlet container, Tomcat will ignore it, so it might as well be this()).
First method. I would never use the second or the third method, not unless they are implemented efficiently by the underlying JVM. Otherwise, those two are just prime examples of premature optimization (with the third having a possible performance penalty - you don't want to be dealing with and accessing class metadata in general access points).
The problem with NPEs is that they cross-cut many aspects of programming (and by aspects, I mean something deeper and more profound than AOP). It is a language design problem (not saying that the language is bad, but it is a fundamental shortcoming of any language that allows null pointers or references).
As such, it is best to simply deal with it explicitly, as in the first method. All other methods are (failed) attempts to simplify a model of operations, an unavoidable complexity that exists in the underlying programming model.
It is a bullet that we cannot avoid biting. Dealing with it explicitly is - in the general case, that is - the less painful option down the road.
I believe that the fourth and most useful pattern is to do nothing. Your code will throw a NullPointerException or some other exception a couple of lines later (if null is an illegal value) and will work fine if null is OK in this context.
I believe that you should perform a null check only if you have something to do with the result. Checking merely in order to throw an exception is irrelevant in most cases.
Just do not forget to mention in javadoc whether the parameter can be null.
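For instance, a sketch of documenting the nullability of a parameter in the Javadoc instead of checking it (the format method and its parameters are invented for the example):
/**
 * Formats the given label for display.
 *
 * @param label the label to format; may be null, in which case an empty
 *              string is returned
 * @param width the desired minimum width; must be positive
 */
public String format(String label, int width) {
    if (label == null) {
        return "";
    }
    return String.format("%-" + width + "s", label);
}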
I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
Is this problem something worth designing around? Is it better to go for the first or second possibility?
Is the creation of a variable name something that would have an effect performance-wise?
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java ?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is OK. You can use contract annotations in your code to check such conditions. For instance, Parasoft JTest uses an annotation like @post $result != null and flags all methods without the annotation that use the return value without checking.
If the method can return null, your code should always use the second variant and check the return value. Only you can decide what to do if the return value is null; it might be OK, or you might want to log an error:
Object o = getThis();
if (null == o) {
    log.error("mymethod: Could not retrieve this");
} else {
    o.doThat();
}
Personally I dislike the one-liner code "design pattern", so I side with all those who say to keep your code readable. Although I have seen much worse lines of code in existing projects, similar to this:
someMap.put(
someObject.getSomeThing().getSomeOtherThing().getKey(),
someObject.getSomeThing().getSomeOtherThing())
I think that no one would argue that this is not the way to write maintainable code.
As for using annotations - unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. Without IDE integration these don't have much benefit (apart from some extra documentation). However, I do recommend asserts. While they only help at run time, they do help find most NPE causes, have no performance effect, and make the assumptions your code makes clearer.
If it were me, I would change the code to your latter version, but I would also add logging (maybe print) statements with a framework like log4j, so if something did go wrong I could check the log files to see what was null.
I’m from a .NET background and now dabbling in Java.
Currently, I’m having big problems designing an API defensively against faulty input. Let’s say I’ve got the following code (close enough):
public void setTokens(Node node, int newTokens) {
    tokens.put(node, newTokens);
}
However, this code can fail for two reasons:
User passes a null node.
User passes an invalid node, i.e. one not contained in the graph.
In .NET, I would throw an ArgumentNullException (rather than a NullReferenceException!) or an ArgumentException respectively, passing the name of the offending argument (node) as a string argument.
Java doesn’t seem to have equivalent exceptions. I realize that I could be more specific and just throw whatever exception comes closest to describing the situation, or even writing my own exception class for the specific situation.
Is this the best practice? Or are there general-purpose classes similar to ArgumentException in .NET?
Does it even make sense to check against null in this case? The code will fail anyway and the exception’s stack trace will contain the above method call. Checking against null seems redundant and excessive. Granted, the stack trace will be slightly cleaner (since its target is the above method, rather than an internal check in the HashMap implementation of the JRE). But this must be offset against the cost of an additional if statement, which, furthermore, should never occur anyway – after all, passing null to the above method isn’t an expected situation, it’s a rather stupid bug. Expecting it is downright paranoid – and it will fail with the same exception even if I don’t check for it.
[As has been pointed out in the comments, HashMap.put actually allows null values for the key. So a check against null wouldn’t necessarily be redundant here.]
The standard Java exception is IllegalArgumentException. Some will throw NullPointerException if the argument is null, but for me NPE has that "someone screwed up" connotation, and you don't want clients of your API to think you don't know what you're doing.
For public APIs, check the arguments and fail early and cleanly. The time/cost barely matters.
Different groups have different standards.
Firstly, I assume you know the difference between RuntimeExceptions (unchecked) and normal Exceptions (checked), if not then see this question and the answers. If you write your own exception you can force it to be caught, whereas both NullPointerException and IllegalArgumentException are RuntimeExceptions which are frowned on in some circles.
Secondly, as with you, the groups I've worked with don't actively use asserts, but if your team (or the consumer of the API) has decided it will use asserts, then assert sounds like precisely the correct mechanism.
If I was you I would use NullPointerException. The reason for this is precedent. Take an example Java API from Sun, for example java.util.TreeSet. This uses NPEs for precisely this sort of situation, and while it does look like your code just used a null, it is entirely appropriate.
As others have said IllegalArgumentException is an option, but I think NullPointerException is more communicative.
If this API is designed to be used by outside companies/teams I would stick with NullPointerException, but make sure it is declared in the javadoc. If it is for internal use then you might decide that adding your own exception hierarchy is worthwhile, but personally I find that APIs which add huge exception hierarchies, which are only going to be printStackTrace()'d or logged, are just a waste of effort.
At the end of the day the main thing is that your code communicates clearly. A local exception hierarchy is like local jargon - it adds information for insiders but can baffle outsiders.
As regards checking against null, I would argue it does make sense. Firstly, it allows you to add a message about what was null (i.e. node or tokens) when you construct the exception, which would be helpful. Secondly, in future you might use a Map implementation which allows null, and then you would lose the error check. The cost is almost nothing, so unless a profiler says it is an inner-loop problem I wouldn't worry about it.
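Applied to the setTokens method from the question, that might look like the sketch below (the graph field and its contains method are assumed, since the question doesn't show them):
public void setTokens(Node node, int newTokens) {
    if (node == null) {
        throw new IllegalArgumentException("node must not be null");
    }
    if (!graph.contains(node)) { // 'graph' is assumed; it is not shown in the question
        throw new IllegalArgumentException("node is not part of this graph");
    }
    tokens.put(node, newTokens);
}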
In Java you would normally throw an IllegalArgumentException
If you want a guide about how to write good Java code, I can highly recommend the book Effective Java by Joshua Bloch.
It sounds like this might be an appropriate use for an assert:
public void setTokens(Node node, int newTokens) {
    assert node != null;
    tokens.put(node, newTokens);
}
Your approach depends entirely on what contract your function offers to callers - is it a precondition that node is not null?
If it is, then you should throw an exception if node is null, since it is a contract violation. If it isn't, then your function should silently handle the null Node and respond appropriately.
I think a lot depends on the contract of the method and how well the caller is known.
At some point in the process the caller could take action to validate the node before calling your method. If you know the caller and know that these nodes are always validated, then I think it is OK to assume you'll get good data. Essentially the responsibility is on the caller.
However if you are, for example, providing a third-party library that is distributed, then you need to validate the node for nulls, etc.
An IllegalArgumentException is the Java standard, but it is also a RuntimeException. So if you want to force the caller to handle the exception, you need to provide a checked exception, probably a custom one you create.
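A sketch of what such a custom checked exception might look like (the name is invented):
// Because it extends Exception rather than RuntimeException, the compiler
// forces callers to either catch it or declare it.
public class InvalidNodeException extends Exception {
    public InvalidNodeException(String message) {
        super(message);
    }
}
The validating method would then declare throws InvalidNodeException, so every caller has to deal with the invalid-node case explicitly.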
Personally I'd like NullPointerExceptions to ONLY happen by accident, so something else must be used to indicate that an illegal argument value was passed. IllegalArgumentException is fine for this.
if (arg1 == null) {
    throw new IllegalArgumentException("arg1 == null");
}
This should be sufficient both for those reading the code and for the poor soul who gets a support call at 3 in the morning.
(And ALWAYS provide explanatory text for your exceptions; you will appreciate it some sad day.)
Like the others: java.lang.IllegalArgumentException.
As for checking for a null Node, what about checking for bad input at Node creation?
I don't have to please anybody, so what I do now as canonical code is
void method(String s)
{
    if ((s != null) && (s instanceof String) && (s.length() > 0x0000))
    {
        // ...
    }
}
which gets me a lot of sleep.
Others will disagree.