Java: How to check for null pointers efficiently

There are some patterns for checking whether a parameter to a method has been given a null value.
First, the classic one. It is common in hand-written code and obvious to understand.
public void method1(String arg) {
    if (arg == null) {
        throw new NullPointerException("arg");
    }
}
Second, you can use an existing framework. That code looks a little nicer because it only occupies a single line. The downside is that it potentially calls another method, which might make the code run a little slower, depending on the compiler.
public void method2(String arg) {
    Assert.notNull(arg, "arg");
}
Third, you can try to call a method without side effects on the object. This may look odd at first, but it has fewer tokens than the above versions.
public void method3(String arg) {
    arg.getClass();
}
I haven't seen the third pattern in wide use, and it feels almost as if I had invented it myself. I like it for its shortness, and because the compiler has a good chance of optimizing it away completely or converting it into a single machine instruction. I also compile my code with line number information, so if a NullPointerException is thrown, I can trace it back to the exact variable, since I have only one such check per line.
Which check do you prefer, and why?

Approach #3: arg.getClass(); is clever, but unless this idiom sees widespread adoption, I'd prefer the clearer, more verbose methods over saving a few characters. I'm a "write once, read many" kind of programmer.
The other approaches are self-documenting: there's a message you can use to clarify what happened - this message is of use when reading the code and also at run-time. arg.getClass(), as it stands, is not self-documenting. You could at least use a comment to clarify the intent to reviewers of the code:
arg.getClass(); // null check
But you still don't get a chance to attach a specific message at runtime like you can with the other methods.
Approach #1 vs #2 (null-check+NPE/IAE vs assert): I try to follow guidelines like this:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
Use assert to check parameters on private methods
assert param > 0;
Use null check + IllegalArgumentException to check parameters on public methods
if (param == null) throw new IllegalArgumentException("param cannot be null");
Use null check + NullPointerException where needed
if (getChild() == null) throw new NullPointerException("node must have children");
HOWEVER, since this question may be about catching potential null issues most efficiently, I have to mention that my preferred method for dealing with null is static analysis, e.g. type annotations (e.g. @NonNull) a la JSR-305. My favorite tool for checking them is:
The Checker Framework:
Custom pluggable types for Java
https://checkerframework.org/manual/#checker-guarantees
If it's my project (e.g. not a library with a public API) and if I can use the Checker Framework throughout:
I can document my intention more clearly in the API (e.g. this parameter may not be null - the default - but that one may be null (@Nullable), the method may return null, etc.). The annotation is right at the declaration, rather than further away in the Javadoc, so it is much more likely to be maintained.
static analysis is more efficient than any runtime check
static analysis will flag potential logic flaws in advance (e.g. that I tried to pass a variable that may be null to a method that only accepts a non-null parameter) rather than depending on the issue occurring at runtime.
One other bonus is that the tool lets me put the annotations in a comment (e.g. /*@Nullable*/), so my library code can stay compatible with both type-annotated and non-type-annotated projects (not that I have any of these). A small sketch of annotated code follows.
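For illustration, here is a minimal sketch (the class and method names are made up) of what such annotated code might look like with the Checker Framework's nullness annotations; under that checker, parameters are non-null by default, so only the nullable case needs to be spelled out:
import org.checkerframework.checker.nullness.qual.Nullable;

public class GreetingService {
    // Non-null is the default under the nullness checker, so 'name' may not be null.
    public String greet(String name) {
        return "Hello, " + name;
    }

    // Null is explicitly allowed here; the checker flags any unguarded dereference.
    public String greetOrDefault(@Nullable String name) {
        return name == null ? "Hello, stranger" : greet(name);
    }
}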
In case the link goes dead again, here's the section from GeoTools Developer Guide:
http://data.opengeo.org/GEOT-290810-1755-708.pdf
5.1.7 Use of Assertions, IllegalArgumentException and NPE
The Java language has for a couple of years now made an assert keyword available; this keyword can be used to perform debug only checks.
While there are several uses of this facility, a common one is to check method parameters on private (not public) methods. Other uses are
post-conditions and invariants.
Reference: Programming With Assertions
Pre-conditions (like argument checks in private methods) are typically easy targets for assertions. Post-conditions and invariants are sometimes
less straightforward but more valuable, since non-trivial conditions are at greater risk of being broken.
Example 1: After a map projection in the referencing module, an assertion performs the inverse map projection and checks the result
with the original point (post-condition).
Example 2: In DirectPosition.equals(Object) implementations, if the result is true, then the assertion ensures that
the hashCode() values are identical, as required by the Object contract.
Use Assert to check Parameters on Private methods
private double scale( int scaleDenominator ){
    assert scaleDenominator > 0;
    return 1 / (double) scaleDenominator;
}
You can enable assertions with the following command line parameter:
java -ea MyApp
You can turn on only the GeoTools assertions with the following command line parameter:
java -ea:org.geotools MyApp
You can disable assertions for a specific package as shown here:
java -ea:org.geotools -da:org.geotools.referencing MyApp
Use IllegalArgumentExceptions to check Parameters on Public Methods
The use of asserts on public methods is strictly discouraged, because the mistake being reported has been made in client code - be honest and
tell them up front with an IllegalArgumentException when they have screwed up.
public double toScale( int scaleDenominator ){
    if( scaleDenominator <= 0 ){
        throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
    }
    return 1 / (double) scaleDenominator;
}
Use NullPointerException where needed
If possible, perform your own null checks, throwing an IllegalArgumentException or NullPointerException with detailed information
about what has gone wrong.
public double toScale( Integer scaleDenominator ){
    if( scaleDenominator == null ){
        throw new NullPointerException( "scaleDenominator must be provided");
    }
    if( scaleDenominator <= 0 ){
        throw new IllegalArgumentException( "scaleDenominator must be greater than 0");
    }
    return 1 / (double) scaleDenominator;
}
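As an illustration of Example 2 above (not part of the guide excerpt), a post-condition assert on a made-up value class might be sketched roughly like this:
class Point2D {
    private final double x, y;

    Point2D(double x, double y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Point2D)) {
            return false;
        }
        Point2D other = (Point2D) obj;
        boolean eq = x == other.x && y == other.y;
        // Post-condition: the Object contract requires equal objects to have equal hash codes.
        assert !eq || hashCode() == other.hashCode();
        return eq;
    }

    @Override
    public int hashCode() {
        return Double.hashCode(x) * 31 + Double.hashCode(y);
    }
}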

Aren't you optimizing a biiiiiiiiiiiiiiit too prematurely!?
I would just use the first. It's clear and concise.
I rarely work with Java, but I assume there's a way to have Assert only operate on debug builds, so that would be a no-no.
The third gives me the creeps, and I think I would immediately resort to violence if I ever saw it in code. It's completely unclear what it's doing.

You can use the Objects Utility Class.
public void method1(String arg) {
    Objects.requireNonNull(arg);
}
see http://docs.oracle.com/javase/7/docs/api/java/util/Objects.html#requireNonNull%28T%29
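For completeness, a short sketch of how it is typically used (the method and parameter names here are made up). requireNonNull also has an overload that takes a message, and since Java 8 one that takes a Supplier so the message is only built when the check fails:
import java.util.Objects;

public void method1(String arg, String other) {
    Objects.requireNonNull(arg);                              // throws NPE with no message
    Objects.requireNonNull(other, "other must not be null");  // throws NPE with a message
    // Java 8+: message is computed lazily, only if the check actually fails
    Objects.requireNonNull(other, () -> "other must not be null (lazy message)");
}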

You should not be throwing NullPointerException. If you want a NullPointerException, just don't check the value and it will be thrown automatically when the parameter is null and you attempt to dereference it.
Check out the apache commons lang Validate and StringUtils classes.
Validate.notNull(variable) will throw an IllegalArgumentException if "variable" is null.
Validate.notEmpty(variable) will throw an IllegalArgumentException if "variable" is empty (null or zero length).
Perhaps even better:
String trimmedValue = StringUtils.trimToEmpty(variable) will guarantee that "trimmedValue" is never null. If "variable" is null, "trimmedValue" will be the empty string ("").
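A short sketch of how these calls might be combined (the register method and message text are made up; the imports assume commons-lang3):
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.Validate;

public void register(String username) {
    // Rejects bad input up front. (Note: in commons-lang3, Validate throws
    // NullPointerException for null input; the IllegalArgumentException behaviour
    // described above is that of the older commons-lang 2.x Validate.)
    Validate.notEmpty(username, "username must not be empty");
    // trimToEmpty never returns null; a null input would become "".
    String trimmed = StringUtils.trimToEmpty(username);
    // ... use trimmed ...
}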

In my opinion, there are three issues with the third method:
The intent is unclear to the casual reader.
Even though you have line number information, line numbers change. In a real production system, knowing that there was a problem in SomeClass at line 100 doesn't give you all the info you need. You also need to know the revision of the file in question and be able to get to that revision. All in all, a lot of hassle for what appears to be very little benefit.
It is not at all clear why you think the call to arg.getClass can be optimized away. It is a native method. Unless HotSpot is coded to have specific knowledge of the method for this exact eventuality, it'll probably leave the call alone since it can't know about any potential side-effects of the C code that gets called.
My preference is to use #1 whenever I feel there's a need for a null check. Having the variable name in the error message is great for quickly figuring out what exactly has gone wrong.
P.S. I don't think that optimizing the number of tokens in the source file is a very useful criterion.

The first method is my preference because it conveys the most intent. There are often shortcuts that can be taken in programming but my view is that shorter code is not always better code.

x == null is super fast; it can take a couple of CPU cycles (including the branch prediction, which is going to succeed). Assert.notNull will be inlined, so no difference there.
x.getClass() should not be faster than x == null even if it uses a trap. (Reason: x will be in some register, and checking a register against an immediate value is fast; the branch is going to be predicted properly as well.)
Bottom line: unless you do something truly weird, it'd be optimized by the JVM.
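If you want to verify the "it gets optimized anyway" claim on your own hardware, a rough micro-benchmark sketch is below (assuming JMH is on the classpath; the class and field names are made up, and the results carry the usual micro-benchmarking caveats):
import java.util.Objects;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class NullCheckBenchmark {
    public String arg = "hello";

    @Benchmark
    public String explicitCheck() {
        if (arg == null) throw new NullPointerException("arg");
        return arg;
    }

    @Benchmark
    public String getClassCheck() {
        arg.getClass(); // the "method 3" idiom
        return arg;
    }

    @Benchmark
    public String requireNonNull() {
        return Objects.requireNonNull(arg, "arg");
    }
}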

The first option is the easiest one and also is the most clear.
It's not common in Java, but in C and C++, where the = operator can be included in an expression inside the if statement and therefore lead to errors, it's often recommended to swap the variable and the constant like this:
if (NULL == variable) {
...
}
instead of:
if (variable == NULL) {
...
}
preventing errors of the type:
if (variable = NULL) { // Assignment!
...
}
If you make that change, the compiler will find this kind of error for you.
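The closest related idiom in Java is the "Yoda" equals call, which is less about accidental assignment and more about avoiding an NPE when the variable may be null. A small illustrative sketch (the helper methods are made up):
// Hypothetical helper that may return null.
static String possiblyNull() {
    return Math.random() < 0.5 ? "expected" : null;
}

static boolean isExpected(String variable) {
    // "Yoda" style: the literal is the receiver, so this never throws,
    // even when variable is null. variable.equals("expected") would NPE.
    return "expected".equals(variable);
}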

While I agree with the general consensus of preferring to avoid the getClass() hack, it is worth noting that, as of OpenJDK version 1.8.0_121, javac will use the getClass() hack to insert null checks prior to creating lambda expressions. For example, consider:
public class NullCheck {
    public static void main(String[] args) {
        Object o = null;
        Runnable r = o::hashCode;
    }
}
After compiling this with javac, you can use javap to see the bytecode by running javap -c NullCheck. The output is (in part):
Compiled from "NullCheck.java"
public class NullCheck {
public NullCheck();
Code:
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: return
public static void main(java.lang.String[]);
Code:
0: aconst_null
1: astore_1
2: aload_1
3: dup
4: invokevirtual #2 // Method java/lang/Object.getClass:()Ljava/lang/Class;
7: pop
8: invokedynamic #3, 0 // InvokeDynamic #0:run:(Ljava/lang/Object;)Ljava/lang/Runnable;
13: astore_2
14: return
}
The instructions at bytecode offsets 3, 4 and 7 are basically invoking o.getClass() and discarding the result. If you run NullCheck, you'll get a NullPointerException thrown from line 4.
Whether this is something that the Java folks concluded was a necessary optimization, or it is just a cheap hack, I don't know. However, based on John Rose's comment at https://bugs.openjdk.java.net/browse/JDK-8042127?focusedCommentId=13612451&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13612451, I suspect that it may indeed be the case that the getClass() hack, which produces an implicit null check, may be ever so slightly more performant than its explicit counterpart. That said, I would avoid using it unless careful benchmarking showed that it made any appreciable difference.
(Interestingly, the Eclipse Compiler for Java (ECJ) does not include this null check, and running NullCheck as compiled by ECJ will not throw an NPE.)

I'd use the built-in Java assert mechanism.
assert arg != null;
The advantage of this over all the other methods is that it can be switched off.

I prefer method 4, 5 or 6, with #4 being applied to public API methods and 5 / 6 for internal methods, although #6 would be more frequently applied to public methods.
/**
 * Method 4.
 * @param arg A String that should have some method called upon it. Will be ignored if
 * null, empty or whitespace only.
 */
public void method4(String arg) {
    // commons stringutils
    if (StringUtils.isNotBlank(arg)) {
        arg.trim();
    }
}
/**
 * Method 5.
 * @param arg A String that should have some method called upon it. Shouldn't be null.
 */
public void method5(String arg) {
    // Let NPE sort 'em out.
    arg.trim();
}
/**
 * Method 6.
 * @param arg A String that should have some method called upon it. Shouldn't be null.
 */
public void method6(String arg) {
    // use asserts, expect asserts to be enabled during dev-time, so that developers
    // that refuse to read the documentation get slapped on the wrist for still passing
    // null. Assert is a no-op if the -ea flag is not passed to the JVM, so 0 overhead.
    assert arg != null : "Arg cannot be null"; // insert insult here.
    arg.trim();
}
The best solution for handling nulls is not to use nulls. Wrap third-party or library methods that may return null with null guards, replacing the value with something that makes sense (such as an empty string) but does nothing when used. Throw NPEs if a null really shouldn't be passed, especially in setter methods where the passed object doesn't get called right away.
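A rough sketch of such a null-guard wrapper (the LegacyLookup class and its method are hypothetical stand-ins for a third-party API that may return null):
// Hypothetical third-party API that may return null.
class LegacyLookup {
    String findDisplayName(String userId) {
        return null; // imagine a real lookup here
    }
}

class SafeLookup {
    private final LegacyLookup delegate = new LegacyLookup();

    // Null guard: callers never see null, just a harmless empty string.
    String displayNameFor(String userId) {
        String name = delegate.findDisplayName(userId);
        return name != null ? name : "";
    }
}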

There is no vote for this one, but I use a slight variation of #2, like
erStr += nullCheck(varName, errMsg); // returns a formatted error message
Rationale: (1) I can loop over a bunch of arguments, (2) The nullCheck method is tucked away in a superclass and (3) at the end of the loop,
if (erStr.length() > 0)
// Send out complete error message to client
else
// do stuff with variables
In the superclass method, your #3 looks nice, but I wouldn't throw an exception (what is the point, somebody has to handle it, and as a servlet container, tomcat will ignore it, so it might as well be this())
Regards, - M.S.

First method. I would never do the second or the third method, not unless they are implemented efficiently by the underlying JVM. Otherwise, those two are just prime examples of premature optimization (with the third having a possible performance penalty - you don't want to be dealing and accessing class meta-data in general access points.)
The problem with NPEs is that they cross-cut many aspects of programming (and by aspects, I mean something deeper and more profound than AOP). It is a language design problem (not saying that the language is bad, but that it is one fundamental shortcoming... of any language that allows null pointers or references.)
As such, it is best to simply deal with it explicitly as in the first method. All other methods are (failed) attempts to simplify a model of operations, an unavoidable complexity that exists in the underlying programming model.
It is a bullet that we cannot avoid biting. Dealing with it explicitly is - in the general case - the less painful option down the road.

I believe that the fourth and most useful pattern is to do nothing. Your code will throw a NullPointerException or some other exception a couple of lines later (if null is an illegal value) and will work fine if null is OK in this context.
I believe that you should perform a null check only if you have something to do with the result. Checking just to throw an exception is pointless in most cases.
Just do not forget to mention in javadoc whether the parameter can be null.

Related

Handling null, even if annotations are used

I started using javax.annotation, especially to warn the next developer who may be working with my code in the future.
But while I was using the javax.annotation @Nonnull annotation, a question came to my mind:
If you mark e.g. a parameter of a method with the @Nonnull annotation to say that it has to have a value,
do you still need to handle the case that the next developer who is using your code could be passing null to your function?
I found one con argument and one pro argument for still handling the special case.
con: The code is cleaner without the check, especially if you have multiple parameters that you mark with @Nonnull:
private void foo(@Nonnull Object o)
{
    /*do something*/
}
vs
public void foo(Object o)
    throws NullPointerException
{
    if (o == null)
    {
        throw new NullPointerException("Given Object must have a value!");
    }
    /*do something*/
}
pro: It could cause unhandled errors if the next developer ignores the annotations.
This is an unsolved problem in the nullity annotation space. There are 2 viewpoints that sound identical but are, in fact, the exact opposite. Given a parameter void foo(@NonNull String param), what does that imply?
It's compiler-checkable documentation that indicates you should not pass null as param here. It does not mean that it is impossible to do this, or that one ought to consider it impossible. Simply that one should not - it's compiler-checkable documentation that the method has no defined useful behaviour if you pass null here.
The compiler is extended to support these annotations to treat it as a single type - the type of param is @NonNull String - and the compiler knows what that means and will in fact ensure this. The type of the parameter is @NonNull String and therefore cannot be null, just like it can't be, say, an InputStream instance either.
Crucially, then, the latter means a null check is flagged as silly code, whereas the former means lack of a null check is marked as bad. Hence, opposites. The latter considers a null check a warnable offense (with something along the lines of "param can never be null here"), for the same reason this is silly code:
void foo(String arg) {
    if (!(arg instanceof String)) throw new IllegalArgumentException("arg");
}
That if clause cannot possibly fire. The mindset of various nullchecker frameworks is identical here, and therefore flags it as silly code:
void foo(@NonNull String arg) {
    if (arg == null) throw new NullPointerException("arg");
}
The simple fact is, plenty of Java devs do not enable annotation-based nullity checking, and even if they did, there are at least 10 competing annotations, many of which mean completely different things and work completely differently. The vast majority will not be using a checking framework that works as you think it should; therefore, the advice to remove the null check because it is silly is actively a bad thing - you should add that null check. The linting tools that flag this down are misguided; they want to pretend to live in a world where every Java programmer on the planet uses their tool. This isn't true and is unlikely to ever become true, hence, wrong.
A few null checking frameworks are sort of living both lives and will allow you to test if an argument marked as @NonNull is null, but only if the if body starts with throw, otherwise it's flagged.
To answer your questions:
You should nullcheck. After all, other developers that use your code may not get the nullity warnings from the checking tool (either other team members working on the same code base but using slightly different tools and/or configurations of those tools, or your code is a library and another project uses it - a more obvious route to a situation with different tools/configs). The best way to handle a null failure is a compile-time error. A close second is an exception that is clear about the problem and whose stack trace can be used to very quickly solve the bug. A distant third is random bizarreness that takes a while to debug - and that explicit null check means you get a nice fallback: if for whatever reason the write-time tooling doesn't catch the problem, the check will simply turn it into the second, still quite acceptable case of an exception at the point of failure that is clear about what happened and where to fix it.
Lombok's @NonNull annotation can generate it for you, if you want. Now you have the best of both worlds: just a @NonNull annotation (no clutter) and yet a runtime exception if someone does pass null anyway - a small sketch follows after this list. (DISCLAIMER: I'm one of the core contributors to Lombok.)
If your linting tool complains about 'pointless null check' on the line if (param == null) throw new NullPointerException("param");, find the option in the linting tool to exclude if-checks that result in throw statements. If the linting tool cannot be configured to ignore this case, do not use the linting tool, find a better one.
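A minimal sketch of the Lombok variant mentioned above (assuming Lombok is on the annotation processor path; the class and method names are made up):
import lombok.NonNull;

public class GreetingPrinter {
    // Lombok generates a null check at the top of the method and throws a
    // NullPointerException (with a message naming the parameter) if name is null.
    public void print(@NonNull String name) {
        System.out.println("Hello, " + name);
    }
}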
Note that modern JVMs will throw a NullPointerException with the name of the expression as message if you dereference a null pointer, which may obviate the need to write an explicit check. However, now you're dependent on that method always dereferencing that variable forever more; if ever someone changes it and e.g. assigns it to a field and returns, now you have a problem: It should have thrown the exception, in order to ensure the bug is found quickly and with an exception that explains what happened and where to go and fix the problem. Hence I wouldn't rely on the JVM feature for your NPEs.
Error messages should be as short as they can be whilst not skimping on detail. They should also not end in punctuation, especially exclamation marks. Every exception tends to be noteworthy enough to warrant an exclamation mark - but it gets tedious to read them, so do not add them. In fact, the proper thing to throw is this: throw new NullPointerException("o") - and you might want to rename that parameter to something more readable if you find o ugly. Parameters are mostly public API info (JVM-technically they are not, but javadoc does include them, which is the basis of API docs, so you should consider them public, and therefore they should have clear names - which you can then reuse). That exception conveys all relevant information to a programmer: the nature of the problem (null was sent to code that does not know how to handle this), where it happened (the stack trace does that automatically), and the specifics (which thing was null). Your message is much longer and doesn't add anything more. At best you can say your message might be understood by a non-coder, except this is both not true (as if a stack trace is something the average computer user is going to understand) and irrelevant (it's not like they can fix the problem even if they do know what it means). Using exception messages as UI output just doesn't work, so don't try.
You may want to adjust your style guides and allow braceless if statements provided that the if expression is simple (no && or ||). Possibly add an additional rule that the single statement is a control statement - break;, continue;, return (something);, or throw something;. This will significantly improve readability for multiparams. The point of a style guide is to create legible code. Surely this:
if (param1 == null) throw new NullPointerException("param1");
if (param2 == null) throw new NullPointerException("param2");
is far more legible, especially considering this method has more lines than just those two, than this:
if (param1 == null) {
    throw new NullPointerException("param1");
}
if (param2 == null) {
    throw new NullPointerException("param2");
}
Styleguides are just a tool. If your styleguide is leading to less productivity and harder to read code, the answer should be obvious. Fix or replace the tool.

What to do in findXY method if no such object exists?

Assume I have a Java method
public X findX(...)
which tries to find an Object of type X fulfilling some conditions (given in the parameters). Often such functions cannot guarantee to find such an object. I can think of different ways to deal with this:
One could write an public boolean existsX(...) method with the same signature which should be called first. This avoids any kind of exceptions and null handling, but probably you get some duplicate logic.
One could just return null (and explain this in javadoc). The caller has to handle it.
One could throw a checked exception (which one would fit for this?).
What would you suggest?
The new Java 8 Optional class was made for this purpose.
If the object exists then you return Optional.of(x) where x is the object, if it doesn't then return Optional.empty(). You can check if an Optional has an object present by using the isPresent() method and you can get the object using get().
https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html
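A small sketch of what the Optional-returning variant might look like and how a caller consumes it (the repository class and the use of a Map lookup are made up; X stands for the domain type in the question):
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class X {} // placeholder for the domain type in the question

class XRepository {
    private final Map<String, X> byId = new HashMap<>();

    // Returns an empty Optional instead of null when nothing matches.
    public Optional<X> findX(String id) {
        return Optional.ofNullable(byId.get(id));
    }

    // Caller side: the "absent" case is explicit, no null check needed.
    public X requireX(String id) {
        return findX(id).orElseThrow(() -> new IllegalStateException("no X for id " + id));
    }
}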
If you can't use Optional I would go with option 2.
In the case of 1, you'd be doing double work: first you have to check whether X exists, but if it does, you're basically discarding the result and have to do the work again in findX. Although the result of existsX could be cached and checked first when calling findX, this would still be an extra step over just returning X.
In the case of 3, to me this comes down to usability. Sometimes you just know that findX will return a result (and if it doesn't, there is a mistake somewhere else), but with a checked exception you would still have to write the try and (most likely empty) catch block.
So option 2 is the winner to me. It doesn't do extra work, and checking the result is optional. As long as you document the null return, there should be no problems.
Guava (potentially other libraries, too) also offers an Optional class that might be worth exploring if your project uses Guava since you seem to not use Java 8.
If you
don't/can't/won't use Java 8 and its accompanying Optional type
don't want another library dependency (like Guava) for a single class implementation
but
you want something more robust than methods that can return null for which you have to have a check in all consumers (which was the standard before Java 8)
then writing your own Optional as a util class is the easiest option.
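If you do go that route, a stripped-down sketch of such a util class might look roughly like this (illustrative only; the name is made up, and the real java.util.Optional or Guava Optional are richer and better tested):
public final class Maybe<T> {
    private static final Maybe<?> EMPTY = new Maybe<>(null);
    private final T value;

    private Maybe(T value) {
        this.value = value;
    }

    public static <T> Maybe<T> of(T value) {
        if (value == null) {
            throw new NullPointerException("value");
        }
        return new Maybe<>(value);
    }

    @SuppressWarnings("unchecked")
    public static <T> Maybe<T> empty() {
        return (Maybe<T>) EMPTY;
    }

    public boolean isPresent() {
        return value != null;
    }

    public T get() {
        if (value == null) {
            throw new IllegalStateException("no value present");
        }
        return value;
    }
}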

java throws impossible NullPointerException

Reduced to the essentials I have the following code:
public class C_Test extends A_Test {
    C_Test() {
        super( null );
    }
}

public abstract class A_Test {
    private final String m_test;

    public A_Test( String test ) {
        m_test = C_ObjectUtil.defaultIfNull( test, "" );
        if( m_test == null ) {
            throw new RuntimeException( "it happened again!" );
        }
    }
}

public class C_ObjectUtil {
    public static <T> T defaultIfNull( T object, T defaultObject ) {
        return ( object == null ) ? defaultObject : object;
    }
}
After reading about 70 MB of data from a DB2 database using JDBC a call of new C_Test() throws the impossible Exception. Up to that time the constructor was called more than 100000 times.
After inlining the method defaultIfNull into the constructor A_Test (this removes the generics) everything works without any error!
Environment: Sun JDK 1.7.0_25 64Bit, Linux
Any idea what's happening here?
Here comes the stack trace. It does not match the reduced version I've shown above. The exception here is thrown when calling m_test.equals( "null" ) in the constructor of A_Test.
java.lang.NullPointerException
at com.src.db.attribvisitors.A_AttribVisitorAssignFromResultSet.<init>(A_AttribVisitorAssignFromResultSet.java:54)
at com.src.db.attribvisitors.C_AttribVisitorAssignFromSelectStatement.<init>(C_AttribVisitorAssignFromSelectStatement.java:37)
at com.src.db.attribvisitors.C_AttribVisitorAssignFromSelectStatement.<init>(C_AttribVisitorAssignFromSelectStatement.java:43)
at com.src.framework.db.migration.A_DbTableRow.initFromResultset(A_DbTableRow.java:96)
at com.src.framework.db.migration.helper.C_DbTableProcessor$1.process(C_DbTableProcessor.java:66)
at com.src.db.C_TxSql.query(C_TxSql.java:144)
at com.src.db.C_SqlExecuter$3.execute(C_SqlExecuter.java:142)
at com.src.db.C_SqlExecuter.execute(C_SqlExecuter.java:80)
at com.src.db.C_SqlExecuter.query(C_SqlExecuter.java:138)
at com.src.framework.db.migration.helper.C_DbTableProcessor.processTable(C_DbTableProcessor.java:58)
at com.src.framework.db.migration.helper.C_DbTableProcessor.processTable(C_DbTableProcessor.java:39)
at com.src.tools.C_DbExporter.exportTable(C_DbExporter.java:93)
at com.src.tools.C_DbExporter.exportTables(C_DbExporter.java:77)
at com.src.tools.C_DbExporter.exportTables(C_DbExporter.java:61)
at com.src.tools.C_DbExporterAndImporter.exportTables(C_DbExporterAndImporter.java:95)
at com.src.tools.C_DbExporterAndImporter.doMain(C_DbExporterAndImporter.java:83)
at com.src.common.C_MainUtil.runMain(C_MainUtil.java:150)
at com.src.tools.C_DbExporterAndImporter.main(C_DbExporterAndImporter.java:57)
Without any outside intervention which you aren't showing us, this isn't possible.
This invocation
m_test = C_ObjectUtil.defaultIfNull( test, "" );
pushes the value that test holds and the value of the reference to the String literal "" on the stack. The method is invoked, consuming those two references. Then this is invoked
public static <T> T defaultIfNull( T object, T defaultObject ) {
return ( object == null ) ? defaultObject : object;
}
the value from test, i.e. null, is bound to object, and the value of the reference to the String is bound to defaultObject. The conditional operator is executed and defaultObject is returned, i.e. the non-null String. That is assigned to m_test in the calling code. Nothing can intercept these steps in your code. Not another thread, not any proxying mechanism (it's a static method invoked directly), not anything I can think of.
Your error is elsewhere.
(I'll delete or edit this answer if you post new details that will help identify the true issue.)
What you're describing is very unlikely. However, it would not be the first time for the very unlikely to occur. I'm somewhat hesitant to post this as an answer because it really isn't one, but here we go.
As Kayaman points out, the described behavior might be due to an incorrect JIT optimization. To test this theory, I would perform two steps.
Turn on -XX:+PrintCompilation and monitor the output for compilation of relevant methods right before the ominous exception occurs. This by itself is no proof that the JIT compiler is applying an incorrect optimization. However, it's enough to commence step two and see what machine code the JIT is producing.
Turn on -XX:+PrintAssembly and go through the machine code the compiler output for the relevant method(s). This is a lot of work, especially when you are doing it for the first time.
The way I see it, there are three possibilities.
It's a (subtle) bug in your code — nothing to do with the JIT.
The JIT is actually wrong (unlikely, sorry). You should be able to prove it in step (2) above.
The JIT is right, but its (valid) optimization triggers an otherwise dormant bug. There's a decent chance you'll find it during step (2) above.
One last thing to point out - already alluded to by Cruncher - is the possibility of a race condition. Race conditions deserve a special mention because they fall somewhere in between the first and third category. They may have nothing to do with an optimization per se, but the optimization's speed-up may enable an unwanted and otherwise unachievable execution trace.
I wish you best of luck on your hunt. :)
I switched from Sun JDK 1.7.0_25 64Bit to Sun JDK 1.7.0_55 64Bit and the NPE disappeared!

Is using return at the start of method bad coding practice?

I have found myself using the following practice, but something inside me kind of cringes every time I use it. Basically, it's a precondition test on the parameters to determine if the actual work should be done.
public static void doSomething(List<String> things)
{
    if(things == null || things.size() <= 0)
        return;

    //...snip... do actual work
}
It is good practice to return at the earliest opportunity.
That way the least amount of code gets executed and evaluated.
Code that does not run cannot be in error.
Furthermore it makes the function easier to read, because you do not have to deal with all the cases that do not apply anymore.
Compare the following code
private Date someMethod(Boolean test) {
    Date result;
    if (null == test) {
        result = null;
    } else {
        result = test ? something : other;
    }
    return result;
}
vs
private Date someMethod(Boolean test) {
    if (null == test) {
        return null;
    }
    return test ? something : other;
}
The second one is shorter, does not need an else and does not need the temp variable.
Note that in Java the return statement exits the function right away; in other languages (e.g. Pascal) the almost equivalent code result := something; does not return.
Because of this fact it is customary to return at many points in Java methods.
Calling this bad practice is ignoring the fact that that particular train has long since left the station in Java.
If you are going to exit at many points in a function anyway, it's best to exit at the earliest opportunity.
It's a matter of style and personal preference. There's nothing wrong with it.
To the best of my understanding - no.
For the sake of easier debugging there should be only one return/exit point in a subroutine, method or function.
With such an approach your program may become longer and less readable, but while debugging you can put a breakpoint at the exit and always see the state of what you return. For example, you can log the state of all local variables - it may be really helpful for troubleshooting.
It looks like there a two "schools" - one says "return as early as possible", whereas another one says "there should be only one return/exit point in a program".
I am a proponent of the first one, though in practice sometimes follow the second one, just to save time.
Also, do not forget about exceptions. Very often the fact that you have to return from a method early means that you are in an exceptional situation. In your example I think throwing an exception is more appropriate.
PMD seems to think so, and that you should always let your methods run to the end, however, for certain quick sanity checks, I still use premature return statements.
It does impair the readability of the method a little, but in some cases that can be better than adding yet another if statement or other means by which to run the method to the end for all cases.
There's nothing inherently wrong with it, but if it makes you cringe, you could throw an IllegalArgumentException instead. In some cases, that's more accurate. It could, however, result in a bunch of code that look this whenever you call doSomething:
try {
    doSomething(myList);
} catch (IllegalArgumentException e) {}
There is no correct answer to this question, it is a matter of taste.
In the specific example above there may be better ways of enforcing a pre-condition, but I view the general pattern of multiple early returns as akin to guards in functional programming.
I personally have no issue with this style - I think it can result in cleaner code. Trying to contort everything to have a single exit point can increase verbosity and reduce readability.
It's good practice. So continue with your good work.
There is nothing wrong with it. Personally, I would use an else statement to execute the rest of the function and let it return naturally.
If you want to avoid the "return" in your method, maybe you could use your own subclass of Exception and handle it at the method's call site?
For example :
public static void doSomething(List<String> things) throws MyExceptionIfThingsIsEmpty {
    if(things == null || things.size() <= 0)
        throw new MyExceptionIfThingsIsEmpty(1, "Error, the list is empty!");
    //...snip... do actual work
}
Edit :
If you don't want to use the "return" statement, you could do the opposite in the if() :
if(things != null && things.size() > 0)
// do your things
If a function is long (say, 20 lines or more), then it is good to return early for a few error conditions at the beginning, so that the reader can focus on the logic when reading the rest of the function. If a function is small (say, 5 lines or less), then return statements at the beginning can be distracting for the reader.
So the decision should be based primarily on whether the function becomes more readable or less readable.
Java good practices say that, as often as possible, return statements should be unique and written at the end of the method. To control what you return, use a variable. However, for returning from a void method, like in the example you give, what I'd do would be to perform the check in an intermediate method used only for that purpose. Anyway, don't take this too seriously - keywords like continue should never be used according to Java good practices, but they're there, inside your scope.

Should we always check each parameter of method in java for null in the first line?

Every method accepts a set of parameter values. Should we always validate the non-nullness of input parameters, or allow the code to fail with the classic RuntimeException?
I have seen a lot of code where people don't really check the nullness of input parameters and just write the business logic using the parameters. What is the best way?
public void method(String a, Integer b, Object c)
{
    if( a == null || b == null || c == null)
    {
        throw new RuntimeException("Message...");
    }
    // ...business logic...
}
The best way is to only check when necessary.
If your method is private, for example, so you know nobody else is using it, and you know you aren't passing in any nulls, then no point to check again.
If your method is public though, who knows what users of your API will try and do, so better check.
If in doubt, check.
If the best you can do, however, is throw a NullPointerException, then you may not want to check. For example:
int getStringLength(String str) {
    return str.length();
}
Even if you checked for null, a reasonable option would be throwing a NullPointerException, which str.length() will do for you anyway.
No. It's standard to assume that parameters will not be null and that a NullPointerException will be thrown otherwise. If your method allows a parameter to be null, you should state that in your api.
It's an unfortunate aspect of Java that references can be null and there is no way to specify in the language that they are not.
So generally yes, don't make up an interpretation for null and don't get into a situation where an NPE might be thrown later.
JSR305 (now inactive) allowed you to annotate parameters to state that they should not be given nulls.
void fn(@Nonnull String a, @Nonnull Integer b, @Nonnull Object c) {
Verbose, but that's Java for you. There are other annotation libraries and checkers that do much the same, but are non-standard.
(Note about capitalisation: when camel-casing words of the form "non-thing", the standard is not to capitalise the root word unless it is the name of a class. So nonthing and nonnull.)
Annotations also don't actually enforce the rule, other than when a checker is run. You can statically include a method to do the checking:
public static <T> T nonnull(T value) {
    if (value == null) {
        throwNPE();
    }
    return value;
}

private static void throwNPE() {
    throw new NullPointerException();
}
Returning the value is handy in constructors:
import static pkg.Check.nonnull;
class MyClass {
    @Nonnull private final String thing;

    public MyClass(@Nonnull String thing) {
        this.thing = nonnull(thing);
    }
...
I don't see a great deal of point in doing this. You're simply replicating behaviour that you'd get for free the first time you try to operate on a, b or c.
It depends if you expect any of your arguments to be null - more precisely, if your method can still make a correct decision if some parameters are null.
If not, it's good practice to check for null and raise an exception, because otherwise you'll get a NullPointerException, which you should never catch, as its appearance always indicates that you forgot to check your variables in your code. (and if you catch it, you might miss other occurrences of it being thrown, and you may introduce bugs).
On the other hand, if you throw a RuntimeException or some other custom exception, you can then handle it somewhere upstream, so that you have more control over what goes on.
Yes, public methods should scrub the input, especially if bad input could cause problems within your method's code. It's a good idea to fail fast; that is, check as soon as possible. Java 7 added a new Objects class that makes it easy to check for null parameters and include a customized message:
public final void doSomething(String s)
{
    Objects.requireNonNull(s, "The input String cannot be null");
    // rest of your code goes here...
}
This will throw a NullPointerException.
Javadocs for the Objects class: http://docs.oracle.com/javase/7/docs/api/java/util/Objects.html
It depends on what your code is trying to do and the situation in which you want to throw an Exception. Sometime you will want your method to always throw an Exception if your method will not be able to work properly with null values. If your method can work around null values then its probably not necessary to throw an Exception.
Adding too many checked Exceptions can make for very convoluted and complicated code. This was the reason that they were not included in C#.
No, you shouldn't do that universally.
My preferences, in order, would be:
Do something sensible with the null. The sensible thing to do depends entirely on the situation, but throwing a custom exception should not be your first choice.
Use an assertion to check for null and test thoroughly, eliminating any situations in which null input is produced - a.k.a. bugs.
For public APIs, document that null isn't allowed and let it fail with an NPE.
You should never throw a runtime exception unless it is truly a fatal condition for the operation of the system, such as missing critical runtime parameters, but even then this is questionable as the system should just not start.
What are the business rules? Is null allowed for the field? Is it not?
In any case, it is always good practice to CHECK for nulls on ANY parameters passed in before you attempt to operate on them, so you don't get NullPointerExceptions when someone passes you bad data.
If you don't know whether you should do it, the chances are, you don't need to do it.
JDK sources, and Joshua Bloch's book, are terrible examples to follow, because they are targeting very different audiences. How many of us are writing public APIs for millions of programmers?
