Sometimes I see developers using libraries like Guava's Preconditions to validate method parameters for null at the beginning of a method. How is that different from just getting an NPE at runtime, since a runtime exception is thrown either way?
EDIT: If there are good reasons, then shouldn't developers use such libraries to null-check the parameters of every method?
Some reasons are:
It prevents any work from being done before the first point at which the (possibly null) variable would be dereferenced.
You can supply a custom error message, such as "myVar was null, I can't proceed further", or any other relevant message, which reads better in logs and is easier to trace. A bare NPE is less readable in logs.
Readability of the code is (arguably) better, because a programmer reading it realizes immediately that these values are expected to be non-null.
Ultimately it's a matter of taste IMO. I have seen programs that are inherently null safe and don't require pre-conditions at all.
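Those benefits are also available from the JDK itself: java.util.Objects.requireNonNull covers the same ground as Guava's Preconditions.checkNotNull for null checks. A minimal sketch (the greet method and its message are made up for illustration):

```java
import java.util.Objects;

public class PreconditionDemo {
    // Hypothetical method: validate up front so the failure names the culprit.
    static String greet(String name) {
        Objects.requireNonNull(name, "name was null, I can't proceed further");
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("world"));
        try {
            greet(null);
        } catch (NullPointerException e) {
            // The log now carries a readable message instead of a bare NPE.
            System.out.println(e.getMessage());
        }
    }
}
```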
There are various reasons. One big one, as mentioned in the comments, is to get the checking done up front before any work is done. It's also not uncommon for a method to have a code path in which a specific parameter is never dereferenced, in which case without upfront checking, invalid calls to the method may sometimes not produce an exception. The goal is to ensure they always produce an exception so that the bug is caught immediately. That said, I don't necessarily use checkNotNull if I'm going to immediately dereference the parameter on the next line or something.
There's also the case of constructors, where you want to checkNotNull before assigning a parameter to a field, but I don't think that's what you're talking about since you're talking about methods where the parameter will be used anyway.
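The "code path that never dereferences the parameter" point can be sketched like this (describe and its parameters are invented for illustration):

```java
import java.util.List;
import java.util.Objects;

public class UpFrontCheck {
    // Without the requireNonNull, describe(null, false) would "succeed",
    // because the false branch never touches items; the bug would only
    // surface on the calls that take the other path.
    static String describe(List<String> items, boolean includeCount) {
        Objects.requireNonNull(items, "items");
        return includeCount ? "size=" + items.size() : "no count requested";
    }
}
```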
In the JLS8, chapter "Exceptions" (here), I saw that:
Explicit use of throw statements provides an alternative to the
old-fashioned style of handling error conditions by returning funny
values, such as the integer value -1 where a negative value would not
normally be expected. Experience shows that too often such funny
values are ignored or not checked for by callers, leading to programs
that are not robust, exhibit undesirable behavior, or both.
Actually, I'm not clear about two things:
(1) "such as the integer value -1 where a negative value would not normally be expected": why would "a negative value not normally be expected"? To my knowledge, we often use the return value -1 for an error, an abnormal event, or something otherwise "not good".
(2) "Experience shows that too often such funny values are ignored or not checked for by callers, leading to programs that are not robust, exhibit undesirable behavior, or both." What does this mean? I don't understand this point. Please help me clarify, and if possible give me an example to demonstrate.
Thank you so much.
A common example: people don't check the contents of a string but blindly call indexOf(), not taking into account that the thing searched for may not be in the string, in which case the returned result is -1.
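A sketch of that pitfall:

```java
public class IndexOfPitfall {
    public static void main(String[] args) {
        String s = "hello world";
        int i = s.indexOf('z');   // 'z' is not in the string, so i is -1
        System.out.println(i);

        // Using the "funny value" without checking it blows up:
        try {
            s.substring(i);       // substring(-1)
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("forgot to check for -1");
        }
    }
}
```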
At least with a checked exception, the programmer must do something about a potential exception thrown by the code he is using. A return value, by contrast, can be ignored completely, which makes that mistake just a bit easier to commit.
On the other hand, many people argue that the idea of checked exceptions didn't live up to its promise, and therefore advocate using unchecked exceptions instead; or alternatively, as companies such as Google propose, using more sophisticated "return value classes".
Long story short:
by the nature of the language, exceptions should be seen as the primary means of communicating severe exceptional conditions
but that doesn't mean that using numeric return codes is impossible or completely discouraged.
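A toy sketch of such a "return value class" (this is my own illustration, not any particular library's API):

```java
public class ResultDemo {
    // A minimal result type: either a value or an error message, never both.
    static final class ParseResult {
        final Integer value;   // null on failure
        final String error;    // null on success
        ParseResult(Integer value, String error) { this.value = value; this.error = error; }
        boolean isOk() { return error == null; }
    }

    static ParseResult tryParse(String s) {
        try {
            return new ParseResult(Integer.parseInt(s), null);
        } catch (NumberFormatException e) {
            return new ParseResult(null, "not a number: " + s);
        }
    }

    public static void main(String[] args) {
        System.out.println(tryParse("42").isOk());
        System.out.println(tryParse("oops").error);
    }
}
```

Unlike a bare -1, the caller has to go through isOk() or the error field, so the failure case is much harder to ignore silently.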
Please help me to clarify and (if yes) give me an example to demonstrate:
For example Class.getResourceAsStream(String) returns null if it cannot find the resource, rather than throwing an exception. This is clearly documented in the javadocs.
However, lots of people don't read the documentation, and don't check the result of a getResourceAsStream call. As a result, when the resource is missing, they use the null and get an unexpected NullPointerException.
Another common example is ServletRequest.getParameter(String).
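A sketch of the getResourceAsStream case (the resource path is made up, and deliberately missing):

```java
import java.io.InputStream;

public class ResourceLookup {
    public static void main(String[] args) {
        // Per the javadoc, this returns null (rather than throwing)
        // when the resource cannot be found.
        InputStream in = ResourceLookup.class.getResourceAsStream("/no/such/file.txt");
        if (in == null) {
            System.out.println("resource missing; handle it here");
        }
        // Skipping the null check and calling in.read() would be the
        // classic unexpected NullPointerException.
    }
}
```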
If you analysed the NPE Q's on StackOverflow, you would find that a significant number of them are caused by people not checking the results of the above two methods. (If you don't believe me, the raw questions are available for you to analyse!)
Why is using an error code (such as -1) less efficient than using an exception?
The text you quoted doesn't say that. And it is probably not true. In fact, using an error code is (classically) more efficient in many cases. However, recent JIT compiler improvements have significantly reduced the overhead of exceptions and exception handling for typical use cases.
Summary:
People are lazy. But you knew that!
People are careless. But you knew that!
APIs that require people to check returned values are less robust than those that throw (checked) exceptions ... because people write code that doesn't check return codes. Why? Because people are lazy, careless or both!
Admittedly, there are pragmatic reasons not to throw exceptions. However it is a trade-off of robustness vs efficiency vs forcing the programmer to deal with checked exceptions.
The text you quoted is not trying to tell you use exceptions always. Rather it is explaining the reasons that exceptions are part of the Java language.
You may disagree, but ... frankly ... it is not a topic that is worth debating. Exceptions (checked / unchecked, etc) are so hard-baked into the Java language that it would be impossible to change.
(1) "such as the integer value -1 where a negative value would not normally be expected", why "a negative value would not normally be expected"?
It is a proven fact (see below) that people don't always check return values.
To my knowledge, we often use the return value -1 for an error, an abnormal event, or something otherwise "not good".
True. And exceptions provide an alternative.
What "funny values are ignored or not checked for by callers
'Funny values' such as -1. There are examples posted here every hour of every day.
leading to programs that are not robust, exhibit undesirable behavior, or both" means?
It means that programs that ignore 'funny values' aren't robust, exhibit undesirable behaviour, or both ... and give rise to trivial questions on this site.
I don't understand this issue...Please help me to clarify and (if yes) give me an example to demonstrate.
Try this search for hundreds of examples.
When a method returns an int, the caller has to check whether it is an error code. But if there is no expected range for the returned value, how could you even designate an error signal value?
Say, what "error code" should a parseInt method return?
And if the caller "forgets" to check the returned value, the error can go unnoticed.
However, if an exception is declared, the caller must deal with it, either by catching it or by declaring it in the throws clause...
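parseInt is in fact a good demonstration of the problem: every int, including -1, is a legitimate result, so there is no value left over to serve as an error code, and the exception carries the failure out of band:

```java
public class ParseIntDemo {
    public static void main(String[] args) {
        // -1 is a perfectly valid parse result, so it cannot double as an error code.
        System.out.println(Integer.parseInt("-1"));

        // Failure therefore has to be reported via an exception:
        try {
            Integer.parseInt("forty-two");
        } catch (NumberFormatException e) {
            System.out.println("parse failed: " + e.getMessage());
        }
    }
}
```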
After checking the JavaDocs for a method I was thinking of using, requireNonNull, I stumbled across the overload with a single parameter (T obj).
However, what is the actual purpose of this particular method with this signature? All it does is throw an NPE, which I'm somewhat positive (as I may be missing something obvious here) would be thrown anyway.
Throws:
NullPointerException - if obj is null
The latter actually makes sense in terms of debugging certain code; as the doc also states, it's primarily designed for parameter validation:
public static <T> T requireNonNull(T obj, String message)
Checks that the specified object reference is not null and throws a customized NullPointerException if it is.
Therefore I can print specific information along with the NPE to make debugging a hell of a lot easier.
With this in mind I highly doubt I would come across a situation where I'd rather just use the former instead. Please do enlighten me.
tl;dr - Why would you ever use the overload which doesn't take a message?
A good principle when writing software is to catch errors as early as possible. The quicker you notice, for example, a bad value such as null being passed to a method, the easier it is to find out the cause and fix the problem.
If you pass null to a method that is not supposed to receive null, a NullPointerException will probably happen somewhere, as you already noticed. However, the exception might not happen until a few methods further down, and when it happens somewhere deep down, it will be more difficult to find the exact source of the error.
So, it's better when methods check their arguments up front and throw an exception as soon as they find an invalid value such as null.
edit - About the one-parameter version: even though you won't provide an error message, checking arguments and throwing an exception early will be more useful than letting the null pass down until an exception happens somewhere deeper down. The stack trace will point to the line where you used Objects.requireNonNull(...) and it should be obvious to you as a developer that that means you're not supposed to pass null. When you let a NullPointerException happen implicitly you don't know if the original programmer had the intent that the variable should not be null.
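A sketch of that constructor case (the class is invented; note that requireNonNull also returns its argument, which makes it convenient in field assignments):

```java
import java.util.Objects;

public class FailFast {
    private final String name;

    // With the check, a bad call fails here, at the guilty call site.
    FailFast(String name) {
        this.name = Objects.requireNonNull(name);
    }

    // Without the check, the NPE would surface only here, possibly much later.
    int nameLength() {
        return name.length();
    }
}
```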
It is a utility method, just a shortcut. (Shortcut designers have their own ways of doing things.)
Why throw in the first place?
Security and debugging.
Security: to keep any illegal value out of a sensitive place (it lets the inner algorithm be sure about what it is doing and what it is holding).
Debugging: so the program dies fast when something unexpected happens.
Yes, there are many questions and perfectly good answers dealing with the Java assert statement and when exceptions should be used instead. This question is about one specific use case which normally falls within the sound category of, "Never use assert to ensure preconditions," -- or words to that effect. So, please bear with me for a moment longer...
All Java object references can be null and are null by default. Even enum references can be null in addition to referencing only the set of values specified in the enum implementation. Consequently, there is a great deal of testing for null which must occur in most Java programs. Some of that testing consists of checking for null as a value returned by a given API which denotes absence, failure intended to be handled by the caller, or some other bit of state information. However, most tests for null are meant to fail early when an unexpected null is encountered. To properly "fail early" as often as possible while also using executable code to function as a self-documenting precondition, conditional tests which throw an exception may be used to check for unwanted null values.
But...
An unwanted null value will usually cause a NullPointerException to be thrown at some point, though that point may be long after the place where the source of the null could easily be determined while testing or debugging the application. So null values usually do not go undetected; it is just that the self-reporting of incorrect use may not be very helpful. But regardless of whether an exception arises from explicitly detecting an unexpected null or from attempting to dereference it, the exception is almost certainly fatal, or will have to be handled at a level which fully backs out of whatever led to it.
So, an unwanted, unintended null will eventually bite at some point in time. Testing preconditions merely moves that point, in the best case, to a point in time which may make it easier for a developer to find the source of the problem. But if explicit tests or exceptions are used to check all possible unwanted null values, however unlikely, that results in a great many conditional branches in the compiled code where the expected and common branch to be taken is the branch which skips the code which throws an exception.
Given all that and the nature of Java object references, it seems that assert statements should be preferable specifically for preconditions which preclude null values. In development and testing, the preconditions are enabled and aid in detecting and fixing defects. In production, fatal errors from unexpected null values happen later, but usually do not fail to occur -- critical code which must never under any circumstances accept a null value can always use an explicit test and thrown exception even if assert is used elsewhere. Therefore, using assert can limit unnecessary overhead in production code for this one, common and almost ubiquitous precondition.
Given all that, is there a sound argument against using assert statements for preconditions which specifically document and reject null values which are invalid arguments, input, or state?
I think the fact that asserts are disabled by default is enough to keep me from using them for anything critical.
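That default is easy to observe with a standard idiom (a sketch; the assignment inside the assert only executes when assertions are enabled):

```java
public class AssertStatus {
    static boolean assertionsEnabled() {
        boolean enabled = false;
        // Deliberate side effect: this assignment runs only under `java -ea`.
        assert enabled = true;
        return enabled;
    }

    public static void main(String[] args) {
        // Prints false with a plain `java AssertStatus`, true with `java -ea AssertStatus`.
        System.out.println("assertions enabled: " + assertionsEnabled());
    }
}
```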
I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
Is this problem something worth designing around? Is it better to go for the first or second possibility?
Is the creation of a variable name something that would have an effect performance-wise?
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java ?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is ok. You can use contract annotations in your code to check such conditions. For instance, Parasoft JTest uses an annotation like @post $result != null and flags all methods without the annotation that use the return value without checking.
If the method can return null your code should always use the second variant, and check the return value. Only you can decide what to do if the return value is null, it might be ok, or you might want to log an error:
Object o = myObj.getThis();
if (o == null) {
    log.error("mymethod: Could not retrieve this");
} else {
    anObj.doThatWith(o);
}
Personally I dislike the one-liner "design pattern", so I side with all those who say to keep your code readable. That said, I have seen much worse lines of code in existing projects, similar to this:
someMap.put(
    someObject.getSomeThing().getSomeOtherThing().getKey(),
    someObject.getSomeThing().getSomeOtherThing());
I think no one would argue that this is a maintainable way to write code.
As for using annotations: unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. And without IDE integration these do not have much benefit (apart from some extra documentation). However, I do recommend the assert facility. While it only helps at run time, it does help to find most NPE causes, has no performance impact when disabled, and makes the assumptions your code makes explicit.
If it were me I would change the code to your latter version but I would also add logging (maybe print) statements with a framework like log4j so if something did go wrong I could check the log files to see what was null.
I’m from a .NET background and now dabbling in Java.
Currently, I’m having big problems designing an API defensively against faulty input. Let’s say I’ve got the following code (close enough):
public void setTokens(Node node, int newTokens) {
tokens.put(node, newTokens);
}
However, this code can fail for two reasons:
User passes a null node.
User passes an invalid node, i.e. one not contained in the graph.
In .NET, I would throw an ArgumentNullException (rather than a NullReferenceException!) or an ArgumentException respectively, passing the name of the offending argument (node) as a string argument.
Java doesn’t seem to have equivalent exceptions. I realize that I could be more specific and just throw whatever exception comes closest to describing the situation, or even writing my own exception class for the specific situation.
Is this the best practice? Or are there general-purpose classes similar to ArgumentException in .NET?
Does it even make sense to check against null in this case? The code will fail anyway and the exception’s stack trace will contain the above method call. Checking against null seems redundant and excessive. Granted, the stack trace will be slightly cleaner (since its target is the above method, rather than an internal check in the HashMap implementation of the JRE). But this must be offset against the cost of an additional if statement, which, furthermore, should never occur anyway – after all, passing null to the above method isn’t an expected situation, it’s a rather stupid bug. Expecting it is downright paranoid – and it will fail with the same exception even if I don’t check for it.
[As has been pointed out in the comments, HashMap.put actually allows null values for the key. So a check against null wouldn’t necessarily be redundant here.]
The standard Java exception is IllegalArgumentException. Some will throw NullPointerException if the argument is null, but for me NPE has that "someone screwed up" connotation, and you don't want clients of your API to think you don't know what you're doing.
For public APIs, check the arguments and fail early and cleanly. The time/cost barely matters.
Different groups have different standards.
Firstly, I assume you know the difference between RuntimeExceptions (unchecked) and normal Exceptions (checked), if not then see this question and the answers. If you write your own exception you can force it to be caught, whereas both NullPointerException and IllegalArgumentException are RuntimeExceptions which are frowned on in some circles.
Secondly, like you, the groups I've worked with don't actively use asserts; but if your team (or the consumer of the API) has decided it will use asserts, then assert sounds like precisely the right mechanism.
If I was you I would use NullPointerException. The reason for this is precedent. Take an example Java API from Sun, for example java.util.TreeSet. This uses NPEs for precisely this sort of situation, and while it does look like your code just used a null, it is entirely appropriate.
As others have said IllegalArgumentException is an option, but I think NullPointerException is more communicative.
If this API is designed to be used by outside companies/teams, I would stick with NullPointerException, but make sure it is declared in the javadoc. If it is for internal use, then you might decide that adding your own exception hierarchy is worthwhile, but personally I find that APIs which add huge exception hierarchies, which are only going to be printStackTrace()'d or logged, are just a waste of effort.
At the end of the day the main thing is that your code communicates clearly. A local exception hierarchy is like local jargon: it adds information for insiders but can baffle outsiders.
As regards checking against null I would argue it does make sense. Firstly, it allows you to add a message about what was null (ie node or tokens) when you construct the exception which would be helpful. Secondly, in future you might use a Map implementation which allows null, and then you would lose the error check. The cost is almost nothing, so unless a profiler says it is an inner loop problem I wouldn't worry about it.
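Pulling those points together, the question's setTokens could look like the sketch below (Node is stood in for by String, and the graph-membership rule is my own illustration, not anything from the question):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

public class TokenGraph {
    private final Set<String> nodes;                       // stand-in for the real graph
    private final Map<String, Integer> tokens = new HashMap<>();

    TokenGraph(Set<String> nodes) {
        this.nodes = Objects.requireNonNull(nodes, "nodes");
    }

    public void setTokens(String node, int newTokens) {
        // NPE for null, with a message naming the argument ...
        Objects.requireNonNull(node, "node must not be null");
        // ... and IllegalArgumentException for a non-null but invalid node.
        if (!nodes.contains(node)) {
            throw new IllegalArgumentException("node not in graph: " + node);
        }
        tokens.put(node, newTokens);
    }

    public Integer getTokens(String node) {
        return tokens.get(node);
    }
}
```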
In Java you would normally throw an IllegalArgumentException.
If you want a guide about how to write good Java code, I can highly recommend the book Effective Java by Joshua Bloch.
It sounds like this might be an appropriate use for an assert:
public void setTokens(Node node, int newTokens) {
    assert node != null;
    tokens.put(node, newTokens);
}
Your approach depends entirely on what contract your function offers to callers: is it a precondition that node is not null?
If it is, then you should throw an exception if node is null, since that is a contract violation. If it isn't, then your function should silently handle the null Node and respond appropriately.
I think a lot depends on the contract of the method and how well the caller is known.
At some point in the process the caller could take action to validate the node before calling your method. If you know the caller and know that these nodes are always validated, then I think it is ok to assume you'll get good data. Essentially, responsibility is on the caller.
However, if you are, for example, providing a third-party library that is distributed, then you need to validate the node for nulls, etc.
An IllegalArgumentException is the Java standard, but it is also a RuntimeException. So if you want to force the caller to handle the exception, you need to provide a checked exception, probably a custom one you create.
Personally I'd like NullPointerExceptions to ONLY happen by accident, so something else must be used to indicate that an illegal argument value was passed. IllegalArgumentException is fine for this.
if (arg1 == null) {
    throw new IllegalArgumentException("arg1 == null");
}
This should be sufficient both for those reading the code and for the poor soul who gets a support call at 3 in the morning.
(And ALWAYS provide an explanatory message for your exceptions; you will appreciate it some sad day.)
Like the others: java.lang.IllegalArgumentException.
As for checking for a null Node, what about checking for bad input at Node creation?
I don't have to please anybody, so what I do now as canonical code is
void method(String s) {
    if ((s != null) && (s instanceof String) && (s.length() > 0x0000)) {
        // ...
    }
}
which gets me a lot of sleep.
Others will disagree.