How to avoid JSONObject Null check - java

I have JSON like the following:
{
    "a1": "aaa",
    "b1": 333,
    "c1": {
        "c1": "ccc",
        "d1": "ddd",
        "f1": [
            {"a1": "xyz"},
            {"b1": "lmn"},
            {"c1": 123.00}
        ]
    }
}
I am reading the file into a String and creating a JSONObject as below:
JSONObject json = new JSONObject(new JSONTokener(str));
The JSON comes to my application from outside, so the content can vary considerably: it can be completely null, some of the elements can be null, an array's size can be 0, 1, or more, etc.
When working with the JSONObject I can keep checking every element using json.has() and != null so that it does not throw any exception.
I can write the code as below (the keys match the JSON above):
if (json.has("c1")
        && json.getJSONObject("c1") != null
        && json.getJSONObject("c1").has("f1")
        && json.getJSONObject("c1").getJSONArray("f1").length() > 1
        && json.getJSONObject("c1").getJSONArray("f1").getJSONObject(1).has("b1")) {
    String x = json.getJSONObject("c1").getJSONArray("f1").getJSONObject(1).getString("b1");
}
That leaves the code with a long list of if conditions.
However, I am thinking I can instead just enclose the statement in a try-catch:
try {
    String x = json.getJSONObject("c1").getJSONArray("f1").getJSONObject(1).getString("b1");
} catch (JSONException e) {
    // log and proceed
}
Please suggest if there is any valid reason in this case to use a long chain of if conditions rather than just try, catch, log, and proceed.
Also, can you share whether using JSONException has any "pros" in this context?

Option 0: Long If-Statement
I think this is far too convoluted, especially as the chain gets longer. The only case where I would consider a remotely similar solution is if the user needs to know exactly where the problem lies, and there are specific guidelines on how to resolve it for a very specific use case. Even then, you need a large number of if statements and log statements.
Option 1: Try-Catch
Try-catch, as you suggested, is indeed possible; however, you need to make sure that you catch every exception that can occur when a JSON field is missing or of the wrong type, for example ClassCastException and NullPointerException. It is short but not very elegant, and, as the other answerer said, it might hide other exceptions (though you could still log the stack trace).
Option 2: Java 8 Optional
Another option is to find a library that lets you use the Java 8 Optional type; Jackson, for example, supports this. This is more elegant, but could become a long chain as well, and it adds one more dependency to your project.
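For illustration, a minimal sketch of the idea with Jackson's tree model (field names follow the JSON above; each map() call short-circuits to an empty Optional as soon as a node is missing):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Optional;

JsonNode root = new ObjectMapper().readTree(str);
Optional<String> x = Optional.ofNullable(root.get("c1")) // empty if "c1" is absent
        .map(c -> c.get("f1"))                           // JsonNode.get returns null when missing
        .map(f -> f.get(1))                              // null when the array is too short
        .map(o -> o.get("b1"))
        .map(JsonNode::asText);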
Option 3: Path Expressions
A third option would be to use path expressions with JsonPath, where you can put the whole lookup into one expression and get the result in one call. In my opinion this is the perfect use case for it and by far the best solution. The only downside is that this, too, adds a dependency to your project.

Thanks to the responses above, below is how the problem was solved.
JsonPath works best for the scenario I mentioned. Thanks to @Konrad.
First create a configuration object of type com.jayway.jsonpath.Configuration:
Configuration conf = Configuration.builder()
        .options(Option.SUPPRESS_EXCEPTIONS)
        .mappingProvider(new JsonOrgMappingProvider())
        .jsonProvider(new JsonOrgJsonProvider())
        .build();
Option.SUPPRESS_EXCEPTIONS - suppresses the exception when an element is missing; read() returns null instead.
mappingProvider(new JsonOrgMappingProvider()).jsonProvider(new JsonOrgJsonProvider()) - selects providers that work on the JSONObject directly, so we don't need to convert it to a String before parsing, which saves time and memory.
DocumentContext docContext = JsonPath.using(conf).parse(json);
If we used the default provider, we would need to stringify the JSONObject before parsing:
DocumentContext docContext = JsonPath.using(conf).parse(json.toString());
Then, to read the element, I used:
docContext.read("$.a.b.c.d.values[0].e.f")
The performance is now good as well. It was taking more time and memory before for two reasons: I was reading and parsing at the same time inside a loop (I later moved the parsing out of the loop), and I was using the default provider and calling json.toString() on every iteration.
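Putting it together for the JSON at the top of the question (a sketch; the path mirrors the keys there):
DocumentContext docContext = JsonPath.using(conf).parse(json); // parse once, outside any loop
String x = docContext.read("$.c1.f1[1].b1");                   // null instead of an exception when an element is missing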

Please suggest if there is any valid reason in this case to use a long chain of if conditions rather than just try, catch, log, and proceed.
Here are a couple of reasons:
Efficiency: creating, throwing, and catching an exception is relatively expensive. Exactly how expensive depends on the Java version, and probably on context too. In recent versions, the JIT compiler can (AFAIK) optimize some sequences into a conditional branch. However, if you are going to log the exception, then the JVM has to create an exception object and populate the stacktrace, and that is the most expensive part.
If you log NullPointerException for:
json.getJSONObject("c").getJSONArray("f").getJSONObject(1).getString("b")
you may not be able to tell which of the components was missing or null. The line number in the stacktrace won't be sufficient to distinguish the cases. (With a JSONException the exception message will give you more clues.)
You also can't distinguish the case where json is null, which is probably a different kind of problem; i.e. a bug. If you treat that as a data error, you are making it difficult to find and fix the code bug.
If you catch all exceptions (as suggested by another answer), you are potentially hiding more classes of bugs. Bad idea.
Also, can you share whether using JSONException has any "pros" in this context?
The only real "pro" is that it is possibly less code, especially if you can put the try ... catch around a lot of this code.
The bottom line is that you need to weigh this up for yourself. And one factor is how likely it is that you will get JSON that doesn't match your code's expectations. That will depend in part on what is producing the JSON.
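For reference, org.json itself also offers opt-style accessors (optJSONObject, optJSONArray, optString) that return null or a default instead of throwing, which gives a middle ground between the long if chain and catching JSONException. A sketch using the keys from the question:
JSONObject c1 = json.optJSONObject("c1");                      // null instead of a JSONException
JSONArray f1 = (c1 == null) ? null : c1.optJSONArray("f1");
JSONObject item = (f1 == null) ? null : f1.optJSONObject(1);   // null if index 1 does not exist
String x = (item == null) ? null : item.optString("b1", null); // default when the key is absent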

Related

Handling null, even if annotations are used

I started using javax.annotation, especially to warn the next developer who may work with my code in the future.
But while I was using the javax.annotation @Nonnull annotation, a question came to my mind:
If you mark, e.g., a parameter of a method with the @Nonnull annotation to say it has to have a value,
do you still need to handle the case that the next developer using your code could pass null to your function?
I found one con argument and one pro argument for still handling the special cases.
con: The code is cleaner without the explicit checks, especially if you have multiple parameters that you mark with @Nonnull:
private void foo(@Nonnull Object o)
{
    /*do something*/
}
vs
public void foo(Object o)
    throws NullPointerException
{
    if (o == null)
    {
        throw new NullPointerException("Given Object must have a value!");
    }
    /*do something*/
}
pro: It could cause unhandled errors if the next developer ignores the annotations.
This is an unsolved problem in the nullity annotation space. There are two viewpoints that sound identical but lead, in fact, to exactly opposite conclusions. Given a parameter void foo(@NonNull String param), what does that imply?
It's compiler-checkable documentation that indicates you should not pass null as param here. It does not mean that it is impossible to do this, or that one ought to consider it impossible. Simply that one should not - it's compiler-checkable documentation that the method has no defined useful behaviour if you pass null here.
The compiler is extended to support these annotations and treats it as a single type - the type of param is @NonNull String - and the compiler knows what that means and will in fact ensure this. The type of the parameter is @NonNull String and therefore cannot be null, just like it can't be, say, an InputStream instance either.
Crucially, then, the latter means a null check is flagged as silly code, whereas the former means a lack of a null check is flagged as bad. Hence, opposites. The latter considers a nullcheck a warnable offense (with something along the lines of "param can never be null here"), for the same reason this is silly code:
void foo(String arg) {
    if (!(arg instanceof String)) throw new IllegalArgumentException("arg");
}
That if clause cannot possibly fire. The mindset of various nullchecker frameworks is identical here, and therefore flags it as silly code:
void foo(@NonNull String arg) {
    if (arg == null) throw new NullPointerException("arg");
}
The simple fact is, plenty of Java devs do not enable annotation-based nullity checking, and even if they did, there are at least 10 competing annotations, many of which mean completely different things and work completely differently. The vast majority will not be using a checking framework that works as you think it should; therefore, the advice to remove the nullcheck because it is silly is actively a bad thing - you should add that nullcheck. The linting tools that flag this are misguided; they want to pretend to live in a world where every Java programmer on the planet uses their tool. This isn't true and is unlikely to ever become true; hence, wrong.
A few null checking frameworks are sort of living both lives and will allow you to test whether an argument marked as @NonNull is null, but only if the if body starts with throw; otherwise it's flagged.
To answer your questions:
You should nullcheck. After all, other developers that use your code may not get the nullity warnings from the nullcheck tool (either other team members working on the same code base but using slightly different tools and/or configurations of those tools, or, if your code is a library, another project using it - a more obvious route to a situation with different tools/configs). The best way to handle a null failure is a compile-time error. A close second is an exception that is clear about the problem and whose stack trace can be used to very quickly solve the bug. A distant third is random bizarreness that takes a while to debug - and the explicit nullcheck gives you a nice fallback: if for whatever reason the write-time tooling doesn't catch the problem, the check turns it into the second, still quite acceptable case of an exception at the point of failure that is clear about what happened and where to fix it.
Lombok's @NonNull annotation can generate the check for you, if you want. Now you have the best of both worlds: just a @NonNull annotation (no clutter) and yet a runtime exception if someone does pass null anyway. (DISCLAIMER: I'm one of the core contributors to Lombok.)
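A sketch of what that looks like (the generated check and message shown in the comment approximate what current Lombok produces):
import lombok.NonNull;

public void foo(@NonNull Object o) {
    /* do something */
    // Lombok inserts, roughly:
    //   if (o == null) throw new NullPointerException("o is marked non-null but is null");
}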
If your linting tool complains about 'pointless null check' on the line if (param == null) throw new NullPointerException("param");, find the option in the linting tool to exclude if-checks that result in throw statements. If the linting tool cannot be configured to ignore this case, do not use the linting tool, find a better one.
Note that modern JVMs will throw a NullPointerException with the name of the expression as message if you dereference a null pointer, which may obviate the need to write an explicit check. However, now you're dependent on that method always dereferencing that variable forever more; if ever someone changes it and e.g. assigns it to a field and returns, now you have a problem: It should have thrown the exception, in order to ensure the bug is found quickly and with an exception that explains what happened and where to go and fix the problem. Hence I wouldn't rely on the JVM feature for your NPEs.
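For reference, on such a JVM (helpful NullPointerExceptions, introduced in JDK 14 and on by default since JDK 15) the message looks roughly like this:
String s = null;
s.length();
// java.lang.NullPointerException: Cannot invoke "String.length()" because "s" is null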
Error messages should be as short as they can be whilst not skimping on detail. They should also not end in punctuation, especially exclamation marks. Every exception tends to feel noteworthy enough to warrant an exclamation mark, but it gets tedious to read them, so do not add them. In fact, the proper thing to throw is: throw new NullPointerException("o") - and you might want to rename that parameter to something more readable if you find o ugly. Parameters are mostly public API info (JVM-technically they are not, but javadoc includes them, and javadoc is the basis of API docs, so you should consider them public; therefore they should have clear names, which you can then reuse in the message). That exception conveys all relevant information to a programmer: the nature of the problem (null was sent to code that does not know how to handle it), where it occurred (the stack trace does that automatically), and the specifics (which thing was null). Your message is much longer and doesn't add anything. At best you could argue your message might be understood by a non-coder, except that is both untrue (as if a stack trace is something random joe computeruser is going to understand) and irrelevant (it's not as if they could fix the problem even if they did know what it means). Using exception messages as UI output just doesn't work, so don't try.
You may want to adjust your style guides and allow braceless if statements provided that the if expression is simple (no && or ||). Possibly add an additional rule that the single statement must be a control statement - break;, continue;, return (something);, or throw something;. This will significantly improve readability for multi-parameter methods. The point of a style guide is to create legible code. Surely this:
if (param1 == null) throw new NullPointerException("param1");
if (param2 == null) throw new NullPointerException("param2");
is far more legible, especially considering this method has more lines than just those two, than this:
if (param1 == null) {
throw new NullPointerException("param1");
}
if (param2 == null) {
throw new NullPointerException("param2");
}
Style guides are just a tool. If your style guide is leading to less productivity and harder-to-read code, the answer should be obvious: fix or replace the tool.

How to tell Java that a variable cannot possibly be null?

I have a program that basically looks like this:
boolean[] stuffNThings;
int state = 1;
for (String string : list) {
    switch (state) {
        case 1:
            if (/*condition*/) {
                // foo
                break;
            } else {
                stuffNThings = new boolean[/*size*/];
                state = 2;
            }
            // intentional fallthrough
        case 2:
            // bar
            stuffNThings[0] = true;
    }
}
As you, a human, can see, case 2 only ever happens when there was previously a state 1 and it switched to state 2 after initialising the array. But Eclipse and the Java compiler don't see this, because it looks like pretty complex logic to them. So Eclipse complains:
The local variable stuffNThings may not have been initialized.
And if I change "boolean[] stuffNThings;" to "boolean[] stuffNThings=null;", it switches to this error message:
Potential null pointer access: The variable stuffNThings may be null at this location.
I also can't initialise it at the top, because the size of the array is only determined after the final loop in state 1.
Java thinks that the array could be null there, but I know that it can't. Is there some way to tell Java this? Or am I definitely forced to put a useless null check around it? Adding that makes the code harder to understand, because it looks like there may be a case where the value doesn't actually get set to true.
Java thinks that the array could be null there, but I know that it can't.
Strictly speaking, Java thinks that the variable could be uninitialized. If it is not definitely initialized, the value should not be observable.
(Whether the variable is silently initialized to null or left in an indeterminate state is an implementation detail. The point is, the language says you shouldn't be allowed to see the value.)
But anyway, the solution is to initialize it to null. It is redundant, but there is no way to tell Java to "just trust me, it will be initialized".
In the variations where you are getting "Potential null pointer access" messages:
It is a warning, not an error.
You can ignore or suppress a warning. (If your correctness analysis is wrong then you may get NPE's as a result. But that's your choice.)
You can turn off some or all warnings with compiler switches.
You can suppress a specific warning with a @SuppressWarnings annotation:
For Eclipse, use @SuppressWarnings("null").
For Android, use @SuppressWarnings("ConstantConditions").
Unfortunately, the warning tags are not fully standardized. However, a compiler should silently ignore a @SuppressWarnings for a warning tag that it doesn't recognize.
You may be able to restructure the code.
In your example, the code is using switch fall-through. People seldom do that because it leads to code that is hard to understand, so I'm not surprised that you can find edge cases involving fall-through where a compiler gets the NPE warnings a bit wrong.
Either way, you can easily avoid the need for fall-through by restructuring your code. Copy the code in the case 2: branch to the end of the case 1: branch. Fixed. Move on.
Note the "possibly uninitialized" error is not the Java compiler being "stupid". There is a whole chapter of the JLS on the rules for definite assignment, etcetera. A Java compiler is not permitted to be smart about it, because that would mean that the same Java code would be legal or not legal, depending on the compiler implementation. That would be bad for code portability.
What we actually have here is a language design compromise. The language stops you from using variables that are (really) not initialized. But to do this, the "dumb" compiler must sometimes stop you using variables that you (the smart programmer) know will be initialized ... because the rules say that it should.
(The alternatives are worse: either no compile-time checks for uninitialized variables leading to hard crashes in unpredictable places, or checks that are different for different compilers.)
A distinct non-answer: when code is "so" complicated that an IDE / java compiler doesn't "see it", then that is a good indication that your code is too complicated anyway. At least for me, it wasn't obvious what you said. I had to read up and down repeatedly to convince myself that the statement given in the question is correct.
You have an if in a switch in a for. Clean code, and "single layer of abstraction" would tell you: not a good starting point.
Look at your code. What you have there is a state machine in disguise. Ask yourself whether it would be worth to refactor this on larger scale, for example by turning it into an explicit state machine of some sort.
Another less intrusive idea: use a List instead of an array. Then you can simply create an empty list, and add elements to that as needed.
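A sketch of that variant (assumed shape, using java.util.List; the declaration is definitely assigned, so the compiler complaint disappears):
List<Boolean> stuffNThings = new ArrayList<>();
int state = 1;
for (String string : list) {
    switch (state) {
        case 1:
            if (/*condition*/) {
                // foo
                break;
            }
            state = 2;
            // intentional fallthrough
        case 2:
            // bar
            if (stuffNThings.isEmpty()) stuffNThings.add(true);
            else stuffNThings.set(0, true);
    }
}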
After just trying to execute the code regardless of Eclipse complaining, I noticed that it does indeed run without problems. So apparently it was just a warning being set to "error" level, despite not being critical.
There was a "configure problem severity" button, so I set the severity of "Potential null pointer access" to "warning" (and adjusted some other levels accordingly). Now Eclipse just marks it as warning and executes the code without complaining.
More understandable would be:
boolean[] stuffNThings;
boolean initialized = false;
for (String string: list) {
if (!initialized) {
if (!/*condition*/) {
stuffNThings = new boolean[/*size*/];
initialized = true;
}
}
if (initialized) {
// bar
stuffNThings[0] = true;
}
}
Two loops, one for the initialisation and one for working with the stuff, might or might not be clearer.
It is easier on flow analysis (compared to a switch with fall-through).
Furthermore, instead of a boolean[], a BitSet might be used too (as it is not fixed in size, unlike an array):
BitSet stuffNThings = new BitSet(/*max size*/);
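Usage is then straightforward, since a BitSet grows as needed:
stuffNThings.set(0);               // no bounds to manage; grows automatically
boolean bar = stuffNThings.get(0); // unset bits simply read as false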

Best practice with respect to NPE and multiple expressions on single line

I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
Is this problem something worth designing around? Is it better to go for the first or second possibility?
Is the creation of a variable name something that would have an effect performance-wise?
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is ok. You can use contract annotations in your code to check such conditions. For instance, Parasoft JTest uses an annotation like @post $result != null and flags all methods without the annotation that use the return value without checking.
If the method can return null your code should always use the second variant, and check the return value. Only you can decide what to do if the return value is null, it might be ok, or you might want to log an error:
Object o = getThis();
if (null == o) {
log.error("mymethod: Could not retrieve this");
} else {
o.doThat();
}
Personally I dislike the one-liner code "design pattern", so I side with all those who say to keep your code readable. That said, I have seen much worse lines of code in existing projects, similar to this:
someMap.put(
        someObject.getSomeThing().getSomeOtherThing().getKey(),
        someObject.getSomeThing().getSomeOtherThing());
I think no one would argue that this is the way to write maintainable code.
As for using annotations: unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. Without the IDE integration these do not have much benefit (apart from some extra documentation). However, I do recommend the assert keyword. While it only helps at run-time, it helps find most NPE causes, costs nothing when assertions are disabled, and makes the assumptions your code makes clearer.
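For example (a sketch; assertions are checked only when the JVM is started with -ea):
Object o = myObj.getThis();
assert o != null : "getThis() returned null"; // documents the assumption; no cost with assertions disabled
anObj.doThatWith(o);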
If it were me, I would change the code to your latter version, but I would also add logging (maybe print) statements with a framework like log4j, so if something did go wrong I could check the log files to see what was null.

Java: Exceptions as control flow?

I've heard that using exceptions for control flow is bad practice. What do you think of this?
public static StringMatch findStringMatch(String g0, String g1) {
    int g0Left = -1;
    int g0Right = -1;
    int g1Left = -1;
    int g1Right = -1;
    // if a match is found, set the above ints to the proper indices
    // ...
    // if not, the ints remain -1
    try {
        String gL0 = g0.substring(0, g0Left);
        String gL1 = g1.substring(0, g1Left);
        String g0match = g0.substring(g0Left, g0Right);
        String g1match = g1.substring(g1Left, g1Right);
        String gR0 = g0.substring(g0Right);
        String gR1 = g1.substring(g1Right);
        return new StringMatch(gL0, gR0, g0match, g1match, gL1, gR1);
    }
    catch (StringIndexOutOfBoundsException e) {
        return new StringMatch(); // no match found
    }
}
So, if no match has been found, the ints will be -1. This will cause an exception when I try to take the substring g0.substring(0, -1). Then the function just returns an object indicating that no match is found.
Is this bad practice? I could just check each index manually to see if they're all -1, but that feels like more work.
UPDATE
I have removed the try-catch block and replaced it with this:
if (g0Left == -1 || g0Right == -1 || g1Left == -1 || g1Right == -1) {
return new StringMatch();
}
Which is better: checking if each variable is -1, or using a boolean foundMatch to keep track and just check that at the end?
Generally, exceptions are expensive operations and, as the name suggests, meant for exceptional conditions. So using them to control the flow of your application is indeed considered bad practice.
Specifically, in the example you provided, you would need to do some basic validation of the inputs you are passing to the StringMatch constructor. If it were a method that returned an error code when basic parameter validation failed, you could avoid checking beforehand, but this is not the case.
I've done some testing on this. On modern JVMs, it actually doesn't impact runtime performance much (if at all). If you run with debugging turned on, then it does slow things down considerably.
See the following for details
(I should also mention that I still think this is a bad practice, even if it doesn't impact performance. More than anything, it reflects a possibly poor algorithm design that is going to be difficult to test)
Yes, this is a bad practice, especially when you have a means to avoid an exception (check the string length before trying to index into it). Try and catch blocks are designed to partition "normal" logic from "exceptional" and error logic. In your example, you have spread "normal" logic into the exceptional/error block (not finding a match is not exceptional). You are also misusing substring so you can leverage the error it produces as control flow.
Program flow should be in as straight a line as possible (since even then applications get pretty complex), and should utilize standard control flow structures. The next developer to touch the code may not be you, and may (rightly) misunderstand the non-standard way you are using exceptions instead of conditionals to determine control flow.
I am fighting a slightly different slant on this problem right now during some legacy code refactoring.
The largest issue that I find with this approach is that using the try/catch breaks normal programmatic flow.
In the application I am working on (and this is different from the sample you provided), exceptions are used to communicate from within a method call that a given outcome (for instance, looking for an account number and not finding it) occurred. This creates spaghetti code on the client side, since the calling method (during a non-exceptional event, or a normal use-case event) breaks out of whatever code it was executing before the call and into the catch block. This is repeated in some very long methods many times over, making the code very easy to misread.
For my situation, a method should return a value per its signature for all but truly exceptional events. The exception handling mechanism is intended to take another path when an exception occurs (try to recover from within the method so you can still return normally).
To my mind you could do this if you scope your try/catch blocks very tightly; but I think it is a bad habit and can lead to code that is very easy to misinterpret, since the calling code will interpret any thrown exception as a 'GOTO' type message, altering program flow. I fear that although this case does not fall into this trap, doing this often could result in a coding habit leading to the nightmare that I am living right now.
And that nightmare is not pleasant.

Java exception handling idioms ... who's right and how to handle it?

I currently have a technical point of difference with an acquaintance. In a nutshell, it's the difference between these two basic styles of Java exception handling:
Option 1 (mine):
try {
    ...
} catch (OneKindOfException e) {
    ...
} catch (AnotherKind e) {
    ...
} catch (AThirdKind e) {
    ...
}
Option 2 (his):
try {
    ...
} catch (AppException e) {
    switch (e.getCode()) {
        case Constants.ONE_KIND:
            ...
            break;
        case Constants.ANOTHER_KIND:
            ...
            break;
        case Constants.A_THIRD_KIND:
            ...
            break;
        default:
            ...
    }
}
His argument -- after I used copious links about user input validation, exception handling, assertions and contracts, etc. to back up my point of view -- boiled down to this:
"It’s a good model. I've used it since me and a friend of mine came up with it in 1998, almost 10 years ago. Take another look and you'll see that the compromises we made to the academic arguments make a lot of sense."
Does anyone have a knock-down argument for why Option 1 is the way to go?
When you have a switch statement, you're less object-oriented. There are also more opportunities for mistakes: forgetting a break; statement, or forgetting to add a case when you add a new Exception that is thrown.
I also find your way of doing it to be MUCH more readable, and it's the standard idiom that all developers will immediately understand.
For my taste, the amount of boilerplate in your acquaintance's method - the amount of code that has nothing to do with actually handling the Exceptions - is unacceptable. The more boilerplate code there is around your actual program logic, the harder the code is to read and to maintain. And using an uncommon idiom makes code more difficult to understand.
But the deal breaker, as I said above, is that when you modify the called method so that it throws an additional Exception, you will automatically know you have to modify your code because it will fail to compile. However, if you use your acquaintance's method and you modify the called method to throw a new variety of AppException, your code will not know there is anything different about this new variety and your code may silently fail by going down an inappropriate error-handling leg. This is assuming that you actually remembered to put in a default so at least it's handled and not silently ignored.
the way option 2 is coded, any unexpected exception type will be swallowed! (This can be fixed by re-throwing in the default case, but that is arguably an ugly thing to do - much better, and more efficient, not to catch it in the first place.)
option 2 is a manual recreation of what option 1 most likely does under the hood, i.e. it ignores the preferred syntax of the language in favor of older constructs best avoided for maintenance and readability reasons. In other words, option 2 reinvents the wheel using uglier syntax than that provided by the language constructs.
clearly, both ways work; option 2 is merely obsoleted by the more modern syntax supported by option 1
I don't know if I have a knock down argument but initial thoughts are
Option 2 works until you're trying to catch an Exception that doesn't implement getCode().
Option 2 encourages the developer to catch general exceptions. This is a problem because if you don't implement a case statement for a given subclass of AppException, the compiler will not warn you. Of course you could run into the same problem with option 1, but at least option 1 does not actively encourage this.
With option 1, the caller has the option of selecting exactly which exception to catch, and to ignore all others. With option 2, the caller has to remember to re-throw any exceptions not explicitly caught.
Additionally, there's better self-documentation with option 1, as the method signature needs to specify exactly which exceptions are thrown, rather than a single over-riding exception.
If there's a need to have an all-encompassing AppException, the other exception types can always inherit from it.
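A sketch of that arrangement, using the hypothetical names from the code above:
class AppException extends Exception { }
class OneKindOfException extends AppException { }
class AnotherKind extends AppException { }
class AThirdKind extends AppException { }
Callers can then catch the specific types individually, or catch AppException to handle them all in one place.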
The knock-down argument would be that it breaks encapsulation, since now I have to know something about the Exception subclass's public interface in order to handle exceptions by it. A good example of this "mistake" in the JDK is java.sql.SQLException, which exposes getErrorCode and getSQLState methods.
It looks to me like you're overusing exceptions in either case. As a general rule, I try to throw exceptions only when both of the following are true:
An unexpected condition has occurred that cannot be handled here.
Somebody will care about the stack trace.
How about a third way? You could use an enum for the type of error and simply return it as part of the method's return type, using, for example, Option&lt;A&gt; or Either&lt;A, B&gt;.
For example, you would have:
enum Error { ONE_ERROR, ANOTHER_ERROR, THIRD_ERROR };
and instead of
public Foo mightError(Bar b) throws OneException
you will have
public Either<Error, Foo> mightError(Bar b)
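Java's standard library has no Either type; libraries such as Vavr provide one, or a minimal hypothetical version can be sketched:
// Minimal hypothetical Either: exactly one of 'error' or 'value' is present.
abstract class Either<L, R> {
    static final class Left<L, R> extends Either<L, R> {
        final L error;
        Left(L error) { this.error = error; }
    }
    static final class Right<L, R> extends Either<L, R> {
        final R value;
        Right(R value) { this.value = value; }
    }
}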
Throw/catch is a bit like goto/comefrom: easy to abuse. See Go To Statement Considered Harmful and Lazy Error Handling.
I think it depends on the extent to which this is used. I certainly wouldn't have "one exception to rule them all" which is thrown by everything. On the other hand, if there is a whole class of situations which are almost certainly going to be handled the same way, but you may need to distinguish between them for (say) user feedback purposes, option 2 would make sense just for those exceptions. They should be very narrow in scope - so that wherever it makes sense for one "code" to be thrown, it should probably make sense for all the others to be thrown too.
The crucial test for me would be: "would it ever make sense to catch an AppException with one code, but want to let another code remain uncaught?" If so, they should be different types.
Each checked Exception is an, um, exceptional condition that must be handled for the program behavior to be defined. There's no need to go into contractual obligations and whatnot; it's a simple matter of meaningfulness. Say you ask a shopkeeper how much something costs and it turns out the item is not for sale. Now, if you insist you'll only accept non-negative numerical values for an answer, there is no correct answer that could ever be provided to you. This is the point of checked exceptions: you ask that some action be performed (perhaps producing a response), and if your request cannot be performed in a meaningful manner, you'll have to plan for that reasonably. This is the cost of writing robust code.
With Option 2 you are completely obscuring the meaning of the exception conditions in your code. You should not collapse different error conditions into a single generic AppException unless they will never need to be handled differently. The fact that you're branching on getCode() indicates otherwise, so use different Exceptions for different exceptions.
The only real merit I can see with Option 2 is for cleanly handling different exceptions with the same code block. This nice blog post talks about this problem with Java. Still, this is a style vs. correctness issue, and correctness wins.
I'd support option 2 if it were:
default:
throw e;
It's slightly uglier syntax, but the ability to execute the same code for multiple exceptions (i.e. several cases in a row) is much better. The only thing that would bug me is producing a unique id when creating an exception, and the system could definitely be improved.
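Since Java 7, multi-catch covers that use case within option 1's style; a sketch:
try {
    ...
} catch (OneKindOfException | AnotherKind e) {
    // one handler shared by several exception types
} catch (AThirdKind e) {
    ...
}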
With option 2 you unnecessarily have to know the codes and declare constants for the exceptions, all of which option 1 abstracts away.
Also, the second option will (I guess) fall back to the traditional form (option 1) when there is only one specific exception to catch, so I see an inconsistency there.
Use both.
The first for most of the exceptions in your code.
The second for those very "specific" exceptions you've created.
Don't struggle with little things like this.
BTW 1st is better.
