For people suggesting throwing an exception:
Throwing an exception doesn't give me a compile-time error, it gives me a runtime error. I know I can throw an exception, I'd rather die during compilation than during runtime.
First-off, I am using eclipse 3.4.
I have a data model that has a mode property that is an Enum.
enum Mode {on(...), off(...), standby(...); ...}
I am currently writing a view of this model and I have the code
...
switch(model.getMode()) {
case on:
return getOnColor();
case off:
return getOffColor();
case standby:
return getStandbyColor();
}
...
I am getting an error "This method must return a result of type java.awt.Color" because I have no default case and no return xxx at the end of the function.
I want a compilation error in the case where someone adds another type to the enum (e.g. shuttingdown) so I don't want to put a default case that throws an AssertionError, as this will compile with a modified Mode and not be seen as an error until runtime.
My question is this:
Why does EclipseBuilder (and javac) not recognize that this switch covers all possibilities (or does it cover them?) and stop warning me about needing a return type. Is there a way I can do what I want without adding methods to Mode?
Failing that, is there an option to warn/error on switch statements that don't cover all of the Enum's possible values?
Edit:
Rob: It is a compile error. I just tried compiling it with javac and I get a "missing return statement" error targeting the last } of the method. Eclipse just places the error at the top of the method.
You could always use the Enum with Visitor pattern:
enum Mode {
on {
public <E> E accept( ModeVisitor<E> visitor ) {
return visitor.visitOn();
}
},
off {
public <E> E accept( ModeVisitor<E> visitor ) {
return visitor.visitOff();
}
},
standby {
public <E> E accept( ModeVisitor<E> visitor ) {
return visitor.visitStandby();
}
};
public abstract <E> E accept( ModeVisitor<E> visitor );
public interface ModeVisitor<E> {
E visitOn();
E visitOff();
E visitStandby();
}
}
Then you would implement something like the following:
public final class ModeColorVisitor implements ModeVisitor<Color> {
public Color visitOn() {
return getOnColor();
}
public Color visitOff() {
return getOffColor();
}
public Color visitStandby() {
return getStandbyColor();
}
}
You'd use it as follows:
return model.getMode().accept( new ModeColorVisitor() );
This is a lot more verbose but you'd immediately get a compile error if a new enum was declared.
In Eclipse you have to enable the setting "Enum type constant not covered in switch" (Window -> Preferences) and set it to Error level.
Throw an exception at the end of the method, but don't use default case.
public String method(Foo foo) {
switch(foo) {
case x: return "x";
case y: return "y";
}
throw new IllegalArgumentException();
}
Now if someone adds a new enum constant later, Eclipse will let them know they're missing a case. So don't ever use default unless you have a really good reason to.
I don't know why you get this error, but here is a suggestion: why not define the color in the enum itself? Then you can't accidentally forget to define a color for a new constant.
For example:
import java.awt.Color;
public class Test {
enum Mode
{
on (Color.BLACK),
off (Color.RED),
standby (Color.GREEN);
private final Color color;
Mode (Color aColor) { color = aColor; }
Color getColor() { return color; }
}
class Model
{
private Mode mode;
public Mode getMode () { return mode; }
}
private Model model;
public Color getColor()
{
return model.getMode().getColor();
}
}
btw, for comparison here is the original case, with compiler error.
import java.awt.Color;
public class Test {
enum Mode {on, off, standby;}
class Model
{
private Mode mode;
public Mode getMode () { return mode; }
}
private Model model;
public Color getColor()
{
switch(model.getMode()) {
case on:
return Color.BLACK;
case off:
return Color.RED;
case standby:
return Color.GREEN;
}
}
}
I'd say it's probably because model.getMode() could return null.
A nice way for this would be to add the default case to return some error value or throw exception and to use automated tests with jUnit, for example:
@Test
public void testEnum() {
for (Mode m : Mode.values()) {
m.foobar(); // The switch is separated to a method
// If you want to check the return value, do it (or if there's an exception in the
// default part, that's enough)
}
}
With automated tests in place, this will verify that foobar is handled for all enum values.
Create a default case that throws an exception:
throw new RuntimeException("this code should never be hit unless someone updated the enum");
... and that pretty much describes why Eclipse is complaining: while your switch may cover all enum cases today, someone could add a case and not recompile tomorrow.
Why does EclipseBuilder not recognize that this switch covers all possibilities (or does it cover them?) and stop warning me about needing a return type. Is there a way I can do what I want without adding methods to Mode?
It's not an issue in Eclipse, but rather the compiler, javac. All javac sees is that you don't have a return value in the case in which nothing is matched (the fact that you know you are matching all cases is irrelevant). You have to return something in the default case (or throw an exception).
Personally, I'd just throw some sort of exception.
Your problem is that you are trying to use the switch statement as an indicator that your enum is locked down.
The fact is that the 'switch' statement and the java compiler cannot recognize that you do not want to allow other options in your enum. The fact that you only want three options in your enum is completely separate from your design of the switch statement, which as noted by others should ALWAYS have a default statement. (In your case it should throw an exception, because it is an unhandled scenario.)
You should liberally sprinkle your enum with comments so that everyone knows not to touch it, and you should fix your switch statement to throw errors for unrecognized cases.
That way you've covered all the bases.
EDIT
On the matter of throwing a compiler error: that does not strictly make sense. You have an enum with three options, and a switch with three options. You want it to throw a compiler error if someone adds a value to the enum. Except that enums can be of any size, so it doesn't make sense to throw a compiler error if someone changes it. Furthermore, you are defining the size of your enum based on a switch statement which could be located in a completely different class.
The internal workings of Enum and Switch are completely separate and should remain uncoupled.
Since I can't just comment...
Always, Always, Always have a default case. You'd be surprised how "frequently" it would be hit (Less in Java than C, but still).
Having said that, what if I only want to handle on/off in my case? The semantic check you want javac to perform would flag that as an issue.
Nowadays (this answer is written several years after the original question), Eclipse allows the following configuration at Window -> Preferences -> Java -> Compiler -> Errors/Warnings -> Potential programming problems:
Incomplete switch cases
Signal even if default case exists
Related
A web service returns a huge XML and I need to access deeply nested fields of it. For example:
return wsObject.getFoo().getBar().getBaz().getInt()
The problem is that getFoo(), getBar(), getBaz() may all return null.
However, if I check for null in all cases, the code becomes very verbose and hard to read. Moreover, I may miss the checks for some of the fields.
if (wsObject.getFoo() == null) return -1;
if (wsObject.getFoo().getBar() == null) return -1;
// maybe also do something with wsObject.getFoo().getBar()
if (wsObject.getFoo().getBar().getBaz() == null) return -1;
return wsObject.getFoo().getBar().getBaz().getInt();
Is it acceptable to write
try {
return wsObject.getFoo().getBar().getBaz().getInt();
} catch (NullPointerException ignored) {
return -1;
}
or would that be considered an antipattern?
Catching NullPointerException is a really problematic thing to do since they can happen almost anywhere. It's very easy to get one from a bug, catch it by accident and continue as if everything is normal, thus hiding a real problem. It's so tricky to deal with so it's best to avoid altogether. (For example, think about auto-unboxing of a null Integer.)
I suggest that you use the Optional class instead. This is often the best approach when you want to work with values that are either present or absent.
Using that you could write your code like this:
public Optional<Integer> m(Ws wsObject) {
return Optional.ofNullable(wsObject.getFoo()) // Here you get Optional.empty() if the Foo is null
.map(f -> f.getBar()) // Here you transform the optional or get empty if the Bar is null
.map(b -> b.getBaz())
.map(b -> b.getInt());
// Add this if you want to return null instead of an empty optional if any is null
// .orElse(null);
// Or this if you want to throw an exception instead
// .orElseThrow(SomeApplicationException::new);
}
Why optional?
Using Optionals instead of null for values that might be absent makes that fact very visible and clear to readers, and the type system will make sure you don't accidentally forget about it.
You also get access to methods for working with such values more conveniently, like map and orElse.
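For example, a caller of the m method above could collapse the result back to the original -1 sentinel in a single line (a minimal sketch, assuming the same wsObject):
int result = m(wsObject).orElse(-1); // -1 only if some link in the chain was null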
Is absence valid or error?
But also think about whether it is a valid result for the intermediate methods to return null, or whether that is a sign of an error. If it is always an error, then it's probably better to throw an exception than to return a special value, or for the intermediate methods themselves to throw an exception.
Maybe more optionals?
If on the other hand absent values from the intermediate methods are valid, maybe you can switch to Optionals for them also?
Then you could use them like this:
public Optional<Integer> mo(Ws wsObject) {
return wsObject.getFoo()
.flatMap(f -> f.getBar())
.flatMap(b -> b.getBaz())
.flatMap(b -> b.getInt());
}
Why not optional?
The only reason I can think of for not using Optional is if this is in a really performance critical part of the code, and if garbage collection overhead turns out to be a problem. This is because a few Optional objects are allocated each time the code is executed, and the VM might not be able to optimize those away. In that case your original if-tests might be better.
I suggest considering Objects.requireNonNull(T obj, String message). You might build chains with a detailed message for each exception, like
requireNonNull(requireNonNull(requireNonNull(
wsObject, "wsObject is null")
.getFoo(), "getFoo() is null")
.getBar(), "getBar() is null");
I would suggest not using special return values like -1. That's not Java style; Java designed the exception mechanism to avoid this old-fashioned approach, which came from the C language.
Throwing NullPointerException is not the best option either. You could provide your own exception (making it checked to guarantee that it will be handled by the caller, or unchecked to process it in an easier way) or use a specific exception from the XML parser you are using.
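As a minimal sketch of that idea (the name MissingXmlFieldException is purely illustrative, not from the question's code), an unchecked variant could look like this:
public class MissingXmlFieldException extends RuntimeException {
    public MissingXmlFieldException(String message) {
        super(message);
    }
}
// Hypothetical usage: translate a missing element into a domain-specific error.
if (wsObject.getFoo() == null) {
    throw new MissingXmlFieldException("Foo element is missing");
}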
Assuming the class structure is indeed out of our control, as seems to be the case, I think catching the NPE as suggested in the question is indeed a reasonable solution, unless performance is a major concern. One small improvement might be to wrap the throw/catch logic to avoid clutter:
static <T> T get(Supplier<T> supplier, T defaultValue) {
try {
return supplier.get();
} catch (NullPointerException e) {
return defaultValue;
}
}
Now you can simply do:
return get(() -> wsObject.getFoo().getBar().getBaz().getInt(), -1);
As already pointed out by Tom in the comment,
the following statement disobeys the Law of Demeter:
wsObject.getFoo().getBar().getBaz().getInt()
What you want is the int, and you can get it from Foo. The Law of Demeter says: never talk to strangers. In your case you can hide the actual implementation under the hood of Foo and Bar.
Now you can create a method in Foo to fetch the int from Baz. Ultimately, Foo will have Bar, and through Bar we can access the int without exposing Baz directly to Foo. That way the null checks are divided among the different classes, and only the required attributes are shared between them.
My answer goes along almost the same lines as @janki's, but I would like to modify the code snippet slightly, as below:
if (wsObject.getFoo() != null && wsObject.getFoo().getBar() != null && wsObject.getFoo().getBar().getBaz() != null)
return wsObject.getFoo().getBar().getBaz().getInt();
else
return something or throw exception;
You can add a null check for wsObject as well, if there's any chance of that object being null.
You say that some methods "may return null" but do not say in what circumstances they return null. You say you catch the NullPointerException but you do not say why you catch it. This lack of information suggests you do not have a clear understanding of what exceptions are for and why they are superior to the alternative.
Consider a class method that is meant to perform an action, but the method can not guarantee it will perform the action, because of circumstances beyond its control (which is in fact the case for all methods in Java). We call that method and it returns. The code that calls that method needs to know whether it was successful. How can it know? How can it be structured to cope with the two possibilities, of success or failure?
Using exceptions, we can write methods that have success as a post condition. If the method returns, it was successful. If it throws an exception, it had failed. This is a big win for clarity. We can write code that clearly processes the normal, success case, and move all the error handling code into catch clauses. It often transpires that the details of how or why a method was unsuccessful are not important to the caller, so the same catch clause can be used for handling several types of failure. And it often happens that a method does not need to catch exceptions at all, but can just allow them to propagate to its caller. Exceptions due to program bugs are in that latter class; few methods can react appropriately when there is a bug.
So, those methods that return null.
Does a null value indicate a bug in your code? If it does, you should not be catching the exception at all. And your code should not be trying to second guess itself. Just write what is clear and concise on the assumption that it will work. Is a chain of method calls clear and concise? Then just use them.
Does a null value indicate invalid input to your program? If it does, a NullPointerException is not an appropriate exception to throw, because conventionally it is reserved for indicating bugs. You probably want to throw a custom exception derived from IllegalArgumentException (if you want an unchecked exception) or IOException (if you want a checked exception). Is your program required to provide detailed syntax error messages when there is invalid input? If so, checking each method for a null return value then throwing an appropriate diagnostic exception is the only thing you can do. If your program need not provide detailed diagnostics, chaining the method calls together, catching any NullPointerException and then throwing your custom exception is clearest and most concise.
One of the answers claims that the chained method calls violate the Law of Demeter and thus are bad. That claim is mistaken.
When it comes to program design, there are not really any absolute rules about what is good and what is bad. There are only heuristics: rules that are right much (even almost all) of the time. Part of the skill of programming is knowing when it is OK to break those kinds of rules. So a terse assertion that "this is against rule X" is not really an answer at all. Is this one of the situations where the rule should be broken?
The Law of Demeter is really a rule about API or class interface design. When designing classes, it is useful to have a hierarchy of abstractions. You have low level classes that use the language primitives to directly perform operations and represent objects in an abstraction that is higher level than the language primitives. You have medium level classes that delegate to the low level classes, and implement operations and representations at a higher level than the low level classes. You have high level classes that delegate to the medium level classes, and implement still higher level operations and abstractions. (I've talked about just three levels of abstraction here, but more are possible). This allows your code to express itself in terms of appropriate abstractions at each level, thereby hiding complexity. The rationale for the Law of Demeter is that if you have a chain of method calls, that suggests you have a high level class reaching in through a medium level class to deal directly with low level details, and therefore that your medium level class has not provided a medium-level abstract operation that the high level class needs. But it seems that is not the situation you have here: you did not design the classes in the chain of method calls, they are the result of some auto-generated XML serialization code (right?), and the chain of calls is not descending through an abstraction hierarchy because the deserialized XML is all at the same level of the abstraction hierarchy (right?).
As others have said, respecting the Law of Demeter is definitely part of the solution. Another part, wherever possible, is to change those chained methods so they cannot return null. You can avoid returning null by instead returning an empty String, an empty Collection, or some other dummy object that means or does whatever the caller would do with null.
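A minimal sketch of that idea, assuming you control the class in question (the Catalog and items names are purely illustrative):
import java.util.Collections;
import java.util.List;

class Catalog {
    private List<String> items; // may never have been populated

    public List<String> getItems() {
        // Callers can iterate safely; the empty list plays the role null would have played.
        return items != null ? items : Collections.emptyList();
    }
}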
To improve readability, you may want to use multiple variables, like
Foo theFoo;
Bar theBar;
Baz theBaz;
theFoo = wsObject.getFoo();
if ( theFoo == null ) {
// Exit.
}
theBar = theFoo.getBar();
if ( theBar == null ) {
// Exit.
}
theBaz = theBar.getBaz();
if ( theBaz == null ) {
// Exit.
}
return theBaz.getInt();
Don't catch NullPointerException. You don't know where it is coming from (I know it is not probable in your case but maybe something else threw it) and it is slow.
You want to access the specified field, and for this every field along the way has to be non-null. This is a perfectly valid reason to check every field. I would probably check them in one if and extract that into a method for readability. As others have already pointed out, returning -1 is very old-school, but I don't know whether you have a reason for it or not (e.g. talking to another system).
public int callService() {
...
if(isValid(wsObject)){
return wsObject.getFoo().getBar().getBaz().getInt();
}
return -1;
}
public boolean isValid(WsObject wsObject) {
if(wsObject.getFoo() != null &&
wsObject.getFoo().getBar() != null &&
wsObject.getFoo().getBar().getBaz() != null) {
return true;
}
return false;
}
Edit: It is debatable whether this disobeys the Law of Demeter, since the WsObject is probably only a data structure (check https://stackoverflow.com/a/26021695/1528880).
If you don't want to refactor the code and you can use Java 8, it is possible to use Method references.
A simple demo first (excuse the static inner classes)
public class JavaApplication14
{
static class Baz
{
private final int _int;
public Baz(int value){ _int = value; }
public int getInt(){ return _int; }
}
static class Bar
{
private final Baz _baz;
public Bar(Baz baz){ _baz = baz; }
public Baz getBaz(){ return _baz; }
}
static class Foo
{
private final Bar _bar;
public Foo(Bar bar){ _bar = bar; }
public Bar getBar(){ return _bar; }
}
static class WSObject
{
private final Foo _foo;
public WSObject(Foo foo){ _foo = foo; }
public Foo getFoo(){ return _foo; }
}
interface Getter<T, R>
{
R get(T value);
}
static class GetterResult<R>
{
public R result;
public int lastIndex;
}
/**
* @param args the command line arguments
*/
public static void main(String[] args)
{
WSObject wsObject = new WSObject(new Foo(new Bar(new Baz(241))));
WSObject wsObjectNull = new WSObject(new Foo(null));
GetterResult<Integer> intResult
= getterChain(wsObject, WSObject::getFoo, Foo::getBar, Bar::getBaz, Baz::getInt);
GetterResult<Integer> intResult2
= getterChain(wsObjectNull, WSObject::getFoo, Foo::getBar, Bar::getBaz, Baz::getInt);
System.out.println(intResult.result);
System.out.println(intResult.lastIndex);
System.out.println();
System.out.println(intResult2.result);
System.out.println(intResult2.lastIndex);
// TODO code application logic here
}
public static <R, V1, V2, V3, V4> GetterResult<R>
getterChain(V1 value, Getter<V1, V2> g1, Getter<V2, V3> g2, Getter<V3, V4> g3, Getter<V4, R> g4)
{
GetterResult<R> result = new GetterResult<>();
Object tmp = value;
if (tmp == null)
return result;
tmp = g1.get((V1)tmp);
result.lastIndex++;
if (tmp == null)
return result;
tmp = g2.get((V2)tmp);
result.lastIndex++;
if (tmp == null)
return result;
tmp = g3.get((V3)tmp);
result.lastIndex++;
if (tmp == null)
return result;
tmp = g4.get((V4)tmp);
result.lastIndex++;
result.result = (R)tmp;
return result;
}
}
Output
241
4
null
2
The interface Getter is just a functional interface, you may use any equivalent.
The GetterResult class, accessors stripped out for clarity, holds the result of the getter chain, if any, and the index of the last getter called.
The method getterChain is a simple, boilerplate piece of code, that can be generated automatically (or manually when needed).
I structured the code so that the repeating block is self evident.
This is not a perfect solution as you still need to define one overload of getterChain per number of getters.
I would refactor the code instead, but if you can't, and you find yourself using long getter chains often, you may consider building a class with overloads that take from 2 to, say, 10 getters.
I'd like to add an answer which focuses on the meaning of the error. A null pointer exception in itself doesn't provide any meaningful error, so I'd advise avoiding dealing with them directly.
There are thousands of cases where your code can go wrong: cannot connect to the database, IOException, network error... If you deal with them one by one (like the null check here), it would be too much of a hassle.
In the code:
wsObject.getFoo().getBar().getBaz().getInt();
Even when you know which field is null, you have no idea what went wrong. Maybe Bar is null, but is that expected? Or is it a data error? Think about the people who read your code.
Like xenteros's answer, I'd propose using a custom unchecked exception. For example, in this situation: Foo can be null (valid data), but Bar and Baz should never be null (invalid data).
The code can be re-written:
void myFunction()
{
try
{
if (wsObject.getFoo() == null)
{
throw new FooNotExistException();
}
return wsObject.getFoo().getBar().getBaz().getInt();
}
catch (Exception ex)
{
log.error(ex.getMessage(), ex); // Write a log entry to track whatever exception is happening
throw new OperationFailedException("The requested operation failed");
}
}
void Main()
{
try
{
myFunction();
}
catch(FooNotExistException)
{
// Show error: "Your foo does not exist, please check"
}
catch(OperationFailedException)
{
// Show error: "Operation failed, please contact our support"
}
}
NullPointerException is a runtime exception, so generally speaking it is not recommended to catch it, but to avoid it.
You will have to catch the exception wherever you want to call the method (or it will propagate up the stack). Nevertheless, if in your case you can keep working with a result of -1 and you are sure that it won't propagate because you are not using any of the "pieces" that may be null, then it seems right to me to catch it.
Edit:
I agree with the later answer from @xenteros: it will be better to throw your own exception instead of returning -1; you can call it InvalidXMLException, for instance.
Have been following this post since yesterday.
I have been commenting on and upvoting the comments which say that catching NPE is bad. Here is why I have been doing that.
package com.todelete;
public class Test {
public static void main(String[] args) {
Address address = new Address();
address.setSomeCrap(null);
Person person = new Person();
person.setAddress(address);
long startTime = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
try {
System.out.println(person.getAddress().getSomeCrap().getCrap());
} catch (NullPointerException npe) {
}
}
long endTime = System.currentTimeMillis();
System.out.println((endTime - startTime) / 1000F);
long startTime1 = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
if (person != null) {
Address address1 = person.getAddress();
if (address1 != null) {
SomeCrap someCrap2 = address1.getSomeCrap();
if (someCrap2 != null) {
System.out.println(someCrap2.getCrap());
}
}
}
}
long endTime1 = System.currentTimeMillis();
System.out.println((endTime1 - startTime1) / 1000F);
}
}
public class Person {
private Address address;
public Address getAddress() {
return address;
}
public void setAddress(Address address) {
this.address = address;
}
}
package com.todelete;
public class Address {
private SomeCrap someCrap;
public SomeCrap getSomeCrap() {
return someCrap;
}
public void setSomeCrap(SomeCrap someCrap) {
this.someCrap = someCrap;
}
}
package com.todelete;
public class SomeCrap {
private String crap;
public String getCrap() {
return crap;
}
public void setCrap(String crap) {
this.crap = crap;
}
}
Output
3.216
0.002
I see a clear winner here: having if checks is far less expensive than catching an exception. I have seen the Java 8 way of doing this; considering that 70% of current applications still run on Java 7, I am adding this answer.
Bottom line: for any mission-critical application, handling NPE this way is costly.
If efficiency is an issue then the 'catch' option should be considered.
If 'catch' cannot be used because it would propagate (as mentioned by 'SCouto') then use local variables to avoid multiple calls to methods getFoo(), getBar() and getBaz().
It's worth considering creating your own exception. Let's call it MyOperationFailedException. You can throw it instead of returning a value. The result will be the same - you'll quit the function, but you won't return the hard-coded value -1, which is a Java anti-pattern. In Java we use exceptions.
try {
return wsObject.getFoo().getBar().getBaz().getInt();
} catch (NullPointerException ignored) {
throw new MyOperationFailedException();
}
EDIT:
Following the discussion in the comments, let me add something to my previous thoughts. In this code there are two possibilities: either you accept null, or it is an error.
If it's an error and it occurs, you can debug your code using other structures for debugging purposes when breakpoints aren't enough.
If it's acceptable, you don't care about where this null appeared. If you do, you definitely shouldn't chain those requests.
The method you have is lengthy, but very readable. If I were a new developer coming to your code base I could see what you were doing fairly quickly. Most of the other answers (including catching the exception) don't seem to be making things more readable and some are making it less readable in my opinion.
Given that you likely don't have control over the generated source and assuming you truly just need to access a few deeply nested fields here and there then I would recommend wrapping each deeply nested access with a method.
private int getFooBarBazInt() {
if (wsObject.getFoo() == null) return -1;
if (wsObject.getFoo().getBar() == null) return -1;
if (wsObject.getFoo().getBar().getBaz() == null) return -1;
return wsObject.getFoo().getBar().getBaz().getInt();
}
If you find yourself writing a lot of these methods or if you find yourself tempted to make these public static methods then I would create a separate object model, nested how you would like, with only the fields you care about, and convert from the web services object model to your object model.
When you are communicating with a remote web service it is very typical to have a "remote domain" and an "application domain" and switch between the two. The remote domain is often limited by the web protocol (for example, you can't send helper methods back and forth in a pure RESTful service and deeply nested object models are common to avoid multiple API calls) and so not ideal for direct use in your client.
For example:
public static class MyFoo {
private int barBazInt;
public MyFoo(Foo foo) {
this.barBazInt = parseFooBarBazInt(foo);
}
public int getBarBazInt() {
return barBazInt;
}
private int parseFooBarBazInt(Foo foo) {
if (foo == null) return -1;
if (foo.getBar() == null) return -1;
if (foo.getBar().getBaz() == null) return -1;
return foo.getBar().getBaz().getInt();
}
}
return wsObject.getFooBarBazInt();
By applying the Law of Demeter:
class WsObject
{
FooObject foo;
..
Integer getFooBarBazInt()
{
if(foo != null) return foo.getBarBazInt();
else return null;
}
}
class FooObject
{
BarObject bar;
..
Integer getBarBazInt()
{
if(bar != null) return bar.getBazInt();
else return null;
}
}
class BarObject
{
BazObject baz;
..
Integer getBazInt()
{
if(baz != null) return baz.getInt();
else return null;
}
}
class BazObject
{
Integer myInt;
..
Integer getInt()
{
return myInt;
}
}
Giving an answer which seems different from all the others.
I recommend you check for null in ifs.
Reason:
We should not leave a single chance for our program to crash. A NullPointerException is generated by the system, and the behaviour of system-generated exceptions cannot be predicted. You should not leave your program in the hands of the system when you already have a way of handling it on your own. Put the exception-handling mechanism in place for extra safety.
To make your code easy to read, try this for checking the conditions:
if (wsObject.getFoo() == null || wsObject.getFoo().getBar() == null || wsObject.getFoo().getBar().getBaz() == null)
return -1;
else
return wsObject.getFoo().getBar().getBaz().getInt();
EDIT:
Here you should store the values wsObject.getFoo(), wsObject.getFoo().getBar(), and wsObject.getFoo().getBar().getBaz() in variables. I am not doing it because I don't know the return types of those functions.
Any suggestions will be appreciated!
I wrote a class called Snag which lets you define a path to navigate through a tree of objects. Here is an example of its use:
Snag<Car, String> ENGINE_NAME = Snag.createForAndReturn(Car.class, String.class).toGet("engine.name").andReturnNullIfMissing();
Meaning that the instance ENGINE_NAME would effectively call Car?.getEngine()?.getName() on the instance passed to it, and return null if any reference returned null:
final String name = ENGINE_NAME.get(firstCar);
It's not published on Maven but if anyone finds this useful it's here (with no warranty of course!)
It's a bit basic but it seems to do the job. Obviously it has been made more obsolete by more recent versions of Java and other JVM languages that support safe navigation or Optional.
Using blocks of code with switch or if is a common thing when checking for events. It can be clean code when made simple, but still seems to have more lines than needed, and could be simplified using lambdas.
Block with if:
if(action == ACTION_1){
doAction1();
} else if(action == ACTION_2){
doAction2();
} else {
doDefaultAction();
}
Block with switch:
switch(action){
case ACTION_1:
doAction1();
break;
case ACTION_2:
doAction2();
break;
default:
doDefaultAction();
}
Block with lambdas using the utility class With below:
with(action)
.when(ACTION_1, this::doAction1)
.when(ACTION_2, this::doAction2)
.byDefault(this::doDefaultAction)
Using lambdas means less code, but the question is: is it easier to read than the others? Easier to maintain? Regarding performance, lambdas are the worst, but for cases where performance is not important the lambda version is shorter than the switch/if blocks.
So, how do you see it? Maybe there is a shorter Kotlin way than this; I try to focus on Java only. I love Kotlin, but its compilation is still too slow for my projects.
A similar utility class could be used when the block must return a specific value.
FYI, the class for the lambdas is here, I didn't check for errors, just made it quickly for this example:
public class With<T> {
private final T id;
private boolean actionFound;
private With(T id) {
this.id = id;
}
public static <T> With<T> with(T id) {
return new With<>(id);
}
public With<T> when(T expectedId, Action action) {
if (!actionFound && id == expectedId) {
actionFound = true;
action.execute();
}
return this;
}
public void byDefault(Action action) {
if (!actionFound) {
action.execute();
}
}
@FunctionalInterface
interface Action {
void execute();
}
}
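As a rough sketch of the returning variant mentioned above (the WithResult name, generics and Supplier-based API are my own assumptions, not part of the original class):
import java.util.function.Supplier;

public class WithResult<T, R> {
    private final T id;
    private R result;
    private boolean resultFound;

    private WithResult(T id) {
        this.id = id;
    }

    public static <T, R> WithResult<T, R> withResult(T id) {
        return new WithResult<>(id);
    }

    public WithResult<T, R> when(T expectedId, Supplier<R> supplier) {
        // Same reference-equality check as the original when(...)
        if (!resultFound && id == expectedId) {
            resultFound = true;
            result = supplier.get();
        }
        return this;
    }

    public R byDefault(Supplier<R> supplier) {
        return resultFound ? result : supplier.get();
    }
}
Hypothetical usage, mirroring the earlier example (assuming the action constants are of some reference type ActionId):
String label = WithResult.<ActionId, String>withResult(action)
        .when(ACTION_1, () -> "one")
        .when(ACTION_2, () -> "two")
        .byDefault(() -> "default");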
As a couple of others have said, replacing the switch with chained method calls is less efficient. Depending on your use case, it might still be worth using your implementation.
Funnily enough, Oracle is actually planning to implement lambdas within switch statements, as seen in this recent JEP.
Example:
String formatted = switch (s) {
case null -> "(null)";
case "" -> "(empty)";
default -> s;
};
The switch is more flexible in that you can call functions with varying numbers of arguments, or call more than one function. You can also more easily denote when two cases lead to the same action. The fact that it's faster is just a bonus.
So in that sense I'm not sure what your With class is really adding.
However, switch has a limited number of types that it can work with. Perhaps your With class would prove to be more useful if you were to pass it predicates rather than performing simple reference equality, for example:
public With<T> when(Predicate<T> expected, Action action) {
if (!actionFound && expected.test(id)) {
actionFound = true;
action.execute();
}
return this;
}
Sample usage:
final String test = "test";
with(test)
.when(String::isEmpty, this::doAction1)
.when(s -> s.length() == 3, this::doAction2)
.byDefault(this::doDefaultAction);
replace switch with lambdas. Worth it?
No.
Because in an OO language the replacemenst for a switch or an if/else cascade is polymorphism, not "fluent API".
One option is to declare a static final Map<T, Action> EXPECTED_ID_TO_ACTION. Then you can just call EXPECTED_ID_TO_ACTION.getOrDefault(actionId, DEFAULT_ACTION).execute(), turning an ugly switch or multiple ifs into a one-liner.
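A minimal sketch of that map-based dispatch (Action is assumed to be the same functional interface used in the With class above; the string ids are illustrative):
import java.util.HashMap;
import java.util.Map;

public class ActionDispatcher {
    private static final Map<String, Action> EXPECTED_ID_TO_ACTION = new HashMap<>();
    private static final Action DEFAULT_ACTION = () -> System.out.println("default action");

    static {
        EXPECTED_ID_TO_ACTION.put("ACTION_1", () -> System.out.println("action 1"));
        EXPECTED_ID_TO_ACTION.put("ACTION_2", () -> System.out.println("action 2"));
    }

    public static void dispatch(String actionId) {
        EXPECTED_ID_TO_ACTION.getOrDefault(actionId, DEFAULT_ACTION).execute();
    }
}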
Our project's back end is a Java 8 Spring Boot application. Spring Boot allows you to do some things really easily, e.g. request validation:
class ProjectRequestDto {
@NotNull(message = "{NotNull.DotProjectRequest.id}")
@NotEmpty(message = "{NotEmpty.DotProjectRequest.id}")
private String id;
}
When this constraint is not met, Spring (Spring Boot?) actually throws a validation exception; we catch it somewhere in the application and construct a 404 (Bad Request) response for our application.
Now, given this fact, we kinda followed the same philosophy throughout our application, that is, on a deeper layer of the application we might have something like:
class ProjectService {
DbProject getProject(String id) throws NotFoundException {
DbProject p = ... // some hibernate code
if(p == null) {
throw new NotFoundException();
}
return p;
}
}
And again we catch this exception on a higher level, and construct another 404 for the client.
Now, this is causing a few problems:
The most important one: Our error tracing stops being useful, we cannot differentiate (easily) when the exception is important, because they happen ALL the time, so if the service suddenly starts throwing errors we would not notice until it is too late.
A big amount of useless logging: on login requests, for example, a user might mistype his password and we log this. And as a minor point: our analytics cannot help us determine what we are actually doing wrong; we see a lot of 4xx's, but that is what we expect.
Exceptions are costly; gathering the stack trace is a resource-intensive task. This is a minor point at the moment, but as the service scales up it would become more of a problem.
I think the solution is quite clear: we need to make an architectural change so that exceptions are not part of our normal data flow. However, this is a big change and we are short on time, so we plan to migrate over time; the problem remains for the short term.
Now, to my actual question: when I asked one of our architects, he suggested the use of monads (as a temporary solution, of course), so we don't modify our architecture but tackle the most contaminating endpoints (e.g. wrong login) in the short term. However, I'm struggling with the monad paradigm overall, and even more so in Java. I really have no idea how to apply it to our project; could you help me with this? Some code snippets would be really good.
TL;DR: If you take a generic Spring Boot application that throws errors as a part of its data flow, how can you apply the monad pattern to avoid logging an unnecessary amount of data and temporarily fix this errors-as-part-of-data-flow architecture?
The standard monadic approach to exception handling is essentially to wrap your result in a type that is either a successful result or an error. It's similar to the Optional type, though here you have an error value instead of an empty value.
In Java the simplest possible implementation is something like the following:
public interface Try<T> {
<U> Try<U> flatMap(Function<T, Try<U>> f);
class Success<T> implements Try<T> {
public final T value;
public Success(T value) {
this.value = value;
}
@Override
public <U> Try<U> flatMap(Function<T, Try<U>> f) {
return f.apply(value);
}
}
class Fail<T> implements Try<T> {
// Alternatively use Exception or Throwable instead of String.
public final String error;
public Fail(String error) {
this.error = error;
}
@Override
public <U> Try<U> flatMap(Function<T, Try<U>> f) {
return (Try<U>)this;
}
}
}
(with obvious implementations for equals, hashCode, toString)
Where you previously had operations that would either return a result of type T or throw an exception, they would return a result of Try<T> (which would either be a Success<T> or a Fail<T>), and would not throw, e.g.:
class Test {
public static void main(String[] args) {
Try<String> r = ratio(2.0, 3.0).flatMap(Test::asString);
}
static Try<Double> ratio(double a, double b) {
if (b == 0) {
return new Try.Fail<Double>("Divide by zero");
} else {
return new Try.Success<Double>(a / b);
}
}
static Try<String> asString(double d) {
if (Double.isNaN(d)) {
return new Try.Fail<String>("NaN");
} else {
return new Try.Success<String>(Double.toString(d));
}
}
}
I.e. instead of throwing an exception you return a Fail<T> value which wraps the error. You can then compose operations which might fail using the flatMap method. It should be clear that once an error occurs it will short-circuit any subsequent operations - in the above example if ratio returns a Fail then asString doesn't get called and the error propagates directly through to the final result r.
Taking your example, under this approach it would look like this:
class ProjectService {
Try<DbProject> getProject(String id) {
DbProject p = ... // some hibernate code
if(p == null) {
return new Try.Fail<DbProject>("Failed to create DbProject");
}
return new Try.Success<DbProject>(p);
}
}
The advantage over raw exceptions is that it's a bit more composable and allows you, for example, to map (e.g. Stream.map) a fail-able function over a collection of values and end up with a collection of Fails and Successes. If you were using exceptions, then the first exception would fail the entire operation and you would lose all results.
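To illustrate that point, here is a small sketch reusing the ratio method from the demo above (assuming the usual java.util.stream imports):
List<Try<Double>> ratios = Stream.of(3.0, 0.0, 5.0)
        .map(b -> ratio(15.0, b))
        .collect(Collectors.toList());
// Every element is processed: ratios holds Success(5.0), Fail("Divide by zero"), Success(3.0).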
One downside is that you have to use Try return types all the way down your call stack (somewhat like checked exceptions). Another is that since Java doesn't have built-in monad support (à la Haskell and Scala), the flatMap'ing can get slightly verbose. For example, something like:
try {
A a = f(x);
B b = g(a);
C c = h(b);
} catch (...
where f, g, h might throw, becomes instead:
Try<C> c = f(x).flatMap(a -> g(a))
.flatMap(b -> h(b));
You can generalise the above implementation by making the error type a generic parameter E (instead of String), so it then becomes Try<T, E>. Whether this is useful depends on your requirements - I've never needed it.
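A rough sketch of what that signature change looks like (Success and Fail would gain the extra type parameter in the same way):
public interface Try<T, E> {
    <U> Try<U, E> flatMap(Function<T, Try<U, E>> f);
}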
I have a more fully-implemented version here, alternatively the Javaslang and FunctionalJava libraries offer their own variants.
Eclipse forces me to use a default case for any switch, including those listing all declared enum values, allegedly because of the language specification [1]. This is unfortunate because Android Studio, in which the project is developed in parallel, does not, and would naturally warn me about all switches which become incomplete if the enum is ever changed. While I would prefer the latter behaviour, because the former actually makes enum changes more error-prone (see the example below), I am in no position to choose, so I need to find out how to do this right for both. I suppose that if there is a place in the code which should never be reachable under any circumstances, but removing the line is still not an option, throwing an Error seems like the natural thing to do there. But which one? Is there a generally accepted subclass for such a scenario (perhaps extending to other "forced" unreachable places)? Or is it acceptable to simply throw a quick-and-dirty new Error("enum name") just for the sake of it, instead of writing my own just to be never used?
In the example:
public static enum Color {
RED,
GREEN,
BLUE;
@Override
public String toString() {
switch(this) {
case RED:
return "Red";
case GREEN:
return "Green";
case BLUE:
return "Blue";
default:
/* never reached */
throw new ThisCanNeverHappenError();
}
}
}
adding WHITE to the enum makes this switch and possibly many more throughout the code silent sources of nasty errors as flow control finds them to be just fine.
There was a LONG discussion of this in the Eclipse Bugzilla/Bug 374605.
The end result is that this is a configurable warning and can be disabled.
Change the dropdown from Warning to Ignore
You should not throw an Error. A better exception to use is IllegalStateException:
switch(this) {
case RED:
return "Red";
case GREEN:
return "Green";
case BLUE:
return "Blue";
default:
throw new IllegalStateException("Unexpected enum value: " + this.name());
}
On a different note, you shouldn't use a switch statement there anyway. Add a field to the enum. Also note that enums are always static, so you can remove that keyword.
public enum Color {
RED ("Red"),
GREEN("Green"),
BLUE ("Blue");
private final String displayText;
private Color(String displayText) {
this.displayText = displayText;
}
public String getDisplayText() {
return this.displayText;
}
@Override
public String toString() {
return this.displayText;
}
}
There is no need for this switch (and the corresponding handling of the default case) if you add the "toString" value as a parameter of the enum instances:
public static enum Color {
RED("Red"),
GREEN("Green"),
BLUE("Blue");
private final String name;
private Color(String name) {
this.name = name;
}
@Override
public String toString() {
return name;
}
}
Throwing Errors is not a very wise move - most exception-handling code passes them through, they end up in strange places, and they might make your process unusable.
I would recommend IllegalStateException.
You are making an assertion: /* never reached */ - if that assertion is violated, you should probably throw an AssertionError.
The alternative recommendation of throwing an IllegalStateException is not really appropriate - according to the javadoc:
Signals that a method has been invoked at an illegal or inappropriate time
which is not really the case here.
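Applied to the toString() switch from the question, that advice would look something like this (a sketch; the message text is just an illustration):
@Override
public String toString() {
    switch (this) {
        case RED:
            return "Red";
        case GREEN:
            return "Green";
        case BLUE:
            return "Blue";
        default:
            /* never reached */
            throw new AssertionError("Unhandled Color: " + this);
    }
}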
Suppose I have an enum Color with 2 possible values: RED and BLUE:
public enum Color {
RED,
BLUE
}
Now suppose I have a switch statement for this enum where I have code for both possible values:
Color color = getColor(); // a method which returns a value of enum "Color"
switch (color) {
case RED:
...
break;
case BLUE:
...
break;
default:
break;
}
Since I have a code block for both possible values of the enum, what is the use of default in the above code?
Should I throw an exception if the code somehow reaches the default block like this?
Color color = getColor(); // a method which returns a value of enum "Color"
switch (color) {
case RED:
...
break;
case BLUE:
...
break;
default:
throw new IllegalArgumentException("This should not have happened");
}
It is good practice to throw an Exception as you have shown in the second example. You improve the maintainability of your code by failing fast.
In this case it would mean if you later (perhaps years later) add an enum value and it reaches the switch statement you will immediately discover the error.
If the default case were not there, the code would perhaps run through even with the new enum value and could possibly show undesired behavior.
The other answers are correct in saying that you should implement a default branch that throws an exception, in case a new value gets added to your enum in the future. However, I would go one step further and question why you're even using a switch statement in the first place.
Unlike languages like C++ and C#, Java represents Enum values as actual objects, which means that you can leverage object-oriented programming. Let's say that the purpose of your method is to provide an RGB value for each color:
switch (color)
case RED:
return "#ff0000";
...
Well, arguably, if you want each color to have an RGB value, you should include that as part of its description:
public enum Color
{
RED("#FF0000"),
BLUE("#0000FF");
String rgb;
Color(String rgb) {
this.rgb = rgb;
}
public String getRgb() { return this.rgb; }
}
That way, if you add a new color later, you're pretty much forced to provide an RGB value. It's even more fail-fast than the other approach, because you'll fail at compile-time rather than run-time.
Note that you can do even more complicated things if you need to, including having each color provide its own custom implementation of an abstract method. Enums in Java are really powerful and object-oriented, and in most cases I've found I can avoid needing to switch on them in the first place.
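A brief sketch of that last idea (the describe() method and its return values are illustrative): each constant supplies its own implementation, and adding a new constant without one is a compile-time error.
public enum Color {
    RED {
        @Override
        public String describe() { return "warm"; }
    },
    BLUE {
        @Override
        public String describe() { return "cool"; }
    };

    // Every constant must implement this, so forgetting a new one fails at compile time.
    public abstract String describe();
}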
Compile-time completeness of the switch cases doesn't guarantee runtime completeness.
Class with a switch statement compiled against an older version of enum may be executed with a newer enum version (with more values). That's a common case with library dependencies.
For reasons like these, the compiler considers the switch without default case incomplete.
In small programs there is no practical use for that, but think of a complex system that spreads across a large number of files and developers - if you define the enum in one file and use it in another, and later on someone adds a value to the enum without updating the switch statement, you'll find it very useful...
If you've covered all of the possibilities with your various cases and the default cannot happen, this is the classic use case for assertions:
Color color = getColor(); // a method which returns a value of enum "Color"
switch (color) {
case RED:
// ...
break;
case BLUE:
// ...
break;
default:
assert false; // This cannot happen
// or:
throw new AssertionError("Invalid Colors enum");
}
To satisfy IDEs and other static linters, I often leave the default case in as a no-op, along with a comment such as // Can't happen or // Unreachable
i.e., if the switch is doing the typical thing of handling all possible enum values, either explicitly or via fall-throughs, then reaching the default case is probably a programmer error.
Depending on the application, I sometimes put an assertion in the case to guard against programmer error during development. But this has limited value in shipping code (unless you ship with assertions enabled.)
Again, depending on the situation I might be convinced to throw an Error, as this is really an unrecoverable situation -- nothing the user can do will correct what is probably programmer error.
Yes, you should do it. You may change the enum but forget to change the switch; in the future that will lead to mistakes. I think that throw new IllegalArgumentException(msg) is good practice.
When there are many enum constants and you need to handle only a few cases, the default will handle the rest of the constants.
Also, an enum variable is a reference, and the reference may not have been set yet, i.e. it may be null. You may have to handle such cases too.
Yes, it is dead code until someone adds a value to the enum; having it makes your switch statement follow the principle of 'fail fast' (https://en.wikipedia.org/wiki/Fail-fast).
This could relates to this question : How to ensure completeness in an enum switch at compile time?
Apart from the possible future extension of the enum, which was pointed out by many, some day someone may 'improve' your getColor() or override it in a derived class and let it return an invalid value. Of course a compiler should catch that, unless someone explicitly forces unsafe type casting...
But bad things just happen, and it's a good practice not to leave any unexpected else or default path unguarded.
I'm surprised nobody else mentioned this. You can cast an int to an enum and it won't throw just because the value is not one of the enumerated values. This means (among other things), the compiler cannot tell that all the enum values are in the switch.
Even if you write your code correctly, this really does come up when serializing objects that contain enums. A future version might add to the enum and your code choke on reading it back, or somebody looking to create mayhem may hexedit a new value in. Either way, running off the switch rarely does the right thing. So, we throw in default unless we know better.
Here is how I would handle it, aside from a null value, which would result in a NullPointerException that you can handle.
If Color color is not null, it has to be one of the singletons in enum Color; assigning it a reference to an object that is not one of them would cause a runtime error.
So my solution is to account for values that are not supported.
Test Run
Test.java
public class Test
{
public static void main (String [] args)
{
try { test_1(null); }
catch (NullPointerException e) { System.out.println ("NullPointerException"); }
try { test_2(null); }
catch (Exception e) { System.out.println(e.getMessage()); }
try { test_1(Color.Green); }
catch (Exception e) { System.out.println(e.getMessage()); }
}
public static String test_1 (Color color) throws Exception
{
String out = "";
switch (color) // NullPointerException expected
{
case Red:
out = color.getName();
break;
case Blue:
out = color.getName();
break;
default:
throw new UnsupportedArgumentException ("unsupported color: " + color.getName());
}
return out;
}
... or you can consider null as unsupported too
public static String test_2 (Color color) throws Exception
{
if (color == null) throw new UnsupportedArgumentException ("unsupported color: NULL");
return test_1(color);
}
}
Color.java
enum Color
{
Red("Red"), Blue("Blue"), Green("Green");
private final String name;
private Color(String n) { name = n; }
public String getName() { return name; }
}
UnsupportedArgumentException.java
class UnsupportedArgumentException extends Exception
{
private String message = null;
public UnsupportedArgumentException() { super(); }
public UnsupportedArgumentException (String message)
{
super(message);
this.message = message;
}
public UnsupportedArgumentException (Throwable cause) { super(cause); }
@Override public String toString() { return message; }
@Override public String getMessage() { return message; }
}
In this case, using an assertion in the default case is the best practice.