I have a program that basically looks like this:
boolean[] stuffNThings;
int state = 1;
for (String string : list) {
    switch (state) {
        case 1:
            if (/*condition*/) {
                // foo
                break;
            } else {
                stuffNThings = new boolean[/*size*/];
                state = 2;
            }
            // intentional fallthrough
        case 2:
            // bar
            stuffNThings[0] = true;
    }
}
As you, a human, can see, case 2 only ever happens when there was previously a state 1 and it switched to state 2 after initialising the array. But Eclipse and the Java compiler don't see this, because it looks like pretty complex logic to them. So Eclipse complains:
The local variable stuffNThings may not have been initialized.
And if I change "boolean[] stuffNThings;" to "boolean[] stuffNThings=null;", it switches to this error message:
Potential null pointer access: The variable stuffNThings may be null at this location.
I also can't initialise it at the top, because the size of the array is only determined after the final loop in state 1.
Java thinks that the array could be null there, but I know that it can't. Is there some way to tell Java this? Or am I definitely forced to put a useless null check around it? Adding that makes the code harder to understand, because it looks like there may be a case where the value doesn't actually get set to true.
Java thinks that the array could be null there, but I know that it can't.
Strictly speaking, Java thinks that the variable could be uninitialized. If it is not definitely initialized, the value should not be observable.
(Whether the variable is silently initialized to null or left in an indeterminate state is an implementation detail. The point is, the language says you shouldn't be allowed to see the value.)
But anyway, the solution is to initialize it to null. It is redundant, but there is no way to tell Java to "just trust me, it will be initialized".
In the variations where you are getting "Potential null pointer access" messages:
It is a warning, not an error.
You can ignore or suppress a warning. (If your correctness analysis is wrong then you may get NPE's as a result. But that's your choice.)
You can turn off some or all warnings with compiler switches.
You can suppress a specific warning with a @SuppressWarnings annotation:
For Eclipse, use @SuppressWarnings("null").
For Android, use @SuppressWarnings("ConstantConditions").
Unfortunately, the warning tags are not fully standardized. However, a compiler should silently ignore a @SuppressWarnings annotation with a warning tag that it doesn't recognize.
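For example, a minimal sketch of the Eclipse variant, scoped to a single method (the method name and the stand-ins for the question's /*condition*/ and /*size*/ are made up for illustration):
import java.util.List;

class Example {
    @SuppressWarnings("null") // Eclipse's tag; other compilers ignore tags they don't know
    void fill(List<String> list) {
        boolean[] stuffNThings = null;
        int state = 1;
        for (String string : list) {
            switch (state) {
                case 1:
                    if (string.isEmpty()) { // hypothetical stand-in for /*condition*/
                        break;
                    }
                    stuffNThings = new boolean[string.length()]; // stand-in for /*size*/
                    state = 2;
                    // intentional fallthrough
                case 2:
                    // without the annotation, Eclipse may flag this as potential null access
                    stuffNThings[0] = true;
            }
        }
    }
}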
You may be able to restructure the code.
In your example, the code is using switch drop through. People seldom do that because it leads to code that is hard to understand. So, I'm not surprised that you can find edge-case examples involving drop-through where a compiler gets the NPE warnings a bit wrong.
Either way, you can easily avoid the need to do drop-through by restructuring your code. Copy the code in the case 2: case to the end of the case 1: case. Fixed. Move on.
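A sketch of that restructuring, keeping the question's placeholders and the null initialization suggested above:
boolean[] stuffNThings = null;
int state = 1;
for (String string : list) {
    switch (state) {
        case 1:
            if (/*condition*/) {
                // foo
                break;
            }
            stuffNThings = new boolean[/*size*/];
            state = 2;
            // bar, copied here from case 2, so no drop-through is needed
            stuffNThings[0] = true;
            break;
        case 2:
            // bar
            stuffNThings[0] = true;
            break;
    }
}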
Note the "possibly uninitialized" error is not the Java compiler being "stupid". There is a whole chapter of the JLS on the rules for definite assignment, etcetera. A Java compiler is not permitted to be smart about it, because that would mean that the same Java code would be legal or not legal, depending on the compiler implementation. That would be bad for code portability.
What we actually have here is a language design compromise. The language stops you from using variables that are (really) not initialized. But to do this, the "dumb" compiler must sometimes stop you using variables that you (the smart programmer) know will be initialized ... because the rules say that it should.
(The alternatives are worse: either no compile-time checks for uninitialized variables leading to hard crashes in unpredictable places, or checks that are different for different compilers.)
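To make the definite-assignment rules concrete, here is a small made-up example that every conforming compiler must reject, even though a human can see the variable is only read after it was assigned:
class DefiniteAssignment {
    public static void main(String[] args) {
        int y;
        boolean haveY = false;
        if (args.length > 0) {
            y = args.length;
            haveY = true;
        }
        if (haveY) {
            // error: "variable y might not have been initialized" -
            // the rules do not track the correlation between haveY and y
            System.out.println(y);
        }
    }
}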
A distinct non-answer: when code is "so" complicated that an IDE / java compiler doesn't "see it", then that is a good indication that your code is too complicated anyway. At least for me, it wasn't obvious what you said. I had to read up and down repeatedly to convince myself that the statement given in the question is correct.
You have an if in a switch in a for. Clean code, and "single layer of abstraction" would tell you: not a good starting point.
Look at your code. What you have there is a state machine in disguise. Ask yourself whether it would be worth to refactor this on larger scale, for example by turning it into an explicit state machine of some sort.
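For illustration, one possible shape of such a refactoring - a hedged sketch in which the names (Parser, looksLikeHeader, process) are invented, not taken from the question:
import java.util.List;

class Parser {
    private enum State { EXPECT_HEADER, READ_BODY }

    private State state = State.EXPECT_HEADER;

    void run(List<String> lines) {
        for (String line : lines) {
            switch (state) {
                case EXPECT_HEADER:
                    if (!looksLikeHeader(line)) {
                        state = State.READ_BODY;
                        process(line); // first body line is processed too
                    }
                    break;
                case READ_BODY:
                    process(line);
                    break;
            }
        }
    }

    private boolean looksLikeHeader(String line) { return line.startsWith("#"); } // hypothetical
    private void process(String line) { /* bar */ }
}
Each state's behaviour now lives in one obvious place.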
Another less intrusive idea: use a List instead of an array. Then you can simply create an empty list, and add elements to that as needed.
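A minimal sketch of that idea, assuming the values can simply be appended in order:
List<Boolean> stuffNThings = new ArrayList<>(); // can be created empty, up front
stuffNThings.add(true); // grows as needed; no /*size*/ required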
After just trying to execute the code regardless of Eclipse complaining, I noticed that it does indeed run without problems. So apparently it was just a warning being set to "error" level, despite not being critical.
There was a "configure problem severity" button, so I set the severity of "Potential null pointer access" to "warning" (and adjusted some other levels accordingly). Now Eclipse just marks it as warning and executes the code without complaining.
More understandable would be:
boolean[] stuffNThings = null; // the null initialization is still needed
boolean initialized = false;
for (String string : list) {
    if (!initialized) {
        if (!/*condition*/) {
            stuffNThings = new boolean[/*size*/];
            initialized = true;
        }
    }
    if (initialized) {
        // bar
        stuffNThings[0] = true;
    }
}
Two loops - one for the initialisation, and one for playing with the stuff - might or might not be clearer.
It is easier on flow analysis (compared to a switch with fall-through).
Furthermore, instead of a boolean[], a BitSet might be used too (it is not fixed in size the way an array is).
BitSet stuffNThings = new BitSet(/*max size*/);
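A short sketch of how it replaces the array accesses:
BitSet stuffNThings = new BitSet(); // no size needed; it grows on demand
stuffNThings.set(0);                // replaces stuffNThings[0] = true;
boolean first = stuffNThings.get(0);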
I'm attempting to understand what's happening in this bit of Java code, as its owner is no longer around, with a view to possibly fixing or simplifying it. I'm guessing these blocks had a lot more in them at some point and what's left in place was not cleaned up properly.
It seems all occurrences of orElse(false) don't set anything to false and can be removed.
Then the second removeDiscontinued method returns a boolean that I don't think is used anywhere. Is it just me, or is this written in a way that makes it hard to read?
I'm hesitant to remove anything from it, since I haven't used much of the syntax here, like orElse, Lazy, and Optional. Some help would be much appreciated.
private void removeDiscontinued(Optional<Map<String, JSONArrayCache>> dptCache, Lazy<Set<String>> availableTps) {
    dptCache.map(pubDpt -> removeDiscontinued(pubDpt.keySet(), availableTps)).orElse(false);
}

private boolean removeDiscontinued(Set<String> idList, Lazy<Set<String>> availableTps) {
    if (availableTps.get().size() > 0) {
        Optional.ofNullable(idList).map(trIds -> trIds.removeIf(id -> !availableTps.get().contains(id)))
                .orElse(false);
    }
    return true;
}
This code is indeed extremely silly. I know why - there's a somewhat common, extremely misguided movement around. This movement makes claims that are generally interpreted as 'write it 'functional' and then it is just better'.
That interpretation is obvious horse exhaust. It's just not true.
We can hold a debate on who is to blame for this - is it folks hearing the arguments / reading the blogposts and drawing the wrong conclusions, or is it the 'functional fanfolks' fanning the flames, so to speak, making ridiculous claims that do not hold up?
Point is: this code is using functional style when it is utterly inappropriate to do so, and it has turned into a right mess as a result. The code is definitely bad; the author of this code is not a great programmer, but perhaps most of the blame goes to the functional evangelists. At any rate, it's very difficult to read; no wonder you're having a hard time figuring out what this stuff does.
The fundamental issue
The fundamental issue is that this functional style strongly likes being a side-effect free process: You start with some data, then the functional pipeline (a chain of stream map, orElse, etc operations) produces some new result, and then you do something with that. Nothing within the pipeline should be changing anything, it's just all in service of calculating new things.
Both of your methods fail to do so properly - the return value of the 'pipeline' is ignored in both of them, it's all about the side effects.
You don't want this: The primary point of the pipelines is that they can skip steps, and will aggressively do so if they think they can, and the pipeline assumes no side-effects, so it makes wrong calls.
That orElse is not actually optional - it doesn't seem to do anything, except that it forces the pipeline to run; and the spec doesn't quite guarantee that it will, so in that sense this code is flat out broken, too.
These methods also take Optional as an argument type, which is completely wrong. Optional is okay as a return value for a functional pipeline (such as Stream's own max() etc. methods). It's debatable as a return value anywhere else, and it's flat out silly - a style error so bad you should configure your linter to aggressively flag it as not suitable for production code - when it shows up in a field declaration or as a method argument.
So get rid of that too.
Let's break down what these methods do
Both of your methods invoke map on an Optional. An Optional is either NONE, which is like null (as in, there is no value), or it is a SOME, which means there is exactly one value.
In these specific methods, that map operation more or less boils down to:
If the optional is NONE, do nothing, silently. Otherwise, perform the operation in the parens.
Thus, to get rid of the Optional in the argument of your first method, just remove it, and then update the calling code so that it decides what to do when there is no value, instead of this pair of methods deciding it (their decision: if an Optional.NONE is passed in, silently do nothing. "Silently do nothing" is an extremely stupid default behaviour mode, which is a large part of why Optional is not great). Clearly the caller has an Optional from somewhere - either it made one (with e.g. Optional.ofNullable, in which case undo that too), or it got one from elsewhere, for example from a stream operation that returned an Optional, in which case, replace:
Optional<Map<String, JSONArrayCache>> optional = ...;
removeDiscontinued(thatOptionalThing, availableTps);
with:
optional.map(v -> removeDiscontinued(v, availableTps));
or perhaps simply:
if (optional.isPresent()) {
    removeDiscontinued(optional.get(), availableTps);
} else {
    // code to run otherwise
}
If you don't see how it could be null, great! Optional is significantly worse than NullPointerException in many cases, and so it is here as well: You do NOT want your code to silently do nothing when some value is absent in a place where the programmer of said code wasn't aware of that possibility - an exception is vastly superior: You then know there is a problem, and the exception tells you where. In contrast to the 'silently do not do anything' approach, where it's much harder to tell something is off, and once you realize something is off, you have no idea where to look. Takes literally hundreds of times longer to find the problem.
Thus, then just go with:
removeDiscontinued(thatOptionalThing.get(), availableTps);
which will NPE if the unexpected happens, which is good.
The methods themselves
Get rid of those pipelines, functional is not the right approach here, as you're only interested in the side effects:
private void removeDiscontinued(Map<String, JSONArrayCache> dptCache, Lazy<Set<String>> availableTps) {
    Set<String> keys = dptCache.keySet();
    if (availableTps.get().size() > 0) {
        keys.removeIf(id -> !availableTps.get().contains(id));
    }
}
That's it - that's all you need, that's what that code does in a very weird, sloppy, borderline broken way.
Specifically:
That boolean return value is just a red herring - the author needed that code to return something so that they could use it as argument in their map operation. The value is completely meaningless. If a styleguide that promises: "Your code will be better if you write it using this style" ends up with extremely confusing pointless variables whose values are irrelevant, get rid of the style guide, I think.
The ofNullable wrap is pointless: That method is private and its only caller cannot possibly pass null there, unless dptCache is an instance of some bizarro broken implementation of the Map interface that deigns to return null when its keySet() method is invoked: If that's happening, definitely fix the problem at the source, don't work around it in your codebase, no sane java reader would expect .keySet to return null there. That ofNullable is just making this stuff hard to read, it doesn't do anything here.
Note that the if (availableTps.get().size() > 0) check is just an optimization. You can leave it out if you want. That optimization isn't going to have any impact unless that dptCache object is a large map (thousands of keys at least).
Are there any performance or memory differences between the two snippets below? I tried to profile them using visualvm (is that even the right tool for the job?) but didn't notice a difference, probably due to the code not really doing anything.
Does the compiler optimize both snippets down to the same bytecode? Is one preferable over the other for style reasons?
boolean valid = loadConfig();
if (valid) {
    // OK
} else {
    // Problem
}
versus
if (loadConfig()) {
    // OK
} else {
    // Problem
}
The real answer here: it doesn't even matter much what javap tells you about how the corresponding bytecode looks!
If that piece of code is executed "once", then the difference between those two options would be in the range of nanoseconds (if it exists at all).
If that piece of code is executed "zillions of times" (often enough to "matter"), then the JIT will kick in. And the JIT will optimize that bytecode into machine code, very much dependent on a lot of information gathered by the JIT at runtime.
Long story short: you are spending time on a detail so subtle that it doesn't matter in practical reality.
What matters in practical reality is the quality of your source code. In that sense: pick the option that "reads" best, given your context.
Given the comment: I think in the end this is (almost) a pure style question. With the first option it might be easier to trace information (assuming the variable isn't a boolean but something more complex). In that sense there is no "inherently" better version. Of course, option 2 is one line shorter and uses one variable less; and typically, when one option is as readable as the other and one of the two is shorter, I would prefer the shorter version.
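For example, a named variable can earn its keep when the condition is more complex; the calls here are hypothetical:
boolean configIsUsable = loadConfig() && schemaIsValid() && !cacheIsStale();
if (configIsUsable) {
    // OK
} else {
    // Problem
}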
If you are going to use the variable only once, then the compiler/optimizer will resolve away the explicit declaration anyway.
Another thing is code quality. There is a very similar rule in SonarQube that describes this case too:
Local Variables should not be declared and then immediately returned or thrown
Declaring a variable only to immediately return or throw it is a bad practice.
Some developers argue that the practice improves code readability, because it enables them to explicitly name what is being returned. However, this variable is an internal implementation detail that is not exposed to the callers of the method. The method name should be sufficient for callers to know exactly what will be returned.
https://jira.sonarsource.com/browse/RSPEC-1488
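A small sketch of the pattern the rule flags, next to the preferred form (loadConfig() stands in for any boolean-returning call):
boolean isValidFlagged() {
    boolean valid = loadConfig(); // declared and immediately returned: flagged
    return valid;
}

boolean isValidPreferred() {
    return loadConfig(); // return the expression directly
}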
I often find when debugging a program it is convenient, (although arguably bad practice) to insert a return statement inside a block of code. I might try something like this in Java ....
class Test {
    public static void main(String args[]) {
        System.out.println("hello world");
        return;
        System.out.println("i think this line might cause a problem");
    }
}
Of course, this yields the compiler error:
Test.java:7: unreachable statement
I could understand why a warning might be justified, as having unused code is bad practice. But I don't understand why this needs to generate an error.
Is this just Java trying to be a Nanny, or is there a good reason to make this a compiler error?
Because unreachable code is meaningless to the compiler. Whilst making code meaningful to people is both paramount and harder than making it meaningful to a compiler, the compiler is the essential consumer of code. The designers of Java take the viewpoint that code that is not meaningful to the compiler is an error. Their stance is that if you have some unreachable code, you have made a mistake that needs to be fixed.
There is a similar question here: Unreachable code: error or warning?, in which the author says "Personally I strongly feel it should be an error: if the programmer writes a piece of code, it should always be with the intention of actually running it in some scenario." Obviously the language designers of Java agree.
Whether unreachable code should prevent compilation is a question on which there will never be consensus. But this is why the Java designers did it.
A number of people in comments point out that there are many classes of unreachable code Java doesn't prevent compiling. If I understand the consequences of Gödel correctly, no compiler can possibly catch all classes of unreachable code.
Unit tests cannot catch every single bug. We don't use this as an argument against their value. Likewise a compiler can't catch all problematic code, but it is still valuable for it to prevent compilation of bad code when it can.
The Java language designers consider unreachable code an error. So preventing it compiling when possible is reasonable.
(Before you downvote: the question is not whether or not Java should have an unreachable statement compiler error. The question is why Java has an unreachable statement compiler error. Don't downvote me just because you think Java made the wrong design decision.)
There is no definitive reason why unreachable statements must not be allowed; other languages allow them without problems. For your specific need, this is the usual trick:
if (true) return;
It looks nonsensical, but anyone who reads the code will guess that it was done deliberately, not as a careless mistake that left the rest of the statements unreachable.
Java has a little bit of support for "conditional compilation":
http://java.sun.com/docs/books/jls/third_edition/html/statements.html#14.21
if (false) { x=3; }
does not result in a compile-time error. An optimizing compiler may realize that the statement x=3; will never be executed and may choose to omit the code for that statement from the generated class file, but the statement x=3; is not regarded as "unreachable" in the technical sense specified here.
The rationale for this differing treatment is to allow programmers to define "flag variables" such as:
static final boolean DEBUG = false;
and then write code such as:
if (DEBUG) { x=3; }
The idea is that it should be possible to change the value of DEBUG from false to true or from true to false and then compile the code correctly with no other changes to the program text.
It is Nanny.
I feel .Net got this one right - it raises a warning for unreachable code, but not an error. It is good to be warned about it, but I see no reason to prevent compilation (especially during debugging sessions where it is nice to throw a return in to bypass some code).
I only just noticed this question, and wanted to add my $.02 to this.
In case of Java, this is not actually an option. The "unreachable code" error doesn't come from the fact that JVM developers thought to protect developers from anything, or be extra vigilant, but from the requirements of the JVM specification.
Both the Java compiler and the JVM use what are called "stack maps" - definite information about all of the items on the stack, as allocated for the current method. The type of each and every slot of the stack must be known, so that a JVM instruction doesn't mistreat an item of one type as another type. This is mostly important for preventing a numeric value from ever being used as a pointer. It is possible, using Java assembly, to try to push/store a number, but then pop/load an object reference. However, the JVM will reject this code during class validation - that is, when the stack maps are being created and tested for consistency.
To verify the stack maps, the VM has to walk through all the code paths that exist in a method and make sure that no matter which code path is executed, the stack data for every instruction agrees with what any previous code has pushed/stored on the stack. So, in the simple case of:
Object a;
if (something) { a = new Object(); } else { a = new String(); }
System.out.println(a);
at line 3, the JVM will check that both branches of the if have stored into a (which is just local var #0) something that is compatible with Object (since that's how the code from line 3 onward will treat local var #0).
When the compiler gets to unreachable code, it doesn't quite know what state the stack might be in at that point, so it can't verify its state. It also can't quite compile the code at that point, as it can't keep track of the local variables either; so instead of leaving this ambiguity in the class file, it produces a fatal error.
Of course a simple condition like if (1<2) will fool it, but it's not really fooling - it's giving the compiler a potential branch that can lead to the code, so both the compiler and the VM can determine how the stack items can be used from there on.
P.S. I don't know what .NET does in this case, but I believe it will fail compilation as well. This normally will not be a problem for any machine code compilers (C, C++, Obj-C, etc.)
One of the goals of compilers is to rule out classes of errors. Some unreachable code is there by accident, it's nice that javac rules out that class of error at compile time.
For every rule that catches erroneous code, someone will want the compiler to accept it because they know what they're doing. That's the penalty of compiler checking, and getting the balance right is one of the trickier points of language design. Even with the strictest checking there's still an infinite number of programs that can be written, so things can't be that bad.
While I think this compiler error is a good thing, there is a way you can work around it.
Use a condition you know will be true:
public void myMethod() {
    someCodeHere();
    if (1 < 2) return; // compiler isn't smart enough to complain about this
    moreCodeHere();
}
The compiler is not smart enough to complain about that.
It is certainly a good thing that the compiler complains here; the more stringent the compiler is, the better, as long as it still lets you do what you need.
Usually the small price to pay is to comment the offending code out; the gain is that when your code compiles, it works. A general example is Haskell, which people scream about until they realize that their testing/debugging has been reduced to a short test of main. In Java I personally do almost no debugging, even while (in fact, on purpose) not being particularly attentive.
If the reason for allowing if (aBooleanVariable) return; someMoreCode; is to allow flag variables, then the fact that if (true) return; someMoreCode; does not generate a compile-time error seems like an inconsistency in the policy of generating the unreachable-code error, since the compiler 'knows' that true is not a flag (not a variable).
Two other ways which might be interesting, but don't apply to switching off part of a method's code as well as if (true) return:
Now, instead of saying if (true) return; you might want to say assert false and add -ea (or -ea:<packagename>..., or -ea:<classname>) to the JVM arguments. The good point is that this allows for some granularity, and requires no DEBUG flag in the code: the behaviour is switched by an extra argument at JVM invocation time, which is useful when the target is not the developer machine and recompiling and transferring bytecode takes time.
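A sketch of that variant (run with java -ea to make the assertion fire):
class DebugStop {
    public static void main(String[] args) {
        System.out.println("hello world");
        assert false : "debug stop"; // throws AssertionError when -ea is set
        // still reachable as far as the compiler is concerned, so no error:
        System.out.println("i think this line might cause a problem");
    }
}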
There is also the System.exit(0) way, but this might be overkill - if you put it in a JSP, it will terminate the application server.
Apart from that: Java is by design a 'nanny' language; I would rather use something native like C/C++ for more control.
I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
Is this problem something worth designing around? Is it better to go for the first or the second possibility?
Is the creation of a variable name something that would have an effect performance-wise?
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java ?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is OK. You can use contract annotations in your code to check such conditions. For instance, Parasoft JTest uses an annotation like @post $result != null and flags all methods without the annotation that use the return value without checking.
If the method can return null your code should always use the second variant, and check the return value. Only you can decide what to do if the return value is null, it might be ok, or you might want to log an error:
Object o = myObj.getThis();
if (null == o) {
    log.error("mymethod: Could not retrieve this");
} else {
    anObj.doThatWith(o);
}
Personally I dislike the one-liner code "design pattern", so I side by all those who say to keep your code readable. Although I saw much worse lines of code in existing projects similar to this:
someMap.put(
    someObject.getSomeThing().getSomeOtherThing().getKey(),
    someObject.getSomeThing().getSomeOtherThing());
I think no one would argue that this is the way to write maintainable code.
As for using annotations - unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. And without the IDE integration these do not have much benefit (apart from some extra documentation). However, I do recommend the assert facility. While it only helps at run time, it does help find most NPE causes, has no performance effect when assertions are disabled, and makes the assumptions your code makes clearer.
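For example, a minimal sketch using the names from the question:
Object o = myObj.getThis();
assert o != null : "getThis() returned null"; // checked only when run with -ea
anObj.doThatWith(o);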
If it were me I would change the code to your latter version but I would also add logging (maybe print) statements with a framework like log4j so if something did go wrong I could check the log files to see what was null.
For this Java code:
String var;
clazz.doSomething(var);
Why does the compiler report this error:
Variable 'var' might not have been initialized
I thought all variables or references were initialized to null. Why do you need to do:
String var = null;
??
Instance and class variables are initialized to null (or 0), but local variables are not.
See §4.12.5 of the JLS for a very detailed explanation which says basically the same thing:
Every variable in a program must have a value before its value is used:
Each class variable, instance variable, or array component is initialized with a default value when it is created:
[snipped out list of all default values]
Each method parameter is initialized to the corresponding argument value provided by the invoker of the method.
Each constructor parameter is initialized to the corresponding argument value provided by a class instance creation expression or explicit constructor invocation.
An exception-handler parameter is initialized to the thrown object representing the exception.
A local variable must be explicitly given a value before it is used, by either initialization or assignment, in a way that can be verified by the compiler using the rules for definite assignment.
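A small made-up example of the distinction:
class Defaults {
    static String classVar;  // class variable: default-initialized to null
    String instanceVar;      // instance variable: default-initialized to null

    void demo() {
        String local;
        System.out.println(classVar);    // fine: prints "null"
        System.out.println(instanceVar); // fine: prints "null"
        // System.out.println(local);    // compile error: might not have been initialized
    }
}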
It's because Java is being very helpful (as much as possible).
It will use this same logic to catch some very interesting edge-cases that you might have missed. For instance:
int x;
if (cond2)
    x = 2;
else if (cond3)
    x = 3;
System.out.println("X was:" + x);
This will fail to compile because there is an else case that wasn't specified. The fact is, an else case here should absolutely be specified, even if it's just an error (the same is true of a default: case in a switch statement).
What you should take away from this, interestingly enough, is don't ever initialize your local variables until you figure out that you actually have to do so. If you are in the habit of always saying "int x=0;" you will prevent this fantastic "bad logic" detector from functioning. This error has saved me time more than once.
Ditto on Bill K. I add:
The Java compiler can protect you from hurting yourself by failing to set a variable before using it within a function. Thus it explicitly does NOT set a default value, as Bill K describes.
But when it comes to class variables, it would be very difficult for the compiler to do this for you. A class variable could be set by any function in the class. It would be very difficult for the compiler to determine all possible orders in which functions might be called. At the very least it would have to analyze all the classes in the system that call any function in this class. It might well have to examine the contents of any data files or database and somehow predict what inputs users will make. At best the task would be extremely complex, at worst impossible. So for class variables, it makes sense to provide a reliable default. That default is, basically, to fill the field with bits of zero, so you get null for references, zero for integers, false for booleans, etc.
As Bill says, you should definitely NOT get in the habit of automatically initializing variables when you declare them. Only initialize variables at declaration time if this really make sense in the context of your program. Like, if 99% of the time you want x to be 42, but inside some IF condition you might discover that this is a special case and x should be 666, then fine, start out with "int x=42;" and inside the IF override this. But in the more normal case, where you figure out the value based on whatever conditions, don't initialize to an arbitrary number. Just fill it with the calculated value. Then if you make a logic error and fail to set a value under some combination of conditions, the compiler can tell you that you screwed up rather than the user.
PS I've seen a lot of lame programs that say things like:
HashMap myMap = new HashMap();
myMap = getBunchOfData();
Why create an object to initialize the variable when you know you are promptly going to throw this object away a millisecond later? That's just a waste of time.
Edit
To take a trivial example, suppose you wrote this:
int foo;
if (bar < 0)
    foo = 1;
else if (bar > 0)
    foo = 2;
processSomething(foo);
This will throw an error at compile time, because the compiler will notice that when bar==0, you never set foo, but then you try to use it.
But if you initialize foo to a dummy value, like
int foo = 0;
if (bar < 0)
    foo = 1;
else if (bar > 0)
    foo = 2;
processSomething(foo);
Then the compiler will see that no matter what the value of bar, foo gets set to something, so it will not produce an error. If what you really want is for foo to be 0 when bar is 0, then this is fine. But if what really happened is that you meant one of the tests to be <= or >=, or you meant to include a final else for when bar==0, then you've tricked the compiler into failing to detect your error. And by the way, that's why I think such a construct is poor coding style: not only can the compiler not be sure what you intended, but neither can a future maintenance programmer.
I like Bill K's point about letting the compiler work for you- I had fallen into initializing every automatic variable because it 'seemed like the Java thing to do'. I'd failed to understand that class variables (ie persistent things that constructors worry about) and automatic variables (some counter, etc) are different, even though EVERYTHING is a class in Java.
So I went back and removed the initialization I'd be using, for example
List<Thing> somethings = new ArrayList<Thing>();
somethings.add(somethingElse); // <--- this is completely unnecessary
Nice. I'd been getting a compiler warning for
List<Thing> somethings = new ArrayList();
and I'd thought the problem was lack of initialization. WRONG. The problem was I hadn't understood the rules and I needed the <Thing> identified in the "new", not any actual items of type <Thing> created.
(Next I need to learn how to put literal less-than and greater-than signs into HTML!)
I don't know the logic behind it, but local variables are not initialized for you. I guess it's to make your life easier; they could have done the same with class variables if it were possible. It doesn't mean you have to initialize it at the point of declaration. This is fine:
MyClass cls;
if (condition) {
    cls = something;
} else {
    cls = something_else;
}
Sure, if you've really got two lines right on top of each other as you show - declare it, fill it - there's no need for a default constructor. But, for example, if you want to declare something once and use it several or many times, the default constructor or null declaration is relevant. Or is the pointer to an object so lightweight that it's better to allocate it over and over inside a loop, because allocating the pointer costs so much less than instantiating the object? (Presumably there's a valid reason for a new object at each step of the loop.)
Bill IV