IDEA "never accessed" warning on increment operator ++ - java

IDEA shows a warning if I increment an int with the ++ or += operator.
It can only be fixed if I increment in an explicit way: int = int + 1.
Is it a bug or a feature?
EDIT: OK, it's a feature of IDEA. But it seems wrong to me. We obviously can't increment something without accessing its initial state. If we had operator overloading, we could think of ++ as a function taking i as its argument.
So, the question is: is it possible to change this behavior in IDEA?

It's a feature. You can customise which inspections show and what severity they have in IntelliJ IDEA by going to Project Settings - Inspections.
There is a huge list which you can turn on and off or customise.
This one fires because the variable is never used: even though you are incrementing it, it is never explicitly read by another statement.
And if you do i = i + 1 it still gives a warning, just on the new assignment (well, that's a new warning I see now).
The warning you are after is under Declaration redundancy - Unused Symbol.
You can configure what it checks, or tell it to ignore symbols carrying special annotations, but not how it behaves in your particular case.
I think it is still valid for the IDE to give that warning, as ++ and += only access the variable to assign it back to itself... what's the point if you are not then using it elsewhere?
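For example, a minimal snippet (class and variable names are made up for illustration) that triggers the inspection:
class CounterDemo {
    static void count(java.util.List<String> items) {
        int counter = 0;
        for (String item : items) {
            counter++; // IDEA flags this increment: the value is changed but never read
        }
        // Reading the value afterwards, e.g. System.out.println(counter), makes the warning go away.
    }
}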

It gives you the warning, as you are not reading the variable out, merely incrementing it.
It's a feature.

How to tell Java that a variable cannot possibly be null?

I have a program that basically looks like this:
boolean[] stuffNThings;
int state = 1;
for (String string : list) {
    switch (state) {
        case 1:
            if (/*condition*/) {
                // foo
                break;
            } else {
                stuffNThings = new boolean[/*size*/];
                state = 2;
            }
            // intentional fallthrough
        case 2:
            // bar
            stuffNThings[0] = true;
    }
}
As you, a human, can see, case 2 only ever happens when there was previously a state 1 and it switched to state 2 after initialising the array. But Eclipse and the Java compiler don't see this, because it looks like pretty complex logic to them. So Eclipse complains:
The local variable stuffNThings may not have been initialized.
And if I change "boolean[] stuffNThings;" to "boolean[] stuffNThings=null;", it switches to this error message:
Potential null pointer access: The variable stuffNThings may be null at this location.
I also can't initialise it at the top, because the size of the array is only determined after the final loop in state 1.
Java thinks that the array could be null there, but I know that it can't. Is there some way to tell Java this? Or am I definitely forced to put a useless null check around it? Adding that makes the code harder to understand, because it looks like there may be a case where the value doesn't actually get set to true.
Java thinks that the array could be null there, but I know that it can't.
Strictly speaking, Java thinks that the variable could be uninitialized. If it is not definitely initialized, the value should not be observable.
(Whether the variable is silently initialized to null or left in an indeterminate state is an implementation detail. The point is, the language says you shouldn't be allowed to see the value.)
But anyway, the solution is to initialize it to null. It is redundant, but there is no way to tell Java to "just trust me, it will be initialized".
In the variations where you are getting "Potential null pointer access" messages:
It is a warning, not an error.
You can ignore or suppress a warning. (If your correctness analysis is wrong then you may get NPE's as a result. But that's your choice.)
You can turn off some or all warnings with compiler switches.
You can suppress a specific warning with a @SuppressWarnings annotation (a sketch of this option follows the list below):
For Eclipse, use @SuppressWarnings("null").
For Android, use @SuppressWarnings("ConstantConditions").
Unfortunately, the warning tags are not fully standardized. However, a compiler should silently ignore a @SuppressWarnings tag that it doesn't recognize.
You may be able to restructure the code.
In your example, the code is using switch drop through. People seldom do that because it leads to code that is hard to understand. So, I'm not surprised that you can find edge-case examples involving drop-through where a compiler gets the NPE warnings a bit wrong.
Either way, you can easily avoid the need to do drop-through by restructuring your code. Copy the code in the case 2: case to the end of the case 1: case. Fixed. Move on.
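As a rough sketch of the suppression option above (the loop only mirrors the question's shape, and whether the warning actually fires depends on the compiler's null-analysis settings):
class Sketch {
    @SuppressWarnings("null") // Eclipse-specific tag; other tools may use a different one
    void process(java.util.List<String> list) {
        boolean[] stuffNThings = null;
        boolean firstPass = true;
        for (String s : list) {
            if (firstPass) {
                stuffNThings = new boolean[list.size()];
                firstPass = false;
            }
            stuffNThings[0] = true; // the analysis cannot prove this is safe, so it may warn here without the annotation
        }
    }
}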
Note the "possibly uninitialized" error is not the Java compiler being "stupid". There is a whole chapter of the JLS on the rules for definite assignment, etcetera. A Java compiler is not permitted to be smart about it, because that would mean that the same Java code would be legal or not legal, depending on the compiler implementation. That would be bad for code portability.
What we actually have here is a language design compromise. The language stops you from using variables that are (really) not initialized. But to do this, the "dumb" compiler must sometimes stop you using variables that you (the smart programmer) know will be initialized ... because the rules say that it should.
(The alternatives are worse: either no compile-time checks for uninitialized variables leading to hard crashes in unpredictable places, or checks that are different for different compilers.)
A distinct non-answer: when code is "so" complicated that an IDE / Java compiler doesn't "see it", then that is a good indication that your code is too complicated anyway. At least for me, what you said wasn't obvious. I had to read up and down repeatedly to convince myself that the statement given in the question is correct.
You have an if in a switch in a for. Clean code, and "single layer of abstraction" would tell you: not a good starting point.
Look at your code. What you have there is a state machine in disguise. Ask yourself whether it would be worth to refactor this on larger scale, for example by turning it into an explicit state machine of some sort.
Another less intrusive idea: use a List instead of an array. Then you can simply create an empty list, and add elements to that as needed.
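A rough sketch of that idea (the emptiness test stands in for the question's real condition; nothing needs to be sized or initialized up front):
class ListInsteadOfArray {
    static java.util.List<Boolean> collect(java.util.List<String> list) {
        java.util.List<Boolean> stuffNThings = new java.util.ArrayList<>();
        for (String string : list) {
            if (string.isEmpty()) {   // placeholder for the real "state 1" condition
                continue;             // foo
            }
            stuffNThings.add(true);   // bar: just grow the list as values arrive
        }
        return stuffNThings;
    }
}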
After just trying to execute the code regardless of Eclipse complaining, I noticed that it does indeed run without problems. So apparently it was just a warning being set to "error" level, despite not being critical.
There was a "configure problem severity" button, so I set the severity of "Potential null pointer access" to "warning" (and adjusted some other levels accordingly). Now Eclipse just marks it as warning and executes the code without complaining.
More understandable would be:
boolean[] stuffNThings = null; // still needs an initial value to satisfy definite assignment
boolean initialized = false;
for (String string : list) {
    if (!initialized) {
        if (!/*condition*/) {
            stuffNThings = new boolean[/*size*/];
            initialized = true;
        }
    }
    if (initialized) {
        // bar
        stuffNThings[0] = true;
    }
}
Two loops, one for the initialisation, and one for playing with the stuff might or might not be more clear.
It is easier on flow analysis (compared to a switch with fall-through).
Furthermore, instead of a boolean[], a BitSet might be used (as it is not fixed-size like an array).
BitSet stuffNThings = new BitSet(/*max size*/);
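Then the places that used array indexing become set/get calls, and bits that were never set simply read back as false. For example:
stuffNThings.set(0);                  // the equivalent of stuffNThings[0] = true
boolean first = stuffNThings.get(0);  // reads true; unset bits return false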

Why Does Java Require Variables to Be Initialized?

In Java, why doesn't the compiler simply assume that an uninitialised variable should have 0 as its value, like C does? Is this just better practice in general, or is there another reason that is particular to Java?
Because code that uses a variable that has not been initialized leads to unpredictable or unintended results. I guess the Java designers faced a lot of problems like this while programming in C (such as getting stuck in loops), so when they were developing Java they decided to get rid of this class of problem. That is part of why we call Java a robust language.
Also, assigning variables default initial values is not always safe. For example:
int i;         // uninitialized variable; suppose the compiler silently set it to 0
int j = 5 / i; // run-time error: division by zero
For most situations, using a variable before assigning it anything is a mistake, and by having the compiler explicitly consider it an error it helps catch these bugs very early in the programming process.
Saying that uninitialized variables contain zero removes this capability. My guess is that having the programmer type a bit more to explicitly assign variables to zero was found to be less costly than hunting down some rather tricky bugs at runtime.

What hack can I use to suppress an unused function warning?

Consider a private method which is called from JNI and not used otherwise, generating a compiler warning about an unused method:
private void someMethodCalledOnlyFromJNI() { // WARNING: method is never used
// ....
}
This is some legacy code in Java 1.4 - so no dice on @SuppressWarnings.
What hack would you use to suppress this compiler warning?
Edit: Obviously this is just a warning and it can easily be ignored. But, personally, I hate warnings in my code just as much as I don't want to see errors in it. AFAIC, my code should have 0 warnings. It might be an exaggeration, but I am very pedantic about this.
Just as an example, someone might see this function, not know it is used from JNI, and simply delete it.
Ignore it. It is a warning, after all - best option
use protected (and add a comment for the reason why) - bearable
Make a dummy method just above it and make the two call each other (again with comments) - ugly
configure the IDE not to show this warning at all (in Eclipse it is Window > Preferences > Java > Compiler > Errors/Warnings) - not preferable
As per your update: having 0 warnings is not a goal you should set. The number of warnings depends on the settings, so if you don't all have identically configured IDEs, this number will vary. And then you can add Checkstyle / PMD to report warnings as well - then you'll have even more. The reasonable behaviour is to have a warnings threshold.
If you don't want anyone to delete this method, just add a comment:
// This method is used by JNI. (Don't delete.)
Somewhere else in the class:
if (Boolean.FALSE.booleanValue()) {
    // prevents the warning for the unused private method which is used from JNI
    someMethodCalledOnlyFromJNI();
}
(can't use simple false because that results in dead code warning).
Either just ignore the warning, or declare it as protected instead. If you go for protected and want to prevent subclassing/overriding as well, then declare it final as well.
To start with, it's only a warning, so it should not be an issue for you.
You could either modify the code to remove that function, thus removing the problem.
Or just call it from somewhere at the start/end of your code and ignore the result. As long as it is not going to set up anything that will affect the rest of your program, you will be fine.
You can make it public. If it's legacy code, I am sure no one will complain :)

How to follow the origin of a value in Java?

I have a variable that very rarely gets an incorrect value. Since the system is quite complex, I'm having trouble tracing all the code paths that the value goes through - there are multiple threads involved, it can be saved and then loaded from a DB, and so on. I'm going to try to use a code graph generator to see if I can spot the problem by looking at the ways the setter can be called, but maybe there's some other technique. Perhaps wrapping the value with a class that traces the places and changes it goes through? I'm not sure the question is clear enough, but I'd appreciate input from somebody who has encountered such a situation.
[Edit] The problem is not easily reproducible and I can't catch it in a debugger. I'm looking for a static analysis or logging technique to help track down the issue.
[Edit 2] Just to make things clearer, the value I'm talking about is a timestamp represented as the number of milliseconds from the Unix epoch (01/01/1970) in a 64-bit long variable. At some unknown point the top 32 bits of the value are truncated generating completely incorrect (and unrecoverable) timestamps.
[Edit 3] OK, thanks to some of your suggestions and to a couple of hours of poring over the code, I found the culprit. The millisecond-based timestamp was converted into a second-based timestamp by dividing it by 1000 and stored in an int variable. At a later point in the code, the second-based timestamp (an int) was multiplied by 1000 and stored into a new long variable. Since both 1000 and the second-based timestamp were int values, the result of the multiplication was truncated before being converted to long. This was a subtle one; thanks to everyone who helped.
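For reference, a small self-contained sketch of that bug pattern - the multiplication happens in int arithmetic and overflows before the result is widened to long:
public class TimestampTruncation {
    public static void main(String[] args) {
        long millis = System.currentTimeMillis(); // needs more than 32 bits
        int seconds = (int) (millis / 1000);      // still fits in an int (until 2038), so it looks harmless

        long broken = seconds * 1000;             // int * int overflows first, THEN is widened to long
        long fixed = seconds * 1000L;             // force long arithmetic and the value survives

        System.out.println(broken);               // garbage: the high bits are gone
        System.out.println(fixed);                // the original timestamp, rounded down to the second
    }
}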
If you are using a setter and only a setter to set your value you can add these lines in order to track the thread and stack trace:
public void setTimestamp(long value) {
    if (log.isDebugEnabled()) {
        log.debug("Setting the value to " + value + ". Old value is " + this.timestamp);
        log.debug("Thread is " + Thread.currentThread().getName());
        log.debug("Stacktrace is", new Throwable()); // we could also iterate over Thread.currentThread().getStackTrace()
    }
    // check for a bad value (high 32 bits all zero)
    if ((value & 0xffffffff00000000L) == 0L) {
        log.warn("Danger Will Robinson", new IllegalValueException());
    }
    this.timestamp = value;
}
Also, go over the class that contains the field, and make sure that every reference to it is done via the setter (even in private/protected methods)
Edit
Perhaps FindBugs can help in terms of static analysis; I'll try to find the exact rule later.
The fact that 32 bits of the long get changed, rather than the whole value, suggests strongly that this is a threading problem (two threads updating the variable at the same time). Since Java does not guarantee atomic access to a long value, if two threads update it at the same time, it could end up with half the bits set one way and half the other. This means that the best way to approach the issue is from a threading point of view. Odds are that there is nothing setting the variable to a value that a static analysis tool will show you is incorrect; rather, the synchronization and locking strategy around this variable needs to be examined for potential holes.
As a quick fix, you could wrap that value in an AtomicLong.
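A minimal sketch of that quick fix (the class and field names are made up): AtomicLong guarantees that the 64-bit value is read and written atomically, so two threads can never leave behind half of each other's write. Declaring the field volatile would also make plain long reads and writes atomic, but AtomicLong additionally offers atomic read-modify-write operations.
import java.util.concurrent.atomic.AtomicLong;

class SharedValue {
    private final AtomicLong timestamp = new AtomicLong();

    public void setTimestamp(long value) {
        timestamp.set(value);   // atomic 64-bit write
    }

    public long getTimestamp() {
        return timestamp.get(); // atomic 64-bit read
    }
}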
I agree - if the value is only changed via a setter (no matter what the origin) - and it had better be - then the best way is to modify the setter to do the tracking for you (print a stack trace on every set, possibly only when the value being set matches a specific pattern, if that cuts down on the clutter).
Multithreaded programming is just hard, but there are IDE tools to help. If you have IntelliJ IDEA, you can use the Analyze Dataflow feature to work out where things get changed. It won't show you a live flow (it's a static analysis tool), but it can give you a great start.
Alternatively, you can use some Aspects and just print out the value of the variable everywhere, but the resulting debugging info will be too overwhelming to be that meaningful.
The solution is to avoid state shared between threads. Use immutable objects, and program functionally.
Two things:
First, to me, it smells as though some caller is treating their timestamp in an integer context, losing your high 32 bits. It may be, as Yishai surmised, threading-related, but I'd look first at the operations being performed. However, naturally, you need to assure that your value is being updated "atomically" - whether with an AtomicLong, as he suggested, or with some other mechanism.
That speculation aside, given that what you're losing is the high 32 bits, and you know it's milliseconds since the epoch, your setter can enforce validity: if the supplied value is less than the timestamp at program start, it's wrong, so reject it, and of course, print a stack trace.
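A sketch of that validation (class and field names are assumptions): any value whose top 32 bits were dropped will appear to be far earlier than the moment the program started, so the setter can reject it and print a stack trace showing the caller.
class TimestampHolder {
    private static final long PROGRAM_START = System.currentTimeMillis();

    private long timestamp;

    public void setTimestamp(long value) {
        if (value < PROGRAM_START) {
            // A truncated timestamp lands decades in the past; refuse it and show who passed it in.
            new Throwable("Rejected suspicious timestamp " + value).printStackTrace();
            return;
        }
        this.timestamp = value;
    }
}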
1) Supposing that foo is the name of your variable, you could add something like this to the setter method:
try {
    throw new Exception();
} catch (Exception e) {
    System.out.println("foo == " + foo);
    e.printStackTrace();
}
How well this will work depends on how frequently the setter is being called. If it's being called thousands of times over the run of your program, you might have trouble finding the bad value in all the stack traces. (I've used this before to troubleshoot a problem like yours. It worked for me.)
2) If you can run your app in a debugger and you can identify programmatically bad values for your variable, then you could set a breakpoint in the setter, conditional on whatever it is that makes the value bad. But this requires that you can write a test for badness, which maybe you can't do.
3) Since you said (in a subsequent edit) that the problem is the high 32 bits being zeroed, you can specifically test for that before printing your stack trace. That should cut down the amount of debugging output enough to be manageable.
In your question, you speak of a "variable" that has an incorrect value, and suggest that you could try "wrapping the value with a class". Perhaps I'm reading too much into your choice of words, but would like to see a bit more about the design.
Is the value in question a primitive? Is it a field of a large, complex object that is shared between threads? If it is a field of some object, is that object a DTO or does it implement domain-specific behavior?
In general, I'd agree with the previous comments re instrumenting the object of which the "variable" is a field, but more information about the nature and usage of this variable would help guide more precise suggestions.
Based on your description, I don't know if that means it's not feasible to actually debug the app in real time, but if it is, depending on your IDE there are a bunch of debugging options available.
I know that with Eclipse, you can set conditional breakpoints in the setter method for example. You can specify to suspend only when the value gets set to a specific value, and you can also filter by thread, in case you want to focus on a specific thread.
I would rather keep a breakpoint inside the setter. Eclipse allows you to do that.
There are some IDEs which allow you to halt the program (wait before executing the next instruction) if the value of a variable is changed.
IMO the best way to debug this type of problem is using a field modification breakpoint. (Especially if you're using reflection extensively)
I'm not sure how to do this in Eclipse, but in IntelliJ you can just right-click on the field and do an "add breakpoint".

Why does javac complain about not initialized variable?

For this Java code:
String var;
clazz.doSomething(var);
Why does the compiler report this error:
Variable 'var' might not have been initialized
I thought all variables or references were initialized to null. Why do you need to do:
String var = null;
??
Instance and class variables are initialized to null (or 0), but local variables are not.
See §4.12.5 of the JLS for a very detailed explanation which says basically the same thing:
Every variable in a program must have a value before its value is used:
Each class variable, instance variable, or array component is initialized with a default value when it is created:
[snipped out list of all default values]
Each method parameter is initialized to the corresponding argument value provided by the invoker of the method.
Each constructor parameter is initialized to the corresponding argument value provided by a class instance creation expression or explicit constructor invocation.
An exception-handler parameter is initialized to the thrown object representing the exception.
A local variable must be explicitly given a value before it is used, by either initialization or assignment, in a way that can be verified by the compiler using the rules for definite assignment.
It's because Java is being very helpful (as much as possible).
It will use this same logic to catch some very interesting edge-cases that you might have missed. For instance:
int x;
if (cond2)
    x = 2;
else if (cond3)
    x = 3;
System.out.println("X was:" + x);
This will fail because there was an else case that wasn't specified. The fact is, an else case here should absolutely be specified, even if it's just an error. (The same is true of a default: case in a switch statement.)
What you should take away from this, interestingly enough, is don't ever initialize your local variables until you figure out that you actually have to do so. If you are in the habit of always saying "int x=0;" you will prevent this fantastic "bad logic" detector from functioning. This error has saved me time more than once.
Ditto on Bill K. I add:
The Java compiler can protect you from hurting yourself by failing to set a variable before using it within a function. Thus it explicitly does NOT set a default value, as Bill K describes.
But when it comes to class variables, it would be very difficult for the compiler to do this for you. A class variable could be set by any function in the class. It would be very difficult for the compiler to determine all possible orders in which functions might be called. At the very least it would have to analyze all the classes in the system that call any function in this class. It might well have to examine the contents of any data files or database and somehow predict what inputs users will make. At best the task would be extremely complex, at worst impossible. So for class variables, it makes sense to provide a reliable default. That default is, basically, to fill the field with bits of zero, so you get null for references, zero for integers, false for booleans, etc.
As Bill says, you should definitely NOT get in the habit of automatically initializing variables when you declare them. Only initialize variables at declaration time if this really makes sense in the context of your program. Like, if 99% of the time you want x to be 42, but inside some IF condition you might discover that this is a special case and x should be 666, then fine, start out with "int x=42;" and inside the IF override this. But in the more normal case, where you figure out the value based on whatever conditions, don't initialize to an arbitrary number. Just fill it with the calculated value. Then if you make a logic error and fail to set a value under some combination of conditions, the compiler can tell you that you screwed up rather than the user.
PS I've seen a lot of lame programs that say things like:
HashMap myMap=new HashMap();
myMap=getBunchOfData();
Why create an object to initialize the variable when you know you are promptly going to throw this object away a millisecond later? That's just a waste of time.
Edit
To take a trivial example, suppose you wrote this:
int foo;
if (bar < 0)
    foo = 1;
else if (bar > 0)
    foo = 2;
processSomething(foo);
This will throw an error at compile time, because the compiler will notice that when bar==0, you never set foo, but then you try to use it.
But if you initialize foo to a dummy value, like
int foo = 0;
if (bar < 0)
    foo = 1;
else if (bar > 0)
    foo = 2;
processSomething(foo);
Then the compiler will see that no matter what the value of bar is, foo gets set to something, so it will not produce an error. If what you really want is for foo to be 0 when bar is 0, then this is fine. But if what really happened is that you meant one of the tests to be <= or >=, or you meant to include a final else for when bar == 0, then you've tricked the compiler into failing to detect your error. And by the way, that's why I think such a construct is poor coding style: not only can the compiler not be sure what you intended, but neither can a future maintenance programmer.
I like Bill K's point about letting the compiler work for you - I had fallen into initializing every automatic variable because it "seemed like the Java thing to do". I'd failed to understand that class variables (i.e. persistent things that constructors worry about) and automatic variables (some counter, etc.) are different, even though EVERYTHING is a class in Java.
So I went back and removed the initialization I'd been using, for example:
List<Thing> somethings = new ArrayList<Thing>();
somethings.add(somethingElse); // <--- this is completely unnecessary
Nice. I'd been getting a compiler warning for
List<Thing> somethings = new ArrayList();
and I'd thought the problem was lack of initialization. WRONG. The problem was I hadn't understood the rules and I needed the <Thing> identified in the "new", not any actual items of type <Thing> created.
(Next I need to learn how to put literal less-than and greater-than signs into HTML!)
I don't know the logic behind it, but local variables are not initialized to null, I guess to make your life easier. They could have done the same check for class variables if it were possible. It doesn't mean you have to have it initialized at the beginning. This is fine:
MyClass cls;
if (condition) {
    cls = something;
} else {
    cls = something_else;
}
Sure, if you've really got two lines on top of each other as you show - declare it, fill it - there's no need for a default constructor. But, for example, if you want to declare something once and use it several or many times, the default constructor or null declaration is relevant. Or is the pointer to an object so lightweight that it's better to allocate it over and over inside a loop, because the allocation of the pointer is so much less than the instantiation of the object? (Presumably there's a valid reason for a new object at each step of the loop.)
Bill IV
