I am getting an error like "Avoid Literals In If Condition" in SonarQube, and I am unable to find a proper solution to it.
SingleWrapper singleWrapper = null;
:
:
singleWrapper = createWrapper();
:
private void wrap() {
    if (singleWrapper != null) { // Here I am getting the error.
        // do something
    }
}
I know this question seems to be a repeated one, but it is not, because the previous one was asked about a String.
Thanks for any help.
This is because your static code analysis tool detects null as a hard-coded literal, which, strictly speaking, is true.
The recommended fix is to declare a constant object like
final static Object NULL = null;
and use it like
if(singleWrapper != NULL)
But I still haven't met a developer who actually does this. In this case, I think you are fine ignoring the code check warning. That's my 2 cents.
The description for the PMD rule reads:
Avoid using hard coded literals in conditional statements, declare those as static variables or private members.
While in most cases it is relevant (you don't want to have arbitrary hard coded string or numerical literals), in this case it is (IMHO) a bit too zealous, for checking against null is so widely used that it should probably be ignored by this rule.
Since this rule comes from PMD (not SQ internal engine), you could ask for an upstream fix - or just remove it from your profile if it really bugs you.
Note that this rule is part of the Controversial Rules set.
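If you only want to silence it in one place rather than removing the rule from your profile, PMD also honours Java's @SuppressWarnings mechanism; a sketch, assuming the rule key is AvoidLiteralsInIfCondition and that your PMD/SonarQube setup respects annotation-based suppression:
@SuppressWarnings("PMD.AvoidLiteralsInIfCondition")   // "PMD" alone would suppress all PMD rules for this method
private void wrap() {
    if (singleWrapper != null) {
        // do something
    }
}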
I have a program that basically looks like this:
boolean[] stuffNThings;
int state = 1;
for (String string : list) {
    switch (state) {
    case 1:
        if (/*condition*/) {
            // foo
            break;
        } else {
            stuffNThings = new boolean[/*size*/];
            state = 2;
        }
        // intentional fallthrough
    case 2:
        // bar
        stuffNThings[0] = true;
    }
}
As you, a human, can see, case 2 only ever happens when there was previously a state 1 and it switched to state 2 after initialising the array. But Eclipse and the Java compiler don't see this, because it looks like pretty complex logic to them. So Eclipse complains:
The local variable stuffNThings may not have been initialized.
And if I change "boolean[] stuffNThings;" to "boolean[] stuffNThings=null;", it switches to this error message:
Potential null pointer access: The variable stuffNThings may be null at this location.
I also can't initialise it at the top, because the size of the array is only determined after the final loop in state 1.
Java thinks that the array could be null there, but I know that it can't. Is there some way to tell Java this? Or am I definitely forced to put a useless null check around it? Adding that makes the code harder to understand, because it looks like there may be a case where the value doesn't actually get set to true.
Java thinks that the array could be null there, but I know that it can't.
Strictly speaking, Java thinks that the variable could be uninitialized. If it is not definitely initialized, the value should not be observable.
(Whether the variable is silently initialized to null or left in an indeterminate state is an implementation detail. The point is, the language says you shouldn't be allowed to see the value.)
But anyway, the solution is to initialize it to null. It is redundant, but there is no way to tell Java to "just trust me, it will be initialized".
In the variations where you are getting "Potential null pointer access" messages:
It is a warning, not an error.
You can ignore or suppress a warning. (If your correctness analysis is wrong then you may get NPE's as a result. But that's your choice.)
You can turn off some or all warnings with compiler switches.
You can suppress a specific warning with a @SuppressWarnings annotation:
For Eclipse, use @SuppressWarnings("null").
For Android, use @SuppressWarnings("ConstantConditions").
Unfortunately, the warning tags are not fully standardized. However, a compiler should silently ignore a @SuppressWarnings for a warning tag that it doesn't recognize.
You may be able to restructure the code.
In your example, the code is using switch drop through. People seldom do that because it leads to code that is hard to understand. So, I'm not surprised that you can find edge-case examples involving drop-through where a compiler gets the NPE warnings a bit wrong.
Either way, you can easily avoid the need to do drop-through by restructuring your code. Copy the code in the case 2: case to the end of the case 1: case. Fixed. Move on.
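For illustration, here is one way the restructured loop could look (a sketch, keeping the question's /*condition*/ and /*size*/ placeholders, and not necessarily the exact restructuring meant above: the null check itself takes over the role of the state variable, which both the definite-assignment check and Eclipse's null analysis can follow):
boolean[] stuffNThings = null;
for (String string : list) {
    if (stuffNThings == null) {          // plays the role of "state == 1"
        if (/*condition*/) {
            // foo
            continue;                    // stay in "state 1" for the next element
        }
        stuffNThings = new boolean[/*size*/];
    }
    // bar (what used to be case 2, reached via fall-through)
    stuffNThings[0] = true;
}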
Note the "possibly uninitialized" error is not the Java compiler being "stupid". There is a whole chapter of the JLS on the rules for definite assignment, etcetera. A Java compiler is not permitted to be smart about it, because that would mean that the same Java code would be legal or not legal, depending on the compiler implementation. That would be bad for code portability.
What we actually have here is a language design compromise. The language stops you from using variables that are (really) not initialized. But to do this, the "dumb" compiler must sometimes stop you using variables that you (the smart programmer) know will be initialized ... because the rules say that it should.
(The alternatives are worse: either no compile-time checks for uninitialized variables leading to hard crashes in unpredictable places, or checks that are different for different compilers.)
A distinct non-answer: when code is "so" complicated that an IDE / java compiler doesn't "see it", then that is a good indication that your code is too complicated anyway. At least for me, it wasn't obvious what you said. I had to read up and down repeatedly to convince myself that the statement given in the question is correct.
You have an if in a switch in a for. Clean code, and "single layer of abstraction" would tell you: not a good starting point.
Look at your code. What you have there is a state machine in disguise. Ask yourself whether it would be worth refactoring this on a larger scale, for example by turning it into an explicit state machine of some sort.
Another less intrusive idea: use a List instead of an array. Then you can simply create an empty list, and add elements to that as needed.
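A minimal sketch of that idea, assuming java.util.List / ArrayList and keeping the placeholders from the question:
List<Boolean> stuffNThings = new ArrayList<>();   // created empty up front, so it is never uninitialized and never null
// ... later, in what used to be "state 2":
stuffNThings.add(Boolean.TRUE);                   // grows on demand, so /*size*/ does not need to be known in advance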
After just trying to execute the code regardless of Eclipse complaining, I noticed that it does indeed run without problems. So apparently it was just a warning being set to "error" level, despite not being critical.
There was a "configure problem severity" button, so I set the severity of "Potential null pointer access" to "warning" (and adjusted some other levels accordingly). Now Eclipse just marks it as warning and executes the code without complaining.
More understandable would be:
boolean[] stuffNThings = null;   // initialised to null so the compiler's definite-assignment check passes
boolean initialized = false;
for (String string : list) {
    if (!initialized) {
        if (!/*condition*/) {
            stuffNThings = new boolean[/*size*/];
            initialized = true;
        }
    }
    if (initialized) {
        // bar
        stuffNThings[0] = true;
    }
}
Two loops, one for the initialisation and one for working with the stuff, might or might not be clearer.
It is easier on flow analysis (compared to a switch with fall-through).
Furthermore, instead of a boolean[], a BitSet might be used (it is not fixed in size like an array).
BitSet stuffNThings = new BitSet(/*max size*/);
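A possible usage sketch; a BitSet grows on demand, so the size hint is optional:
BitSet stuffNThings = new BitSet();   // java.util.BitSet; no size needed up front
stuffNThings.set(0);                  // stands in for stuffNThings[0] = true
boolean first = stuffNThings.get(0);  // reads the bit back (false if never set)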
I'm trying to get to grips with the use of the assert keyword in Java. As I understand it, the correct case is for verifying things that should always be true.
I'm worried that I'm overusing asserts, however.
Here's a sample:
private BodyParams() {
    assert revokedDoc != null : "revokedDoc must not be null";
    assert revokedDoc.getStatus() == DocumentStatus.Revoked : "document is not revoked";
    assert !isBlank(revokedDoc.getDocType()) : "docType should not be blank";
    assert revokedDoc.getIssuedDate() != null : "doc should have issue date";
    assert revokedDoc.getSendingOrg() != null
            && !isBlank(revokedDoc.getSendingOrg().getName())
            : "sending org should exist and name should be populated";

    if (registeredUser) {
        assert revokedDoc.getOwner() != null
                && !isBlank(revokedDoc.getOwner().getFirstName())
                : "owner should exist and first name should be populated";
        this.ownerFirstName = revokedDoc.getOwner().getFirstName();
        this.docUrl = Application.PUBLIC_HOSTNAME
                + controllers.routes.DocumentActions.viewDocument(
                        revokedDoc.getId()
                ).url();
    } else {
        this.ownerFirstName = null;
        this.docUrl = null;
    }

    if (revokedDoc.getStatus() == DocumentStatus.Available) {
        assert !isBlank(revokedDoc.getFriendlyName())
                : "friendly name should not be blank for picked-up docs";
        this.friendlyName = revokedDoc.getFriendlyName();
    } else {
        this.friendlyName = null;
    }

    this.docType = revokedDoc.getDocType();
    this.issueDate = revokedDoc.getIssuedDate();
    this.issuerName = revokedDoc.getSendingOrg().getName();
}
In this example, it is assumed that the revokedDoc field came from the database, and correct validation was performed when it was inserted. These asserts test that assumption. Is this overkill?
Edit: I should mention that this is only for development code; assertions will not be enabled in production. I'm using the assertions to ensure that data which will be known-good data from a trusted source in production also behaves itself in development.
It does not look right. To simplify, there are two broad categories of problems that can arise and require checking the validity of a variable:
Your method receives or uses an argument that could possibly not be what you expect. Your method should then have appropriate argument checking and throw an IllegalArgumentException, NullPointerException, or whatever is required. Example: the client code has passed in a null argument and you have no control over that code.
Your method uses some of the class internals and you should have appropriate unit tests to make sure that those internals are always consistent and that your methods can use them without additional checks.
In your case, the method that creates the revokedDoc object should make sure it is in a valid state after creation and take appropriate action otherwise, for example throw an exception and roll back any changes. That way your BodyParams method can just use the object without all those asserts, which clutter your code and come at the wrong time: if revokedDoc is not consistent, it is probably too late to do something about it here; it should have been detected earlier.
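For example, the creation path might fail fast instead (a sketch only; the loader, DAO, and Document type names below are all hypothetical, since the question does not show how revokedDoc is loaded):
// Hypothetical loader: fail fast here so BodyParams() can trust what it receives.
static Document loadRevokedDoc(String id) {
    Document doc = documentDao.findById(id);   // assumed lookup
    if (doc == null) {
        throw new IllegalStateException("No document found for id " + id);
    }
    if (doc.getStatus() != DocumentStatus.Revoked) {
        throw new IllegalStateException("Document " + id + " is not revoked but " + doc.getStatus());
    }
    return doc;
}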
Related post: Exception Vs Assertion
Assert is really useful for checking things that should always be true inside a library or a module. It is intended to verify invariants (control flow, internal state, etc.) in your code, and it is a bad idea to use it to enforce correct use of your code (you have exceptions for that).
As a consequence, your public interface should never rely on assert: when you have a public method and you want to check an input parameter, it is generally better to throw an IllegalArgumentException.
Here is some good documentation about asserts.
In your example, I think you should use exceptions instead of asserts. It's not a bad idea to perform some validity checks on data coming from a database (even if it has been validated on input), but assertions might be disabled in production code, and you have to think about how you should handle such malformed content.
This could be an opinionated question. However, I'd go with the following things to decide:
Is this method exposed to the outside world (via an interface, a JAR file, a user input field, or anywhere you could get input from a source that is not in your control)? If so, I should have an actual check that results in an exception.
Am I relying on assertions for the correct execution of the code? If so, I shouldn't be: at runtime, assertions are meant to be disabled.
Is this assertion always true, and if so, am I only going to use it in the odd case for debugging? Then yes, use an assertion in place of a code comment. When something goes bad, enable assertions and figure out what's wrong.
You need to consider two scenarios: development code and production code.
Since Java's assert statement is disabled by default (and adds only a little overhead: a check of a flag that is enabled by passing -ea to the VM), I would not consider this overkill, since it helps you detect issues early during your development phase (assuming you have enabled assertions in your development environment).
On the other hand, you say "... correct validation was performed when it was inserted ...", so how do you know that the value has not been changed in the database in the meantime? If security matters for your system (I am just assuming it does), one basic pattern is that you must not trust anything you get from the outside. That means: validate the values you read from the database, but in that case assert is not the proper tool. Use normal validation code and exceptions for that.
The best practice, according to OO methodology, is to check the params you receive, and to create regular checks for the others. So in your case you should get something like this:
private BodyParams(revokedDoc)
    [...]
    // asserts on the params
    if (isBlank(revokedDoc.....)
All the asserts look good and are a way to make sure the method has everything it needs to work. But they should be there as an aid for seeing what is going wrong, not to make your program work.
I'm using Findbugs and javax.annotation.Nonnull on method parameters.
On private methods I usually add an assert line to check for nullness like
private void myMethod(@Nonnull String str) {
    assert str != null;
    ....
The latest NetBeans version (7.3rc2) is reporting that the assert check is not necessary (because of the @Nonnull annotation). I'm not fully sure whether this is a NetBeans bug or not.
Can the assert line be removed because I specified the @Nonnull annotation?
As far as I understand, the annotation is used only during static analysis, while assert is (when enabled) active during execution, so the two are not alternatives.
The assert is evaluated at runtime; the annotation helps FindBugs catch problems during analysis, before runtime. As the two checks do not really conflict, you could keep them both. I would find it annoying if my IDE told me to remove the assert.
NetBeans is right. If you think it can be null, remove the annotation. If you know it can't, remove the assert.
If there's ANY chance that your method could be called with a null value, then #Nonnull annotation shouldn't be there.
Like you said, that annotation doesn't actually do anything at runtime: it is only used by IDEs and static code analysis tools. It doesn't ensure that things aren't null.
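For example, nothing stops a caller from compiling and running this (myMethod is the method from the question):
myMethod(null);   // FindBugs/NetBeans can flag this statically, but at run time the null is passed straight through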
Since this is a private method, you can ensure that the annotated parameter is never passed null, so I think you can remove this assertion.
If NetBeans warned about a public method, I would think it has a problem; there I would recommend keeping the assertion.
If you still feel that an assertion in a private method is necessary, I think you can use bytecode injection.
For instance, here is a Maven plugin to inject null checks. Sorry, this is my personal project, but it works for me. I guess it can suit your needs.
https://github.com/KengoTODA/jsr305-maven-plugin
I found a different solution, as I was thinking about my IDE warnings.
Initially, I felt that the IDE was wrong. I'm a paranoid programmer, and want to have the label for documentation & static analysis AND a runtime check in case I ever use it from reflection, or another JVM language or something that isn't statically analyzable, so I thought it was wrong to give me a warning and tell me the assert(x != null) statement wasn't needed.
But then I thought about how asserts can be disabled depending on the status of the -ea flag passed to Java at runtime, and that in some ways assert and @Nonnull are really both development-only checks.
It turns out there's an actual runtime check that can be inserted (Java 7+): Objects.requireNonNull, which throws a NullPointerException and cannot be switched off the way assert can. I think I'm going to prefer this to my assert(x != null); use(x); pattern.
public ConstructorForClass(@Nonnull Type x) {
    this.x = Objects.requireNonNull(x);
    //...
}
I am using PMD to analyze code and it produces a few high priority warnings which I do not know how to fix.
1) Avoid if(x!=y)..; else...; But what should I do if I need this logic? That is, I do need to check whether x != y, so how can I refactor it?
2) Use explicit scoping instead of the default package private level. But the class is indeed used only within the package. What access modifier should I use?
3) Parameter is not assigned and could be declared final. Should I add final keyword to all the places which PMD pointed out with this warning?
Avoid negation: Instead of if( x!=y ) doThis() else doThat(), check for the positive case first, because people/humans tend to like positive things more than negative. It twists the brain to have to reverse the logic in mind when reading the source code. So instead, write:
if ( x!=y ) doThis() else doThat() // Bad - negation first
if ( x==y ) doThat() else doThis() // Good - positive first
Explicit scoping: According to the PMD website, it's a controversial rule; you may hate it, someone else may like it. What you should do is make all the fields within your classes private. There seems to be a field or method (not a class) with package visibility, e.g. something like this:
class Foo {
/* private missing */ Object bar;
}
Final parameters: Method parameters should be final to avoid accidental reassignment. That's just a good practice. If you're using Eclipse, the content assist even provides a quickfix called "Change modifiers to final where possible". Just select all code in the editor with Ctrl-a and then press Ctrl-1.
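For what it's worth, a final parameter simply turns accidental reassignment into a compile error; a minimal sketch:
void greet(final String name) {
    // name = "someone else";   // would not compile: cannot assign a value to a final parameter
    System.out.println("Hello, " + name);
}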
You don't need to enable all rules. Choose some of the rules you agree to and refactor your code until all warnings are cleared.
1 - Refactor it to an if (x == y) ... else ... logic. Just avoid negative conditions in if statements; they make code harder to understand.
2 - I wouldn't enable that rule.
3 - A lot of people declare fields and variables final, especially when they want to make sure, or express, that the value of a variable shall not be changed in the method. If you don't like that, disable that rule.
These all seem like minor warnings that could be turned off.
1) It wants you to flip the logic
if(x==y) {
//old else clause
} else {
//old if clause
}
2) If package is really the correct access you want, there is no access modifier to add. I am not familiar enough to know if there is a way to suppress that specific warning.
3) A style issue. Some people want final on everything it could be on. Others think it adds too much clutter for too little information. If you are in the latter camp, turn that warning off.
Regarding the first item (the inequality) there are two issues:
1) Readability of double negation.
Say you have:
if (x != y) { not-equal clause } else { equal clause }
The second clause is executed when "x is not not-equal to y", a double negation.
This can be rewritten without the double negation as:
if (x == y) { equal clause } else { not-equal clause }
2) Correctness: if x and y are not primitives, using if (!x.equals(y)) is safer.
Using != on objects is the equivalent of using == instead of .equals() and can lead to very serious bugs.
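A classic illustration uses boxed integers, since values outside the small autobox cache are usually distinct objects:
Integer x = 1000;
Integer y = 1000;
if (x != y) {           // compares references: typically true here, even though the values match
    // usually reached on a standard JVM - this is the "very serious bug"
}
if (!x.equals(y)) {     // compares values: false
    // never reached
}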
You can also use // NOPMD at the end of any line where you don't want PMD rules to be checked.
For example, for the code given above you can suppress the PMD check like this:
class Foo {
/* private missing */ Object bar; // NOPMD
}
Please be aware that the above comment may silently suppress other warnings in the same line.
I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
Is this problem something worth designing around? Is it better to go for the first or second possibility?
Is the creation of a variable name something that would have an effect performance-wise?
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java ?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is ok. You can use contract annotations in your code to check such conditions. For instance Parasoft JTest uses an annotation like #post $result != null and flags all methods without the annotation that use the return value without checking.
If the method can return null, your code should always use the second variant and check the return value. Only you can decide what to do if the return value is null; it might be OK, or you might want to log an error:
Object o = getThis();
if (null == o) {
log.error("mymethod: Could not retrieve this");
} else {
o.doThat();
}
Personally I dislike the one-liner code "design pattern", so I side with all those who say to keep your code readable. Although I have seen much worse lines of code in existing projects, similar to this:
someMap.put(
    someObject.getSomeThing().getSomeOtherThing().getKey(),
    someObject.getSomeThing().getSomeOtherThing());
I think no one would argue that this is the way to write maintainable code.
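Extracting a local variable fixes both the readability and the duplicated call chain (the type name SomeOtherThing is made up here to match the getters above):
SomeOtherThing thing = someObject.getSomeThing().getSomeOtherThing();
someMap.put(thing.getKey(), thing);   // the chain is evaluated once, and an NPE now points at one clear line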
As for using annotations - unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. Without the IDE integration these do not have much benefit (apart from some extra documentation). However, I do recommend using assert. While it only helps at run time, it does help to find most NPE causes, has negligible performance impact, and makes the assumptions your code makes clearer.
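Applied to the code in the question, that could look like this (the assert is only active when the JVM is started with -ea):
Object o = myObj.getThis();
assert o != null : "myObj.getThis() returned null";
anObj.doThatWith(o);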
If it were me, I would change the code to your latter version, but I would also add logging (maybe print) statements with a framework like log4j, so that if something did go wrong I could check the log files to see what was null.