How would I set up PMD and Checkstyle so that their results are advice only, and disable them as build breakers on the build server? And would it be bad practice to do so?
Both PMD and Checkstyle offer valuable advice, and I want to keep using them.
But (here comes the but) I find that my code collects a lot of lint from working around some of the warnings. To name a few examples:
Test classes contain many Mockito and JUnit static imports, so invariably I have to add @SuppressWarnings("PMD.TooManyStaticImports").
A class under test needs its fields filled with mock objects; these are not used anywhere in the test, but they need to be declared and annotated with @Mock for the class under test to work correctly. Add @SuppressWarnings("PMD.UnusedPrivateField").
In test classes I will have methods for creating objects from a long list of parameters, e.g. createPerson(String firstname, String lastname, int shoesize, String favouritecolor, ...). These objects are normally created from a database or XML. Add @SuppressWarnings("PMD.ParameterNumberCheck").
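Taken together, a typical test class of mine ends up looking something like this (an illustration with made-up names such as PersonService and PersonRepository):

import org.mockito.InjectMocks;
import org.mockito.Mock;

// runner/initMocks setup omitted for brevity
@SuppressWarnings({"PMD.TooManyStaticImports", "PMD.UnusedPrivateField"})
public class PersonServiceTest {

    @Mock
    private PersonRepository personRepository; // only injected, never referenced in the test

    @InjectMocks
    private PersonService service; // the class under test

    @SuppressWarnings("PMD.ParameterNumberCheck")
    private Person createPerson(String firstname, String lastname,
                                int shoesize, String favouritecolor) {
        // builds the fixture the way the database/XML layer normally would
        return new Person(firstname, lastname, shoesize, favouritecolor);
    }
}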
Sometimes my documentation will be: "This method makes sure that X in the following 3 cases: \n ...". Apparently this is not allowed, as the first sentence should end with a period.
Parent class X has some field y that all its children need and use, but Checkstyle won't allow it unless the field is accessed through a method (getY()). This is just unnatural, IMO.
One option would be to turn the checks causing the most nuisance off permanently; however, a check may be a nuisance or very useful depending on the context.
I recognize that explicitly suppressing warnings in the code is also a way to document that, in that specific context only, the check is irrelevant and annoying. It is the amount of suppressions that annoys me: almost every test class needs suppressions, and some of the other classes need workarounds.
So would it be a solution to generate the warnings, but not allow Checkstyle and PMD violations to fail the build?
Test-classes contain ...
A class under test ...
In test classes ...
It seems to me that you should suppress these checks for your test code, since you don't agree with them there.
This is a common occurrence. In Checkstyle, for example, we don't document our test code, while our main code documents everything. To get around this for PMD, we split our configuration between test and main. To get around it for Checkstyle, we suppress violations for the test directory. You can also look at the options of the individual checks and see if there is any way to configure them to ignore your cases.
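For illustration, a Checkstyle SuppressionFilter file along these lines can silence selected checks for the whole test tree (the check names here are placeholders for whichever ones bother you):

<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
    "https://checkstyle.org/dtds/suppressions_1_2.dtd">
<suppressions>
    <!-- relax documentation checks for everything under the test source root -->
    <suppress files="[\\/]src[\\/]test[\\/]" checks="JavadocMethod|JavadocPackage"/>
</suppressions>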
Sometimes my documentation will be: "This method makes sure that X in the following 3 cases: \n ...".
I can't say for certain since I don't know the contents of your methods, but the first sentence should be a simple explanation of what the method does and its goal. Then you can follow it with the specific cases you mentioned. Checkstyle just requires the first sentence to end with a period, not every sentence.
Parent class X has some field y that all its children need and use, but Checkstyle won't allow it unless the field is accessed through a method (getY()). This is just unnatural, IMO.
Since you completely dislike this, just disable the check for protected fields. If you look at the documentation for VisibilityModifier, you can set protectedAllowed to true and have it ignore these specific cases.
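In the Checkstyle configuration that is a one-line property:

<module name="VisibilityModifier">
    <!-- allow protected fields such as the parent's y, instead of forcing getY() -->
    <property name="protectedAllowed" value="true"/>
</module>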
I find that my code collects a lot of lint from working around some of the warnings.
To me, it seems you are not customizing these tools to your preferences and are just trying to use a default configuration.
I know how to make sure that a function has been invoked, using:
Mockito.verify
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the function was invoked.
I can basically write a unit test for every case, but I want to make sure that if any further cases are added, there will also be an invocation of that method.
Unit testing alone will not do that. You have to look into using coverage in order to get there.
Unit testing can only tell you whether the paths that were taken produced a "valid" result; it has no knowledge of "all paths" that exist, or whether they were all hit.
So you want to look at the available coverage tools and learn which one would work for you.
When you are working with Eclipse or IntelliJ, those things work out of the box; you can install plugins like Cobertura or EclEmma within Eclipse, and then do a "run unit tests with coverage".
But of course, that only results in a number. You then have to look carefully at your code to understand whether you are happy with that number (and those IDEs make that really easy; they can show you your source code and which paths were taken).
Meaning: coverage is a whole concept, and you have to understand what it means and in which way you can make that concept helpful for your daily work. For example, the last thing you want is your boss giving you a specific target goal for coverage.
And just to be sure: there is no tooling that tells you "you added new code, and now this specific method invocation is no longer reached on all paths". What coverage gives you is that you had 75.32% coverage before your change, and afterwards it went down to 74.01% ... the rest is then up to you.
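For completeness, if you build with Maven, a minimal sketch using the JaCoCo plugin would produce such numbers during the build (the version and phase below are reasonable defaults, not requirements):

<!-- instruments the tests and writes a coverage report during the build -->
<plugin>
    <groupId>org.jacoco</groupId>
    <artifactId>jacoco-maven-plugin</artifactId>
    <version>0.8.11</version>
    <executions>
        <execution>
            <goals>
                <goal>prepare-agent</goal>
            </goals>
        </execution>
        <execution>
            <id>report</id>
            <phase>verify</phase>
            <goals>
                <goal>report</goal>
            </goals>
        </execution>
    </executions>
</plugin>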
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the function was invoked.
You don't want that.
The misunderstanding is that you don't test "the code". You test publicly observable behavior. In your case the behavior is that your unit under test (UuT), after doing other stuff, calls a method on a dependency (I hope).
You don't want to test "the code" because it may change, to become cleaner and/or to support more behavior. But then you don't want to change your existing tests, since they guarantee that the desired behavior is preserved during your refactoring.
On the other hand, each test method should verify exactly one expectation about the UuT's behavior. This means that you should already have one test method for each execution path through your if/else cascade. So all you have to do is add the verify() instruction to each of these test methods.
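As a sketch with made-up names (Notifier and TaskService are hypothetical stand-ins for your dependency and your UuT), that could look like:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TaskServiceTest {

    private final Notifier notifier = mock(Notifier.class);         // the dependency
    private final TaskService service = new TaskService(notifier);  // the UuT

    @Test
    public void notifiesOnPositiveInput() {
        service.process(1);                  // drives the "if" branch
        verify(notifier).send("positive");   // one expectation per test method
    }

    @Test
    public void notifiesOnNegativeInput() {
        service.process(-1);                 // drives the "else" branch
        verify(notifier).send("negative");
    }
}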
Finally, you may have an easier job testing your code if you enforce the Single Layer of Abstraction principle, which basically says that a method either calls methods on some dependencies (aka "dispatching"), calls internal methods, or does low-level operations. This principle may lead to a design where the "low level" stuff your UuT currently does moves to a new dependency, so that your UuT only needs to do two calls on some dependencies in a certain order...
I have encountered this issue before, where I needed to tightly bind tests to the switch cases, and I was desperate enough to do that.
I am assuming test coverage analysis is not enough for you. For me, the if-else conditions were so critical that changing something unintentionally could have been disastrous, so I could not afford to leave failure-prone code, and I needed a test case to satisfy myself.
Here is how I satisfied myself:
1: Changed the conditions (if..else, etc.) to a switch on an enum variable, say TaskSwitcherEnum taskSwitcher, and performed all sorts of operations under the various possible values of TaskSwitcherEnum.
switch (taskSwitcher) {
    case Task_Type_1: // enum constants must be unqualified in a switch
        // do something before the break
        break;
    case Task_Type_2:
        // do something else
        break;
    ...
}
2: Tightly tested the desired method for all possible values of TaskSwitcherEnum, using Mockito.verify() to check that the required task method is called exactly once for each given TaskSwitcherEnum value.
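A sketch of that step, assuming the usual Mockito static imports and hypothetical TaskPerformer/TaskHandler names:

@Test
public void everyTaskTypeInvokesItsHandlerExactlyOnce() {
    for (TaskSwitcherEnum type : TaskSwitcherEnum.values()) {
        TaskHandler handler = mock(TaskHandler.class);  // hypothetical dependency
        new TaskPerformer(handler).perform(type);       // hypothetical unit under test
        verify(handler, times(1)).handle(type);         // exactly one dispatch per value
    }
}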
3: Finally, added a JUnit assertion like this:
assertEquals("Task performance strategy is designed to handle only five cases.", 5, TaskSwitcherEnum.values().length);
Doing this made sure of (or at least test-covered) the following things:
1: That my code has only the desired branches, and any other branch addition/deletion is caught by a test case.
2: That each branch does its desired job by calling the method I want, verified by testing every particular enum value against the called method.
The gist of the whole answer is: sometimes a little design change helps a lot.
There may be some related questions, but I think my situation is peculiar enough to justify a question on its own.
I'm working on a historically grown, huge Java project (far over one million LOC; for unrelated reasons we're still bound to Java 6 at the moment), where reflection is used to display data in tables. Reflection is not used for dynamically changing the displayed data, but just as a kind of shortcut in the code. A simplified part of the code looks like this:
TableColumns taco = new TableColumns(Bean.class);
taco.add(new TableColumn("myFirstMember"));
taco.add(new TableColumn("mySecondMember"));
...
List<Bean> dataList = getDataFromDB(myFilterSettings);
taco.displayTable(dataList);
So the values of the table cells of each row are stored in an instance of Bean. The value for the first cell comes from calling itemOfDataList.getMyFirstMember() (this is the reflection part of the code). The rendering of the table cells is done depending on the return type of itemOfDataList.getMyFirstMember().
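Under the hood, displayTable() presumably does something along these lines for each row and column (a simplified assumption, not the actual code; imports and exception handling omitted):

// derive the getter name from the column's property name and invoke it reflectively
String propertyName = "myFirstMember";
String getterName = "get" + Character.toUpperCase(propertyName.charAt(0))
        + propertyName.substring(1);
Method getter = Bean.class.getMethod(getterName); // only fails at runtime if renamed
Object cellValue = getter.invoke(itemOfDataList);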
This way, it's easy to add new columns to the table and get them rendered in a standard way without caring about any details.
The problem with this approach: when a getter name changes, the compiler doesn't notice, and there will be an exception at runtime if Bean.getMyFirstMember() was renamed to Bean.getMyFirstMemberChanged().
While reflection is used to determine which getter is called, the needed info is in fact available at compile time, there are no variables used for the column info.
My goal: having a validator that will check at compile time whether the needed getter methods in the Bean class do exist.
Possible solutions:
Modifying the code (using more specific infos, writing an adapter, using annotations, or whatever can be checked at compile time by the compiler): I explicitly don't want a solution of this kind, due to the huge code base. I just need to guarantee that the reflection won't fail at runtime.
Writing a custom validator: I guess this shouldn't be too complex, but I have no real idea how to start. We use Eclipse as IDE, so it should be possible to write such a custom validator. Any hints for a good starting point?
The validator should show a warning in Eclipse if the parameter in TableColumn(parameter) isn't final (it should be a literal or constant), and an error if the TableColumn is added to TableColumns and the corresponding Bean.getParameter() doesn't exist.
As we use SonarQube for quality checking, we could also implement a custom rule checking whether the methods exist. I'm not completely sure such a custom rule is possible (probably yes).
Maybe there are other solutions that will give fast feedback within Eclipse that some tables won't render correctly after getter methods were renamed.
What I'm asking for:
What will be easier in this situation: writing a custom validator for Eclipse, or writing a custom rule for SonarQube?
Hints where to start with either approach
Hints for other solutions
Thanks for your help.
Some alternatives:
You could migrate to more modern Java for this pattern; it is a prime candidate for method references. Then your IDE of choice can automatically take care of the problem when you refactor/rename. This can be done bit by bit as the opportunity/necessity arises.
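As a sketch, assuming a hypothetical TableColumn overload that takes an extractor function alongside the header text:

// Bean::getMyFirstMember is resolved by the compiler, so renaming the getter
// becomes a compile error instead of a runtime exception
TableColumns taco = new TableColumns(Bean.class);
taco.add(new TableColumn("First member", Bean::getMyFirstMember));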
You could write your own custom annotations:
Which you can probably get SonarQube to scan for
Which could allow you to take advantage of javax.validation.* goodies, so your code may look/feel more like 'standard' Java EE code.
Annotations can be processed by an annotation processor during the build step; various build tools have ways to hook this up, and the processor can do more advanced/costly introspection, so you can push the validation to compile time as opposed to run time.
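For illustration, such a marker annotation could look like this (the name and semantics are invented here, not an existing API):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// a compile-time-only marker; an annotation processor would verify that each
// listed property has a matching getter on the annotated bean class
@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
public @interface TableBean {
    String[] columns();
}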
othersMap.put("maskedPan", Class.forName("Some Class"));
Remove this use of dynamic class loading.
Classes should not be loaded dynamically
Dynamically loaded classes could contain malicious code executed by a static class initializer, i.e. you wouldn't even have to instantiate or explicitly invoke methods on such classes to be vulnerable to an attack.
This rule raises an issue for each use of dynamic class loading.
Noncompliant Code Example
String className = System.getProperty("messageClassName");
Class clazz = Class.forName(className); // Noncompliant
Let's first state the obvious: a SonarQube rule is not meant to be taken as The One And Only Truth In The Universe. It is merely a way to bring your attention to a potentially sensitive piece of code, and it is up to you to take the appropriate action. If people in your organization force you to abide by SonarQube's rules, then they don't understand the purpose of the tool.
In this case, the rule is telling you that you are at a risk of arbitrary code execution, due to the class name being loaded through a system property, without any safety check whatsoever. And I can only agree with what the rule says.
Now, it is up to you to decide what to do with this information:
If you believe that your build and deployment system is robust enough that no malicious code can be side-loaded through this channel, you can just mark this issue as "won't fix", optionally provide a comment about why you consider it not an issue, and move on.
If instead you assume that an attacker could drop a .class or .jar file somewhere in your application's class path and use this as a side-loading channel for arbitrary code execution, you should at the very least validate that the provided class name is one you expect, and reject any unexpected one.
One option would be something like this:
Class<?> cls;
switch (System.getProperty("messageClassName")) {
    case "com.example.Message1":
        cls = com.example.Message1.class;
        break;
    ...
    default:
        // reject any class name that is not explicitly whitelisted
        throw new IllegalArgumentException("Unexpected message class name");
}
Well, you could try to outsmart the Sonar rule, e.g. by using reflection to call the Class.forName() method, but I feel you would be solving the wrong problem there:
Class.class.getDeclaredMethod("forName", String.class).invoke(null, className);
The right way to do it is to either convince the people who run Sonar in your org that what you do is necessary, so that they make an exception to the rule for you, or, if you can't convince them, to stop doing it.
All JUnit assert methods have an optional first parameter: the message printed when the assertion fails.
How do I ensure the parameter is always passed, and developers in my project never lazily skip describing what the assertion is doing?
Is there an inspection tool that can check for that?
Is there anything I can do programmatically?
My project is maven-friendly.
As code inspection seems to be your aim, I would recommend a tool called PMD. If there is not already a rule for this, I would think it is fairly trivial to create one. Furthermore, it will help you detect other code mess your developers may be creating.
Here is a link:
http://pmd.sourceforge.net/
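If I remember correctly, PMD even ships such a rule out of the box; a minimal Maven-friendly ruleset referencing it could look like this (the rule path assumes PMD 6's category layout):

<?xml version="1.0"?>
<ruleset name="assert-messages"
         xmlns="http://pmd.sourceforge.net/ruleset/2.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sourceforge.net/ruleset/2.0.0
                             https://pmd.sourceforge.io/ruleset_2_0_0.xsd">
    <description>Require a message on every JUnit assertion.</description>
    <!-- flags assert calls that omit the failure message -->
    <rule ref="category/java/bestpractices.xml/JUnitAssertionsShouldIncludeMessage"/>
</ruleset>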
In our team, we do code reviews of pull requests. If asserts are not following the standard, you could flag it in the review. Only pull requests that have sufficient approvals are allowed to be merged.
That said, I would probably not enforce this rule on my team. Instead I'd tell them to write shorter tests where the method name clearly states the intent, like:
@Test(expected = IllegalArgumentException.class)
public void shouldThrowExceptionWhenInputIsNegative() {}

@Test
public void shouldFilterOutNulls() {}

@Test
public void shouldCreateAdditionalRecordWhenBankBalanceIsOver10000() {}
etc...
For writing unit tests, I know it's very popular to write test methods that look like:
public void Can_User_Authenticate_With_Bad_Password()
{
...
}
While this makes it easy to see what the test is testing for, I think it looks ugly and doesn't display well in auto-generated documentation (like Sandcastle or Javadoc).
I'm interested to see what people think about using a naming scheme where the name is the method being tested, an underscore, and then the test number, and then using the XML documentation comments (.NET) or Javadoc comments to describe what is being tested:
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1()
{
...
}
By doing this I can easily group my tests together by the methods they are testing, I can see how many tests I have for a given method, and I still have a full description of what is being tested.
We have some regression tests that run against a data source (an XML file), and these files may be updated by someone without access to the source code (QA monkey), who needs to be able to read what is being tested, and where, in order to update the data sources.
I prefer the "long names" version, although only to describe what happens. If the test needs a description of why it happens, I'll put that in a comment (with a bug number if appropriate).
With the long name, it's much clearer what's gone wrong when you get a mail (or whatever) telling you which tests have failed.
I would write it in terms of what it should do though:
LogInSucceedsWithValidCredentials
LogInFailsWithIncorrectPassword
LogInFailsForUnknownUser
I don't buy the argument that it looks bad in autogenerated documentation. Why are you running Javadoc over the tests in the first place? I can't say I've ever done that, or wanted generated documentation for tests. Given that test methods typically have no parameters and don't return anything, if the method name can describe them reasonably, that's all the information you need. The test runner should be capable of listing the tests it runs, or the IDE can show you what's available. I find that more convenient than navigating via HTML; the browser doesn't have a "Find Type" that lets me type just the first letters of each word of the name, for example...
Does the documentation show up in your test runner? If not, that's a good reason for using long, descriptive names instead.
Personally I prefer long names and rarely see the need to add comments to tests.
I've done my dissertation on a related topic, so here are my two cents: any time you rely on documentation to convey something that is not in your method signature, you take the huge risk that nobody will read the documentation.
When developers are looking for something specific (e.g., scanning a long list of methods in a class to see if what they're looking for is already there), most of them are not going to bother to read the documentation. They want to deal with one type of information that they can easily see and compare (e.g., names), rather than have to start redirecting to other materials (e.g., hover long enough to see the JavaDocs).
I would strongly recommend conveying everything relevant in your signature.
Personally I prefer using long method names. Note you can also have the name of the method under test inside the expression, as in:
Can_AuthenticateUser_With_Bad_Password()
I suggest smaller, more focussed (test) classes.
Why would you want to javadoc tests?
What about changing
Can_User_Authenticate_With_Bad_Password
to
AuthenticateDenyTest
AuthenticateAcceptTest
and naming the test suite something like User?
As a group, how do we feel about a hybrid naming scheme like this:
/// <summary>
/// Tests for user authentication with a bad password.
/// </summary>
public void AuthenticateUser_Test1_With_Bad_Password()
{
...
}
That way we get the best of both.