Cobertura coverage and the assert keyword - java

My line coverage for unit tests measured by Cobertura is suffering, because I have assert statements which are not covered in tests. Should I be testing assertions, and is there any way to get Cobertura to ignore them so they do not affect my test coverage?

The line coverage of your Java assert statements is obtained simply by running your test suite with assertions enabled, i.e., passing -ea as an argument to the JVM.
If you do this, you'll see that Cobertura easily reports 100% line coverage if the rest of your lines are covered as well.
Nevertheless, the assert lines will still be colored red, suggesting insufficient coverage.
This is because your assertions normally always hold, so you never hit the false branch.
Thus, branch coverage in Cobertura is messed up by using assert, since assert lines will have 50% branch coverage, rendering the overall branch coverage percentage hard to interpret (or useless).
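For illustration, here is a minimal sketch of that situation (the class and test names are invented): the assert line is executed whenever the tests run with -ea, so line coverage is fine, but the false branch of the assert is never taken.

    // Accumulator.java -- production code with a post-condition expressed via assert.
    public class Accumulator {
        private int total;

        public int add(int amount) {
            total += amount;
            // Executed when tests run with assertions enabled (e.g. passing -ea via
            // Surefire's argLine), but the "false" branch is only taken if it fails.
            assert total >= amount : "total overflowed or amount was negative";
            return total;
        }
    }

    // AccumulatorTest.java -- covers the assert line, but can never hit its false branch.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccumulatorTest {
        @Test
        public void addAccumulatesAmounts() {
            Accumulator acc = new Accumulator();
            acc.add(2);
            assertEquals(5, acc.add(3));
        }
    }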
Clover has a nice feature to ignore assert when computing coverage. I haven't seen this feature in any open source Java coverage tool.
If you use assertions in a design-by-contract style, there is no need to add tests that make your Java assert statements fail. As a matter of fact,
for many assertions (e.g., invariants, post-conditions) you cannot even create objects that would make them fail, so it is impossible to write such tests.
What you can do, though, is use the invariants/postconditions to derive test cases exercising their boundaries -- see Robert Binder's invariant boundaries pattern. But this won't make your assertions fail.
Only if you have a very tricky pre-condition for a given method might you want to consider writing a test aimed at making the pre-condition fail. But then re-thinking your pre-condition may be the better idea.
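As an illustration of deriving boundary tests from a contract (the class below is hypothetical), suppose a pre-condition requires 0 < percent <= 100. Rather than trying to make the assert fail, you exercise the values right at the edges:

    // Discount.java
    public class Discount {
        /** Pre-condition: percent must be in (0, 100]. */
        public double apply(double price, int percent) {
            assert percent > 0 && percent <= 100 : "percent out of range: " + percent;
            return price * (100 - percent) / 100.0;
        }
    }

    // DiscountBoundaryTest.java -- tests derived from the boundaries of the contract.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class DiscountBoundaryTest {
        private final Discount discount = new Discount();

        @Test
        public void smallestAllowedPercent() {
            assertEquals(99.0, discount.apply(100.0, 1), 1e-9);
        }

        @Test
        public void largestAllowedPercent() {
            assertEquals(0.0, discount.apply(100.0, 100), 1e-9);
        }
    }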

There's a feature request (1959691) already logged for Cobertura to add the ability to ignore assert lines; feel free to watch it in case they implement it, and add a comment to the page if you think it's a good idea. It's a shame none of the open-source coverage tools support this yet, as I definitely think assert statements are a good idea, but testing them usually isn't practical or possible.

Code coverage is a tool that allows you to improve your testing; it is not some kind of proof of the validity of your tests. You get value from it by aiming for 100% code coverage, looking at what is not covered, and reflecting on how to improve your tests.
Finding uncovered assert statements is just one example where you should call: "OK, no need to cover this." Similar examples are tracing macros and debug code, which add hidden "if statements" that are never covered.
I guess what you don't like about this answer is that you want to have a minimal code-coverage requirement for your code; I faced the same issue with some of my projects. If you must not lose the coverage data, one solution is to have a "coverage build" where the problematic statements (assert, trace, etc.) are replaced with empty ones (say, through macro or linker magic). But I believe it is usually a better trade-off just to accept the non-perfect coverage.
Good luck.

Presumably you're using assertions to verify pre-conditions or some other set of conditions. Consider whether your situation is similar to Argument Exceptions should be Unit Tested.
In any case, I expect you're ultimately trying to determine if you should test the negative-path branches of these statements.
Yes, by all means, test those. Your assertions are themselves logic about your assumptions, and it's a Good Thing to test that your guard statements prevent/protect the scenarios you think they do.
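If the guards are written as argument checks that throw exceptions (rather than the assert keyword), the negative path is straightforward to exercise. A hypothetical sketch in JUnit 4:

    // Temperature.java -- guard clause written as an argument check.
    public class Temperature {
        private final double kelvin;

        public Temperature(double kelvin) {
            if (kelvin < 0) {
                throw new IllegalArgumentException("kelvin must be >= 0: " + kelvin);
            }
            this.kelvin = kelvin;
        }

        public double getKelvin() {
            return kelvin;
        }
    }

    // TemperatureTest.java -- exercises the negative path of the guard.
    import org.junit.Test;

    public class TemperatureTest {
        @Test(expected = IllegalArgumentException.class)
        public void rejectsNegativeKelvin() {
            new Temperature(-1.0);
        }
    }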

Related

Is there a way to force that a method is tested?

I'd like to mark unit-tested methods as tested in Java code, so that a compiler error occurs in case of a missing test method.
Is there a way to do that? I couldn't find a satisfying solution.
Of course it's possible: you can write your own postprocessor that parses your code, finds the marked methods, runs the tests, gets the coverage results, compares them with the marked methods, and fails the build if something is wrong (a rough sketch of such a marker annotation follows below).
But:
it's a loooot of work;
the benefit is very small. Why? Because what matters is not the coverage itself but the quality of the tests. You could even write a test generator that covers all or most of your code automatically, but it would have no value. A good test checks for specific behaviour, and only the devs know what behaviour is expected.
The quality of tests is usually covered by code reviews. On the pull request screen you can also display the stats of the pull request, i.e. code coverage etc., but I'm not sure automating it further is worth the effort.
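Purely as a sketch of what such a marker could look like (the annotation name and the scanning build step are invented, not an existing tool):

    // CoveredBy.java -- hypothetical marker a custom postprocessor could scan for,
    // cross-checking the referenced test class against coverage results and failing
    // the build if it is missing.
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.CLASS)
    @Target(ElementType.METHOD)
    public @interface CoveredBy {
        /** Fully qualified name of the test class expected to cover this method. */
        String value();
    }

    // Usage on production code:
    public class PriceCalculator {
        @CoveredBy("com.example.PriceCalculatorTest")
        public long totalInCents(long unitPriceInCents, int quantity) {
            return unitPriceInCents * quantity;
        }
    }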

Catch Fake Unit Tests in Java

I have tests in my repo which do not have any asserts, yet JaCoCo gives good coverage. Is there a way to detect tests like this, other than better code reviews?
Use PMD. It has a standard rule for unit tests without any asserts.
If your intent is only to find trivially wrong tests, namely those that lack any assert statement, checkers like PMD might do the job. You will likely get some false positives, namely for tests where the actual test goal is just to ensure that the SUT does not throw an exception (a case mentioned in the comments by Peter Lawrey).
There are, however, many more problems in tests that are not as simple to find: an assertion could just be wrong (assertTrue(isPrime(9))), or the assertion may only address a part of the relevant aspects (for example, when dealing with rational numbers, only checking the numerator but not the denominator).
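To make the rational-number example concrete (the Rational class here is hypothetical): the test below contains an assertion and so would satisfy an asserts-present check, yet it would not notice a bug that leaves the denominator wrong.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class RationalTest {
        @Test
        public void addsFractions() {
            Rational sum = new Rational(1, 2).add(new Rational(1, 3));  // 1/2 + 1/3 = 5/6
            // Weak: only the numerator is checked; a broken denominator goes unnoticed.
            assertEquals(5, sum.getNumerator());
            // A stronger test would also check: assertEquals(6, sum.getDenominator());
        }
    }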
For the detection of such quality problems there also exist approaches, like mutation testing, which can help to some extent. And, while writing the tests in the first place, using a test-first development process ensures that each test has failed at least once.
However, a test suite may have additional problems that are not related to the set of tests that are executed, but to non-functional quality criteria: Test execution time, maintainability of the test suite, expressiveness of the diagnostic output in case of a test failure etc.
When you are interested in detecting these kinds of problems, I would argue reviews are unavoidable.

Best rules to write JUnit test cases

We have a project with > 80% test coverage, but the quality of the code is still a problem. Some changes keep introducing new bugs even with test cases covering the logic.
What are the best rules for writing JUnit test cases?
A tip first: when developing new code, start doing it in a unit test
Do not start with border cases or null tests.
Make stable tests, which will not easily be broken by source changes.
For scenarios with business data and sequences, write helper functions (see the sketch after this list).
Catching regression errors and weaknesses in code usage is the main goal.
New code often merits a redesign or rewrite when some additional functionality crystallizes. A unit test should not be a burden, and should be entirely rewritten too.
Some effort can better go into the tested code.
100% coverage also implies that much effort went into test code and its maintenance.
However, if unit testing a piece of source is cumbersome, refactor the original code by splitting responsibilities. This can be done (if somewhat ugly) by using inheritance to separate the different aspects.
Rely on FindBugs/Coverity, Selenium and the like too.
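As a sketch of the "helper functions for business data" tip above (the Order class and its methods are made up), a small test-data helper keeps the scenario set-up out of the individual tests:

    public class OrderTestData {
        /** Helper producing a typical order so each test only states what differs. */
        public static Order paidOrder(String customer, int itemCount) {
            Order order = new Order(customer);
            for (int i = 0; i < itemCount; i++) {
                order.addItem("item-" + i, /* priceInCents */ 100);
            }
            order.markPaid();
            return order;
        }
    }

    // In a test:
    // Order order = OrderTestData.paidOrder("alice", 3);
    // assertEquals(300, order.totalInCents());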
You said "Some changes will bring in new bugs ever with test cases to cover the logic".
My question is: what kind of bugs are introduced? Are these functional bugs, say a missing feature or a feature not working as expected, even though both the code and the unit tests are there?
Then I would say you need to do some requirement mapping. JUnit tests are not only there to cover the "written code"; you should also look at the requirements/scenarios, and there should be at least one unit test for each scenario. Every time there is a change in functionality, make sure you update/add the JUnit test first (so it starts failing), then update/add/remove code, so the failing test starts passing.
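A hedged sketch of that workflow (the class and the business rule are invented): first add a test for the new requirement so it fails against the current code, then change the implementation until it passes.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ShippingCostTest {
        // New requirement: orders of 100 EUR or more ship for free.
        // Written before the production change, so it fails first.
        @Test
        public void largeOrdersShipForFree() {
            assertEquals(0, new ShippingCost().centsFor(/* orderTotalInCents */ 10000));
        }
    }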

JUnit report to show test functionality, not coverage

One of the problems of a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done since the developers use their JUnit test as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully just check it in without any assert tests or mock verifies.
Then later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it will create high code coverage stats from Cobertura or JaCoCo, and yet nothing is tested except its ability to run without blowing up - and I've even seen that worked around with big try-catch blocks in the test.
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester, which tests the tests by changing the code under test (e.g. an if clause) and re-running it to see if it breaks the test.
However, this isn't something you could set up to run on a CI server: it requires set-up on the command line, can't run without showing its GUI, only prints results onto the GUI, and takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e., line, statement, branch, etc.) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem are tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin to make it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports.
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
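To make the quoted idea concrete with an invented example: a mutation tool might change the > below into >= (a typical conditionals-boundary mutation). The first test still passes against the mutant, so the mutant survives and points at a gap; the second test kills it.

    // Vote.java
    public class Vote {
        public boolean isAdult(int age) {
            return age > 17;   // mutant: age >= 17
        }
    }

    // VoteTest.java
    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class VoteTest {
        @Test
        public void adultIsRecognised() {
            // Survives the mutation: 30 > 17 and 30 >= 17 are both true.
            assertTrue(new Vote().isAdult(30));
        }

        @Test
        public void seventeenIsNotAdult() {
            // Kills the mutation: 17 > 17 is false, but 17 >= 17 is true.
            assertFalse(new Vote().isAdult(17));
        }
    }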
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
Write a test.
Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
When you have a failing test, implement the feature it is supposed to test. Now it should pass.
You have various options:
You could probably use some code analysis tool like Checkstyle to verify that each test has an assertion, or alternatively use a JUnit Rule to verify this (a rough sketch of such a rule follows after this list of options), but both are easily tricked and work only on a superficial level.
Mutation testing as Jester does it is again a technical solution which would work, and it seems @Tom_G has a tool that might work. But these tools are (in my experience) extremely slow, because they work by changing the code, running the tests, and analyzing the results over and over again. So even tiny code bases take lots of time, and I wouldn't even think about using it in a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so that there is almost no need to start up parts of the application? Get some training in automated unit testing and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like: we can't test this because ... You need to find the real reason why the developers can't or don't want to write these tests, which might or might not be the reasons they state. Then fix those reasons and those abominations of tests will go away all on their own.
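For completeness, here is a rough sketch of the superficial JUnit 4 Rule idea mentioned in the first option; the rule and its check helper are invented, and, as the answer warns, it is trivially bypassed.

    import static org.junit.Assert.assertEquals;

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    /** Invented rule: fails any test that never routed an assertion through it. */
    public class RequireAssertionRule implements TestRule {
        private int assertions;

        /** Tests call this wrapper instead of Assert.assertEquals directly. */
        public <T> void check(T expected, T actual) {
            assertions++;
            assertEquals(expected, actual);
        }

        @Override
        public Statement apply(Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    assertions = 0;
                    base.evaluate();
                    if (assertions == 0) {
                        throw new AssertionError("Test made no assertions: " + description);
                    }
                }
            };
        }
    }

    // Usage in a test class -- easy to bypass, as noted above:
    // @Rule public RequireAssertionRule require = new RequireAssertionRule();
    // @Test public void something() { require.check(4, 2 + 2); }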
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know, Major does not yet produce graphical reports but rather data (csv) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like JaCoCo; its Gradle plugin provides a report on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with fairly nice integration into the IDE.
In my experience, JaCoCo has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the scores increase as the tests become more complete.
Update
To address the down-voter: perhaps a more appropriate tool to address this is PMD. It can be used in an IDE or build system. With proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.

Ensure minimal coverage on new Subversion commits

We have a massive project with almost no unit tests at all. I would like to ensure from now on that developers cannot commit new features (or bugs!) without minimal coverage from corresponding unit tests.
What are some ways to enforce this?
We use many tools, so perhaps I can use a plugin (JIRA, GreenHopper, FishEye, Sonar, Hudson). I was also thinking perhaps a Subversion pre-commit hook, the Commit Acceptance Plugin for JIRA, or something equivalent.
Thoughts?
Sonar (a wonderful tool, by the way) with the Build Breaker plugin can break your Hudson build when some metrics don't meet specified rules. You can set up a rule in Sonar that triggers an alert (eventually causing the build to fail) when the coverage is below a given threshold. The only drawback is that you probably want the coverage to grow, so you must remember to increase the alert level regularly to the current value.
What you want to do is determine what is new code, and verify that the new code is covered by some test.
Determining code coverage in general can be accomplished with any of a variety of test coverage tools. Many test coverage tools can simply reinstrument your entire application and then you can run tests to determine coverage.
Our (Semantic Designs') line of Test Coverage tools can determine, from a changed-file list, just the individual files that need to be re-instrumented and, with careful test organization, just the tests that need to be re-executed. This will minimize the cost of re-running your tests, and you'll still end up with the same overall coverage data. (Actually, these tools detect which tests need to be rerun based on changes at the method level.)
Once you have test coverage data, what you want to know is whether the new code specifically is covered by some tests. You can do this sloppily with just test coverage data if you know which files changed, by insisting that the changed files have 100% coverage. That probably doesn't work in practice.
You could instead take advantage of SD's Smart Differencer tools to give a more precise answer. These tools compare two language files, and indicate where the changes are using the language syntax (e.g., expression, statement, declaration, method body, not just changed source lines) and conceptual editing operations (move, copy, delete, insert, rename-identifier-within-block). SmartDifferencer deltas tend to be both smaller and finer than what you would get from a plain diff tool.
It is easy to extract from the SmartDifferencer's output a list of changed lines. One could compute the intersection of that, per file, with the lines covered by the test coverage data. If the changed lines are not entirely within the set of covered lines, then "new" code hasn't been tested and you can raise a flag, stop a check-in, or whatever signals that your check-in policy has been violated.
The TestCoverage and SmartDifferencer tools don't come out-of-the-box with this computation done for you, but it should be a pretty easy script to implement.
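A minimal sketch of that script logic in Java (the input maps are assumed to have been parsed already from the differencer and coverage outputs; the formats here are invented):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class NewCodeCoverageCheck {

        /** Returns, per file, the changed lines that no test covered. */
        public static Map<String, Set<Integer>> uncoveredChanges(
                Map<String, Set<Integer>> changedLinesByFile,
                Map<String, Set<Integer>> coveredLinesByFile) {
            Map<String, Set<Integer>> violations = new HashMap<>();
            for (Map.Entry<String, Set<Integer>> entry : changedLinesByFile.entrySet()) {
                Set<Integer> uncovered = new HashSet<>(entry.getValue());
                uncovered.removeAll(
                        coveredLinesByFile.getOrDefault(entry.getKey(), Collections.emptySet()));
                if (!uncovered.isEmpty()) {
                    violations.put(entry.getKey(), uncovered);  // new code without coverage
                }
            }
            return violations;  // non-empty => flag the commit / fail the build
        }
    }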
If you use Maven, the cobertura-maven-plugin can be a good choice (and not as annoying for developers as an SVN hook):
http://mojo.codehaus.org/cobertura-maven-plugin/usage.html
