Run JUnit tests automatically before commit in Eclipse - java

This is nearly identical to this question, with two differences.
I want to run the unit tests right before a commit, not all of the time (as in the accepted answer).
The answers and comments to the question seem to indicate that the suggested plugins are no longer supported.
The basic problem I am trying to solve is simply forgetting to run unit tests for small, quick changes, and it seems this should be possible to automate.

On the question you linked to, I mentioned Infinitest. It doesn't fit your item #1, but it only reruns tests that are likely to have broken (presumably it does some kind of clever code analysis -- I don't know the details) so you might find it useful anyway. It is definitely still supported, and in fact it's now open source!

"Basic problem I am trying to solve is simply forgetting to run unit tests for small quick changes and it seems it should be possible to automate."
If your tests are fast, you can run them on each save. See this blog post by Misko Hevery.
Alternatively, you could use a commit hook to run the tests before accepting the update.

Related

During git bisect, is it safe to run only failing tests? or we should run all tests?

When I use the git bisect command, I run only the failing tests at each bisection point for Java programs. However, I see that many tutorials related to git bisect propose running "make; make test". Is there any reason why I should run all the tests at each step?
Thanks a lot in advance.
I would have to say that the conditions mentioned by @bcmcfc are necessary but not sufficient. For reference, his conditions are:
all tests pass at the commit marked as good
some tests fail at the commit marked as bad
My problem is not knowing what has happened in between the good commit and the bad. For example, was there another bug discovered and fixed in the intervening commits? It's conceivable that that bug or its fix influenced this bug.
Another issue is the possible presence of "dirty" commits in the history. I don't know your usage patterns, but some people allow commits with failing tests to be present on feature branches. bisect can land on those commits, and if you only run the tests that you expect to fail you may not fully understand what's happening in that commit, and that may lead you astray in fixing the bug. It may even be that the bug was introduced and then fixed in that feature branch, then introduced again later on another feature branch in a slightly different way, which will really confuse your efforts to fix it.
This seems to me to be an example of the old adage, "In theory there's no difference between theory and practice, but in practice there is." I would run every test every time. And if they all pass where you expect, then you shouldn't feel like you've wasted your effort, you should glow with confidence knowing that you know what's going on.
If:
all tests pass at the commit marked as good
some tests fail at the commit marked as bad
Then yes, it's safe to only run the failing tests to speed up the bisection process. You can infer from the test results at the good and bad commits that the rest of the tests should pass.
You would probably re-run the full test suite after fixing the bug in question in any case, which covers you for the case where your bugfix introduces a regression.

JUnit report to show test functionality, not coverage

One of the problems of a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done since the developers use their JUnit test as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully just check it in without any assert tests or mock verifies.
Then later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it will create high code coverage stats from Cobertura or Jacoco, and yet nothing is tested except its ability to run without blowing up - and I've even seen that worked around with big try-catch blocks in the test.
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester which tests the tests by changing the code under test (e.g. an if clause) and re-running it to see if it breaks the test.
However, this isn't something you could set up to run on a CI server - it requires set-up on the command line, can't run without showing its GUI, only prints results onto the GUI, and also takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e line, statement, branch etc) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem are tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin to make it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports.
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
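To make the idea concrete, here is a minimal sketch of the difference mutation testing detects; the Discount class and test names are hypothetical, not from PIT:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical class under test, used only to illustrate the idea.
class Discount {
    static int apply(int price) {
        // A mutation tool might change ">" to "<" or "- 10" to "+ 10" here.
        return price > 100 ? price - 10 : price;
    }
}

public class DiscountTest {

    // Executes the code (full line coverage) but cannot kill either mutation:
    // nothing here fails if the logic changes.
    @Test
    public void noAssertions() {
        Discount.apply(150);
    }

    // Kills both mutations above: flipping the operator or the sign makes
    // at least one of these assertions fail.
    @Test
    public void withAssertions() {
        assertEquals(140, Discount.apply(150));
        assertEquals(90, Discount.apply(90));
    }
}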
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
Write a test.
Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
When you have a failing test, implement the feature it is supposed to test. Now it should pass.
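As a small illustrative sketch of that sequence (the calculator and its method are hypothetical, not from the question):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Step 1: written before PriceCalculator.totalWithTax exists (or while it
    // still throws UnsupportedOperationException), so it fails first.
    @Test
    public void addsTenPercentTax() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(110, calculator.totalWithTax(100));
    }
}

// Step 2: the minimal implementation that turns the test green.
class PriceCalculator {
    int totalWithTax(int net) {
        return net + net / 10;
    }
}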
You have various options:
You probably could use some code analysis tool like Checkstyle to verify that each test has an assertion. Or alternatively use a JUnit Rule to verify this, but both are easily tricked and work only on a superficial level.
Mutation testing as Jester does it is again a technical solution which would work, and it seems @Tom_G has a tool that might work. But these tools are (in my experience) extremely slow, because they work by changing the code, running tests, and analyzing the results over and over again. So even tiny code bases take lots of time and I wouldn't even think about using it in a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so that there is almost no need to start parts of the application? Get some training for automated unit testing and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like: we can't test this because .... You need to find the real reason why the developers can't or don't want to write these tests, which might or might not be the reasons they state. Then fix those reasons and those abominations of tests will go away all on their own.
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know Major does not yet produce graphical reports but rather data (csv) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like Jacoco; the Gradle plugin provides reports on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with a fairly nice integration in the IDE.
In my experience, Jacoco has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the scores increase as the tests become more complete.
Update
To address the down-voter: perhaps a more appropriate tool to address this is PMD. It can be used in an IDE or build system. With proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.

How do I move forward with a broken test environment? [closed]

Environment:
Java
Maven
Eclipse
Spring
Jetty for development
JUnit
I just started on a new project (Yay!) but the current unit test state is a bit strange (to me). There is no way to run the tests in an automated way. Some of the tests need the server up and running to pass and so fail when run otherwise.
To make matters worse there are a large number of tests that have fallen behind and no longer pass, though they should (or they should have been changed).
The trouble is that the tests are run manually (right-click in Eclipse and run as a JUnit test), so, since no one is going to manually run everything with each change, the tests are simply written and then forgotten.
I am used to developing with all tests green from the start and I want to bring testing back into a useful state with automation.
How do I:
Mark tests to not run, with a reason (like "legacy test, needs to be updated" or "passes only with the server up").
Run different tests depending on whether the server is up.
Find some way to log test statistics for trending of testing information (not as important).
Any suggestions would be useful. Thanks.
Update: made question more specific.
Assuming you've got actual work to be getting on with, I'd advise not to try to fix all of the existing tests before doing anything else. By all means @Ignore any tests that are failing so you can work with the tests that currently pass. Maybe also try to make a note for each failing test that you ignore, so you can re-visit it when you come to work on the area of code it's trying to test.
If your tests depend on external services, you may be able to use assumeTrue() to verify that they're up before actually trying and failing the test -- this will mark the test ignored at run-time so you still get your build and as much useful information as is possible. The TestWatcher class (if you have a new enough JUnit) may help you to do this with minimal boilerplate -- we have it set up to ignore instead of failing if we can't connect, then to ignore any tests that would subsequently fail without paying the timeout penalty again.
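A minimal sketch of the assumeTrue() approach (the host, port and helper method are placeholders for whatever your tests actually depend on; the message-taking overload of assumeTrue needs a reasonably recent JUnit 4, drop the message on older versions):

import static org.junit.Assume.assumeTrue;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

import org.junit.Before;
import org.junit.Test;

public class ServerDependentTest {

    // Placeholder values -- point these at your real test server.
    private static final String HOST = "localhost";
    private static final int PORT = 8080;

    @Before
    public void requireServer() {
        // If the server is down, the test is reported as ignored/skipped
        // instead of failed, so the build result stays meaningful.
        assumeTrue("Test server not reachable, skipping", isReachable(HOST, PORT));
    }

    private static boolean isReachable(String host, int port) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 500);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    @Test
    public void talksToServer() {
        // ... real test against the running server ...
    }
}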
You can use Jenkins to automate the test for you. It can be set up to watch the source control repository, then execute a build and test it when a change is detected. This will provide a centralized place where you can look to see the state of the build and success of tests.

Does Maven Surefire execute test cases sequentially by default?

This is a follow-up to this question, which I realized when I dug deeper into my research:
Is it reasonable to suppose that the Maven Surefire plugin executes test cases sequentially by default, i.e. a test case ends before the next one starts (I'm not interested in the order)? I found that you can configure Surefire to run in parallel; does that mean sequential execution is the default behavior, and is it likely to remain so in the future?
NB: In case you were asking why I would want to force tests to run sequentially (I know, good tests should be able to run in parallel), it is because I'm working on a solution to a specific problem which involves coverage of a web application. You can read about it here.
Thank you
The answer to your question involves speculating about the future, which is usually a difficult thing. Having said that, I'd make a guess that yes, it is going to remain the default behaviour, because parallel execution of tests makes sense only for perfectly isolated tests, with all external dependencies mocked or otherwise taken care of. That is sometimes hard to achieve, especially when creating tests for old code. In such cases the decision must be left to the programmer, who alone knows whether it makes sense to employ parallelism.

Delete or comment out non-working JUnit tests?

I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the non-maintained JUnit tests. Do I:
1) Comment them out, as they appear to have reasonable, if unmaintained, business logic in them, in the hope that someone eventually uncomments and fixes them
2) Delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter forevermore
3) Track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to the long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well maintained and unit tested code base
If you use JUnit 4 you can annotate those tests with the @Ignore annotation.
If you use JUnit 3 you can just rename the tests so they don't start with "test".
Also, try to fix the tests for any functionality you are modifying, so as not to make the code mess larger.
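For example, with JUnit 4 (the test name and reason string are just illustrative):

import org.junit.Ignore;
import org.junit.Test;

public class LegacyOrderTest {

    // Skipped by the runner and reported as ignored, with the reason kept
    // right next to the test instead of in a commented-out block.
    @Ignore("Legacy test, broken since the pricing rewrite - needs updating")
    @Test
    public void calculatesLegacyDiscount() {
        // ...
    }

    // With JUnit 3 the equivalent trick is renaming the method so it no
    // longer starts with "test", e.g. testOldReport -> brokenOldReport.
}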
Follow the no-broken-windows principle and take some action towards solving the problem. If you can't fix the tests, at least:
Ignore them from the unit tests (there are different ways to do this).
Enter as many issues as necessary and assign people to fix the tests.
Then, to prevent such a situation from happening in the future, install a plug-in similar to the Hudson Game Plugin. People get assigned points during continuous integration, e.g.
-10 break the build <-- the worst
-1 break a test
+1 fix a test
etc.
Really cool tool to create a sense of responsibility about unit tests within a team.
The failing JUnit tests indicate that either
The source code under test has been worked on without the tests being maintained. In this case option 3 is definitely worth considering, or
You have a genuine failure.
Either way you need to fix/review the tests/source. Since it sounds like your job is to create the CI system and not to fix the tests, in your position I would leave a time-bomb in the tests. You can get very fancy with annotated methods in JUnit 4 (something like @IgnoreUntil(date="2010/09/16")) and a custom runner, or you can simply add an if statement to the first line of each test:
if (isBeforeTimeBomb()) {
    return;
}
Where isBeforeTimeBomb() can simply check the current date against a future date of your choosing. Then you follow the advice given by others here and notify your development team that the build is green now, but is likely to explode in X days unless the time-bombed tests are fixed.
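A possible sketch of that helper, e.g. in a shared base class for the broken tests (the class name and cut-off date are just examples):

import java.util.Calendar;
import java.util.GregorianCalendar;

public abstract class TimeBombedTest {

    // Example cut-off date (year, month, day) -- pick your own deadline.
    private static final Calendar DEADLINE =
            new GregorianCalendar(2010, Calendar.SEPTEMBER, 16);

    // Broken tests call this on their first line and return early while it is
    // true; once the deadline passes, they start failing the build again.
    protected boolean isBeforeTimeBomb() {
        return Calendar.getInstance().before(DEADLINE);
    }
}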
Comment them out so that they can be fixed later.
Generate test coverage reports (with Cobertura for example). The methods that were supposed to be covered by the tests that you commented out will then be indicated as not covered by tests.
If they compile but fail: leave them in. That will get you a good history of test improvements over time when using CI. If the tests do not compile and so break the build, comment them out and poke the developers to fix them.
This obviously does not preclude using option 3 (hitting them over the head), you should do that anyway, regardless of what you do with the tests.
You should definitely disable them in some way for now. Whether that's done by commenting, deleting (assuming you can get them back from source control) or some other means is up to you. You do not want these failing tests to be an obstacle for people trying to submit new changes.
If there are few enough that you feel you can fix them yourself, great -- do it. If there are too many of them, then I'd be inclined to use a "crowdsourcing" approach. File a bug for each failing test. Try to assign these bugs to the actual owners/authors of the tests/tested code if possible, but if that's too hard to determine then randomly selecting is fine as long as you tell people to reassign the bugs that were mis-assigned to them. Then encourage people to fix these bugs either by giving them a deadline or by periodically notifying everyone of the progress and encouraging them to fix all of the bugs.
A CI system that is permanently red is pretty worthless. The main benefit is to maintain a quality bar, and that's made much more difficult if there's no transition to mark a quality drop.
So the immediate effort should be to disable the failing tests, and create a tracking ticket/work item for each. Each of those is resolved however you do triage - if nobody cares about the test, get rid of it. If the failure represents a problem that needs to be addressed before ship, then leave the test disabled.
Once you are in this state, you can now rely on the CI system to tell you that urgent action is required - roll back the last change, or immediately put a team on fixing the problem, or whatever.
I don't know your position in the company, but if it's possible leave them in and file the problems as errors in your ticket system. Leave it up to the developers to either fix them or remove the tests.
If that doesn't work remove them (you have version control, right?) and close the ticket with a comment like 'removed failing junit tests which apparently won't be fixed' or something a bit more polite.
The point is, JUnit tests are application code and as such should work. That's what developers get paid for. If a test isn't appropriate anymore (something that doesn't exist anymore got tested), developers should signal that and remove the test.
