Environment:
Java
Maven
Eclipse
Spring
Jetty for development
JUnit
I just started on a new project (Yay!) but the current unit test state is a bit strange (to me). There is no way to run the tests in an automated way. Some of the tests need the server up and running to pass and so fail when run otherwise.
To make matters worse there are a large number of tests that have fallen behind and no longer pass, though they should (or they should have been changed).
The trouble is that the tests are run manually (right-click in Eclipse and run as a JUnit test), and since no one is going to manually run everything with each change, the tests are simply written and then forgotten.
I am used to developing with all tests green from the start and I want to bring testing back into a useful state with automation.
How do I:
Mark tests to not run, with a reason (like "legacy test, needs to be updated" or "passes only with the server up").
Run different tests depending on whether the server is up.
Log test statistics in some way, for trending of testing information (not as important).
Any suggestions would be useful. Thanks.
Update: made question more specific.
Assuming you've got actual work to be getting on with, I'd advise not trying to fix all of the existing tests before doing anything else. By all means @Ignore any tests that are failing so you can work with the tests that currently pass. Maybe also try to make a note for each failing test that you ignore, so you can re-visit it when you come to work on the area of code it's trying to test.
If your tests depend on external services, you may be able to use assumeTrue() to verify that they're up before actually trying and failing the test -- this will mark the test ignored at run-time so you still get your build and as much useful information as is possible. The TestWatcher class (if you have a new enough JUnit) may help you to do this with minimal boilerplate -- we have it set up to ignore instead of failing if we can't connect, then to ignore any tests that would subsequently fail without paying the timeout penalty again.
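A minimal sketch of both ideas, assuming JUnit 4; the localhost:8080 probe and the serverIsUp() helper are made up here to stand in for "is the dev Jetty reachable":

    import static org.junit.Assume.assumeTrue;

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    import org.junit.Ignore;
    import org.junit.Test;

    public class ServerDependentTest {

        // Hypothetical host/port -- replace with whatever your dev Jetty uses.
        private static final String HOST = "localhost";
        private static final int PORT = 8080;

        @Ignore("legacy test, needs to be updated")
        @Test
        public void legacyBehaviour() {
            // skipped, with the reason visible in the test report
        }

        @Test
        public void talksToRunningServer() {
            // Marked ignored at run-time (rather than failed) when the server is down.
            assumeTrue(serverIsUp());
            // ... test code that needs the server goes here ...
        }

        private boolean serverIsUp() {
            Socket socket = new Socket();
            try {
                socket.connect(new InetSocketAddress(HOST, PORT), 500);
                return true;
            } catch (IOException e) {
                return false;
            } finally {
                try {
                    socket.close();
                } catch (IOException ignored) {
                }
            }
        }
    }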
You can use Jenkins to automate the test for you. It can be set up to watch the source control repository, then execute a build and test it when a change is detected. This will provide a centralized place where you can look to see the state of the build and success of tests.
I upgraded my project from Java 1.5 to Java 1.8. My tests pass in the Eclipse JUnit runner, and they also pass when I run individual tests using "mvn -Dtest=xxxx clean test", but when I run "mvn clean install" the tests fail. Any ideas?
It is impossible to help answer your specific question without more detail; however, here is some general guidance on things to check.
It seems likely that one or more of the individual tests are not properly initializing test fixtures or cleaning up after themselves. The earlier tests are changing the environment for the tests that follow. Tests that run after one of these polluting tests do not start with clean, properly initialized test data and they fail. When run individually, the test data is initialized and the formerly failing tests pass.
"Environment" could mean test class variables, cache, database, environment variables... etc.
When this situation happens, do not immediately assume that the tests are broken without a review of the code under test. Depending on what the code under test is doing, the failing test may be valid, pointing out a problem with initialization or proper cleanup in the code itself. For these cases, the tests have done their job!
Also, keep in mind that different JVMs can run tests in a different order - within classes, and between classes. Your test classes should not assume that tests will run in a specific order, thus each should be properly isolated from one another.
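A minimal sketch of that kind of isolation, assuming JUnit 4; AccountService and its methods are hypothetical stand-ins for whatever your tests share:

    import static org.junit.Assert.assertEquals;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class AccountServiceTest {

        // AccountService is hypothetical; the point is that every test builds
        // its own fixture instead of relying on whatever a previous test left behind.
        private AccountService service;

        @Before
        public void setUp() {
            service = new AccountService();   // fresh, known state for each test
        }

        @After
        public void tearDown() {
            service.shutdown();               // release resources so nothing leaks into the next test
        }

        @Test
        public void depositIncreasesBalance() {
            service.deposit(10);
            assertEquals(10, service.balance());
        }

        @Test
        public void newAccountStartsEmpty() {
            // Passes regardless of whether depositIncreasesBalance() ran first.
            assertEquals(0, service.balance());
        }
    }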
I'm currently trying to figure out why some integration tests only fail on Linux machines (and I know that test order affecting test results is a bad thing).
When I run the maven target on Windows machines, the order of the test classes is pretty much preserved between runs and across two different machines.
When I run the maven target on Linux machines, the order of the test classes differs between Linux machines (I did not check across builds).
How does maven determine the order of test classes that will run?
EDIT: I am not trying to control the order of the tests; I am trying to determine how maven decides what order to run them in, which is not answered in How do I control the order of execution of tests in Maven?
There is no guarantee about the order in which tests will run.
There should be no need for tests to run in a particular order. If you need the tests to run in order, you are not writing proper tests.
You should be able to run any test on its own, at any time. The reason for this is that if, for example, you create an object in one test and use it in another test, then whenever the creation test fails, the other test will fail by default.
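A made-up illustration of that anti-pattern (Widget is hypothetical); the second test only works if the first one has already run and passed:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class OrderDependentTest {

        // Shared mutable state between tests -- this is the problem.
        private static Widget widget;

        @Test
        public void createWidget() {
            widget = new Widget("demo");
            assertEquals("demo", widget.name());
        }

        @Test
        public void useWidget() {
            // Throws NullPointerException if createWidget() did not run first, so this
            // test's result depends on execution order, not on the code under test.
            // Each test should create its own Widget (e.g. in an @Before method) instead.
            assertEquals("demo", widget.name());
        }
    }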
One of the problems of a team lead is that people on the team (sometimes even including myself) often create JUnit tests without any testing functionality.
It's easily done since the developers use their JUnit test as a harness to launch the part of the application they are coding, and then either deliberately or forgetfully just check it in without any assert tests or mock verifies.
Then later it gets forgotten that the tests are incomplete, yet they pass and produce great code coverage. Running up the application and feeding data through it will create high code coverage stats from Cobertura or Jacoco and yet nothing is tested except its ability to run without blowing up - and I've even seen that worked-around with big try-catch blocks in the test.
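For concreteness, a hypothetical example of the kind of "test" being described (ReportGenerator and its generate() method are made up): it drives plenty of code, so coverage looks great, but it can never fail:

    import org.junit.Test;

    public class ReportGeneratorTest {

        @Test
        public void generateReport() {
            try {
                // Exercises the code under test, which pushes up the coverage numbers...
                ReportGenerator generator = new ReportGenerator();
                generator.generate("2010-Q3");
                // ...but there is no assert and no mock verify, so nothing is actually checked.
            } catch (Exception e) {
                // Swallowed -- even an exception cannot make this test fail.
            }
        }
    }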
Is there a reporting tool out there which will test the tests, so that I don't need to review the test code so often?
I was temporarily excited to find Jester which tests the tests by changing the code under test (e.g. an if clause) and re-running it to see if it breaks the test.
However this isn't something you could set up to run on a CI server - it requires set-up on the command line, can't run without showing its GUI, only prints results onto the GUI and also takes ages to run.
PIT is the standard Java mutation tester. From their site:
Mutation testing is conceptually quite simple.
Faults (or mutations) are automatically seeded into your code, then your tests are run. If your tests fail then the mutation is killed, if your tests pass then the mutation lived.
...
Traditional test coverage (i.e. line, statement, branch, etc.) measures only which code is executed by your tests. It does not check that your tests are actually able to detect faults in the executed code. It is therefore only able to identify code that is definitely not tested.
The most extreme example of the problem are tests with no assertions. Fortunately these are uncommon in most code bases. Much more common is code that is only partially tested by its suite. A suite that only partially tests code can still execute all its branches (examples).
As it is actually able to detect whether each statement is meaningfully tested, mutation testing is the gold standard against which all other types of coverage are measured.
The quality of your tests can be gauged from the percentage of mutations killed.
It has a corresponding Maven plugin to make it simple to integrate as part of a CI build. I believe the next version will also include proper integration with Maven site reports.
Additionally, the creator/maintainer is pretty active here on StackOverflow, and is good about responding to tagged questions.
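As a small, made-up illustration of what PIT reports (Discount and the test are invented for this example): the test below passes and covers every line, but if PIT's conditional-boundary mutator turns >= into >, the test still passes, so the mutant survives and the untested boundary is flagged.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountTest {

        // Hypothetical code under test, inlined here for brevity.
        static class Discount {
            static int percentFor(int orderTotal) {
                return orderTotal >= 100 ? 10 : 0;
            }
        }

        @Test
        public void largeOrdersGetDiscount() {
            // Never exercises the boundary at exactly 100, so the ">= to >" mutant is not killed.
            assertEquals(10, Discount.percentFor(150));
        }
    }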
As far as possible, write each test before implementing the feature or fixing the bug the test is supposed to deal with. The sequence for a feature or bug fix becomes:
Write a test.
Run it. At this point it will fail if it is a good test. If it does not fail, change, replace, or add to it.
When you have a failing test, implement the feature it is supposed to test. Now it should pass.
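A tiny, made-up walk-through of that sequence for a hypothetical slugify() utility (all names are illustrative only):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Step 1: write the test first. Until Slugs.slugify() is implemented, this test
    // fails (or does not even compile) -- which is exactly what step 2 should show.
    public class SlugsTest {

        @Test
        public void lowercasesAndReplacesSpaces() {
            assertEquals("hello-world", Slugs.slugify("Hello World"));
        }
    }

    // Step 3: the smallest implementation that makes the failing test pass.
    class Slugs {
        static String slugify(String input) {
            return input.trim().toLowerCase().replaceAll("\\s+", "-");
        }
    }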
You have various options:
You probably could use a code analysis tool like Checkstyle to verify that each test has an assertion, or alternatively use a JUnit Rule to verify this, but both are easily tricked and work only on a superficial level.
Mutation testing, as Jester does it, is again a technical solution that would work, and it seems @Tom_G has a tool that might work. But these tools are (in my experience) extremely slow, because they work by changing the code, running the tests, and analyzing the results over and over again. So even tiny code bases take lots of time, and I wouldn't even think about using it on a real project.
Code Reviews: such bad tests are easily caught by code reviews, and they should be part of every development process anyway.
All this still only scratches the surface. The big question you should ponder is: why do developers feel tempted to create code just to start a certain part of the application? Why don't they write tests for what they want to implement, so there is almost no need for starting parts of the application? Get some training in automated unit testing and especially TDD/BDD, i.e. a process where you write the tests first.
In my experience it is very likely that you will hear things like: we can't test this because .... You need to find the real reason why the developers can't or don't want to write these tests, which might or might not be the reasons they state. Then fix those reasons and those abominations of tests will go away all on their own.
What you are looking for is indeed mutation testing.
Regarding tool support, you might also want to look at the Major mutation framework (mutation-testing.org), which is quite efficient and configurable. Major uses a compiler-integrated mutator and gives you great control over what should be mutated and tested. As far as I know, Major does not yet produce graphical reports, but rather data (CSV) files that you can process or visualize in any way you want.
Sounds like you need to consider a coverage tool like Jacoco; the Gradle plugin provides a report on coverage. I also use the EclEmma Eclipse plugin to obtain the same results, but with a fairly nice integration in the IDE.
In my experience, Jacoco has provided acceptable numbers even when there are no-op unit tests, as it seems able to accurately determine the tested code paths. No-op tests get low or 0% coverage scores, and the scores increase as the tests become more complete.
Update
To address the down-voter: perhaps a more appropriate tool to address this is PMD. It can be used in an IDE or a build system, and with proper configuration and rule development it could be used to find these incomplete unit tests. I have used it in the past to find methods missing certain security-related annotations.
This is nearly identical to this question, with two differences.
I want to run unit tests right before commit and not all of the time (accepted answer).
The answers and comments to the question seem to indicate that the suggested plugins are no longer supported.
The basic problem I am trying to solve is simply forgetting to run unit tests for small, quick changes, and it seems it should be possible to automate.
On the question you linked to, I mentioned Infinitest. It doesn't fit your item #1, but it only reruns tests that are likely to have broken (presumably it does some kind of clever code analysis -- I don't know the details) so you might find it useful anyway. It is definitely still supported, and in fact it's now open source!
Basic problem I am trying to solve is simply forgetting to run unit tests for small quick changes and it seems it should be possible to automate.
If your tests are fast, you can run them on each save. See this blog post by Misko Hevery.
Alternatively, you could use a commit hook to run the tests before accepting the update.
I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100'ish failures I'm encountering in the non-maintained JUnit tests. Do I:
1) Comment them out as they appear to have reasonable, if unmaintained, business logic in them in the hopes that someone eventually uncomments them and fixes them
2) Delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter for evermore
3) Track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well maintained and unit tested code base
If you use JUnit 4, you can annotate those tests with the @Ignore annotation.
If you use JUnit 3, you can just rename the test methods so they don't start with test.
Also, try to fix the tests for any functionality you are modifying, so as not to make the code mess larger.
Follow the no broken window principle and take some action towards a solution of the problem. If you can't fix the tests, at least:
Ignore them in the unit test run (there are different ways to do this).
Enter as many issues as necessary and assign people to fix the tests.
Then, to prevent such a situation from happening in the future, install a plug-in similar to the Hudson Game Plugin. People get assigned points during continuous integration, e.g.:
-10 break the build <-- the worst
-1 break a test
+1 fix a test
etc.
Really cool tool to create a sense of responsibility about unit tests within a team.
The failing JUnit tests indicate that either
The source code under test has been worked on without the tests being maintained. In this case option 3 is definitely worth considering, or
You have a genuine failure.
Either way you need to fix/review the tests/source. Since it sounds like your job is to create the CI system and not to fix the tests, in your position I would leave a time bomb in the tests. You can get very fancy with annotated methods in JUnit 4 (something like @IgnoreUntil(date="2010/09/16")) and a custom runner, or you can simply add an if statement to the first line of each test:
    if (isBeforeTimeBomb()) {
        return;
    }
Where isBeforeTimeBomb() can simply check the current date against a future date of your choosing. Then you follow the advice given by others here and notify your development team that the build is green now, but is likely to explode in X days unless the timebombed tests are fixed.
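One possible sketch of that helper, assuming a plain static utility class; the class name and the cutoff date below are just examples:

    import java.util.Calendar;
    import java.util.GregorianCalendar;

    public final class TimeBomb {

        // Example deadline -- after this date the timebombed tests start running (and failing) again.
        private static final Calendar CUTOFF = new GregorianCalendar(2010, Calendar.SEPTEMBER, 16);

        private TimeBomb() {
        }

        public static boolean isBeforeTimeBomb() {
            return Calendar.getInstance().before(CUTOFF);
        }
    }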
Comment them out so that they can be fixed later.
Generate test coverage reports (with Cobertura for example). The methods that were supposed to be covered by the tests that you commented out will then be indicated as not covered by tests.
If they compile but fail: leave them in. That will get you a good history of test improvements over time when using CI. If the tests do not compile but break the build, comment them out and poke the developers to fix them.
This obviously does not preclude using option 3 (hitting them over the head), you should do that anyway, regardless of what you do with the tests.
You should definitely disable them in some way for now. Whether that's done by commenting, deleting (assuming you can get them back from source control) or some other means is up to you. You do not want these failing tests to be an obstacle for people trying to submit new changes.
If there are few enough that you feel you can fix them yourself, great -- do it. If there are too many of them, then I'd be inclined to use a "crowdsourcing" approach. File a bug for each failing test. Try to assign these bugs to the actual owners/authors of the tests/tested code if possible, but if that's too hard to determine then randomly selecting is fine as long as you tell people to reassign the bugs that were mis-assigned to them. Then encourage people to fix these bugs either by giving them a deadline or by periodically notifying everyone of the progress and encouraging them to fix all of the bugs.
A CI system that stays red is pretty worthless. The main benefit is to maintain a quality bar, and that's made much more difficult if there's no transition to mark a quality drop.
So the immediate effort should be to disable the failing tests, and create a tracking ticket/work item for each. Each of those is resolved however you do triage - if nobody cares about the test, get rid of it. If the failure represents a problem that needs to be addressed before ship, then leave the test disabled.
Once you are in this state, you can now rely on the CI system to tell you that urgent action is required - roll back the last change, or immediately put a team on fixing the problem, or whatever.
I don't know your position in the company, but if it's possible leave them in and file the problems as errors in your ticket system. Leave it up to the developers to either fix them or remove the tests.
If that doesn't work remove them (you have version control, right?) and close the ticket with a comment like 'removed failing junit tests which apparently won't be fixed' or something a bit more polite.
The point is, JUnit tests are application code and as such should work. That's what developers get paid for. If a test isn't appropriate anymore (it tests something that no longer exists), developers should signal that and remove the test.