I have a set of JUnit tests that run automatically on my build server (Jenkins).
I run more than 500 tests. Most of them show up in the test results view with the correct package value.
Example: results for com.test.app.RollingArchiveTest
But I have 8 tests that have junit.framework prepended to them.
So it would give: junit.framework.com.test.app.RollingArchiveTest
What is really strange is that I see both behaviors in tests that belong to the same package. Some classes are prepended and some are not.
I looked at the code and found nothing really obvious. The tests all run using the same command, so I would not expect any difference there.
I could not really find any information about that on the web.
Would you have any clue what could cause this?
I am not sure if it is relevant, but all the test cases for classes that have junit.framework prepended to them are skipped.
Thanks
OK, digging deeper I realized that all of these classes use an Assume statement.
When this assumption fails in the @BeforeClass method, I end up with junit.framework.TestSuite prepended.
So the solution is to avoid assuming anything in @BeforeClass.
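For reference, a minimal sketch of the pattern that triggers this (the system property checked here is made up for illustration):

import org.junit.Assume;
import org.junit.BeforeClass;
import org.junit.Test;

public class RollingArchiveTest {

    @BeforeClass
    public static void checkPreconditions() {
        // If this assumption fails, the whole class is skipped and, as observed
        // above, the report wraps it in junit.framework.TestSuite.
        Assume.assumeTrue(Boolean.getBoolean("archive.tests.enabled")); // hypothetical flag
    }

    @Test
    public void rollsOverAtConfiguredSize() {
        // ...
    }
}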
I'd like to mark unit-tested methods as tested in the Java code, so that a compiler error occurs if the corresponding test method is missing.
Is there a way to do that? I couldn't find a satisfying solution.
Of course it's possible: you can write your own post-processor that parses your code, finds the marked methods, runs the tests, gets the coverage results, compares them with the marked methods and fails the build if something is wrong.
But:
It's a loooot of work.
The benefit is very small. Why? Because what matters is not the coverage itself but the quality of the tests. You could even write a test generator that covers all or most of your code automatically, but it would have no value. A good test checks for specific behaviour, and only the devs know what behaviour is expected.
The quality of the tests is usually covered by code reviews. On the pull request screen you can also display the stats of the pull request, i.e. code coverage etc., but I'm not sure if automating it further is worth the effort.
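If you do want a compile-time check anyway, here is a hedged sketch of a minimal annotation processor for a made-up @Tested marker. The annotation, its package and the assumption that the named test classes are resolvable during compilation are all mine; it only verifies that the test class exists, nothing about coverage or test quality.

package com.example;

import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Made-up marker annotation (would normally live in its own file);
// "by" names the fully qualified test class.
@Target(ElementType.METHOD)
@interface Tested {
    String by();
}

@SupportedAnnotationTypes("com.example.Tested")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class TestedProcessor extends AbstractProcessor {

    @Override
    public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
        for (TypeElement annotation : annotations) {
            for (Element marked : roundEnv.getElementsAnnotatedWith(annotation)) {
                Tested tested = marked.getAnnotation(Tested.class);
                // Assumes the test sources are on the same compilation path.
                if (processingEnv.getElementUtils().getTypeElement(tested.by()) == null) {
                    // An ERROR diagnostic makes javac, and therefore the build, fail.
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.ERROR,
                            "No test class " + tested.by() + " found for " + marked.getSimpleName(),
                            marked);
                }
            }
        }
        return false;
    }
}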
For a decent-sized open source project where developers come and go, someone may fix a bug without realizing that someone else a while back committed a disabled unit test (à la @Ignore). We'd like to find the passing tests that are disabled so we can enable them and update the bug tracker, CC list, and anything else downstream.
What is the best way to occasionally run all @Ignore'd tests and identify the ones that are now passing? We are using Java 1.6 with JUnit 4, building our project with Ant and transitioning to Gradle. We use Jenkins for CI.
A few ideas:
Permanently replace all of our @Ignore annotations with a conditional ignore (a rough sketch of this idea follows after this list)
http://www.codeaffine.com/2013/11/18/a-junit-rule-to-conditionally-ignore-tests/
Run a custom JUnit 4 class runner that changes the behavior of @Ignore.
https://stackoverflow.com/a/42520871
Temporarily comment out all @Ignore annotations so that they run. However, we'd need a way to negate the failures.
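For illustration, idea 1 might look roughly like this sketch. This is not the API of the rule linked above; the @RunIfProperty annotation and the property names are made up, and Assume.assumeTrue(String, boolean) needs JUnit 4.11+.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.Assume;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

// Replaces a hard @Ignore: the test is skipped unless the named system
// property is set to true, so it automatically "comes back" when enabled.
public class ConditionalIgnoreRule implements TestRule {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface RunIfProperty {
        String value();
    }

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                RunIfProperty condition = description.getAnnotation(RunIfProperty.class);
                if (condition != null) {
                    // A failed assumption reports the test as skipped, not failed.
                    Assume.assumeTrue("System property " + condition.value() + " not enabled",
                            Boolean.getBoolean(condition.value()));
                }
                base.evaluate();
            }
        };
    }
}

It would be wired in with @Rule public ConditionalIgnoreRule conditionalIgnore = new ConditionalIgnoreRule(); and @RunIfProperty("run.quarantined") in place of @Ignore, which of course still means touching every affected test once.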
Sorry, this is not a solution, but rather another alternative that has worked for me:
My key point was not to modify the existing (1000s of) unit tests. So no broad code changes, no new annotations, and certainly nothing temporary.
What I did was override the JUnit @Ignore detection and make it conditional, via classpath prepends: check in a separate control file whether that test/class is listed or disabled. This is based on package/FQCN/method names and regexp patterns. If covered, the test is run even though it still has @Ignore in the unchanged original JUnit test source.
Log the outcome, amend the control file. Rinse and repeat.
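As a rough sketch of the underlying idea (not the classpath-prepend mechanism itself), a runner that re-enables @Ignore'd methods matching a pattern might look like this; the force.ignored property is made up, and the isIgnored() hook needs JUnit 4.12+.

import java.util.regex.Pattern;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Runs @Ignore'd test methods whose "Class#method" name matches a regexp,
// e.g. -Dforce.ignored='com\.test\.app\..*'
public class ForceIgnoredRunner extends BlockJUnit4ClassRunner {

    private final Pattern forcePattern;

    public ForceIgnoredRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
        String regexp = System.getProperty("force.ignored", "");
        forcePattern = regexp.isEmpty() ? null : Pattern.compile(regexp);
    }

    @Override
    protected boolean isIgnored(FrameworkMethod method) {
        String qualifiedName = getTestClass().getJavaClass().getName() + "#" + method.getName();
        if (forcePattern != null && forcePattern.matcher(qualifiedName).matches()) {
            return false; // run it even though @Ignore is still present in the source
        }
        return super.isIgnored(method);
    }
}

Used this way the runner has to be selected with @RunWith(ForceIgnoredRunner.class), which is exactly the broad change I wanted to avoid; the classpath-prepend approach instead shadows the stock @Ignore detection so the test sources stay untouched.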
I am continuing the development of a serialization layer generator. The user enters a description of types (currently in XSD or in WSDL), and the software produces code in a certain target language (currently Java and ANSI C89) which is able to represent the described types and which is also able to serialize (turn into a byte sequence) and deserialize these values.
Generating code is tricky (I mean, writing code is hard; writing code that writes code is writing code to do a hard thing, which is a whole new land of hardness :) ). Thus, in the project which preceded my master's thesis, we decided that we wanted some system tests in place.
These system tests know a type and a number of pairs of values and byte sequences. In order to execute a system test in a certain language, the type is run through the system, resulting in code as described above. This code is then linked with some handwritten host code, which can read these byte-sequence/value pairs and provides functions to parse values of the given type from a string. The resulting executable is then run, the byte/value pairs are fed into it, and it is checked whether all such bindings result in the output "Y". If this is the case, then these example values for the type serialize into the previously defined byte sequences, and we can conclude that the generated code compiles and runs correctly, and thus, overall, that the part of the system handling this type is correct. This is a very good thing.
However, right now I am a bit unhappy with the current implementation. Currently, I have written a custom JUnit runner which uses quite a lot of reflection sorcery in order to read these byte/value bindings from a class's attributes. Also, the overall stack to generate the code requires a lot of boilerplate code and boilerplate classes which do little more than contain two or three strings. Even worse, it is quite hard to get good integration with all the tools which build on JUnit's Descriptions and which generate test failure reports. It is quite hard to actually debug what is happening when the Maven JUnit test runner or the Eclipse test runner gobbles up whatever error the compiler threw, just because the format of that error is different from JUnit's own assertion errors.
Even worse, a single failed test in the generated code causes the Maven build to fail. This is very annoying. I do want the Maven build to fail if a test of a different unit fails, because (for example) if a certain depth-first preorder calculation fails for some reason, everything will go haywire. However, if I just want to show someone some generated code for a type I know to be working, it is very annoying that I cannot quickly build my application because the type I am working on right now is not finished.
So, given this background, how can I get a nice automated system which checks these generation specifications? Possibilities I have considered:
A JUnit-integrated solution appears to be less than ideal, unless I can improve the integration between Maven, JUnit, my runner and everything else.
We used FitNesse earlier, but ultimately ditched it, because it caused more problems than it solved. The major issues we had were with integration into Maven and Hudson.
A solution using TextTest. I am not entirely convinced, because it mostly wants an executable, strings to put on stdin and strings to expect on stdout. Adding the whole "run the application, link with the host code and THEN run the generated executable" step seems kind of complicated.
Writing my own solution. This will of course work and do what I want. However, as usual, it will be the most time-consuming option.
So... do you see another possible way to do this that avoids writing something of my own?
You can run Maven with -Dmaven.test.skip=true. NetBeans has a way to set this automatically unless you explicitly invoke one of the commands to test the project; I don't know about Eclipse.
I'm currently building a CI build script for a legacy application. There are sporadic JUnit tests available, and I will be integrating a JUnit execution of all tests into the CI build. However, I'm wondering what to do with the 100-ish failures I'm encountering in the unmaintained JUnit tests. Do I:
1) Comment them out, as they appear to contain reasonable, if unmaintained, business logic, in the hope that someone eventually uncomments and fixes them
2) Delete them, as it's unlikely that anyone will fix them and the commented-out code will only be ignored or be clutter for evermore
3) Track down those who have left this mess in my hands and whack them over the heads with printouts of the code (which, due to the long-method smell, will be sufficiently suited to the task) while preaching the benefits of a well maintained and unit tested code base
If you use JUnit 4 you can annotate those tests with the @Ignore annotation.
If you use JUnit 3 you can just rename the tests so they don't start with test.
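A quick illustration of both mechanisms (the class and the reason text are made up):

import org.junit.Ignore;
import org.junit.Test;

public class LegacyReportTest {

    // JUnit 4: reported as skipped instead of failed, with a visible reason.
    @Ignore("Broken since the reporting refactoring -- see the corresponding ticket")
    @Test
    public void totalsMatchLedger() {
        // ...
    }

    // JUnit 3 equivalent: rename the method so it no longer starts with "test",
    // e.g. testTotalsMatchLedger() -> brokenTotalsMatchLedger(), and the
    // reflection-based runner will no longer pick it up.
}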
Also, try to fix the tests for any functionality you are modifying, so as not to make the mess larger.
Follow the "no broken windows" principle and take some action towards solving the problem. If you can't fix the tests, at least:
Exclude them from the unit test runs (there are different ways to do this).
Enter as many issues as necessary and assign people to fix the tests.
Then, to prevent such a situation from happening in the future, install a plug-in similar to the Hudson Game Plugin. People get assigned points during continuous integration, e.g.
-10 break the build <-- the worst
-1 break a test
+1 fix a test
etc.
Really cool tool to create a sense of responsibility about unit tests within a team.
The failing JUnit tests indicate that either
The source code under test has been worked on without the tests being maintained. In this case option 3 is definitely worth considering, or
You have a genuine failure.
Either way, you need to fix/review the tests/source. Since it sounds like your job is to create the CI system and not to fix the tests, in your position I would leave a time bomb in the tests. You can get very fancy with annotated methods in JUnit 4 (something like @IgnoreUntil(date="2010/09/16")) and a custom runner, or you can simply add an if statement to the first line of each test:
if (isBeforeTimeBomb()) {
return;
}
Where isBeforeTimeBomb() can simply check the current date against a future date of your choosing. Then you follow the advice given by others here and notify your development team that the build is green now, but is likely to explode in X days unless the time-bombed tests are fixed.
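A minimal sketch of such a helper, assuming Java 1.6 (hence java.util.Calendar rather than java.time) and reusing the date from the example annotation above:

import java.util.Calendar;
import java.util.Date;

public final class TimeBomb {

    private TimeBomb() {
    }

    // True until the chosen deadline; after that the guarded tests run (and fail) again.
    public static boolean isBeforeTimeBomb() {
        Calendar deadline = Calendar.getInstance();
        deadline.clear();
        deadline.set(2010, Calendar.SEPTEMBER, 16);
        return new Date().before(deadline.getTime());
    }
}

Each time-bombed test then starts with the guard shown above, e.g. if (TimeBomb.isBeforeTimeBomb()) { return; }.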
Comment them out so that they can be fixed later.
Generate test coverage reports (with Cobertura for example). The methods that were supposed to be covered by the tests that you commented out will then be indicated as not covered by tests.
If they compile but fail: leave them in. That will give you a good history of test improvements over time when using CI. If the tests do not even compile and therefore break the build, comment them out and poke the developers to fix them.
This obviously does not preclude using option 3 (hitting them over the head); you should do that anyway, regardless of what you do with the tests.
You should definitely disable them in some way for now. Whether that's done by commenting, deleting (assuming you can get them back from source control) or some other means is up to you. You do not want these failing tests to be an obstacle for people trying to submit new changes.
If there are few enough that you feel you can fix them yourself, great -- do it. If there are too many of them, then I'd be inclined to use a "crowdsourcing" approach. File a bug for each failing test. Try to assign these bugs to the actual owners/authors of the tests/tested code if possible, but if that's too hard to determine then assigning them randomly is fine, as long as you tell people to reassign any bugs that were mis-assigned to them. Then encourage people to fix these bugs, either by giving them a deadline or by periodically notifying everyone of the progress and encouraging them to fix all of the bugs.
A CI system that is steady red is pretty worthless. The main benefit is to maintain a quality bar, and that's made much more difficult if there's no transition to mark a quality drop.
So the immediate effort should be to disable the failing tests, and create a tracking ticket/work item for each. Each of those is resolved however you do triage - if nobody cares about the test, get rid of it. If the failure represents a problem that needs to be addressed before ship, then leave the test disabled.
Once you are in this state, you can now rely on the CI system to tell you that urgent action is required - roll back the last change, or immediately put a team on fixing the problem, or whatever.
I don't know your position in the company, but if it's possible, leave them in and file the problems as errors in your ticket system. Leave it up to the developers to either fix them or remove the tests.
If that doesn't work, remove them (you have version control, right?) and close the ticket with a comment like 'removed failing JUnit tests which apparently won't be fixed', or something a bit more polite.
The point is, JUnit tests are application code and as such should work. That's what developers get paid for. If a test isn't appropriate anymore (because something that no longer exists was being tested), developers should signal that and remove the test.
I have written JUnit tests for my class, and would like it to tell me if there is any part of my code that is not unit tested. Is there a way to do this?
Yes, coverage tools like Cobertura or EMMA.
They create reports that show every line in the source code and whether it was executed or not (as well as aggregated statistics).
Of course, they can only show you if the code was run. There is no way to tell if the unit test contained assertions to confirm that the result was correct.
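For example, a test like the following reports the calculation code as fully covered while verifying nothing (the InvoiceCalculator class is a made-up stand-in for production code):

import org.junit.Test;

// Stand-in for real production code.
class InvoiceCalculator {
    double calculateTotal(int items) {
        return items * 9.99;
    }
}

public class InvoiceCalculatorTest {

    // Executes every line of calculateTotal(), so the coverage report marks it
    // as covered, but without an assertion nothing confirms the result is right.
    @Test
    public void runsCalculationWithoutCheckingAnything() {
        new InvoiceCalculator().calculateTotal(42);
    }
}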
You need a code coverage tool. See here (http://java-source.net/open-source/code-coverage) for some options.
If you look at the first one, I think it does what you need:
Cobertura is a free Java tool that calculates the percentage of code accessed by tests. It can be used to identify which parts of your Java program are lacking test coverage. It is based on jcoverage. Features of Cobertura: it can be executed from Ant or from the command line.
If you use Eclipse, you can also try EclEmma, which shows you which lines of source were covered by your test. This is sometimes more useful than running a coverage tool like Cobertura because you can run a single test from inside Eclipse and then get immediate feedback on what was covered.
Your headline and your actual question differ. The tools mentioned in the other answers can tell you which parts of the code were not tested (i.e. not executed at all). "Making sure that all parts of the code are unit tested" is a different thing. The coverage tools can tell you whether all lines/instructions have been executed, but they don't guarantee that everything is tested functionally (all combinations of data, all execution paths, etc.). This requires some brain power.
In my opinion, test coverage often gives a false feeling of safety. E.g. testing trivial getters increases coverage a lot but is rather useless.
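For instance, a test like this raises the coverage figure without saying anything about the behaviour that actually matters (the Person bean is a made-up stand-in):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Trivial bean standing in for production code.
class Person {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}

public class PersonTest {

    // Covers getName() and setName(), but only restates the bean contract.
    @Test
    public void getNameReturnsWhatWasSet() {
        Person person = new Person();
        person.setName("Alice");
        assertEquals("Alice", person.getName());
    }
}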
If you are using IntelliJ, there is a button titled "Run With Coverage".