The goal is to provide different variations of the very same test methods (just like parameterized testing). The problem is that the actual number of necessary test runs is not known in advance; it is discovered on the go.
The original idea was to create subelements (children) of the test by calling addChild on the Description object of the test method.
When running the code, the Eclipse JUnit view shows all the discovered and executed tests under "Unrooted Tests". The tests are described using the description.getTestClass() method and the test method's Description instance.
Can anyone explain what is happening and, if possible, give a solution?
I extend BlockJUnit4ClassRunner, and I add children by calling Description.addChild.
The test runner Eclipse uses is a custom implementation called RemoteTestRunner. From its source code I learned that the tree of test cases is built not by the listener mechanism but from the actual Runner structure (getDescription()) and the actual children: not the children of the Description, but the children of each Runner instance.
All in all, the code was harder to read and understand than it should be.
So "Unrooted Tests" are just tests that were reported through the listener mechanism but could not be matched to the actual Runner structure.
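To illustrate that conclusion, here is a minimal sketch of a runner whose repetitions exist as real children of the Runner rather than as children merely added to a Description. The fixed REPEATS count and the naming scheme are assumptions made only for this sketch; the point is that BlockJUnit4ClassRunner builds its tree from computeTestMethods(), so each repetition becomes a real, uniquely named child and no longer ends up under "Unrooted Tests".

import java.util.ArrayList;
import java.util.List;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Sketch: repeat each test method by giving the runner one real child
// per repetition, each with a distinct name, so the Description tree
// that Eclipse builds matches the tests that actually get reported.
public class RepeatingRunner extends BlockJUnit4ClassRunner {

    private static final int REPEATS = 3; // assumption: a fixed count, for illustration

    public RepeatingRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        List<FrameworkMethod> children = new ArrayList<FrameworkMethod>();
        for (FrameworkMethod method : super.computeTestMethods()) {
            for (int i = 0; i < REPEATS; i++) {
                children.add(new IndexedMethod(method, i));
            }
        }
        return children;
    }

    // Renames each repetition so that its Description is unique.
    private static final class IndexedMethod extends FrameworkMethod {
        private final String name;

        IndexedMethod(FrameworkMethod method, int index) {
            super(method.getMethod());
            this.name = method.getName() + "[" + index + "]";
        }

        @Override
        public String getName() {
            return name;
        }

        @Override
        public boolean equals(Object obj) {
            return obj instanceof IndexedMethod
                    && super.equals(obj)
                    && ((IndexedMethod) obj).name.equals(name);
        }

        @Override
        public int hashCode() {
            return 31 * super.hashCode() + name.hashCode();
        }
    }
}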
Background:
Our test suite uses an in-house developed test framework based on JUnit. Several of our tests use JUnit's Parameterized functionality to test a variety of test data, e.g. layout tests (we use the Galen Framework), where we want to verify correct behaviour at different window resolutions.
Our TestCaseRule, which is applied to all of our tests in a base class, saves failed tests into a database, from where we can browse the failures via a web interface.
Problem:
JUnit's Parameterized Runner creates one fail instance for each failed test + parameter combination.
That means, if I have a class with, for instance, 3 tests, and each one runs 6 times (6 parameters), then if all tests fail I get 6x3=18 fails in my reporting instead of the desired 3. Our reporting thereby takes on an entirely different meaning and becomes useless...
Desired:
I have googled a lot but unfortunately could not find anyone facing the same issue. The best solution for me would be to get JUnit to merge all fails per method and concatenate the stack traces, so I could ensure that one method results in at most 1 fail. I also do not want to skip the remaining parameter runs, so I don't miss fails which would be produced by different parameters.
I experimented with reflection: fetching the Parameters data in a @Before method, iterating through the test method, injecting the parameters and finally preventing the actual test from being executed. But it was quite hacky and not an acceptable solution, because of its lack of proper test scope.
I am thankful for all help attempts!
Recently a new concept of Theories was added to JUnit (since v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make your test method parameterized, and declare an array of parameters marked with the @DataPoints annotation somewhere in the same class.
JUnit will sequentially run your parameterized test method, passing parameters retrieved from the @DataPoints one after another, but only until the first such invocation fails (for any reason).
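For illustration, a minimal theory might look like this (the class name and the data points are invented for the sketch):

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheory {

    @DataPoints
    public static int[] numbers = { 0, 1, -2, 7 };

    // Runs once per data point, but stops at the first failing invocation.
    @Theory
    public void squareIsNonNegative(int n) {
        assertTrue(n * n >= 0);
    }
}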
The concept seems to be very similar to @DataProvider from TestNG, but when we use data providers, all the scenarios are run regardless of their execution results. And that's useful, because you can see how many scenarios work/don't work and you can fix your program more effectively.
So, I wonder what's the reason not to execute a @Theory-marked method for every @DataPoint? (It appears not so difficult to inherit from the Theories runner and make a custom runner which ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that
the test does too much, compared to what a unit test ought to do.
Usually this means either that the test is really a
functional/acceptance/customer test or, if it is a unit test, then it
is too big a unit test.
JUnit is designed to work best with a number of small tests. It
executes each test within a separate instance of the test class. It
reports failure on each test. Shared setup code is most natural when
sharing between tests. This is a design decision that permeates JUnit,
and when you decide to report multiple failures per test, you begin to
fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design
problem. Kent Beck is fond of saying in this case that "there is an
opportunity to learn something about your design." We would like to
see a pattern language develop around these problems, but it has not
yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit error collector rule:
The ErrorCollector rule allows execution of a test to continue after
the first problem is found (for example, to collect all the incorrect
rows in a table, and report them all at once)
For example, you can write a test like this:
import static org.hamcrest.CoreMatchers.*;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = "abc"; // illustrative values; the original left these out
        String y = "abc";
        // both checks run even if the first one fails;
        // the collected failures are reported together at the end
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The ErrorCollector uses Hamcrest matchers. Depending on your preferences, that is a plus or a minus.
AFAIK, the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
A Theory is wrong if a single test in it is wrong, according to the definition of a Theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".
I'm writing a little library for movies for myself. It's partly for learning TDD. Now I have a problem I can't solve.
The code is in here https://github.com/hasanen/MovieLibrary/blob/master/movielibrary-core/src/test/java/net/pieceofcode/movielibrary/service/MovieLibraryServiceITC.java
The problem is that when I run the whole class (right-click on the class name in Eclipse), the second test fails because the removal doesn't succeed. But when I right-click the method (getMovieGenres_getAllGenresAndRemoveOne_returnsTwoGenreAndIdsAreDifferent) and choose Run As > JUnit Test, it works.
I don't necessarily need the fix, but at least some advice on how to find out why JUnit is acting like this.
From the way you explain the problem, it appears to be in the setUp method. The setUp method runs before every test case invocation. This is the general sequence:
1. Add three movies.
2. Test that three movies exist.
3. Add three movies.
4. Remove movie item #1.
Since steps 1-4 work when run individually, the problem is in step 3: either it swallows some exception or it mutates the underlying object (maybe it changes the sequence). Without knowing how addMovie changes the underlying object, it's hard to tell.
Something outside your test class (likely a superclass) is creating movieLibraryService, and it's not being recreated as often as it needs to be for independent testing.
If you add the line
movieLibraryService = new MovieLibraryService();
at the top of your testSetUp() method, the service will be properly reset before each test method runs, and the tests will likely pass.
As it is, I suspect you're getting a failure on the assertions about size, because the size becomes 6 instead of 3.
Alternatively, you could add a teardown method (annotated with @After) which removes the contents of the movie library, so that it always starts empty.
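A sketch of that teardown, assuming the service offers some way to clear its contents (removeAllMovies() is a hypothetical name standing in for whatever your service provides):

import org.junit.After;

// Inside the test class; removeAllMovies() is hypothetical and stands
// for whatever method the service offers to empty the library.
@After
public void tearDown() {
    movieLibraryService.removeAllMovies();
}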
IMHO the problem is that your test isn't a real unit test but an integration test. So while testing your service you're testing all the layers it uses. I recommend using mocks for the lower-layer dependencies (EasyMock or something similar) and using integration tests only for your repository layer. This way you can avoid persistence-layer influences while testing the service layer.
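A sketch of that approach with EasyMock; MovieRepository, findAllGenres() and the constructor injection are assumptions about the service's lower layer, made only for illustration:

import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;

import java.util.Arrays;

import org.junit.Test;

public class MovieLibraryServiceTest {

    @Test
    public void getMovieGenres_returnsGenresFromRepository() {
        // the repository layer is mocked, so no persistence code runs
        MovieRepository repository = createMock(MovieRepository.class);
        expect(repository.findAllGenres()).andReturn(Arrays.asList("Drama", "Comedy"));
        replay(repository);

        MovieLibraryService service = new MovieLibraryService(repository);
        assertEquals(2, service.getMovieGenres().size());

        verify(repository);
    }
}

With the repository mocked, the service test passes or fails on the service's own logic alone, regardless of how often or in which order the tests run.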
Here is the problem I am facing. I have been tasked with testing the query parsing engine of a piece of software through negative testing. That is, I must write a large number of queries that will fail, and test that they do indeed fail and produce the expected error message for the particular error in the query. These are defined in an XML file. I've written a simple wrapper around the parsing of the XML document and struct-like classes for these test cases.
Now, given that I am using JUnit as the testing framework, I run into this issue: the act of running through all of these externally defined tests lives in a single test method. If a single test fails, no more will be run. Is there any way to dynamically dispatch a method to handle each of the tests as I encounter them? That way, if a test fails, the remaining ones still run, and I get a report on what did and did not fail.
The other alternative is, of course, writing all of the JUnit tests by hand. I'd like to avoid this for many reasons, one of which is that the number of tests to be run is extremely large and each test case would be 99% boilerplate code.
Thanks.
You should look into JUnit's Parameterized runner.
If I understand correctly, the input data and expected results are all defined in XML, so you don't need specific code to handle each test case?
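If so, a sketch with the Parameterized runner could look like this; QueryParser.parse() and the hard-coded sample row are assumptions standing in for your engine and your XML wrapper:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.fail;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class FailingQueryTest {

    @Parameters
    public static Collection<Object[]> data() {
        // In practice, build this collection from the XML wrapper;
        // one hard-coded row is shown here only as an illustration.
        return Arrays.asList(new Object[][] {
                { "SELECT FROM", "missing column list" },
        });
    }

    private final String query;
    private final String expectedError;

    public FailingQueryTest(String query, String expectedError) {
        this.query = query;
        this.expectedError = expectedError;
    }

    @Test
    public void queryFailsWithExpectedMessage() {
        try {
            QueryParser.parse(query); // hypothetical entry point of the engine under test
            fail("Expected a parse failure for: " + query);
        } catch (RuntimeException e) { // assumption: the parser throws an unchecked exception
            assertEquals(expectedError, e.getMessage());
        }
    }
}

Each row then runs as its own test, so one failing query no longer stops the rest.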
If you use JUnit 4, you could write your own Runner implementation. You could either implement Runner directly or extend ParentRunner. All you need to implement is one method that returns a description of the tests and another method that runs them.
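A sketch of the direct-Runner route, under the assumption that QueryCase, loadCases() and checkCase() stand in for your existing XML wrapper and assertion logic:

import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.Runner;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;

// Sketch of a Runner that turns each XML-defined case into its own test,
// so one failure does not stop the remaining cases.
public class XmlQueryRunner extends Runner {

    private final Class<?> testClass;
    private final List<QueryCase> cases; // QueryCase: hypothetical struct from the XML wrapper

    public XmlQueryRunner(Class<?> testClass) {
        this.testClass = testClass;
        this.cases = loadCases(); // hypothetical: parse the XML file into cases
    }

    @Override
    public Description getDescription() {
        Description suite = Description.createSuiteDescription(testClass);
        for (QueryCase c : cases) {
            suite.addChild(describe(c));
        }
        return suite;
    }

    @Override
    public void run(RunNotifier notifier) {
        for (QueryCase c : cases) {
            Description description = describe(c);
            notifier.fireTestStarted(description);
            try {
                checkCase(c); // hypothetical: assert the query fails with the expected message
            } catch (Throwable t) {
                notifier.fireTestFailure(new Failure(description, t));
            } finally {
                notifier.fireTestFinished(description);
            }
        }
    }

    private Description describe(QueryCase c) {
        return Description.createTestDescription(testClass, c.name()); // c.name(): hypothetical
    }
}

Annotating an (otherwise empty) test class with @RunWith(XmlQueryRunner.class) then makes each case show up individually in the IDE.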
I am fairly new to Java. I have constructed a single JUnit test class, and inside this file are a number of test methods. When I run this class (in NetBeans), it runs each test method in the class in order.
Question 1: How can I run only a specific sub-set of the test methods in this class?
(Potential answer: write @Ignore above @Test for the tests I wish to ignore. However, if I want to indicate which test methods to run rather than which to ignore, is there a more convenient way of doing this?)
Question 2: Is there an easy way to change the order in which the various test methods are run?
Thanks.
You should read about TestSuites. They allow you to group and order your unit test methods. Here's an extract from this article:
"JUnit test classes can be rolled up
to run in a specific order by creating
a Test Suite.
EDIT: Here's an example showing how simple it is:
public static Test suite() {
    TestSuite suite = new TestSuite("Sample Tests");
    suite.addTest(new SampleTest("testmethod3"));
    suite.addTest(new SampleTest("testmethod5"));
    return suite;
}
This answer tells you how to do it. Randomizing the order in which the tests run is a good idea!
As the comment from dom farr states, each unit test should be able to run in isolation. There should be no residue and no preconditions before or after a test run. All your unit tests should pass when run in any order, or as any subset.
It's not a terrible idea to generate a map of test class --> list of tests and then execute all the tests in random order.
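For the randomization idea, a small sketch (assuming JUnit 4): a runner that shuffles the methods of each class on every run.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Runs the @Test methods of a class in random order, which helps to
// flush out hidden dependencies between tests.
public class ShuffledRunner extends BlockJUnit4ClassRunner {

    public ShuffledRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // copy first: the list returned by the superclass may be unmodifiable
        List<FrameworkMethod> methods =
                new ArrayList<FrameworkMethod>(super.computeTestMethods());
        Collections.shuffle(methods);
        return methods;
    }
}

Annotate a test class with @RunWith(ShuffledRunner.class) to use it.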
There are a number of approaches to this, depending on your specific needs. For example, you could split your test methods into separate test classes and arrange them in different test suites (which would allow methods to overlap between suites if desired). Or, a simpler solution: make your test methods normal class methods, with one test method in your class that calls them in your specific order. Do you want them to be called dynamically?
I've not been using Java that long either, but as far as I've seen there isn't a convenient way of marking methods to execute rather than to ignore. Instead, I think this can be achieved through the IDE. When I want to do that in Eclipse, I use the JUnit view to run individual tests by clicking on them. I imagine there is something similar in NetBeans.
I don't know of an easy way to reorder test execution. Eclipse has a button to rerun tests with the failing tests first, but it sounds like you want something more versatile.