JUnit dynamic method dispatch? - java

Here is the problem I am facing. I have been tasked with testing the query parsing engine of a piece of software through negative testing. That is, I must write a large number of queries that will fail, and test that they do indeed fail and produce the expected error message for the particular error in each query. These are defined in an XML file. I've written a simple wrapper around the parsing of the XML document and struct-like classes for these test cases.
Now, given that I am using JUnit as a testing framework, I'm running into this issue - the act of running through all of these externally defined tests lives in a single method. If a single test fails, then no more will be run. Is there any way to dynamically dispatch a method to handle each of the tests as I encounter them? This way, if a test fails, we can still run the remaining ones while getting a report on what did and did not fail.
The other alternative is, of course, writing all of the JUnit tests. I'd like to avoid this for many reasons, one of which is that the number of tests to be run is extremely large, and a test case is 99% boilerplate code.
Thanks.

You should look into JUnit's Parameterized annotation.
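For the XML-driven case above, a Parameterized runner lets each query/expected-message pair run as its own test, so one failure does not stop the rest. A minimal sketch; the class name, the hard-coded rows, and the commented-out parse call are placeholders (in the asker's setup, the rows would come from the XML file):

```java
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BadQueryTest {

    @Parameters
    public static Collection<Object[]> data() {
        // Real code would load these rows from the XML file.
        return Arrays.asList(new Object[][] {
            { "SELEC * FROM t", "syntax error" },
            { "SELECT FROM",    "missing column list" },
        });
    }

    private final String query;
    private final String expectedError;

    public BadQueryTest(String query, String expectedError) {
        this.query = query;
        this.expectedError = expectedError;
    }

    @Test
    public void queryFailsWithExpectedMessage() {
        // Each row runs as a separate test, so a failing row does not
        // prevent the remaining rows from running.
        // assertEquals(expectedError, parse(query).getErrorMessage());
    }
}
```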

If I understand correctly, the input data and expected results are all defined in XML, so you don't need specific code to handle each test case?
If you use JUnit4, you could write your own Runner implementation. You could either implement Runner directly or extend ParentRunner. All you need to implement is one method that returns a description of the tests, and another method that runs the tests.
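A minimal sketch of such a Runner, extending ParentRunner; the String child type and the hard-coded case names are assumptions standing in for the XML-defined test cases:

```java
import java.util.Arrays;
import java.util.List;

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.ParentRunner;
import org.junit.runners.model.InitializationError;

// Each "child" is one externally defined test case; here just a name.
public class XmlCaseRunner extends ParentRunner<String> {

    public XmlCaseRunner(Class<?> testClass) throws InitializationError {
        super(testClass);
    }

    @Override
    protected List<String> getChildren() {
        // Real code would parse the XML file and return one child per case.
        return Arrays.asList("case1", "case2");
    }

    @Override
    protected Description describeChild(String child) {
        return Description.createTestDescription(getTestClass().getJavaClass(), child);
    }

    @Override
    protected void runChild(String child, RunNotifier notifier) {
        Description description = describeChild(child);
        notifier.fireTestStarted(description);
        try {
            // Run the parse-and-check for this one case here. Because each
            // case is started and finished separately, a failure in one
            // case does not stop the remaining ones.
        } catch (Throwable t) {
            notifier.fireTestFailure(new Failure(description, t));
        } finally {
            notifier.fireTestFinished(description);
        }
    }
}

// Attach the runner to a (possibly empty) test class:
@org.junit.runner.RunWith(XmlCaseRunner.class)
class XmlCases { }
```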

Related

JUnit Parameterized - merge fails

Background:
Our test suite uses an in-house test framework based on JUnit. Several of our tests use JUnit's Parameterized functionality to run against a variety of test data, e.g. layout tests (we use the Galen Framework), where we want to verify correct behaviour at different window resolutions.
Our TestCaseRule, which is applied to all of our tests in a base class, saves failed tests to a database, from which we can browse the failures via a web interface.
Problem:
JUnit's Parameterized Runner creates one fail instance for each failed test + parameter combination.
That means, if I have a class with, for instance, 3 tests, and each one runs 6 times (6 parameters), then if all tests fail I get 6x3=18 failures in my reporting instead of the desired 3. Our reporting thereby takes on an entirely different meaning and becomes useless...
Desired:
I have googled a lot but unfortunately could not find anyone facing the same issue. The best solution for me would be to get JUnit to merge all failures per method and concatenate the stack traces, so that each method results in at most one failure. I also do not want to skip the remaining parameters after a failure, so that I don't miss failures that would be produced by different parameters.
I experimented with reflection: fetching the Parameters data in a @Before method, iterating over the test method, injecting the parameters, and finally preventing the actual test from executing. But it was quite hacky and not an acceptable solution, because it broke the test scoping.
I am thankful for all help attempts!

Does using a method interceptor in TestNG skip the remaining tests on a failure, or execute them even if one fails?

I would like to know if using this answer skips the following test methods or marks them as failures. I know that if priority is used, the following methods will run and will not be dependent. I want to know how the method interceptor orders the methods...
Thanks...
I believe what you're asking is: When using an IMethodInterceptor to order tests, will a failed test method cause subsequent tests to fail? No. Ordering tests using IMethodInterceptor does not create dependencies.
This is pretty easy to test yourself. I definitely recommend trying it out to see how it behaves in different scenarios.
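To illustrate, here is a minimal interceptor that only reorders the methods (alphabetically by name, an arbitrary choice for this sketch). It introduces no dependsOnMethods-style dependencies, so a failure in one method does not skip the ones that run after it:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

// Reorders test methods alphabetically; ordering only changes the run
// sequence, it creates no dependencies between the tests.
public class AlphabeticalInterceptor implements IMethodInterceptor {
    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods,
                                           ITestContext context) {
        return methods.stream()
                .sorted(Comparator.comparing(m -> m.getMethod().getMethodName()))
                .collect(Collectors.toList());
    }
}
```

It can be attached with @Listeners(AlphabeticalInterceptor.class) on the test class, or as a listener in the suite XML.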

TestNG - @BeforeMethod for specific methods

I'm using Spring Test with TestNG to test our DAOs, and I want to run a specific test fixture script before certain methods, with the modifications rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought 'groups' would be a fit for this, but I have since realized they're not intended for that (see this question: TestNG BeforeMethod with groups).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the methods that depend on it.
Neither is ideal; I'd like to keep my tests naturally grouped and clean, not split up for lack of alternatives.
You could add a parameter of type java.lang.reflect.Method to your @BeforeMethod. TestNG will then inject the reflection information for the current test method, including the method name, which you can use for switching.
If you add another Object parameter, you will also get the invocation parameters of the test method.
You'll find all possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
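A minimal sketch of the switching idea; the class name, the method names, and the fixture call are made up for illustration:

```java
import java.lang.reflect.Method;

import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class DaoFixtureTest {

    boolean fixtureLoaded;

    @BeforeMethod
    public void maybeLoadFixture(Method method) {
        // TestNG injects the reflection info for the test about to run,
        // so we can switch on its name (or on a custom annotation).
        if (method.getName().startsWith("withFixture")) {
            loadFixtureScript();
        }
    }

    @Test
    public void withFixtureFindsSeededRows() {
        // uses the fixture data
    }

    @Test
    public void plainQueryNeedsNoFixture() {
        // runs without the fixture
    }

    private void loadFixtureScript() {
        // placeholder for running the SQL fixture script
        fixtureLoaded = true;
    }
}
```

Switching on a custom marker annotation via method.isAnnotationPresent(...) is usually more robust than relying on name prefixes.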
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be idempotent on its own, meaning it sets up, tests, and tears down. That is a single test. However, many tests share the same set-up and tear-down methods, whereas other tests need one set-up before they all run. That is the purpose of the @Before-type tags.
If you don't like set-up and tear-down inside your test, you're more than welcome to architect your own system. But technically speaking, if certain methods require specific set-ups or tear-downs, then that really should be embodied IN the test, since it is a requirement for the test to pass. It is OK to call a set-up method, but ultimately it should be OBVIOUS that a test needs a specific set-up in order to pass. After all, if you're using specific set-ups, aren't you actually testing states rather than code?

Data Driven testing with Junit

I am new to JUnit.
Can anyone explain to me clearly what the data-driven concept is?
My other question is whether we can apply two @RunWith annotations in one JUnit class:
@RunWith(Parameterized.class)
...
and
@RunWith(Theories.class)
...
http://support.smartbear.com/viewarticle/29139/
Explains the data-driven concept in detail. In short, it involves creating different sets of data to test the code. It is mainly used to write automated test cases where a certain piece of code is run against different types of test data and checked for the desired output.
As for the second question, I don't think multiple @RunWith annotations make sense, since the annotation is a directive telling JUnit which runner to load to execute the test cases instead of the default runner built into JUnit. I haven't tried it either. Hope this offers some answer to your question.

Why does JUnit run test cases for Theory only until the first failure?

Recently a new concept of Theories was added to JUnit (since v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make your test method parametrized, and declare an array of parameters marked with the @DataPoints annotation somewhere in the same class.
JUnit will sequentially run your parametrized test method, passing parameters retrieved from the @DataPoints one after another, but only until the first such invocation fails (for any reason).
The concept seems very similar to @DataProvider from TestNG, but when we use data providers, all the scenarios are run regardless of their execution results. And that is useful, because you can see how many scenarios work or don't work, and you can fix your program more effectively.
So I wonder, what is the reason not to execute a @Theory-marked method for every data point? (It doesn't appear difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of Theories runner and made it available for a public access: https://github.com/rgorodischer/fault-tolerant-theories
In order to compare it with the standard Theories runner run StandardTheoriesBehaviorDemo then FaultTolerantTheoriesBehaviorDemo which are placed under src/test/... folder.
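The stop-at-first-failure behaviour the question describes is easy to reproduce. In this sketch (class name and values chosen arbitrarily) the theory fails at 0 and is reported as a single failed test rather than one result per data point:

```java
import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class PositiveTheory {

    @DataPoints
    public static int[] values = { 1, 2, 0, 3 };

    // The runner invokes this once per data point, but the whole theory
    // counts as one test and stops at the first failing invocation (0 here).
    @Theory
    public void everyValueIsPositive(int value) {
        assertTrue("expected positive, got " + value, value > 0);
    }
}
```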
Reporting multiple failures in a single test is generally a sign that
the test does too much, compared to what a unit test ought to do.
Usually this means either that the test is really a
functional/acceptance/customer test or, if it is a unit test, then it
is too big a unit test.
JUnit is designed to work best with a number of small tests. It
executes each test within a separate instance of the test class. It
reports failure on each test. Shared setup code is most natural when
sharing between tests. This is a design decision that permeates JUnit,
and when you decide to report multiple failures per test, you begin to
fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design
problem. Kent Beck is fond of saying in this case that "there is an
opportunity to learn something about your design." We would like to
see a pattern language develop around these problems, but it has not
yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit error collector rule:
The ErrorCollector rule allows execution of a test to continue after
the first problem is found (for example, to collect all the incorrect
rows in a table, and report them all at once)
For example, you can write a test like this:
public static class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = "ab"; // example value
        collector.checkThat(x, not(containsString("a"))); // fails, but the test keeps going
        collector.checkThat(x, containsString("b"));      // still executed
    }
}
The error collector uses Hamcrest matchers; depending on your preferences, that is a plus or not.
AFAIK, the idea is the same as with asserts, the first failure stops the test. This is the difference between Parameterized & Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
A theory is wrong if a single data point falsifies it; that follows from the definition of a theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".
