JUnit5 - execute a test method without @Test annotation - java

I have a superclass that defines @BeforeEach and @AfterEach methods. The class also has a method that should run only when a system property is set - basically @EnabledIfSystemProperty behaviour. Normal tests with the @Test annotation live in subclasses.
This works well, but the conditional test shows up as skipped in the test report, which I want to avoid. Is there any way I can run a test method on a condition but not consider it a test in normal situations?
This question is not related to inheritance of @Test annotations. The basic question is: is there any way I can run a test method on a condition but not consider it a test in normal situations?

The easiest way to achieve what you want is to check the condition manually inside the test case (just as @Paizo advised in the comments). JUnit has several features that let you skip test execution, such as the junit-ext project with its @RunIf annotation, or the special Assume clause, which forces a test to be skipped when the assumption is not met. But these features also mark the test as skipped, which is not desired in your case. Another possibility you might think about is modifying your code with the magic of reflection to add/remove annotations at runtime, but this is not possible. Theoretically I can imagine a convoluted way of using cglib to subclass your test class at runtime and manage the annotations just like Spring does, but first of all ask yourself whether it is worth it.
My personal feeling is that you're trying to fix something which is working perfectly and is not broken. A skipped test in the report is not a failed test. It's useful that you can tell from the report whether a test was executed or not.
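If you go the manual-check route, a minimal sketch could look like this (the property name integration.checks and the helper verifyIfEnabled are made up for illustration):

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;

abstract class BaseTest {

    @BeforeEach
    void setUp() { /* common setup */ }

    @AfterEach
    void tearDown() { /* common teardown */ }

    // Deliberately NOT annotated with @Test, so it never appears in any report.
    // It is a no-op unless the system property is set.
    protected void verifyIfEnabled() {
        if (Boolean.getBoolean("integration.checks")) {
            // the conditional verification logic goes here
        }
    }
}

A subclass test then simply calls verifyIfEnabled() wherever the conditional check is relevant.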
Hope it helps!

Related

Does using a Method Interceptor in TestNG skip tests on failure, or execute them even if one fails?

I would like to know if using this answer skips the following test methods or marks them as failed. I know that if priority is used, the following methods will still run and will not be dependent. I want to know how a method interceptor orders the methods...
Thanks...
I believe what you're asking is: When using an IMethodInterceptor to order tests, will a failed test method cause subsequent tests to fail? No. Ordering tests using IMethodInterceptor does not create dependencies.
This is pretty easy to test yourself. I definitely recommend trying it out to see how it behaves in different scenarios.
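If you want to try it, a minimal ordering interceptor might look like this (the alphabetical ordering is just an example):

import java.util.Comparator;
import java.util.List;

import org.testng.IMethodInstance;
import org.testng.IMethodInterceptor;
import org.testng.ITestContext;

public class AlphabeticalOrderInterceptor implements IMethodInterceptor {

    @Override
    public List<IMethodInstance> intercept(List<IMethodInstance> methods,
                                           ITestContext context) {
        // Reorders only; a failure in one method has no effect on the rest.
        methods.sort(Comparator.comparing(m -> m.getMethod().getMethodName()));
        return methods;
    }
}

Register it with @Listeners on the test class or via a listener entry in testng.xml, then make one of the earlier methods fail and watch the later ones still run.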

TestNG - #BeforeMethod for specific methods

I'm using Spring Test with TestNG to test our DAOs, and I want to run a specific test fixture script before certain methods, allowing the modifications to be rolled back after every method so that the tests are free to do anything with the fixture data.
Initially I thought that 'groups' would be a fit for this, but I have since realized they're not intended for that (see this question: TestNG BeforeMethod with groups).
Is there any way to configure a @BeforeMethod method to run only before specific @Test methods? The only ways I see are workarounds:
Define an ordinary setup method and call it at the beginning of every @Test method;
Move the @BeforeMethod method to a new class (top level or inner class), along with all the methods that depend on it.
Neither is ideal; I'd like to keep my tests naturally grouped and clean, not split up for lack of alternatives.
You could add a parameter of type java.lang.reflect.Method to your @BeforeMethod. TestNG will then inject the reflection information for the current test method, including the method name, which you can use for switching.
If you add another Object parameter, you will also get the invocation parameters of the test method.
You'll find all the possible parameters for TestNG-annotated methods in chapter 5.18.1 of the TestNG documentation.
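A minimal sketch of that switching approach (the method-name prefix and the fixture helper are made up for illustration):

import java.lang.reflect.Method;

import org.testng.annotations.BeforeMethod;

public class OrderDaoTest {

    @BeforeMethod
    public void maybeLoadFixture(Method method) {
        // TestNG injects the reflection info of the test method about to run.
        if (method.getName().startsWith("withFixture")) {
            loadFixtureScript(); // hypothetical helper that executes the fixture SQL
        }
    }

    private void loadFixtureScript() {
        // run the fixture script here
    }
}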
Tests are simply not designed to do this. Technically speaking, a single test is supposed to be self-contained, meaning it sets up, tests, and tears down. That is a single test. However, many tests share the same setup and teardown, whereas other tests need one setup before they all run. That is the purpose of the @Before-style annotations.
If you don't like setup and teardown inside your test, you're more than welcome to architect your own system, but technically speaking, if certain methods require specific setups or teardowns, then that really should be embodied IN the test, since it is a requirement for the test to pass. It is OK to call a setup method, but ultimately it should be OBVIOUS that a test needs a specific setup in order to pass. After all, if you're using specific setups, aren't you actually testing states rather than code?

Why does a JUnit suite class not execute its own Test, Before, and After annotations?

Why does a JUnit Suite class, in my case called TestSuite.class, not execute its own Test, Before, and After annotations? It only executes its own BeforeClass and AfterClass annotations, and then ALL the annotations of the suite's test classes. I proved this is the case by creating a test project around this theory: https://gist.github.com/djangofan/5033350
Can anyone refer me to where this is explained? I need to really understand this.
Because a TestSuite is not a Test itself. Those annotations are for unit tests only. See here for an example.
@RunWith(Suite.class)
@Suite.SuiteClasses({ FirstTest.class, SecondTest.class }) // your own test classes go here
public class FeatureTestSuite {
    // the class remains empty <----- important for your question
}
A TestSuite is a way of identifying a group of tests you wish to apply some common behaviour to.
Perhaps this is better explained with an example.
Say you were doing some basic CRUD tests on an Orders table in a database MyDB.
Every test needs MyDB to be there and the Orders table to exist, so you put them in a suite. The suite sets up the database and the table, the tests run, and before the suite goes out of scope the database is dropped, leaving everything nice and clean for the next test run. Otherwise you'd have to do that in every test, which is expensive; or worse, test data left over from previous tests would cause other tests to fail, often apparently at random, because you would have created an implicit dependency between them.
There are other ways of achieving the same thing, but they clutter up your tests and you have to remember to call them.
You don't have to test that the suite setup happened: if it doesn't get done, none of your tests will execute.
As others have said, it's because a TestSuite is not a Test. It's just a class with an annotation to group other tests, so that they are more convenient to run.
It does have one special property, however, and that is the execution of @BeforeClass and @AfterClass. These are enabled to allow a global setup/teardown for the suite. It does not execute any tests (including @After, @Before, or any rules).
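For instance, a suite-level setup/teardown might look like this (the class and method names are made up for illustration):

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({ OrderCrudTest.class, OrderQueryTest.class })
public class OrdersDbSuite {

    @BeforeClass
    public static void createDb() {
        // create MyDB and the Orders table once for the whole suite
    }

    @AfterClass
    public static void dropDb() {
        // drop MyDB after all the suite's test classes have run
    }
}

Any @Test, @Before, or @After methods declared on OrdersDbSuite itself would simply never run, which is exactly the behaviour observed in the question.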

Why does JUnit run test cases for Theory only until the first failure?

Recently a new concept of Theories was added to JUnit (in v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make your test method parametrized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit will sequentially run your parametrized test method, passing in parameters retrieved from the @DataPoints one after another, but only until the first such invocation fails (for any reason).
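In code, the mechanics look roughly like this (a made-up theory over arbitrary data points):

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class PositiveNumberTheoryTest {

    @DataPoints
    public static int[] candidates = { 1, 2, -3, 4 };

    @Theory
    public void isPositive(int n) {
        // Runs once per data point, but stops at the first failure (-3 here),
        // so the invocation for 4 never happens.
        assertTrue(n + " should be positive", n > 0);
    }
}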
The concept seems very similar to @DataProviders in TestNG, but when we use data providers, all the scenarios are run regardless of their execution results. And that's useful, because you can see how many scenarios work or don't work, and you can fix your program more effectively.
So, I wonder what the reason is for not executing a @Theory-marked method for every @DataPoint? (It appears not so difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it available for public access: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use JUnit's ErrorCollector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once).
For example, you can write a test like this:
public static class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = [..];
        String y = [..];
        // both checks run even if the first fails; the failures are reported together
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The ErrorCollector uses Hamcrest matchers. Depending on your preferences, that is either a plus or a minus.
AFAIK, the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
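A quick sketch of the Parameterized equivalent (the class and data values are made up); each row becomes its own test, so one failing row does not stop the others:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class PositiveNumberParameterizedTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 1 }, { -3 }, { 4 } });
    }

    private final int n;

    public PositiveNumberParameterizedTest(int n) {
        this.n = n;
    }

    @Test
    public void isPositive() {
        // Each parameter set is reported as a separate test, so the failure
        // for -3 does not prevent the test for 4 from running.
        assertTrue(n > 0);
    }
}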
By definition, a Theory is wrong if a single test case for it fails. If your test cases don't follow this rule, it would be wrong to call them a "Theory".

Is there any conditional annotation in JUnit to mark a few test cases to be skipped?

As far as I know, the simplest way to skip a test case is to remove the @Test annotation, but doing that over a large number of test cases is cumbersome. I was wondering if there is any annotation available in JUnit to turn off a few test cases conditionally.
It's hard to know whether it is the @Ignore annotation you are looking for, or whether you actually want to turn off certain JUnit tests conditionally. Turning off test cases conditionally is done using Assume.
You can read about assumptions in the release notes for JUnit 4.5.
There's also a rather good thread on Stack Overflow:
Conditionally ignoring tests in JUnit 4
As other people have said here, @Ignore ignores a test.
If you want something conditional, look at the JUnit assumptions:
http://junit.sourceforge.net/javadoc/org/junit/Assume.html
This works by looking at a condition and only proceeding to run the test if that condition is satisfied. If the condition is false, the test is effectively "ignored".
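For example (the property name run.slow.tests is made up):

import org.junit.Assume;
import org.junit.Test;

public class ConditionalTest {

    @Test
    public void runsOnlyWhenFlagIsSet() {
        // If the assumption fails, JUnit abandons the test without failing it.
        Assume.assumeTrue(Boolean.getBoolean("run.slow.tests"));
        // ... the actual test logic runs only when the property is set
    }
}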
If you put this in a helper class and call it from a number of your tests, you can effectively use it in the way you want. Hope that helps.
You can use the @Ignore annotation, which you can add to a single test or a test class to deactivate it.
If you need something conditional, you will have to create a custom test runner that you register using
@RunWith(YourCustomTestRunner.class)
You could use that to define a custom annotation which uses expression language or references a system property to check whether a test should be run. But such a beast doesn't exist out of the box.
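To give an idea of what such a beast could look like, here is a rough sketch built on JUnit 4's BlockJUnit4ClassRunner (the annotation RunIfSystemProperty and the runner ConditionalRunner are entirely made up):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface RunIfSystemProperty {
    String value(); // name of the system property that must be set
}

public class ConditionalRunner extends BlockJUnit4ClassRunner {

    public ConditionalRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(FrameworkMethod method, RunNotifier notifier) {
        RunIfSystemProperty condition = method.getAnnotation(RunIfSystemProperty.class);
        if (condition != null && System.getProperty(condition.value()) == null) {
            // Property not set: report the test as ignored instead of running it.
            notifier.fireTestIgnored(describeChild(method));
            return;
        }
        super.runChild(method, notifier);
    }
}

A test class would then declare @RunWith(ConditionalRunner.class) and mark individual methods with @RunIfSystemProperty("some.flag").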
If you use JUnit 4.x, just use @Ignore. See here
