Background:
I have a series of 172 integration tests that were written using JUnit. Since our project needs to run them on an embedded ARM chip, the tests have to be compiled into a jar and run from the command line. We realized after writing the tests that JUnit did not support XML output as part of the default library (which we required for Jenkins). We added TestNG to our project because it provided a simple way to output XML results for our JUnit tests.
We are using the following command to run our tests:
/usr/local/frc/JRE/bin/java -ea -jar wpilibJavaIntegrationTests-0.1.0-SNAPSHOT.jar -junit -testclass edu.wpi.first.wpilibj.test.TestSuite
Where TestSuite is a bit like this:
@RunWith(Suite.class)
@SuiteClasses({
    WpiLibJTestSuite.class,
    CANTestSuite.class,
    CommandTestSuite.class,
    SmartDashboardTestSuite.class
})
public class TestSuite {
    static {
        // Some basic java.util.logging setup stuff
    }
}
Each suite listed has its own set of test classes listed in a similarly formatted class.
Problem:
All 172 tests are being run by TestNG; however, it reports only 81 tests run and misses some failures (two tests actually failed, but only one was reported).
===============================================
Command line suite
Total tests run: 81, Failures: 1, Skips: 8
===============================================
It seems that the unreported tests are the ones annotated with @RunWith(Parameterized.class).
Is there any way to get TestNG to properly recognize these tests and report their results appropriately?
You will have to change these tests over to the TestNG way of doing things. (You should probably scan your code for imports from JUnit and replace them with the corresponding TestNG constructs.)
This can be done easily, as most of the assertion statements are nearly identical to the JUnit ones (in fact, several frameworks exist for automatically converting tests to TestNG), and the test classes can be stripped of their direct inheritance from JUnit.
To do this in TestNG you'll need to annotate your test method with a data provider:
@Test(dataProvider = "MyProvider")
public void testSomeStuff() {}
and then implement a corresponding provider using either a 2D array (if your test cases are small and already known):
@DataProvider(name = "MyProvider")
public Object[][] myDataProvider() {}
or by using the following if your test cases are large or unknown:
@DataProvider(name = "MyProvider")
public Iterator<Object[]> myDataProvider() {}
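Putting it together, a JUnit @RunWith(Parameterized.class) test might translate like this (a minimal sketch; the class name, method signature, and data are illustrative, only the "MyProvider" wiring comes from above):

import java.util.Arrays;
import java.util.Iterator;

import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionTest {

    // Replaces JUnit's @Parameters factory method; each Object[]
    // is the argument list for one invocation of the test method
    @DataProvider(name = "MyProvider")
    public Iterator<Object[]> myDataProvider() {
        return Arrays.asList(
                new Object[] { 1, 2, 3 },
                new Object[] { 2, 2, 4 }
        ).iterator();
    }

    // TestNG calls this once per row from the provider, so each
    // run is counted and reported individually
    @Test(dataProvider = "MyProvider")
    public void testSomeStuff(int a, int b, int expectedSum) {
        Assert.assertEquals(a + b, expectedSum);
    }
}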
This article may be helpful for understanding the differences between JUnit and TestNG.
Related
A project I am working on involves updating our codebase to JUnit 5. A number of our test classes had previously been using PowerMockito for static mocking. As PowerMockito does not currently support JUnit 5, we updated our Mockito dependency and switched to using Mockito's static mocking. This works for the most part when running the unit tests but has issues when the tests are run with pitest to get mutation coverage.
Despite the tests running and passing fine with mvn test or mvn verify, pitest will give the error:
[ERROR] Failed to execute goal org.pitest:pitest-maven:1.5.2:mutationCoverage (default-cli) on project <PROJECT>: Execution default-cli of goal org.pitest:pitest-maven:1.5.2:mutationCoverage failed: 9 tests did not pass without mutation when calculating line coverage. Mutation testing requires a green suite.
The 9 tests mentioned are the only tests that use static mocking with Mockito.
The tests generally look like this:
Sample Static Mocking Test
@ExtendWith(MockitoExtension.class)
public class SampleTest {

    @Test
    public void sampleTestWithMocking() {
        String param = "test";
        String expected = "value";
        MockedStatic<MyClass> mockStaticMyClass = Mockito.mockStatic(MyClass.class);
        mockStaticMyClass.when(() -> MyClass.myStaticMethod(param)).thenReturn(expected);
        assertEquals(expected, MyClass.myStaticMethod(param));
    }
}
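(Side note: MockedStatic is AutoCloseable, and Mockito expects a static mock to be deregistered after each test; the sample above never closes it. A variant using try-with-resources, keeping the hypothetical MyClass names from the sample, would be:

@Test
public void sampleTestWithMocking() {
    String param = "test";
    String expected = "value";
    // try-with-resources deregisters the static mock when the block exits
    try (MockedStatic<MyClass> mockStaticMyClass = Mockito.mockStatic(MyClass.class)) {
        mockStaticMyClass.when(() -> MyClass.myStaticMethod(param)).thenReturn(expected);
        assertEquals(expected, MyClass.myStaticMethod(param));
    }
})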
Pitest does not currently support static mocking with Mockito. I'll see if it could be supported, but it is likely to be a complex task. Support for PowerMock required dark magic (rewriting the bytecode of the bytecode-manipulation library it uses) and was always brittle, easily broken by new PowerMock releases.
A better long-term solution would be to remove the need for static mocking from the test suite. Although it does have some use cases, it is most often a red flag for design issues; one common refactoring is sketched below.
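A minimal sketch of that refactoring, reusing the hypothetical MyClass from the question: wrap the static call behind a small injectable seam so a plain Mockito mock suffices and no MockedStatic is needed.

// Hypothetical seam: an injectable wrapper around the static call
class MyClassWrapper {
    String myStaticMethod(String param) {
        return MyClass.myStaticMethod(param);
    }
}

// The class under test depends on the wrapper, not on the static method
class Consumer {
    private final MyClassWrapper wrapper;

    Consumer(MyClassWrapper wrapper) {
        this.wrapper = wrapper;
    }

    String doWork(String param) {
        return wrapper.myStaticMethod(param);
    }
}

// In the test, an ordinary mock replaces the static mocking:
//     MyClassWrapper mock = Mockito.mock(MyClassWrapper.class);
//     Mockito.when(mock.myStaticMethod("test")).thenReturn("value");
//     assertEquals("value", new Consumer(mock).doWork("test"));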
I have test suites that reference other test classes. These suites need to run depending on a parameter passed in from Jenkins. (For example: if Jenkins gets the parameter testOne, then start test suite testOne; if Jenkins gets the parameter testOther, then start test suite testOther, and so on.)
How can I implement this in JUnit?
This is an example of one of the test suites:
@RunWith(Categories.class)
@Suite.SuiteClasses({
    UserLogin.class,
    ExampleTests.class
})
public class FirstTest {
}
I believe you can do something like that using JUnit test categories; for example, you can configure a separate marker annotation for the integration tests (a sketch follows below).
Maven supports test categories.
Does that solve your problem, or is the Jenkins input parameter critical? If it is a Jenkins parameter, I believe it is some environment/build configuration, and you can customize your build (which Maven tests are run) per environment or build job.
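A minimal sketch, assuming Maven Surefire with the JUnit 4.x category-aware provider; the marker interfaces and test names are hypothetical:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interfaces, one per Jenkins parameter value
interface TestOne {}
interface TestOther {}

public class UserLogin {

    // Runs only when the TestOne category is selected
    @Category(TestOne.class)
    @Test
    public void logsInSuccessfully() {
        // ...
    }
}

Jenkins can then pass its parameter straight through to Maven, e.g. mvn test -Dgroups=com.example.TestOne (assuming the marker interface lives in com.example), since Surefire maps the groups property onto JUnit categories.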
I have a bunch of JUnit tests that extend my base test class called BaseTest which in turn extends Assert. Some of my tests have a @Category(SlowTests.class) annotation.
My BaseTest class is annotated with the following annotation @RunWith(MyJUnitRunner.class).
I've set up a Gradle task that is expected to run only SlowTests. Here's my Gradle task:
task integrationTests(type: Test) {
minHeapSize = "768m"
maxHeapSize = "1024m"
testLogging {
events "passed", "skipped", "failed"
outputs.upToDateWhen {false}
}
reports.junitXml.destination = "$buildDir/test-result"
useJUnit {
includeCategories 'testutils.SlowTests'
}
}
When I run the task, my tests aren't run. I've pinpointed this issue to the custom runner MyJUnitRunner on BaseTest. How can I set up my Gradle build or test structure so that I can use a custom runner together with the suite?
The solution to this turned out to be smaller and trickier than I thought. Gradle was using my custom test runner and correctly invoking the filter method. However, my runner reloads all test classes through its own classloader for Javassist enhancements.
This led to the issue: the SlowTests annotation was loaded through the Gradle classloader, but when it was passed to my custom runner, the runner checked whether the class was annotated with that annotation. That check never resolved correctly, because the same annotation class loaded through two different classloaders never compares equal. A classloader-agnostic check is sketched below.
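A minimal sketch of such a check (an illustration, not the poster's actual fix): compare annotations by fully-qualified name rather than by Class identity, so the comparison survives the classloader boundary.

import java.lang.annotation.Annotation;

final class CategoryCheck {

    // true if testClass carries an annotation whose type has the given
    // fully-qualified name, regardless of which classloader loaded it
    static boolean hasAnnotationNamed(Class<?> testClass, String annotationFqcn) {
        for (Annotation a : testClass.getAnnotations()) {
            if (a.annotationType().getName().equals(annotationFqcn)) {
                return true;
            }
        }
        return false;
    }
}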
--
Since I've already done the research, I'll just leave this here. After days of digging through the Gradle and (cryptic) JUnit sources, here's what I've got.
Gradle simply doesn't handle any advanced JUnit functionality beyond test categorization. When you create a Gradle task with includeCategories or excludeCategories conditions, it builds a CategoryFilter. If you don't know, a Filter is what JUnit gives to the test runner to decide whether a test class or a test method should be filtered out. The test runner must implement the Filterable interface.
JUnit comes with multiple runners; Categories is just another one of them. It extends a family of test runners called Suite. These suite-based runners are designed to run a "suite" of tests. A suite of tests could be built by annotation introspection, by explicitly defining tests in a suite, or by any other method that builds a suite of tests.
In the case of the Categories runner, JUnit has its own CategoryFilter, but Gradle doesn't use that; it uses its own CategoryFilter. Both provide more or less the same functionality, and both are JUnit filters, so they can be used by any suite that implements Filterable. (A minimal example of a JUnit filter is sketched below.)
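For illustration, a minimal sketch of a custom JUnit 4 Filter (the name-matching logic is hypothetical, but shouldRun and describe are the two methods the Filter base class requires):

import org.junit.runner.Description;
import org.junit.runner.manipulation.Filter;

public class NameContainsFilter extends Filter {

    private final String fragment;

    public NameContainsFilter(String fragment) {
        this.fragment = fragment;
    }

    // Called for every test Description; returning false filters it out
    @Override
    public boolean shouldRun(Description description) {
        return description.getDisplayName().contains(fragment);
    }

    // A human-readable label for the filter
    @Override
    public String describe() {
        return "tests whose name contains " + fragment;
    }
}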
The actual class in Gradle responsible for running the JUnit tests is called JUnitTestClassExecuter. Once it has parsed the command-line options, it asks JUnit which runner should be used for each test class; this happens for every test.
The rest is simply up to JUnit. Gradle just installs a custom RunNotifier to generate the standard XML files representing the test results.
I hope someone finds this useful and saves themselves countless hours of debugging.
TL;DR: You can use any runner in Gradle. Gradle has no specifics pertaining to runners; it is JUnit that decides the runner. If you'd like to know which runner will be used for your test, you can debug this by calling Request.aClass(testClass).getRunner(). Hack this somewhere into your codebase and print it to the console. (I wasn't very successful in attaching a debugger to Gradle.)
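For instance, a throwaway diagnostic along these lines (the class name is arbitrary):

import org.junit.runner.Request;

public class RunnerDebug {

    public static void main(String[] args) throws ClassNotFoundException {
        // Prints the Runner implementation JUnit picks for the given class
        Class<?> testClass = Class.forName(args[0]);
        System.out.println(Request.aClass(testClass).getRunner());
    }
}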
I am trying to run individual Spock unit tests using IntelliJ IDEA.
Consider:
// rest of code
def "Test Something"() {
// test code below
}
In the above test, when I go to the test body and open the right-click context menu, I get two kinds of run configurations for Test Something: one is the Grails test and the other is the JUnit test.
Referring to this question, the accepted answer recommends using the JUnit runner. But using it, the code simply does not compile (probably because certain plugins and other classes are not available).
(I am not sure this is the desired behavior, though, because I am running just a single test, not all tests. So I wonder why it compiles all classes, including plugin classes not required by the class under test.)
Using the Grails runner, I checked the run configuration (screenshot omitted), and nothing looks wrong with the command there.
But on running, the test fails with a "Test framework quit unexpectedly" error.
I tried running the same command from the Grails console (Windows CMD) and it runs without any error message.
But on checking the output HTML files (in target/test-reports), I see that none of the tests actually ran!
So what is going on here, and why are individual tests not running?
PS:
When I run all tests using the test-app command, the tests run as expected. Only individual (unit) tests are not running.
Part of the price paid for Spock's nice test naming is that you can't specify an individual test to run anymore.
Here are some articles about it. The first seems pretty on-point:
Run a specific test in a single test class with Spock and Maven
This one isn't about running a single test, but has some relevance and talks about Spock's test-name conversions, plus Peter Niederwieser chimes in with comments:
Can TestNG see my Spock (JUnit) test results?
A workaround for this could be the @IgnoreRest annotation. Simply annotate the test you want to run with @IgnoreRest, then specify that test class to run, and only the annotated test will run. http://spockframework.github.io/spock/javadoc/1.0/spock/lang/IgnoreRest.html
Try using the Grails unit test configuration and add the following in the command-line part:
-Dgrails.env=development
This runs the test with the running environment changed to development. Hope this helps everyone facing such problems.
A similar question has already been asked here.
One (unaccepted) answer states:
the test class will always be started directly and then through the
"link" in the suite. This is as expected.
Can someone explain what this actually means, and whether or not it is possible to prevent the tests from running twice?
When I run the tests from the command line using mvn test they only run once.
UPDATE
I have a test suite defined as follows:
@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase2.class })
public class MyTestSuite
{
}
When you run tests in Eclipse at the project level (or package level), Eclipse searches all of the project's source folders (or the selected package) for JUnit classes. These are all classes with @Test annotations and all classes with @RunWith (probably some more, too). It then runs all of these classes as tests.
As a result of this behavior, if you have a suite class that references test classes in the same project, those tests will run twice. If you had another suite that did the same, they would run three times, and so on. To understand this behavior, try running a suite that contains the same test case twice, for instance:
@RunWith(Suite.class)
@SuiteClasses({ TestCase1.class, TestCase1.class })
public class TestSuite {}
The accepted strategy here is to define a suite or suites for a project and run them exclusively. Do not start tests at the project level; run selected suites only.
As far as Maven is concerned, I suspect that its default configuration only picks up the suite class and omits the individual test cases. Had it been configured differently, it would behave the same as Eclipse.
Eclipse tests the two classes and gives you two results.
Maven tests the two classes and gives you one result with two sub-results.
I think it is something like this, but the most important thing is that the results are positive! :)
Regards!
This is the same as this question: https://github.com/spring-projects/spring-boot/issues/13750
Just exclude the individual test cases and include the suite test classes.