How to run Pitest with Mockito static mocking?

A project I am working on involves updating our codebase to JUnit 5. A number of our test classes had previously been using PowerMockito for static mocking. As PowerMockito does not currently support JUnit 5, we updated our Mockito dependency and switched to using Mockito's static mocking. This works for the most part when running the unit tests but has issues when the tests are run with pitest to get mutation coverage.
Despite the tests running and passing fine with mvn test or mvn verify, pitest will give the error:
[ERROR] Failed to execute goal org.pitest:pitest-maven:1.5.2:mutationCoverage (default-cli) on project <PROJECT>: Execution default-cli of goal org.pitest:pitest-maven:1.5.2:mutationCoverage failed: 9 tests did not pass without mutation when calculating line coverage. Mutation testing requires a green suite.
The 9 tests mentioned are the only tests that use static mocking with Mockito.
The tests generally look like this:
Sample Static Mocking Test
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.MockedStatic;
import org.mockito.Mockito;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
public class SampleTest {
    @Test
    public void sampleTestWithMocking() {
        String param = "test";
        String expected = "value";
        // Close the static mock so it does not leak into other tests.
        try (MockedStatic<MyClass> mockStaticMyClass = Mockito.mockStatic(MyClass.class)) {
            mockStaticMyClass.when(() -> MyClass.myStaticMethod(param)).thenReturn(expected);
            assertEquals(expected, MyClass.myStaticMethod(param));
        }
    }
}

Pitest does not currently support static mocking with Mockito. I'll see if it could be supported, but it is likely to be a complex task. Support for PowerMock required dark magic (rewriting the bytecode of the bytecode manipulation library it uses) and was always brittle, easily broken by new PowerMock releases.
A better long-term solution would be to remove the need for static mocking from the test suite. Although it does have some use cases, it is most often a red flag for design issues; one common refactoring is sketched below.
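For illustration, a minimal sketch of that refactoring, hiding the static call behind an injectable seam (MyService and StaticBackedMyService are hypothetical names; MyClass.myStaticMethod is from the question):

// A seam that callers depend on instead of the static method.
public interface MyService {
    String myMethod(String param);
}

// Production implementation delegates to the static method.
class StaticBackedMyService implements MyService {
    @Override
    public String myMethod(String param) {
        return MyClass.myStaticMethod(param);
    }
}

Callers then depend on MyService, which a plain Mockito mock can replace in tests, so no static mocking is needed and pitest is unaffected.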


How can I use a custom runner when using categories in Junit?

I have a bunch of JUnit tests that extend my base test class called BaseTest, which in turn extends Assert. Some of my tests have a @Category(SlowTests.class) annotation.
My BaseTest class is annotated with @RunWith(MyJUnitRunner.class).
I've set up a Gradle task that is expected to run only SlowTests. Here's my Gradle task:
task integrationTests(type: Test) {
    minHeapSize = "768m"
    maxHeapSize = "1024m"
    testLogging {
        events "passed", "skipped", "failed"
        outputs.upToDateWhen { false }
    }
    reports.junitXml.destination = "$buildDir/test-result"
    useJUnit {
        includeCategories 'testutils.SlowTests'
    }
}
When I run the task, my tests aren't run. I've pinpointed the issue to the custom runner MyJUnitRunner on BaseTest. How can I set up my Gradle build or test structure so that I can use a custom runner together with categories?
The solution to this turned out to be smaller and trickier than I thought. Gradle was using my custom test runner and correctly invoking the filter method. However, my runner reloads all test classes through its own classloader for Javassist enhancements.
This led to the issue that the SlowTests annotation was loaded through the Gradle classloader, but when it was passed to my custom runner, the runner checked whether the test class was annotated with it. That check never resolved correctly, because an annotation class loaded through two different classloaders yields two distinct, non-equal Class objects.
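A minimal sketch of the pitfall (testutils.SlowTests is the annotation from the question; the isolated classloader stands in for my runner's own loader):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderPitfall {
    public static void main(String[] args) throws Exception {
        URL classesDir = ClassLoaderPitfall.class
                .getProtectionDomain().getCodeSource().getLocation();
        // Parent is null, so nothing delegates to the application
        // classloader that already loaded the annotation.
        try (URLClassLoader isolated = new URLClassLoader(new URL[] { classesDir }, null)) {
            Class<?> original = Class.forName("testutils.SlowTests");
            Class<?> reloaded = isolated.loadClass("testutils.SlowTests");
            // Same name, different Class objects: an annotation check
            // against one will never match the other.
            System.out.println(original == reloaded); // false
        }
    }
}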
--
Since I've already done the research, I'll just leave this here. After days of digging through the Gradle and (cryptic) JUnit sources, here's what I've got.
Gradle simply doesn't handle any advanced JUnit functionality beyond test categorization. When you create a Gradle task with include-categories or exclude-categories conditions, it builds a CategoryFilter. In JUnit, a Filter is what gets handed to the test runner to decide whether a test class or test method should be filtered out; the test runner must implement the Filterable interface.
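A minimal sketch of what such a filter looks like (SlowTestFilter and its name-based rule are made up for illustration; Gradle's real CategoryFilter matches @Category annotations instead):

import org.junit.runner.Description;
import org.junit.runner.manipulation.Filter;

public class SlowTestFilter extends Filter {
    @Override
    public boolean shouldRun(Description description) {
        // Keep only tests whose display name mentions "Slow".
        return description.getDisplayName().contains("Slow");
    }

    @Override
    public String describe() {
        return "tests matching 'Slow'";
    }
}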
JUnit comes with multiple runners, and Categories is just one of them. It extends a family of test runners called Suite. These suite-based runners are designed to run a "suite" of tests, which can be built by annotation introspection, by explicitly listing tests in a suite, or by any other method that assembles a suite of tests.
In the case of the Categories runner, JUnit has its own CategoryFilter, but Gradle doesn't use that; it uses its own CategoryFilter. Both provide more or less the same functionality, and both are JUnit filters, so they can be used by any suite that implements Filterable.
The actual class in Gradle responsible for running the JUnit tests is called JUnitTestClassExecuter. Once it has parsed the command-line options, it asks JUnit which runner should be used for a test. This method is invoked for every test, as seen here.
The rest is simply up to JUnit. Gradle just supplies a custom RunNotifier to generate the standard XML files representing test results.
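As an illustration of that notification mechanism, a minimal sketch (LoggingListener is a made-up name; Gradle's actual listener writes XML instead of printing):

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

public class LoggingListener extends RunListener {
    @Override
    public void testStarted(Description description) {
        System.out.println("started: " + description.getDisplayName());
    }

    @Override
    public void testFailure(Failure failure) {
        System.out.println("failed: " + failure.getMessage());
    }
}

// Usage (MyTest is any test class):
//   org.junit.runner.JUnitCore core = new org.junit.runner.JUnitCore();
//   core.addListener(new LoggingListener());
//   core.run(MyTest.class);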
I hope someone finds this useful and saves themselves countless hours of debugging.
TL;DR: You can use any runner with Gradle. Gradle has no specifics pertaining to runners; it is JUnit that decides which runner to use. If you'd like to know what runner will be used for your test, you can debug this by calling
Request.aClass(testClass).getRunner(). Hack this somewhere into your codebase and print it to the console. (I wasn't very successful in attaching a debugger to Gradle.)
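For example, a throwaway main (com.example.SampleTest stands in for one of your test classes):

import org.junit.runner.Request;

public class RunnerDebug {
    public static void main(String[] args) throws Exception {
        Class<?> testClass = Class.forName("com.example.SampleTest"); // your test class here
        // Prints the runner implementation JUnit picked for this class.
        System.out.println(Request.aClass(testClass).getRunner());
    }
}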

How to make ignored junit steps in cucumber-jvm cause test scenario to fail

I am using cucumber-jvm with Appium, using Eclipse and JUnit.
Some of my tests stop working halfway through. They do not overtly fail the JUnit tests; instead they stop executing and the remaining steps are ignored.
When I look at these steps in JUnit (through Eclipse), they appear to have passed, until I drill into them and see that steps have been ignored.
Is there a way to mark any test scenarios with ignored steps as failures rather than as passes?
I presume you have a JUnit test case with a @CucumberOptions annotation on it. If so, you should be able to make ignored tests fail the build by setting strict = true, e.g.
@RunWith(Cucumber.class)
@CucumberOptions(strict = true)
public class CucumberRunnerTest {
}

TestNG running JUnit tests but not reporting all results

Background:
I have a series of 172 integration tests that were written using JUnit. Since our project needed to run them on an embedded ARM chip, the tests have to be compiled into a jar and run from the command line. We realized after writing the tests that JUnit's default library does not support XML output (which we required for Jenkins). We added TestNG to our project because it provided a simple way to output XML results for our JUnit tests.
We are using the following command to run our tests:
/usr/local/frc/JRE/bin/java -ea -jar wpilibJavaIntegrationTests-0.1.0-SNAPSHOT.jar -junit -testclass edu.wpi.first.wpilibj.test.TestSuite
Where TestSuite is a bit like this:
@RunWith(Suite.class)
@SuiteClasses({
    WpiLibJTestSuite.class,
    CANTestSuite.class,
    CommandTestSuite.class,
    SmartDashboardTestSuite.class
})
public class TestSuite {
    static {
        // Some basic java.util.logging setup stuff
    }
}
Each suite listed has its own set of test classes listed in a similarly formatted class.
Problem:
All 172 tests are being run by TestNG; however, it reports only 81 tests run and omits some failures (two tests actually failed, but only one was reported).
===============================================
Command line suite
Total tests run: 81, Failures: 1, Skips: 8
===============================================
It seems that the unreported tests are the ones annotated with @RunWith(Parameterized.class).
Is there any way to get TestNG to properly recognize these tests and report their results appropriately?
You will have to change these to the TestNG way of doing things. (You should probably scan your code for imports from JUnit and replace them with the corresponding TestNG constructs.)
This can be done easily, as most of the assertion statements are nearly identical to the JUnit ones (in fact, several frameworks exist for automatically converting tests to TestNG), and the test classes can be stripped of their direct inheritance from JUnit.
To do this in TestNG you'll need to annotate your test method with a data provider:
@Test(dataProvider = "MyProvider")
public void testSomeStuff() {}
and then implement a corresponding provider using either a 2D array (if your test cases are small and already known):
@DataProvider(name = "MyProvider")
public Object[][] myDataProvider() {}
or by using the following if your test cases are large or unknown:
@DataProvider(name = "MyProvider")
public Iterator<Object[]> myDataProvider() {}
This article may be helpful for understanding the differences between JUnit and TestNG.
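Putting it together, a minimal self-contained sketch (class name and data are made up):

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class AdditionTest {

    // Each Object[] row becomes one invocation of the test method,
    // mirroring what JUnit's @RunWith(Parameterized.class) does.
    @DataProvider(name = "additionCases")
    public Object[][] additionCases() {
        return new Object[][] {
            { 1, 2, 3 },
            { -1, 1, 0 },
        };
    }

    @Test(dataProvider = "additionCases")
    public void addsNumbers(int a, int b, int expected) {
        assertEquals(a + b, expected);
    }
}

Run this way, TestNG counts every data row as a separate test, so each invocation shows up in the report.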

Custom JUnit test detection using gradle

Our test suite is growing quickly, and we have reached a point where our more functional tests depend on other systems.
We use Gradle test tasks to run these tests using include and exclude filters, but this is becoming cumbersome because we have to name our tests in a particular way.
Our current approach is to name our tests in the following way:
class AppleSingleServiceTest {}
class BananaMultiServiceTest {}
class KiwiIntegrationTest {}
and then include tests in the relevant task using
include '**/*SingleServiceTest.class'
include '**/*MultiServiceTest.class'
include '**/*IntegrationTest.class'
Is it possible to find test classes in Gradle by looking at annotations?
@SingleServiceTest
public class AppleTest {}
I think any tests that are not annotated would then be run as normal unit tests, so if you forget to annotate a more functional test, it will fail.
An example of a single service test is a Selenium test where all external dependencies of the SUT are stubbed.
An example of a multi service test is one where some, though perhaps not all, external dependencies are left unstubbed.
As of Gradle 1.6, Gradle supports selecting tests with JUnit @Category, like this:
test {
    useJUnit {
        includeCategories 'org.gradle.junit.CategoryA'
        excludeCategories 'org.gradle.junit.CategoryB'
    }
}
More details can be found in the docs.
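For completeness, a category is just a marker interface referenced from the test. A minimal sketch (IntegrationTests and KiwiIntegrationTest echo the naming in the question):

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface; referenced by its fully qualified name in the
// includeCategories/excludeCategories configuration above.
interface IntegrationTests {}

public class KiwiIntegrationTest {

    @Category(IntegrationTests.class)
    @Test
    public void talksToARealExternalSystem() {
        // slow, environment-dependent assertions go here
    }
}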
The feature you are asking for doesn't currently exist, but you can make a feature request at http://forums.gradle.org. Or you can use the (cumbersome) JUnit @Category, which requires you to define test suites.
I had a similar need to filter tests with annotations. I eventually managed to create a solution. It is posted here.

TestNG Ant tasks vs Surefire

I was wondering how different Surefire is from the TestNG Ant tasks when executing TestNG tests. The reason I ask is that I am seeing a consistent difference in behavior when trying to run a TestNG test that extends a JUnit test base (this is a workaround to run JBehave tests in TestNG, described here: http://jbehave.org/documentation/faq/). Surefire incorrectly detects my test as a JUnit test (probably because its base is TestCase), while the Ant tasks run it perfectly. Can anyone provide insight into how TestNG handles both cases?
The test looks as follows:
public class YourScenario extends JUnitScenario {
    @org.testng.annotations.Test
    public void runScenario() throws Throwable {
        super.runScenario();
    }
}
The short answer is that the Ant task is part of the TestNG distribution, so it's covered by our own tests and I always make sure it remains up to date with TestNG.
Surefire is developed as part of the Maven project and, as such, it sometimes lags behind (and, just like you, I have sometimes encountered bugs when running my tests with Surefire that didn't happen when running from the command line/Ant/Eclipse).
I'll bring this question to the Maven team's attention, maybe they'll have more to say.
This looks to be a known bug: http://jira.codehaus.org/browse/SUREFIRE-575.
Have you tried using a TestNG XML suite definition instead of Surefire's automatic test case detection?
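For reference, a minimal suite definition looks something like this (file location and class name are placeholders):

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="scenario-suite">
  <test name="scenarios">
    <classes>
      <class name="com.example.YourScenario"/>
    </classes>
  </test>
</suite>

Surefire can then be pointed at it with its suiteXmlFiles configuration, which bypasses automatic test detection:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <suiteXmlFiles>
      <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
    </suiteXmlFiles>
  </configuration>
</plugin>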
