I'm trying to test the following class
@Component
public class StreamListener {

    @Value("${kafkaMessageExpiration.maxExpirationInMilliseconds}")
    private long messageExpirationTime;

    public void handleMessage(Payment payment) {
        logIncomingDirectDepositPayment(payment);
        if (!isMessageExpired(payment.getLastPublishedDate())) {
            messageProcessor.processPayment(payment);
        }
    }

    private boolean isMessageExpired(OffsetDateTime lastPublishedDate) {
        return ChronoUnit.MILLIS.between(lastPublishedDate.toInstant(), Instant.now()) > messageExpirationTime;
    }
}
I'm getting a "changed conditional boundary → SURVIVED" message on the condition in isMessageExpired().
I have the following tests, which cover the case where the difference is less than messageExpirationTime and the case where it is greater than messageExpirationTime.
@BeforeEach
void init() {
    ReflectionTestUtils.setField(streamListener, "messageExpirationTime", 60000);
}

@Test
void handleMessage() {
    Payment payment = TestObjectBuilder.createPayment();
    streamListener.handleMessage(payment);
    verify(messageProcessor).processPayment(payment);
}

@Test
void handleMessage_expired_message() {
    Payment payment = TestObjectBuilder.createPayment();
    payment.setLastPublishedDate(OffsetDateTime.now(ZoneId.of("UTC")).minusMinutes(10));
    streamListener.handleMessage(payment);
    verify(messageProcessor, never()).processPayment(payment);
}
I suspect the problem is that I don't have a test where the difference is equal. Assuming that is what I'm missing, I don't know how to get the difference to be equal. Any suggestions on how I can kill this mutation?
BTW, I'm using Java 11, JUnit 5 and PITest 1.6.3
The issue is this line:
Instant.now()
If time is one of the inputs your code depends on, you need to make that explicit so you can control it.
This is normally achieved by injecting a java.time.Clock (held as a field, injected via the constructor). Your call to Instant.now() can then be replaced with clock.instant(), and the code becomes properly unit testable.
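A minimal sketch of that refactoring, reusing the names from your snippet (the MessageProcessor field type is assumed, and a Clock bean such as Clock.systemUTC() is assumed to be registered in your configuration):

@Component
public class StreamListener {

    @Value("${kafkaMessageExpiration.maxExpirationInMilliseconds}")
    private long messageExpirationTime;

    private final Clock clock;
    private final MessageProcessor messageProcessor;

    public StreamListener(Clock clock, MessageProcessor messageProcessor) {
        this.clock = clock;
        this.messageProcessor = messageProcessor;
    }

    public void handleMessage(Payment payment) {
        logIncomingDirectDepositPayment(payment);
        if (!isMessageExpired(payment.getLastPublishedDate())) {
            messageProcessor.processPayment(payment);
        }
    }

    private boolean isMessageExpired(OffsetDateTime lastPublishedDate) {
        // clock.instant() replaces Instant.now(), so a test can pin "now" with Clock.fixed(...)
        return ChronoUnit.MILLIS.between(lastPublishedDate.toInstant(), clock.instant()) > messageExpirationTime;
    }
}

The test can then construct the listener with a fixed clock and hit the boundary exactly, which is the case that kills the "changed conditional boundary" mutation:

@Test
void handleMessage_at_expiration_boundary() {
    // "now" is pinned exactly 60000 ms after the publish date
    Clock fixedClock = Clock.fixed(Instant.parse("2021-01-01T00:01:00Z"), ZoneOffset.UTC);
    StreamListener listener = new StreamListener(fixedClock, messageProcessor);
    ReflectionTestUtils.setField(listener, "messageExpirationTime", 60000L);

    Payment payment = TestObjectBuilder.createPayment();
    payment.setLastPublishedDate(OffsetDateTime.parse("2021-01-01T00:00:00Z"));

    listener.handleMessage(payment);
    // difference == messageExpirationTime: the strict ">" means the message is NOT expired, so it is still processed
    verify(messageProcessor).processPayment(payment);
}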
I'm new to Spring and trying to implement Spring Retry with a simple test.
However, I can't make it work; I hope someone can show me where I've gone wrong.
I'm also wondering: is it possible to write a unit test that verifies Spring Retry has attempted the requested maximum number of retries? So far, from what I've found, it seems this only works as an integration test because Spring needs to set up the context first.
Here is my main class:
@SpringBootApplication
public class SpringtestApplication {

    public static void main(String[] args) {
        new SpringApplicationBuilder(SpringtestApplication.class).run(args);
    }
}
The configuration class:
@Configuration
@EnableRetry
public class FakeConfiguration implements ApplicationRunner {

    private final FakeParser fakeParser;

    public FakeConfiguration(FakeParser fakeParser) {
        this.fakeParser = fakeParser;
    }

    @Override
    public void run(ApplicationArguments args) {
        this.runParser();
    }

    @Retryable(maxAttempts = 5, value = RuntimeException.class)
    public void runParser() {
        fakeParser.add();
    }
}
The component/service class:
@Component
public class FakeParser {

    public int add() {
        int result = 113;
        return result;
    }
}
The retry test for it:
@RunWith(SpringRunner.class)
@SpringBootTest
class SpringtestApplicationTests {

    @Autowired
    private FakeConfiguration fakeConfiguration;

    @MockBean
    private FakeParser fakeParser;

    @Test
    public void retry5times() {
        when(fakeParser.add()).thenThrow(RuntimeException.class);
        try {
            fakeConfiguration.runParser();
        } catch (RuntimeException e) {
        }
        verify(fakeParser, times(5)).add();
    }
}
However, the test didn't pass:
org.mockito.exceptions.verification.TooManyActualInvocations:
fakeParser bean.add();
Wanted 5 times:
-> at com.example.springtest.SpringtestApplicationTests.retry5times(SpringtestApplicationTests.java:43)
But was 6 times:
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
-> at com.example.springtest.FakeConfiguration.runParser(FakeConfiguration.java:26)
when(someObject.someMethod()) evaluates its argument, which means the stubbing itself calls the method once on the mock. That's why you always get one more invocation than expected.
If you need to count the actual invocations, you could add 1 to your verify, but that is an ugly workaround that is not recommended (and also not needed). Alternatively, you can use the Mockito.doXXX methods, which don't have that problem.
In your case you could try
doThrow(new RuntimeException()).when(fakeParser).add();
This should give you the correct number of invocations in the end. Notice the difference in the usage of when here: when(fakeParser).add() (the stubbed call is made on the proxy returned by when) vs. when(fakeParser.add()) (add() is invoked on the mock before when even runs).
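Put together, the corrected test might look like this (same names as in your code):

@Test
public void retry5times() {
    // stubbing with doThrow does not invoke add(), so only the retried calls are counted
    doThrow(new RuntimeException()).when(fakeParser).add();
    try {
        fakeConfiguration.runParser();
    } catch (RuntimeException e) {
        // expected once the retries are exhausted
    }
    verify(fakeParser, times(5)).add();
}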
You are probably verifying with
Mockito.verify(fakeParser, times(5)).add();
You should account for the first run: when you retry 5 times on top of the initial call, that is 6 invocations in total, even though you only retry 5 times.
You are forgetting the first call, which is the normal (non-retry) run behaviour.
I encountered a problem when I was tasked with writing a JUnit test for one of my Camel processors.
The main class is as follows (redundant parts omitted):
@Stateless
@Named
public class CalculateProportionalAmount implements Plugin { // where Plugin is our interface extending Processor

    LocalDate now;

    @Override
    public void process(final Exchange exchange) throws Exception {
        now = getNow();
        int startDay = getValueOfInputParameter("start"); // just gets 1-31 from the input parameters on the exchange
        int value = getValueOfInputParameter("value");    // gets the value from the input parameters
        /*
        More kind-of irrelevant lines of code. The idea is that the processor calculates the number of days between "now" and startDay,
        works out what proportion of the month that number of days is, and applies this proportion to the value.
        So if today were the 1st and startDay the 10th (so 10 days between), September has 30 days and value = 1000,
        the processor would calculate (10/30) * 1000.
        */
    }

    public LocalDate getNow() {
        return LocalDate.now();
    }
}
And for the test class:
public class CalculateProportionalAmountTest {

    Plugin plugin;

    @Before
    public void setUp() {
        // initialize parameter maps and instantiate the plugin so that we can reference it;
        // "plugin" is then an instance of the CalculateProportionalAmount class.
    }

    @Test
    public void pluginTestNextMonth() throws Exception {
        Mockito.when(((CalculateProportionalAmount) plugin).getNow()).thenReturn(LocalDate.of(2017, 12, 11)); // obviously does not work, because plugin is not mocked
        ruleParameter.put("start", "10");   // here we set the "start" param that the processor class gets
        ruleParameter.put("value", "1000"); // here we set the "value" param that the processor class gets
        Exchange prepareInput = prepareExchange(scenarioParameters, ruleParameter);
        Exchange output = execute(prepareInput);
        String resultString = getScenarioParameterByKey(output, "result");
        TestCase.assertEquals(String.format("%.6f", Double.valueOf(1000) * 30 / 31), resultString); // obviously will not pass unless run on the 11th of December
    }
}
My biggest problem is that the getNow() method is, and has to be, called inside the process method, overriding any attempt to specify a date.
Calculating the "real" proportions is also not a viable option, as we need to be able to check the variants "later this month", "earlier this month" and "today" on any day.
The most feasible solution I have now is to rig (mock) the getNow() method to return a specific date when called from the test, while leaving the process method working as written, without any mocks.
The project this is part of already uses Mockito, but I am not very skilled in how it works or how to correctly mock the class so that it behaves as described above. I already made an attempt at the beginning of the test class, but it currently ends in an exception, and I have been browsing tutorials since without much luck.
Thank you for your help.
You can use @Spy, I think.
@Spy
Plugin plugin;
Then, in your test method, you can use doReturn to stub this public method:
Mockito.doReturn(LocalDate.of(2017, 12, 11)).when((CalculateProportionalAmount) plugin).getNow();
The @Spy annotation wraps a real object, so you can override the return values of selected methods on the spy while the rest behave normally.
Mockito offers partial mocks, so you can stub methods like getNow() while calling the real implementation of the other methods.
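For example, a sketch reusing the class from the question:

// only getNow() is stubbed; process(exchange) keeps its real implementation
CalculateProportionalAmount plugin = Mockito.spy(new CalculateProportionalAmount());
Mockito.doReturn(LocalDate.of(2017, 12, 11)).when(plugin).getNow();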
In order to test a Camel route properly, you should extend the CamelTestSupport class from the Camel test package. In general, it provides support for testing Camel.
My advice is to create a custom route:
@Override
protected RoutesBuilder createRouteBuilder() {
    return new RouteBuilder() {
        @Override
        public void configure() {
            from("direct:testRoute")
                    .process(new YourPluginClass())
                    .end();
        }
    };
}
Then you will be able to call it with the fluentTemplate from the CamelTestSupport class and test the processor properly.
To mock the behaviour of the processor (partially), use a spy from Mockito as @drowny said. Keep in mind that if you want to use the @Spy annotation, you have to initialize it with MockitoAnnotations.initMocks(this) in a @Before setup method.
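Putting both suggestions together, a sketch of such a test might look like this (how the "start" and "value" parameters are attached to the exchange is only an illustration, since that part of your code was omitted):

public class CalculateProportionalAmountTest extends CamelTestSupport {

    private CalculateProportionalAmount plugin;

    @Override
    protected RoutesBuilder createRouteBuilder() {
        // spy so that getNow() can be pinned while process() stays real
        plugin = Mockito.spy(new CalculateProportionalAmount());
        Mockito.doReturn(LocalDate.of(2017, 12, 11)).when(plugin).getNow();
        return new RouteBuilder() {
            @Override
            public void configure() {
                from("direct:testRoute")
                        .process(plugin)
                        .end();
            }
        };
    }

    @Test
    public void pluginTestNextMonth() {
        Exchange output = template.send("direct:testRoute", exchange -> {
            // however your prepareExchange normally attaches the input parameters
            exchange.getIn().setHeader("start", "10");
            exchange.getIn().setHeader("value", "1000");
        });
        // assert on the result read back from the exchange, as in your original test
    }
}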
In my test class, suppose I have 15 test cases. Out of the 15, I need common test data for only 5 of them. Hence I want to write a method that creates the test data, but that method should execute before any of those 5 tests run.
I know @BeforeClass, which runs once before any test in the class, and @BeforeMethod, which runs before every test in the class.
I do not want to use @BeforeClass to create the test data for 5 of the 15 test cases, because if I want to debug a test that does not belong to those 5, it will still create data that is not required for the current test, and it will increase execution time.
Is there any way with TestNG to run a specific method before only some of the tests are executed (without using testng.xml)?
TestNG only provides dependencies between test methods, which would force your setup method to become a test method itself.
To achieve what you need, you can simply call the setup method from the tests that require it:
@Test
void testMethod() {
    // this is your test method
    beforemethod();
}

// your before method for the test case
void beforemethod() {
}
Hope this fixes your issue.
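Building on that idea, you can make the shared setup idempotent so the data is created only once, whichever of the 5 tests happens to run first (a sketch; the names are illustrative):

private boolean testDataCreated = false;

private void createSharedTestData() {
    // runs the expensive part only on the first call
    if (!testDataCreated) {
        // build the common test data here
        testDataCreated = true;
    }
}

@Test
public void testOneOfTheFive() {
    createSharedTestData(); // only the 5 tests that need the data call this
    // test body
}

By default TestNG runs all the test methods of a class on a single instance, so the flag survives across those 5 tests.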
From your description I understand you need a data provider (which is exactly that: a method providing the same data to multiple test cases, or alternatively multiple data sets to the same test case).
@DataProvider(name = "dataProviderFor5TestCases")
public Object[][] createData() {
    return new Object[][] {
        { "Joe", new Integer(43) },
        { "Mary", new Integer(32) },
    };
}
Then you can declare the dataProvider on your test case as such:
@Test(dataProvider = "dataProviderFor5TestCases")
public void testCase1(String name, Integer age) {
    System.out.println(name + " " + age);
}
Result will be:
Joe 43
Mary 32
So testCase1 will be executed twice, once with each set of data created in the dataProvider. However, I think you need the same data for all 5 test cases, which is achievable.
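For example, the other test cases can reuse the same data simply by referencing the provider's name (sketch):

@Test(dataProvider = "dataProviderFor5TestCases")
public void testCase2(String name, Integer age) {
    // same shared data, different test case
}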
Now, regarding execution time: I am not 100% sure, but I believe the data is created on demand (i.e. if the test case is skipped or fails, no data is created). I only tried with a very small load, though, so please try it and let us know!
Update after OP's comment:
You are probably better off using test groups, which suit you both for setup before the tests and for cleanup afterwards (without being invoked for irrelevant test cases):
#Test(groups = { "init" })
public void serverInit() {
startServer();
}
#Test(groups = { "init" })
public void initEnvironment() {
createUsers()
}
#Test(groups = { "cleanup"}, dependsOnGroups = { "init.*" })
public void testCase1() {
//perform your tests
}
#Test(dependsOnGroups = { "cleanup"})
puplic void cleanup(){
deleteUsers();
killServer();
}
The above testCase1 won't be executed if any of the init test methods fail (e.g. the server fails to start). In addition, the cleanup method will only be invoked if testCase1 succeeded. If you want the cleanup method to run regardless of testCase1's result, you can use alwaysRun like so:
@Test(dependsOnGroups = { "cleanup" }, alwaysRun = true)
Hope that helps!
Best of luck!
Example taken from here:
TestNG DataProvider
You can use dependsOnMethods in the @Test annotation,
e.g.
@Test
public void testDataSetup() {
    // set up your test data here
}

@Test(dependsOnMethods = { "testDataSetup" })
public void testExecute1() {
    // use your logic here, which executes after the data setup
}
For a complete tutorial, see this link.
I'm trying to figure out a good solution to having specific unit tests run with certain runtime configurations. For example:
public class TestClassAlpha {

    @Before
    public void setup() {
    }

    @After
    public void tearDown() {
    }

    @Test
    @<only run in particular env>
    public void testA() {
        // whatever A
    }

    // always run the test below, no matter what env
    @Test
    public void testB() {
        // whatever B
    }
}
I am contemplating a custom annotation or perhaps a custom rule, but I thought this has to be a question that comes up frequently, as running tests only under certain conditions (environments) is a very valid scenario. I did some limited searching on Stack Overflow and didn't find anything that really helped me settle on either approach.
This post shows you exactly what you require.
You should write a custom TestRule and an annotation to mark the condition.
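A sketch of what that could look like (the @RunOnlyInEnv annotation and the TEST_ENV environment variable are made up for illustration):

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface RunOnlyInEnv {
    String value();
}

public class EnvironmentRule implements TestRule {

    @Override
    public Statement apply(Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                RunOnlyInEnv annotation = description.getAnnotation(RunOnlyInEnv.class);
                if (annotation != null) {
                    // skipped (not failed) when the current environment does not match
                    Assume.assumeTrue(annotation.value().equals(System.getenv("TEST_ENV")));
                }
                base.evaluate();
            }
        };
    }
}

In the test class you then register the rule and mark the environment-specific tests:

@Rule
public EnvironmentRule environmentRule = new EnvironmentRule();

@Test
@RunOnlyInEnv("integration")
public void testA() {
    // only runs when TEST_ENV=integration
}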
OK, so the @Ignore annotation is good for marking that a test case shouldn't be run.
However, sometimes I want to ignore a test based on runtime information. An example might be if I have a concurrency test that needs to be run on a machine with a certain number of cores. If this test were run on a uniprocessor machine, I don't think it would be correct to just pass the test (since it hasn't been run), and it certainly wouldn't be right to fail the test and break the build.
So I want to be able to ignore tests at runtime, as this seems like the right outcome (since the test framework will allow the build to pass but record that the tests weren't run). I'm fairly sure that the annotation won't give me this flexibility, and suspect that I'll need to manually create the test suite for the class in question. However, the documentation doesn't mention anything about this, and looking through the API it's also not clear how this would be done programmatically (i.e. how do I programmatically create an instance of Test or similar that is equivalent to what the @Ignore annotation produces?).
If anyone has done something similar in the past, or has a bright idea of how else I could go about this, I'd be happy to hear about it.
The JUnit way to do this at run-time is org.junit.Assume.
@Before
public void beforeMethod() {
    org.junit.Assume.assumeTrue(someCondition());
    // rest of setup.
}
You can do it in a @Before method or in the test itself, but not in an @After method. If you do it in the test itself, your @Before method will get run. You can also do it within @BeforeClass to prevent class initialization.
An assumption failure causes the test to be ignored.
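For example, to skip every test in the class when a runtime condition isn't met (someCondition() stands for whatever check you need):

@BeforeClass
public static void checkEnvironment() {
    // a failed assumption here skips the class instead of failing the build
    org.junit.Assume.assumeTrue(someCondition());
}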
Edit: To compare with the @RunIf annotation from junit-ext, their sample code would look like this:
@Test
public void calculateTotalSalary() {
    assumeThat(Database.connect(), is(notNullValue()));
    // test code below.
}
Not to mention that it is much easier to capture and use the connection from the Database.connect() method this way.
You should check out the junit-ext project. They have a RunIf annotation that performs conditional tests, like:
@Test
@RunIf(DatabaseIsConnected.class)
public void calculateTotalSalary() {
    // your code there
}

class DatabaseIsConnected implements Checker {
    public boolean satisify() {
        return Database.connect() != null;
    }
}
[Code sample taken from their tutorial]
In JUnit 4, another option may be to create an annotation to denote that the test needs to meet your custom criteria, then extend the default runner with your own and, using reflection, base your decision on the custom criteria. It might look something like this:
public class CustomRunner extends BlockJUnit4ClassRunner {

    public CustomRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected boolean isIgnored(FrameworkMethod child) {
        if (shouldIgnore(child)) {
            return true;
        }
        return super.isIgnored(child);
    }

    private boolean shouldIgnore(FrameworkMethod child) {
        /* some custom criteria */
        return false;
    }
}
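The marker annotation and the wiring on the test class might look like this (the annotation name and the criteria check are illustrative, not part of any library):

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface RequiresCustomCriteria {
}

// inside CustomRunner, shouldIgnore(child) could check the marker, e.g.
// return child.getAnnotation(RequiresCustomCriteria.class) != null && !criteriaMet();

@RunWith(CustomRunner.class)
public class EnvironmentSpecificTest {

    @Test
    @RequiresCustomCriteria
    public void testA() {
        // only runs when the custom criteria are met
    }
}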
In addition to the answers of @tkruse and @Yishai:
I do it this way to conditionally skip test methods, especially for Parameterized tests, when a test method should only run for some of the test data records.
public class MyTest {

    // get the current test method
    @Rule
    public TestName testName = new TestName();

    @Before
    public void setUp() {
        org.junit.Assume.assumeTrue(new Function<String, Boolean>() {
            @Override
            public Boolean apply(String testMethod) {
                if (testMethod.startsWith("testMyMethod")) {
                    return <some condition>;
                }
                return true;
            }
        }.apply(testName.getMethodName()));

        // ... continue setup ...
    }
}
A quick note: Assume.assumeTrue(condition) skips the rest of the steps when the condition is false, and the test is reported as passed/ignored rather than failed.
To fail the test instead, use org.junit.Assert.fail() inside a conditional statement. It works at the same point as Assume.assumeTrue() but fails the test.
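For example (someCondition() stands for whatever runtime check you need):

@Before
public void setUp() {
    if (!someCondition()) {
        // unlike Assume.assumeTrue, this marks the test as failed instead of skipped
        org.junit.Assert.fail("required condition not met");
    }
}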