How to rerun a particular Cucumber scenario when it fails - Java

I know how to use two runner classes to rerun failed scenarios, but I want this behavior for only one test.
Let's say I have 100 scenarios and I only want to rerun scenario 40 when it fails; if any other scenario fails, I don't want it to rerun. Is there a way to implement this for one test in particular?
To see how to rerun all failed scenarios, check out this question:
How to rerun the failed scenarios using Cucumber?

You'll have to write custom code for this. Fortunately, this is relatively easy with the JUnit Platform Launcher API (JUnit 5).
https://github.com/cucumber/cucumber-jvm/tree/main/cucumber-junit-platform-engine#rerunning-failed-scenarios
package com.example;

import java.util.List;
import java.util.stream.Collectors;

import org.junit.platform.engine.discovery.DiscoverySelectors;
import org.junit.platform.engine.discovery.UniqueIdSelector;
import org.junit.platform.launcher.Launcher;
import org.junit.platform.launcher.LauncherDiscoveryRequest;
import org.junit.platform.launcher.TestIdentifier;
import org.junit.platform.launcher.core.LauncherFactory;
import org.junit.platform.launcher.listeners.SummaryGeneratingListener;
import org.junit.platform.launcher.listeners.TestExecutionSummary;
import org.junit.platform.launcher.listeners.TestExecutionSummary.Failure;

import static org.junit.platform.engine.discovery.DiscoverySelectors.selectDirectory;
import static org.junit.platform.launcher.core.LauncherDiscoveryRequestBuilder.request;

public class RunCucumber {

    public static void main(String[] args) {
        LauncherDiscoveryRequest request = request()
                .selectors(selectDirectory("path/to/features"))
                .build();

        Launcher launcher = LauncherFactory.create();
        SummaryGeneratingListener listener = new SummaryGeneratingListener();
        launcher.registerTestExecutionListeners(listener);
        launcher.execute(request);

        TestExecutionSummary summary = listener.getSummary();
        // Do something with summary

        List<UniqueIdSelector> failures = summary.getFailures().stream()
                .map(Failure::getTestIdentifier)
                .filter(TestIdentifier::isTest)
                // Filter more to select scenarios to rerun (see the sketch below)
                .map(TestIdentifier::getUniqueId)
                .map(DiscoverySelectors::selectUniqueId)
                .collect(Collectors.toList());

        LauncherDiscoveryRequest rerunRequest = request()
                .selectors(failures)
                .build();

        launcher.execute(rerunRequest);

        TestExecutionSummary rerunSummary = listener.getSummary();
        // Do something with rerunSummary
    }
}
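To answer the original question of rerunning only one particular scenario, tighten that filter. A minimal sketch, assuming the scenario can be recognized by its display name (the name "Scenario 40" is a placeholder); matching on a tag or on segments of the unique ID would work the same way:

        // Keep only the one failed scenario we want to retry;
        // "Scenario 40" is a placeholder for the real scenario name.
        List<UniqueIdSelector> toRerun = summary.getFailures().stream()
                .map(Failure::getTestIdentifier)
                .filter(TestIdentifier::isTest)
                .filter(id -> "Scenario 40".equals(id.getDisplayName()))
                .map(TestIdentifier::getUniqueId)
                .map(DiscoverySelectors::selectUniqueId)
                .collect(Collectors.toList());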

Related

In Selenium using Java, is it possible to write only one test method in TestNG and form multiple tests from it?

I have a scenario where I would have only one test method in my TestNG test class and I have to form multiple tests from it. The reason I'm writing only one test method is that I don't know how many tests I will need, because that number depends on the number of URLs I'm fetching from an Excel sheet.
So basically I will have only one TestNG test method which will fetch a different URL from the Excel sheet each time and execute the test, and according to the test result my listener will mark the test as pass or fail.
Say, for example, I have 20 URLs in my Excel sheet: can one TestNG test method execute these 20 URLs one by one and give pass/fail results 20 times? That is, at the end I should see 20 tests executed with their pass/fail results.
How can I achieve this?
Basically, you would like to run your test with different data sets.
To do so, you can implement a @Factory-annotated constructor and tie it to a @DataProvider-annotated data provider method.
In addition, you can use Apache POI to fetch the URLs from the given Excel file.
import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Factory;
import org.testng.annotations.Test;

import static org.testng.Assert.assertNotNull;

public class SampleTest {

    private final String url;

    @Factory(dataProvider = "getUrls")
    public SampleTest(String url) {
        this.url = url;
    }

    @DataProvider(name = "getUrls")
    public static Iterator<Object[]> loadUrls() throws IOException {
        InputStream inputStream =
                SampleTest.class.getClassLoader().getResourceAsStream("urls-sheet.xlsx");
        try (Workbook workbook = new XSSFWorkbook(inputStream)) {
            Iterable<Row> iterable = () -> workbook.getSheetAt(0).iterator();
            Stream<Row> rows = StreamSupport.stream(iterable.spliterator(), false);
            return rows.map(row -> new Object[] {row.getCell(0).getStringCellValue()}).iterator();
        }
    }

    @Test
    public void testProvidedUrl() {
        assertNotNull(url, "url should have a value");
    }
}
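With this setup, TestNG creates one instance of SampleTest per row returned by the data provider, so 20 URLs in the sheet show up as 20 separately reported pass/fail tests.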
Demo source code
See
TestNG #DataProvider – Test parameters example
How to run multiple test cases in testng with different set of test data from excel file?

There is unknown field "container" in Tekton

I'm interested in Tekton these days.
However, I've run into an issue implementing a Task with the Java fabric8.tekton APIs.
The TaskBuilder class has an API for adding steps to the spec as whole containers (withContainer).
However, at runtime I get the error from the title: there is an unknown field "container".
Can I get some advice?
Tekton version: v0.10.1
I used the following packages:
io.fabric8:kubernetes-client:4.7.1
io.fabric8:tekton-client:4.7.1
Here is my complete test code.
package com.example.tekton;

import java.util.ArrayList;
import java.util.List;

import io.fabric8.kubernetes.api.model.Container;
import io.fabric8.kubernetes.api.model.ContainerBuilder;
import io.fabric8.kubernetes.client.BaseClient;
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.tekton.client.DefaultTektonClient;
import io.fabric8.tekton.client.TektonClient;
import io.fabric8.tekton.client.handlers.TaskHandler;
import io.fabric8.tekton.client.handlers.TaskRunHandler;
import io.fabric8.tekton.pipeline.v1alpha1.ArrayOrString;
import io.fabric8.tekton.pipeline.v1alpha1.Task;
import io.fabric8.tekton.pipeline.v1alpha1.TaskBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRefBuilder;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRun;
import io.fabric8.tekton.pipeline.v1alpha1.TaskRunBuilder;

public class DefaultKubernetesTest {

    public Task getTask() {
        Container con = new ContainerBuilder()
                .withNewImage("ubuntu")
                .withNewName("echo-hello-world")
                .addNewCommand("echo")
                .addNewArg("hello jinwon world")
                .build();
        return new TaskBuilder()
                .withApiVersion("tekton.dev/v1alpha1")
                .withKind("Task")
                .withNewMetadata()
                    .withName("echo-hello-world-test")
                .endMetadata()
                .withNewSpec()
                    .addNewStep()
                        .withContainer(con)
                    .endStep()
                .endSpec()
                .build();
    }

    public TaskRun getTaskRun() {
        return new TaskRunBuilder()
                .withNewMetadata()
                    .withName("taskrun")
                .endMetadata()
                .withNewSpec()
                    .withTaskRef(new TaskRefBuilder()
                            .withName("echo-hello-world-test")
                            .withApiVersion("tekton.dev/v1alpha1")
                            .withKind("Task")
                            .build())
                .endSpec()
                .build();
    }

    public static void main(String[] args) {
        DefaultKubernetesTest kubeTest = new DefaultKubernetesTest();
        Config kubeConfig = new ConfigBuilder()
                .withMasterUrl("https://192.168.6.236:6443")
                .withUsername("testUser")
                .withPassword("testPwd")
                .build();
        try (DefaultTektonClient test = new DefaultTektonClient(kubeConfig)) {
            test.tasks().inNamespace("test").create(kubeTest.getTask());
            test.taskRuns().inNamespace("test").create(kubeTest.getTaskRun());
        }
    }
}
Tekton ships with an admission controller, which validates CRD specs before allowing them into the cluster. Because the project is still in alpha, it's moving quite fast, and Fabric8 may be templating out Kubernetes objects against a different spec from the one installed on your cluster. You should be able to validate the spec version used in Fabric8, remove all the Tekton objects in your cluster, and re-apply them at a specific version.
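If the rejected field really is the nested container, one workaround to try: in the v1alpha1 spec a step is a container whose fields are inlined on the step itself, not nested under a container key, so you can set the fields on the step directly instead of wrapping a Container object with withContainer. A minimal sketch, assuming the generated step builder exposes the inlined container fields (withName, withImage, withCommand, withArgs):

// Sketch: build the step inline instead of nesting a Container object,
// so the serialized YAML has no "container" field under the step.
Task task = new TaskBuilder()
        .withApiVersion("tekton.dev/v1alpha1")
        .withKind("Task")
        .withNewMetadata()
            .withName("echo-hello-world-test")
        .endMetadata()
        .withNewSpec()
            .addNewStep()
                .withName("echo-hello-world")
                .withImage("ubuntu")
                .withCommand("echo")
                .withArgs("hello jinwon world")
            .endStep()
        .endSpec()
        .build();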

How to test Spouts in Storm

I am struggling to unit test my Apache Storm spout, which reads from a queue and emits a message as a tuple. I want to use an embedded queue, put a message onto it, and assert that the message is emitted by my spout. Could anyone suggest what to use to accomplish that?
There are probably a ton of ways you can do this, but here is an example of a simple test using Mockito and TestNG (\src\test\java\com\example\storm\spout\DummySpoutTest.java):
package com.example.storm.spout;

import static org.mockito.Matchers.anyList;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.testng.annotations.Test;

import backtype.storm.spout.SpoutOutputCollector;

public class DummySpoutTest {

    @SuppressWarnings("unchecked")
    @Test
    public void shouldCreateDummyMessage() {
        // given
        DummySpout spout = new DummySpout();
        SpoutOutputCollector collector = mock(SpoutOutputCollector.class);
        spout.open(null, null, collector);

        // when
        spout.nextTuple();

        // then
        verify(collector).emit(anyList());
    }
}
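The DummySpout itself is not shown in the answer; here is a minimal sketch of what the test assumes it looks like (a hypothetical class that emits one constant message per nextTuple() call):

package com.example.storm.spout;

import java.util.Map;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class DummySpout extends BaseRichSpout {

    private SpoutOutputCollector collector;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        // A real spout would also connect to the queue here.
        this.collector = collector;
    }

    @Override
    public void nextTuple() {
        // A real implementation would poll the queue instead of emitting a constant.
        collector.emit(new Values("dummy message"));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("message"));
    }
}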

Running pact jvm provider tests via IntelliJ

I am trying out pact-jvm-provider-junit, using IntelliJ IDEA as my IDE.
I'm testing the provided ContractTest example:
import static com.github.restdriver.clientdriver.RestClientDriver.giveEmptyResponse;
import static com.github.restdriver.clientdriver.RestClientDriver.onRequestTo;

import org.apache.http.HttpRequest;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.ClassRule;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.github.restdriver.clientdriver.ClientDriverRule;

import au.com.dius.pact.provider.junit.PactRunner;
import au.com.dius.pact.provider.junit.Provider;
import au.com.dius.pact.provider.junit.State;
import au.com.dius.pact.provider.junit.TargetRequestFilter;
import au.com.dius.pact.provider.junit.VerificationReports;
import au.com.dius.pact.provider.junit.loader.PactFolder;
import au.com.dius.pact.provider.junit.loader.PactUrl;
import au.com.dius.pact.provider.junit.target.HttpTarget;
import au.com.dius.pact.provider.junit.target.Target;
import au.com.dius.pact.provider.junit.target.TestTarget;

@RunWith(PactRunner.class) // Tell JUnit to run tests with a custom Runner
@Provider("uicc_repository") // Name of the tested provider
//@PactFolder("rs") // Point to where the pacts live (see the Pacts source section in the documentation)
@PactUrl(urls = {"file:///C:/IdeaProjects/src/pack-test-provider/resources/test_consumer-test_provider.json"}) // Point to where the pacts live
@VerificationReports(value = {"markdown", "json"}, reportDir = "C:/IdeaProjects/src/pack-test-provider/resources")
public class ContractTest {

    // NOTE: this is just an example of an embedded service that listens to requests;
    // you should start your real service here
    @ClassRule // Rule will be applied once: before/after the whole contract test suite
    public static final ClientDriverRule embeddedService = new ClientDriverRule(6060);

    private static final Logger LOGGER = LoggerFactory.getLogger(ContractTest.class);

    @BeforeClass // Method will be run once: before the whole contract test suite
    public static void setUpService() {
        // Run DB, create schema
        // Run service
        // ...
    }

    @Before // Method will be run before each interaction test
    public void before() {
        // Reset data
        // Mock dependent service responses
        // ...
        embeddedService.addExpectation(
                onRequestTo("/data"), giveEmptyResponse()
        );
    }

    @TestTarget // Denotes the Target that will be used for tests
    public final Target target = new HttpTarget(6060); // Out-of-the-box implementation of Target

    @State("default") // Method will be run before interactions that require the "default" state
    public void toDefaultState() {
        // Prepare service before interactions that require the "default" state
        // ...
        System.out.println("Now service in default state");
        LOGGER.info("Now service in default state");
    }
}
But when I try to run the test (Run -> Run ContractTest), it's as if the test is not run at all:
Mar 15, 2017 3:24:00 PM org.junit.platform.launcher.core.ServiceLoaderTestEngineRegistry loadTestEngines
INFO: Discovered TestEngines with IDs: [junit-jupiter, junit-vintage]
Mar 15, 2017 3:24:01 PM org.junit.vintage.engine.execution.TestRun lookupTestDescriptor
WARNING: Runner au.com.dius.pact.provider.junit.PactRunner on class rs.ContractTest reported event for unknown Description: rs.ContractTest. It will be ignored.
Process finished with exit code 0
I am not sure if it's an issue with how I am running the test in IntelliJ or if I am missing something in the pact-jvm-provider "runner".
Appreciate any help on this topic.
Thanks!

How to run Spring Shell scripts in a JUnit test

I have a Spring Shell-based application and a couple of scripts. Is there an easy way to run the scripts in a JUnit test such that the test fails if an exception or error occurs during execution of the script?
The purpose of the tests is to make sure that all correct scripts run without errors.
Update 1:
Here's a little helper class for running scripts in JUnit:
import java.io.File;
import java.io.IOException;
import java.util.List;

import org.apache.commons.io.FileUtils;
import org.springframework.shell.Bootstrap;
import org.springframework.shell.core.CommandResult;
import org.springframework.shell.core.JLineShellComponent;

import static org.fest.assertions.api.Assertions.assertThat;

public class ScriptRunner {

    public void runScript(final File file) throws IOException {
        final Bootstrap bootstrap = new Bootstrap();
        final JLineShellComponent shell = bootstrap.getJLineShellComponent();
        final List<String> lines = FileUtils.readLines(file);
        for (final String line : lines) {
            execVerify(line, shell);
        }
    }

    private void execVerify(final String command, final JLineShellComponent shell) {
        final CommandResult result = shell.executeCommand(command);
        assertThat(result.isSuccess()).isTrue();
    }
}
You can create an instance of Bootstrap, get the shell out of it, and then call executeCommand() on it for each command in the script.
You may also be interested in what is done in Spring XD for this: https://github.com/spring-projects/spring-xd/blob/master/spring-xd-shell/src/test/java/org/springframework/xd/shell/AbstractShellIntegrationTest.java (although there are a lot of XD-specific details)
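For completeness, a minimal sketch of a JUnit test driving the helper above (the script path is a placeholder):

import java.io.File;

import org.junit.Test;

public class ScriptRunnerTest {

    @Test
    public void scriptShouldRunWithoutErrors() throws Exception {
        // Placeholder path: point this at one of your real script files.
        new ScriptRunner().runScript(new File("src/test/resources/scripts/example.script"));
    }
}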
