Cucumber Scenarios to be run in Sequential Order - java

I have a few concerns regarding the Cucumber framework:
1. I have a single feature file (the steps depend on each other) and I want to run all the scenarios in order; by default they are running in random order.
2. How do I run a single feature file multiple times?
I put some tags on it and tried to run it, but no luck.
@Given("Get abc Token")
public void get_abc_Token(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
    DataTable data = dataTable.transpose();
    String tkn = given()
            .formParam("parm1", data.column(0).get(1))
            .formParam("parm2", data.column(1).get(1))
            .formParam("parm3", data.column(2).get(1))
            .when()
            .post(new URI(testurl) + "/abcapi")
            .asString();
    jp = new JsonPath(tkn);
    Token = jp.getString("access_token");
    if (Token == null) {
        Assert.fail("Token is NULL");
    }
}

@Given("Get above token")
public void get_abovetoken(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
    System.out.println("Token is " + Token);
}
}
So in the steps above I am getting the token in one step and trying to print it in another step, but I got null instead of the actual value, because my steps are running randomly.
Please note I am running the TestRunner via a testng.xml file.

Cucumber, and testing tools in general, are designed to run each test/scenario as a completely independent thing. Linking scenarios together is a terrible anti-pattern; don't do it.
Instead, learn to write scenarios properly. Scenarios and feature files should have no programming in them at all. Programming needs to be pushed down into the step definitions.
Any scenario, no matter how complicated, can be written in 3 steps if you really want to. Your Given can set up any amount of state, your When deals with what you are doing, and your Then can check any number of conditions.
You do this by pushing all the detail down out of the scenario and into the step definitions. You improve this further by having the step definitions call helper methods that do all the work.
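This "push the work down" pattern can be sketched as plain Java (hypothetical names; in real Cucumber the thin methods below would carry the step annotations shown in the comments, and the helper would talk to the real API): the step layer only delegates, and the helper owns all state and detail, so a single Given can set up everything a scenario needs instead of depending on a previous scenario.

```java
// Helper that does all the real work; step definitions stay one line each.
class ApiHelper {
    private String token;

    // In real code this would POST to the auth endpoint and parse the response.
    void authenticate() {
        token = "stub-token";
    }

    boolean hasToken() {
        return token != null;
    }
}

public class ApiSteps {
    private final ApiHelper helper = new ApiHelper();

    // In Cucumber: @Given("an authenticated session")
    public void givenAuthenticatedSession() {
        helper.authenticate(); // any amount of setup, hidden from the scenario text
    }

    // In Cucumber: @Then("a token is available")
    public void thenTokenAvailable() {
        if (!helper.hasToken()) {
            throw new AssertionError("Token is NULL");
        }
    }

    public static void main(String[] args) {
        ApiSteps steps = new ApiSteps();
        steps.givenAuthenticatedSession();
        steps.thenTokenAvailable(); // passes: token was set up inside this one scenario
    }
}
```

The token acquisition becomes part of the Given of each scenario that needs it, rather than a separate scenario that other scenarios depend on.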

Is it a bad practice to test the flow of logic by log statements?

I have some logic, like the following, that I want to test:
public void doSomething(int num) {
    var list = service.method1(num);
    if (!list.isEmpty()) {
        // Flow 1
        LOG.info("List exists for {}", num);
        doAnotherThing(num);
    } else {
        // Flow 2
        LOG.info("No list found for {}", num);
    }
}

public void doAnotherThing(int num) {
    Optional<Foo> optionalFoo = anotherService.get(num);
    optionalFoo.ifPresentOrElse(
            foo -> {
                if (!foo.type().equals("no")) {
                    // Flow 3
                    anotherService.filter(foo.getFilter());
                } else {
                    // Flow 4
                    LOG.info("Foo is type {} - skipping", foo.type());
                }
            },
            // Flow 5
            () -> LOG.info("No foo found for {} - skipping", num));
}
For each test covering a different flow, my first thought was to use Mockito.verify() to see whether the collaborators were called or not. So to test Flow 1, I would verify that anotherService.get() was called inside doAnotherThing(), and to test Flow 2, I would verify that anotherService.get() was never called. This would have been fine except for Flow 4 and Flow 5: they both invoke anotherService.get() once but nothing else.
Because of that, I've created a class to capture logs in tests. It checks whether certain messages were logged, which tells me which flow was taken. But I wanted to ask: is this a bad practice? I would combine this with verify(), so that flows that can be distinguished by verify() take precedence.
One downside is that the tests would rely on the log messages being correct, which makes them a bit unstable. To account for that, I thought about extracting some of these log messages into a protected static variable that the tests can also use, so the message stays the same between the methods and their respective tests. This way, only the flow would be tested.
If the answer is that it is a bad practice, I would appreciate any tips on how to test Flow 4 and Flow 5.
Log statements are usually not part of the logic to test, but just a tool for ops. They should be adjusted to optimize ops (not too much info, not too little), so that you can quickly find out if and where something went wrong. The exact text, the log levels and the number of log statements should not be considered something stable to base your tests on. Otherwise it will make it harder to change the logging concept.
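Rather than asserting on log text, Flow 4 and Flow 5 can be pinned down by their arrangement plus the observable interactions. A hand-rolled recording fake (hypothetical `Foo`/`RecordingService` types below, standing in for Mockito stubs and `verify()`) shows the idea: in both flows `get()` is called and `filter()` is not, and the two tests differ only in the stubbed input that forces each flow.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Hypothetical stand-ins for the types in the question.
interface Foo {
    String type();
    String getFilter();
}

class RecordingService {
    final List<String> calls = new ArrayList<>();  // records interactions, like Mockito.verify would check
    Optional<Foo> stubbed = Optional.empty();      // configurable return value, like Mockito.when

    Optional<Foo> get(int num) {
        calls.add("get(" + num + ")");
        return stubbed;
    }

    void filter(String f) {
        calls.add("filter(" + f + ")");
    }
}

public class FlowSketch {
    final RecordingService anotherService;

    FlowSketch(RecordingService s) {
        this.anotherService = s;
    }

    // Simplified copy of doAnotherThing from the question, minus the logging.
    public void doAnotherThing(int num) {
        anotherService.get(num).ifPresentOrElse(
                foo -> { if (!foo.type().equals("no")) anotherService.filter(foo.getFilter()); },
                () -> { /* Flow 5: nothing observable happens */ });
    }

    public static void main(String[] args) {
        // Flow 4: foo present but type "no" -> get() called, filter() never called
        RecordingService s4 = new RecordingService();
        s4.stubbed = Optional.of(new Foo() {
            public String type() { return "no"; }
            public String getFilter() { return "f"; }
        });
        new FlowSketch(s4).doAnotherThing(1);
        System.out.println(s4.calls); // [get(1)]

        // Flow 5: no foo at all -> get() called, filter() never called
        RecordingService s5 = new RecordingService();
        new FlowSketch(s5).doAnotherThing(2);
        System.out.println(s5.calls); // [get(2)]
    }
}
```

The two tests share the same assertion ("filter() was not called"); which flow ran is documented by the stubbed input in the given/arrange section, not by log output.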

How to explicitly fail a test step in Extent Report?

Background
I am using the Extent Report Cucumber Adapter for my Cucumber-based test automation framework, built in Java and running on JUnit. I am using AssertJ assertions for the test conditions.
Scenario
One of the test scenarios requires testing all the links on a web page. I have written the code for this and it's working fine. I am using an AssertJ assertion for the test condition inside a try block and catching the SoftAssertionError, so that my test execution doesn't halt because of the exception and continues validating the remaining links even if it finds a broken link.
The report mentions the links which were found broken. However, ideally this step should fail, since the script found some broken links; instead the report marks the overall step as passed, and as a result the scenario is also marked as passed. I am not able to figure out how to mark the step as failed in my Extent Report when broken links are found. Kindly suggest a way to do this. I am providing a small snippet of my code for better understanding.
public void ValidateAllLinks(String linkURL) {
    try {
        URL url = new URL(linkURL);
        // Creating URL connection and getting the response code
        HttpURLConnection httpURLConnect = (HttpURLConnection) url.openConnection();
        httpURLConnect.setConnectTimeout(5000);
        httpURLConnect.connect();
        try {
            SoftAssertions softly = new SoftAssertions();
            softly.assertThat(httpURLConnect.getResponseCode()).as("This is a broken link: " + linkURL).isGreaterThanOrEqualTo(400);
            softly.assertAll();
        } catch (SoftAssertionError e) {
            e.printStackTrace();
        }
        if (httpURLConnect.getResponseCode() >= 400) {
            System.out.println(linkURL + " - " + httpURLConnect.getResponseMessage() + " is a broken link.");
            ExtentCucumberAdapter.addTestStepLog("<b><font color=\"red\">" + linkURL + " - " + httpURLConnect.getResponseMessage() + " is a broken link.</font></b>");
        } else {
            // Fetching and printing the response code obtained
            System.out.println(linkURL + " - " + httpURLConnect.getResponseMessage() + " is working as expected.");
        }
    } catch (Exception e) {
    }
}
Your example is not a good fit for soft assertions, as you are testing only one thing. Soft assertions are meant to assert a bunch of things; once you have asserted everything you wanted to, you call assertAll().
I also don't understand why you test httpURLConnect.getResponseCode() twice. You could do it once, add the test step log, and then fail the test with a fail() method call (either from AssertJ or JUnit).
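A minimal sketch of that shape (hypothetical names; in the real step the response code would come from HttpURLConnection, the logging would go through ExtentCucumberAdapter, and the failure would be AssertJ's or JUnit's fail() rather than a raw AssertionError): record every broken link while iterating, then fail the step once at the end so the report marks it as failed.

```java
import java.util.ArrayList;
import java.util.List;

public class LinkChecker {
    private final List<String> brokenLinks = new ArrayList<>();

    // 4xx and 5xx response codes indicate a broken link.
    public static boolean isBroken(int responseCode) {
        return responseCode >= 400;
    }

    // In the real step the code would come from HttpURLConnection.getResponseCode().
    public void validateLink(String linkURL, int responseCode) {
        if (isBroken(responseCode)) {
            brokenLinks.add(linkURL + " (" + responseCode + ")");
            // here: ExtentCucumberAdapter.addTestStepLog(...) to log the broken link
        }
    }

    // Call once after all links have been checked; fails the step if anything broke.
    public void assertAllLinksOk() {
        if (!brokenLinks.isEmpty()) {
            // with AssertJ this would be Assertions.fail(...)
            throw new AssertionError("Broken links found: " + brokenLinks);
        }
    }

    public static void main(String[] args) {
        LinkChecker checker = new LinkChecker();
        checker.validateLink("http://example.com/ok", 200);
        checker.validateLink("http://example.com/missing", 404);
        try {
            checker.assertAllLinksOk();
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // Broken links found: [http://example.com/missing (404)]
        }
    }
}
```

This keeps the "continue checking all links" behaviour while still producing a failed step whenever at least one link was broken.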

When writing the unit test, which one should I prefer as the expected value?

I'm developing a project in which companies send messages to users. Each company has a message limit, and when the limit is exceeded the system throws an exception based on the language chosen by the company.
I wrote a unit test for the exception.
// given
Company company = new Company("Comp1", 2); // constructor (company name, language) ** 2 -> EN
User user = new User("User1");
Email email = new Email("Email Test", "Test");
int emailLimit = company.getEmailLimit();

// when
for (int i = 0; i < emailLimit; i++) {
    company.SendEmail(email, user);
}
Throwable throwable = catchThrowable(() -> company.SendEmail(email, user));

// then
assertThat(throwable).isInstanceOf(MessageLimitException.class);
I also want to check the message content.
There is a class named "ErrorMessages" that manages the content of the error message.
public class ErrorMessages {
    private static String[] messageLimitErrorMessage = {
        "Message Limit Error",      // 0 -> default
        "Mesaj limiti aşıldı",      // 1 -> TR
        "Message limit exceeded"    // 2 -> EN
    };

    public static String messageLimitException(int languageIndex) {
        return messageLimitErrorMessage[languageIndex];
    }
}
Which one should I prefer as the expected value?
// Option 1
assertThat(throwable).hasMessage(ErrorMessages.messageLimitException(company.getLanguage()));
// or
// Option 2
assertThat(throwable).hasMessage("Message limit exceeded");
Both are correct, but which one should I prefer for the accuracy of the test, Option 1 or Option 2?
Thanks in advance for your answer.
There is no definitive answer to this question; it depends what you're trying to achieve. If the exact error message that is returned is important (e.g. part of the spec), then you should choose Option 2. If you don't want to hard-code the message into the test (e.g. because it may change), then you can choose Option 1.
In general, a test should focus on one specific thing. Testing the exact error message might be better done in a separate unit test (e.g. one that tests all of the different messages). Personally, I don't usually bother to write tests for error messages unless there is something special about them (e.g. some kind of variability within the message itself). You only have so much time, and it's probably better spent elsewhere.
You should also consider using Java's built-in support for internationalized message bundles. It lets you hold Locale-specific messages in properties files and loads them in for you.
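For illustration, the ResourceBundle mechanism can replace the index-based array with key-plus-Locale lookup. In a real project the messages would live in properties files (messages.properties, messages_tr.properties); the nested ListResourceBundle classes below are only there to keep the sketch self-contained, and all names are hypothetical.

```java
import java.util.ListResourceBundle;
import java.util.Locale;
import java.util.ResourceBundle;

public class I18nDemo {
    // Base bundle: the default (English) message.
    public static class Messages extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {{ "messageLimitExceeded", "Message limit exceeded" }};
        }
    }

    // Turkish variant; found automatically for Locale "tr" by the bundle name suffix.
    public static class Messages_tr extends ListResourceBundle {
        protected Object[][] getContents() {
            return new Object[][] {{ "messageLimitExceeded", "Mesaj limiti aşıldı" }};
        }
    }

    // Looks up the locale-specific message by key instead of by numeric index.
    public static String messageLimitException(Locale locale) {
        return ResourceBundle.getBundle("I18nDemo$Messages", locale)
                .getString("messageLimitExceeded");
    }

    public static void main(String[] args) {
        System.out.println(messageLimitException(new Locale("tr")));
        System.out.println(messageLimitException(Locale.ENGLISH));
    }
}
```

With this in place, the exception would carry a Locale (or language tag) instead of an int index, and ResourceBundle handles the fallback to the default message for languages you haven't translated.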
"As the tests become more specific, the code becomes more generic."
Writing the test with a very specific expectation decouples it from the implementation and allows the code to become more generic over time.
This should tell you that you should probably use option 2.
Here's Robert Martin's take on it.

Run each cucumber test independently

I'm using Cucumber and Maven in Eclipse, and I'm trying to run each test independently. For example, I have library system software that allows users to borrow books and do other things.
One of the conditions is that users can only borrow a max of two books, so I wrote a test to make sure that the functionality works. This is my feature file:
Scenario: Borrow over max limit
Given "jim@help.ca" logs in to the library system
When "jim@help.ca" orders his first book with ISBN "9781611687910"
And "jim@help.ca" orders another book with ISBN "9781442667181"
And "jim@help.ca" tries to order another book with ISBN "1234567890123"
Then jim will get the message that says "The User has reached his/her max number of books"
I wrote a corresponding step definition file and everything worked out great. However, in the future I want to use the same username ("jim@help.ca") for borrowing books as though jim@help.ca has not yet borrowed any books. I want each test to be independent of the others.
Is there any way of doing this? Maybe there's something I can put into my step definition classes, such as a teardown method. I've looked into it but I couldn't find any solid information. If there's a way, please help me. Any help is greatly appreciated and I thank you in advance!
Yes, you can do setups and teardowns before and after each scenario, but not in the step definition file. What you want are hooks.
Hooks run before or after a scenario, and can run before/after every scenario or just the ones you add a @tag to, for example:
@remove_borrowed_books
Scenario: Borrow over max limit
Unfortunately I have only used cucumber with ruby not java so I can't give you step-by-step instructions, but this should tell you what you need to know https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/
You can use the "@After" hook to achieve this, as @Derek has mentioned, using for example a Map of books borrowed per username:
private final Map<String, Integer> booksBorrowed = new HashMap<>();

@After
public void tearDown() {
    booksBorrowed.clear();
}

@Given("...")
public void givenUserBorrowsBook(String username) {
    booksBorrowed.put(username, booksBorrowed.containsKey(username) ? booksBorrowed.get(username) + 1 : 1);
    ....
}
Or the "@Before" hook to perform the cleanup before each scenario is executed, which is the option I would recommend:
private Map<String, Integer> booksBorrowed;

@Before
public void setUp() {
    booksBorrowed = new HashMap<>();
}
If you are planning to run scenarios in parallel then the logic will be more complex as you will need to maintain the relationship between the thread executing a particular scenario and the usernames used on that thread.
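For the parallel case, one common sketch (hypothetical, not from the answer above) is to hold the per-scenario state in a ThreadLocal, so each scenario thread sees its own map without any explicit thread-to-username bookkeeping:

```java
import java.util.HashMap;
import java.util.Map;

public class BorrowState {
    // Each scenario thread gets its own map, so parallel scenarios don't share state.
    private static final ThreadLocal<Map<String, Integer>> booksBorrowed =
            ThreadLocal.withInitial(HashMap::new);

    // Would be called from a @Given/@When step definition.
    public static void borrow(String username) {
        booksBorrowed.get().merge(username, 1, Integer::sum);
    }

    public static int borrowedBy(String username) {
        return booksBorrowed.get().getOrDefault(username, 0);
    }

    // Would be called from a @Before or @After hook to reset the scenario's state.
    public static void reset() {
        booksBorrowed.remove();
    }
}
```

The hook code stays identical to the single-threaded version; only the storage changes, and reset() in a @Before/@After hook still gives every scenario a clean slate.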

Use of @Test in JUnit while data driving

I am trying to figure out if something is possible (or if I am being a bit silly).
I have a very simple Excel sheet with 2 columns: one column is a list of search terms, the second is a list of expected URLs. I run this via Selenium: it navigates to Google, opens the Excel sheet, searches for the term, and if the expected result appears the test passes. It does this for the three rows in the sheet. All good. However, I was hoping to @Test each of the rows, but I can't quite figure out how to achieve this.
Below is the test code. Like I said, I can't quite get this to work; at present it runs, but appears as a single test which has performed 3 different searches.
@Test
@Severity(SeverityLevel.CRITICAL)
public void driveDatData() throws InterruptedException, BiffException, IOException {
    parameters = WebDriverSteps.currentDriver.toString();
    steps.openWebPage("http://www.google.co.uk");
    FileInputStream fi = new FileInputStream("C:\\temp\\sites.xls");
    Workbook w = Workbook.getWorkbook(fi);
    Sheet s = w.getSheet("Sheet1");
    for (int i = 1; i <= s.getRows(); i++) {
        if (i > 1) {
            steps.goToURL("http://www.google.co.uk");
        }
        steps.search(s.getCell("A" + i).getContents());
        Assert.assertTrue("Check the " + s.getCell("A" + i).getContents() + " link is present",
                steps.checkForTextPresent(s.getCell("B" + i).getContents()));
    }
}
A couple of things:
I assume it makes sense for you to have your test data in an external Excel sheet? Otherwise the more common approach would be to keep test data within your project as a test resource. Also, there are various frameworks around that can help you retrieve test data from Excel files.
Having said this:
Change your code to populate the test data into a data structure in @Before, and write different @Tests that test different things. This also separates the retrieval of the test data from the actual test (which is a good thing in terms of maintainability and responsibilities). If file reading / performance is an issue, you might want to use @BeforeClass to do this only once per test class.
@Before
// read file, store information into myTestData

@Test
// tests against myTestData.getX

@Test
// tests against myTestData.getY
For any good test code, the complexity should be 1. Any loops should be replaced by parameterized tests. Please take a look at https://github.com/junit-team/junit/wiki/Parameterized-tests.
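The shape of a JUnit 4 parameterized test, sketched here without the JUnit dependency so it stays self-contained (in real code the class would carry @RunWith(Parameterized.class), a @Parameters method supplying the rows, and JUnit would call the constructor once per row; all names and data below are hypothetical):

```java
import java.util.List;

public class SearchTermTest {
    // One row per test case, as a @Parameters method would supply (searchTerm, expectedUrl).
    static List<String[]> data() {
        return List.of(
                new String[] { "cheese", "https://en.wikipedia.org/wiki/Cheese" },
                new String[] { "junit", "https://junit.org" });
    }

    private final String searchTerm;
    private final String expectedUrl;

    // With @RunWith(Parameterized.class), JUnit constructs one instance per data row.
    SearchTermTest(String searchTerm, String expectedUrl) {
        this.searchTerm = searchTerm;
        this.expectedUrl = expectedUrl;
    }

    // The @Test body: each row becomes its own separately reported test.
    boolean run() {
        // stand-in for steps.search(...) + steps.checkForTextPresent(...)
        return expectedUrl.toLowerCase().contains(searchTerm.toLowerCase());
    }

    public static void main(String[] args) {
        for (String[] row : data()) {
            SearchTermTest test = new SearchTermTest(row[0], row[1]);
            System.out.println(row[0] + " -> " + (test.run() ? "PASS" : "FAIL"));
        }
    }
}
```

The key difference from the loop in the question: the runner, not the test body, iterates the rows, so each row shows up as its own pass/fail in the report.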
I would suggest you add Feed4JUnit to your project. It is highly configurable, and the only library I know of that can do parameterized JUnit and TestNG tests out of the box with Excel support.
@RunWith(Feeder.class)
public class AddTest {
    @Test
    @Source("http://buildserv.mycompany.com/wiki/myproject/tests/add.xls")
    public void testAdd(int param1, int param2, int expectedResult) {
        int result = MyUtil.add(param1, param2);
        assert result == expectedResult;
    }
}
This example comes straight from the Feed4JUnit site.
It is important to note that the parameters are read left to right.
Each row is a test and must have valid values in each column; i.e. if a column has the same value for 3 rows, it still needs to appear in each row.
After a bit of effort I managed to get this working in JUnit using @RunWith. I found a few examples which, while not exactly what I wanted, gave enough insight to get this working for me with JUnit.
