Java Cucumber: creating scenario outlines with dynamic examples

We have a test where, basically, we need to input a specific value into a web site and make sure another value comes out. The input-output data for this is stored in an XML file.
We can create a single Scenario that runs once and loops through, submitting each value. However, we run into reporting problems: if 2 out of 100 pairs fail, we want to know which ones, not just get an assertion error for the whole scenario.
We would get much clearer reporting using a Scenario Outline where all the values are in the Examples table. Then the scenario itself runs repeatedly, and we can fail an individual set as an assertion error and have that show up clearly in a report.
Problem: we do not want to hard-code all the values from the XML into the .feature file. It's noisy, and if the values change it's slow to update. We would rather just provide the XML, parse it, and go; if things change we just drop in an updated XML.
Is there a way to create dynamic examples where we can run the scenario repeatedly, once for each data case, without explicitly defining it in the Examples table?

Using Cucumber for this is a bad idea. You should test this functionality lower down your stack with a unit test.
At some point in your code, after the user has input their value, the value will be passed to a method/function that will return your answer. This is the place to do this sort of testing.
A Cucumber test going through the whole stack will be upwards of three orders of magnitude slower than a well-written unit test. So you could test thousands of pairs of values in your unit tests in the time it takes to run one single cuke.
If you do this sort of testing in Cucumber you will quickly end up with a test suite that takes far too long to run, or that can only be run quickly at great expense. This is very damaging to a project.
Cuking should be about one happy path (The user can enter a value and see the result) and maybe a sad path (the user enters a bad value and sees an error/explanation). Anything else needs to be pushed down to unit tests.
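For illustration, here is a minimal sketch of that unit-level approach using JUnit 5 dynamic tests, so each XML pair still reports its own pass/fail. The <case input=".." expected=".."/> layout and the Calculator.compute() method are assumptions, standing in for your real data format and the code under test:

import java.io.File;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InputOutputPairsTest {

    @TestFactory
    List<DynamicTest> eachPairFromXml() throws Exception {
        // Parse the same XML file the Scenario Outline would have used
        NodeList cases = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("src/test/resources/pairs.xml"))
                .getElementsByTagName("case");

        List<DynamicTest> tests = new ArrayList<>();
        for (int i = 0; i < cases.getLength(); i++) {
            Element c = (Element) cases.item(i);
            String input = c.getAttribute("input");
            String expected = c.getAttribute("expected");
            // One named test per pair: a failure reports exactly which pair broke
            tests.add(DynamicTest.dynamicTest("input=" + input,
                    () -> assertEquals(expected, Calculator.compute(input))));
        }
        return tests;
    }
}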

The NoraUi framework does exactly what you want to do in your project. The NoraUi code is open source. If you have questions about this framework, you can post an issue with the tag "Question".

Related

Karate framework: how to reuse a variable used in one scenario to be called in another without declaring it as a global variable [duplicate]

Does Karate support a feature where you can define a variable in one scenario and reuse it in other scenarios in the same feature file? I tried doing that but get an error. What's the best way to reuse variables within the same feature file?
Scenario: Get the request Id
* url baseUrl
Given path 'eam'
When method get
Then status 200
And def reqId = response.teams[0].resourceRequestId
Scenario: Use the above generated Id
* url baseUrl
* print 'From the previous Scenario: ' + reqId
Error:
Caused by: javax.script.ScriptException: ReferenceError: "reqId" is not defined in <eval> at line number 1
Use a Background: section. Here is an example:
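A minimal sketch, reusing the steps from the question (the 'eam' path and response shape are taken as-is from there); the Background: runs before each Scenario:

Feature: reuse a value across scenarios

Background:
* url baseUrl
* path 'eam'
* method get
* status 200
* def reqId = response.teams[0].resourceRequestId

Scenario: Use the Id from the Background
* print 'From the Background: ' + reqId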
EDIT: a variable defined in the Background: will be re-initialized for every Scenario, which is standard test-framework "set up" behavior. You can use hooks such as callonce if you want the initialization to happen only once.
If you are trying to modify a variable in one scenario and expect it to be now having that modified value when the next Scenario starts, you have misunderstood the concept of a Scenario. Just combine your steps into one Scenario, because think about it: that is the "flow" you are trying to test.
Each Scenario should be able to run stand-alone. In the future the execution order of Scenario-s could even be random or run in parallel.
Another way to explain this is - if you comment out one Scenario other ones should continue to work.
Please don't think of the Scenario as a way to "document" the important parts of your test. You can always use comments (e.g. # foo bar). Some teams assume that each HTTP "end point" should live in a separate Scenario, but this is absolutely not recommended. Look at the Hello World example itself: it deliberately shows 2 calls, a POST and a GET!
You can easily re-use code using call, so you should not worry about code duplication becoming an issue.
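For example (a sketch; 'common.feature' is a hypothetical re-usable feature, not from the question):

* def created = call read('classpath:common.feature') { name: 'someValue' }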
Also - it is fine to have some code duplication, if it makes the flow easier to read. See this answer for details - and also read this article by Google.
EDIT: if you would like to read another answer that answers a similar question: https://stackoverflow.com/a/59433600/143475

How to handle dynamic step in cucumber without AfterStep hook?

I am currently using Cucumber (info.cukes) with Selenium to run automated tests.
Now, I have a situation where a specific step can occur at any point in the flow.
So, I have to design a Cucumber scenario that verifies the dynamic page at every step.
How can I implement this without an AfterStep hook? (Cucumber (info.cukes) doesn't support the AfterStep hook.)
Example:
Scenario: Complete the order.
Given Open URL with Chrome browser
When Login with correct ID and password
Then Complete the details on step 1
And Complete the details on step 2
And Complete the details on step 3
My application has a dynamic page that can appear between any two pages, so at every step I need to check whether that page is displayed, execute the specific task when it is, and then move on to the next step in the scenario.
Could someone please help me achieve this scenario with Cucumber-Selenium automation?
Thanks for your help.
When it comes to keeping end-to-end test code DRY, page objects are almost always the answer (or at least, they're a great place to start). Even if you had the AfterStep hook, I'd caution against adding too much implicit stuff there; it can be a real headache to follow the flow and debug, especially for others.
In your case, I could imagine a page object for each of the three pages in the workflow, each with a clickSubmit() method that checks for the URL of the mystery page and completes it if present. Something like:
public void clickSubmit() {
    click(By.className("submitButton"));
    // Handle the intermediate page if it appeared after submitting
    if (driver.getCurrentUrl().contains("mysterypage")) {
        MysteryPage mysteryPage = new MysteryPage(driver);
        mysteryPage.completeForm();
        mysteryPage.clickSubmit();
    }
}
Admittedly, it's a little strange for a method called clickSubmit to be doing all that, so maybe it would be better for a helper method to exist up in the test and just be called at the end of each step, as sketched below.
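A sketch of that helper variant (the class and method names are illustrative, not from the question; MysteryPage is the page object from the snippet above):

import org.openqa.selenium.WebDriver;

public final class MysteryPageGuard {

    private MysteryPageGuard() {
    }

    // Called at the end of each step definition; completes the
    // intermediate page whenever it happens to be showing.
    public static void handleIfPresent(WebDriver driver) {
        if (driver.getCurrentUrl().contains("mysterypage")) {
            MysteryPage mysteryPage = new MysteryPage(driver);
            mysteryPage.completeForm();
            mysteryPage.clickSubmit();
        }
    }
}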
As an afterthought, if you have real business rules around when and where this intermediate page shows up, and it's not just random, it may be worth capturing them in the Gherkin. If the user really cares that it shows up here and not there, but you made the Gherkin blind to its appearance so it always "just works", you could be masking a bug.

JBehave steps marked as PENDING randomly

Hey!
I have a bunch of story files with some scenarios, calling a WSDL service and checking the answer. The step functions have been written properly (as in, no exceptions are thrown and they assert true when they have to, and vice versa) and the service returns a valid answer.
My problem is that in some scenarios, steps (defined as And in the story file, after a Given or a Then) get marked as Pending, yet IntelliJ puts the same green tick beside them as beside any other successful step.
The next time I run the test they usually aren't pending anymore, but these marks come and go randomly, which is kind of annoying, because I can't be sure whether the step has actually been called or had any effect.
I can't show the code unfortunately, but one guess of mine is that in the Given steps I just collect a number of Condition<>-s (from AssertJ) and don't actually do an assertion in the Given step, but assert them all in the one and only When step.
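For reference, a sketch of the pattern being described here - collecting AssertJ Conditions in the Given steps and asserting them all in the When step (the WSDL client call is a placeholder):

import java.util.ArrayList;
import java.util.List;
import org.assertj.core.api.Condition;
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.When;
import static org.assertj.core.api.Assertions.assertThat;

public class WsdlAnswerSteps {

    private final List<Condition<String>> expectations = new ArrayList<>();

    @Given("the answer contains $text")
    public void answerShouldContain(String text) {
        // No assertion here; just remember what to check later
        expectations.add(new Condition<>(s -> s.contains(text), "contains " + text));
    }

    @When("the service is called")
    public void callServiceAndVerify() {
        String response = callWsdlService();
        for (Condition<String> expectation : expectations) {
            assertThat(response).is(expectation);
        }
    }

    private String callWsdlService() {
        return ""; // stand-in; the real implementation calls the WSDL service
    }
}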

How to save all responses from a test suite in JMeter in one single file?

I am testing a new REST API for my company, and I am splitting all of its methods into test suites with different parameter combinations as test cases. I also build negative suites with added assertions.
The suites are huge - they contain more than 100-150 test cases each. I need to be able to save all responses from the suites, but in a single file. I found this article, which works for me but not entirely: http://gerardnico.com/wiki/jmeter/save_response_to_file - here I can add a 'Save Responses to a file' listener, but it creates a separate file for each response. Basically this is useless; I can check them one by one from the result tree directly in the tool. I have searched multiple articles but can't seem to find a resolution to my problem.
Thank you in advance!
Add a JSR223 Listener to your Test Plan (at the top level, so it will collect the data from all the samplers; see Scoping Rules for more information).
Put the following code into the "Script" area:
new File("responses.txt") << prev.getResponseDataAsString() + System.getProperty("line.separator")
The next time you run the test, you will see a responses.txt file in JMeter's "bin" folder containing all the responses.
prev is a shorthand for a SampleResult class instance; it provides programmatic access to the parent sampler's result, so you will be able to read or even update certain parts like the response code, body, message, overall success, etc. See the Groovy Is the New Black article to learn more about using Groovy scripting in JMeter tests.
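For instance, a small sketch of that programmatic access (only standard JSR223 bindings, nothing project-specific):

// Read parts of the parent sampler's result...
log.info('Response code: ' + prev.getResponseCode())
// ...or update them, e.g. fail the sample when the body contains ERROR
if (prev.getResponseDataAsString().contains('ERROR')) {
    prev.setSuccessful(false)
    prev.setResponseMessage('ERROR found in response body')
}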
I think the Simple Data Writer can solve your problem if you use it at the Test Plan level:
Right-click on Test Plan.
Add "Listener" >> "Simple Data Writer".
Add a file name.
This will write all results to a single file. (But note it records only the response code and message, not the response body.)

Where to store expected output of a test?

Writing a test, I expect the tested method to return certain outputs. Usually I'm checking that for a given database operation I get a certain output. My practice has usually been to write an array as a quick map/properties file in the test itself.
This solution is quick, and it is not vulnerable to run-time changes of an external file the expected results are loaded from.
A solution is to place the data in a Java source file, so I bloat the test less and still get a compile-time-checked test. How about this?
Or is loading the expected results as resources a better approach? A .properties file is not good enough, since I can have only one value per key. Is commons-config the way to go?
I'd prefer a simple solution where I can name several properties per key, so for each entry I might have a doc-length and a numFound property value (sounds like the elements of an XML node).
How do you go about this?
You must remember about maintaining such tests. After writing several web service tests with Spring-WS test support, I must admit that storing requests (test setup) and expected responses in external XML files wasn't such a good idea. Each request-response pair had the same name prefix as the test case, so everything was automated and very clean. But still, refactoring and diagnosing test failures became painful. After a while I realized that embedding the XML in the test case as a String, although ugly, is much easier to maintain.
In your case, I assume you invoke some database query and get a list of maps in response. What about writing a nice DSL to make assertions on these structures? Actually, FEST-Assert is quite good for that.
Let's say you test the following query (I know it's an oversimplification):
List<Map<String, Object>> rs = db.query("SELECT id, name FROM Users");
then you can simply write:
assertThat(rs).hasSize(1);
assertThat(rs.get(0))
    .hasSize(2)
    .includes(
        entry("id", 7),
        entry("name", "John")
    );
Of course it can and should be further simplified to fit your needs better. Isn't it easier to have a full test scenario on one screen rather than jump from one file to another?
Or maybe you should try FitNesse (it looks like you are no longer doing unit testing, so an acceptance-testing framework should be fine), where tests are stored in wiki-like documents, including tables?
Yes, using resources for expected results (and also setup data) works well and is pretty common.
XML may well be a useful format for you - being hierarchical can certainly help (one element per test method). It depends on the exact situation, but it's definitely an option. Alternatively, JSON may be easier for you. What are you comfortable with in terms of serialization APIs?
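As a sketch of the resource approach with JSON, assuming Jackson is on the test classpath and a hypothetical /expected-results.json with one object per test method:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

final class ExpectedResults {

    private static final JsonNode ROOT = load();

    private static JsonNode load() {
        try {
            return new ObjectMapper().readTree(
                    ExpectedResults.class.getResourceAsStream("/expected-results.json"));
        } catch (java.io.IOException e) {
            throw new IllegalStateException("cannot load expected results", e);
        }
    }

    // e.g. forTest("testFindUsers").get("numFound").asInt()
    static JsonNode forTest(String testMethodName) {
        return ROOT.get(testMethodName);
    }
}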
Given that you mention you are usually testing that a certain DB operation returns expected output, you may want to take a look at using DBUnit:
// Load expected data from an XML dataset
IDataSet expectedDataSet = new FlatXmlDataSetBuilder().build(new File("expectedDataSet.xml"));
ITable expectedTable = expectedDataSet.getTable("TABLE_NAME");
// Fetch the actual table state ("connection" being your IDatabaseConnection)
ITable actualTable = connection.createDataSet().getTable("TABLE_NAME");
// Assert the actual database table matches the expected table
Assertion.assertEquals(expectedTable, actualTable);
DBUnit handles comparing the state of a table after some operation has completed and asserting that the data in the table matches an expected DataSet. The most common format for the DataSet you compare the actual table state against is probably an XmlDataSet, where the expected data is loaded from an XML file, but there are other subclasses as well.
If you are already doing testing like this, then it sounds like you may have already written much of the same logic - but DBUnit may give you, for free, additional features you haven't implemented on your own yet.
