Optimized Way for Data Parameterization in TestNG - java

I am trying to create a framework using Selenium and TestNG. As part of the framework I am trying to implement data parameterization, but I am confused about the most efficient way to implement it. Here are the two approaches I have tried:
With data providers (reading from Excel and storing in Object[][])
With testng.xml
Issues with data providers:
Suppose my test needs to handle a large volume of data, say 15 different values; then I need to pass 15 parameters to it. Alternatively, if I create a TestData class to hold these parameters, then every test will need a different data set, so my TestData class will end up with more than 40 different fields.
For example, on an e-commerce web site there are many different parameters, for accounts, cards, products, rewards, history, store locations, etc. For these we may need at least 40 different fields declared in TestData, which I don't think is a sensible solution. Some tests may need 10 different pieces of test data, some may need 12. Even within a single test, one iteration may need only 7 parameters while another needs 12.
How do I manage this effectively?
Issues with testng.xml:
Maintaining 20 different accounts, 40 different product details, cards, history, etc. in a single XML file, together with suite configuration such as parallel execution and selecting particular classes to run, will make the testng.xml file a mess.
So can you please suggest an efficient way to handle data in a testing framework?
How is data parameterization, and iterating with different test data, handled in real-world projects?

Assuming that every test knows what sort of test data it is going to receive, here's what I would suggest you do:
Have your TestNG suite XML file pass the name of the file from which data is to be read to the data provider.
Build your data provider so that it receives the file name via TestNG parameters and then builds a generic map per test data iteration (every test will receive its parameters as a key/value map) and works with the passed-in map.
This way you will have just one data provider which can handle literally anything. You can make your data provider a bit more sophisticated by having it inspect the test method and provide values accordingly.
Here's a skeleton implementation of what I am talking about.
import java.util.List;
import java.util.Map;

import org.testng.ITestContext;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;

public class DataProviderExample {

    @Test(dataProvider = "dp")
    public void testMethod(Map<String, String> testdata) {
        System.err.println("****" + testdata);
    }

    @DataProvider(name = "dp")
    public Object[][] getData(ITestContext ctx) {
        // Retrieves the value of <parameter name="fileName" value="..."/> from within
        // the <test> tag of the suite XML file.
        String fileName = ctx.getCurrentXmlTest().getParameter("fileName");
        List<Map<String, String>> maps = extractDataFrom(fileName);
        Object[][] testData = new Object[maps.size()][1];
        for (int i = 0; i < maps.size(); i++) {
            testData[i][0] = maps.get(i);
        }
        return testData;
    }

    private static List<Map<String, String>> extractDataFrom(String file) {
        // Stub: replace with actual parsing of your data file.
        List<Map<String, String>> maps = Lists.newArrayList();
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        return maps;
    }
}
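To make the skeleton concrete, here is a minimal sketch of what the extractDataFrom stub could look like. It assumes the Excel sheet has been exported to CSV with a header row (column names become the map keys); the class and method names are illustrative, and for real .xls/.xlsx files you would use a library such as Apache POI instead of this JDK-only parser.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CsvTestData {

    // Parses CSV content whose first line holds the column names and returns
    // one map per data row, keyed by column name. Rows with fewer cells than
    // headers get empty strings for the missing columns.
    public static List<Map<String, String>> extractDataFrom(Reader source) throws IOException {
        List<Map<String, String>> rows = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String headerLine = reader.readLine();
            if (headerLine == null) {
                return rows; // empty file -> no test data
            }
            String[] headers = headerLine.split(",");
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().isEmpty()) {
                    continue; // skip blank lines
                }
                String[] cells = line.split(",", -1); // keep trailing empty cells
                Map<String, String> row = new LinkedHashMap<>();
                for (int i = 0; i < headers.length; i++) {
                    row.put(headers[i].trim(), i < cells.length ? cells[i].trim() : "");
                }
                rows.add(row);
            }
        }
        return rows;
    }
}
```

Because each test receives the whole map, a test that needs 7 parameters and one that needs 12 can share the same provider: each just reads the keys it cares about.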

I'm actually currently trying to do the same (or similar) thing. I write automation to validate product data on several eComm sites.
My old method
The data comes in Excel sheet format that I process slightly to get it into the format I want. I run the automation, which reads from Excel and executes the runs sequentially.
My new method (so far, WIP)
My company recently started using SauceLabs so I started prototyping ways to take advantage of X # of VMs in parallel and see the same issues as you. This isn't a polished or even a finished solution. It's something I'm currently working on but I thought I would share some of what I'm doing to see if it will help you.
I started reading SauceLabs docs and ran across the sample code below which started me down the path.
https://github.com/saucelabs-sample-scripts/C-Sharp-Selenium/blob/master/SaucePNUnit_Test.cs
I'm using NUnit and I found in their docs a way to pass data into the test that allows parallel execution and allows me to store it all neatly in another class.
https://github.com/nunit/docs/wiki/TestFixtureSource-Attribute
This keeps me from having a bunch of [TestFixture] tags stacked on top of my script class (as in the demo code above). Right now I have
[TestFixtureSource(typeof(Configs), "StandardBrowsers")]
[Parallelizable]
public class ProductSetupUnitTest
where the Configs class contains an object[] called StandardBrowsers, like
public class Configs
{
    static object[] StandardBrowsers =
    {
        new object[] { "chrome", "latest", "windows 10", "Product Name1", "Product ID1" },
        new object[] { "chrome", "latest", "windows 10", "Product Name2", "Product ID2" },
        new object[] { "chrome", "latest", "windows 10", "Product Name3", "Product ID3" },
        new object[] { "chrome", "latest", "windows 10", "Product Name4", "Product ID4" },
    };
}
I actually got this working this morning so I know now the approach will work and I'm working on ways to further tweak and improve it.
So, in your case you would just load up the object[] with all the data you want to pass. You will probably have to declare a string for each of the possible fields you might want to pass; if you don't need a particular field in a given run, pass an empty string.
My next step is to populate the object[] by loading the data from Excel. The pain for me is how to do logging. I have a pretty mature logging system in my existing sequential execution script, and it's going to be hard to give that up or settle for something with reduced functionality. Currently I write everything to a CSV, load that into Excel, and then I can quickly process failures using Excel filtering, etc. My current thought is to have each script write its own CSV and then pull them all together after all the runs are complete. That part is still theoretical right now, though.
Hope this helps. Feel free to ask me questions if something isn't clear. I'll answer what I can.


Run each cucumber test independently

I'm using Cucumber and Maven in Eclipse, and what I'm trying to do is run each test independently. For example, I have library system software that basically allows users to borrow books and do other things.
One of the conditions is that users can only borrow a maximum of two books, so I wrote a test to make sure that the functionality works. This is my feature file:
Scenario: Borrow over max limit
  Given "jim@help.ca" logs in to the library system
  When "jim@help.ca" order his first book with ISBN "9781611687910"
  And "jim@help.ca" orders another book with ISBN "9781442667181"
  And "jim@help.ca" tries to order another book with ISBN "1234567890123"
  Then jim will get the message that says "The User has reached his/her max number of books"
I wrote a corresponding step definition file and everything worked out great. However, in the future I want to use the same username ("jim@help.ca") for borrowing books as though jim@help.ca has not yet borrowed any books. I want each test to be independent of the others.
Is there any way of doing this? Maybe there's something I can put into my step definition classes, such as a teardown method. I've looked into it but I couldn't find any solid information. If there's a way, please help me. Any help is greatly appreciated and I thank you in advance!
Yes, you can do setups and teardowns before and after each scenario, but not in the step definition file. What you want are hooks.
Hooks run before or after a scenario, and can run before/after every scenario or just the ones you add a @tag to, for example:
@remove_borrowed_books
Scenario: Borrow over max limit
Unfortunately I have only used Cucumber with Ruby, not Java, so I can't give you step-by-step instructions, but this should tell you what you need to know: https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/
You can use the @After hook to achieve this, as @Derek has mentioned, using for example a map of books borrowed per username:
private final Map<String, Integer> booksBorrowed = new HashMap<>();

@After
public void tearDown() {
    booksBorrowed.clear();
}

@Given("...")
public void givenUserBorrowsBook(String username) {
    booksBorrowed.put(username, booksBorrowed.containsKey(username) ? booksBorrowed.get(username) + 1 : 1);
    ....
}
Or the @Before hook to perform the cleanup before each scenario is executed, which is the option I would recommend:
private Map<String, Integer> booksBorrowed;

@Before
public void setUp() {
    booksBorrowed = new HashMap<>();
}
If you are planning to run scenarios in parallel then the logic will be more complex as you will need to maintain the relationship between the thread executing a particular scenario and the usernames used on that thread.
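As a sketch of that thread-to-scenario concern (the class and method names here are illustrative, not from the question's code), one common approach is to hold the per-scenario state in a ThreadLocal so that each scenario thread gets its own map:

```java
import java.util.HashMap;
import java.util.Map;

public class ScenarioState {

    // Each thread (i.e. each scenario running in parallel) sees its own map,
    // so books borrowed in one scenario never leak into another.
    private static final ThreadLocal<Map<String, Integer>> BOOKS_BORROWED =
            ThreadLocal.withInitial(HashMap::new);

    public static void borrow(String username) {
        // Increment the borrow count for this username on this thread only.
        BOOKS_BORROWED.get().merge(username, 1, Integer::sum);
    }

    public static int borrowedBy(String username) {
        return BOOKS_BORROWED.get().getOrDefault(username, 0);
    }

    // Call this from a @Before (or @After) hook so each scenario starts clean.
    public static void reset() {
        BOOKS_BORROWED.get().clear();
    }
}
```

With this shape, the step definitions call ScenarioState.borrow(...) and the hook calls ScenarioState.reset(), and the same username can be reused across concurrently running scenarios without interference.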

CommandExecuteIn Background throws a "Not an (encodable) value" error

I am currently trying to implement file exports in the background so that the user can do other actions while the file is downloading.
I used the Apache Isis CommandExecuteIn.BACKGROUND action attribute. However, I got the error
"Not an (encodable) value", which is thrown by the ScalarValueRenderer class.
This is how my method looks:
@Action(semantics = SemanticsOf.SAFE,
        command = CommandReification.ENABLED,
        commandExecuteIn = CommandExecuteIn.BACKGROUND)
public Blob exportViewAsPdf() {
    final Contact contact = this;
    final String filename = this.businessName + " Contact Details";
    final Map<String, Object> parameters = new HashMap<>();
    parameters.put("contact", contact);
    final String template = templateLoader.buildFromTemplate(Contact.class, "ContactViewTemplate", parameters);
    return pdfExporter.exportAsPdf(filename, template);
}
I think the error has something to do with the command not actually invoking the action but instead returning the persisted background command.
This implementation actually worked on a method with no return type. Did I miss something? Or is there a way to implement a background command and get the expected results?
Interesting use case, but it's not one I anticipated when that part of the framework was implemented, so I'm not surprised it doesn't work. Obviously the error message you are getting here is pretty obscure, so I've raised a
JIRA ticket to see if we can at least improve that.
I'm interested to know what user experience you think the framework should provide here.
In the Estatio application that we work on (which has driven out many of the features added to the framework over the last few years) we have a somewhat similar requirement: to obtain PDFs from a reporting server (which takes 5 to 10 seconds) and then download them. This is for all the tenants in a shopping centre, so there could be 5 to 50 of these to generate in a single go. The design we went with was to move the rendering into a background command (similar to the templateLoader.buildFromTemplate(...) and pdfExporter.exportAsPdf(...) method calls in your code fragment), and to capture the output as a Document via the document module. We then use the pdfbox addon to stitch all the document PDFs together into a single downloadable PDF for printing.
Hopefully that gives you some ideas of a different way to support your use case
Thx
Dan

Enumerate Custom Slot Values from Speechlet

Is there any way to inspect or enumerate the custom slot values that are set up in your interaction model? For instance, say you have an intent schema with the following intent:
{
  "intent": "MySuperCoolIntent",
  "slots": [
    {
      "name": "ShapesNSuch",
      "type": "LIST_OF_SHAPES"
    }
  ]
}
Furthermore, you've defined the LIST_OF_SHAPES custom slot type to have the following values:
SQUARE
TRIANGLE
CIRCLE
ICOSADECAHECKASPECKAHEDRON
ROUND
HUSKY
Question: is there a method I can call from my Speechlet or my RequestStreamHandler that will give me an enumeration of those custom slot values?
I have looked through the Alexa Skills Kit SDK Javadocs located here
and I'm not finding anything.
I know I can get the value of a slot that is sent in with the intent:
String slotValue = incomingIntentRequest.getIntent().getSlot("ShapesNSuch").getValue();
I can even enumerate ALL the incoming slots (and with them their values):
Map<String, Slot> slotMap = incomingIntentRequest.getIntent().getSlots();
for (Map.Entry<String, Slot> entry : slotMap.entrySet()) {
    String key = entry.getKey();
    Slot slot = entry.getValue();
    String slotName = slot.getName();
    String slotValue = slot.getValue();
    // do something nifty with the current slot info....
}
What I would really like is something like:
String myAppId = "amzn1.echo-sdk-ams.app.<TheRestOfMyID>";
List<String> possibleSlotValues = SomeMagicAlexaAPI.getAllSlotValues(myAppId, "LIST_OF_SHAPES");
With this information I wouldn't have to maintain two separate lists or enumerations: one in the interaction model and another in my request handler. Seems like this should be a thing, right?
No, the API does not allow you to do this.
However, since your interaction model is intimately tied to your development, I would suggest you check the model into your source control system alongside your source code. Depending on your language, that also means you can probably read it at run-time.
Using this technique you can gain access to your interaction model at run-time. Instead of getting it automatically through an API, you do it by best practice.
You can see several examples of this in action for Java in TsaTsaTzu's examples.
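As a minimal sketch of that best practice (the file name, path, and format here are assumptions, not part of any Alexa API): if you keep the custom slot values in a plain text file, one value per line, exactly as entered in the developer console, you can load them at run-time with stdlib Java alone:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class SlotValueLoader {

    // Reads one slot value per line, skipping blank lines, mirroring the
    // newline-separated list pasted into the interaction model editor.
    public static List<String> loadSlotValues(Reader source) throws IOException {
        List<String> values = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                String value = line.trim();
                if (!value.isEmpty()) {
                    values.add(value);
                }
            }
        }
        return values;
    }
}
```

In the skill itself you might then call loadSlotValues(new InputStreamReader(getClass().getResourceAsStream("/LIST_OF_SHAPES.txt"))), where the resource path is whatever location you chose when checking the model in alongside your source. The single checked-in file is then the one source of truth for both the console and your handler.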
No - there is nothing in the API that allows you to do that.
You can see the full extent of the Request Body structure Alexa gives you to work with. It is very simple and available here:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interface-reference#Request%20Format
Please note, the request body is not to be confused with the request, which is a structure inside the request body with two siblings: version and session.

Use of #Test in JUnit while data driving

I am trying to figure out if something is possible (or if I am being a bit silly).
I have a very simple Excel sheet with 2 columns: one column is a list of search terms, the second is a list of expected URLs. I run this via Selenium: it navigates to Google, opens the Excel sheet, searches for each term, and passes if the expected result appears. It does this for the three rows in the sheet. All good. However, I was hoping to @Test each of the rows, but I can't quite figure out how to achieve this.
Below is the test code. Like I said, I can't quite get this to work: at present it runs, but appears as a single test which has performed 3 different searches.
@Test
@Severity(SeverityLevel.CRITICAL)
public void driveDatData() throws InterruptedException, BiffException, IOException {
    parameters = WebDriverSteps.currentDriver.toString();
    steps.openWebPage("http://www.google.co.uk");
    FileInputStream fi = new FileInputStream("C:\\temp\\sites.xls");
    Workbook w = Workbook.getWorkbook(fi);
    Sheet s = w.getSheet("Sheet1");
    for (int i = 1; i <= s.getRows(); i++) {
        if (i > 1) {
            steps.goToURL("http://www.google.co.uk");
        }
        steps.search(s.getCell("A" + i).getContents());
        Assert.assertTrue("Check the " + s.getCell("A" + i).getContents() + " link is present",
                steps.checkForTextPresent(s.getCell("B" + i).getContents()));
    }
}
A couple of things:
I assume it makes sense for you to keep your test data in an external Excel sheet? Otherwise the more common approach would be to keep test data within your project as a test resource. Also, there are various frameworks around that can help you retrieve test data from Excel files.
Having said this:
Change your code to populate the test data into a data structure in a @Before method, then write different @Tests that test different things. This also separates the retrieval of the test data from the actual test (which is a good thing in terms of maintainability and responsibilities). If file reading / performance is an issue, you might want to use @BeforeClass to do this only once per test class.
@Before
// read file, store information into myTestData

@Test
// tests against myTestData.getX

@Test
// tests against myTestData.getY
For good test code, cyclomatic complexity should be 1. Any loops should be replaced by parameterized tests. Please take a look at https://github.com/junit-team/junit/wiki/Parameterized-tests.
I would suggest you add Feed4JUnit to your project.
It is highly configurable and the only library I know that can do parameterized JUnit and TestNG tests out-of-the-box with Excel support.
@RunWith(Feeder.class)
public class AddTest {

    @Test
    @Source("http://buildserv.mycompany.com/wiki/myproject/tests/add.xls")
    public void testAdd(int param1, int param2, int expectedResult) {
        int result = MyUtil.add(param1, param2);
        assert result == expectedResult;
    }
}
This example comes straight from the Feed4JUnit site.
It is important to note that the parameters are read left-to-right.
Each row is a test and must have valid values in each column; i.e., if a column has the same value for 3 rows, that value still needs to appear in each row.
After a bit of effort I managed to get this working in JUnit using @RunWith. I found a few examples which, while not exactly what I wanted, gave enough insight to get this working for me with JUnit.

Spring-Data does not find IndexFields for Indices created via MongoDB shell

We are using Spring Data MongoDB to access our MongoDB from a Java application. In general everything works fine, but I encountered one odd behavior.
When initializing our repositories in the Java code we use ensureIndex to create indices on the collections. In a unit test we read all indices from the collections as IndexInfo objects and check whether those IndexInfo objects contain the fields we want to index in the indexFields member. This worked fine when we set everything up.
Then we had to recreate one of the indices in our production environment, so we dropped it and created it again using the Mongo shell. The system seems to run fine and no issues came up. For consistency we then made the same change to our test and even local environments in the same way. Then we noticed that our unit test for index checking fails because the indexFields member is now empty.
I tried everything I can imagine, but as soon as I create an index using the Mongo shell, Spring does not deliver any index fields anymore, even when I create an index with an identical configuration.
Can anyone tell me why this happens and whether it indicates a problem? Is there a way to fix this without having to drop the collection? I was thinking about dropping the index after our next production release and then triggering an insert. On my local machine this created the index the way I expected and the test succeeds.
---- Additional info -----
Hi Trisha,
sorry for not acting sooner, but I only just got time to build a small unit test for this.
If you run the following test on an empty DB, it works fine:
@Test
public void testIndexing() throws Exception {
    this.mongoTemplate.indexOps("testcollection").ensureIndex(
            new Index().on("indexfield", Order.ASCENDING).unique().sparse());
    List<IndexInfo> indexInfos = mongoTemplate.indexOps("testcollection").getIndexInfo();
    assertEquals("We want two indexes, id and indexfield", 2, indexInfos.size());
    for (IndexInfo info : indexInfos) {
        assertEquals("All indexes are only meant to have one field", 1, info.getIndexFields().size());
        if (info.getName().startsWith("indexfield")) {
            assertTrue("Unexpected index field", info.isIndexForFields(Arrays.asList(new String[]{ "indexfield" })));
            assertTrue("Index indexfield must be unique", info.isUnique());
            assertTrue("Index indexfield must be sparse", info.isSparse());
            assertFalse("Index indexfield must not be dropping duplicates", info.isDropDuplicates());
        } else if (!"_id_".equals(info.getName())) {
            fail("Unexpected index: '" + info.getName() + "'");
        }
    }
}
Then open the mongo shell and call:
db.testcollection.dropIndexes();
db.testcollection.ensureIndex({"indexfield":1}, {"unique":true, "sparse":true})
The second call should create exactly the same index as the Java code did. Now if you run the test again, the ensureIndex method does nothing because an index is already there (as it should, I guess), but the test fails on the assert for the index fields. The first assert works fine because the index info is there.
Checking the indexes in the Mongo shell produces the same output whether the index was created via the shell or via Java code, but Spring does not return the index fields when the index was created via the shell.
It would be really cool if you could give me a hint on this.
Thanks to your updated question I was able to reproduce your problem. I finally tracked the issue down to a mismatch between how MongoDB stores the index information and how Spring Data expects it.
When you create an index using the following:
db.testcollection.ensureIndex({"indexfield":1}, {"unique":true, "sparse":true})
under the covers it stores the index as:
{
    "v" : 1,
    "key" : {
        "indexfield" : 1
    },
    "unique" : true,
    "ns" : "TheDatabase.testcollection",
    "name" : "indexfield_1",
    "sparse" : true
}
However, by default MongoDB treats all numbers as floating point, so it is secretly thinking of this as:
"key" : {
    "indexfield" : 1.0
},
Spring Data expects this value to be an integer, since that is how it stores the value when it creates indexes itself, and so it cannot correctly parse an index created in the shell (incidentally, this means that geo indexes created in the shell will be parsed fine by Spring Data, as those are stored as strings).
I would recommend reporting this to the Spring team; you will not be the only one who experiences it. I found the problem in DefaultIndexOperations, lines 138-141, in version 1.1.1 of spring-data-mongodb.
However, you'll be pleased to hear there is a workaround. You can force your command in the shell to store the value as an integer (so Spring can correctly parse the index) using the following command:
db.testcollection.ensureIndex({indexfield: NumberInt(1) }, {unique:true, sparse:true})
It's a bit clumsy, but when I followed your steps and applied this command in the shell, the test passed.
