I am trying to figure out if something is possible (or if I am being a bit silly).
I have a very simple Excel sheet with 2 columns: one column is a list of search terms, the second column is a list of expected URLs. I run this via Selenium: it opens the Excel sheet, navigates to Google, searches for each term, and passes if the expected result appears. It does this for the three rows in the sheet. All good. However, I was hoping to make each row its own @Test, but I can't quite figure out how to achieve this.
Below is the test code. As I said, I can't quite get this to work; at present it runs, but it appears as a single test that performs 3 different searches.
@Test
@Severity(SeverityLevel.CRITICAL)
public void driveDatData() throws InterruptedException, BiffException, IOException {
    parameters = WebDriverSteps.currentDriver.toString();
    steps.openWebPage("http://www.google.co.uk");
    FileInputStream fi = new FileInputStream("C:\\temp\\sites.xls");
    Workbook w = Workbook.getWorkbook(fi);
    Sheet s = w.getSheet("Sheet1");
    for (int i = 1; i <= s.getRows(); i++) {
        if (i > 1) {
            steps.goToURL("http://www.google.co.uk");
        }
        steps.search(s.getCell("A" + i).getContents());
        Assert.assertTrue("Check the " + s.getCell("A" + i).getContents() + " link is present",
                steps.checkForTextPresent(s.getCell("B" + i).getContents()));
    }
}
A couple of things:
I assume it makes sense for you to keep your test data in an external Excel sheet? Otherwise the more common approach would be to keep test data within your project as a test resource. Also, there are various frameworks around that can help you retrieve test data from Excel files.
Having said this:
Change your code to populate the test data into a data structure in @Before, then write different @Tests that test different things. This also separates the retrieval of the test data from the actual test (which is a good thing in terms of maintainability and responsibilities). If file reading / performance is an issue, you might want to use @BeforeClass to do this only once per test class.
@Before
// read the file, store the information in myTestData
@Test
// tests against myTestData.getX
@Test
// tests against myTestData.getY
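As a rough sketch of that layout, reusing the jxl calls from the question (the class name, the row list, and the per-row tests are placeholders, not a verified implementation):
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.junit.BeforeClass;
import org.junit.Test;

import jxl.Sheet;
import jxl.Workbook;
import jxl.read.biff.BiffException;

public class GoogleSearchTest {

    // Each element holds { searchTerm, expectedUrl }, loaded once for the whole class.
    private static final List<String[]> rows = new ArrayList<String[]>();

    @BeforeClass
    public static void loadTestData() throws IOException, BiffException {
        Workbook w = Workbook.getWorkbook(new FileInputStream("C:\\temp\\sites.xls"));
        Sheet s = w.getSheet("Sheet1");
        for (int i = 1; i <= s.getRows(); i++) {
            rows.add(new String[] { s.getCell("A" + i).getContents(),
                                    s.getCell("B" + i).getContents() });
        }
        w.close();
    }

    @Test
    public void firstRowReturnsExpectedLink() {
        String[] row = rows.get(0);
        // drive the browser with row[0] and assert row[1] is present, as in the question;
        // the second and third rows get their own @Test methods in the same style
    }
}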
For good test code, cyclomatic complexity should be 1; any loops should be replaced by parameterized tests. Please take a look at https://github.com/junit-team/junit/wiki/Parameterized-tests.
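For the question's sheet, a hedged sketch of that parameterized approach might look like the following; the Selenium steps are left as comments because the steps object from the question isn't shown here.
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import jxl.Sheet;
import jxl.Workbook;
import jxl.read.biff.BiffException;

@RunWith(Parameterized.class)
public class GoogleSearchParameterizedTest {

    // One test instance is created per row; JUnit reports each row as its own test.
    @Parameters(name = "{0} -> {1}")
    public static Collection<Object[]> data() throws IOException, BiffException {
        Workbook w = Workbook.getWorkbook(new FileInputStream("C:\\temp\\sites.xls"));
        Sheet s = w.getSheet("Sheet1");
        List<Object[]> rows = new ArrayList<Object[]>();
        for (int i = 1; i <= s.getRows(); i++) {
            rows.add(new Object[] { s.getCell("A" + i).getContents(),
                                    s.getCell("B" + i).getContents() });
        }
        w.close();
        return rows;
    }

    private final String term;
    private final String expectedUrl;

    public GoogleSearchParameterizedTest(String term, String expectedUrl) {
        this.term = term;
        this.expectedUrl = expectedUrl;
    }

    @Test
    public void searchShowsExpectedLink() {
        // open http://www.google.co.uk, search for 'term' and assert that
        // 'expectedUrl' is present, reusing the steps object from the question
    }
}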
I would suggest you add Feed4JUnit to your project.
It is highly configurable and the only library I know that can do parameterized JUnit and TestNG tests out-of-the-box with Excel support.
Feed4JUnit:
@RunWith(Feeder.class)
public class AddTest {

    @Test
    @Source("http://buildserv.mycompany.com/wiki/myproject/tests/add.xls")
    public void testAdd(int param1, int param2, int expectedResult) {
        int result = MyUtil.add(param1, param2);
        assert result == expectedResult;
    }
}
This example comes straight from the Feed4JUnit site.
It is important to note that the parameters are read left to right.
Each row is a test and must have valid values in each column, i.e. if a column has the same value for 3 rows, that value still needs to appear in each of those rows.
After a bit of effort I managed to get this working in JUnit using @RunWith. I found a few examples which, while not exactly what I wanted, gave enough insight to get this working for me with JUnit.
I have more than 50 e-commerce websites on which I want to test multiple features, i.e. login, register, add to cart, checkout, etc., using my Selenium, Java and TestNG framework.
I am confused about choosing the best approach, because I am looking for a setup where, once I run the test, it goes to each website one by one, completes the testing and moves to the next.
For now I chose the TestNG DataProvider, but that does not seem to help. I am getting all the rows / data from the Excel sheet but can't split the data for the different functions, which is stopping me from using the data provider.
For example, my Excel sheet contains all the rows:
Url | Username | Password | Productname | creditcard number | shipping address
And my test class has a different method for each purpose:
public void login(String username, String password)
public void addtoCart(String productName)
public void FillTheCreditCard(String cardNumber)
But as per the design of the TestNG DataProvider, I cannot use only a few parameters from the Excel sheet as I need them; I have to pass all of them into the function even if they are not needed (which I feel is not good practice).
So now I am giving up on the TestNG data provider and looking for a better way to manage this.
What I assume you want is an array containing all the cell contents for each column, which you can do with jxl. I've come up with (but not tested) the following to grab what's in your "Url" column:
import java.io.File;
import java.util.Arrays;

import jxl.Cell;
import jxl.Sheet;
import jxl.Workbook;
...
File f = new File("C:\\Users\\data.xls");
Workbook wb = Workbook.getWorkbook(f);    // throws BiffException, IOException
Sheet sh = wb.getSheet(0);
int rows = sh.getRows();
String[] urlArray = new String[0];
for (int i = 0; i < rows; i++) {
    Cell c = sh.getCell(0, i);                                // column 0 ("Url"), row i
    urlArray = Arrays.copyOf(urlArray, urlArray.length + 1);  // grow the array by one
    urlArray[urlArray.length - 1] = c.getContents();
}
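If the goal is then to hand those values to TestNG, one possible (untested) way to wire that column into a data provider, so that each URL becomes its own test invocation, is sketched below; the class and method names are made up.
import java.io.File;
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import jxl.Sheet;
import jxl.Workbook;

public class SiteUrlTests {

    // Reads the "Url" column and hands each value to the test as its own invocation.
    @DataProvider(name = "urls")
    public Object[][] urls() throws Exception {
        List<String> urls = readColumn(0);                 // column 0 = "Url"
        Object[][] data = new Object[urls.size()][1];
        for (int i = 0; i < urls.size(); i++) {
            data[i][0] = urls.get(i);
        }
        return data;
    }

    @Test(dataProvider = "urls")
    public void openSite(String url) {
        // drive the browser to 'url' here
    }

    private List<String> readColumn(int column) throws Exception {
        Workbook wb = Workbook.getWorkbook(new File("C:\\Users\\data.xls"));
        Sheet sh = wb.getSheet(0);
        List<String> values = new ArrayList<String>();
        for (int i = 0; i < sh.getRows(); i++) {
            values.add(sh.getCell(column, i).getContents());
        }
        wb.close();
        return values;
    }
}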
I have a few concerns regarding the Cucumber framework:
1. I have a single feature file (the steps depend on each other) and I want to run all the scenarios in order; by default they run in a random order.
2. How do I run a single feature file multiple times?
I put some tags on it and tried to run it, but no luck.
#Given("Get abc Token")
public void get_abc_Token(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
DataTable data=dataTable.transpose();
String tkn= given()
.formParam("parm1",data.column(0).get(1))
.formParam("parm2", data.column(1).get(1))
.formParam("parm3", data.column(2).get(1))
.when()
.post(new URI(testurl)+"/abcapi")
.asString();
jp=new JsonPath(tkn);
Token=jp.getString("access_token");
if (Token==null) {
Assert.assertTrue(false,"Token is NULL");
}else {
}
}
#Given("Get above token")
public void get_abovetoken(io.cucumber.datatable.DataTable dataTable) throws URISyntaxException {
System.out.println("Token is " +Token);
}
}
So in the above steps I get a token in one step and try to print that token in another step, but I get null instead of the actual value, because my steps are running in a random order.
Please note I am running the TestRunner via a testng.xml file.
Cucumber, and testing tools in general, are designed to run each test/scenario as a completely independent thing. Linking scenarios together is a terrible anti-pattern; don't do it.
Instead, learn to write scenarios properly. Scenarios and feature files should have no programming in them at all. Programming needs to be pushed down into the step definitions.
Any scenario, no matter how complicated, can be written in 3 steps if you really want to. Your Given can set up any amount of state, your When deals with what you are doing, and your Then can check any number of conditions.
You do this by pushing all the detail down out of the scenario and into the step definitions. You improve this further by having the step definitions call helper methods that do all the work.
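As a rough illustration only (the step wording, the helper methods and their behaviour are invented, not taken from your project), a thin step-definition class that delegates all the work to helpers might look like this:
import java.io.IOException;

import org.junit.Assert;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class BorrowingSteps {

    private String lastMessage;

    @Given("{string} has already borrowed {int} books")
    public void userHasBorrowedBooks(String email, int count) throws IOException {
        logInAs(email);
        lastMessage = borrowBooks(email, count);
    }

    @When("{string} tries to borrow another book")
    public void userTriesToBorrowAnotherBook(String email) throws IOException {
        lastMessage = borrowBooks(email, 1);
    }

    @Then("the user sees the message {string}")
    public void userSeesMessage(String expected) {
        Assert.assertEquals(expected, lastMessage);
    }

    // Helper methods: all the Selenium/API detail is pushed down here, out of the scenario.
    private void logInAs(String email) throws IOException {
        // drive the login page here
    }

    private String borrowBooks(String email, int count) throws IOException {
        // drive the borrow flow here and return whatever message the site shows
        return "";
    }
}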
I'm using Cucumber and Maven in Eclipse, and what I'm trying to do is run each test independently. For example, I have library system software that basically allows users to borrow books and do other things.
One of the conditions is that users can only borrow a maximum of two books, so I wrote a scenario to make sure that this functionality works. This is my feature file:
Scenario: Borrow over max limit
Given "jim#help.ca" logs in to the library system
When "jim#help.ca" order his first book with ISBN "9781611687910"
And "jim#help.ca" orders another book with ISBN "9781442667181"
And "jim#help.ca" tries to order another book with ISBN "1234567890123"
Then jim will get the message that says "The User has reached his/her max number of books"
I wrote a corresponding step definition file and everything worked out great. However, in the future I want to use the same username ("jim@help.ca") for borrowing books as though jim@help.ca has not yet borrowed any books. I want each test to be independent of the others.
Is there any way of doing this? Maybe there's something I can put into my step definition classes, such as a teardown method. I've looked into it but I couldn't find any solid information about it. If there's a way, please help me. Any help is greatly appreciated and I thank you in advance!
Yes, you can do setups and teardowns before and after each scenario, but it's not in the step definition file. What you want to use are hooks.
Hooks run before or after a scenario, and can run before/after every scenario or just the ones you add a @tag to, for example:
@remove_borrowed_books
Scenario: Borrow over max limit
Unfortunately I have only used Cucumber with Ruby, not Java, so I can't give you step-by-step instructions, but this should tell you what you need to know: https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/
You can use the "#After" hook to achieve this as #Derek has mentioned using for example a Map of books borrowed per username:
private final Map<String, Integer> booksBorrowed = new HashMap<>();

@After
public void tearDown() {
    booksBorrowed.clear();
}

@Given("...")
public void givenUserBorrowsBook(String username) {
    booksBorrowed.put(username, booksBorrowed.containsKey(username) ? booksBorrowed.get(username) + 1 : 1);
    ....
}
Or the "#Before" hook to perform the cleanup before each scenario is executed, which is the option I would recommend:
private Map<String, Integer> booksBorrowed;

@Before
public void setUp() {
    booksBorrowed = new HashMap<>();
}
If you are planning to run scenarios in parallel then the logic will be more complex as you will need to maintain the relationship between the thread executing a particular scenario and the usernames used on that thread.
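One possible (untested) way to handle that is a ThreadLocal, so each scenario thread gets its own copy of the map; the io.cucumber.java hook annotation and the class/method names below are assumptions, not code from the question.
import java.util.HashMap;
import java.util.Map;

import io.cucumber.java.Before;

public class BorrowedBooksState {

    // Each scenario runs on one thread, so a ThreadLocal keeps per-scenario state isolated.
    private static final ThreadLocal<Map<String, Integer>> booksBorrowed =
            ThreadLocal.withInitial(HashMap::new);

    @Before
    public void setUp() {
        booksBorrowed.get().clear();   // start every scenario on this thread with a clean map
    }

    public static void recordBorrow(String username) {
        booksBorrowed.get().merge(username, 1, Integer::sum);
    }

    public static int borrowedBy(String username) {
        return booksBorrowed.get().getOrDefault(username, 0);
    }
}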
I am trying to create a framework using Selenium and TestNG. As part of the framework I am trying to implement data parameterization, but I am confused about the most efficient way to do it. Here are the approaches I have considered:
With data providers (reading from Excel and storing in an Object[][])
With testng.xml
Issues with data providers:
Let's say my test needs to handle a large volume of data, say 15 different values; then I need to pass 15 parameters to it. Alternatively, if I try to create a TestData class to hold and maintain these parameters, then every test needs a different data set, so my TestData class will end up with more than 40 different params.
E.g. in an e-commerce website there are many different params, for accounts, cards, products, rewards, history, store locations and so on; for these we would need at least 40 different params declared in TestData, which I don't think is a sensible solution. Some tests may need 10 different pieces of test data, some may need 12. Sometimes within a single test one iteration needs only 7 params while another iteration needs 12.
How do I manage this effectively?
Issues with testng.xml:
Maintaining 20 different accounts, 40 different product details, cards, history and so on in a single XML file, together with suite configuration such as parallel execution and selecting which classes to execute, will make a mess of the testng.xml file.
So can you please suggest an optimized way to handle data in a testing framework?
How is data parameterization, with iterations over different test data, handled in real-world projects?
Assuming that every test knows what sort of test data it is going to receive, here's what I would suggest you do:
Have your TestNG suite xml file pass the data provider the name of the file from which data is to be read.
Build your data provider so that it receives that file name via TestNG parameters and then builds a generic map per test data iteration (every test receives its parameters as a key/value map) and works with the passed-in map.
This way you have just one data provider which can handle literally anything. You can make your data provider a bit more sophisticated by having it inspect the test method and provide values accordingly.
Here's a skeleton implementation of what I am talking about.
import java.util.List;
import java.util.Map;

import org.testng.ITestContext;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import com.google.common.collect.Lists;
import com.google.common.collect.Maps;

public class DataProviderExample {

    @Test(dataProvider = "dp")
    public void testMethod(Map<String, String> testdata) {
        System.err.println("****" + testdata);
    }

    @DataProvider(name = "dp")
    public Object[][] getData(ITestContext ctx) {
        // This retrieves the value of the <parameter name="fileName" .../> defined within
        // the <test> tag of the suite xml file.
        String fileName = ctx.getCurrentXmlTest().getParameter("fileName");
        List<Map<String, String>> maps = extractDataFrom(fileName);
        Object[][] testData = new Object[maps.size()][1];
        for (int i = 0; i < maps.size(); i++) {
            testData[i][0] = maps.get(i);
        }
        return testData;
    }

    private static List<Map<String, String>> extractDataFrom(String file) {
        List<Map<String, String>> maps = Lists.newArrayList();
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        maps.add(Maps.newHashMap());
        return maps;
    }
}
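The extractDataFrom stub above just returns empty maps. A hedged sketch of what it could look like with jxl (any Excel library would do, and treating the first row as headers is an assumption, not something the skeleton requires): the header row supplies the keys and every following row becomes one map, so each test method can pull only the columns it cares about.
import java.io.File;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import jxl.Sheet;
import jxl.Workbook;

public class ExcelDataReader {

    public static List<Map<String, String>> extractDataFrom(String file) throws Exception {
        Workbook wb = Workbook.getWorkbook(new File(file));
        Sheet sh = wb.getSheet(0);
        List<Map<String, String>> rows = new ArrayList<Map<String, String>>();
        for (int row = 1; row < sh.getRows(); row++) {          // row 0 holds the headers
            Map<String, String> rowData = new HashMap<String, String>();
            for (int col = 0; col < sh.getColumns(); col++) {
                String header = sh.getCell(col, 0).getContents();   // e.g. "Url", "Username"
                rowData.put(header, sh.getCell(col, row).getContents());
            }
            rows.add(rowData);
        }
        wb.close();
        return rows;
    }
}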
I'm actually currently trying to do the same (or similar) thing. I write automation to validate product data on several eComm sites.
My old method
The data comes in Excel format, which I process slightly to get into the format I want. I run the automation that reads from Excel and executes the runs sequentially.
My new method (so far, WIP)
My company recently started using SauceLabs, so I started prototyping ways to take advantage of X number of VMs in parallel, and I ran into the same issues as you. This isn't a polished or even finished solution; it's something I'm currently working on, but I thought I would share some of what I'm doing to see if it helps you.
I started reading SauceLabs docs and ran across the sample code below which started me down the path.
https://github.com/saucelabs-sample-scripts/C-Sharp-Selenium/blob/master/SaucePNUnit_Test.cs
I'm using NUnit and I found in their docs a way to pass data into the test that allows parallel execution and allows me to store it all neatly in another class.
https://github.com/nunit/docs/wiki/TestFixtureSource-Attribute
This keeps me from having a bunch of [TestFixture] tags stacked on top of my script class (as in the demo code above). Right now I have:
[TestFixtureSource(typeof(Configs), "StandardBrowsers")]
[Parallelizable]
public class ProductSetupUnitTest
where the Configs class contains an object[] called StandardBrowsers like
public class Configs
{
    static object[] StandardBrowsers =
    {
        new object[] { "chrome", "latest", "windows 10", "Product Name1", "Product ID1" },
        new object[] { "chrome", "latest", "windows 10", "Product Name2", "Product ID2" },
        new object[] { "chrome", "latest", "windows 10", "Product Name3", "Product ID3" },
        new object[] { "chrome", "latest", "windows 10", "Product Name4", "Product ID4" },
    };
}
I actually got this working this morning, so I know the approach will work, and I'm working on ways to further tweak and improve it.
So, in your case you would just load up the object[] with all the data you want to pass. You will probably have to declare a string for each of the possible fields you might want to pass; if you don't need a particular field in a given run, pass an empty string.
My next step is to load the object[] from Excel. The pain for me is how to do logging. I have a pretty mature logging system in my existing sequential-execution script, and it's going to be hard to give that up or settle for something with reduced functionality. Currently I write everything to a CSV, load that into Excel, and then I can quickly process failures using Excel filtering, etc. My current thought is to have each script write its own CSV and then pull them all together after all the runs are complete. That part is still theoretical right now, though.
Hope this helps. Feel free to ask me questions if something isn't clear. I'll answer what I can.
When I try to run map/reduce job on Hadoop cluster without specifying any input file I get following exception:
java.io.IOException: No input paths specified in job
Well, I can imagine cases where running a job without input files makes sense; generating a test file would be one. Is it possible to do that with Hadoop? If not, do you have any experience generating files? Is there a better way than keeping a dummy file with one record on the cluster to use as the input file for generation jobs?
File paths are relevant for FileInputFormat-based inputs like SequenceFileInputFormat, etc. But input formats that read from HBase or a database do not read from files, so you could make your own implementation of InputFormat and define your own behaviour in getSplits and createRecordReader (returning your own RecordReader). For inspiration, look at the source code of the TextInputFormat class.
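As a rough, untested sketch of that idea (the class names and the record count are made up): an InputFormat that returns one empty split and a RecordReader that fabricates its records in memory, so the job needs no input paths at all.
import java.io.DataInput;
import java.io.DataOutput;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class SyntheticInputFormat extends InputFormat<LongWritable, NullWritable> {

    @Override
    public List<InputSplit> getSplits(JobContext context) {
        // One empty split is enough to start a single map task.
        return Collections.<InputSplit>singletonList(new EmptySplit());
    }

    @Override
    public RecordReader<LongWritable, NullWritable> createRecordReader(InputSplit split,
                                                                       TaskAttemptContext context) {
        return new RecordReader<LongWritable, NullWritable>() {
            private final long total = 1000;   // how many synthetic records to emit
            private long current = -1;

            @Override public void initialize(InputSplit s, TaskAttemptContext c) { }
            @Override public boolean nextKeyValue() { return ++current < total; }
            @Override public LongWritable getCurrentKey() { return new LongWritable(current); }
            @Override public NullWritable getCurrentValue() { return NullWritable.get(); }
            @Override public float getProgress() { return current / (float) total; }
            @Override public void close() { }
        };
    }

    // A split that carries no data; the reader above invents its own records.
    public static class EmptySplit extends InputSplit implements Writable {
        @Override public long getLength() { return 0; }
        @Override public String[] getLocations() { return new String[0]; }
        @Override public void write(DataOutput out) { }
        @Override public void readFields(DataInput in) { }
    }
}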
For MR job unit testing you can also use MRUnit.
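For reference, a minimal MRUnit sketch (the mapper and the key/value types below are placeholders, not anything from the question): the driver feeds one record to the mapper entirely in memory and verifies the emitted pair, with no cluster or input files involved.
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.Test;

public class WordMapperTest {

    // A trivial stand-in mapper: emits each input line once with a count of 1.
    public static class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, new IntWritable(1));
        }
    }

    @Test
    public void emitsLineWithCountOfOne() throws IOException {
        MapDriver<LongWritable, Text, Text, IntWritable> driver =
                MapDriver.newMapDriver(new WordMapper());
        driver.withInput(new LongWritable(0), new Text("hadoop"))
              .withOutput(new Text("hadoop"), new IntWritable(1))
              .runTest();
    }
}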
If you want to generate test data with Hadoop, then I'd recommend you have a look at the source code of Teragen.
I guess you are looking to test your map-reduce on a small set of data, so in that case I would recommend the following.
A unit test for your map-reduce will solve your problem.
If you want to test your mapper/combiner/reducer for a single line of input from your file, the best thing is to use a unit test for each.
Sample code:
Using a mocking framework in Java, you can run these test cases in your IDE.
Here I have used Mockito; MRUnit could also be used, which itself depends on Mockito (a Java mocking framework).
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.junit.Test;
import org.mockito.Mockito;

public class BoxPlotMapperTest {

    @Test
    public void validOutputTextMapper() throws IOException, InterruptedException {
        Mapper mapper = new Mapper();                            // your Mapper object
        Text line = new Text("single line from input-file");     // single line of input from the file
        Mapper.Context context = Mockito.mock(Mapper.Context.class);
        mapper.map(null, line, context);   // (key=null, value=line, context); key was not used in my code, so it is null
        Mockito.verify(context).write(new Text("your expected key-output"), new Text("your expected value-output"));
    }

    @Test
    public void validOutputTextReducer() throws IOException, InterruptedException {
        Reducer reducer = new Reducer();
        final List<Text> values = new ArrayList<Text>();
        values.add(new Text("value1"));
        values.add(new Text("value2"));
        values.add(new Text("value3"));
        values.add(new Text("value4"));
        Iterable<Text> iterable = new Iterable<Text>() {
            @Override
            public Iterator<Text> iterator() {
                return values.iterator();
            }
        };
        Reducer.Context context = Mockito.mock(Reducer.Context.class);
        reducer.reduce(new Text("key"), iterable, context);
        Mockito.verify(context).write(new Text("your expected key-output"), new Text("your expected value-output"));
    }
}
If you want to generate a test file, why would you need to use Hadoop in the first place? Any kind of file you'd use as input to a MapReduce step can be created using type-specific APIs outside of a MapReduce step, even HDFS files.
I know I'm resurrecting an old thread, but no best answer was chosen, so I thought I'd throw this out there. I agree MRUnit is good for many things, but sometimes I just want to play around with some real data (especially for tests where I'd need to mock it out to make it work in MRUnit).
When that's my goal, I create a separate little job to test my ideas and use SleepInputFormat to basically lie to Hadoop and say there's input when really there's not. The old API provided an example of that here: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.22/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/SleepJob.java, and I converted the input format to the new API here: https://gist.github.com/keeganwitt/6053872.