How to enter input data from an Excel file into a Selenium project? - java

I would like your support with information (scripts, videos, or books) on how to enter input data (for example, a username and password) into a Selenium project from an Excel file, using Cucumber and Serenity BDD.
Is it possible?
Thanks to all.

On principle, Cucumber doesn't support data from external files; instead, it encourages you to provide examples with the scenario. However, there are a few non-standard ways of using examples from an external file with Cucumber. For one of them, you can refer to grasshopper's post.
Another alternative is using Gherkin with QAF, which provides lots of inbuilt data-providers, including XML/CSV/JSON/Excel/DB. Here is the step-by-step tutorial to start with.
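If you do go down the raw-Excel route anyway, the usual building block on the Java side is Apache POI. A minimal sketch of a helper that a step definition could call (the workbook path, sheet layout, and the ExcelReader name are assumptions for illustration):

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.ss.usermodel.WorkbookFactory;

    import java.io.FileInputStream;

    // Hypothetical helper: reads a username/password pair from the first sheet,
    // where column 0 holds usernames and column 1 holds passwords.
    public class ExcelReader {

        public static String[] credentialsForRow(String xlsxPath, int rowIndex) throws Exception {
            try (FileInputStream in = new FileInputStream(xlsxPath);
                 Workbook workbook = WorkbookFactory.create(in)) {
                Row row = workbook.getSheetAt(0).getRow(rowIndex);
                return new String[] {
                    row.getCell(0).getStringCellValue(),  // username
                    row.getCell(1).getStringCellValue()   // password
                };
            }
        }
    }

A Cucumber step definition (or a Serenity step) could then call credentialsForRow and feed the pair into the login page object.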

From the FAQ:
"We advise you not to use Excel or csv files to define your test cases; using Excel or csv files is considered an anti-pattern.
One of the goals of Cucumber is to have executable specifications. This means your feature files should contain just the right level of information to document the expected behaviour of the system. If your test cases are kept in separate files, how would you be able to read the documentation?
This also means you shouldn’t have too many details in your feature file. If you do, you might consider moving them to your step definitions or helper methods. For instance, if you have a form where you need to populate lots of different fields, you might use the Builder pattern to do so."

Related

How to Build a Data Driven Selenium WebDriver + Java + TestNG Framework

I'm planning to automate some website using Selenium WebDriver + Java + POM(Page Object Model) + TestNG.
All the other web pages are common for the given website, but for each transaction there is one page, almost like a form, that differs.
So I have the following choices:
1. Create a Page Object Model (POM) for all the static common pages, and start creating POMs for the pages that differ in each transaction.
2. Create a Page Object Model (POM) for all the static common pages, and use some external data source (XML, Excel, etc.) to generate the tests for that particular page.
I'm in favour of the second approach here, as I wouldn't need to write code again for each new transaction just because one page is different.
Any thoughts? Or has anybody implemented something like this already?
Yes, the 2nd approach, based on the Page Object Model, is best for keeping your code isolated and easily maintainable.
For your test data maintenance, I'd suggest using a Cucumber-based (BDD-driven) framework.
It goes very well with automation framework projects (POM, Selenium, Java, TestNG/JUnit and Maven based).
By using Cucumber you need not depend on any other source of test data, i.e. Excel or XML; the data can easily be maintained in Cucumber feature files.
Also, BDD gives you the main advantage of keeping BA, QA, DEV and management on the same page.
If you don't want to use Cucumber/BDD, you can use the TestNG data provider feature with Excel to achieve better test data management.
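As a rough sketch of that TestNG route (Apache POI for the workbook; the file name and two-column sheet layout are assumptions):

    import org.apache.poi.ss.usermodel.Row;
    import org.apache.poi.ss.usermodel.Workbook;
    import org.apache.poi.ss.usermodel.WorkbookFactory;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    import java.io.FileInputStream;
    import java.util.ArrayList;
    import java.util.List;

    public class LoginDataDrivenTest {

        // Each row of the first sheet becomes one test invocation: username, password.
        @DataProvider(name = "loginData")
        public Object[][] loginData() throws Exception {
            List<Object[]> rows = new ArrayList<>();
            try (FileInputStream in = new FileInputStream("src/test/resources/testdata.xlsx");
                 Workbook wb = WorkbookFactory.create(in)) {
                for (Row row : wb.getSheetAt(0)) {
                    rows.add(new Object[] {
                            row.getCell(0).getStringCellValue(),
                            row.getCell(1).getStringCellValue()
                    });
                }
            }
            return rows.toArray(new Object[0][]);
        }

        @Test(dataProvider = "loginData")
        public void login(String username, String password) {
            // Drive your page objects with each data row here.
        }
    }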
If you want to learn Cucumber/BDD, there are lots of very good video tutorials available online. One of my favourites is here:
https://www.youtube.com/playlist?list=PL6tu16kXT9PpteusHGISu_lHcV6MbBtA6
For web reading:
https://www.lambdatest.com/blog/automation-testing-with-selenium-cucumber-testng/
Happy Testing!
I have worked on a similar kind of project. I would suggest going with #1. The reason is that in the future you might find differences between the web pages, so a common function will not always be applicable to each page.
So if you go with #2 it is fine for now, but you are going to end up following #1 in such cases.
The above answers are mixing the Page Object Model with a data-driven framework.
Basically, in a data-driven framework the data is read from an external file.
If you want to build a simple, pure data-driven framework, then it should have:
Independent tests
Tests that read their data from any source (JSON/XML/YAML/XLS...)
A properties file holding your locators and other settings
A base class with the common reusable functions that the tests can share
A design where running on a grid is easy: by just changing an external flag, tests should run on the grid (see the sketch after this list)
Proper HTML reporting with screenshots, errors and failures
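For the grid flag mentioned in the list, one common pattern is a base class that builds either a local driver or a RemoteWebDriver depending on an external system property. A sketch, assuming the property names use.grid and grid.url:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.RemoteWebDriver;

    import java.net.URL;

    public class BaseTest {

        protected WebDriver driver;

        // Run with -Duse.grid=true -Dgrid.url=http://host:4444/wd/hub to target the grid;
        // without the flag, a local ChromeDriver is started instead.
        protected void startDriver() throws Exception {
            ChromeOptions options = new ChromeOptions();
            if (Boolean.getBoolean("use.grid")) {
                URL gridUrl = new URL(System.getProperty("grid.url", "http://localhost:4444/wd/hub"));
                driver = new RemoteWebDriver(gridUrl, options);
            } else {
                driver = new ChromeDriver(options);
            }
        }
    }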
Also watch this video
https://youtu.be/s-W8pw9GnWc

Drools fusion: generate rules automatically

I'm working with Drools Fusion and I want to test the performance of this CEP system based on the number of rules implemented. I currently have a simple rule file with the .drl extension, and I would like to dynamically generate about 1000 rules. How can this be done automatically, without having to create them one by one in the .drl file?
Have you ever heard about template engines? After all, DRL files are just plain text files. Here are some of them you can use:
String Template: http://www.stringtemplate.org/
Velocity: http://velocity.apache.org/
FreeMarker: http://freemarker.org/
Even Drools comes with some support for templates: http://docs.jboss.org/drools/release/6.3.0.Final/drools-docs/html_single/#d0e5930
If you don't like fancy stuff, you can always go back to the good old StringBuffer class.
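To illustrate the plain string-building route, a loop like the following can emit any number of near-identical rules into a single DRL string (the Reading fact type and the threshold constraint are made up for the example; StringBuilder behaves like StringBuffer without the synchronization):

    // Generates n trivial rules over a hypothetical Reading fact type.
    public class DrlGenerator {

        public static String generate(int n) {
            StringBuilder drl = new StringBuilder("package com.example.generated\n\n");
            for (int i = 0; i < n; i++) {
                drl.append("rule \"reading-threshold-").append(i).append("\"\n")
                   .append("when\n")
                   .append("    $r : Reading( value > ").append(i).append(" )\n")
                   .append("then\n")
                   .append("    System.out.println(\"rule ").append(i).append(" fired\");\n")
                   .append("end\n\n");
            }
            return drl.toString();
        }

        public static void main(String[] args) {
            // Write the result to a .drl file, or feed the string to the knowledge builder.
            System.out.println(generate(1000));
        }
    }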
Hope it helps.

JUnit testing for reading JSON files

Suppose I want to write tests for a Java class that provides a method for reading and parsing external files (to be precise, the files would be JSON and I would be using Jackson).
Moreover, I've got some example JSON files I'd parse, and I also have a vague idea of what kind of Java object this SomeMagicalReader.readPony("path/to/location/pony.json") method should return; if I manage to get readPony to return some kind of PonyObject, I think I have an idea of how to test that the produced PonyObject is what I expect.
The question I have concerns providing the readPony function with the test data. I'm probably thinking about this way too much, but (1) is there an idiomatic "Java + JUnit" way of testing a method that reads external files? Should I copy-paste the contents of the example file into a String variable in the test code? (They're fairly short, but that would still end up looking ugly quite fast.) Or should I place the example JSON files just ...somewhere and call readPony with the path? (This sounds more sensible.) (2) What would then be the canonical place to put such external JSON test files, if my tests are organized in a Maven-style test package hierarchy, e.g. src/test/java/com/stuff/app/package/SomeMagicalReaderTest.java?
As per Maven's standard directory layout, I would advise you to put your JSON test files in src/test/resources, since they are test resources. Note that you can (and should) organize your own folder hierarchy under the resources folder; other developers will find it easier to locate specific test resources when fixing or adding tests.
So yes, your JSON files would end up somewhere, but not just anywhere, provided your test resources hierarchy is good enough (for example, if you think your package structure is well organized with meaningful package names, following it for your test resources hierarchy isn't a bad idea at all).
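A minimal sketch of the resulting test, assuming the file lives at src/test/resources/ponies/pony.json, and that PonyObject and its getName() accessor are your Jackson-mapped type:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.junit.Test;

    import java.io.InputStream;

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertNotNull;

    public class SomeMagicalReaderTest {

        @Test
        public void readsPonyFromJson() throws Exception {
            // Files under src/test/resources end up on the test classpath.
            try (InputStream in = getClass().getResourceAsStream("/ponies/pony.json")) {
                assertNotNull("test resource is missing", in);
                PonyObject pony = new ObjectMapper().readValue(in, PonyObject.class);
                assertEquals("Pinkie Pie", pony.getName()); // expected value is an assumption
            }
        }
    }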
You should ask yourself what the mission-critical code is for your project: reading files or parsing their content. For my projects, parsing was the interesting part, so I placed the files to parse as test resources, read them in the unit test into a string, and passed them to the parser, to unit test the parser. It is also possible to include the contents directly in unit tests as big ugly strings, but when you have a dedicated place for test resources, why not use it.
Copy-paste the contents of the example file into a String variable in the test code
I advise against doing this, as it makes modifying the input for your tests more difficult. Also, using an external file makes your tests more flexible: for example, reading an external file allows you to create multiple tests while reusing the basic framework for each test. Of course, this also means that you will need to take some time to design the methods that actually perform the tests.

Java crawler with custom file save ability

I'm looking for an open-source web crawler written in Java which, in addition to the usual web crawler features such as depth control and multi-threading, can customize the handling of each file type.
To be more precise, when a file is downloaded (or is about to be downloaded), I want to handle the saving of the file myself. HTML files should be saved in one repository, images in another location, and other files somewhere else. Also, the repository might not be just a simple file system.
I've heard a lot about Apache Nutch. Does it have the ability to do this? I'm looking to achieve this as simply and quickly as possible.
On the assumption that you want a lot of control over how the crawler works, I would recommend crawler4j. There are many examples, so you can get a quick glimpse of how things work.
You can easily handle resources based on their content type (take a look at the Page.java class; it is the class of the object that holds information about a fetched resource).
There are no limitations regarding the repository. You can use anything you wish.
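To give a feel for it, a crawler4j WebCrawler subclass can branch on the content type inside visit(); where each branch stores the bytes is entirely up to you (the three save methods below are placeholders):

    import edu.uci.ics.crawler4j.crawler.Page;
    import edu.uci.ics.crawler4j.crawler.WebCrawler;

    public class SortingCrawler extends WebCrawler {

        @Override
        public void visit(Page page) {
            String url = page.getWebURL().getURL();
            String contentType = page.getContentType(); // e.g. "text/html; charset=UTF-8"
            byte[] data = page.getContentData();

            if (contentType != null && contentType.startsWith("text/html")) {
                saveToHtmlRepository(url, data);   // placeholder: HTML repository
            } else if (contentType != null && contentType.startsWith("image/")) {
                saveToImageStore(url, data);       // placeholder: image store
            } else {
                saveElsewhere(url, data);          // placeholder: everything else
            }
        }

        private void saveToHtmlRepository(String url, byte[] data) { /* ... */ }
        private void saveToImageStore(String url, byte[] data) { /* ... */ }
        private void saveElsewhere(String url, byte[] data) { /* ... */ }
    }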

Validating files having tree based structures

I am looking for a validator to validate tree-structure-based configuration files,
e.g.
a.student.name joe
a.student.class arts
Can you suggest any ideas on validating such a config? So far I have searched and could only find validators for XML files.
Unfortunately, schema validation for configuration files is rare outside of XML. The only other option I am aware of is Config4J (which I wrote).
If you visit the website, then you should scroll down to the bottom of the main web page to access the complete set of manuals (available in both PDF and HTML versions). I recommend you have a look at the following parts of the manuals to get an overview of Config4J and decide if it satisfies your needs for validation.
Chapters 2 and 3 of the "Getting Started Guide" provide an overview of the configuration syntax and the API. In particular, Section 3.10 provides a quick example of the schema language.
Chapter 9 of the "Getting Started Guide" provides a complete definition of the schema language.
Chapter 3 of the "Java API Guide" discusses the API for writing your own schema types to extend the functionality of the schema language.
Update: I discovered from the answer by kiran.kumar M that the Java and Ruby implementations of YAML have a schema validator called Kwalify.
Update: There is now a schema language for JSON.
Try www.yaml.org. YAML supports tree structures.
Here is a list of a few parsers:
JYaml, SnakeYAML, YamlBeans
YAML is a file format that supports complex hierarchical structures. Most structural validations can be performed automatically by the parsers listed above; you may need additional code to validate your business needs.
A few online validators are also available;
see :
http://yaml-online-parser.appspot.com/
http://instantyaml.appspot.com/
Also see https://stackoverflow.com/questions/450399/which-java-yaml-library-should-i-use
and, for validation, https://stackoverflow.com/questions/287346/yaml-validation
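For instance, with SnakeYAML the example from the question could be written as nested YAML and loaded into plain maps for a hand-rolled structural check (a minimal sketch; a schema tool such as Kwalify can replace the manual checks):

    import org.yaml.snakeyaml.Yaml;

    import java.util.Map;

    public class YamlConfigCheck {

        public static void main(String[] args) {
            String config =
                    "a:\n" +
                    "  student:\n" +
                    "    name: joe\n" +
                    "    class: arts\n";

            // SnakeYAML loads the tree into nested Maps, Lists and scalars.
            Object loaded = new Yaml().load(config);
            Map<String, Object> root = (Map<String, Object>) loaded;
            Map<String, Object> a = (Map<String, Object>) root.get("a");
            Map<String, Object> student = (Map<String, Object>) a.get("student");

            // Hand-rolled structural check over the required keys.
            if (!student.containsKey("name") || !student.containsKey("class")) {
                throw new IllegalStateException("student entry is missing required keys");
            }
        }
    }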
If the structure is the same as a Java properties file, you can read properties from it. Then you need to decide what you mean by "validate"; if that applies, you probably have an easy way to solve your problem.
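That route works here because java.util.Properties also accepts whitespace as the key/value separator, so lines like "a.student.name joe" load as-is. A minimal sketch where "validate" just means "every required key is present and non-empty" (the file name and key list are assumptions):

    import java.io.FileReader;
    import java.util.Properties;

    public class ConfigValidator {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            try (FileReader reader = new FileReader("student.conf")) {
                props.load(reader); // dotted keys such as a.student.name stay flat
            }

            for (String key : new String[] {"a.student.name", "a.student.class"}) {
                String value = props.getProperty(key);
                if (value == null || value.trim().isEmpty()) {
                    throw new IllegalStateException("missing or empty key: " + key);
                }
            }
        }
    }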
Without seeing any form of sample: if you want to validate the structure of something, and semantics is not an issue, you could use lex/yacc (read: flex/bison).
Depending on the problem, one could venture out and use ox after that. Basically, you'd be starting to write a mini-compiler.
