I'm currently working at my job to perform GUI testing of our web page using Selenium 2 via Java in Eclipse. I've been trying to program my tests in a way that maximizes the amount of code I can reuse, and as a consequence I now have a lot of helper methods that function almost like a framework. This has led to my test class becoming fairly bloated, with only one method used as the actual test and the rest being the implementation of the test.
Currently I just run the tests right from Eclipse, with all my methods being static.
From what I understand there are a couple different ways I could try to separate things out:
One way would be to put all the methods into a class I use as a framework and extend it when writing an actual test, but I don't know if having a framework in a framework (Selenium) makes sense.
Another way would possibly be making my helper methods into an object where I can have one of these objects for each test. I don't know if this is good practice though, or if it will cause problems down the road. It would also mean I'd have to type more to do the same amount of testing.
My main questions are:
What's the best way to split up my testing class into test classes and an implementation class?
Is what I'm doing outside the intended usage of Selenium?
The best practice is to create a page object model for each web UI. That will help you access the web elements easily. Selenium supports this pattern, though you will also have to do some research of your own. For example:
Home_Page.lnk_MyAccount(driver).click();
LogIn_Page.txtbx_UserName(driver).sendKeys("testuser_1");
LogIn_Page.txtbx_Password(driver).sendKeys("Test#123");
Also, put all Selenium-related actions into one class, e.g. Action.click(), Action.search(), or whatever your common set of actions is.
Next, implement reusable code as functions: say login(userName, password), which handles the login steps inside, so you can reuse it in other places. Always try to modularize your implementation.
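A minimal sketch of a page object in that style, with the login flow extracted into one reusable action (the locators and names here are illustrative, not taken from any real page):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    public class LogIn_Page {

        // Locators are placeholders; substitute your page's real ones.
        public static WebElement txtbx_UserName(WebDriver driver) {
            return driver.findElement(By.id("username"));
        }

        public static WebElement txtbx_Password(WebDriver driver) {
            return driver.findElement(By.id("password"));
        }

        public static WebElement btn_LogIn(WebDriver driver) {
            return driver.findElement(By.id("login"));
        }

        // Reusable action: the whole login flow in one call.
        public static void login(WebDriver driver, String userName, String password) {
            txtbx_UserName(driver).sendKeys(userName);
            txtbx_Password(driver).sendKeys(password);
            btn_LogIn(driver).click();
        }
    }

A test then reads as a single intention-revealing call: LogIn_Page.login(driver, "testuser_1", "Test#123");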
Related
I am currently writing Selenium WebDriver tests for a variety of websites each using the same proprietary framework.
Because of this, there are many test cases that can be quite similar across different websites. As such I have made my test classes as generic as possible and made it so that every XPath/CSS Selector/ID used to locate elements is defined in a Constants class, which is unique to every project.
But, in some cases, the code for the same test can be the same across different websites, since I wrote them generically.
In addition, each test is a direct or indirect extension of a BasicTest class, which contains code that is likely to be reused by different tests (e.g. the WebDriver instance declaration).
The way I thought about setting my test structure was the following:
one generic project that is "reused" by each subsequent project;
one project per website with its own definition of the Constants class and a TestSuite class that it can use to run both generic tests and tests specific to itself.
This would allow me not to have copies of these generic tests in each of my test projects.
The problem is that I don't really know how to set this up. The GenericProject is going to contain tests that require variables from Constants, but it makes no sense to have generic Constants. Plus, will I be able to call those tests inside my website-specific TestSuites? If I redefine Constants in each specific project, will those constants be used for the generic tests defined in GenericProject?
How can I even set it up so that I can reuse Project A's classes inside of Project B, C, D... etc?
Extract your constants to a properties file which exists in each module as src/test/resources/common.properties.
Use org.apache.commons:commons-configuration2 PropertiesConfiguration to read this file. It will handle nested properties just fine.
Common code can be shared by depending on your GenericModule. Official instructions for the two ways of doing this (extracting common tests to a new module, or using a test-jar) are in the Maven documentation.
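For instance, a small Constants wrapper that loads the file through commons-configuration2's fluent Configurations helper might look like this (the property key is a made-up example):

    import org.apache.commons.configuration2.PropertiesConfiguration;
    import org.apache.commons.configuration2.builder.fluent.Configurations;
    import org.apache.commons.configuration2.ex.ConfigurationException;

    public final class Constants {

        private static final PropertiesConfiguration CONFIG = load();

        private static PropertiesConfiguration load() {
            try {
                // Finds common.properties on the classpath, i.e. the copy
                // in the current module's src/test/resources.
                return new Configurations().properties("common.properties");
            } catch (ConfigurationException e) {
                throw new IllegalStateException("Could not load common.properties", e);
            }
        }

        private Constants() { }

        // "login.button.selector" is a made-up key; use your own.
        public static String loginButtonSelector() {
            return CONFIG.getString("login.button.selector");
        }
    }

Because the class reads whichever common.properties is on the current module's classpath, the generic tests automatically pick up each website project's own values.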
In general in order to reuse code over projects you would create a library containing the reusable code. In order to do so you'd need to think about a suitable API for the library.
This contains decisions about:
How will functionality be called from dependent code?
How will dependent code provide required data?
If you are using constants for, e.g., CSS selectors that differ per project but share the same semantics, e.g.
root frame
side panel
main area
...
you might want to define an interface that the dependent code can provide. This could look like:
    interface CssSelectors {
        String rootFrame();
        String sidePanel();
        //...
    }
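A website-specific project then supplies its own implementation, and the generic tests depend only on the interface (the selector values below are placeholders):

    // One implementation per website; generic tests depend only on the interface.
    public class SiteACssSelectors implements CssSelectors {

        @Override
        public String rootFrame() {
            return "#site-a-root";      // placeholder selector
        }

        @Override
        public String sidePanel() {
            return ".site-a-sidebar";   // placeholder selector
        }
    }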
If you are building this for tests you might also want to use features of your test framework (e.g. Rules in JUnit).
When reusing code in tests you also should consider another aspect:
If someone reads the tests written with your library, will they be able to understand what happens behind the library boundary well enough to grasp what the test is all about? This is much more of a concern with test code than with production code: for test coverage and the validity of tests, how a setup or verification is done often matters far more than it does in production code.
I'm working on an automated tests project, where I create UI tests with Selenium/Selenide in Java for a web-based media player that is then integrated into other products of my company. However, there was a UI for that Player (as a React component) in the application my team is developing before there was a UI for the Player itself, in standalone. So, basically, I have to redesign the code that I had for both the application and the standalone Player: the goal is to create the code in the Player package and then import it into the application test project.
The problem here is that, in the standalone version, the Player HTML code contains a shadow root, and the Player React component in the main application does not. This means that I have to use WebElements in the standalone version, whereas I am to continue using SelenideElements in the main application test code (to be able to deal with some particular interactions that occur in the main application and that are not possible in the standalone Player).
Normally, for each "part" of the Web page, I create a "Client" class that contains the methods to find the elements, and then to perform the interactions with them and/or verify their state. Since the two UIs have the differences I explained above, I imagine that I will have to have two different sets of Client classes. I was thinking of doing something like creating another class or an interface to try to find the shadowroot element, and, depending on whether it was found on the HTML code or not, initialising one of the two sets of clients.
So, my question is, how can I overall structure all of this in terms of classes/interfaces/methods, so I can have as little doubled code as possible in both Client sets?
Any help is welcome, even if it's just to show me that I'm thinking about this in the wrong way.
Whenever I hit such a case
Since the two UIs have the differences ... I will have to have two different sets of Client classes ... creating another class or an interface to try to find the shadowroot element, and, depending on whether it was found on the HTML code or not, initialising one of the two sets of clients.
and especially for as little doubled code as possible in both Client sets, in my experience the Strategy pattern provides quite a suitable solution. It defines a family of algorithms, encapsulates each one, and makes them interchangeable, letting the algorithm vary independently from the clients that use it. So you may think of the algorithms as your test steps and of the clients as the apps loaded.
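As a rough sketch of how that could look here, assuming Selenium 4's getShadowRoot() API and made-up selectors: the lookup strategy hides where the player's DOM lives, so a single Client class serves both UIs.

    import org.openqa.selenium.By;
    import org.openqa.selenium.SearchContext;
    import org.openqa.selenium.WebDriver;

    // Strategy: how to reach the player's DOM differs per product.
    interface PlayerLookupStrategy {
        SearchContext playerRoot(WebDriver driver);
    }

    // Standalone player: the markup sits behind a shadow root.
    class ShadowRootLookup implements PlayerLookupStrategy {
        @Override
        public SearchContext playerRoot(WebDriver driver) {
            return driver.findElement(By.cssSelector("media-player")) // hypothetical host tag
                         .getShadowRoot();
        }
    }

    // Embedded React component: the player is in the regular DOM.
    class PlainDomLookup implements PlayerLookupStrategy {
        @Override
        public SearchContext playerRoot(WebDriver driver) {
            return driver.findElement(By.cssSelector(".media-player")); // hypothetical selector
        }
    }

    // A single Client class now serves both UIs.
    class PlayPauseClient {
        private final WebDriver driver;
        private final PlayerLookupStrategy lookup;

        PlayPauseClient(WebDriver driver, PlayerLookupStrategy lookup) {
            this.driver = driver;
            this.lookup = lookup;
        }

        void clickPlay() {
            lookup.playerRoot(driver).findElement(By.cssSelector(".play-button")).click();
        }
    }

Your detection code would then pick which strategy to instantiate, and everything downstream stays identical.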
I am new to Selenium and I want to create a test case for my dummy website for practice purposes. I have learned about keyword-driven and data-driven frameworks. I have also learned about TestNG, but I am confused about how to put all these things together: I want to automate the full website, with reports.
You are mixing a lot of orthogonal concepts together. Rather than unpack them, please allow me to start from the beginning.
First, you want to use the Page Object Pattern with Selenium. This pattern decouples your tests from the internal structure of a page via a services abstraction, where all the test "knows about" is the services provided by the page. This way the structure of a page can change (as it certainly will during the project), but your tests remain the same (assuming the services don't change; of course you do want the tests to change in that case).
Next, you have tests that will use PageFactory and other aspects of the Selenium API to perform assertions and verifications on the page objects. These tests can be written as TestNG or JUnit tests.
So you will have a TestNG test (since that seems to be your preference) where the test methods perform assertions and verifications on page objects by using the Selenium API.
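For illustration, a minimal page object using PageFactory and a TestNG test against it (the locators and the post-login title check are assumptions):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.FindBy;
    import org.openqa.selenium.support.PageFactory;
    import org.testng.Assert;
    import org.testng.annotations.Test;

    // Hypothetical login page object; the locators are assumptions.
    class LoginPage {

        @FindBy(id = "username")
        private WebElement userNameField;

        @FindBy(id = "password")
        private WebElement passwordField;

        @FindBy(id = "login")
        private WebElement loginButton;

        LoginPage(WebDriver driver) {
            PageFactory.initElements(driver, this);
        }

        // The page exposes a service, not its structure.
        void loginAs(String userName, String password) {
            userNameField.sendKeys(userName);
            passwordField.sendKeys(password);
            loginButton.click();
        }
    }

    public class LoginTest {

        private WebDriver driver; // assume a @BeforeMethod initialises this

        @Test
        public void userCanLogIn() {
            new LoginPage(driver).loginAs("testuser_1", "Test#123");
            Assert.assertTrue(driver.getTitle().contains("Dashboard")); // assumed post-login title
        }
    }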
Hope that helps.
I think what you are looking for is @DataProvider, which TestNG provides.
All you need is to return data of type Object[][] or Iterator<Object[]>, then use this data provider in your test case.
The test case will be run once per row of your Object[][] (or per element of your Iterator<Object[]>).
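A minimal sketch (the credentials and the attemptLogin() helper are hypothetical stand-ins for your own Selenium steps):

    import org.testng.Assert;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class LoginDataDrivenTest {

        // Each Object[] row becomes one invocation of the test method.
        @DataProvider(name = "credentials")
        public Object[][] credentials() {
            return new Object[][] {
                { "testuser_1", "Test#123", true },
                { "testuser_2", "wrongpass", false },
            };
        }

        @Test(dataProvider = "credentials")
        public void loginBehaves(String user, String password, boolean shouldSucceed) {
            Assert.assertEquals(attemptLogin(user, password), shouldSucceed);
        }

        // Stub standing in for your real Selenium login steps.
        private boolean attemptLogin(String user, String password) {
            return "Test#123".equals(password); // placeholder logic
        }
    }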
We are considering to use Cucumber on our project for acceptance testing.
When we write a scenario in a Cucumber feature, we write a list of Given, When and Then statements.
As we use the cucumber-jvm project, the Given, When and Then statements are bound to Java methods in (JUnit) classes.
I want to know what the best organization is for the code related to Given / When / Then in the project structure. My main concern is the maintenance of the Cucumber tests on a big project, where the number of scenarios is quite large, especially regarding the items that are shared between features.
I can see at least 2 main approaches:
Each feature is related to its own JUnit class. So if I have a foo/bar/baz.feature Cucumber file, I will find the related foo.bar.Baz JUnit class with the adequate @Given, @When and @Then annotated methods.
Separate @Given, @When and @Then methods into "thematic" classes and packages. For example, if in my Cucumber scenario I have a statement Given user "foo" is logged, then the @Given("^user \"([^\"]*)\" is logged$") annotated method will be located in the foo.user.User class, but potentially the @When method used later in the same scenario will be in a different Java class and package (say, foo.car.RentCar).
For me, the first approach seems good in that I can easily make the connection between my Cucumber features and my Java code. But the drawback is that I can have a lot of redundancy or code duplication. Also, it may be hard to find a possibly existing @Given method, in order to avoid recreating it (the IDE can help, but here we are using Eclipse, and it does not seem to give a list of existing Given statements).
The other approach seems better essentially when you have Given conditions shared among several Cucumber features, and thus I want to avoid code duplication. The drawback here is that it can be hard to make the link between the @Given Java method and the Given Cucumber statement (maybe, again, the IDE can help?).
I'm quite new to Cucumber, so maybe my question is not a good one, and with time and experience the structure will become self-evident, but I want to get good feedback on its usage...
Thanks.
I would suggest grouping your code according to the objects it refers to, similar to option #2 you presented in your question. The reasons being:
Structuring your code based on how and where it's being used is a big no-no. It's actually creating coupling between your feature files and your code.
Imagine such a thing in your product's code- the SendEmail() function wouldn't be in a class called NewEmailScreenCommands, would it? It would be in EmailActions or some such.
So the same applies here; structure your code according to what it does, and not who uses it.
The first approach would make it difficult to re-organize your feature files; You'd have to change your code files whenever you change your feature files.
Keeping code grouped by theme makes DRYing it much easier; you know exactly where all the code dealing with the user entity is, so it's easier for you to reuse it.
On our project we use that approach (e.g. a BlogPostStepDefinitions class), further separating the code, if the class gets too large, by types of steps (e.g. BlogPostGivenStepDefinitions).
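For example, a theme-centric step definition class could look like this (the step texts and helper bodies are made up, and it assumes the newer io.cucumber.java.en annotations with Cucumber Expressions):

    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;

    // Everything touching the "blog post" entity lives here,
    // no matter which feature files use these steps.
    public class BlogPostStepDefinitions {

        @Given("a published blog post titled {string}")
        public void aPublishedBlogPostTitled(String title) {
            // ... create the post through the application ...
        }

        @When("the user comments {string} on {string}")
        public void theUserCommentsOn(String comment, String title) {
            // ... drive the UI to submit the comment ...
        }

        @Then("the post {string} shows {int} comment(s)")
        public void thePostShowsComments(String title, int count) {
            // ... assert against the page ...
        }
    }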
We have also started using Cucumber-JVM for acceptance testing and have similar problems with organising code. We have opted to have 1 step definition class for each feature. At the moment this is fine as the features we are testing aren't very complex and quite separate, there is very little overlap in our features.
The second approach you mentioned would be better I think, but it is often challenging to tie together several different step definition classes for a single scenario. I think the best project structure will become clearer once you start adding more features and refactor as normal.
In the meantime, here is an Eclipse plugin for Cucumber:
https://github.com/matthewpietal/Eclipse-Plugin-for-Cucumber
it has syntax highlighting as well as a list of existing available steps when writing a feature.
On the current project I am taking part in, we asked ourselves the very same question.
After fiddling a bit with the possibilities, what we opted for was a mix of both the solutions you described.
Steps regrouped in theme-centric common step classes:
app-start steps
security check steps
[place random feature concern here] steps
And classes of scenario-specific (and in some cases even feature-specific) steps.
This gave us, at the same time, the grouping of factored-out code, which is easily identifiable as to what it does and where it belongs.
Yet it avoids cluttering those common classes with overly specific code.
The wiring between all these classes is handled by Spring (with cucumber-spring, which does a great job once you get the hang of it).
I have been working on a comparatively large system on my own, and it's my first time working on a large system (dealing with 200+ channels of information simultaneously). I know how to use JUnit to test every method, and how to test boundary conditions. But for system testing I still need to test all the interfacing and probably do some stress testing as well (maybe there are other things to do, but I don't know what they are). I am totally new to the world of testing, so please give me some suggestions or point me to some info on how a good tester would do system testing.
PS: 2 specific questions I have are:
How do I test private functions?
How do I test interfaces and avoid side effects?
Here are two web sites that might help:
The first is a list of open source Java tools. Many of the tools are add-ons to JUnit that allow either easier testing or testing at a higher integration level.
Depending on your system, sometimes JUnit will work for system tests, but the structure of the test can be different.
As for private methods, check this question (and the question it references).
You cannot test interfaces directly (as there is no behavior), but you can create an abstract base test class for testing that implementations of an interface follow its contract.
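For example, a sketch of such an abstract contract test in JUnit 4, using java.util.List as the interface under contract:

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    // Contract test: every implementation must pass these.
    public abstract class ListContractTest {

        // Each concrete subclass supplies the implementation under test.
        protected abstract List<String> createList();

        @Test
        public void newListIsEmpty() {
            assertTrue(createList().isEmpty());
        }

        @Test
        public void addedElementIsContained() {
            List<String> list = createList();
            list.add("x");
            assertTrue(list.contains("x"));
        }
    }

    // One small subclass per implementation.
    class ArrayListContractTest extends ListContractTest {
        @Override
        protected List<String> createList() {
            return new ArrayList<>();
        }
    }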
EDIT: Also, if you don't already have unit tests, check out Working Effectively with Legacy Code; it is a must for testing code that is not set up well for testing.
Mocking is a good way to be able to simulate system tests in unit testing; by replacing (mocking) the resources upon which the other component depends, you can perform unit testing in a "system-like" environment without needing to have the entire system constructed to do it.
As to your specific questions: generally, you shouldn't be using unit testing to test private functions; if they're private, they're private to the class. If you need to test something, test a public method which uses that private method to do something. Avoiding side effects that can be potentially problematic is best done using either a complete test environment (which can easily be wiped back to a "virgin" state) or using mocking, as described above. And testing interfaces is done by, well, testing the interface methods.
Firstly, if you already have a large system that doesn't have any unit tests, and you're planning on adding some, then allow me to offer some general advice.
From maintaining the system and working with it, you'll probably already know the areas of the system which tend to be buggiest, which tend to change often and which tend not to change very much. If you don't, you can always look through the source control logs (you are using source control, right?) to find out where most of the bug fixes and changes are concentrated. Focus your testing efforts on these classes and methods. There's a general rule called the 80/20 rule which is applicable to a whole range of things, this being one of them.
It says that, roughly on average, you should be able to cover 80% of the offending cases by doing just 20% of the work. That is, by writing tests for just 20% of the code, you can probably catch 80% of the bugs and regressions. That's because most of the fragile, commonly changed, and worst offending code makes up just 20% of the codebase. In fact, it may be even less.
You should use JUnit to do this, and you should use something like JMock or another mocking library to ensure you're testing in isolation. For system/integration testing, that is, testing things while they're working together, I can recommend FitNesse. I've had good experience with it in the past. It allows you to write your tests in a web browser using simple table-like layouts, where you can easily define your inputs and expected outputs. All you have to do is write a small backing class called a fixture, which handles the creation of the components.
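For a flavour of what a fixture looks like, here is the classic column-fixture shape: the wiki table fills in the public fields and compares its expected values against what quotient() returns (this fixture is an illustrative example, not from your system):

    import fit.ColumnFixture;

    // The wiki table sets the public fields and compares its expected
    // value against what quotient() returns.
    public class DivisionFixture extends ColumnFixture {
        public double numerator;
        public double denominator;

        public double quotient() {
            return numerator / denominator;
        }
    }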
Private functions will be tested when you test the public functions that call them. Your testing of the public function only cares that the result returned is correct.
When dealing with APIs (to other packages, URLs, or even files/network/database) you should mock them. A good unit test should run in a few milliseconds, not seconds, and mocking is the only way to achieve that. It also means that bugs between packages can be dealt with far more easily than logical bugs at the functional level. For Java, EasyMock is a very good mocking framework.
You may have a look on this list : Tools for regression testing / test automation of database centric java application? for a list of interesting tools.
As you seem to already use Junit extensively it means that you're already "test infected", that is a good point...
In my personal experience, the most difficult thing to manage is data. I mean, controlling very precisely the data against which the tests are run.
The lists of tools given above are useful. From personal experience, these are the ones I find most valuable:
Mocking - Mockito is an excellent implementation and has clever techniques to ensure you only have to mock the methods you really care about (a short sketch follows this list).
Database testing - DbUnit is indispensable for setting up test data and verifying database interactions.
Stress testing - JMeter - once you see past the slightly clunky GUI, this is a very robust tool for setting up scenarios and running stress tests.
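As promised above, a minimal Mockito sketch: the collaborator is mocked so the test exercises only the class under test (the MailServer and NotificationService types are invented for the example):

    import static org.junit.Assert.assertTrue;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class NotificationServiceTest {

        // Invented collaborators; substitute your own interfaces.
        interface MailServer {
            boolean send(String to, String body);
        }

        static class NotificationService {
            private final MailServer mailServer;

            NotificationService(MailServer mailServer) {
                this.mailServer = mailServer;
            }

            boolean notifyUser(String user) {
                return mailServer.send(user, "hello");
            }
        }

        @Test
        public void notifiesViaMailServer() {
            // Only the one call we care about is stubbed; Mockito's
            // defaults cover everything else.
            MailServer server = mock(MailServer.class);
            when(server.send("alice", "hello")).thenReturn(true);

            assertTrue(new NotificationService(server).notifyUser("alice"));
            verify(server).send("alice", "hello");
        }
    }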
As for the general approach, start by trying to get tests running for the usual "happy paths" through your application; these can form a basis for regression testing and performance testing. Once this is complete you can start looking at edge cases and error scenarios.
That said, this level of testing should be secondary to good unit testing.
Good luck!