Must-have contents/conventions for a "Selenium Automation" project? - java

I am involved in multiple projects at my company as a manual tester. When a project accumulates many test cases, we have to automate the regression suite for it. The problem is that I need a portable framework that works for any project I move to: I simply import my Java project (as an existing Maven project) and start writing and executing Selenium test cases. This is how I am doing it now, but I don't know whether it is the optimal way.
I create a Maven project, which gives me a few source folders ready to use: /myproject/src/main/java and /myproject/src/test/java.
In the /myproject/src/test/java folder I create a class which has setup() and teardown() methods.
I create another class named "Define" where I declare shared variables/strings/class objects; for example, this class has WebDriver driver; or UserLogin userlogin = new UserLogin();
I create more classes in /myproject/src/test/java named after functionality, let's say "CreateZoo", and extend them from the Define class. Later I use the methods of these classes in classes inside /myproject/src/main/java; for example, a class of the main package would be "DailyTests", and there I call methods from the classes inside /myproject/src/test/java.
Apart from this I keep chromedriver.exe, the properties file and data.xls in the main/resources folder.
I also have a CommonFunctions class extending Define in /myproject/src/test/java, where I have written common Java functions that I use frequently, like:
import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class CommonFunctions extends Define {
    /*
     * Click by link text, partial link text, name, id, xpath, css, class name
     */
    static class clik {
        // Waits up to 60 seconds for the link to be present, then clicks it.
        // driver is the WebDriver declared in Define (assumed static for this usage).
        static void txt(String locator) {
            new WebDriverWait(driver, 60).until(
                    ExpectedConditions.presenceOfElementLocated(
                            By.linkText(locator))).click();
        }
        // ...similar helpers for the other locator strategies
    }
}
and another common functions class for my project; for example, I have written a long method for user login and I call it wherever I need it.
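A hedged sketch of what such a reusable login method might look like (the class name, URL and locators are hypothetical; driver is the WebDriver from the Define class described above, assumed static):

import org.openqa.selenium.By;

public class LoginFunctions extends Define {

    // Logs in through the UI; call this at the start of any test that needs a session.
    public static void loginAs(String username, String password) {
        driver.get("https://example.com/login");                   // hypothetical URL
        driver.findElement(By.id("username")).sendKeys(username);  // hypothetical locators
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
    }
}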
So, is the way I am doing it above good, or should I have a class named after each page, for example "LoginPage.java"?
Such a class would have the HTML elements defined using PageFactory (as of now I am not using PageFactory).
I am a Selenium 2.0 aspirant and don't have much experience with it. How do I clean up the code and create/maintain Selenium projects?

Your code structure is fairly good, but I do have some suggestions:
In my opinion, the maintainability of test code improves when you define classes based on pages instead of functionality.
Consider a scenario where you have to click on Manage Account > Edit > Change Address. There can be a similar scenario where you have to click on Manage Account > Edit > Change Username. In both scenarios the first two steps are the same, but they have to be written twice: once in a ChangeAddress class and once in a ChangeUsername class. If a property of the Edit element changes, we have to change the code in two places. So: less maintainability and redundant code.
This can be avoided by defining classes according to pages: Class1 contains clickManageAccount(), Class2 contains clickEdit(), and Class3 contains clickChangeAddress() and changeUserName().
So it is just a matter of assembling the above methods in a JUnit class based on the functionality you want to test.
This will ensure each test is modular and maintainable. PageFactory can also be used efficiently in each page class, as sketched below.
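A minimal sketch of such a page class using PageFactory (the locators are assumptions for illustration; the driver is passed through the constructor here, but inheriting it from your Define class works the same way):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class ManageAccountPage {

    private final WebDriver driver;

    // Hypothetical locators for the scenario above
    @FindBy(linkText = "Edit")
    private WebElement editLink;

    @FindBy(linkText = "Change Address")
    private WebElement changeAddressLink;

    public ManageAccountPage(WebDriver driver) {
        this.driver = driver;
        PageFactory.initElements(driver, this);
    }

    public void clickEdit() {
        editLink.click();
    }

    public void clickChangeAddress() {
        changeAddressLink.click();
    }
}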
Please consider the following folder structure:
/myproject/src/test/java/page - contains all your page classes, which extend the Define class
/myproject/src/test/java/common - contains all common classes and the Define class
/myproject/src/test/java/scenarios - contains JUnit test classes with @Before [setUp()], @After [tearDown()] and @Test methods. The @Test methods use the methods in the page classes to assemble the steps required for a particular scenario.
/myproject/src/main/resources - contains resources like chromedriver.exe, the properties file and data.xls
/myproject/src/main/java/ - contains classes like DailyTests which use the classes in /myproject/src/test/java/scenarios
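A hedged sketch of a class under the scenarios folder (class and method names are assumptions; it reuses the hypothetical ManageAccountPage shown earlier and assumes chromedriver is available to the test run):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ChangeAddressScenarioTest {

    private WebDriver driver;

    @Before
    public void setUp() {
        driver = new ChromeDriver();
    }

    @Test
    public void userCanChangeAddress() {
        // Assemble the scenario from page-class methods
        ManageAccountPage manageAccount = new ManageAccountPage(driver);
        manageAccount.clickEdit();
        manageAccount.clickChangeAddress();
        // assertions on the resulting page would go here
    }

    @After
    public void tearDown() {
        driver.quit();
    }
}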
These are just my thoughts. Please share your own ideas on them.

I'd look at implementing Cucumber (with Selenium): http://cukes.info/
It'll make your code more object-oriented, cleaner, and easier to maintain and read.
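For illustration, a minimal step-definition sketch binding Gherkin steps to Selenium (the feature text, URL and locators are hypothetical; the cucumber.api.* packages are from Cucumber-JVM 1.x, newer versions use io.cucumber.*):

// A feature file (e.g. src/test/resources/features/login.feature) might contain:
//   Scenario: Successful login
//     Given the user is on the login page
//     When the user logs in as "admin"

import cucumber.api.java.en.Given;
import cucumber.api.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class LoginSteps {

    // In a real suite the driver would be created and quit via hooks or a shared helper.
    private final WebDriver driver = new ChromeDriver();

    @Given("^the user is on the login page$")
    public void theUserIsOnTheLoginPage() {
        driver.get("https://example.com/login");
    }

    @When("^the user logs in as \"([^\"]*)\"$")
    public void theUserLogsInAs(String username) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("loginButton")).click();
    }
}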

Related

Splitting up Step Definitions

Currently I have one Java class (StepDefinitions.java) that contains all of my
@Before, @After, @Given, @When and @Then
methods. I want to split these annotations into four different classes: one class for the @Before/@After hooks, one class for @Given, one class for @When, and one class for @Then.
All four classes will be under the same package. Can this be done? Do I have to change anything in the runner class? Are there any other references I have to make to these separate classes, or should it just work when I call the Gherkin steps in my feature file?
It will work out of the box. As long as the code is in the package structure given to the glue option of @CucumberOptions, it will be loaded and executed. All the step-definition and hook code is loaded for each scenario, so it doesn't matter which class the code lives in.
That said, it would be better to separate them according to the parts of the application they deal with.
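A minimal runner sketch illustrating the glue option (the feature path and package name are hypothetical; Cucumber-JVM 1.x packages shown, newer versions use io.cucumber.*):

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",
        // Any class in this package (hooks, @Given/@When/@Then classes) is picked up
        glue = "com.example.steps")
public class RunCucumberTest {
    // No body needed; the runner discovers features and step definitions itself.
}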

Selenium test organisation of classes

OK Selenium gurus, a bit of an open question here. I am looking for some guidance on the best way to organise my tests using object-oriented principles.
At the moment I am creating a test-runner main class from which I create an object of a general test class. I then extend this class for more granular tests.
An example:
I need to open the browser, enter the URL and log in as a user.
From there you can access perhaps 40 different links, each containing its own piece of functionality, e.g. a profile link which leads to a profile screen where you can enter introduction text, upload a picture, change a picture, etc.
Another example would be a notification screen where you can navigate to view the notifications you have received and mark them as read.
I could write the code to test this by, for example, creating a ton of methods in that one class and then calling them from the main test-runner class. There has to be a better organised way where I can have a separate class per piece of functionality, but won't I then have to create a new object for each test?
Sorry about the confused post; I'm trying to learn Java thoroughly and Selenium as well.
EDITED
I have copied the process of creating a page-object hybrid model that is documented in this YouTube video:
https://www.youtube.com/watch?v=gxwh8D_tx-0
I created a Pages package which contains all of the page-specific classes such as ProfilePage, NotificationPage, etc.
I have a second package which contains the tests and a TestBase class which creates the driver object and opens the browser.
I want to get to the stage where in my tests package I can have a specific class for a test, for example:
class test_that_user_can_upload_profile_picture
When I create such a class I have methods inside it such as:
test_that_navigation_to_profile_page_successful()
test_to_upload_valid_picture()
Should such navigation methods be inside this class?
Also, I find that in order to access my methods from another package I need to mark them as static. Is this OK? I noticed that the instructor's methods in the YouTube video were not static. Looking at my setup I don't quite understand why I can't access the methods unless I mark them as static. The error I get is
"Cannot make a static reference to a non-static method".
I'm also finding that in my ProfilePageNavigation class I have a bunch of methods that run in a specific order based on alphabetical ordering.
Is it simply the case that I should have just one method in each test class and call the page-class methods (or any other pertinent class) to execute that test? If it is just one method per test class, wouldn't I end up with too many test classes, each with a name like (for example) upload_valid_profile_picture containing a method of the same name, and then another class upload_invalid_profile_picture with its method? I don't want to go down that path - how do I resolve this?
Also, all my page-class methods have to take WebDriver driver as a parameter. Is there any way around this? It is a lot of duplication.
If you could point me in the right direction, and let me know whether it is OK to have the page-class methods as static, it would be appreciated.
I guess I just want to know whether I am on the right track or going down the wrong route at this early stage.
@tarquin - you can find many articles on this on the web. There are multiple ways to structure your code and work with it; my way is:
Create an object repository in a notepad file or Excel sheet, from where you can pick/change/manage all your objects.
Create a class with all reusable methods.
Create a class for your tests.
Thanks
Keshav
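As a hedged sketch of the reusable-methods idea, and of how to avoid both the static-reference error and passing WebDriver into every method, a page class can receive the driver once through its constructor (all names and locators here are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class ProfilePage {

    private final WebDriver driver;

    // The driver is passed in once, so the methods need not be static
    // and do not need a WebDriver parameter each time.
    public ProfilePage(WebDriver driver) {
        this.driver = driver;
    }

    public void openProfile() {
        driver.findElement(By.linkText("Profile")).click();
    }

    public void uploadPicture(String path) {
        driver.findElement(By.id("pictureUpload")).sendKeys(path);
        driver.findElement(By.id("save")).click();
    }
}

// In a test class:
// ProfilePage profilePage = new ProfilePage(driver);
// profilePage.openProfile();
// profilePage.uploadPicture("/tmp/avatar.png");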

Make a Java class have higher compile priority than one inside a library in IDEA

We have some functionality packed in an external library that is attached to our project. That library can't be changed in any way. Among others, there are two classes inside it: com.myorg.Grandpa and com.myorg.Dad, which extends com.myorg.Grandpa. There is also com.myorg.Grandson extending com.myorg.Dad, and a few other classes outside the library extending com.myorg.Grandpa.
I decompile the com.myorg.Grandpa class and add a new method new_method() to it.
Then I try to use new_method() in com.myorg.Grandson, but IDEA won't let me do it because Grandson extends Dad, which extends the library's Grandpa, which doesn't contain new_method().
I tried deleting Grandpa from the library and, surprisingly, IDEA didn't say a word and successfully compiled the project despite the fact that, within the library, Dad now extends a non-existent class.
The question is: how do I force Dad to extend a new Grandpa without deleting the one inside the library?
You could:
Add an abstract class between Dad and Grandson: extend Dad, add your method in that subclass, and then derive Grandson from it.
Put an instance of Dad in a new class and let your IDE create delegate methods to the aggregated Dad instance. Again, add your new method to the new class.
There is another possibility:
If you have to modify classes in place, use AspectJ to weave in code: AspectJ changes the byte code (it does not need the source code) at compile time or load time. This way you can add methods or fields.
The fact is that you are duplicating classes with the same fully qualified name, so you will get whichever one the classloader loads first. I know that in WebSphere you can tweak classloader priorities, but I couldn't say in your case.
Anyway, why not just do it without decompiling? By decompiling and customizing you are creating hard coupling to an external library and bad practice (maybe even copyright issues). Besides, if the library gets updated, you will run into trouble having to reconstruct your customized classes.
Options:
Create your own implementation, for instance:
Create an interface that replicates all the methods in Grandpa plus the one you need.
Extend the Grandpa class and implement the added method from your interface; all other methods are left intact.
Extend all your other extending classes from your own class hierarchy.
Instead of referring to the library's common class, refer to your interface.
This way you are essentially creating your own interface to the library; if it changes, you know where to make changes.
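A minimal sketch of this approach (only the class names come from the question; the method names on Grandpa are assumptions):

// GrandpaApi.java - mirrors the library methods you use, plus the new one
public interface GrandpaApi {
    void existingMethod();   // assumed to exist on com.myorg.Grandpa
    void newMethod();        // the method you wanted to add
}

// MyGrandpa.java - inherits the library behaviour, adds the new method
public class MyGrandpa extends com.myorg.Grandpa implements GrandpaApi {
    @Override
    public void newMethod() {
        // your new behaviour here
    }
}

// Client code depends on GrandpaApi (or MyGrandpa) instead of the library class:
// GrandpaApi grandpa = new MyGrandpa();
// grandpa.newMethod();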
You could even do it without the interface; it is a kind of wrapping of the functionality, and it would depend on what you need to achieve.
Anyway, I would try to solve it with my own code and not by messing with the library; such tricks are just not worth it. If a new programmer takes over the project, they will need a lot of time to find out why and how it behaves.
Now, there might be variations in how to structure the class hierarchy, but it would depend on the specific implementation you need, so you would have to post more detailed data on what the library is and what you're trying to add to it if you expect a more specific answer.
Regards
It has to appear first to the class loader.
IDEA should load your class first if it is in your project. You may also try to create a separate library for your class and include it in your project.
See also: http://www.jetbrains.com/idea/webhelp/configuring-module-dependencies-and-libraries.html

Make a class extend another class at runtime

There is a library that has a base class (let's call it CBase) that performs some tasks, and one can create classes that extend this CBase class.
The behavior of CBase is not enough for me, so I would like to create my own class (let's call it MyCBase) that has the same methods and members, but whose methods don't do the same thing.
Up to here everything is fine. What blocks me is that I would like to replace CBase with MyCBase. However, I have a lot of classes that extend CBase and I don't want to change them all.
Is it possible to replace CBase with MyCBase at runtime?
So that
public class A extends CBase {}
becomes
public class A extends MyCBase {}
Can I perform this using bytecode enhancement (like we do to add methods to a class at runtime)? Is it also possible to change inheritance this way?
Thank you for your help!
EDIT
I would like to write a plugin for a framework; this is why I would like to change inheritance at runtime. This way users of the framework can use my plugin without changing their source code (i.e. without changing the inheritance of their classes from CBase to MyCBase).
EDIT 2
Is it possible to do something like this (using Javassist)?
ClassPool pool = ClassPool.getDefault();
CtClass cc = pool.get("pkg.AClass");
cc.setSuperclass(pool.get("mylib.MyCBase"));
cc.toClass();
I'm no expert. You could probably extend ClassLoader, but I highly recommend that you don't. Changing the superclass in the source will touch many of your classes, but it will keep the code readable and the application's behavior at execution clear.
I also think there is room for architectural improvement, since you have so many classes extending CBase. People try to minimize dependencies on other libraries, or keep them really small, because then you can easily switch to another library or add your own functionality.
I don't think you can change what a class extends at runtime. I would suggest changing the superclass of the classes concerned, or building an interface which contains all the things you need.
Changing all derived classes is a simple matter, provided you control their source code:
Create a new class in your project. Call it CBase, and put it in the same package as the library class.
Use the rename/move refactoring of your IDE to rename CBase to MyBase. This will have the IDE rename all references to the renamed/moved class.
Write the code for MyBase, extending from CBase.
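After those steps, the new class might look roughly like this (a sketch; the package and method names are hypothetical, and CBase here is the library's class):

package com.example.lib;   // hypothetical stand-in for the library's package

public class MyBase extends CBase {

    // Override or add the behaviour that the library's CBase does not provide.
    @Override
    public void doTask() {   // assumed method on the library's CBase
        // custom behaviour instead of (or in addition to) super.doTask()
    }
}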
If you cannot do this (for instance because some derived classes are in a library you do not control), you can replace the implementation of CBase with your own: simply create a class with the same package and name in your project (the classloader searches the classpath in order and uses the first class with the matching package and name that it finds). This approach, however, is very brittle, as the compiler cannot check binary compatibility between the old and the new version of CBase. The JVM will check this compatibility when classes are loaded, but since classes are only loaded when needed, it is hard to test your changes. (Which is why I do not recommend this approach if there are other options.)
You could also change the classes as they are loaded by manipulating the class files, but that is going to be even more brittle, and the compiler would not let you use any additional features MyBase might have. So this is definitely not a good idea.

Dummy data and unit testing strategies in a modular application stack

How do you manage the dummy data used for tests? Do you keep it with the respective entities? In a separate test project? Load it with a serializer from external resources? Or just recreate it wherever needed?
We have an application stack with several modules depending on one another, each containing entities. Each module has its own tests and needs dummy data to run with.
Now, a module that has a lot of dependencies will need a lot of dummy data from the other modules. Those, however, do not publish their dummy objects, because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable, so even running transactions against the object layer requires them to contain some value, most of the time with further constraints like uniqueness, length, etc.
Is there a best-practice way out of this, or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java --> gets packaged into the jar (.../entities/*.java contains the entities)
src/main/resources --> gets packaged into the jar
src/test/java --> contains the dummy object setup, will NOT get packaged
src/test/resources --> not packaged
We use Maven to handle dependencies.
Module example:
Module A has some dummy objects.
Module B needs its own objects AND the same ones as Module A.
Option a)
A test module T could hold all dummy objects and provide them in test scope (so the loaded dependencies don't get packaged) to all tests in all modules. Will that work? Meaning: if I add T as a dependency of A and run install on A, will the artifact NOT contain references introduced by T, especially not to B? A would, however, then know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy, allowing B to get them while A does not know about B's dummy data.
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them, because it already has the dependency on the module they belong to. This would require every module to create and serialize its dummy objects, though, and how would one do that? If it is done with another unit test, it introduces dependencies between unit tests, which should never happen; if it is done with a script, it will be hard to debug and not flexible.
Option d)
Use a mock framework and assign the required fields manually for each test as needed. The problem here is that most fields in our entities are not nullable and thus require setters or constructors to be called, which would put us back where we started.
What we don't want
We don't want to set up a static database with static data, as the structure of the required objects will change constantly (a lot right now, a little less later). So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. A static database would also introduce a lot of potential errors and test interdependencies.
Are my thoughts going in the right direction? What is the best practice for dealing with tests that require a lot of data? We will have several interdependent modules that will require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
    public Person mockPerson(parameters...) {
        return mockedPerson;
    }
}

public class TestPerson {
    @Inject
    private MockPerson mockPerson;

    public void testCreate() {
        Person person = mockPerson.mockPerson(...);
        // Asserts...
    }
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }

public class TestProduct {
    @Inject
    private MockProduct mockProduct;
    // ...
}
MockProduct is needed but will not be packaged.
Now the Order tests will require MockPerson and MockProduct, so we currently need to create both of those as well as MockOrder in order to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes:
public class MockProduct { ... }
public class MockPerson { ... }
This is the only class that should be here:
public class MockOrder { ... }

public class TestOrder {
    @Inject
    private order.MockPerson mockPerson;
    @Inject
    private order.MockProduct mockProduct;
    @Inject
    private order.MockOrder mockOrder;

    public void testCreate() {
        Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
        // Asserts...
    }
}
The problem is that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the mocks with the jar, so that every other test that has the dependency anyway can simply call Mock.mock and get a nicely set-up object? Or is this the dark side - the easy way?
This may or may not apply - I'm curious to see an example of your dummy objects and the related setup code (to get a better idea of whether it applies to your situation). But what I've done in the past is not introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially to package and maintain.
What I've usually done (and AFAICT in Java this is the best practice) is to use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post.
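A minimal Test Data Builder sketch for the Person entity from the question (the field names and constructor are assumptions):

public class PersonBuilder {

    // Sensible defaults satisfy the not-nullable constraints;
    // individual tests override only what they care about.
    private String name = "Default Name";
    private String address = "1 Default Street";

    public PersonBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public PersonBuilder withAddress(String address) {
        this.address = address;
        return this;
    }

    public Person build() {
        return new Person(name, address);   // assumes such a constructor exists
    }
}

// Usage in a test:
// Person person = new PersonBuilder().withName("Alice").build();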
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
Well, I read all the answers so far carefully, and it is a very good question. I see the following approaches to the problem:
Set up a (static) test database;
Each test has its own setup code that creates (dynamic) test data prior to running the unit tests;
Use dummy or mock objects. All modules know all dummy objects; this way there are no duplicates;
Reduce the scope of the unit test.
The first option is pretty straightforward and has many drawbacks: somebody has to reproduce the database once in a while when unit tests "mess it up"; if there are changes in the data module, somebody has to introduce corresponding changes to the test data; so there is a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
With the second option, you write test code that, prior to testing, invokes some of your "core" business methods to create the entities. Ideally, your test code should be independent of the production code, but in this case you will end up with duplicate code that you have to maintain twice. Sometimes it is good to split your production business methods in order to have an entry point for your unit tests (I make such methods private and use reflection to invoke them; a remark on the method is also needed, and refactoring becomes a bit tricky). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you cannot test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work. Also, with any change in this area you should keep in mind "how will it affect my unit tests?". Sometimes it is also impossible to reproduce all the required data dynamically, usually because of third-party APIs; for example, you call another application with its own DB, from which you are required to use some keys. These keys (with the associated data) are created manually through the third-party application. In such a case this data, and only this data, should be created statically; for example, you create 10,000 keys starting from 300,000.
The third option should be good. Your options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your modules knowing all your entities.
The fourth option means that you redefine what a "unit" is in your unit tests. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we originally tested was an integration test and not a unit test. So we split our methods and extract small "units of work" that receive all their dependencies on other modules as parameters. These parameters can (hopefully) easily be mocked. The main drawback of this approach is that you don't test all of your code, but only, so to say, the "focal points". You need to do integration testing separately (usually done by a QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit testing a module which depends on other modules, and because of that on the test data of other modules, is not a real unit test!
What if you injected a mock for all of the dependencies of your module under test, so you could test it in complete isolation? Then you wouldn't need to set up a complete environment where each depending module has the data it needs; you would only set up the data for the module you are actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests, and at the top you have some scenario tests (or, as Google calls them, small, medium and large tests).
You will have a huge number of unit tests that can cover every code path, because the mocked dependencies are completely configurable. Then you can trust your individual parts, and the only thing your functional and scenario tests have to do is check whether each module is wired correctly to the other modules.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
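For illustration, a hedged sketch of testing one module in isolation with a mocking framework such as Mockito (all class and method names here are hypothetical):

import static org.mockito.Mockito.*;
import static org.junit.Assert.*;
import org.junit.Test;

public class OrderServiceTest {

    @Test
    public void createsOrderWithoutRealPersonOrProductModules() {
        // Mock the collaborators that live in the other modules
        PersonService personService = mock(PersonService.class);
        ProductService productService = mock(ProductService.class);
        when(personService.findById(1L)).thenReturn(new Person("Alice", "1 Default Street"));
        when(productService.findById(2L)).thenReturn(new Product("Book"));

        // Only the module under test runs real code
        OrderService orderService = new OrderService(personService, productService);
        Order order = orderService.createOrder(1L, 2L);

        assertNotNull(order);
        verify(personService).findById(1L);
        verify(productService).findById(2L);
    }
}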
The Builder pattern mentioned by cwash will definitely help in your functional tests.
We are using a .NET builder that is configured to build a complete object tree and generate default values for each property, so when we save this to the database all required data is present.
