I am building a test framework that should support test execution on Android, iOS, and the web. Since the app is complex, with many test cases (1000+), one requirement is parallel execution (at least for tests that need only a single device). I also need tests with parent-child behavior (it is a chat application, so e.g. an SMS sent from device 'A' must arrive on device 'B'). I am planning to use the Page Object Model (POM) with PageFactory.
So here my initial thoughts on how to do this:
In @BeforeSuite, install the application on Android/iOS using the CLI
In testng.xml, provide device-related parameters such as udid, platform, os, etc.
In @BeforeMethod of a BaseTest that all tests extend, initialize the driver (Android, iOS, or web (Chrome, Firefox, ...))
In @AfterMethod, quit the driver (I can also have before/after methods at the test level)
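A sketch of what the testng.xml in step 2 might look like; suite, class, and parameter names and the udids here are illustrative, not from an actual project:

```xml
<!-- Hypothetical sketch: one <test> block per device, each with its own
     parameters; parallel="tests" runs each block on its own thread -->
<suite name="mobile-suite" parallel="tests" thread-count="2">
  <test name="android-tests">
    <parameter name="platform" value="Android"/>
    <parameter name="udid" value="emulator-5554"/>
    <classes>
      <class name="com.example.tests.SmsTests"/>
    </classes>
  </test>
  <test name="ios-tests">
    <parameter name="platform" value="iOS"/>
    <parameter name="udid" value="ios-simulator-1"/>
    <classes>
      <class name="com.example.tests.SmsTests"/>
    </classes>
  </test>
</suite>
```

BaseTest would then pick the parameters up via @Parameters({"platform", "udid"}) on its @BeforeMethod.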
Pros:
easy to manage driver instance creation before each test,
straightforward sequential (parent-child) test execution (in testng.xml I can specify parent and child methods),
easily manageable parallel execution.
Cons:
creating a driver instance before each test method is time-consuming,
if one test performs sign-in and the next test needs to send an SMS, then because the driver is re-initialized in @BeforeMethod, I have to perform sign-in again (code duplication).
Could you suggest or point me to a framework that supports all the mentioned requirements?
Should I use Selenium Grid?
Any help is highly appreciated :)
Based on your description, you can check the AppiumTestDistribution framework and extend it for your needs.
I suggest reading the official docs; that project is a good starting point.
One note from me: start the driver only once, in @BeforeSuite: installing the app and the Appium helper apps is time-consuming. You can simply reset the app state in @BeforeTest, e.g. by restarting the activity.
I created a project using Cucumber to perform e2e tests of the various APIs I consume. I would like to know if I can trigger these tests through endpoints, to further automate the application that was created.
That way I could deploy this app and would not need to keep running the tests locally.
You can do that if you create a REST API with a GET method which executes the test runner when called.
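A minimal sketch of that idea using only the JDK's built-in HTTP server. Here runTests() is a stand-in returning a canned string; a real setup would replace it with a call into the Cucumber CLI (the io.cucumber.core.cli.Main class), which needs the Cucumber jars on the classpath:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

public class TestTriggerServer {

    // Placeholder: a real version would invoke the Cucumber runner here.
    static String runTests() {
        return "tests started";
    }

    // Expose a GET endpoint that kicks off the runner when called.
    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/run-tests", exchange -> {
            byte[] body = runTests().getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(8085);
        // Call our own endpoint once to demonstrate the round trip, then stop.
        try (InputStream in = new URL("http://localhost:8085/run-tests").openStream()) {
            System.out.println(new String(in.readAllBytes()));  // prints "tests started"
        } finally {
            server.stop(0);
        }
    }
}
```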
How to run cucumber feature file from java code not from JUnit Runner
But I don't recommend doing that, since what you are trying to achieve looks to me like a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set it up on your own, I would recommend you start reading about Jenkins. You can host it on any machine and run your tests from there, accessing your Jenkins instance from any machine:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, they already have their own way of creating pipelines, and you can use it to run your tests. Read about GitLab pipelines or GitHub Actions.
I created a Java wrapper to feed JMeter. I have implemented Java classes with Selenium that are invoked by the wrapper and perform GUI tests.
I activated the headless option.
Launching tests with a single user from JMeter, everything works correctly.
Trying to launch tests with two users, they fail.
Can you help me understand why?
Most probably you missed an important bit: each Selenium session needs its own URL, and each Selenium server needs to run on a different port. So make sure to amend your "wrapper" to be aware of multiple WebDriver instances and to kick off a separate instance of the Selenium server (or standalone client) for each JMeter thread (virtual user).
Unfortunately we cannot help further without seeing your code; just keep in mind that your wrapper needs to be thread-safe. Also pay attention to the jmeter.log file - normally it should contain enough information to get to the bottom of your test failure.
P.S. Are you aware of the WebDriver Sampler plugin? It's designed in line with the JMeter threads model, and you should be able to kick off as many browsers as your machine can handle. If for some reason it doesn't fit your needs, you can at least look into its source code to get an idea of what you need to change in your "wrapper".
We are currently improving the test coverage of a set of database-backed applications (or 'services') we are running by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
The A version allows for test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for. Or Maven could be set up to do this.
The B version requires the system to already run but the system could be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What now puzzles me is how to achieve a required state of the service prior to the execution of every test case?
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity. Downside: Changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating that in non-production environments. That API is then used for testing. Downside: adds additional code to the production system, API logic might not be trivial, e.g. creation of complex entity graphs (through aggregation/composition), and we need to make sure the API is not activated for production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC. Downside: you duplicate creation/deletion logic and/or ORM inside the test case. Refactoring the DB breaks the test case though the SUT still works.
Besides these possibilities, Grails when using the (-inline) option for functional tests allows accessing Spring services (since the application instance is run inside the same JVM as the test case). Same applies for Spring Boot "integration tests". But I cannot run the tests against an already running application version (as described as option B above).
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest way to clean the database is to delete all content with an SQL script. (For debugging it is also useful to run this in the test setup, so the state of the database is preserved after a test failure.) Such a script can be maintained manually (it just contains delete <table> statements). If your database changes often, you could try to generate the clean script instead (disable foreign keys to avoid ordering problems, then delete from each table).
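For illustration, such a clean script could look like the following; the table names are invented, and SET REFERENTIAL_INTEGRITY is H2 syntax (other databases have their own way to disable foreign-key checks):

```sql
-- Hypothetical cleanup script: disable FK checks to avoid ordering
-- problems, empty every table, then re-enable the checks.
SET REFERENTIAL_INTEGRITY FALSE;
DELETE FROM order_item;
DELETE FROM orders;
DELETE FROM customer;
SET REFERENTIAL_INTEGRITY TRUE;
```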
To generate test data you can use an SQL script too, but that will be hard to maintain; it is better to create it in code. That code can live in ordinary services. If you don't need real production data, the build-test-data plugin is a great help in simplifying test data creation. Once you are on the code side, it also makes sense to reuse production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & AJAX stuff ;-). Since all the creation code now lives in a service, the only thing you need to call with remote control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple, you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from getting more complicated than they already are. :)
I have developed a micro-framework in Java which does the following function:
The list of all test cases will be in an MS Access database, along with test data for the application under test.
I have created multiple classes, each having multiple methods within them. Each of these methods represents a test case.
My framework reads the list of test cases marked for execution from Access and dynamically decides which class/method to execute, using reflection.
The framework has methods for sendKeys, click, and all other generic actions. It takes care of reporting in Excel.
All this works fine without any issue.
Now I am looking to run the test cases across multiple machines using Grid. I read on many sites that we need a framework like TestNG to do this with Grid, but I hope it is possible to integrate Grid into my own framework. I have read many articles and e-books, but they do not explain the coding logic for this.
I will be using only Windows 7 with IE. I don't need cross-browser/OS testing.
I can make any changes to the framework to accomplish this. So please feel free.
In the Access DB mentioned above, I will have details about each test case and the machine on which it should run. Currently, users can select the test cases they want to run locally in the Access DB and run them.
How will my methods (test scripts) know on which machine they are going to be executed? What kind of code changes should I make, apart from using RemoteWebDriver and capabilities?
Please let me know if you need any more information on my code or have any questions. Also, kindly correct me if any of my understanding of Grid is wrong.
How will my methods know which machine they are going to be executed on? - You only need to know one machine in a Grid setup: the IP of your hub machine. The hub decides which of the registered nodes to send a request to, depending on the capabilities you specify when instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
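For instance, the hub address could come from a system property with a local default. The Selenium calls are left as comments because they need the client jar on the classpath; the class name and the grid.hub property are illustrative, not from the original framework:

```java
// Sketch only: keep the hub address configurable instead of hard-coded.
public class GridConfig {

    // e.g. run with -Dgrid.hub=192.168.1.20:4444 to point at a remote hub
    public static String hubUrl() {
        String host = System.getProperty("grid.hub", "localhost:4444");
        return "http://" + host + "/wd/hub";
    }

    public static void main(String[] args) {
        System.out.println(hubUrl());  // http://localhost:4444/wd/hub when the property is unset
        // DesiredCapabilities caps = new DesiredCapabilities();
        // caps.setBrowserName("internet explorer");
        // WebDriver driver = new RemoteWebDriver(new URL(hubUrl()), caps);
    }
}
```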
The real use of Grid is parallel remote execution, so how you make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate it. The other option is to implement the multithreading yourself to trigger your tests in parallel. Based on my experience, I would recommend TestNG, since it provides many more capabilities beyond parallelism. You need to take care that each driver instance is specific to its thread and not a global variable.
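The last point, keeping the driver per-thread rather than global, is usually done with a ThreadLocal. A sketch with a stand-in driver class (so the pattern can be shown without a Selenium dependency):

```java
// Sketch of the thread-local driver pattern; FakeDriver stands in for
// RemoteWebDriver so the idea runs without Selenium on the classpath.
public class DriverManager {

    static class FakeDriver { }

    // Each thread lazily gets, and keeps, its own driver instance.
    private static final ThreadLocal<FakeDriver> DRIVER =
            ThreadLocal.withInitial(FakeDriver::new);

    public static FakeDriver get() {
        return DRIVER.get();
    }

    public static void main(String[] args) throws InterruptedException {
        FakeDriver[] seen = new FakeDriver[2];
        Thread a = new Thread(() -> seen[0] = get());
        Thread b = new Thread(() -> seen[1] = get());
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(seen[0] != seen[1]);  // prints true: one driver per thread
    }
}
```

A real version would also remove() the ThreadLocal after quitting the driver to avoid leaking sessions between reused threads.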
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you. It is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started on. Grid provides a REST API to open and interact with browsers, so your test will be using this rather than opening a browser locally. Any other non-Selenium code will be executed in the context of the original machine, not the machine where the browser has been opened (e.g. file-system access happens where the test runs, not where the browser is). Any use of static classes and globals in your framework may also cause issues, as each test will access these concurrently. Your code must be thread-safe.
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use. It is the parallel execution which is hard, and frameworks such as TestNG provide this out of the box.
Good luck with your framework.
One option for running the tests in my Play! application is executing the command play auto-test.
One of the ways Play identifies tests to run is to find all test classes whose superclass is play.test.UnitTest (or another Play equivalent). Having a test class extend UnitTest seems to come with some overhead, as shown by this output in the console:
INFO info, Starting C:\projects\testapp\.
WARN warn, Declaring modules in application.conf is deprecated. Use dependencies.yml instead (module.secure)
INFO info, Module secure is available (C:\play-1.2.1\modules\secure)
INFO info, Module spring is available (C:\projects\testapp\.\modules\spring-1.0.1)
WARN warn, Actually play.tmp is set to null. Set it to play.tmp=none
WARN warn, You're running Play! in DEV mode
INFO info, Connected to jdbc:h2:mem:play;MODE=MYSQL;LOCK_MODE=0
INFO info, Application 'Test App' is now started !
Obviously, having a Play environment for tests that require such a setup is useful. However, if I have a test class that tests production code whose logic does not require a Play environment, I don't want to have to extend UnitTest, so that I can avoid the overhead of starting up a Play environment.
If a test class does not extend UnitTest, then it does not get executed by play auto-test. Is there a way to get the play auto-test command to execute all tests, regardless of whether they extend Play's UnitTest?
Edit: Someone has actually raised a ticket for this very issue
The short answer: no. A tad longer answer: no, unless you change code in the framework. The auto-test is an Ant task that sets up the server and triggers the testing, but it does not use the standard JUnit task, so it won't detect (by default) your 'normal' unit tests.
You have two options: either you add an extra task to Play's Ant file to run plain unit tests via the JUnit task (you will need to include the relevant jars too), or you edit the code used to launch the Play test environment.
Both imply changing the framework to some extent. Although, given that you are using Play, I wonder why you would not have all your tests follow the Play pattern...
If these tests don't require any Play! feature, why don't you put them in a library? With your example (math add): create a calculator.jar package, and build it with Ant or Maven after running the tests.
This way, you can use your library in several Play! projects (or Spring, Struts, ... if you want).
I really don't get why the problem itself is even debatable. Having simple and small unit tests (even in the web part of your project) is the most normal thing to do.
The extra overhead of framework initialisation slows down your round trips significantly if you have many tests. As can be seen in the ticket, the current workaround is to make your unit tests extend org.junit.Assert instead of play.test.UnitTest.