I have developed a micro-framework in Java that works as follows:
The list of test cases is kept in an MS Access database, along with the test data for the application under test.
I have created multiple classes, each containing multiple methods. Each of these methods represents a test case.
My framework reads the list of test cases marked for execution from Access and uses reflection to decide dynamically which class/method to execute.
The framework has generic helper methods for sendKeys, click, and so on, and it takes care of reporting in Excel.
All this works fine without any issue.
Now I am looking to run the test cases across multiple machines using Selenium Grid. Many sites say you need a framework like TestNG for this, but I hope it is possible to integrate Grid into my own framework. I have read many articles and e-books, none of which explains the coding logic for this.
I will be using only Windows 7 with IE; I don't need cross-browser/OS testing.
I can make any changes to the framework to accomplish this, so please feel free to suggest them.
In the Access DB mentioned above, I will have details about each test case and the machine on which it should run. Currently, users select the test cases they want to run locally in the Access DB and run them.
How will my methods (test scripts) know which machine they are going to be executed on? What code changes should I make apart from using RemoteWebDriver and capabilities?
Please let me know if you need any more information on my code or have any questions. Also, kindly correct me if any of my understanding of Grid is wrong.
"How will my methods know which machine they are going to be executed on?" - You just need to know one machine with a Grid setup: the IP of your hub machine. The hub decides which of the nodes registered with it to send the request to, depending on the capabilities you specify while instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
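As a minimal sketch of that idea (the property names grid.hub.host and grid.hub.port are assumptions, not part of your framework), the hub address can be read from configuration and used to build the RemoteWebDriver URL:

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Properties;

// Sketch: resolve the hub address from configurable properties so the test
// scripts never hard-code the machine they run against. Property names are assumed.
public class HubConfig {

    static URL hubUrl(Properties props) throws MalformedURLException {
        String host = props.getProperty("grid.hub.host", "localhost");
        String port = props.getProperty("grid.hub.port", "4444");
        return new URL("http://" + host + ":" + port + "/wd/hub");
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("grid.hub.host", "10.0.0.5");
        URL hub = hubUrl(props);
        System.out.println(hub); // prints http://10.0.0.5:4444/wd/hub
        // With Selenium on the classpath, the driver would then be created as:
        //   DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
        //   WebDriver driver = new RemoteWebDriver(hub, caps);
        // The hub decides which registered node actually opens the browser.
    }
}
```

Changing the hub machine then becomes a one-line configuration change instead of a code change.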
The real use of Grid is parallel remote execution, so how you make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate TestNG. The other option is to implement the multithreading yourself to trigger your tests in parallel. I would recommend TestNG based on my experience, since it provides many more capabilities apart from parallelism. You need to take care that each driver instance is specific to its thread and not a global variable.
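The per-thread driver point can be sketched like this (a String stands in for RemoteWebDriver so the sketch runs without Selenium on the classpath; the idea transfers directly):

```java
import java.util.Set;
import java.util.concurrent.*;

// Sketch: one "driver" per thread via ThreadLocal, so parallel tests never
// share a browser session. In the real framework the value would be a
// RemoteWebDriver created against the hub URL.
public class DriverPerThread {

    private static final ThreadLocal<String> DRIVER =
            ThreadLocal.withInitial(() -> "driver-" + Thread.currentThread().getName());

    static String driver() { return DRIVER.get(); }

    public static void main(String[] args) throws Exception {
        int threads = 4;
        Set<String> seen = ConcurrentHashMap.newKeySet();
        CountDownLatch ready = new CountDownLatch(threads);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                ready.countDown();
                ready.await();          // hold each task on its own thread
                seen.add(driver());     // every thread sees its own driver
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(seen.size() + " distinct drivers"); // prints 4 distinct drivers
    }
}
```

With TestNG, the same effect is usually achieved by storing the driver in a ThreadLocal inside a base test class and initializing it in a before-method hook.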
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you. It is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started on. Grid provides a REST API to open and interact with browsers, so your test will use this rather than opening a browser locally. Any other non-Selenium code will execute within the context of the original machine, not the machine where the browser has been opened (e.g. file-system access happens where the test runs, not where the browser is open). Any use of static classes and globals in your framework may also cause issues, as each test will access these concurrently. Your code must be thread-safe.
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use. It is the parallel execution which is hard, and frameworks such as TestNG provide this out of the box.
Good luck with your framework.
I am building a test framework that should support test execution on Android, iOS and Web. Since the app is complex, with many test cases (1000+), one of the requirements is to be able to execute in parallel (including tests that require only a single device), as well as tests with parent-child behavior (a chat application: an SMS sent from device 'A' to device 'B'). I am planning to use POM with PageFactory.
So here my initial thoughts on how to do this:
In @BeforeSuite, install the application on Android/iOS using the CLI
In testng.xml, provide device-related parameters such as udid, platform, OS, etc.
In @BeforeMethod in a BaseTest that all tests extend, initialize the driver (Android, iOS or Web (Chrome, FF...))
In @AfterMethod, quit the driver (I can also have before/after methods at the test level)
Pros:
easy to manage driver instance creation before each test,
manageable sequential (parent-child) test execution (in testng.xml I can specify parent and child methods),
easily manageable parallel execution.
Cons:
creating a driver instance before each test method will be time-consuming,
if one test performs sign-in and the next test needs to send an SMS, the driver initialization in @BeforeMethod forces me to perform sign-in again (code duplication).
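For reference, the testng.xml parameterization mentioned above could look like the sketch below (suite, test, parameter names and values are all assumptions); parallel="tests" gives each device its own <test> block and thread:

```xml
<suite name="device-suite" parallel="tests" thread-count="2">
  <test name="android-device">
    <parameter name="platform" value="Android"/>
    <parameter name="udid" value="emulator-5554"/>
    <classes><class name="tests.SmsTests"/></classes>
  </test>
  <test name="ios-device">
    <parameter name="platform" value="iOS"/>
    <parameter name="udid" value="ios-device-udid"/>
    <classes><class name="tests.SmsTests"/></classes>
  </test>
</suite>
```

The @BeforeMethod in the BaseTest would then read these values via TestNG's @Parameters and pick the matching driver.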
Could you suggest or point me to a framework that can support all the mentioned requirements?
Should I use the selenium grid?
Any help is highly appreciated :)
Based on your description, you can check the AppiumTestDistribution framework and extend it for your needs.
I suggest checking the official docs; this project is a good starting point.
One note from me: start the driver only once, in @BeforeSuite, since installing the app and the Appium helper apps is time-consuming; you can simply reset the app state in @BeforeTest, e.g. by starting the activity.
I'm currently writing a Java program that is an interface to another server. The majority of the functions (close to 90%) do something on the server. Currently, I'm just writing simple classes that run some actions on the server and then check the results myself, or adding methods to the test that read back the written information.
Currently, I'm developing on my own computer, and have a version of the server running locally on a VM.
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to. I am not sure of the best way to go about my testing. I have my JUnit tests (on simple functions that do not interact externally) that run at every build. I can't seem to find an established way in JUnit to write tests that do not have to run at every build (perhaps only when the functions they cover change?).
Or, can anyone point me in the right direction for how best to handle my testing?
Thanks!
I don't want to continually run the tests at every build, as I don't want to keep modifying the server I am connected to
This should have raised the alarms for you. Running the tests is what gives you feedback on whether you broke stuff. Not running them means you're blind. It does not mean that everything is fine.
There are several approaches, depending on how much access you have to the server code.
Full Access
If you're writing the server yourself, or you have access to the code, then you can create a test-kit for the server - A modified version of the server that runs completely in-memory and allows you to control how the server responds, so you can simulate different scenarios.
This kind of test-kit is created by separating the logic parts of the server from its surroundings, and then mocking them or creating in-memory versions of them (such as databases, queues, file-systems, etc.). This allows the server to run very quickly and it can then be created and destroyed within the test itself.
Limited/No Access
If you have to write tests for integration with a server that's out of your control, such as a 3rd party API, then the approach is to write a "mock" of the remote service, and a contract test to check that the mock still behaves the same way as the real thing. I usually put those in a different build, and run that occasionally just to know that my mock server hasn't diverged from the real server.
Once you have your mock server, you can write an adapter layer for it, covered by integration tests. The rest of your code will only use the adapter, and therefore can be tested using plain unit tests.
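A self-contained sketch of this pattern using only the JDK (the /status endpoint and the class names are invented for illustration): an in-memory mock of the remote service, plus an adapter that is the only code talking to it. Your unit tests exercise the adapter against the mock; the contract tests run the same adapter against the real server.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class MockServerDemo {

    /** Adapter: the single place the rest of the code talks to the remote service. */
    static class StatusAdapter {
        private final URI base;
        StatusAdapter(URI base) { this.base = base; }
        String status() throws Exception {
            HttpRequest req = HttpRequest.newBuilder(base.resolve("/status")).GET().build();
            return HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body();
        }
    }

    /** In-memory mock of the remote server, cheap enough to create per test. */
    static HttpServer startMock() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/status", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer mock = startMock();
        try {
            URI base = URI.create("http://localhost:" + mock.getAddress().getPort());
            System.out.println(new StatusAdapter(base).status()); // prints OK
        } finally {
            mock.stop(0);
        }
    }
}
```

Because the mock binds to port 0 (an ephemeral port) and is started and stopped inside the test, these tests can safely run at every build without touching the real server.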
The second approach can, of course, be employed when you have full access as well, but usually writing the test-kit is better. Those kinds of tests tend to be duplicated across projects and teams, so when the server changes, a whole bunch of people need to fix their tests; if the test-kit is written as part of the server code, it only has to be altered in one place.
I created a Java wrapper to feed JMeter. I have implemented Java classes with Selenium that are invoked by the wrapper and perform GUI tests.
I activated the headless option.
Launching tests with a single user from JMeter, everything works correctly.
When I try to launch tests with two users, they fail.
Can you help me understand why?
Most probably you missed an important bit: each Selenium session needs to have a separate URL, and each Selenium server instance needs to run on a different port. So make sure to amend your "wrapper" to be aware of multiple WebDriver instances and to kick off a separate instance of the Selenium server (or standalone client) for each JMeter thread (virtual user).
Unfortunately we cannot help further without seeing your code, just keep in mind that your wrapper needs to be thread-safe. Also pay attention to jmeter.log file - normally it should contain enough information to get to the bottom of your test failure.
P.S. Are you aware of the WebDriver Sampler plugin? It's designed in line with the JMeter threads model, and you should be able to kick off as many browsers as your machine can handle. If for some reason it doesn't fit your needs, you can at least take a look at its source code to get an idea of what you need to change in your "wrapper".
We are currently improving the test coverage of a set of database-backed applications (or 'services') we are running by introducing functional tests. For me, functional tests treat the system under test (SUT) as a black box and test it through its public interface (be it a Web interface, REST, or our potential adventure into the messaging realm using AMQP).
For that, the test cases either A) bootstrap an instance of the application or B) use an instance that is already running.
The A version allows for test cases to easily test the current version of the system through the test phase of a build tool or inside a CI job. That is what e.g. the Grails functional test phase is for. Or Maven could be set up to do this.
The B version requires the system to already run but the system could be inside (or at least closer to) a production environment. Grails can do this through the -baseUrl option when executing functional tests.
What now puzzles me is how to achieve a required state of the service prior to the execution of every test case?
If I e.g. want to test a REST interface that does basic CRUD, how do I create an entity in the database so that I can test the HTTP GET for it?
I see different possibilities:
Using the same API (e.g. HTTP POST) to create the entity. Downside: Changing the creation method breaks two test cases. Furthermore, there might not be a creation method for all APIs.
Adding an additional CRUD API for testing and only activating that in non-production environments. That API is then used for testing. Downside: adds additional code to the production system, API logic might not be trivial, e.g. creation of complex entity graphs (through aggregation/composition), and we need to make sure the API is not activated for production.
Basically the same approach is followed by the Grails Remote Control plugin. It allows you to "grab into your application" and invoke arbitrary code through serialisation. Downside: Feels "brittle". There might be similar mechanisms for different languages/frameworks (this question is not Grails specific).
Directly accessing the relational database and creating/deleting content, e.g. using DbUnit or just manually creating entities through JDBC. Downside: you duplicate creation/deletion logic and/or ORM inside the test case. Refactoring the DB breaks the test case though the SUT still works.
Besides these possibilities, Grails when using the (-inline) option for functional tests allows accessing Spring services (since the application instance is run inside the same JVM as the test case). Same applies for Spring Boot "integration tests". But I cannot run the tests against an already running application version (as described as option B above).
So how do you do that? Did I miss any option for that?
Also, how do you guarantee that each test case cleans up after itself properly so that the next test case sees the SUT in the same state?
As with unit testing, you want to have a "clean" database before you run a functional test. You will need some setup/teardown functionality to bring the database into a defined state.
The easiest/fastest solution to clean the database is to delete all content with an SQL script. (For debugging, it is also useful to run this in the test setup, so that the state of the database is kept after a test failure.) The script can be maintained manually (it just contains delete <table> statements). If your database changes often, you could try to generate the clean script (disable foreign keys (to avoid ordering problems), then delete from the tables).
To generate test data, you can use an SQL script too, but that will be hard to maintain; better to create it in code. The code can be placed in ordinary services. If you don't need real production data, the build-test-data plugin is a great help in simplifying test data creation. If you are on the code side, it also makes sense to reuse the production code to create test data, to avoid duplication.
To call the test data setup, simply use remote-control. I don't think it is more brittle than all the HTTP & AJAX stuff ;-). Since we now have all the creation code in a service, the only thing you need to call with remote-control is the service that creates the data. It does not have to get more complicated than remote { ctx.testDataService.setupDataForXyz() }. If it is that simple, you can even drop remote-control and use a controller/action to run it.
Do not test too much detail with functional tests, to keep them from becoming more complicated than they already are. :)
Similar to "How do I pass program arguments in Java for my FitNesse fixture?", I wish to kick off my FitNesse tests in parallel using fitnesseMain.FitNesseMain.launchFitNesse(Arguments arguments) and pass thread-safe objects to each test, to be accessed later by the test code run by FitNesse.
The test code itself is plain old Java, invoked from FitNesse using GivWenZen. The Java test code goes on to dynamically kick off Selenium tests.
I need to pass these thread-safe objects through FitNesse all the way to the Java test scripts so that they start a Selenium RemoteWebDriver with the correct org.openqa.selenium.remote.DesiredCapabilities.
I have tried using the good old java.lang.ThreadLocal, but it appears that FitNesse spawns threads of its own to run the tests, which effectively eliminates this option.
Considering GivWenZen is written using Slim, I don't think what you want to do is possible. Even if it is possible, it certainly isn't easy, as Slim works by running the tests in a separate process.
So when you run FitNesse, it creates the web server and the wiki; that runs as one Java process. When you click the Test or Suite buttons (or use the URLs), it creates a new Java process, the SlimServer. The FitNesse server then sends instructions to the SlimServer as strings, and the SlimServer turns those into instructions to run tests. So the coupling between the code launched via FitNesseMain and the Slim tests is actually rather loose. This is done on purpose, as it lets the SlimServer implementation be language-independent.
Within the SlimServer there is the ability to work with actual object references, and that might be OK, but I have doubts that the chain of custody will be thread-safe at each step.
Sorry. Maybe someone else will have an idea for how to work around the issues I've described.
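One hedged workaround sketch, since only strings cross the Slim process boundary: instead of passing live objects, pass plain values (for example -D system properties on the SlimServer command line, which FitNesse's COMMAND_PATTERN wiki variable controls) and rebuild the driver settings inside the test code. The property names test.browser and test.hub below are assumptions, not a FitNesse or GivWenZen API:

```java
// Sketch: rebuild driver settings from -D system properties inside the Slim
// process, instead of trying to pass live objects across the process boundary.
public class CapabilityConfig {

    static String browser() {
        return System.getProperty("test.browser", "internet explorer");
    }

    static String hubUrl() {
        return System.getProperty("test.hub", "http://localhost:4444/wd/hub");
    }

    public static void main(String[] args) {
        // With Selenium on the classpath, the test code would then do roughly:
        //   DesiredCapabilities caps = new DesiredCapabilities();
        //   caps.setBrowserName(browser());
        //   WebDriver driver = new RemoteWebDriver(new URL(hubUrl()), caps);
        System.out.println(browser() + " @ " + hubUrl());
    }
}
```

This sidesteps both the process boundary and the thread-spawning problem, because each test rebuilds what it needs from immutable strings rather than relying on a ThreadLocal set in another thread or JVM.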