I'm trying to understand where I need to look to implement the following scenario:
What I have:
1. I have a variety of testing scenarios implemented in Java with Selenium 2 WebDriver.
2. I run my test scenarios (suites) with TestNG (the classes and methods are listed, in order, in an XML file) and build with Maven.
3. Everything runs on my local machine and from my local machine.
What I need:
1. I need the ability to call my test suites from an external test framework. As a result of this call, test execution should start.
2. I need to send parameters to my tests, so the call should contain the parameters to be passed to my code.
3. I need to send "a message" back to the external framework when the tests are done (with the result: success or failed).
I'm thinking of implementing this collaboration between the two frameworks using REST.
My questions:
1. What should the architecture look like? Should I run a server so that my service is available 24/7 to the external framework? How do I write such a service?
2. Where can I read about this?
3. Are there any examples of the classes/methods that should be implemented to send and receive parameters?
Thank you in advance for any ideas.
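For reference, one common shape for this (only one of several possible architectures) is a small always-on HTTP service that wraps the TestNG invocation. A minimal sketch, assuming Spring Boot and TestNG's programmatic API; the endpoint path, parameter names and the system-property convention are made up for illustration:

```java
// Hypothetical Spring Boot controller that triggers a TestNG suite on demand.
// Endpoint, parameter names and property keys are illustrative only.
import java.util.Collections;

import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.testng.TestNG;

@RestController
public class TestTriggerController {

    @PostMapping("/run-suite")
    public String runSuite(@RequestParam String suiteFile,
                           @RequestParam(required = false) String environment) {
        // Pass parameters to the tests, e.g. via system properties
        // (the tests read them back with System.getProperty).
        if (environment != null) {
            System.setProperty("test.environment", environment);
        }

        TestNG testng = new TestNG();
        testng.setTestSuites(Collections.singletonList(suiteFile));
        testng.run();

        // TestNG exposes the overall status after the run.
        return testng.hasFailure() ? "FAILED" : "SUCCESS";
    }
}
```

For long-running suites you would normally return immediately and POST the final result (success or failed) back to a callback URL supplied by the external framework, instead of blocking the request as this sketch does.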
I created a project using Cucumber to perform e2e tests of various APIs I consume. I would like to know if I can run these tests through endpoints to further automate the application that was created.
That way I could deploy this app and would not need to keep running it locally.
You can do that if you create a REST API with a GET method that executes the test runner when called.
How to run cucumber feature file from java code not from JUnit Runner
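A minimal sketch of that idea, assuming Spring Boot and a Cucumber version that exposes the programmatic CLI entry point io.cucumber.core.cli.Main; the endpoint, glue package and feature path are assumptions:

```java
// Hypothetical GET endpoint that kicks off a Cucumber run when called.
import io.cucumber.core.cli.Main;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CucumberRunController {

    @GetMapping("/run-e2e")
    public String runE2e() {
        // Equivalent to running the Cucumber CLI:
        // --glue <step definition package> <path to feature files>
        byte exitStatus = Main.run(
                new String[] {
                        "--glue", "com.example.steps",   // assumed glue package
                        "classpath:features"             // assumed feature location
                },
                Thread.currentThread().getContextClassLoader());

        return exitStatus == 0 ? "PASSED" : "FAILED";
    }
}
```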
But I don't recommend doing that, since what you are trying to achieve looks similar to a pipeline definition.
If you're in touch with the developers of these APIs, you can speak with them about including your test cases in their pipeline, since they probably have one in place.
If, for some reason, you still want to trigger your tests remotely and set everything up on your own, I would recommend starting with Jenkins. You can host it on any machine, run your tests from there, and access your Jenkins instance from any other machine:
https://www.softwaretestinghelp.com/cucumber-jenkins-tutorial/
If your code is hosted on a platform like GitHub or GitLab, each of them already has its own way of defining pipelines, and you can use it to run your tests. Read about GitLab CI/CD pipelines or GitHub Actions.
API - REST Assured
UI - Selenium
Integration is needed from the API tests to the UI tests, using Selenium and Maven dependencies in Java.
How do I call the UI framework from the API framework?
From the comments section, here is my understanding of your requirement:
You want to use REST Assured to call an API and create data, and then use Selenium scripts to verify whether that data is visible on the UI.
Now, I don't know the exact functionality you want to test, but here are my two cents.
First of all, what you are really checking with this test is whether the front-end code and the back-end code honour the contract between them, in other words whether they are integrated correctly.
For example: from the browser you create a record, for which the browser may make API calls and then display the returned data. One way this test case can fail is a mismatch between the data types in the response object and the data types the front-end code expects.
Other failure points are at the API level (the service may be down, or the code logic itself may be wrong) and at the UI level (the front-end code cannot display the record because of a UI code logic issue or a browser-specific issue).
Now, if we follow your approach, then in Cucumber I would write my steps like this:
GivenICallCreateRecordAPICall
WhenICheckTheDataFromUI
ThenItShouldMatchCorrectly
The first step will make the REST Assured call to create the record in the same environment and save the data to be verified against the UI.
Second, open the UI using Selenium, then extract and save the data to be verified.
And last, assert all the data and throw errors otherwise.
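For clarity, this is roughly what those three steps could look like (though, as explained next, this design has problems); the endpoint URL, page URL, locator and field names are all assumptions:

```java
// Rough shape of the three steps described above; URLs, locators and field
// names are assumptions, and shared state is kept in the step class itself.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import io.restassured.RestAssured;
import io.restassured.response.Response;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.Assert.assertEquals;

public class CreateRecordSteps {

    private String createdRecordName;
    private String nameShownOnUi;

    @Given("I call the create record API")
    public void iCallCreateRecordApi() {
        Response response = RestAssured.given()
                .contentType("application/json")
                .body("{\"name\": \"test-record\"}")
                .post("https://example.test/api/records");    // assumed endpoint
        response.then().statusCode(201);
        createdRecordName = response.jsonPath().getString("name");
    }

    @When("I check the data from the UI")
    public void iCheckTheDataFromUi() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.test/records");        // assumed page
            nameShownOnUi = driver.findElement(
                    By.cssSelector(".record-name")).getText();  // assumed locator
        } finally {
            driver.quit();
        }
    }

    @Then("it should match correctly")
    public void itShouldMatchCorrectly() {
        assertEquals(createdRecordName, nameShownOnUi);
    }
}
```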
But this approach is flawed.
All test layers should ideally be independent; coupling them like this will cause flakiness.
The objective here is to check whether the integration is correct, so write a contract test, or verify the response schema and data types in your API test cases.
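For the schema and data-type check, REST Assured has a JSON schema validator module (json-schema-validator). A minimal sketch, assuming a record-schema.json file on the test classpath and a made-up endpoint:

```java
// Verifies the create-record response against a JSON schema, so the contract
// is checked at the API level without involving the UI at all.
import static io.restassured.RestAssured.given;
import static io.restassured.module.jsv.JsonSchemaValidator.matchesJsonSchemaInClasspath;

import org.junit.Test;

public class RecordContractTest {

    @Test
    public void createRecordResponseMatchesSchema() {
        given()
            .contentType("application/json")
            .body("{\"name\": \"test-record\"}")
        .when()
            .post("https://example.test/api/records")    // assumed endpoint
        .then()
            .statusCode(201)
            .body(matchesJsonSchemaInClasspath("record-schema.json"));
    }
}
```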
Still, if you want more detail on how to implement this, let me know and I will provide the implementation details.
I have some BDD tests for my software, declared in Gherkin and run using Cucumber JVM. The Cucumber JVM tests could be run at any of several levels (layers) of my application: through the front-end (HTML, using Testcontainers), through the back-end (JSON over HTTP through the REST API, using Testcontainers), through the back-end in a test harness (Spring Boot Test using Java method calls) with a mock HTTP server, or (for some tests) through the service layer (Java method calls).
But of course I want to test all those layers of my application, to some extent, and that means I want some duplication of my BDD tests. I don't want to run all the BDD tests at all the levels. Nor do I want to test only through the front-end, because lower-level tests are easier to debug when they fail. At some levels I want to do only a few key tests, to show that the layers of the application are properly glued together.
If I naively implement some duplicate Cucumber JVM tests, Cucumber will complain about duplicate step definitions. How do I duplicate tests without having Cucumber be confused by duplicate step definitions?
This is a distinct problem from reusing step definitions: at different levels, the code for a step is very different. And it is distinct from testing variants of an application, where different build environments use different step definitions.
In order to do this, you would have to implement your step definitions at multiple levels. So, for a step that should operate on the UI in one test but on the API in another, you'd need two step definitions.
If you group these step definitions into different files, you can then create different runners pointing to different "glue" classes (step definition files).
You can group the step definitions that can be shared among the different levels into one file that is used by all the runners.
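A sketch of what that looks like with JUnit 4 runners, assuming Cucumber JVM's io.cucumber.junit runner and made-up package names (common, ui, api) for the grouped step definitions; each runner lives in its own file:

```java
// UiLevelRunner.java - runs the features with the UI-level step implementations.
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",
        glue = {"com.example.steps.common", "com.example.steps.ui"})
public class UiLevelRunner {
}

// ApiLevelRunner.java - same features, but wired to the API-level glue.
// (same imports as above)

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features",
        glue = {"com.example.steps.common", "com.example.steps.api"})
public class ApiLevelRunner {
}
```

You can additionally use tags to decide which scenarios each runner picks up, so that only a few key scenarios run at the more expensive levels.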
That said, I wonder whether you really need to test the same thing (even if only a subset) at multiple levels of your application. Think about what the value of each test is, and how that changes what you are trying to validate.
For example:
If a method gives different output for different input, this can be covered by a unit test.
To test whether that result is displayed correctly, that might be a test at the UI or API level.
If there is additional logic in the UI about how this is shown, that might be a test at the UI level.
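As a trivial illustration of the first point, a check like this belongs in a plain unit test rather than in a UI- or API-level scenario (the class under test here is made up):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Made-up class under test: pure logic, no UI or API involved.
class PriceFormatter {
    String format(int cents) {
        return cents == 0 ? "free" : String.format("%d.%02d EUR", cents / 100, cents % 100);
    }
}

public class PriceFormatterTest {

    // Different input, different output: a plain unit test covers this,
    // no UI- or API-level scenario needed.
    @Test
    public void formatsZeroAndNonZeroAmountsDifferently() {
        PriceFormatter formatter = new PriceFormatter();
        assertEquals("free", formatter.format(0));
        assertEquals("9.99 EUR", formatter.format(999));
    }
}
```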
I am making a framework that internally uses JUnit and REST Assured. This framework will have four @Test methods for the CRUD operations. Whenever the user wants to perform an operation, they will call only that particular test method. But at the end of each operation (say GET, DELETE, or any other), it should generate a report.
I tried using the surefire-report plugin. From what I have read, it will generate a report only when we build the project (running all the test methods).
Is there any mechanism that fulfils my requirement of generating a report for an individual run as well?
Execution will look like this: the final output will be a jar with individual CRUD facilities.
API.execute(GET, end_point_name);
API.execute(POST, end_point_name, data);
The GET and POST test methods are called for the above calls, respectively. A report should be generated for both test cases even when run as a normal Java application.
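To make that execution model concrete, one way such an API.execute facade could trigger a single @Test method is JUnit 4's programmatic runner (JUnitCore with Request.method). A sketch under that assumption; the test class, method-name convention and system property are made up:

```java
import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Request;
import org.junit.runner.Result;

// Minimal stand-in for the CRUD test class described in the question.
class CrudTests {
    @Test
    public void getTest() {
        // REST Assured GET call against System.getProperty("endpoint.name") would go here.
    }

    @Test
    public void postTest() {
        // REST Assured POST call would go here.
    }
}

public class Api {

    // Runs exactly one @Test method and reports whether it passed,
    // without building the whole project.
    public static boolean execute(String operation, String endpointName) {
        System.setProperty("endpoint.name", endpointName);     // read inside the test
        String testMethod = operation.toLowerCase() + "Test";   // GET -> getTest, POST -> postTest
        Result result = new JUnitCore().run(Request.method(CrudTests.class, testMethod));
        return result.wasSuccessful();
    }
}
```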
There are 3 solutions to your problem:
1. Write your own logger statements and do proper logging of events. You can log at DEBUG, INFO, etc. levels for better understanding and more control.
2. ExtentReports is another way to go (see the sketch after these options): http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/ - the above link provides a detailed walkthrough of using it.
3. You can also create a separate testng.xml file. Like maintaining a separate suite file, this will internally make sure, with the help of Surefire, that separate reports are created.
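A minimal sketch of the ExtentReports option (2), assuming the com.aventstack.extentreports 5.x dependency; the report path and helper class are arbitrary:

```java
// Builds one HTML report per run, regardless of how many @Test methods executed.
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

public class ReportManager {

    private static ExtentReports extent;

    // Lazily create a single report instance for the current run.
    public static synchronized ExtentReports getInstance() {
        if (extent == null) {
            extent = new ExtentReports();
            extent.attachReporter(new ExtentSparkReporter("target/api-report.html"));
        }
        return extent;
    }

    // Called from each CRUD test method to log its own result.
    public static void logResult(String testName, boolean passed, String details) {
        ExtentTest test = getInstance().createTest(testName);
        if (passed) {
            test.pass(details);
        } else {
            test.fail(details);
        }
        getInstance().flush();   // write the report even for a single-test run
    }
}
```

Each CRUD test method (or an @After hook) would call ReportManager.logResult(...) with its own outcome, so a single-test run still produces an HTML report without a full Maven build.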
I have developed a micro-framework in Java which does the following:
The full list of test cases is kept in an MS Access database, along with the test data for the application under test.
I have created multiple classes, each having multiple methods within them. Each of these methods represents a test case.
My framework reads the list of test cases marked for execution from Access and dynamically decides which class/method to execute using reflection.
The framework has methods for sendKeys, click, and all other generic actions. It takes care of reporting in Excel.
All this works fine without any issues.
Now I am looking to run the test cases across multiple machines using Selenium Grid. I have read on many sites that we need a framework like TestNG to do this with Grid, but I hope it is possible to integrate Grid into my own framework. I have read many articles and e-books, but none of them explain the coding logic for this.
I will be using only Windows 7 with IE; I don't need cross-browser/OS testing.
I can make any changes to the framework to accomplish this, so please feel free to suggest them.
In the Access DB mentioned above, I will have details about each test case and the machine on which it should run. Currently, users can select the test cases they want to run locally in the Access DB and run them.
How will my methods (test scripts) know on which machine they are going to be executed? What kind of code changes should I make apart from using RemoteWebDriver and capabilities?
Please let me know if you need any more information about my code or have any questions. Also, kindly correct me if any of my understanding of Grid is wrong.
How will my methods know which machine they are going to be executed on? You just need to know one machine with the Grid setup: the IP of your hub machine. The hub decides which of the registered nodes to send the request to, depending on the capabilities you specify while instantiating the driver. When you initialize the RemoteWebDriver instance, you need to specify the host (the IP of your hub). I would suggest keeping the hub IP as a configurable property.
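A minimal sketch of that initialization, assuming Selenium 2/3-style DesiredCapabilities (since the question targets IE on Windows 7) and a made-up system property for the hub address:

```java
// Connects to the Grid hub; the hub forwards the session to a matching IE node.
import java.net.URL;

import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class DriverFactory {

    public static WebDriver createRemoteIeDriver() throws Exception {
        // e.g. pass -Dgrid.hub.url=http://192.168.1.10:4444/wd/hub on the command line
        String hubUrl = System.getProperty("grid.hub.url", "http://localhost:4444/wd/hub");

        DesiredCapabilities capabilities = DesiredCapabilities.internetExplorer();
        capabilities.setPlatform(Platform.WINDOWS);

        return new RemoteWebDriver(new URL(hubUrl), capabilities);
    }
}
```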
The real use of Grid is parallel remote execution, so how you make your tests run in parallel is something you need to decide. You can use a framework like TestNG, which provides parallelism with simple settings; you might need to restructure your tests to accommodate TestNG. The other option is to implement the multithreading yourself to trigger your tests in parallel. Based on my experience I would recommend TestNG, since it provides many more capabilities apart from parallelism. You need to take care that each driver instance is specific to its thread and not a global variable.
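To keep each driver instance specific to its thread, a common pattern is a ThreadLocal holder; a minimal sketch (the class name is arbitrary):

```java
// One WebDriver per thread, so parallel tests never share a browser session.
import org.openqa.selenium.WebDriver;

public class ThreadLocalDriver {

    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<>();

    public static void set(WebDriver driver) {
        DRIVER.set(driver);
    }

    public static WebDriver get() {
        return DRIVER.get();
    }

    public static void quit() {
        WebDriver driver = DRIVER.get();
        if (driver != null) {
            driver.quit();
            DRIVER.remove();
        }
    }
}
```

Each test thread would call ThreadLocalDriver.set(...) with its own RemoteWebDriver at the start of a test and ThreadLocalDriver.quit() at the end.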
All tests can hit the hub and the hub can take care of the rest.
It is important to remember that Grid does not execute your tests in parallel for you; it is the job of your framework to divide tests across multiple threads and collate the results. It is also key to realise that when running on Grid, the test script still executes on the machine the test was started on. Grid provides a REST API to open and interact with browsers, so your test uses this rather than opening a browser locally. Any non-Selenium code is executed in the context of the original machine, not the machine where the browser has been opened (e.g. file system access does not happen where the browser opened). Any use of static classes and globals in your framework may also cause issues, as each test will access these concurrently. Your code must be thread-safe.
Hopefully this hasn't put you off using Grid. It is an awesome tool and really easy to use. It is the parallel execution which is hard, and frameworks such as TestNG provide this out of the box.
Good luck with your framework.