I am making a framework that internally uses JUnit and REST Assured. The framework will have four @Test methods, one per CRUD operation. Whenever the user wants to perform an operation, he will call only that particular test method. But at the end of each operation (say GET or DELETE or any other), it should generate a report.
I tried using the surefire-report plugin. As far as I have read, it generates a report only when the whole project is built (running all the test methods).
Is there any mechanism that fulfills my requirement of generating a report for an individual run as well?
Execution will be like this: the final output will be a jar with individual CRUD facilities.

API.execute(GET, end_point_name);
API.execute(POST, end_point_name, data);

The get and post test methods are called for the above calls, respectively. A report should be generated for both test cases in a normal run as a Java application.
There are three solutions to your problem:
You can write your own logger statements and do proper logging of events. You can log at DEBUG, INFO, etc. levels for better understanding and more control.
ExtentReports is another way to go. The following link provides a detailed walkthrough of using it:
http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You can also create a separate testng.xml file. Maintaining a separate suite file like this will internally make sure, with the help of Surefire, that a separate report is created.
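For the first option, the do-it-yourself report can be very small. Here is a minimal plain-JDK sketch (all class and method names are illustrative, not from any library) that writes one HTML report per operation, which you would call at the end of each @Test method:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Minimal per-operation reporter: call writeReport() at the end of each
// @Test method (GET, POST, ...) so every individual run leaves behind a
// small HTML file. Class and method names are illustrative only.
public class SingleRunReporter {

    // Pure rendering step, kept separate so it is easy to test.
    static String render(String operation, String endpoint,
                         boolean passed, long millis) {
        return "<html><body><h1>" + operation + " " + endpoint + "</h1>"
                + "<p>Status: " + (passed ? "PASSED" : "FAILED") + "</p>"
                + "<p>Duration: " + millis + " ms</p></body></html>";
    }

    static Path writeReport(String operation, String endpoint,
                            boolean passed, long millis) throws IOException {
        Path out = Paths.get("report-" + operation.toLowerCase() + ".html");
        return Files.write(out, render(operation, endpoint, passed, millis)
                .getBytes("UTF-8"));
    }

    public static void main(String[] args) throws IOException {
        // e.g. invoked at the end of the GET test method
        System.out.println("Wrote " + writeReport("GET", "/users/42", true, 123));
    }
}
```

Because each test method writes its own file, the report exists even when only one operation was invoked, independently of any Maven build.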
Related
I have some BDD tests for my software, declared in Gherkin and run using Cucumber JVM. The Cucumber JVM tests could be run at any of several levels (layers) of my application: through the front-end (HTML, using Testcontainers), through the back-end (JSON over HTTP through the REST API, using Testcontainers), through the back-end in a test harness (Spring Boot Test using Java method calls) with a mock HTTP server, or (for some tests) through the service layer (Java method calls).
But of course I want to test all those layers of my application, to some extent. And that means I want some duplication of my BDD tests. I don't want to run all the BDD tests at all the levels. Nor do I want to test only through the front-end, because testing at lower levels makes it easier to debug test failures. At some levels I want to run only a few key tests, to show that the layers of the application are properly glued together.
If I naively implement some duplicate Cucumber JVM tests, Cucumber will complain about duplicate step definitions. How do I duplicate tests without having Cucumber be confused by duplicate step definitions?
This is a distinct problem from reusing step definitions: at different levels, the code for a step is very different. And it is distinct from testing variants of an application, where different build environments use different step definitions.
In order to do this, you would have to implement your step definitions at multiple levels. So for a step that should operate on the UI in one test but on the API in another, you'd need two step definitions.
If you group these step definitions into different files, you can then create different runners pointing to different "glue" classes (step definition files).
You can group the step definitions that can be shared among the different levels into one file that is used in all the runners.
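The idea can be pictured with a plain-Java sketch (hypothetical names, no Cucumber dependency shown): the same Gherkin step is backed by a different implementation class per layer, and in real Cucumber each implementation would live in its own glue package, selected by a separate runner. Here the selection is simulated with a simple factory:

```java
// Illustrative sketch: the same step ("the user logs in") is backed by a
// different implementation per layer. In real Cucumber, each class would
// sit in its own glue package and a separate runner would point at it;
// here the runner's choice is simulated by a factory method.
interface LoginStep {
    String login(String user);
}

class UiLoginStep implements LoginStep {           // front-end layer
    public String login(String user) { return "UI login for " + user; }
}

class ApiLoginStep implements LoginStep {          // REST layer
    public String login(String user) { return "API login for " + user; }
}

public class GlueSelection {
    static LoginStep forLayer(String layer) {
        return "ui".equals(layer) ? new UiLoginStep() : new ApiLoginStep();
    }

    public static void main(String[] args) {
        System.out.println(forLayer("ui").login("alice"));
        System.out.println(forLayer("api").login("alice"));
    }
}
```

Because Cucumber only scans the glue packages a runner names, the two implementations never collide, which is exactly how the duplicate-step-definition complaint is avoided.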
That said, I wonder whether you'd need to test the same thing (even if only a subset) at multiple levels of your application? Think about what the value of each test is, and how that would change what you are trying to validate.
For example:
If a method gives different output for different input, this can be tested in a unit test.
To test whether that result is displayed correctly, that might be a test at the UI or API level.
If there is additional logic in the UI for how it is shown, that might be a test at the UI level.
I am trying to read multiple rows from a CSV and execute them using one single test method. For instance, in Create User, I could create multiple users, one for each row, using a single test method.
Now the question is, how to configure ReportNG to display the status of multiple executions of the same method?
Have a look at ExtentReports or Allure reports. They are far better than ReportNG and offer a great deal of report customization.
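The CSV-driven part is independent of whichever report library you pick. A minimal plain-Java sketch (column layout and the pass/fail rule are made up for illustration) of running one create-user action per row, producing one status per row, which is what a report library would render as separate entries for the same test method:

```java
import java.util.ArrayList;
import java.util.List;

// Reads rows of a CSV and invokes the same action once per row,
// collecting one status per row. The two-column "user,password" layout
// and the pass/fail rule are stand-ins for a real create-user call.
public class CsvDrivenRun {

    static List<String> runPerRow(List<String> csvLines) {
        List<String> statuses = new ArrayList<>();
        for (String line : csvLines) {
            String[] cols = line.split(",");
            // stand-in for "create user": succeed if both columns present
            boolean ok = cols.length == 2 && !cols[0].isEmpty();
            statuses.add(cols[0] + ": " + (ok ? "PASS" : "FAIL"));
        }
        return statuses;
    }

    public static void main(String[] args) {
        List<String> rows = List.of("alice,secret", "bob,hunter2", ",missing");
        runPerRow(rows).forEach(System.out::println);
    }
}
```

With TestNG you would feed the rows in through a @DataProvider instead of a loop, so that each row shows up as its own invocation in the report.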
All,
I am using JUnit/Selenium (Java). I have over 400 test cases (separate java class files) distributed in different packages. I need to generate a basic test run report which would tell me if the test failed or passed and how much time it took.
TestNG is not an option since I am using TestWatcher to make calls to a bug-tracking tool's API.
Any suggestions?
Thanks.
You can use the RunListener class, which listens to test events. You can then prepare a custom listener and have it create the report file. Unfortunately, you will probably need to add such a listener for each of your packages.
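The bookkeeping behind such a listener can be plain Java. This sketch (class and method names are hypothetical) shows the aggregation you would drive from RunListener's real testStarted/testFailure/testFinished callbacks, ending with one pass/fail-plus-timing line per test:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Aggregates what a JUnit RunListener would feed it: start time, finish
// time, and failures per test, then renders a simple text report. Wire
// the three record* methods to testStarted/testFailure/testFinished in a
// RunListener subclass registered via JUnitCore.addListener(...).
public class RunReport {
    private final Map<String, Long> started = new LinkedHashMap<>();
    private final Map<String, String> results = new LinkedHashMap<>();

    public void recordStarted(String test, long nanos) { started.put(test, nanos); }
    public void recordFailure(String test) { results.put(test, "FAILED"); }
    public void recordFinished(String test, long nanos) {
        long ms = (nanos - started.get(test)) / 1_000_000;
        results.putIfAbsent(test, "PASSED");            // no failure recorded
        results.put(test, results.get(test) + " in " + ms + " ms");
    }

    public String render() {
        StringBuilder sb = new StringBuilder();
        results.forEach((t, r) -> sb.append(t).append(": ").append(r).append('\n'));
        return sb.toString();
    }

    public static void main(String[] args) {
        RunReport report = new RunReport();
        report.recordStarted("LoginTest.validUser", 0);
        report.recordFinished("LoginTest.validUser", 42_000_000);
        System.out.print(report.render());   // LoginTest.validUser: PASSED in 42 ms
    }
}
```

Writing the rendered string to a file at testRunFinished gives you the basic pass/fail-and-duration report across all 400 classes, without touching TestNG.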
The following link provides more details.
I'm working with a large project with 50,000 source files and about 5,000 tests spread over 2,000 classes. These are mostly JUnit 3 with a smattering of JUnit 4. These tests are run on a frequent (daily or weekly) basis from batch files to ensure the product is working and not suffering from regressions or other issues.
I want to enumerate through the source or class files and produce a list of classes and methods that represent the test cases in the project. I can compare this list to determine which tests are not being run by the batch files.
Is there any simple way to do this, such as built-in functionality in JUnit? Or is there functionality in JUnit I could drive programmatically, e.g. to scan a directory full of class files and figure out which are tests? I realise that I could write code to individually load each class and examine it for certain characteristics, but if something already exists I'd like to be able to use it.
The second part of the question: is there any commonly used way to annotate tests? If I could annotate tests I could potentially filter them and generate batch files on the fly that run them according to the filter criteria, e.g. run all tests which need a network connection, run all tests that use subsystem A, and so on.
Is there anything which would help me do this? I realise I could roll my own annotation with a comma separated list of values or something, but perhaps this sort of thing has been formalised and there are tools that work with it.
Tests are classes that extend TestCase (in the case of JUnit 3) or that contain methods annotated with @Test (in the case of JUnit 4). So it is not a problem to find them either using an IDE or programmatically: write a program that finds all *.class files recursively in your classes folder(s) and tests whether each class extends TestCase or has a method annotated with @Test.
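The first half of that scan is pure file walking. A sketch (using only the JDK; names are illustrative) that turns every *.class file under a classes directory into a fully qualified class name, which you would then load with a URLClassLoader and inspect for TestCase ancestry or @Test methods:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Walks a compiled-classes directory and converts each *.class file into
// a fully qualified class name. The follow-up step (not shown) is to
// load each name with a URLClassLoader and keep those that extend
// junit.framework.TestCase or declare an @Test-annotated method.
public class TestClassScanner {

    static List<String> classNames(Path classesDir) throws IOException {
        try (Stream<Path> files = Files.walk(classesDir)) {
            return files
                .filter(p -> p.toString().endsWith(".class"))
                .map(p -> classesDir.relativize(p).toString()
                        .replace(java.io.File.separatorChar, '.')
                        .replaceAll("\\.class$", ""))
                .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        // demo on a throwaway directory with one fake class file
        Path dir = Files.createTempDirectory("classes");
        Files.createDirectories(dir.resolve("com/example"));
        Files.createFile(dir.resolve("com/example/FooTest.class"));
        System.out.println(classNames(dir));   // [com.example.FooTest]
    }
}
```

Comparing this list with what the batch files invoke gives you the set of tests that are never run.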
We are looking to migrate our testing framework over to JMeter. We have 50+ test cases, each of them with repeating actions, like logging in and logging out for example. How can I modularize my approach?
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
We also have things like connection IDs that need to be passed on every request. Is there any way JMeter can AUTOMATICALLY replace all occurrences with a JMeter variable? At the moment the proxy recorder records the actual connection string, and we have to manually replace it with ${connectionID}. Is there a better way?
This works fine for me.
Make a new thread group at the bottom of the test plan and put a Simple Controller in it. Inside the Simple Controller, put the code you want to repeat. I use two Simple Controllers, but one is actually a DB test-case suite. While keeping everything inside the thread group enabled, make sure the thread group itself is disabled, or else it will also execute on its own.
Now, in any particular test case, add User Parameters and a Module Controller. The Module Controller can point to the Simple Controller section(s) you made before. Give the Simple Controller ${variables}, then override them inside the particular test you are running by setting the variables in the User Parameters. Thus you get different variables and tests with the same suite.
I put a Simple Controller inside the Simple Controller to hold lengthy DB tests. This ends up as:
Thread Group > Simple Controller > Simple Controller > JDBC Request. All are renamed.
You can select different ones in the Module Controller inside the tests. I have about six right now but this gets repeated dozens of times.
This is all with stock JMeter 2.3. If you are in an environment where you can't install plugins, this will work fine; I've never tried the plugins myself.
HTH
As far as automatically replacing the connection IDs goes, there is not, to my knowledge, a way to do that via the GUI. However, the test scripts are simple XML files, so it would be very easy to write a sed or awk script to do the replacement for you.
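To illustrate on a stand-in file (a real .jmx has more structure around the value, and the [A-Za-z0-9]* token pattern is an assumption you should adjust to your actual IDs):

```shell
# The recorded .jmx is plain XML, so a sed pass can swap every literal
# connection ID for the JMeter variable reference. The token pattern
# [A-Za-z0-9]* is an assumption -- adjust it to match your real IDs.
printf 'connectionID=ab12cd34&foo=1\nconnectionID=ff00ee11&bar=2\n' > plan.jmx
sed -i 's/connectionID=[A-Za-z0-9]*/connectionID=${connectionID}/g' plan.jmx
cat plan.jmx
# every line now reads connectionID=${connectionID}...
```

Run this once after each recording session and the plan is ready for a User Parameters or CSV Data Set Config element to supply ${connectionID}.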
As far as the "add test item from file" part goes: in 2.6 (not sure about other versions, as I haven't used them) there is a logic controller called "Include Controller" that can load test snippets. There is also the ability to save snippets of test code, called "Test Fragments", to their own .jmx files.
If you start a new test plan, right-click on Test Plan, then Add -> Test Fragment -> Test Fragment. This adds the container; you can then add your other requests into it and use this chunk inside the aforementioned Include Controller.
If you are able to use the latest version, or if the prior versions support this, this may be a simpler option than writing your own plugin.
By using the JMeter elements below, we can modularize the test scripts:
Test Fragment
Module Controller
Parameterized Controller
Include Controller
Please check this for more details & examples.
http://www.testautomationguru.com/jmeter-modularizing-test-scripts/
I know two options for you:
Module Controller
Parameterized Controller
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
Sounds like you might want to encapsulate some of that repeated logic in your own custom Samplers or Config Elements. There is a guide to writing plugins for JMeter on the project page.
This is the approach that we have taken on my current team for handling JMeter simulating requests of a custom RPC format.
One option is to run the JMeter scripts in non-GUI mode. You can specify the test scripts to run and put a batch of them into a .bat file, like:

@echo off
jmeter -n -t MyTestPlan1.jmx
jmeter -n -t MyTestPlan2.jmx
Another way, and here I agree with @matt, is to write a plugin to get what you need.