How to generate a basic JUnit/Selenium Report? - java

All,
I am using JUnit/Selenium (Java). I have over 400 test cases (separate Java class files) distributed across different packages. I need to generate a basic test-run report that tells me whether each test passed or failed and how much time it took.
TestNG is not an option, since I am using TestWatcher to make calls to a bug-tracking tool's API.
Any suggestions?
Thanks.

You can use the RunListener class, which listens to test events. You can then write a custom listener that creates the report file. Unfortunately, you will probably need to register such a listener for each of your packages.
The JUnit documentation for RunListener provides more details.
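As an illustration, here is a minimal sketch of such a listener (JUnit 4 API; the class name and output format are my own placeholders). It assumes tests run sequentially, since a single startTime field is shared:

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Minimal sketch of a custom report listener (JUnit 4).
// Assumes sequential test execution, since startTime is a shared field.
public class ReportListener extends RunListener {
    private long startTime;
    private boolean failed;

    @Override
    public void testStarted(Description description) {
        startTime = System.currentTimeMillis();
        failed = false;
    }

    @Override
    public void testFailure(Failure failure) {
        failed = true;
    }

    @Override
    public void testFinished(Description description) {
        long elapsedMs = System.currentTimeMillis() - startTime;
        // Replace System.out with a file writer to build a real report file.
        System.out.println(description.getDisplayName()
                + (failed ? " FAILED" : " PASSED") + " in " + elapsedMs + " ms");
    }

    // Usage when running tests programmatically (test class names are placeholders):
    //   JUnitCore core = new JUnitCore();
    //   core.addListener(new ReportListener());
    //   core.run(MyFirstTest.class, MySecondTest.class);
}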

Related

TestNG/Surefire: How to generate an XML report after each test?

We have a large number of complex integration tests that run for several hours.
How do I receive TestNG XML reports while the tests are running, not after the run?
You can build a TestNG listener that extends org.testng.TestListenerAdapter and overrides org.testng.TestListenerAdapter#onFinish, in which you can build the logic to push the results of a <test> tag to a data source of your own after its run. You can also make it more real-time by implementing the listener interface org.testng.IInvokedMethodListener: within org.testng.IInvokedMethodListener#afterInvocation, check whether a method is a test method and, if so, start recording its results to a data source of your choice.
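A rough sketch of the second approach (pushResult is a hypothetical stand-in for whatever data source you record to):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

// Sketch: records each test method's result as soon as it finishes.
public class LiveResultListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult result) {
        // nothing to do before the method runs
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (method.isTestMethod()) {
            pushResult(result.getName(),
                    result.getStatus() == ITestResult.SUCCESS,
                    result.getEndMillis() - result.getStartMillis());
        }
    }

    // Hypothetical hook: replace with a write to your own data source.
    private void pushResult(String name, boolean passed, long elapsedMs) {
        System.out.printf("%s %s (%d ms)%n", name, passed ? "PASSED" : "FAILED", elapsedMs);
    }
}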

JUnit report generation for individual testcase

I am building a framework that internally uses JUnit and REST Assured. This framework will have four @Test methods for the CRUD operations. Whenever the user wants to perform an operation, he will call only that particular test method. But at the end of each operation (say GET or DELETE or any other), it should generate a report.
I tried the surefire-report plugin. From what I have read, it generates a report only when we build the whole project (running all the @Test methods).
Is there any mechanism that fulfills my requirement of generating a report for an individual run as well?
Execution will look like this: the final output will be a jar with individual CRUD facilities:
API.execute(GET, end_point_name);
API.execute(POST, end_point_name, data);
The GET and POST test methods are called for the above calls, respectively. A report should be generated for both test cases in a normal run as a Java application.
There are three solutions to your problem:
1. Write your own logging statements and do proper logging of the events. You can log at DEBUG, INFO, etc. levels for better understanding and more control.
2. ExtentReports is another way to go: http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/ gives a detailed walkthrough of using it.
3. You can also create a separate testng.xml file. Maintaining a separate suite file this way will, with the help of Surefire, produce separate reports.

Extent Report Monitoring

Just a question to anyone here who is using ExtentReports as a listener.
Is it possible to use ExtentReports to generate the HTML report while the tests are being executed, when it is used as a listener?
For example, instead of the report being generated when the tests are finished, the report is generated after the first test, and so on? I want to use ExtentReports to monitor the progress of my tests as well and show the results.
I've learnt it's possible when using it as a logger, as you can do a flush after each test. However, is it possible while using Extent as a listener?
Thanks in advance.
Kind regards,
Colin.
Yes, it is possible. Since ExtentReports is open source, one could edit the function that formulates the structure of the final report. But since ExtentReports uses different functionalities internally, just modifying endTest would cause a cascading effect on everything else.
Another way is to save the data before flush is called, and to call flush every time; one could replace the existing file over and over.
My best bet is to leave it as it is and get other listener jars to do the work for you.
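For what it's worth, a sketch of the flush-per-test idea inside a TestNG listener (this assumes ExtentReports 5.x; the report file name is a placeholder):

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Sketch (assuming ExtentReports 5.x): calling flush() after every test
// rewrites the HTML file mid-run, so the report can be opened to monitor
// progress while the suite is still executing.
public class LiveExtentListener extends TestListenerAdapter {
    private final ExtentReports extent = new ExtentReports();

    public LiveExtentListener() {
        extent.attachReporter(new ExtentSparkReporter("live-report.html"));
    }

    @Override
    public void onTestSuccess(ITestResult result) {
        extent.createTest(result.getName()).pass("passed");
        extent.flush(); // regenerate the report now, not only at suite end
    }

    @Override
    public void onTestFailure(ITestResult result) {
        extent.createTest(result.getName()).fail(result.getThrowable());
        extent.flush();
    }
}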

Modularizing JMeter tests?

We are looking to migrate our testing framework over to JMeter. We have 50+ test cases, each of them with repeating actions, such as logging in and logging out. How can I modularize my approach?
What I'm looking for specifically is an "Add test item from file" so that I could, for example, add the login code.
We also have things like connection IDs that need to be passed on every request. Is there any way JMeter can automatically replace all occurrences with a JMeter variable? At the moment the proxy recorder records the actual connection string, and we have to manually replace it with ${connectionID}. Is there a better way?
This works fine for me.
Make a new thread group at the bottom of the test plan and put a Simple Controller in it. Inside the Simple Controller, put the code you want to repeat. I use two Simple Controllers, but one is actually a DB test-case suite. While keeping whatever is inside the thread group enabled, make sure to set the thread group itself to disabled, or else it will execute again on its own.
Now, in any particular test case, add User Parameters and a Module Controller. The Module Controller can point to the Simple Controller section(s) you made before. Give the Simple Controller a set of ${variable} placeholders, then override them inside the particular test you are running by setting the values in the User Parameters. Thus you get different variables and tests with the same suite.
I put a Simple Controller inside the Simple Controller to hold lengthy DB tests. This ends up as:
Thread Group > Simple Controller > Simple Controller > JDBC Request. All are renamed.
You can select different ones in the Module Controller inside the tests. I have about six right now, but this setup gets repeated dozens of times.
This is all with stock JMeter 2.3. If you are in an environment where you can't install plugins, this will work fine; I've never tried them.
HTH
As far as automatically replacing the connection IDs, there is not, to my knowledge, a way to do that via the GUI. However, the test scripts are simple XML files and so it would be very easy to write a sed or awk script to do that replacement for you.
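For example, a Java equivalent of that sed/awk idea (the recorded value "ABC123" and the file name are placeholders for whatever the proxy recorder captured):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: a .jmx test plan is plain XML, so the recorded connection ID can
// be swapped for a JMeter variable with a simple text replacement.
public class JmxRewrite {
    public static void main(String[] args) throws IOException {
        Path plan = Path.of("MyTestPlan1.jmx");          // placeholder file name
        String xml = Files.readString(plan, StandardCharsets.UTF_8);
        xml = xml.replace("ABC123", "${connectionID}");  // placeholder recorded value
        Files.writeString(plan, xml, StandardCharsets.UTF_8);
    }
}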
As far as the "add test file from here" part, in 2.6 (not sure about other versions, not used them) there is a logic controller called "Include Controller" that can load test snippets.There is also the ability to save snippets of test code called "test fragments" to their own .jmx files.
If you start a new test plan, right click on test plan then add -> test fragment -> test fragment this will add the container, then you can add your other requests in and use this chunk inside the aforementioned Include element.
If you are able to use the latest version, or if the prior versions support this, this may be a simpler option than writing your own plugin.
By using the JMeter elements below, we can modularize the test scripts:
Test Fragment
Module Controller
Parameterized Controller
Include Controller
Please check this link for more details and examples:
http://www.testautomationguru.com/jmeter-modularizing-test-scripts/
I know 2 options for you:
Module Controller
Parameterized Controller
What I'm looking for specifically is a "Add test item from file" so that I could for example, add the login code.
Sounds like you might want to encapsulate some of that repeated logic in your own custom Samplers or Config Elements. There is a guide to writing plugins to JMeter on the project page.
This is the approach that we have taken on my current team to have JMeter simulate requests in a custom RPC format.
One option is to run the JMeter scripts in non-GUI mode. You can specify the JMeter test scripts to run and put a batch of them into a .bat file, like:
@echo off
jmeter -n -t MyTestPlan1.jmx
jmeter -n -t MyTestPlan2.jmx
Another way, as @Matt said, is to write a plugin to get what you need.

Generating Unit Tests Automatically

I have a web tool which when queried returns generated Java classes based upon the arguments in the URL.
The classes we retrieve from the webserver change daily and we need to ensure that they still can process known inputs.
Note these classes do not test the webserver; they run locally and transform XML into a custom format. I am not testing the webserver.
These classes must then be placed in a specific package structure, compiled, run against a known set of input data, and the results compared against known output data.
I would like to do this automatically each night to make sure that the generated classes are correct.
What is the best way to achieve this?
Specifically, what's the best way to:
retrieve the code from a webserver and place it in a file
compile the code and then call it
I'm sure a mix of JUnit and Ant will be able to achieve this, but is there a standard solution or approach for this?
First up, to answer your question: no, I do not think there is a standard approach for this. This sounds like quite an unusual situation ;-)
Given that, what I would do is write all your JUnit tests against a class named GeneratedCode; then, once you download the code, rename its class to GeneratedCode, compile, and run your unit tests.
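A sketch of the download-and-compile step (the URL, paths, and query string are placeholders; the compiler API requires running on a JDK rather than a bare JRE):

import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Sketch: fetch the generated source, save it under the expected package
// path, then compile it with the JDK's built-in compiler.
public class FetchAndCompile {
    public static void main(String[] args) throws IOException {
        Path source = Path.of("src/com/example/generated/GeneratedCode.java"); // placeholder path
        Files.createDirectories(source.getParent());
        try (InputStream in = URI.create("http://example.com/generate?args=foo").toURL().openStream()) {
            Files.copy(in, source, StandardCopyOption.REPLACE_EXISTING);
        }
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // null on a JRE
        int status = compiler.run(null, null, null, source.toString());
        System.out.println(status == 0 ? "compiled" : "compile failed");
        // A JUnit run (e.g. via JUnitCore) against the compiled class would follow here.
    }
}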
You have the same goal as continuous integration ;-)
Maybe a bit of overkill for this simple task, but this is the standard way to fetch something, compile it, and test it regularly.
E.g. you could try hudson.
You should create a "mock" interface for your web service that (a) behaves the same way and (b) returns a known answer.
You should then do some additional integration testing against the live web service, where a person looks at the results and decides whether they worked.
Can you only test the generated classes after they have been published on the webservice? Do you have no way to test during or just after generation?
One idea, if the generated code isn't too complex, is to load it via the GroovyClassLoader and run your tests against it. See this page for examples.
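A minimal sketch of that idea (assuming the Groovy library is on the classpath; the source file name is a placeholder):

import groovy.lang.GroovyClassLoader;
import java.io.File;

// Sketch: GroovyClassLoader compiles Java-compatible source at runtime,
// so the downloaded class can be loaded without a separate build step.
public class GroovyLoadDemo {
    public static void main(String[] args) throws Exception {
        try (GroovyClassLoader loader = new GroovyClassLoader()) {
            Class<?> generated = loader.parseClass(new File("GeneratedCode.java")); // placeholder
            Object instance = generated.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
            // Tests would exercise 'instance' reflectively, or via a shared interface.
        }
    }
}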
