How to fail a TestNG test on a misspelled dataProvider? - java

TestNG binds data providers by name (a string). When I misspelled it, the run still reported success; just no tests were run. Is there any configuration option to fail fast on such an error?

Actually, when a test case cannot find its data provider (which is what happens with a misspelled data provider name), that test case is skipped, and therefore no test is run. What you need is a way to see skipped test cases.
You can print messages or throw exceptions whenever a test is skipped. Throwing exceptions is not generally recommended, though, because it may stop the build after the first skipped test and leave the rest of your suite untested.
Approach 1
You need to implement ITestListener, which provides the method
onTestSkipped(ITestResult testResult)
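For example, something along these lines (a minimal sketch; it extends TestListenerAdapter, which implements ITestListener, so that only the skipped-test hook has to be overridden, and the class name is just chosen to match the listener registered in the testng.xml snippet further down):
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class TestStatusReporter extends TestListenerAdapter {

    @Override
    public void onTestSkipped(ITestResult testResult) {
        // Log the skipped test; throwing here instead would fail fast,
        // with the caveat mentioned above about stopping the whole run.
        System.err.println("SKIPPED: " + testResult.getMethod().getMethodName());
    }
}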
Approach 2
TestNG also enables you to generate reports at the end of the test run. You need to implement the
IReporter interface
and code its generateReport() method.
In addition, for both of the above approaches you need to configure your implementation class as a listener in testng.xml, like the following.
<listeners>
    <listener class-name="com...test.reporter.TestStatusReporter" />
</listeners>
Next
Once you run
mvn install
and it is successful, you can view the test results, including skipped test details, at the following location in your project:
../target/surefire-reports/index.html
Hope this helps.

Related

Programmatically enable or disable Assertion in TestNG

Usually when we run test cases using TestNG, an assertion error stops further execution from that point. But sometimes it would be better if we could run the whole script. Manually blocking/disabling those assertions becomes a tedious process, so a way to programmatically enable/disable assertions, rather than doing it by hand, would be of great help.
Does TestNG support this? If not, can anyone help me please?
As Julien mentioned above, you are better off making a custom SoftAssert of your own. I could be wrong, but the standard SoftAssert that comes with TestNG didn't give me the behaviour I was after.
I suppose the most common reason your tests fail is a NoSuchElementException or TimeoutException. So in your waitForElement method you can catch these exceptions (or any exception, for that matter) and print a warning message on the console, or print nothing, or even take a screenshot: treat it as a warning rather than a show-stopper. Something like the below:
public boolean waitForElement(String elementName, int timeOut) {
    boolean elementPresent = false;
    try {
        // Wait for the element to become visible; isDisplayed() returns true if it did
        elementPresent = wait.until(ExpectedConditions.visibilityOfElementLocated(
                By.xpath(findXpath(elementName)))).isDisplayed();
    } catch (org.openqa.selenium.TimeoutException e1) {
        // Not a show-stopper: log it, mark the element as absent, and grab a screenshot
        e1.printStackTrace();
        elementPresent = false;
        takeScreenshot();
    }
    return elementPresent;
}
Hope that helps!
Using QAF validation you can fulfill your requirements: for Selenium it provides built-in verification methods, and if a verification fails the test case still continues executing.
As suggested, SoftAssert can be used, which does not halt execution even if an assertion fails.
Enabling/disabling assertions is also possible by flagging tests as enabled=false or enabled=true. This in turn runs all tests [and thereby assertions] except those marked enabled=false.
Example: the assertion below won't be executed, as the test is disabled.
@Test(enabled = false)
public void verifySearchReport() {
    soft.assertEquals("About*", "About123Soft", "FAILED ASSERT");
    soft.assertAll();
}
The assertion in this example will be executed, as the test is enabled. Further execution of tests won't be halted [even if the assertion fails] because SoftAssert is used.
@Test(enabled = true)
public void verifySearchReport() {
    soft.assertEquals("About*", "About123Soft", "FAILED ASSERT");
    soft.assertAll();
}
// Further @Test methods here

In my TestNG integration tests, can I use @Factory more than once (using Jenkins and Maven for my builds)?

What - Detailed Steps
My test calls a 3rd-party API and sends a request for a new transaction (let's say I need to do this for 5 tests, which were generated by @Factory). These tests end here with the status of 'Pending'.
The 3rd party API takes 5 minutes to process the data. I need to make a second call to the API after 5 minutes (for all my pending tests) to get the transaction ID for my request and then pass/fail the test.
I want to spin up another @Factory here to re-generate all the pending tests. These pending tests call the API again (with different inputs) to get the transaction ID and pass/fail the test based on this info.
How
I am trying to use @Factory to generate a bunch of tests dynamically and run them. After these tests run, I want to use @Factory again to generate a second batch of new tests and run them. The problem is, I have not had success calling @Factory a second time.
I am using Jenkins and Maven in my setup for generating builds and that is when I would want the tests to run.
Questions
Is step 3 possible?
Is there a better way to do this?
Thanks everyone!
Reading the extra comment / improved question, it does indeed sound like an integration test.
There are some neat integration-test libraries, like JBehave, Serenity, Cucumber, etc., which would probably be better for setting this up.
With TestNG, you could create 3 tests, where each test depends on the previous one. See the code sample below, from a TestNG dependency test:
package com.mkyong.testng.examples.dependency;

import org.testng.annotations.Test;

public class App {

    @Test
    public void method1() {
        System.out.println("This is method 1");
    }

    @Test(dependsOnMethods = { "method1" })
    public void method2() {
        System.out.println("This is method 2");
    }
}
Here the simplest dependency is shown. See the sample code for more complex cases, like groups etc., and for setting up two test classes, each with their own @Factory.
Solved! Responses to this question led me to the answer - thanks @Verhagen.
I added 2 tests in my testng.xml and have 2 factories set up in my code.
When a build is triggered,
@Factory 1 creates tests -->
@Factory 2 creates more tests -->
tests by @Factory 1 are executed -->
tests by @Factory 2 are executed
This solves my requirement of running a batch of tests (first batch) and then running a second batch of tests based on the outcome of the first batch.
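A rough sketch of that setup, with hypothetical class and transaction names (testng.xml would declare the two factory classes under two separate <test> tags, in the order the batches should run):
import org.testng.annotations.Factory;
import org.testng.annotations.Test;

// First batch: one test instance per transaction request to submit.
public class SubmitFactory {

    @Factory
    public Object[] createSubmitTests() {
        return new Object[] { new SubmitTest("txn-1"), new SubmitTest("txn-2") };
    }

    public static class SubmitTest {
        private final String transactionId;

        public SubmitTest(String transactionId) {
            this.transactionId = transactionId;
        }

        @Test
        public void submitTransaction() {
            // Call the 3rd-party API here; the transaction is left as 'Pending'.
            System.out.println("Submitted " + transactionId);
        }
    }
}
A second factory class (say, VerifyFactory) would be written the same way for the batch that re-checks the pending transactions, and listed under its own <test> tag after the first one.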

How to log the custom skipped test method details in report using TestNG?

I have created my own custom report, and it looks good when everything runs smoothly. After some code changes in my application, some test cases failed and some were skipped, but I could only log the failure details in the report, not the skipped test case details. I am tracking the report based on test methods. Assume I have 4 test methods in a class file, with 4 assertion points in each test method. When the second method fails, the remaining methods should be skipped, and that works as expected. But in the report I didn't find any details of the skipped test methods. Can someone help me resolve this? So far I haven't used any TestNG listeners to log the execution activity; I'm using my own report.
When you use your own custom reporter, you should use TestNG's reporting feature and its appropriate listener: IReporter.
From IReporter, you should be able to find all the information you need. For example: skipped test methods:
ISuite#getResults() -> ISuiteResult#getTestContext() -> ITestContext#getSkippedTests()
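A minimal sketch of walking that chain inside a custom reporter (registered as a listener like any other):
import java.util.List;

import org.testng.IReporter;
import org.testng.ISuite;
import org.testng.ISuiteResult;
import org.testng.ITestResult;
import org.testng.xml.XmlSuite;

public class SkippedMethodReporter implements IReporter {

    @Override
    public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) {
        for (ISuite suite : suites) {
            for (ISuiteResult suiteResult : suite.getResults().values()) {
                // getSkippedTests() returns an IResultMap holding every skipped method
                for (ITestResult skipped : suiteResult.getTestContext().getSkippedTests().getAllResults()) {
                    System.out.println("Skipped: " + skipped.getMethod().getMethodName());
                }
            }
        }
    }
}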

TestNG SkipException.isSkip() method

I am doing some queries in a data provider. If the queries do not return data I can use, rather than fail the test I would like to skip it. So I
throw new SkipException("Could not find adequate data"), but this is failing the test rather than skipping it.
Some research shows that SkipException has a method isSkip() which will skip if true and fail if false. I dumped it before throwing the exception and it showed true, but the test is still failing.
Am I doing something wrong, or is there a better way to skip? (Yes, I know you can set this on the @Test annotation, but I don't know how to do that once the test is already running.)
If you throw a SkipException from the data provider then yes, the test will fail; there it's treated just like any other exception. If you want to skip a test you can do the following:
Use @Test(enabled=false)
Implement IAnnotationTransformer and then call annotation.setEnabled(false).
This will be a listener, and you have to register it in your test, testng.xml, or Maven configuration.
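A sketch of the transformer approach; the system property used to decide which tests to disable is an invented example, TestNG only supplies the IAnnotationTransformer hook:
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

import org.testng.IAnnotationTransformer;
import org.testng.annotations.ITestAnnotation;

public class DisablingTransformer implements IAnnotationTransformer {

    @Override
    public void transform(ITestAnnotation annotation, Class testClass,
                          Constructor testConstructor, Method testMethod) {
        // Hypothetical switch: -Ddisable.tests=methodA,methodB
        String toDisable = System.getProperty("disable.tests", "");
        if (testMethod != null && toDisable.contains(testMethod.getName())) {
            annotation.setEnabled(false);   // the test (and its assertions) will not run
        }
    }
}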
You are not supposed to use SkipException in a data provider. Just return the result of the next query instead, as in the sketch below.
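For instance, a data provider along these lines, where both query helpers are hypothetical stand-ins for your real queries:
import org.testng.annotations.DataProvider;

public class TransactionData {

    @DataProvider(name = "transactions")
    public Object[][] transactions() {
        // Try the preferred query first; if it yields nothing usable,
        // fall back to the next query instead of throwing SkipException.
        Object[][] rows = runPrimaryQuery();
        if (rows.length == 0) {
            rows = runFallbackQuery();
        }
        return rows;
    }

    private Object[][] runPrimaryQuery() {
        return new Object[0][];   // placeholder for the real database query
    }

    private Object[][] runFallbackQuery() {
        return new Object[][] { { "fallback-row" } };   // placeholder fallback data
    }
}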

Determining which test cases covered a method

The current project I'm working on requires me to write a tool which runs functional tests on a web application, and outputs method coverage data, recording which test case traversed which method.
Details:
The web application under test will be a Java EE application running in a servlet container (e.g. Tomcat). The functional tests will be written in Selenium using JUnit. Some methods will be annotated so that they are instrumented prior to deployment into the test environment. Once the Selenium tests are executed, the execution of annotated methods will be recorded.
Problem: The big obstacle of this project is finding a way to relate the execution of a test case with the traversal of a method, especially since the tests and the application run on different JVMs, there is no way to transmit the name of the test case down to the application, and no way to use thread information to relate a test with code execution.
Proposed solution: My solution would consist of using the time of execution: I extend the JUnit framework to record the time each test case was executed, and I instrument the application so that it saves the time each method was traversed. I then try to use correlation to link the test case with method coverage.
Expected problems: This solution assumes that test cases are executed sequentially, and that a test case ends before the next one starts. Is this assumption reasonable with JUnit?
Question: Simply, can I have your input on the proposed solution, and perhaps suggestions on how to improve and make it more robust and functional on most Java EE applications? Or leads to already implemented solutions?
Thank you
Edit: To add more requirements, the tool should be able to work on any Java EE application and require the least amount of configuration or change in the application. While I know it isn't a realistic requirement, the tool should at least not require any huge modification of the application itself, like adding classes or lines of code.
Have you looked at existing coverage tools (Cobertura, Clover, Emma, ...)? I'm not sure whether one of them is able to link the coverage data to test cases, but at least with Cobertura, which is open-source, you might be able to do the following:
instrument the classes with cobertura
deploy the instrumented web app
start a test suite
after each test, invoke a URL on the web app which saves the coverage data to some file named after the test which has just been run, and resets the coverage data
after the test suite, generate a Cobertura report for every saved file. Each report will tell which code has been run by the test.
If you need a merged report, I guess it shouldn't be too hard to generate one from the set of saved files, using the Cobertura API.
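For step 4, a JUnit rule can fire that URL after every test. In the sketch below, the /coverage/dump endpoint is a hypothetical servlet you would add to the instrumented webapp (it would use the Cobertura API to save and reset the project data); only the TestWatcher part is standard JUnit 4:
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

import org.junit.rules.TestWatcher;
import org.junit.runner.Description;

public class CoverageDumpRule extends TestWatcher {

    // Assumed endpoint on the instrumented webapp that saves coverage data
    // under the given test name and then resets it.
    private static final String DUMP_URL = "http://localhost:8080/coverage/dump?test=";

    @Override
    protected void finished(Description description) {
        try {
            String testName = URLEncoder.encode(
                    description.getClassName() + "." + description.getMethodName(), "UTF-8");
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(DUMP_URL + testName).openConnection();
            conn.getResponseCode();   // fire the request; the body is not needed
            conn.disconnect();
        } catch (Exception e) {
            System.err.println("Could not dump coverage data: " + e);
        }
    }
}
Each test class would then declare @Rule public CoverageDumpRule coverage = new CoverageDumpRule(); so the dump happens between tests.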
Your proposed solution seems reasonable, except for the part that relates the test and the request by timing. I've tried to do this sort of thing before, and it works... most of the time. Unless you write your JUnit code very carefully, you'll have lots of issues because of clock differences between the two machines, or, if you've only got one machine, just from matching one timestamp against another.
A better solution would be to implement a Tomcat Valve which you can insert into the lifecycle in the server.xml for your webapp. Valves have the advantage that you define them in the server.xml, so you're not touching the webapp at all.
You will need to implement invoke(). The best place to start is probably with AccessLogValve. This is the implementation in AccessLogValve:
/**
 * Log a message summarizing the specified request and response, according
 * to the format specified by the <code>pattern</code> property.
 *
 * @param request Request being processed
 * @param response Response being processed
 *
 * @exception IOException if an input/output error has occurred
 * @exception ServletException if a servlet error has occurred
 */
public void invoke(Request request, Response response) throws IOException,
        ServletException {

    if (started && getEnabled()) {
        // Pass this request on to the next valve in our pipeline
        long t1 = System.currentTimeMillis();
        getNext().invoke(request, response);
        long t2 = System.currentTimeMillis();
        long time = t2 - t1;
        if (logElements == null || condition != null
                && null != request.getRequest().getAttribute(condition)) {
            return;
        }
        Date date = getDate();
        StringBuffer result = new StringBuffer(128);
        for (int i = 0; i < logElements.length; i++) {
            logElements[i].addElement(result, date, request, response, time);
        }
        log(result.toString());
    } else
        getNext().invoke(request, response);
}
All this does is log the fact that you've accessed it.
You would implement a new Valve. For your requests you pass a unique id as a URL parameter, which is used to identify the test that you're running. Your valve would do all of the heavy lifting before and after the invoke(). You could remove the unique parameter before calling getNext().invoke() if needed.
To measure the coverage, you could use a coverage tool as suggested by JB Nizet, based on the unique id that you're passing over.
So, from JUnit, if your original call was:
@Test public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14");
}
You would change this to be:
@Test public void testSomething() {
    selenium.open("http://localhost/foo.jsp?bar=14&testId=testSomething");
}
Then you'd pick up the parameter testId in your valve.
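A bare-bones version of such a valve might look like the sketch below (assuming the same Tomcat Valve API as the AccessLogValve snippet above; what you do with the id, here just logging it and stashing it as a request attribute, is up to the coverage side). It would be registered as a <Valve> element in server.xml, under the Engine or Host:
import java.io.IOException;

import javax.servlet.ServletException;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class TestIdValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        String testId = request.getParameter("testId");
        if (testId != null) {
            // Heavy lifting before the request: remember which test this request belongs to
            request.setAttribute("coverage.testId", testId);
            System.out.println("Handling request for test " + testId);
        }
        // Pass the request on to the next valve in the pipeline
        getNext().invoke(request, response);
        // Heavy lifting after the request (e.g. flushing per-test coverage data) could go here
    }
}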
