My TestNG tests are weird... sometimes they finish, but sometimes they don't, even though there is nothing random.
Most of the tests run, then nothing happens, except that the TestNG result bar stays green and flickers every few seconds (as if something were happening in the background).
It's not always at the same test method.
The problem occurs roughly every third run.
By the way, there is no while loop or anything similar in my test classes or in the classes under test.
I would be really thankful if someone has a solution for my problem.
Thanks, Thomas
My tests look like this:
public class Test extends BaseTest {

    private static final boolean ENABLED = true;

    @Test(groups = { GROUP_UNIT }, enabled = ENABLED)
    public void valueOfInt() {
        Assert.assertEquals(StringUtils.valueOf(5), "5");
    }
}
the corresponding method:
public static String valueOf(final Object object) {
    return object != null ? object.toString() : "null";
}
With current versions of Eclipse and TestNG the problem does not occur anymore.
Desired: A "Test of the Tests"
Imagine there is some additional "sanity check" that could be performed after a test class completes all its tests, indicating whether the test run as a whole was successful. This final sanity check could use some aggregated information about the tests. As a crude example: the number of calls to a shared method is counted, and if the count is not above some minimum expected threshold after all tests complete, then it is clear that something is wrong even if all the individual tests pass.
What I have described is probably in some "grey area" of best practices: while it does violate the doctrine of atomic unit tests, the final sanity check is not actually testing the class under test; rather, it is checking that test execution as a whole was a success: a "test of the tests," so to speak. It is additional logic about the tests themselves.
This Solution Seems Bad
One way to accomplish this "test of tests" is to place the sanity check in a static @AfterClass method. If the check fails, one can call Assert.fail(), which actually works (surprisingly, since I presumed it could only be invoked from within methods annotated with @Test, which by nature must be instance methods, not static):
public class MyTest {
    [...]

    @AfterClass
    public static void testSufficientCount() {
        if (MyTest.counterVariable < MIN_COUNT) {
            Assert.fail("This fail call actually works. Wow.");
        }
    }
}
There are many reasons why this solution is a kludge:
Assume there are N tests in total (where a "test" is an instance method annotated with @Test). When Assert.fail() is not called in @AfterClass, N tests in total are reported by the IDE, as expected. However, when Assert.fail() is called in @AfterClass, N + 1 tests in total are reported by the IDE (the extra one being the static @AfterClass method). The additional static method was not annotated with @Test, so it should not be counted as a test. Further, the total number of tests should not be a function of whether some tests pass or fail.
The @AfterClass method is static by definition, so only static members are accessible. This presents a problem for my specific situation; I will leave this statement without elaboration because the explanation is out of the scope of the question, but basically it would be most desirable if only instance members were used.
[Other reasons too...]
Is There a Better Way?
Is there a way to implement this "test of tests" that is considered good and common practice? Does JUnit 4 support adding some kind of logic to ensure a group of unit tests within a class executed properly (failing in some way if they did not)? Is there a name for this thing I have called a "test of tests"?
About variable number of tests
I don't think there is a valid solution ...
About static fields
I tried to follow your example and, if I understood correctly, with a combination of Verifier, TestRule, and ClassRule it is possible to use only the instance fields of the test class.
Here is my code, which you can use as a starting point:
import static org.junit.Assert.assertTrue;

import org.junit.ClassRule;
import org.junit.Rule;
import org.junit.rules.TestRule;
import org.junit.rules.Verifier;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class ATest {

    public int countVariable = 0;

    private static class MyVerifier extends Verifier {
        public int count = 0;

        @Override
        protected void verify() throws Throwable {
            assertTrue(count < 1);    // deliberately fails, to show the extra failed "test"
            // assertTrue(count >= 1); // use this condition instead for the real check
        }
    }

    @ClassRule
    public static MyVerifier v = new MyVerifier();

    private class MyRule implements TestRule {
        ATest a;
        MyVerifier v;

        public MyRule(ATest a, MyVerifier v) {
            this.a = a;
            this.v = v;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    base.evaluate();                       // run the test first
                    MyRule.this.v.count = a.countVariable; // then copy the instance counter into the verifier
                }
            };
        }
    }

    @Rule
    public MyRule rule = new MyRule(this, v);

    @org.junit.Test
    public void testSomeMethod() {
        countVariable++; // modifies instance counter
        assertTrue(true);
    }

    @org.junit.Test
    public void testSomeMethod2() {
        countVariable++; // modifies instance counter
        assertTrue(true);
    }
}
Having said that, a "test of tests" isn't considered a common and good practice because, as you know, it violates at least two of the five principles of the FIRST rule (see Clean Code by Uncle Bob Martin): tests must be
F: Fast
I: Independent
R: Repeatable
S: Self-validating
T: Timely (linked to the TDD practice)
I am somewhat new to test driven development and am trying to determine whether I have a problem with my approach to unit testing.
I have several unit tests (let's call them group A) that test whether my method's return is as expected.
I also have a unit test "B" whose passing condition is that an IllegalArgumentException is thrown when my method is given invalid input.
The unit tests in group A fail when the method is given invalid input since the method needs valid input to return correctly.
If I catch the exception, unit test "B" will fail, but if I don't catch the exception, the tests in group A will fail.
Is it OK to have unit tests fail in this way, or can I modify the code in some way so that all tests always pass?
Am I doing TDD all wrong?
Here's a notion of my code for more clarity:
public class Example {
    public static String method(String inputString, int value) {
        if (badInput) {
            throw new IllegalArgumentException();
        }
        // do some things to inputString
        return modifiedInputString;
    }
}
public class ExampleTests {
    @Test
    public void methodReturnsIllegalArgumentExceptionForBadInput() {
        assertThrows(IllegalArgumentException.class, () -> { Example.method(badInput, badValue); });
    }

    // "Group A" tests only pass with valid input. Bad input causes IllegalArgumentException
    @Test
    public void methodReturnsExpectedType() {
        assertTrue(actual == expected);
    }

    @Test
    public void methodReturnsExpectedValue() {
        assertTrue(actual == expected);
    }

    @Test
    public void methodReturnsExpectedStringFormat() {
        assertTrue(actual == expected);
    }
}
As you've correctly noted in the comments, the problem is a test setup that is too broad and tests that are too implicit.
Tests are much more readable when they are self-contained at the business level. Common test setup should be focused on setting up technical details, but all business setup should live within each test itself.
An example for your case (a conceptual example; it must be reworked to match the details of your implementation):
@Test
public void givenWhatever_whenDoingSomething_methodReturnsExpectedType() {
    given(someInputs);
    Type result = executeSut(); // rename executeSut to the actual function under test
    assertTrue(result == expected);
}
This way, just by looking at a test, a reader knows which scenario is being tested. Common test setup and helper functions such as given abstract away the technical details, so the reader does not get distracted at first inspection. If they are interested, the details are always available, but they are usually less important and can therefore stay hidden.
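Applied to the Example class from the question, a rough sketch of this style could look like the following; the helper names and the specific "bad input" values are illustrative placeholders, not an existing API:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;

import org.junit.Test;

public class ExampleScenarioTests {

    // Technical defaults live in helpers; each test states its own business setup.
    private String input;
    private int value;

    private void givenValidInput() {
        input = "hello"; // placeholder for whatever counts as valid input
        value = 3;
    }

    private String executeSut() {
        return Example.method(input, value);
    }

    @Test
    public void givenValidInput_methodReturnsModifiedString() {
        givenValidInput();
        String result = executeSut();
        assertEquals("expected result here", result); // placeholder expectation
    }

    @Test
    public void givenBadInput_methodThrowsIllegalArgumentException() {
        input = null; // assumed "bad input" for illustration
        value = -1;
        assertThrows(IllegalArgumentException.class, this::executeSut);
    }
}

Because the exception scenario sets up its own bad input, the group A tests never see invalid input, so all tests can pass at the same time.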
We are using TestNG for our integration tests. We recently converted from JUnit, and we used to use an org.junit.rules.TestRule to automatically retry each test up to 3 times before counting it as failed. This eliminated a lot of false positives whenever a test case failed only occasionally.
In our conversion to TestNG, this retry rule was overlooked, and now we have a bunch of test cases "failing" that are really false positives.
I found a few articles on how to automatically re-run TestNG test cases:
https://jepombar.wordpress.com/2015/02/16/testng-adding-a-retryanalyzer-to-all-you-tests/
http://mylearnings.net/11.html
The gist of it is that you can specify a retryAnalyzer for each individual @Test-annotated test case. I set up my own analyzer and applied it to a test case, and that works. But applying a retry analyzer to every single test case manually is not a good solution when we want every test case in the suite to do this. The article on jepombar.wordpress.com shows a way to apply it to all tests in a class, but for whatever reason it doesn't seem to work as written.
I made the following IAnnotationTransformer:
public class RetryListener implements IAnnotationTransformer {
    @Override
    public void transform(ITestAnnotation annotation, Class testClass, Constructor testConstructor, Method testMethod) {
        IRetryAnalyzer retry = annotation.getRetryAnalyzer();
        if (retry == null) {
            annotation.setRetryAnalyzer(RetryRule.class); // my TestNG RetryAnalyzer implementation
        }
    }
}
And I apply it to a class like this:
@Listeners(RetryListener.class)
public class FooTest extends SeleniumMockedTest {
    ...
}
This doesn't work; the code in RetryListener.transform() never executes, so RetryRule is never added to any of the test cases for the class.
How can I get this to work?
Or, better yet, my real question: How can I get all the test cases in our integration test suite to automatically try 3 times before failing counts as actually failing?
I cannot get it to work using @Listeners either, but I can get it to work using the command line, e.g.:
java org.testng.TestNG -listener MyTransformer testng.xml
The fact that it does not work with @Listeners may be a bug. You can report the issue here.
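Listeners can also be declared in the suite file instead of on the command line; a minimal testng.xml sketch (the package and class names here are placeholders for your own):

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="Integration suite">
    <listeners>
        <!-- fully qualified name of your IAnnotationTransformer implementation -->
        <listener class-name="com.example.RetryListener" />
    </listeners>
    <test name="All tests">
        <packages>
            <package name="com.example.tests" />
        </packages>
    </test>
</suite>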
Try this. Make two classes:
import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {

    int counter = 0;

    @Override
    public boolean retry(ITestResult result) {
        // read the per-test retry count from the custom annotation
        RetryCountIfFailed annotation = result.getMethod().getConstructorOrMethod().getMethod()
                .getAnnotation(RetryCountIfFailed.class);
        // remove the skipped result so retries do not pile up in the report
        result.getTestContext().getSkippedTests().removeResult(result.getMethod());
        if ((annotation != null) && (counter < annotation.value())) {
            counter++;
            return true;
        }
        return false;
    }
}
And write the other class to look like this:
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface RetryCountIfFailed {
    int value() default 0;
}
And now pass the value of the retry count (retryCountGlobal) to every test you want to retry, like this:
@Test
@listeners.RetryCountIfFailed(retryCountGlobal)
public void verifyRetryOnTestMethod() {
}
P.S.: Remember that if retryCountGlobal = 3, the test can run up to 4 times: if the first run fails, it will be retried up to 3 more times.
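If the analyzer is not applied globally through an annotation transformer, it presumably also has to be attached to the test via the retryAnalyzer attribute; a rough sketch of the combined usage (the test class name is illustrative, and the annotation value must be a compile-time constant, so a static final int is used here):

import org.testng.annotations.Test;

public class FooRetryTest {

    // compile-time constant so it can be used as an annotation value
    private static final int RETRY_COUNT = 3;

    @Test(retryAnalyzer = RetryAnalyzer.class)
    @RetryCountIfFailed(RETRY_COUNT)
    public void verifyRetryOnTestMethod() {
        // test body; if this fails, RetryAnalyzer retries it up to RETRY_COUNT more times
    }
}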
I have a void method and I want to test it. How do I do that?
Here's the method:
public void updateCustomerTagCount() {
    List<String> fileList = ImportTagJob.fetchData();
    try {
        for (String tag : fileList) {
            Long tagNo = Long.parseLong(tag);
            Customer customer = DatabaseInterface.getCustomer(tagNo);
            customer.incrementNoOfTimesRecycled();
            DatabaseInterface.UpdateCustomer(customer);
        }
    } catch (IllegalArgumentException ex) {
        ex.printStackTrace();
    }
}
When a method returns void, you can't test the method's output. Instead, you must test the expected consequences of that method. For example:
public class Echo {
    String x;

    public static void main(String[] args) {
        testVoidMethod();
    }

    private static void testVoidMethod() {
        Echo e = new Echo();
        // e.x == null here
        e.voidMethod("xyz");
        System.out.println("xyz".equals(e.x)); // true expected
    }

    private void voidMethod(String s) {
        x = s;
    }
}
It might not always be true, but the basic concept of a unit test is to check that a function works as expected and handles errors properly when unexpected parameters or situations are given.
So unit tests are usually written against functions that take input parameters and return some output, and that is what we assert on.
Code like yours, however, has an external dependency (a database call), and that is something you cannot exercise unless you write integration-test code or use a real database connection, which is not recommended for a unit test.
So what you need is a unit testing framework with object mocking, such as Mockito/PowerMock. With such a framework you can simulate the database operations and any other calls that happen outside of the code under test.
As for how to test a void function: as you mentioned, there is nothing you can do with an Assert to compare output, since the method returns nothing.
But there is still a way to unit test it.
Just call updateCustomerTagCount() to make sure the method runs; even just calling the function raises your unit test coverage.
Of course, in your case you need to mock
ImportTagJob.fetchData();
and
DatabaseInterface.getCustomer(tagNo);
Let the mocked
ImportTagJob.fetchData();
return an empty list as well as a non-empty list and check whether your code behaves as you expect; add exception handling if necessary. In your code there are two paths depending on whether fileList is empty or not, and you need to test both.
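As a rough sketch of what that could look like with Mockito's static mocking (this requires the mockito-inline artifact, Mockito 3.4+; the enclosing class name CustomerTagUpdater is a placeholder for whatever class actually holds updateCustomerTagCount()):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.mockStatic;
import static org.mockito.Mockito.verify;

import java.util.Collections;
import org.junit.Test;
import org.mockito.MockedStatic;

public class UpdateCustomerTagCountTest {

    @Test
    public void updatesCustomerForEachImportedTag() {
        Customer customer = mock(Customer.class);

        try (MockedStatic<ImportTagJob> importJob = mockStatic(ImportTagJob.class);
             MockedStatic<DatabaseInterface> db = mockStatic(DatabaseInterface.class)) {

            // simulate one tag coming back from the import job
            importJob.when(ImportTagJob::fetchData)
                     .thenReturn(Collections.singletonList("42"));
            db.when(() -> DatabaseInterface.getCustomer(42L)).thenReturn(customer);

            new CustomerTagUpdater().updateCustomerTagCount();

            // assert the observable consequences of the void method
            verify(customer).incrementNoOfTimesRecycled();
            db.verify(() -> DatabaseInterface.UpdateCustomer(customer));
        }
    }
}

The empty-list case would then be a second test where the mocked fetchData returns an empty list and you check that DatabaseInterface is never touched.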
Also, mock those objects and let them throw an IllegalArgumentException where you expect it to be thrown, and write a unit test that checks the function throws the exception. In JUnit it would look like this:
@Test(expected = IllegalArgumentException.class)
public void updateCustomerTagCountTest() {
    // mock the objects
    xxxxx.updateCustomerTagCount();
}
That way, you can ensure that the function throws the exception properly when it has to.
I am in a project now that is using JUnit as a framework to test engineering data (ref: last question Creating a Java reporting project -- would like to use JUnit and Ant but not sure how)
Since a picture (er, a code block) is worth a thousand words, let me paste my loop:
JUnitCore junit = new JUnitCore();
RunListener listener = new RunListener();
junit.addListener(listener);
[...]
for (AbstractFault fault : faultLog) {
    theFault = fault;
    Result result = junit.run(GearAndBrakeFaultLogReports.class);
    for (Failure f : result.getFailures()) {
        output.println(log.getName());
        output.println(fault.getName());
        output.println(HelperFunctions.splitCamelCase(f.getDescription()
                .getMethodName()));
        output.println(f.getMessage());
        output.println();
    }
}
As you can see, I am running the "junit.run" many times (for each fault in the log).
However, if any one of my tests fires a fail() I don't want to repeat that test. In other words, if there are 50 faults in a log, and in fault #1 a test fails, I don't want to attempt that test in the 49 future faults I am looping through.
Here is an example test:
private static boolean LeftMLGDownTooLongFound = false;

@Test
public final void testLeftMLGDownTooLong() {
    if (!LeftMLGDownTooLongFound
            && handleLDGReportFaults(false)
            && theFault.getName().equals(FaultNames.LDG_LG_DWN_TIME.toString())) {
        assertNotGreater(getPCandRecNum(), 8f, ldgFault.getLeftStrutUpTime());
        LeftMLGDownTooLongFound = true;
    }
}
Currently, to do this, I am making a static boolean that is set to false at first but switches to true after the first assertion. I'm not sure this works, but that's the idea. I don't want to do this for every single test (there are hundreds of them).
Is there any public function, method, or flag in the JUnitCore or Runner classes so that a test never runs again after it has called fail()?
Ah, figured it out. To do this, I need a way to record the failed tests and then, in the @Before method, back out of the test. Here is what I added:
@Rule
public TestName name = new TestName();

@Before
public void testNonFailedOnly() {
    Assume.assumeTrue(!failedTests.contains(name.getMethodName()));
}

private static List<String> failedTests = new ArrayList<String>(256);

@Rule
public TestWatcher watchman = new TestWatcher() {
    /* (non-Javadoc)
     * @see org.junit.rules.TestWatcher#failed(java.lang.Throwable, org.junit.runner.Description)
     */
    @Override
    protected void failed(Throwable e, Description description) {
        super.failed(e, description);
        failedTests.add(description.getMethodName());
    }
};
It does add about 1.5 seconds of overhead, which sucks... but better than the alternative!!
Anyone have ideas on how to optimize this? I believe the overhead is from the TestWatcher; I don't think it's from the ArrayList.
I used a Java base class that every test extends.
In the @Before of this class I set a boolean hasPassed = false;
At the end of every @Test method I set this variable: hasPassed = true;
In the @AfterMethod you can then check the variable.
If your test causes an exception, it won't reach the end and the variable will still be false.
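A minimal sketch of that base-class idea, shown here with JUnit 4's @Before/@After (the @AfterMethod mentioned above would be the TestNG equivalent); the class and field names are only illustrative:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public abstract class FlagBaseTest {

    protected boolean hasPassed;

    @Before
    public void resetPassedFlag() {
        hasPassed = false; // reset before every test method
    }

    @After
    public void checkPassedFlag() {
        if (!hasPassed) {
            // the test never reached its last line, so it threw or was skipped;
            // record the method name here if later runs should skip it
        }
    }
}

// in a separate file: every test sets the flag as its final statement
class SomeTest extends FlagBaseTest {

    @Test
    public void someScenario() {
        // ... assertions ...
        hasPassed = true; // only reached if nothing above threw
    }
}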