Is it possible in JUnit to add a brief description of the test for the future reader (e.g. what's being tested, some short explanation, expected result, ...)? I mean something like in ScalaTest, where I can write:
test("Testing if true holds") {
assert(true)
}
The ideal approach would be using some annotation, e.g.
@Test
@TestDescription("Testing if true holds")
public void testTrue() {
assert(true);
}
Therefore, if I run such annotated tests using Maven (or some similar tool), I could have similar output to the one I have in SBT when using ScalaTest:
- Testing if entity gets saved correctly
- Testing if saving fails when field Name is not specified
- ...
Currently I can either use terribly long method names or write javadoc comments, which are not present in the build output.
Thank you.
In JUnit 5, there is the @DisplayName annotation:
@DisplayName is used to declare a custom display name for the
annotated test class or test method. Display names are typically used
for test reporting in IDEs and build tools and may contain spaces,
special characters, and even emoji.
Example:
@Test
@DisplayName("Test if true holds")
public void checkTrue() {
assertEquals(true, true);
}
TestNG does it like this, which to me is the neatest solution:
@Test(description="My funky test")
public void testFunk() {
...
}
See http://testng.org/javadocs/org/testng/annotations/Test.html for more information.
Not exactly what you are looking for, but you can provide a description with any of the assert methods.
Something like:
@Test
public void testTrue() {
assertTrue("Testing if true holds", true);
}
I prefer to follow a standard format when testing in JUnit. The name of the test would be
test[method name]_[condition]_[outcome]
For example:
@Test
public void testCreateObject_nullField_errorMessage(){}
@Test
public void testCreateObject_validObject_objectCreated(){}
I think this approach is helpful when doing TDD, because you can just start writing all the test names, so you know what you need to test / develop.
Still I would welcome a test description functionality from JUnit.
And this is certainly better than other tests I have seen in the past like:
@Test public void testCreateObject1(){}
@Test public void testCreateObject2(){}
@Test public void testCreateObject3(){}
or
@Test public void testCreateObjectWithNullFirstNameAndSecondNameTooLong(){}
You can name the test method after the test:
public void testThatOnePlusOneEqualsTwo() {
assertEquals(2, 1 + 1);
}
This will show up in Eclipse, Surefire, and most other runners.
The detailed solution: you could add a logger to your test to log the results to a file (see log4j, for example). Then you can read the results in the file and also record successful statements, which assert statements cannot.
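For example, a minimal sketch of the logging approach, assuming log4j 1.x is on the classpath and already configured (e.g. via log4j.properties) to write to a file:
import org.apache.log4j.Logger;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ListAddTest {

    private static final Logger LOG = Logger.getLogger(ListAddTest.class);

    @Test
    public void testAdd() {
        LOG.info("Testing that List#size() is 1 after adding an Object");
        java.util.List<Object> list = new java.util.LinkedList<>();
        list.add(new Object());
        assertEquals(1, list.size());
        LOG.info("testAdd passed"); // only reached if the assert above holds
    }
}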
The simple solution: you can add a Javadoc description to every test method; this will show up when you generate the Javadoc.
Also, every assert statement can take a message that will be printed whenever the assert fails.
/**
* Tests that List#size() increases after adding an Object to a List.
*/
@Test
public void testAdd(){
List<Object> list = new LinkedList<>();
list.add(new Object());
assertEquals("size should be 1 after adding an Object", 1, list.size());
}
Do NOT use System.out.println("your message"); because you don't know how the tests will be executed and if the environment does not provide a console, your messages will not be displayed.
Related
I have a test that loops to run the same check with different inputs. The problem is that when one assert fails, the test stops, and it is all counted as just one failed test.
Here is the code:
@Test
public void test1() throws JsonProcessingException {
this.bookingsTest(Product.ONE);
}
@Test
public void test2() throws JsonProcessingException {
this.bookingsTest(Product.TWO);
}
public <B extends Sale> void bookingsTest(Product product) {
List<Booking> bookings = this.prodConnector.getBookings(product, 100);
bookings.stream().map(Booking::getBookingId).forEach((bookingId) -> {
this.bookingTest(bookingId);
});
}
public <B extends Sale> void bookingTest(String bookingId) {
...
// Do some assert:
Assert.assertEquals("XXX", "XXX");
}
In that case, the methods test1 and test2 execute as two different tests, but inside them I loop to check some stuff on every item returned by the collection. What I want is for the check on each item to be treated as a separate test, so that if one fails, the others continue executing and I can see which ones failed and how many failed out of the total.
What you described is parameterized testing. There are plenty of frameworks that can help you; they just have different simplicity-to-power ratios, so pick the one that fits your needs.
JUnit's native way is the Parameterized runner, but it's very verbose. Other JUnit plugins that you may find useful are zohhak or junit-dataprovider. If you are not forced to use JUnit and/or plain Java, you can try Spock or TestNG.
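For illustration, here is a minimal sketch of the Parameterized runner; the booking IDs are hard-coded placeholders standing in for whatever prodConnector.getBookings(...) returns:
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BookingsTest {

    @Parameters(name = "{index}: bookingId={0}")
    public static Collection<Object[]> data() {
        // Placeholder data; in the question's setup this would be built
        // from the IDs returned by the connector.
        return Arrays.asList(new Object[][] {
            { "booking-1" }, { "booking-2" }, { "booking-3" }
        });
    }

    private final String bookingId;

    public BookingsTest(String bookingId) {
        this.bookingId = bookingId;
    }

    @Test
    public void bookingIsValid() {
        // Each bookingId runs as its own test, so one failure does not
        // stop the remaining ones from executing.
        // ... assertions on bookingId go here ...
    }
}
Each parameter set shows up as a separate test result, which gives the per-item pass/fail counts asked for in the question.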
I have a test class written using JUnit 4 that has multiple test methods. Each of the tests prints some important informational output. When I look in IntelliJ's log, however, some of the last output from the previous test is recorded as being part of the first.
Why is this? Is there a way of correcting this behaviour?
I'm just writing to System.out, and calling flush before returning from each test actually made the problem worse.
It is simply a bug. I work around it with a Thread#sleep in @After.
https://youtrack.jetbrains.com/issue/IDEA-66683
Test case:
public class LogBug {
@Test
public void see() {
System.out.println("foo");
throw new AssertionError("bar");
}
@Test
public void see2() {
System.out.println("foo2");
throw new AssertionError("bar2");
}
}
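A minimal sketch of the sleep workaround mentioned above (the duration is a guess and may need tuning for your machine):
import org.junit.After;

public class LogBugWorkaround {

    @After
    public void waitForConsole() throws InterruptedException {
        System.out.flush();
        // Give the IDE's test listener a moment to attribute the buffered
        // output to the current test before the next one starts.
        Thread.sleep(100);
    }
}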
I'm in the process of migrating a test framework from JUnit to TestNG. This framework is used to perform large end-to-end integration tests with Selenium that take several minutes to run and consist of several hundred steps across dozens of browser pages.
DISCLAIMER: I understand that this makes unit testing idealists very uneasy, but this sort of testing is required at most large service oriented companies and using unit testing tools to manage these integration tests is currently the most widespread solution. It wasn't my decision. It's what I've been asked to work on and I'm attempting to make the best of it.
At any rate, these tests fail very frequently (surprise) and making them easy to debug is of high importance. For this reason we like to detect test failures before they're reported, append some information about the failure, and then allow JUnit to fail with this extra information. For instance, without this information a failure may look like:
java.lang.<'SomeObscureException'>: <'Some obscure message'> at <'StackTrace'>
But with the added information it will look like:
java.lang.AssertionError:
Reproduction Seed: <'Random number used to generate test case'>
Country: <'Country for which test was set to run'>
Language: <'Localized language used by test'>
Step: <'Test step where the exception occurred'>
Exception Message: <'Message explaining probable cause of failure'>
Associated Exception Type: <'SomeObscureException'>
Associated Exception Message: <'Some obscure message'>
Associated Exception StackTrace: <'StackTrace'>
Exception StackTrace: <'StackTrace where we appended this information'>
It's important to note that we add this information before the test actually fails. Because our reporting tool is based entirely on the exceptions thrown by JUnit this ensures that the information we need is present in those exceptions. Ideally I'd like to add this information to an HTML or XML document using a reporter class after the test fails but before teardown is performed and then modify our reporting tool to pick up this extra information and append it to our e-mail reports. However, this has been a hard sell at our sprint planning meetings and I have not been allotted any time to work on it (running endless regressions for the developers is given higher priority than working on the test framework itself. Such is the life of the modern SDET). I also believe strongly in balance and refuse to cut into other parts of my life to get this done outside of tracked time.
What we're currently doing is this:
public class SomeTests extends TestBase {
@Test
public void someTest() {
// Test code
}
// More tests
}
public abstract class TestBase {
@Rule
public MyWatcher watcher = new MyWatcher();
// More rules and variables
@Before
public final void setup() {
// Read config, generate test data, create Selenium WebDriver, etc.
// Send references to all test objects to MyWatcher
}
}
public class MyWatcher extends TestWatcher {
// Test object references
@Override
public void failed(Throwable throwable, Description description) {
StringBuilder sb = new StringBuilder();
// Append custom test information to sb.
String exceptionSummary = sb.toString();
Assert.fail(exceptionSummary);
}
@Override
public void finished(Description description) {
// Shut down Selenium WebDriver, kill proxy server, etc.
}
// Miscellaneous teardown and logging methods
}
JUnit starts.
SomeTests inherits from the TestBase class. TestBase instantiates our own TestWatcher (MyWatcher) via the @Rule annotation.
Test setup is run in TestBase class.
References to test objects are sent to MyWatcher.
JUnit begins someTest() method.
someTest fails at some point.
JUnit calls overridden failed() method in MyWatcher.
failed() method appends custom test information to new message using references passed by TestBase.
failed() method calls JUnit's Assert.fail() method with the customized message.
JUnit throws a java.lang.AssertionError for this new failure with the customized message. This is the exception that actually gets recorded in the test results.
JUnit calls overridden finished() method.
finished() method performs test teardown.
Our reporting tool picks up the summarized errors thrown by JUnit, and includes them in the e-mails we receive. This makes life easier than debugging the original exceptions would be without any of the extra information added by MyWatcher after the original failure.
I'd now like to implement a similar mechanism using TestNG. I first tried adding an IInvokedMethodListener in a @Listeners annotation to our TestBase class as a way of replacing the TestWatcher that we were using in JUnit. Unfortunately the methods in this listener were getting called after every @BeforeMethod and @AfterMethod call as well as for the actual tests. This was causing quite a mess when I called Assert.fail from inside the IInvokedMethodListener, so I opted to scrap this approach and insert the code directly into an @AfterMethod call in our TestBase class.
Unfortunately TestNG does not appear to handle the 'failing twice' approach that we were using in JUnit. When I call Assert.fail in the @AfterMethod of a test that has already failed it gets reported as an additional failure. It seems like we're going to have to come up with another way of doing this until I can get authorization to write a proper test reporter that includes the information we need for debugging.
In the meantime, we still need to dress up the exceptions that get thrown by TestNG so that the debugging information will appear in our e-mail reports. One idea I have for doing this is to wrap every single test in a try/catch block. If the test fails (an exception gets thrown), then we can catch that exception, dress it up in a summary exception with the debugging information added to that exception's message, and call Assert.fail with our new summarized exception. That way TestNG only ever sees that one exception and should only report one failure. This feels like a kludge on top of a kludge though, and I can't help but feel that there's a better way of doing this.
Does anybody know of a better method for modifying what gets reported by TestNG? Is there some kind of trick I can use for replacing the original exception with my own using ITestContext or ITestResult? Can I dive in somewhere and remove the original failure from some list, or is it already too late to stop TestNG's internal reporting by the time I get to the @AfterMethod methods?
Do you have any other advice regarding this sort of testing or exception handling in general? I don't have many knowledgeable co-workers to help with this stuff so I'm pretty much just winging it.
Implement IInvokedMethodListener:
import org.apache.commons.lang3.reflect.FieldUtils;
import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class InvokedMethodListener implements IInvokedMethodListener {
@Override
public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
}
@Override
public void afterInvocation(IInvokedMethod method, ITestResult result) {
if (method.isTestMethod() && ITestResult.FAILURE == result.getStatus()) {
Throwable throwable = result.getThrowable();
String originalMessage = throwable.getMessage();
String newMessage = originalMessage + "\nReproduction Seed: ...\nCountry: ...";
try {
FieldUtils.writeField(throwable, "detailMessage", newMessage, true);
} catch (Exception e) {
e.printStackTrace();
}
}
}
}
Register it in your test:
@Listeners(InvokedMethodListener.class)
public class YourTest {
@Test
public void test() {
Assert.fail("some message");
}
}
or in testng.xml.
If you execute it, you should get:
java.lang.AssertionError: some message
Reproduction Seed: ...
Country: ...
You can use the SoftAssert class in TestNG to implement the above scenario. SoftAssert keeps a map of all the error messages from the asserts in a test case and reports them when you call assertAll() at the end of the test case. You can also extend the Assertion class to implement methods as per your requirement.
More information regarding the SoftAssert class and its implementation can be found here.
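For illustration, a minimal sketch of the SoftAssert approach (the messages are made up):
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExample {

    @Test
    public void collectsAllFailures() {
        SoftAssert softAssert = new SoftAssert();

        // Failed assertions are recorded instead of aborting the test.
        softAssert.assertEquals("actualCountry", "expectedCountry", "Country mismatch");
        softAssert.assertTrue(false, "Step 3: booking page did not load");

        // Throws a single AssertionError listing every recorded failure.
        softAssert.assertAll();
    }
}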
I have a class that makes native Windows API calls through JNA. How can I write JUnit tests that will execute on a Windows development machine but will be ignored on a Unix build server?
I can easily get the host OS using System.getProperty("os.name")
I can write guard blocks in my tests:
@Test public void testSomeWindowsAPICall() throws Exception {
if (isWindows()) {
// do tests...
}
}
This extra boilerplate code is not ideal.
Alternatively I have created a JUnit rule that only runs the test method on Windows:
public class WindowsOnlyRule implements TestRule {
@Override
public Statement apply(final Statement base, final Description description) {
return new Statement() {
@Override
public void evaluate() throws Throwable {
if (isWindows()) {
base.evaluate();
}
}
};
}
private boolean isWindows() {
return System.getProperty("os.name").startsWith("Windows");
}
}
And this can be enforced by adding this annotated field to my test class:
@Rule public WindowsOnlyRule runTestOnlyOnWindows = new WindowsOnlyRule();
Both of these mechanisms are deficient, in my opinion, in that they will silently pass on a Unix machine. It would be nicer if they could be marked somehow at execution time with something similar to @Ignore.
Does anybody have an alternative suggestion?
In JUnit 5, there are options to enable or disable a test for specific operating systems.
@Test
@EnabledOnOs({ LINUX, MAC })
void onLinuxOrMac() {
}
@Test
@DisabledOnOs(WINDOWS)
void notOnWindows() {
// ...
}
Have you looked into assumptions? In the before method you can do this:
@Before
public void windowsOnly() {
org.junit.Assume.assumeTrue(isWindows());
}
Documentation: http://junit.sourceforge.net/javadoc/org/junit/Assume.html
Have you looked at JUnit assumptions?
useful for stating assumptions about the conditions in which a test
is meaningful. A failed assumption does not mean the code is broken,
but that the test provides no useful information. The default JUnit
runner treats tests with failing assumptions as ignored
(which seems to meet your criteria for ignoring these tests).
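A minimal sketch using the question's own os.name check:
import org.junit.Assume;
import org.junit.Test;

public class WindowsOnlyTest {

    @Test
    public void testSomeWindowsApiCall() {
        // On non-Windows machines this test is reported as skipped/ignored
        // rather than passed.
        Assume.assumeTrue(System.getProperty("os.name").startsWith("Windows"));
        // ... Windows-specific assertions go here ...
    }
}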
If you use Apache Commons Lang's SystemUtils:
In your @Before method, or inside tests that you don't want to run on Win, you can add:
Assume.assumeTrue(SystemUtils.IS_OS_WINDOWS);
Presumably you do not need to actually call the Windows API as part of the JUnit test; you only care that the class which is the target of the unit test calls what it thinks is the Windows API.
Consider mocking the Windows API calls as part of the unit tests.
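A rough sketch of that idea using Mockito, assuming the class under test can be handed an interface that wraps its JNA calls (both WindowsApi and NativeCaller below are hypothetical stand-ins):
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class NativeCallerTest {

    // Hypothetical abstraction over the JNA calls made by the class under test.
    public interface WindowsApi {
        String queryComputerName();
    }

    // Hypothetical class under test that delegates to the abstraction.
    public static class NativeCaller {
        private final WindowsApi api;
        public NativeCaller(WindowsApi api) { this.api = api; }
        public String describeHost() { return "host=" + api.queryComputerName(); }
    }

    @Test
    public void describesHostWithoutTouchingTheRealWindowsApi() {
        WindowsApi api = mock(WindowsApi.class);
        when(api.queryComputerName()).thenReturn("DEV-BOX");

        assertEquals("host=DEV-BOX", new NativeCaller(api).describeHost());
        verify(api).queryComputerName();
    }
}
Because nothing native is invoked, such a test runs the same on Windows and on the Unix build server.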
The Selenium tests I'm going to be doing are basically based on three main steps, with different parameters. These parameters are passed into the test from a text file. This lets a test such as "create three of X" be completed without writing the create code three times in one test.
Imagine I have a test involving creating two of "X" and one of "Y". CreateX and CreateY are already defined in separate tests. Is there a nice way of calling the code contained in createX and createY from, say, Test1?
I tried creating a class with the creates as separate methods, but got errors on all the selenium.<anything> calls, i.e. every damn line. They go away if I extend SeleneseTestCase, but it seems that my other test classes won't import from a class that extends SeleneseTestCase. I'm probably doing something idiotic, but I might as well ask!
EDIT:
Well, for example, it's going to be the same setUp method for every test, so I'd like to only write that once... instead of a few hundred times...
public void ready() throws Exception
{
selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
selenium.start();
selenium.setSpeed("1000");
selenium.setTimeout("999999");
selenium.windowMaximize();
}
That's going to be used EVERYWHERE.
It's in a class called reuseable. I'd like to just call reuseable.ready(); from the tests' setUp... but it won't let me....
public class ExampleTest {
@Before
public void setup() {
System.out.println("setup");
}
public void someSharedFunction() {
System.out.println("shared function");
}
@Test
public void test1() {
System.out.println("test1");
someSharedFunction();
}
@Test
public void test2() {
System.out.println("test2");
someSharedFunction();
}
}
The method annotated with @Before is what will be executed before every test. someSharedFunction() is an example of a 'reusable' function. The code above will output the following:
setup
test1
shared function
setup
test2
shared function
I would recommend using JUnit and trying out some of the tutorials on junit.org. The problem you have described can be fixed by using the @Before annotation on a method that performs this setup in a superclass of your tests.
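For instance, a minimal sketch applying that to the ready() method from the question (the class and field names here are illustrative):
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;
import org.junit.After;
import org.junit.Before;

public abstract class SeleniumTestBase {

    protected Selenium selenium;

    @Before
    public void ready() {
        // Same setup as the question's ready() method, now run before every test.
        selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
        selenium.start();
        selenium.setSpeed("1000");
        selenium.setTimeout("999999");
        selenium.windowMaximize();
    }

    @After
    public void stopSelenium() {
        selenium.stop();
    }
}
Each test class then just extends SeleniumTestBase and gets the started selenium instance without repeating the setup.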