Purposefully failing a JUnit test upon method completion - java

Background
I am working with a Selenium/JUnit test environment and I want to implement a class to perform "soft asserts": meaning that I want it to record whether or not the assert passed, but not actually fail the test case until I explicitly tell it to validate the asserts. This way I can check multiple fields on a page and record all of the ones which do not match.
Current Code
My "verify" methods appear as such (similar ones exist for assertTrue/assertFalse):
public static void verifyEquals(Object expected, Object actual) {
    try {
        assertEquals(expected, actual);
    } catch (Throwable e) {
        verificationFailuresList.add(e);
    }
}
Once all the fields have been verified, I call the following method:
public static void checkAllPassed() {
    if (!verificationFailuresList.isEmpty()) {
        for (Throwable failureThrowable : verificationFailuresList) {
            log.error("Verification failure: " + failureThrowable.getMessage(), failureThrowable);
            // assertTrue(false);
        }
    }
}
Question
At the moment, I am just using assertTrue(false) as a way to quickly fail the test case; however, this clutters the log with a nonsense failure and pushes the real problem further up. Is there a cleaner way to purposefully fail a JUnit test case? If not, is there a better solution for implementing soft asserts? I know of an article which has a very well done implementation, but to my knowledge JUnit has no equivalent to TestNG's IInvokedMethodListener class.

In case you want to fail a JUnit test on purpose, you should use org.junit.Assert.fail().
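For example, here is a minimal sketch (reusing the question's field and logger names) of checkAllPassed() aggregating every recorded failure into one message and failing exactly once via fail():

public static void checkAllPassed() {
    if (!verificationFailuresList.isEmpty()) {
        StringBuilder summary = new StringBuilder("Verification failures:\n");
        for (Throwable failureThrowable : verificationFailuresList) {
            log.error("Verification failure: " + failureThrowable.getMessage(), failureThrowable);
            summary.append(failureThrowable.getMessage()).append('\n');
        }
        // One explicit failure carrying all collected messages, instead of assertTrue(false).
        org.junit.Assert.fail(summary.toString());
    }
}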
Another option is to switch to the TestNG framework, which already has a SoftAssert class in its latest version.

You can use JUnit's ErrorCollector rule.
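For reference, a minimal sketch of the ErrorCollector rule (the values and matchers are placeholders): each failed check is recorded without stopping the test, and all collected errors are reported together when the test method finishes.

import static org.hamcrest.CoreMatchers.equalTo;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class FieldVerificationTest {

    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void verifiesSeveralFields() {
        // Each failed check is recorded but does not abort the test;
        // all collected errors fail the test together at the end.
        collector.checkThat("actual title", equalTo("expected title"));
        collector.checkThat("actual price", equalTo("9.99"));
    }
}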

Related

Java (JUnit 4.xx) How to force call an exception in the try block with a void method using mocking tools?

Here is a snippet of my code. I want to force the catch block to be triggered with a WakeupException.
public void run() {
    try {
        while (true) {
            LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Attempting to Poll");
            ConsumerRecords<String, String> records = consumer.poll(10000);
            if (records.count() == 0) {
                LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "No Response. Invalid Topic");
                break;
            } else if (records.count() > 0) {
                LOGGER.logp(Level.INFO, CLASS_NAME, "run()", "Response Received");
            }
        }
    } catch (WakeupException e) {
        consumer.close();
    }
}
Here is what I tried:
@Test(expected = WakeupException.class)
public void failRun() throws WakeupException, IOException {
    KafkaConsumerForTests consumerThread3;
    consumerThread3 = Mockito.mock(KafkaConsumerForTests.class);
    doThrow(new WakeupException()).when(consumerThread3).run();
    //Mockito.when(consumerThread2.run()).thenThrow(new WakeupException());
    consumerThread3.run();
}
I just want to trigger the WakeupException so that I get line coverage for that block of code. What should I do? This is a void method, by the way. I'm open to suggestions involving PowerMock as well.
After seeing the code, I am quite sure that the call we want to mock is consumer.poll(...). I am not an expert in using Kafka, so take everything from here with a grain of salt. Since consumer is an attribute of the class under test, it should be possible to inject a mocked instance into the class under test and have it throw the WakeupException we need. Instead of (or in addition to, your decision) mocking the class under test, we create a mock of the consumer and stub its poll(...) method to throw the desired WakeupException when called. Instead of mocking the call to consumerThread3.run(), we mock the call to consumer.poll(...).
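A rough sketch of that approach, assuming the consumer can be injected into the class under test (the constructor used here is an assumption, not something shown in the question):

@Test
public void runClosesConsumerOnWakeupException() {
    // Mock the collaborator instead of the class under test.
    @SuppressWarnings("unchecked")
    Consumer<String, String> consumer = Mockito.mock(Consumer.class);
    Mockito.when(consumer.poll(Mockito.anyLong())).thenThrow(new WakeupException());

    // Assumes KafkaConsumerForTests can be constructed with (or given) the mocked consumer.
    KafkaConsumerForTests consumerThread = new KafkaConsumerForTests(consumer);
    consumerThread.run();

    // Verify the behaviour of the catch block: the consumer gets closed.
    Mockito.verify(consumer).close();
}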
A remark on your question: "I just want to call the WakeupException so that I get line coverage" - This should never be the reason to write a test. A test should test behaviour. If there is no behaviour to test (which is rarely the case), do not write a test.
OP edited the question and added some additional information. I am quite confident that the first paragraph of this post should answer the question. The other paragraphs were written before OP added the relevant code in the try-block. They are written on a more abstract level. The interested reader may read them, but this is not necessary to understand the answer.
Please enjoy!
We want to verify the behaviour of the catch-block. In productive code, something in the try-block would throw the corresponding Exception triggering the catch-block. Thus, in order to test the catch-block, we should mock something in the try-block to throw said Exception.
If mocking a call within the block seems impossible, that may be because the code was not developed test-driven. You see, an upside of Test-Driven Development is that you intrinsically generate testable code. If we are stuck with untestable or hard-to-test code, we have two (or maybe three) options:
Leave it as is, do not test it. This can be a valid answer if there is no behaviour to test.
Rewrite the code to make it testable. Depending on the structure of your project this could take anywhere from five minutes to two weeks or more; it is hard to say without knowing the codebase.
Use unconventional tools. Normal mocking frameworks like Mockito have certain limitations; for example, Mockito does not support mocking static or final methods. Other tools, like PowerMock, aim to eliminate those limitations. But be warned: PowerMock operates on the bytecode level. This means that
we are not necessarily testing the bytecode we use in production
this can screw with other tools, e.g. JaCoCo.
Those tools should be your last resort only and used sparingly.

Is there a way to make integration tests fail quickly when middleware fails?

Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes and our tests fail.
The problem is that the detection of this failure is through a litany of org.hibernate.exception.JDBCConnectionException messages. These come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete. Indeed, we generally just kill these builds manually when we realise they are done.
My question: In a Maven-driven Java testing environment, is there a way to direct the build system to watch out for specific kinds of Exceptions and kill the whole process, should they arrive (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with maven.
If you use TestNG instead of JUnit, there are other possibilities to define tests as dependent on other tests.
For example, like others mentioned above, you can have a method to check your database connection and declare all other tests as dependent on this method.
@Test
public void serverIsReachable() {}

@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests which depend on it will be skipped and not marked as failed. Skipped methods are reported as such in the final report, which is important since skipped methods are not necessarily failures. But since your initial test serverIsReachable failed, the build will still fail completely.
The positive effect is that none of your other tests will be executed, so the run should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain logic tests afterwards; you can declare each database test with a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain logic tests as dependent on these tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution for your tests.
Hope this helps to make your tests a bit more structured. For more information, have a look at the TestNG dependency documentation.
I realize this is not exactly what you are asking for, but it could nonetheless help to speed up the build:
JUnit assumptions allow a test to pass (rather than fail) when an assumption does not hold. You could have an assumption like assumeTrue(db.isReachable()) that would skip those tests when a timeout is reached.
In order to actually speed things up and to not repeat this over and over, you could put this in a @ClassRule:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
Of course you would then have to mark your build as unstable in another way, but that should be easily doable.
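A minimal sketch of that idea, assuming some quick reachability check is available (isDatabaseReachable() below is a hypothetical helper, not part of any library):

public abstract class RequiresDatabaseTest {

    @BeforeClass
    public static void assumeDatabaseIsUp() {
        // If this assumption fails, every test in the subclass is skipped
        // instead of each one timing out with a JDBCConnectionException.
        Assume.assumeTrue(isDatabaseReachable());
    }

    private static boolean isDatabaseReachable() {
        // hypothetical helper: try a cheap query or connection with a short timeout
        return true;
    }
}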
I don't know if you can fail-fast the build itself, or would even want to, since the administrative aspects of the build may not then complete, but you could do this:
In all your test classes that depend on the database - or the parent classes, because something like this is inheritable - add this:
@BeforeClass
public static void testJdbc() throws Exception {
    Executors.newSingleThreadExecutor()
            .submit(new Callable<Void>() {
                public Void call() throws Exception {
                    // execute the simplest SQL you can, e.g. "SELECT 1"
                    return null;
                }
            })
            .get(100, TimeUnit.MILLISECONDS);
}
If the JDBC simple query fails to return within 100ms, the entire test class won't run and will show as a "fail" to the build.
Make the wait time as small as you can and still be reliable.
One thing you could do is to write a new Test Runner which will stop if such an error occurs. Here is an example of what that might look like:
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;

public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {

    private boolean failedWithSpecialException = false;

    public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (failedWithSpecialException || isIgnored(method)) {
            notifier.fireTestIgnored(description);
        } else {
            runLeaf(methodBlock(method), description, notifier);
        }
    }

    @Override
    protected Statement methodBlock(FrameworkMethod method) {
        return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
    }

    private class FeedbackIfSpecialExceptionOccurs extends Statement {

        private final Statement next;

        public FeedbackIfSpecialExceptionOccurs(Statement next) {
            super();
            this.next = next;
        }

        @Override
        public void evaluate() throws Throwable {
            boolean complete = false;
            try {
                next.evaluate();
                complete = true;
            } catch (AssumptionViolatedException e) {
                throw e;
            } catch (SpecialException e) {
                StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
                throw e;
            }
        }
    }
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically what this does is check for a certain exception (here SpecialException, an exception I wrote myself); if it occurs, the runner fails the test that threw it and skips all following tests. You could of course limit that to tests annotated with a specific annotation if you liked.
It is also possible that similar behavior could be achieved with a Rule; if so, that may be a lot cleaner.
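For completeness, a rough sketch of what such a Rule might look like (untested; SpecialException is the same placeholder exception as above, and the static flag mirrors the runner's field):

import org.junit.Assume;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class StopAfterSpecialExceptionRule implements TestRule {

    private static volatile boolean specialExceptionSeen = false;

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // Once the special exception has been seen, skip the remaining tests
                // via a failed assumption instead of letting each of them time out.
                Assume.assumeTrue(!specialExceptionSeen);
                try {
                    base.evaluate();
                } catch (SpecialException e) {
                    specialExceptionSeen = true;
                    throw e;
                }
            }
        };
    }
}

It would be registered with @Rule in the test classes (or a shared base class), and because the flag is static it keeps working across classes run in the same JVM.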

Adding custom messages to TestNG failures

I'm in the process of migrating a test framework from JUnit to TestNG. This framework is used to perform large end-to-end integration tests with Selenium that take several minutes to run and consist of several hundred steps across dozens of browser pages.
DISCLAIMER: I understand that this makes unit testing idealists very uneasy, but this sort of testing is required at most large service oriented companies and using unit testing tools to manage these integration tests is currently the most widespread solution. It wasn't my decision. It's what I've been asked to work on and I'm attempting to make the best of it.
At any rate, these tests fail very frequently (surprise) and making them easy to debug is of high importance. For this reason we like to detect test failures before they're reported, append some information about the failure, and then allow JUnit to fail with this extra information. For instance, without this information a failure may look like:
java.lang.<'SomeObscureException'>: <'Some obscure message'> at <'StackTrace'>
But with the added information it will look like:
java.lang.AssertionError:
Reproduction Seed: <'Random number used to generate test case'>
Country: <'Country for which test was set to run'>
Language: <'Localized language used by test'>
Step: <'Test step where the exception occurred'>
Exception Message: <'Message explaining probable cause of failure'>
Associated Exception Type: <'SomeObscureException'>
Associated Exception Message: <'Some obscure message'>
Associated Exception StackTrace: <'StackTrace'>
Exception StackTrace: <'StackTrace where we appended this information'>
It's important to note that we add this information before the test actually fails. Because our reporting tool is based entirely on the exceptions thrown by JUnit this ensures that the information we need is present in those exceptions. Ideally I'd like to add this information to an HTML or XML document using a reporter class after the test fails but before teardown is performed and then modify our reporting tool to pick up this extra information and append it to our e-mail reports. However, this has been a hard sell at our sprint planning meetings and I have not been allotted any time to work on it (running endless regressions for the developers is given higher priority than working on the test framework itself. Such is the life of the modern SDET). I also believe strongly in balance and refuse to cut into other parts of my life to get this done outside of tracked time.
What we're currently doing is this:
public class SomeTests extends TestBase {

    @Test
    public void someTest() {
        // Test code
    }

    // More tests
}

public abstract class TestBase {

    @Rule
    public MyWatcher watcher = new MyWatcher();

    // More rules and variables

    @Before
    public final void setup() {
        // Read config, generate test data, create Selenium WebDriver, etc.
        // Send references to all test objects to MyWatcher
    }
}

public class MyWatcher extends TestWatcher {

    // Test object references

    @Override
    public void failed(Throwable throwable, Description description) {
        StringBuilder sb = new StringBuilder();
        // Append custom test information to sb.
        String exceptionSummary = sb.toString();
        Assert.fail(exceptionSummary);
    }

    @Override
    public void finished(Description description) {
        // Shut down Selenium WebDriver, kill proxy server, etc.
    }

    // Miscellaneous teardown and logging methods
}
JUnit starts.
SomeTests inherits from the TestBase class. TestBase instantiates our own instance of a TestWatcher via the @Rule annotation (MyWatcher).
Test setup is run in TestBase class.
References to test objects are sent to MyWatcher.
JUnit begins someTest() method.
someTest fails at some point.
JUnit calls overridden failed() method in MyWatcher.
failed() method appends custom test information to new message using references passed by TestBase.
failed() method calls JUnit's Assert.fail() method with the customized message.
JUnit throws a java.lang.AssertionError for this new failure with the customized message. This is the exception that actually gets recorded in the test results.
JUnit calls overridden finished() method.
finished() method performs test teardown.
Our reporting tool picks up the summarized errors thrown by JUnit, and includes them in the e-mails we receive. This makes life easier than debugging the original exceptions would be without any of the extra information added by MyWatcher after the original failure.
I'd now like to implement a similar mechanism using TestNG. I first tried adding an IInvokedMethodListener via a @Listeners annotation on our TestBase class as a way of replacing the TestWatcher that we were using in JUnit. Unfortunately the methods in this listener were getting called after every @BeforeMethod and @AfterMethod call as well as for the actual tests. This was causing quite a mess when I called Assert.fail from inside the IInvokedMethodListener, so I opted to scrap this approach and insert the code directly into an @AfterMethod call in our TestBase class.
Unfortunately, TestNG does not appear to handle the 'failing twice' approach that we were using in JUnit. When I call Assert.fail in the @AfterMethod of a test that has already failed, it gets reported as an additional failure. It seems like we're going to have to come up with another way of doing this until I can get authorization to write a proper test reporter that includes the information we need for debugging.
In the meantime, we still need to dress up the exceptions that get thrown by TestNG so that the debugging information will appear in our e-mail reports. One idea I have for doing this is to wrap every single test in a try/catch block. If the test fails (an exception gets thrown), then we can catch that exception, dress it up in a summary exception with the debugging information added to that exception's message, and call Assert.fail with our new summarized exception. That way TestNG only ever sees that one exception and should only report one failure. This feels like a kludge on top of a kludge though, and I can't help but feel that there's a better way of doing this.
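To make the idea concrete, the per-test wrapping described above would look roughly like this (runScenarioSteps() and buildDebugSummary(...) stand in for the real test steps and for whatever assembles the extra information; both are hypothetical):

@Test
public void someEndToEndScenario() {
    try {
        // the actual test steps
        runScenarioSteps();
    } catch (Throwable original) {
        // Replace the obscure original failure with one summarized exception,
        // so TestNG reports a single failure carrying the debugging context.
        Assert.fail(buildDebugSummary(original), original);
    }
}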
Does anybody know of a better method for modifying what gets reported by TestNG? Is there some kind of trick I can use to replace the original exception with my own using ITestContext or ITestResult? Can I dive in somewhere and remove the original failure from some list, or is it already too late to stop TestNG's internal reporting by the time I get to the @AfterMethod methods?
Do you have any other advice regarding this sort of testing or exception handling in general? I don't have many knowledgeable co-workers to help with this stuff so I'm pretty much just winging it.
Implement IInvokedMethodListener:
public class InvokedMethodListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        if (method.isTestMethod() && ITestResult.FAILURE == result.getStatus()) {
            Throwable throwable = result.getThrowable();
            String originalMessage = throwable.getMessage();
            String newMessage = originalMessage + "\nReproduction Seed: ...\nCountry: ...";
            try {
                FieldUtils.writeField(throwable, "detailMessage", newMessage, true);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
Register it in your test:
@Listeners(InvokedMethodListener.class)
public class YourTest {

    @Test
    public void test() {
        Assert.fail("some message");
    }
}
or in testng.xml.
If you execute it, you should get:
java.lang.AssertionError: some message
Reproduction Seed: ...
Country: ...
You can use the SoftAssert class in TestNG for implementing the above scenario. The SoftAssert class keeps a map of all the assertion errors raised in the test case and reports them at the end of the test case. You can also extend the Assertion class to implement methods as per your requirements.
More information regarding SoftAssert class and its implementation can be found here
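For reference, a minimal sketch of how TestNG's SoftAssert is typically used (the values here are placeholders):

import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class SoftAssertExampleTest {

    @Test
    public void checksSeveralFieldsBeforeFailing() {
        SoftAssert softly = new SoftAssert();
        softly.assertEquals("actual title", "expected title", "title mismatch");
        softly.assertTrue(false, "price should be visible");
        // assertAll() throws one failure listing every soft assertion that failed above.
        softly.assertAll();
    }
}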

Placement of success and failure test cases of the same method

Suppose I have a method named foo which, for a certain set of input values, is expected to complete successfully and return a result, and for some other set of values, is expected to throw a certain exception. This method requires some things to have been set up before it can be tested.
Given these conditions, is it better to club success and failure tests in one test, or should I maintain these cases in separate test methods?
In other words, which of the following two approaches is preferable?
Approach 1:
@Test
public void testFoo() {
    setUpThings();

    // testing success case
    assertEquals(foo(s), y);

    // testing failure case
    try {
        foo(f);
        fail("Expected an exception.");
    } catch (FooException ex) {
    }
}
Approach 2:
@Test
public void testFooSuccess() {
    setUpThings();
    assertEquals(foo(s), y);
}

@Test
public void testFooFailure() {
    setUpThings();
    try {
        foo(f);
        fail("Expected an exception.");
    } catch (FooException ex) {
    }
}
Best you go for approach #2.
Why:
Well, when an assert fails, the rest of the method is not evaluated,
so by putting the tests in two separate methods you are sure to at least execute both tests, failure or not.
Not only should a unit test focus on one specific unit, it should focus on one specific behaviour of that unit. Testing multiple behaviours at once only muddies the water.
Take the time to separate each behaviour into its own unit test.
Approach 3 (extension of 2)
@Before
public void setUpThings() {
    ...
}

@Test
public void testFooSuccess() {
    assertEquals(foo(s), y);
}

@Test(expected = FooException.class)
public void testFooFailure() {
    foo(f);
}
It's good to have focused tests that exercise just one condition at a time, so that a failed test can only mean one thing (Approach 2). And if they all use the same setup, you can move that to a common setup method (@Before). If not, maybe it's better to think about separating related cases into different classes, so that you have not only more focused cases (methods) but also more focused fixtures (classes).
I like approach #2. Separate tests are better.
I don't like how you did test 2. Here's what I'd do:
@Test(expected = FooException.class)
public void testFooFailure() {
    setUpThings();
    foo(f);
}
For me, Approach 2 is preferable, because you first test the happy path and then the failure condition.
If someone needs to test the happy scenarios only, you will have them as a separate test.
Separate test cases are better, for two reasons:
Your test cases should be atomic.
If the first assert condition fails, the second one will not be evaluated.

JUnit4 fail() is here, but where is pass()?

There is a fail() method in the JUnit 4 library. I like it, but I am experiencing the lack of a pass() method, which is not present in the library. Why is that?
I've found out that I can use assertTrue(true) instead, but that still looks illogical.
@Test
public void testSetterForeignWord() {
    try {
        card.setForeignWord("");
        fail();
    } catch (IncorrectArgumentForSetter ex) {
    }

    // assertTrue(true);
}
Just call a return statement anytime your test is finished and has passed.
As long as the test doesn't throw an exception, it passes, unless your #Test annotation specifies an expected exception. I suppose a pass() could throw a special exception that JUnit always interprets as passing, so as to short circuit the test, but that would go against the usual design of tests (i.e. assume success and only fail if an assertion fails) and, if people got the idea that it was preferable to use pass(), it would significantly slow down a large suite of passing tests (due to the overhead of exception creation). Failing tests should not be the norm, so it's not a big deal if they have that overhead.
Note that your example could be rewritten like this:
@Test(expected = IncorrectArgumentForSetter.class)
public void testSetterForeignWord() throws Exception {
    card.setForeignWord("");
}
Also, you should favor the use of standard Java exceptions. Your IncorrectArgumentForSetter should probably be an IllegalArgumentException.
I think this question needs an updated answer, since most of the answers here are fairly outdated.
Firstly to the OP's question:
I think it's pretty well accepted that introducing the "expected exception" concept into JUnit was a bad move, since that exception could be raised anywhere and the test will still pass. It works if you're throwing (and asserting on) very domain-specific exceptions, but I only throw those kinds of exceptions when I'm working on code that needs to be absolutely immaculate; most APIs will simply throw built-in exceptions like IllegalArgumentException or IllegalStateException. If two calls you're making could potentially throw these exceptions, then the expected-exception annotation will green-bar your test even if it's the wrong line that throws the exception!
For this situation I've written a class that I'm sure many others here have written, that's an assertThrows method:
import static org.junit.Assert.fail;

public class Exceptions {

    private Exceptions() {}

    public static void assertThrows(Class<? extends Exception> expectedException, Runnable actionThatShouldThrow) {
        try {
            actionThatShouldThrow.run();
            fail("expected action to throw " + expectedException.getSimpleName() + " but it did not.");
        } catch (RuntimeException e) {
            // Runnable.run() can only throw unchecked exceptions, so catching
            // RuntimeException keeps the rethrow below compilable.
            if (!expectedException.isInstance(e)) {
                throw e;
            }
        }
    }
}
this method simply returns if the exception is thrown, allowing you to do further assertions/verification in your test.
With Java 8 syntax your test looks really nice. Below is one of the simpler tests on our model that uses the method:
@Test
public void when_input_lower_bound_is_greater_than_upper_bound_axis_should_throw_illegal_arg() {
    //setup
    AxisRange range = new AxisRange(0, 100);
    //act
    Runnable act = () -> range.setLowerBound(200);
    //assert
    assertThrows(IllegalArgumentException.class, act);
}
these tests are a little wonky because the "act" step doesn't actually perform any action, but I think the meaning is still fairly clear.
There's also a tiny little library on Maven called catch-exception that uses Mockito-style syntax to verify that exceptions get thrown. It looks pretty, but I'm not a fan of dynamic proxies. That said, their syntax is so slick it remains tempting:
// given: an empty list
List myList = new ArrayList();
// when: we try to get the first element of the list
// then: catch the exception if any is thrown
catchException(myList).get(1);
// then: we expect an IndexOutOfBoundsException
assert caughtException() instanceof IndexOutOfBoundsException;
Lastly, for the situation that brought me to this thread, there is a way to ignore tests if some condition is met.
Right now I'm working on getting some DLLs called through a Java native-library-loading library called JNA, but our build server runs Ubuntu. I like to try to drive this kind of development with JUnit tests, even though they're far from "units" at this point. What I want to do is run the test if I'm on a local machine, but ignore the test if we're on Ubuntu. JUnit 4 does have a provision for this, called Assume:
@Test
public void when_asking_JNA_to_load_a_dll() throws URISyntaxException {
    // this line will cause the test to be branded as "ignored" when "isCircleCI"
    // (the machine running ubuntu is running this test) is true.
    Assume.assumeFalse(BootstrappingUtilities.isCircleCI());
    // an ignored test will typically result in some qualifier being put on the results,
    // but will also not typically prevent a green bar on most platforms.

    //setup
    URL url = DLLTestFixture.class.getResource("USERDLL.dll");
    String path = url.toURI().getPath();
    path = path.substring(0, path.lastIndexOf("/"));

    //act
    NativeLibrary.addSearchPath("USERDLL", path);
    Object dll = Native.loadLibrary("USERDLL", NativeCallbacks.EmptyInterface.class);

    //assert
    assertThat(dll).isNotNull();
}
I was looking for a pass method for JUnit as well, so that I could short-circuit some tests that were not applicable in some scenarios (these are integration tests, rather than pure unit tests). So, too bad it is not there.
Fortunately, there is a way to have a test ignored conditionally, which actually fits even better in my case, using the assumeTrue method:
Assume.assumeTrue(isTestApplicable);
So here the test will be executed only if isTestApplicable is true; otherwise the test will be ignored.
There is no need for a pass method, because when no AssertionError is thrown from the test code, the unit test case passes.
The fail() method actually throws an AssertionError to fail the test case if control reaches that point.
I think that this question is the result of a small misunderstanding of the test execution process. In JUnit (and other testing tools), results are counted per method, not per assert call. There is no counter keeping track of how many passed or failed assertX calls were executed.
JUnit executes each test method separately. If the method returns successfully, then the test is registered as "passed". If an exception occurs, then the test is registered as "failed". In the latter case two subcases are possible: 1) a JUnit assertion exception, 2) any other kind of exception. The status will be "failed" in the first case and "error" in the second case.
In the Assert class many shorthand methods are available for throwing assertion exceptions. In other words, Assert is an abstraction layer over JUnit's exceptions.
For example, this is the source code of assertEquals on GitHub:
/**
 * Asserts that two Strings are equal.
 */
static public void assertEquals(String message, String expected, String actual) {
    if (expected == null && actual == null) {
        return;
    }
    if (expected != null && expected.equals(actual)) {
        return;
    }
    String cleanMessage = message == null ? "" : message;
    throw new ComparisonFailure(cleanMessage, expected, actual);
}
As you can see, in the case of equality nothing happens; otherwise an exception will be thrown.
So:
assertEqual("Oh!", "Some string", "Another string!");
simply throws a ComparisonFailure exception, which will be caught by JUnit, and
assertEqual("Oh?", "Same string", "Same string");
does NOTHING.
In sum, something like pass() would not make any sense, because it would not do anything.
