I have a test that loops to run the same check with different inputs. The problem is that when one assert fails, the test stops and is marked as a single failed test.
Here is the code:
@Test
public void test1() throws JsonProcessingException {
    this.bookingsTest(Product.ONE);
}

@Test
public void test2() throws JsonProcessingException {
    this.bookingsTest(Product.TWO);
}

public <B extends Sale> void bookingsTest(Product product) {
    List<Booking> bookings = this.prodConnector.getBookings(product, 100);
    bookings.stream()
            .map(Booking::getBookingId)
            .forEach(bookingId -> this.bookingTest(bookingId));
}

public <B extends Sale> void bookingTest(String bookingId) {
    ...
    // Do some assert:
    Assert.assertEquals("XXX", "XXX");
}
In this case, the methods test1 and test2 execute as two different tests, but inside them I loop to check every item returned by the collection. What I want is for the check on each item to be treated as a separate test, so that if one fails, the others continue executing and I can see which ones failed and how many failed out of the total.
What you described is parameterized testing. There are plenty of frameworks that can help you; they just have different simplicity-to-power ratios. Just pick the one that fits your needs.
JUnit's native way is the @Parameterized runner, but it's very verbose. Other JUnit plugins you may find useful are Zohhak or junit-dataprovider. If you are not forced to use JUnit and/or plain Java, you can try Spock or TestNG.
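For illustration, here is a minimal sketch of JUnit 4's Parameterized runner. Each Object[] returned by the data() method becomes its own test instance, so one failing booking does not stop the others; the booking IDs below are hypothetical placeholders that would come from prodConnector in practice.

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class BookingIdTest {

    // Each Object[] becomes one test instance; the name pattern shows up in reports.
    @Parameters(name = "booking {0}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "booking-1" }, { "booking-2" }, { "booking-3" }
        });
    }

    private final String bookingId;

    public BookingIdTest(String bookingId) {
        this.bookingId = bookingId;
    }

    @Test
    public void bookingIsValid() {
        // Replace with the real per-booking assertions from bookingTest().
        Assert.assertNotNull(bookingId);
    }
}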
Our test environment has a variety of integration tests that rely on middleware (CMS platform, underlying DB, Elasticsearch index).
They're automated and we manage our middleware with Docker, so we don't have issues with unreliable networks. However, sometimes our DB crashes and our tests fail.
The problem is that this failure is detected through a litany of org.hibernate.exception.JDBCConnectionException messages, which come about via a timeout. When that happens, we end up with hundreds of tests failing with this exception, each one taking many seconds to fail. As a result, it takes an age for our tests to complete. Indeed, we generally just kill these builds manually when we realise they are doomed.
My question: In a Maven-driven Java testing environment, is there a way to direct the build system to watch out for specific kinds of exceptions and kill the whole process should they arise (or reach some kind of threshold)?
We could watchdog our containers and kill the build process that way, but I'm hoping there's a cleaner way to do it with Maven.
If you use TestNG instead of JUnit, there are additional ways to declare tests as dependent on other tests.
For example, like others mentioned above, you can have a method to check your database connection and declare all other tests as dependent on this method.
@Test
public void serverIsReachable() {}

@Test(dependsOnMethods = { "serverIsReachable" })
public void queryTestOne() {}
With this, if the serverIsReachable test fails, all other tests that depend on it will be skipped and not marked as failed. Skipped methods are reported as such in the final report, which is important since skipped methods are not necessarily failures. But since your initial test serverIsReachable failed, the build should still fail completely.
The positive effect is that none of your other tests will be executed, so the build should fail very fast.
You could also extend this logic with groups. Let's say your database queries are used by some domain logic tests afterwards; you can declare each database test with a group, like
@Test(groups = { "jdbc" })
public void queryTestOne() {}
and declare your domain logic tests as dependent on these tests, with
@Test(dependsOnGroups = { "jdbc.*" })
public void domainTestOne() {}
TestNG will therefore guarantee the order of execution for your tests.
Hope this helps to make your tests a bit more structured. For more info, have a look at the TestNG dependency documentation.
I realize this is not exactly what you are asking for, but it could nonetheless help to speed up the build:
JUnit assumptions allow a test to be skipped, rather than failed, when an assumption does not hold. You could have an assumption like assumeTrue(db.isReachable()) that would skip those tests when a timeout is reached.
In order to actually speed things up and not repeat this over and over, you could put this in a @ClassRule:
A failing assumption in a @Before or @BeforeClass method will have the same effect as a failing assumption in each @Test method of the class.
Of course you would then have to mark your build as unstable some other way, but that should be easily doable.
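For illustration, a minimal sketch of the @Before variant; isDatabaseReachable() is a hypothetical helper you would implement yourself (e.g. a fast "SELECT 1" with a short timeout):

import static org.junit.Assume.assumeTrue;

import org.junit.Before;
import org.junit.Test;

public class DatabaseDependentTest {

    @Before
    public void checkDatabase() {
        // A failing assumption here marks every test in this class as skipped.
        assumeTrue(isDatabaseReachable());
    }

    @Test
    public void queryTestOne() {
        // runs only when the assumption above held
    }

    private boolean isDatabaseReachable() {
        return true; // placeholder; replace with a real connectivity check
    }
}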
I don't know if you can fail-fast the build itself, or would even want to (since the administrative aspects of the build may then not complete), but you could do this:
In all your test classes that depend on the database - or the parent classes, because something like this is inheritable - add this:
@BeforeClass
public static void testJdbc() throws Exception {
    // JUnit 4 requires @BeforeClass methods to be public static.
    ExecutorService executor = Executors.newSingleThreadExecutor();
    try {
        executor.submit(new Callable<Void>() {
            public Void call() throws Exception {
                // execute the simplest SQL you can, e.g. "SELECT 1"
                return null;
            }
        }).get(100, TimeUnit.MILLISECONDS);
    } finally {
        executor.shutdownNow();
    }
}
If the JDBC simple query fails to return within 100ms, the entire test class won't run and will show as a "fail" to the build.
Make the wait time as small as you can and still be reliable.
One thing you could do is to write a new Test Runner which will stop if such an error occurs. Here is an example of what that might look like:
import org.junit.internal.AssumptionViolatedException;
import org.junit.runner.Description;
import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;
import org.junit.runners.model.Statement;
public class StopAfterSpecialExceptionRunner extends BlockJUnit4ClassRunner {

    private boolean failedWithSpecialException = false;

    public StopAfterSpecialExceptionRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected void runChild(final FrameworkMethod method, RunNotifier notifier) {
        Description description = describeChild(method);
        if (failedWithSpecialException || isIgnored(method)) {
            // once the special exception has been seen, report every
            // remaining test as ignored instead of running it
            notifier.fireTestIgnored(description);
        } else {
            runLeaf(methodBlock(method), description, notifier);
        }
    }

    @Override
    protected Statement methodBlock(FrameworkMethod method) {
        return new FeedbackIfSpecialExceptionOccurs(super.methodBlock(method));
    }

    private class FeedbackIfSpecialExceptionOccurs extends Statement {

        private final Statement next;

        public FeedbackIfSpecialExceptionOccurs(Statement next) {
            super();
            this.next = next;
        }

        @Override
        public void evaluate() throws Throwable {
            try {
                next.evaluate();
            } catch (AssumptionViolatedException e) {
                // failed assumptions are skips, not failures; pass them through
                throw e;
            } catch (SpecialException e) {
                // remember the failure so that all following tests are skipped
                StopAfterSpecialExceptionRunner.this.failedWithSpecialException = true;
                throw e;
            }
        }
    }
}
Then annotate your test classes with @RunWith(StopAfterSpecialExceptionRunner.class).
Basically, this checks for a certain exception (here SpecialException, an exception I wrote myself); if it occurs, the test that threw it fails and all following tests are skipped. You could of course limit that to tests annotated with a specific annotation if you liked.
A similar behavior could probably also be achieved with a Rule, which may be a lot cleaner; a sketch of that variant follows.
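For illustration, a minimal sketch of the Rule-based variant, assuming the same hypothetical SpecialException as above. The flag is static because JUnit creates a fresh test-class instance (and thus fresh @Rule instances) for every test method:

import static org.junit.Assume.assumeTrue;

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class StopAfterSpecialExceptionRule implements TestRule {

    // static so the flag survives across the per-test instances JUnit creates
    private static boolean failedWithSpecialException = false;

    @Override
    public Statement apply(final Statement base, Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // skip this test if an earlier one hit the special exception
                assumeTrue(!failedWithSpecialException);
                try {
                    base.evaluate();
                } catch (SpecialException e) {
                    failedWithSpecialException = true;
                    throw e;
                }
            }
        };
    }
}

It would be used as a field in the test class: @Rule public StopAfterSpecialExceptionRule stopRule = new StopAfterSpecialExceptionRule();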
I'm testing different parts of a miniature search engine, and some of the JUnit tests are leaving entries in the index that interfere with other tests. Is there a convention in JUnit/Maven for clearing objects between tests?
There are two particular annotations that can help you with this, and they are intended for cases such as yours:
@After marks a method to be executed after every @Test, while @AfterClass marks a method to execute once after the entire test class has run. Think of the latter as a last cleanup method to purge any structures or records you've been using and sharing between tests.
Here is an example:
@After
public void cleanIndex() {
    index.clear(); // Assuming you have a collection
}

@AfterClass
public static void finalCleanup() {
    // JUnit 4 requires @AfterClass methods to be static.
    // Clean both the index and, for example, a database record.
}
Note: They have their counterparts (@Before and @BeforeClass) that do exactly the opposite, invoking the related methods before each @Test method and before starting to execute the @Tests defined in that class, respectively. These correspond to the setUp methods used in previous versions of JUnit.
If you can't use annotations, the alternative is to use the good old tearDown method:
public void tearDown() {
    index.clear(); // Assuming you have a collection.
}
This is provided by the JUnit framework and behaves like a method annotated with @After.
You should make use of the @Before annotation to guarantee that each test is running from a clean state. Please see: Test Fixtures.
Inside your JUnit test class, you can define setUp and tearDown methods: setUp will run before every one of your tests, while tearDown will run after every single JUnit test that you have.
ex:
public class JunitTest1 {

    private Collection<Object> collection;

    // Will initialize the collection before every test you run
    @Before
    public void setUp() {
        collection = new ArrayList<>();
        System.out.println("@Before - setUp");
    }

    // Will clean up the collection after every test you run
    @After
    public void tearDown() {
        collection.clear();
        System.out.println("@After - tearDown");
    }

    // Your tests go here
}
This is useful for clearing out data between tests, and it also saves you from reinitializing your fields inside every single test.
Suppose I have a method named foo which, for certain set of input values, is expected to complete successfully and return a result, and for some other set of values, is expected to throw a certain exception. This method requires some things to have set up before it can be tested.
Given these conditions, is it better to combine the success and failure cases in one test, or should I keep these cases in separate test methods?
In other words, which of the following two approaches is preferable?
Approach 1:
@Test
public void testFoo() {
    setUpThings();

    // testing success case
    assertEquals(foo(s), y);

    // testing failure case
    try {
        foo(f);
        fail("Expected an exception.");
    } catch (FooException ex) {
        // expected
    }
}
Approach 2:
@Test
public void testFooSuccess() {
    setUpThings();
    assertEquals(foo(s), y);
}

@Test
public void testFooFailure() {
    setUpThings();
    try {
        foo(f);
        fail("Expected an exception.");
    } catch (FooException ex) {
        // expected
    }
}
Best to go with approach 2.
Why?
When an assert fails, the rest of the method is not evaluated,
so by putting the tests in two separate methods you are sure to at least execute both tests, failure or not.
Not only should a unit test focus on one specific unit, it should focus on one specific behaviour of that unit. Testing multiple behaviours at once only muddies the water.
Take the time to separate each behaviour into its own unit test.
Approach 3 (extension of 2)
@Before
public void setUpThings() {
    ...
}

@Test
public void testFooSuccess() {
    assertEquals(foo(s), y);
}

@Test(expected = FooException.class)
public void testFooFailure() {
    foo(f);
}
It's good to have focused tests that exercise just one condition at a time, so that a failed test can only mean one thing (Approach 2). If they all use the same setup, you can move that to a common setup method (@Before). If not, it may be better to separate related cases into different classes, so that you have not only more focused cases (methods) but also more focused fixtures (classes).
I like approach #2. Separate tests are better.
I don't like how you did test 2. Here's what I'd do:
@Test(expected = FooException.class)
public void testFooFailure() {
    setUpThings();
    foo(f);
}
For me, Approach 2 is preferable, because you first test the happy path and then the failure condition.
If someone needs to test only the happy scenarios, you will have a dedicated test for that.
Separate test cases are better for two reasons:
Your test cases should be atomic.
If the first assert fails, the second one will never be evaluated.
Is it possible in JUnit to add a brief description of the test for the future reader (e.g. what's being tested, some short explanation, expected result, ...)? I mean something like in ScalaTest, where I can write:
test("Testing if true holds") {
  assert(true)
}
An ideal approach would be to use some annotation, e.g.
@Test
@TestDescription("Testing if true holds")
public void testTrue() {
    assert(true);
}
Therefore, if I run such annotated tests using Maven (or some similar tool), I could have similar output to the one I have in SBT when using ScalaTest:
- Testing if entity gets saved correctly
- Testing if saving fails when field Name is not specified
- ...
Currently I can either use terribly long method names or write Javadoc comments, which are not present in the build output.
Thank you.
In JUnit 5, there is the @DisplayName annotation:
@DisplayName is used to declare a custom display name for the annotated test class or test method. Display names are typically used for test reporting in IDEs and build tools and may contain spaces, special characters, and even emoji.
Example:
@Test
@DisplayName("Test if true holds")
public void checkTrue() {
    assertEquals(true, true);
}
TestNG does it like this, which to me is the neatest solution:
@Test(description = "My funky test")
public void testFunk() {
    ...
}
See http://testng.org/javadocs/org/testng/annotations/Test.html for more information.
Not exactly what you are looking for, but you can provide a description on any assert methods.
Something like:
@Test
public void testTrue() {
    assertTrue("Testing if true holds", true);
}
I prefer to follow a standard format when testing in JUnit. The name of the test would be
test[method name]_[condition]_[outcome]
for Example:
@Test
public void testCreateObject_nullField_errorMessage() {}

@Test
public void testCreateObject_validObject_objectCreated() {}
I think this approach is helpful when doing TDD, because you can just start writing all the test names, so you know what you need to test / develop.
Still, I would welcome test-description functionality from JUnit.
And this is certainly better than other tests I have seen in the past like:
@Test public void testCreateObject1() {}
@Test public void testCreateObject2() {}
@Test public void testCreateObject3() {}
or
@Test public void testCreateObjectWithNullFirstNameAndSecondNameTooLong() {}
You can name the test method after the test:
public void testThatOnePlusOneEqualsTwo() {
    assertEquals(2, 1 + 1);
}
This will show up in Eclipse, Surefire, and most other runners.
The detailed solution: you could add a logger to your test to write the results to a file (see log4j, for example). You can then read the results in the file and also record messages for successful runs, which assert statements alone cannot do.
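For illustration, a minimal sketch of this logging approach, assuming log4j 1.x is on the classpath and configured to write to a file (e.g. via a FileAppender in log4j.properties):

import static org.junit.Assert.assertEquals;

import java.util.LinkedList;
import java.util.List;

import org.apache.log4j.Logger;
import org.junit.Test;

public class LoggingTest {

    private static final Logger LOG = Logger.getLogger(LoggingTest.class);

    @Test
    public void testAdd() {
        LOG.info("testAdd: checking list size after add");
        List<Object> list = new LinkedList<>();
        list.add(new Object());
        assertEquals(1, list.size());
        // this line is only reached (and logged) when the assert passed
        LOG.info("testAdd: size check passed");
    }
}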
The simple solution: you can add a Javadoc description to every test method; this will be included if you generate the Javadoc.
Also, every assert statement can take a message that will be printed whenever the assert fails.
/**
 * Tests that List#size() increases after adding an Object to a List.
 */
@Test
public void testAdd() {
    List<Object> list = new LinkedList<>();
    list.add(new Object());
    assertEquals("size should be 1 after adding an Object", 1, list.size());
}
Do NOT use System.out.println("your message"); because you don't know how the tests will be executed, and if the environment does not provide a console, your messages will not be displayed.
The Selenium tests I'm gonna be doing are basically based on three main steps, with different parameters. These parameters are passed in from a text file to the test. This allows easy completion of a test such as "create three of X" without writing the create code three times in one test.
Imagine I have a test involving creating two of "X" and one of "Y". CreateX and CreateY are already defined in separate tests. Is there a nice way of calling the code contained in createX and createY from, say, Test1?
I tried creating a class with the creates as separate methods, but got errors on all the selenium.-anything- lines, i.e. every damn line. It goes away if I extend SeleneseTestCase, but it seems that my other test classes won't import from a class that extends SeleneseTestCase. I'm probably doing something idiotic, but I might as well ask!
EDIT:
Well, for example, it's gonna be the same setUp method for every test, so I'd like to only write that once... instead of a few hundred times...
public void ready() throws Exception {
    selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
    selenium.start();
    selenium.setSpeed("1000");
    selenium.setTimeout("999999");
    selenium.windowMaximize();
}
That's gonna be used EVERYWHERE.
It's in a class called reuseable. I'd like to just call reuseable.ready(); from the tests' setUp... but it won't let me....
public class ExampleTest {

    @Before
    public void setup() {
        System.out.println("setup");
    }

    public void someSharedFunction() {
        System.out.println("shared function");
    }

    @Test
    public void test1() {
        System.out.println("test1");
        someSharedFunction();
    }

    @Test
    public void test2() {
        System.out.println("test2");
        someSharedFunction();
    }
}
The body of the method annotated with @Before will be executed before every test. someSharedFunction() is an example of a 'reusable' function. The code above will output the following:
setup
test1
shared function
setup
test2
shared function
I would recommend using JUnit and trying out some of the tutorials on junit.org. The problem you have described can be fixed by using the @Before annotation on a method that performs this setup in a super class of your tests; a minimal sketch follows.
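For illustration, a sketch of the super-class approach, assuming the Selenium RC client (com.thoughtworks.selenium) from the question; SeleniumTestBase is a hypothetical name, and subclasses inherit the @Before method, so the setup is written only once:

import org.junit.After;
import org.junit.Before;

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public abstract class SeleniumTestBase {

    protected Selenium selenium;

    @Before
    public void ready() throws Exception {
        // same setup as in the question, now inherited by every test class
        selenium = new DefaultSelenium("localhost", 4444, "*chrome", "https://localhost:9443/");
        selenium.start();
        selenium.setSpeed("1000");
        selenium.setTimeout("999999");
        selenium.windowMaximize();
    }

    @After
    public void stop() {
        selenium.stop();
    }
}

A test class then only needs: public class CreateXTest extends SeleniumTestBase { ... }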