I have a test class in Java with several methods annotated with @Test. Somehow, I want JUnit to run method A before method B when I run the whole test class. Is it possible, or even necessary?
This sort of dependency on test methods is bad design and should be avoided. If there is initialization code in one test method that needs to be done for the next, it should be factored out into a setUp method.
The problem I have with this is reporting. If you WANT/NEED to see whether each test method fails or passes, then you're SCREWED.
I understand that you don't want one test to build upon previous tests, but regardless of that, there may be situations where you need it to (or you'll increase the complexity of the test by an order of magnitude).
Should the flow of tests in the code be up to the developer of the tests or the developer of the framework?
Show JUnit test code to 10 Java developers, and I'll be willing to bet most will assume that the tests (regardless of anything external) will be run in the order they appear in the test class.
Shouldn't THAT be the default behaviour of JUnit? (Give me the option of telling it the order instead of JUnit figuring it out on its own.)
Update: 2014-11-18
Newer versions of JUnit (4.11+) support method sorters:
// This runs the tests in ascending alphabetical order
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
I would think that you might be able to create your own method sorter if you really wanted a specific order of your own.
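As a minimal sketch of that idea, assuming JUnit 4.11+ on the classpath, you could write a custom runner that overrides computeTestMethods(); OrderedRunner is an illustrative name, not a JUnit class:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.InitializationError;

// Runs test methods sorted by name; swap in any Comparator you need.
public class OrderedRunner extends BlockJUnit4ClassRunner {

    public OrderedRunner(Class<?> klass) throws InitializationError {
        super(klass);
    }

    @Override
    protected List<FrameworkMethod> computeTestMethods() {
        // Copy first: the list returned by super may be unmodifiable.
        List<FrameworkMethod> methods = new ArrayList<>(super.computeTestMethods());
        methods.sort(Comparator.comparing(FrameworkMethod::getName));
        return methods;
    }
}

You would then annotate the test class with @RunWith(OrderedRunner.class).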
Tests should be order-independent, but sometimes we don't get what we want.
If you have a large legacy project with thousands of tests that depend on their execution order, you will have many problems when, for example, you try to migrate to Java 7, because it shuffles the test order.
You can read more about this problem here:
junit test ordering and java 7
If it's only two methods, you can wrap them in a single test method that calls them in order, so the unit test itself truly isn't order-dependent:
@Test
public void testInOrder() throws Exception {
    testA();
    testB();
}
Use the following to set things up before and after each test:
@Before
public void setUp() throws Exception {
}

@After
public void tearDown() {
}
Related
I have some unit tests built with JUnit in a Java project.
The only issue is that I want to define the order/priority of the tests.
I mean... I'm using @BeforeAll to execute the first step, which is the login process, to get access to the post-login functionality.
But right after that, I want to run another specific test.
And then the rest...
I checked and there is an option to use the @Order() annotation, but my idea is not to order every test like this... I just want to run the login first, and then a test that I need to run before all the others. So after the first two tests, the order of the others doesn't matter.
First, having unit tests depend on other unit tests is typically bad design. The JUnit 5 User Guide mentions this:
Although true unit tests typically should not rely on the order in which they are executed, there are times when it is necessary to enforce a specific test method execution order — for example, when writing integration tests or functional tests where the sequence of the tests is important, especially in conjunction with @TestInstance(Lifecycle.PER_CLASS).
Consider whether what you're doing is testing or set-up. If the act of logging in is just set-up for your tests, then do it in a method annotated with @BeforeAll or @BeforeEach, whichever makes more sense in your context. Afterwards, you may need to clean up using @AfterAll or @AfterEach, depending on which "before" annotation you used.
If you're actually trying to test the login code, then try to separate those tests from the others. You could do this by moving them into a separate class file or even leveraging @Nested classes (if appropriate). Instead of having your later tests require a real login, use a fake login. In other words, mock the dependencies needed for the later tests. This removes the inter-test dependency. And don't be afraid to re-"login" for each test (e.g. by using @BeforeEach); if you're using mocks, this shouldn't be too expensive.
Note: As of JUnit 5.4 you might even be able to abort some tests if previous tests fail using a TestWatcher extension, as mentioned in this Q&A. However, using such an approach seems better suited for integration tests rather than unit tests.
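For illustration, here is a minimal sketch of the set-up approach described above, treating login as @BeforeEach fixture code rather than a test; AuthClient and its methods are hypothetical stand-ins, not a JUnit API:

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class PostLoginTests {

    // Hypothetical stand-in for whatever client performs the real login.
    static class AuthClient {
        void login(String user, String password) { /* ... */ }
        void logout() { /* ... */ }
    }

    private AuthClient client;

    @BeforeEach
    void logIn() {
        client = new AuthClient();
        client.login("user", "password"); // fresh session for every test
    }

    @AfterEach
    void logOut() {
        client.logout();
    }

    @Test
    void canLoadProfile() {
        // exercise post-login functionality against 'client'
    }
}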
That said, what you want should be possible. You mention @Order but then say you're hesitant to use it because you don't want to order every method, only make sure that two tests run before all the others. You don't have to add the annotation to every method. If you look at the documentation of MethodOrderer.OrderAnnotation, you'll see:
MethodOrderer that sorts methods based on the @Order annotation.
Any methods that are assigned the same order value will be sorted arbitrarily adjacent to each other.
Any methods not annotated with @Order will be assigned a default order value of Integer.MAX_VALUE which will effectively cause them to appear at the end of the sorted list.
And from Order.value():
Elements are ordered based on priority where a lower value has greater priority than a higher value. For example, Integer.MAX_VALUE has the lowest priority.
This means you only need to annotate your two tests with @Order and leave the rest alone. All the test methods without the @Order annotation will run in any order, but after the first two tests.
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
class MyTests {

    @Test
    @Order(1)
    void firstTest() {}

    @Test
    @Order(2)
    void secondTest() {}

    @Test
    void testFoo() {}

    @Test
    void testBar() {}

    // other tests...
}
I am using JUnit 4.
I have a set of test methods in a test case.
Each test method inserts some records, verifies the test result, and finally deletes the records it inserted.
Since the JUnit tests run in parallel, test methods fail because of records left over from the previous test method. This happens only on my colleague's machine (Windows 7), not on my machine (CentOS 6).
What we need is for the test methods to pass on all our machines.
I have tried clearing the records in the setUp() method, but again it works only on my machine. Is there any option in JUnit to make the test methods run in a uniform, sequential order?
Thanks,
MethodSorters is a class introduced in JUnit 4.11. It declares three execution orders that can be used in your test cases:
NAME_ASCENDING (MethodSorters.NAME_ASCENDING) - sorts the test methods by method name, in lexicographic order.
JVM (MethodSorters.JVM) - leaves the test methods in the order returned by the JVM. Note that this order may vary from run to run.
DEFAULT (MethodSorters.DEFAULT) - sorts the test methods in a deterministic, but not predictable, order.
import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

// Running test cases in order of method names, in ascending order
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class OrderedTestCasesExecution {

    @Test
    public void secondTest() {
        System.out.println("Executing second test");
    }

    @Test
    public void firstTest() {
        System.out.println("Executing first test");
    }

    @Test
    public void thirdTest() {
        System.out.println("Executing third test");
    }
}
Output:
Executing first test
Executing second test
Executing third test
Reference: http://howtodoinjava.com/2012/11/24/ordered-testcases-execution-in-junit-4/
JUnit 4.11 now supports specifying execution order using the @FixMethodOrder annotation.
Ordering of tests is not guaranteed in JUnit.
The reason for this is that unit tests are meant to be atomic: all of the setup should happen in the setup/teardown methods, but not in other tests.
Consider moving the code that inserts data into a helper class that can be called both by the test that inserts and by the test that verifies, and call that helper in your @Before methods.
You should also consider a mocking solution (e.g. Mockito) instead of hitting the database directly if you can. Mocking goes a long way toward ensuring that your tests are nicely isolated and, as a nice side benefit, usually points out where you could use some refactoring.
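As a rough illustration of that suggestion, here is a minimal JUnit 4 + Mockito sketch; UserDao is a hypothetical interface standing in for your data layer, so nothing here touches a real database:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;

public class UserServiceTest {

    // Hypothetical data-layer interface; stands in for the real DAO.
    interface UserDao {
        String findName(int id);
    }

    private UserDao dao;

    @Before
    public void setUp() {
        dao = mock(UserDao.class); // fresh mock per test: no shared database state
        when(dao.findName(1)).thenReturn("alice");
    }

    @Test
    public void returnsNameFromDao() {
        assertEquals("alice", dao.findName(1));
    }
}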
Because you're running the tests in parallel, and you're hitting the database, you're highly likely to have problems, because the database won't necessarily be in a coherent state for each test.
Solution: don't run your tests in parallel. JUnit doesn't run tests in parallel by default, so either you're setting that option in Maven or you're using one of the parallel runners in JUnit.
If you're still seeing tests fail on Windows but not on CentOS, then it may be a problem with run order, which you'll need to fix. See my answer to Has JUnit4 begun supporting ordering of test? Is it intentional?.
The way around this (at least in JUnit terms) is to remove the dependencies between tests. Basically, JUnit doesn't support ordering and the tests should be able to be run in any order.
If you really need to have dependencies between tests, use TestNG, where you can have dependencies.
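For comparison, here is a minimal TestNG sketch using its dependsOnMethods attribute; the class and method names are illustrative:

import org.testng.annotations.Test;

public class DependentTests {

    @Test
    public void login() {
        // ... authenticate ...
    }

    // Runs only after login() has passed; skipped if login() fails.
    @Test(dependsOnMethods = "login")
    public void loadDashboard() {
        // ... exercise post-login behaviour ...
    }
}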
There is no problem running tests in parallel, even with a data layer involved, but you need to do additional work to create mocks for your data so the tests don't hit the database. You can use different mocking frameworks such as Mockito, EasyMock, and Arquillian.
Recently, a new concept of Theories was added to JUnit (in v4.4).
In a nutshell, you mark your test method with the @Theory annotation (instead of @Test), make the test method parameterized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit will sequentially run your parameterized test method, passing the parameters retrieved from @DataPoints one after another, but only until the first such invocation fails (for whatever reason).
The concept seems very similar to @DataProviders from TestNG, but when we use data providers, all the scenarios are run regardless of their results. That's useful because you can see how many scenarios work or don't work and fix your program more effectively.
So, I wonder: what's the reason not to execute a @Theory-marked method for every @DataPoint? (It doesn't appear difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
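To make the mechanism concrete, here is a minimal sketch of a theory with data points; the class and method names are illustrative:

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class SquareTheoryTest {

    @DataPoints
    public static int[] values = { -2, -1, 0, 1, 2 };

    @Theory
    public void squareIsNonNegative(int x) {
        // The runner invokes this once per data point; the first failing
        // invocation fails the whole theory.
        assertTrue(x * x >= 0);
    }
}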
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, both placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit ErrorCollector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once).
For example, you can write a test like this:
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.not;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = "bc"; // example values; both checks run even if one fails
        String y = "bc";
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The error collector uses Hamcrest Matchers. Depending on your preferences, this is a plus or not.
AFAIK the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails at the first failing assert.
Try looking at Parameterized. Maybe it provides what you want.
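Here is a minimal sketch of the Parameterized runner, with illustrative names; unlike Theories, every parameter set is reported as its own test, whether or not earlier ones fail:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareParameterizedTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { -2 }, { 0 }, { 3 } });
    }

    private final int x;

    public SquareParameterizedTest(int x) {
        this.x = x;
    }

    @Test
    public void squareIsNonNegative() {
        // Each parameter set is a separate test run, reported independently.
        assertTrue(x * x >= 0);
    }
}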
A Theory is wrong if a single test case in it fails; that follows from the definition of a theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".
I am fairly new to Java. I have constructed a single JUnit test class, and inside this file are a number of test methods. When I run this class (in NetBeans), it runs each test method in the class in order.
Question 1: How can I run only a specific subset of the test methods in this class?
(Potential answer: write @Ignore above @Test for the tests I wish to ignore. However, if I want to indicate which test methods to run rather than which to ignore, is there a more convenient way of doing this?)
Question 2: Is there an easy way to change the order in which the various test methods are run?
Thanks.
You should read about TestSuites. They allow you to group and order your unit test methods. Here's an extract from this article:
"JUnit test classes can be rolled up to run in a specific order by creating a Test Suite."
EDIT: Here's an example showing how simple it is:
public static Test suite() {
    TestSuite suite = new TestSuite("Sample Tests");
    // JUnit 3 style: each entry names the test method to run, in order
    suite.addTest(new SampleTest("testmethod3"));
    suite.addTest(new SampleTest("testmethod5"));
    return suite;
}
This answer tells you how to do it. Randomizing the order in which the tests run is a good idea!
As the comment from dom farr states, each unit test should be able to run in isolation. There should be no residue and no preconditions before or after a test run. All your unit tests should pass when run in any order, or as any subset.
It's not a terrible idea to have or generate a map of test case --> list of tests and then randomly execute all the tests.
There are a number of approaches to this, but it depends on your specific needs. For example, you could split up each of your test methods into separate test classes and then arrange them in different test suites (which would allow for overlap of methods in the suites if desired). Or, a simpler solution would be to make your test methods normal class methods, with one test method in your class that calls them in your specific order. Do you want them to be dynamically called?
I've not been using Java that long either, but as far as I've seen there isn't a convenient way of marking methods to execute rather than to ignore. Instead, I think this can be achieved using the IDE. When I want to do that in Eclipse, I use the JUnit view to run individual tests by clicking on them. I imagine there is something similar in NetBeans.
I don't know of an easy way to reorder the test execution. Eclipse has a button to rerun tests with the failing tests first, but it sounds like you want something more versatile.
I have a problematic situation with some quite advanced unit tests (using PowerMock for mocking and JUnit 4.5). Without going into too much detail, the first test case of a test class will always succeed, but any following test cases in the same test class fail. However, if I select to run only test case 5 out of 10, for example, it will pass. So all tests pass when run individually. Is there any way to force JUnit to run one test case at a time? I call JUnit from an Ant script.
I am aware of the problem of dependent test cases, but I can't pinpoint the cause. There are no saved variables across the test cases, so there is nothing to do in a @Before method. That's why I'm looking for an emergency solution like forcing JUnit to run tests individually.
I am aware of all the recommendations, but to finally answer your question, here is a simple way to achieve what you want. Just put this code inside your test case:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// The lock must be static: JUnit creates a new instance of the test class
// for each test method, so an instance field would not be shared.
private static final Lock sequential = new ReentrantLock();

@Override
protected void setUp() throws Exception {
    super.setUp();
    sequential.lock();
}

@Override
protected void tearDown() throws Exception {
    sequential.unlock();
    super.tearDown();
}
With this, no test can start until it acquires the lock, and only one test can hold the lock at a time.
It seems that your test cases are dependent, that is, the execution of case X affects the execution of case Y. Such a testing system should be avoided (for instance, there's no guarantee of the order in which JUnit will run your cases).
You should refactor your cases to make them independent of each other. Often the use of @Before and @After methods can help you untangle such dependencies.
Your problem is not that JUnit runs all the tests at once; your problem is that you don't see why a test fails. Solutions:
Add more asserts to the tests to make sure that every variable actually contains what you think it does
Download an IDE from the Internet and use the built-in debugger to look at the various variables
Dump the state of your objects just before the point where the test fails
Use the "message" part of the asserts to output more information about why it fails (see below)
Disable all but a handful of tests (in JUnit 3: replace all strings "void test" with "void dtest" in your source; in JUnit 4: replace "@Test" with "//D@Test")
Example:
assertEquals(list.toString(), 5, list.size());
Congratulations. You have found a bug. ;-)
If the tests "shouldn't" affect each other, then you may have uncovered a situation where your code can enter a broken state. Try adding asserts and logging to figure out where the code goes wrong. You may even need to run the tests in a debugger and check your code's internal values after the first test.
Excuse me if I don't answer your question directly, but isn't your problem exactly what TestCase.setUp() and TestCase.tearDown() are supposed to solve? These are methods that the JUnit framework always calls before and after each test case, and they are typically used to ensure you begin each test case in the same state.
See also the JavaDoc for TestCase.
You should check your whole codebase for static variables which refer to mutable state. Ideally the program should have no static mutable state (or at least it should be documented, like I did here). Also, you should be very careful about cleaning up anything the tests write to the file system or database. Otherwise running the tests may leak side effects, which makes it hard to keep the tests independent and repeatable.
Maven and Ant contain a "forkmode" parameter for running JUnit tests, which specifies whether each test class gets its own JVM or all tests are run in the same JVM. But they do not have an option for running each test method in its own JVM.
I am aware of the problem of dependent test cases, but I can't pinpoint why this is. There are no saved variables across the test cases, so nothing to do at @Before annotation. That's why I'm looking for an emergency solution like forcing JUnit to run tests individually.
The @Before method is harmless, because it is called before every test case. @BeforeClass is dangerous, because it has to be static.
It sounds to me that perhaps it isn't that you are not setting up or tearing down your tests properly (although additional setup/teardown may be part of the solution), but that perhaps you have shared state in your code that you are not aware of. If an early test sets a static/singleton/shared variable that you are unaware of, later tests will fail if they are not expecting this. Even with mocks this is very possible. You need to find the cause. I agree with the other answers: your tests have exposed a problem that should not be solved by trying to run the tests differently.
Your description shows me that your unit tests depend on each other, which is strongly discouraged in unit tests.
Unit tests must be independent and isolated. You have to be able to execute them alone, or all of them, in any order.
I know that does not help you directly. The problem will be in your @BeforeClass or @Before methods: there will be dependencies, so refactor them and try to isolate the problem.
Probably your mocks are created in @BeforeClass. Consider moving that into @Before, so no instance lasts longer than a single test case.
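Here is a minimal sketch of that refactoring, with illustrative names: state created in a static @BeforeClass method is shared by every test, while @Before gives each test its own fresh instance:

import org.junit.Before;
import org.junit.Test;

public class PerTestSetupExample {

    private StringBuilder buffer; // per-test state: not static, not shared

    @Before
    public void setUp() {
        buffer = new StringBuilder(); // recreated before every test method
    }

    @Test
    public void firstTest() {
        buffer.append("a"); // cannot leak into secondTest
    }

    @Test
    public void secondTest() {
        buffer.append("b"); // starts from a fresh buffer
    }
}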