I have some integration test cases that run as JUnit tests with a special category:
@Category(IntegrationTest.class)
Because they are integration tests, the cost of every step is high.
I usually reuse results from previous steps to reduce this cost.
To make that work, I added this to them:
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
A sample looks like this:
@Category(IntegrationTest.class)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TestAllocationPlanApi {
    @Test
    public void testStep01_verifyOrigProgram22275() {...}
    @Test
    public void testStep02_CopyProgram() {...}
}
They work well except for failure handling:
If step01 fails, there is no need to run step02, but JUnit still goes on to step02.
That wastes time and makes the test cases more complicated, because you have to carefully handle the variables that are passed into step02.
I tried
-Dsurefire.skipAfterFailureCount=1
which is discussed in another thread, but it doesn't work; the tests still go on to the next step if a previous step fails.
Another annoying thing about these test cases is that JUnit always resets all instance variables before every step (it creates a new test class instance per test method). This forces me to use a static variable to pass a previous result into the next step:
private static Integer contractAId;
And I have no way to run them in multiple threads.
Does anybody have good ideas for handling these issues?
Thanks!
Happy new year!
You have written these as distinct tests but there are some dependencies between these tests. So, it sounds like you have split a single logical test flow across multiple test methods.
To cater for these dependencies you adopted a naming convention for the tests and instructed JUnit to run these tests in the order implied by the naming convention. In addition, you have some shared state within your test case which is being 'passed from' step to step.
This approach sounds brittle and, probably, makes the following quite difficult:
Diagnosing failures and issues
Maintaining existing steps
Adding new steps
Instructing JUnit to somehow stop executing subsequent tests within a test case if a prior test failed, and using a static variable to pass previous results into the next step, are both symptoms of the decision to split a single logical test across multiple @Test methods.
JUnit has no formal concept of subsequent or prior tests within a test case. This is deliberate, since @Test methods are expected to be independent of each other.
So, rather than trying to implement this behaviour (stop executing subsequent tests within a test case if a prior test failed), I would suggest revisiting your tests to reduce their run time, reduce costly setup time, and move away from splitting a single logical test flow across multiple test methods. Instead, each test should be self-contained; its scope should cover (a) setup, (b) execution, (c) assertion, (d) tear down.
I can see from your question that this is an integration test, so it's likely that the setup, dependency management, execution etc. are not simple; perhaps splitting a single logical test flow across multiple test methods is an effort to decompose a complex test flow into more digestible units. If so, then I'd recommend breaking each of these 'steps' into private methods and orchestrating them from within a single @Test method. For example:
@Test
public void test_verifyOrigProgram22275() {
    // you'll probably want to return some sort of context object from each step
    // i.e. something which allows you to (a) test whether a step has succeeded
    // and abort if not and (b) pass state between steps
    step01_verifyOrigProgram22275();
    step02_CopyProgram();
    // ...
}

private void step01_verifyOrigProgram22275() {...}
private void step02_CopyProgram() {...}
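To make that concrete, here is a rough sketch (the names and the returned value are invented) where each step returns the state the next one needs, so an assertion failure in step 1 stops the flow before step 2 ever runs, and no static fields are required:

@Test
public void copyProgramEndToEnd() {
    Integer contractAId = step01_verifyOrigProgram22275(); // asserts internally; a failure stops here
    step02_copyProgram(contractAId);                       // only reached if step 1 passed
}

private Integer step01_verifyOrigProgram22275() {
    // ... call the API and assert the original program is as expected ...
    return 42; // hypothetical id that the next step needs
}

private void step02_copyProgram(Integer contractAId) {
    // ... copy the program using contractAId and assert the result ...
}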
For my integration tests, I add the following (JUnit 5, with ordered tests).
private static boolean soFarSoGood = true;
private static String failingMethod = null;

// called at the start of each test method: fail fast if an earlier step has failed
// (logger is e.g. an SLF4J Logger field on the test class)
void testSoFarSoGood() throws Exception {
    Assertions.assertTrue(soFarSoGood, "Failed at method " + failingMethod);
    // record the name of the calling test method via the stack trace
    failingMethod = new Throwable()
            .getStackTrace()[1]
            .getMethodName();
    soFarSoGood = false;
    logger.info("Starting {}()", failingMethod);
}

// called at the end of each test method that completed successfully
void soFarSoGood() {
    soFarSoGood = true;
    logger.info("End of {}()", failingMethod);
}
@Test
@Order(10)
void first() throws Exception {
    testSoFarSoGood();
    // ... test code ...
    soFarSoGood();
}

@Test
@Order(20)
void second() throws Exception {
    testSoFarSoGood();
    // ... test code ...
    soFarSoGood();
}
and so on...
I couldn't make an implementation using @BeforeEach / @AfterEach work (OK... I didn't try much), but I would welcome one.
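One way this might be done in JUnit 5 (just a sketch, not something from the original answer; the extension class name is made up) is a class-level extension that combines ExecutionCondition and TestWatcher, so that once one of the ordered tests fails, the remaining ones are reported as skipped instead of being executed:

import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

// Register with @ExtendWith(AbortOnFailureExtension.class) on the ordered test class.
public class AbortOnFailureExtension implements ExecutionCondition, TestWatcher {

    // static so it survives the new test-class instance created per test method;
    // note it is shared by every class using this extension in the same JVM
    private static volatile String failedMethod = null;

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        if (failedMethod != null) {
            return ConditionEvaluationResult.disabled("Skipped: earlier step " + failedMethod + " failed");
        }
        return ConditionEvaluationResult.enabled("no earlier failure");
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        failedMethod = context.getRequiredTestMethod().getName();
    }
}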
I'm having doubts about whether I should create tests that have many mock objects or not.
I recently read When should I mock? and I'm feeling confused.
Let's take a look at a method I have (it is just to illustrate the problem):
@Override
protected void validate() throws WTException {
    Either<ImportError, RootFinderResult> rootPart = getDataValidator().getRootPart();
    if (rootPart.isLeft()) {
        addValidationMessage(ROOT_PART_NOT_FOUND);
    } else if (rootPart.isRight()) {
        getObjectsToValidate().forEach(Lambda.uncheckedBiConsumer((part, epmDocuments) -> {
            LocalizableMessage rootRevision = getRevision(part);
            Optional<EPMDocument> wrongRevisionEPM = epmDocuments.stream()
                    .filter(epmDocument -> !isSameRevision(rootRevision, epmDocument))
                    .findAny();
            wrongRevisionEPM.ifPresent(epmDocument -> addValidationMessage("blabla"));
        }));
    }
}
All of the following methods need a connection to a server in order to work; otherwise they throw errors:
getDataValidator().getRootPart();
getRevision(part)
isSameRevision(rootRevision, epmDocument)
In addition, I can't create 'real' part or EPM document objects; that also requires a connection to a server.
So at this point, what I really want to test is the logic in this part of the code:
Optional<EPMDocument> wrongRevisionEPM = epmDocuments.stream()
.filter(epmDocument -> !isSameRevision(rootRevision, epmDocument))
.findAny();
wrongRevisionEPM.ifPresent(epmDocument -> addValidationMessage("blabla"));
But to test it, I need to mock a lot of objects:
@Spy
@InjectMocks
private SameRevision sameRevision;
@Mock
private WTPartRelatedObjectDataValidator wTPartRelatedObjectDataValidator;
@Mock
private ValidationEntry validationEntry;
@Mock
private WTPart rootPart1, rootPart2;
@Mock
private EPMDocument epmDocument1, epmDocument2, epmDocument3;
@Mock
private Either<ImportError, RootFinderResult> rootPart;
@Mock
private LocalizableMessage rootPartRevisionOne, rootPartRevisionTwo;
so finally I can test the logic:
@Test
@DisplayName("Should contain error message when part -> epms revisions are not the same")
void shouldHaveErrorMessagesWhenDifferentRevisions() throws Exception {
    doReturn(getMockObjectsToValidate()).when(sameRevision).getObjectsToValidate();
    doReturn(rootPart).when(wTPartRelatedObjectDataValidator).getRootPart();
    doReturn(false).when(rootPart).isLeft();
    doReturn(true).when(rootPart).isRight();
    doReturn(rootPartRevisionOne).when(sameRevision).getRevision(rootPart1);
    doReturn(rootPartRevisionTwo).when(sameRevision).getRevision(rootPart2);
    doReturn(true).when(sameRevision).isSameRevision(rootPartRevisionOne, epmDocument1);
    doReturn(false).when(sameRevision).isSameRevision(rootPartRevisionOne, epmDocument2);
    doReturn(true).when(sameRevision).isSameRevision(rootPartRevisionTwo, epmDocument3);
    validationEntry = sameRevision.call();
    assertEquals(1, validationEntry.getValidationMessageSet().size());
}
where
doReturn(rootPart).when(wTPartRelatedObjectDataValidator).getRootPart();
doReturn(false).when(rootPart).isLeft();
doReturn(true).when(rootPart).isRight();
doReturn(rootPartRevisionOne).when(sameRevision).getRevision(rootPart1);
doReturn(rootPartRevisionTwo).when(sameRevision).getRevision(rootPart2);
can be moved to @BeforeEach.
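Moved out, that would look roughly like this (using the same mock fields as declared above):

@BeforeEach
void stubCommonBehaviour() {
    doReturn(rootPart).when(wTPartRelatedObjectDataValidator).getRootPart();
    doReturn(false).when(rootPart).isLeft();
    doReturn(true).when(rootPart).isRight();
    doReturn(rootPartRevisionOne).when(sameRevision).getRevision(rootPart1);
    doReturn(rootPartRevisionTwo).when(sameRevision).getRevision(rootPart2);
}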
In the end, I have my test and it works. It validates what I wanted validated, but to get to this point I had to put in a lot of effort to work through a whole API that requires interaction with a server.
What do you think: is it worth it to create tests like this? I guess this is a wide-open topic, because many newcomers to the 'test world' will have a similar problem, so please do not close the question as opinion-based and do give your feedback.
You are right, it is a big effort to mock all those dependencies. Let me go over a few points that may make things clearer:
Treat writing tests like an investment: So yes, sometimes it is more effort to write the test than to write the actual code. However, you will thank yourself later when you introduce a bug and the tests can catch it. Having good tests gives you confidence when modifying your code that you didn't break anything, and if you did, your tests will find the issue. It pays off over time.
Keep your test focused on a specific class; mock the rest: When you mock everything except the class under test, you can be sure that when a problem occurs, it comes from the class under test and not from one of its dependencies. This makes troubleshooting a lot easier.
Think of testability when writing new code: Sometimes a complicated piece of code that is hard to test is unavoidable. Generally, though, this situation can be avoided by keeping the number of dependencies to a minimum and writing testable code. For example, if a method needs 5 or 6 dependencies to do its job, then that method is probably doing too much and could be broken down. The same can be said at the class level, module level, etc.
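For example, the stream logic from the question could be pulled into a dependency-free helper that can be unit tested without any mocks (a sketch; the helper name and the BiPredicate parameter standing in for the server-backed isSameRevision are invented):

import java.util.Collection;
import java.util.Optional;
import java.util.function.BiPredicate;

static Optional<EPMDocument> findFirstWithDifferentRevision(
        LocalizableMessage rootRevision,
        Collection<EPMDocument> epmDocuments,
        BiPredicate<LocalizableMessage, EPMDocument> isSameRevision) {
    return epmDocuments.stream()
            .filter(epmDocument -> !isSameRevision.test(rootRevision, epmDocument))
            .findAny();
}

A test can then pass plain collections and a simple lambda for isSameRevision instead of mocking the server-backed methods.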
You should mock the other dependencies that the class under test relies on, and set up the behaviour you need.
This is done so that your method is tested in isolation and does not depend on third-party classes.
You can write private void methods that contain your mock setup and reuse them in tests.
In a @BeforeEach-annotated method you can set up the mock behaviour that is the same across all tests.
Since your method returns void, you can use spy objects and verify that they were called with Mockito.verify().
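For example (a sketch based on the question's code, using the placeholder message from it):

Mockito.verify(sameRevision).addValidationMessage("blabla");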
Yes, it is quite a time investment when you have to mock so many things.
In my opinion, if testing something adds value, it is worth testing; the question, of course, is how much time it will consume.
In your specific case, I would test on different "layers".
For example, the methods:
getDataValidator().getRootPart();
getRevision(part)
isSameRevision(rootRevision, epmDocument)
They can be tested independently, and in your case you can just mock their results, meaning that you don't really care about the parameters there; you only care what happens for a certain return value.
So, on one layer you really test the functionality, on the next layer you just mock the result you need, in order to test the other functionality.
I hope it is more clear now...
I have a flaky JUnit test that only fails if I run all my tests. I think one test is causing another test to fail, and I want to prove it before I try to fix it.
If I run all tests, it runs the "bad setup" then it runs the "test that fails after bad setup". It also runs a lot of irrelevant, slow tests in between. But if I use a pattern to only run these two, it runs "test that fails after bad setup" then "bad setup". As a result, both pass.
How do I only run "bad setup" and "test that fails after bad setup", in that order?
According to JUnit's wiki:
By design, JUnit does not specify the execution order of test method
invocations. Until now, the methods were simply invoked in the order
returned by the reflection API. However, using the JVM order is unwise
since the Java platform does not specify any particular order, and in
fact JDK 7 returns a more or less random order. Of course,
well-written test code would not assume any order, but some do, and a
predictable failure is better than a random failure on certain
platforms.
From version 4.11, JUnit will by default use a deterministic, but not
predictable, order (MethodSorters.DEFAULT). To change the test
execution order simply annotate your test class using #FixMethodOrder
and specify one of the available MethodSorters:
@FixMethodOrder(MethodSorters.JVM): Leaves the test methods in the
order returned by the JVM. This order may vary from run to run.
@FixMethodOrder(MethodSorters.NAME_ASCENDING): Sorts the test methods
by method name, in lexicographic order.
You could use MethodSorters.NAME_ASCENDING and change your method names to match your specific order. I know you're using this just for debugging's sake, but relying on the execution order of your test methods is a test smell, and JUnit does not provide finer-grained control over test method execution order.
As Ali Dehghani said, you can order the test method execution with
@FixMethodOrder(MethodSorters.NAME_ASCENDING): Sorts the test methods
by method name, in lexicographic order.
Code:
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class ApplicationTest extends ActivityInstrumentationTestCase2<MainActivity> {

    public ApplicationTest() {
        super(MainActivity.class);
    }

    @Rule
    public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class);

    @Test
    public void t1AttachUI() {
        // testing code goes here
    }

    @Test
    public void t2InitializeViews() {
        // testing code goes here
    }

    @Test
    public void t3SettingValues() {
        // testing code goes here
    }

    @Test
    public void t4Validation() {
        // testing code goes here
    }

    @Test
    public void t5AfterButtonPress() {
        // testing code goes here
    }
}
Unit tests ought to be independent, so most frameworks don't guarantee or enforce the order in which they are run. But since you want to enforce an order, the easiest way I've done it in the past is to create a "throw away" test suite or test method that calls the tests in whatever order I want them to run in. Unit tests are methods; just call them. This is easy if you're dealing with tens of tests, and not at all appealing if you're dealing with hundreds or thousands.
Try to isolate the flaky interaction as much as possible, then swap around the order of the poorly interacting tests within the throwaway calling method.
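For example, a throwaway test along these lines (class and method names are invented) pins down the order:

// Throwaway: reproduce the interaction by calling the two tests directly, in order.
// Remember that any @Before/@After setup of those classes won't run automatically here.
@Test
public void reproducesFailureAfterBadSetup() throws Exception {
    BadSetupTest badSetup = new BadSetupTest();
    badSetup.testThatLeavesBadState();          // "bad setup"

    FailsAfterBadSetupTest victim = new FailsAfterBadSetupTest();
    victim.testThatFailsAfterBadSetup();        // "test that fails after bad setup"
}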
I have a parameterized test class with a bunch of unit tests that generally verify the creation of custom email messages. Right now the class has a lot of tests which depend on the factor(s) used in the parameterized class, and the flow of the tests is the same for every test. An example of a test:
@Test
public void testRecipientsCount() {
    assertEquals(3, recipientsCount);
}
I had to add extra functionality to my email class that adds some extra internal emails to the list of recipients. That only happens for some of the cases, and that leads to my problem.
Let's say I want to assert the number of messages created. In the old test it was the same for each case, but now it differs depending on the case. The most intuitive way for me was to add if statements:
@Test
public void testRecipientsCount() {
    if (something) {
        assertEquals(3, recipientsCount);
    } else {
        assertEquals(4, recipientsCount);
    }
}
...
but my more experienced co-worker says we should avoid ifs in test classes (and I kinda agree on that).
I thought that splitting the test into two test classes might work, but that would lead to redundant code in both classes (I still have to check whether the non-internal messages were created, their size, content, etc.), plus a few lines added to one of them.
My question is: how do I do this without ifs or loads of redundant code (not using a parameterized class would produce even more redundant code)?
In my opinion, a JUnit test should read like a protocol.
That means you can write redundant code to make the test case more readable.
Write a test case for each branch of the if statements in your business logic, as well as the negative cases. That's the only way to get 100% test coverage.
I use the structure:
- testdata preparation
- executing logic
- check results
- clear data
Furthermore, you should put complex asserts on big objects into their own abstract classes:
abstract class YourBusinessObjectAssert {
    public static void assertYourBusinessObjectIsValid(YourBusinessObject pYourBusinessObject,
            Collection<YourBusinessObject> pAllYourBusinessObject) {
        for (YourBusinessObject lYourBusinessObject : pAllYourBusinessObject) {
            if (lYourBusinessObject.isTechnicalEqual(pYourBusinessObject)) {
                return;
            }
        }
        fail("Could not find requested YourBusinessObject in List<YourBusinessObject>!");
    }
}
This will reduce the complexity of your code and make the assertion available to other developers.
A unit test should, in my opinion, test only one thing if possible. As such I'd say that if you need an if statement then you probably need more than one unit test - one for each block in the if/else code.
If possible, I'd say a test should read like a story. My preferred layout (and it's not my idea :-) - it's fairly widely used) is:
- given: do setup etc
- when: the place you actually execute/call the thing under test
- expect: verify the result
Another advantage of a unit test testing only one thing is that when a failure occurs, it's unambiguous what the cause was; if you have a long test with many possible outcomes, it becomes much harder to reason about why the test has failed.
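Applied to the question, a single-outcome test in that style might look like this (the builder-style names are invented, not from the question):

@Test
public void addsInternalRecipientsWhenFeatureIsEnabled() {
    // given: an email configured so the extra internal recipients apply
    EmailMessage message = anEmailMessage().withInternalRecipientsEnabled().build();

    // when: the recipient list is built
    List<String> recipients = message.buildRecipientList();

    // expect: the internal address pushes the count from 3 to 4
    assertEquals(4, recipients.size());
}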
I'm not sure if it's possible to cleanly do what you're after in a parametrized test. If you need different test case behavior based on which parameter for some features, you might just be better off testing those features separately - in different test classes that are not parametrized.
If you really do want to keep everything in the parametrized test classes, I would be inclined to make a helper function so that your example test at least reads as a simple assertion:
@Test
public void testRecipientsCount() {
    assertEquals(expectedCount(something), recipientsCount);
}

private int expectedCount(boolean something) {
    if (something) {
        return 3;
    } else {
        return 4;
    }
}
Why not have a private method that tests the things that are common for each method? Something like (but probably with some input parameter for the testCommonStuff() method):
@Test
public void testRecipientsCountA() {
    testCommonStuff();
    // Assert stuff for test A
}

@Test
public void testRecipientsCountB() {
    testCommonStuff();
    // Assert stuff for test B
}

private void testCommonStuff() {
    // Assert common stuff
}
This way you don't get redundant code and you can split your test into smaller tests. It also makes your tests less error-prone, if they really should be testing the same things. You will still know which test failed, so traceability should be no problem.
I'm test-driving some code for practice and spotted a strange situation.
There is a ChannelsRegistry that contains references to all communication channels, and a PrimaryConsumer that needs to attach itself to one of those channels, chosen at runtime, when initialize() is called.
So I've done my first test as follows:
@RunWith(MockitoJUnitRunner.class)
public class PrimaryConsumerTest {

    private @Mock ChannelsRegistry communicationRegistry;
    private PrimaryConsumer consumer;

    @Before
    public void setup() {
        consumer = new PrimaryConsumer(communicationRegistry);
    }

    @Test
    public void shouldAttachToChannel() throws Exception {
        consumer.initialize();
        verify(communicationRegistry).attachToChannel("channel", consumer);
    }
}
I'm checking that the attach method is called. To get it green, I implemented it like this:
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
Now the next test: get the channel id by name and attach to that specific channel. I want my test to describe the class's behavior instead of its internals, so I don't want my test to be "shouldGetSpecificChannel". Instead I check whether it can attach to a channel selected at runtime:
@Test
public void shouldAttachToSpecificChannel() throws Exception {
    String channelName = "channel";
    when(communicationRegistry.getChannel("channel_name")).thenReturn(channelName);
    consumer.initialize();
    verify(communicationRegistry).attachToChannel(channelName, consumer);
}
This test passes immediately, but the implementation is broken ("channel" is hardcoded).
Two questions here:
Is it OK to have two tests for such behavior? Maybe I should stub getting the channel right away in the first test? If so, how does that map to testing a single thing in a single test?
How do I cope with this situation: tests green, implementation "hardcoded"? Should I write another test with a different channel name? If so, should I remove it after correcting the implementation (as it becomes useless)?
UPDATE:
Just some clarifications.
I've hardcoded "channel" here
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
just to make the first test pass quickly. But then, when running the second test, it passes immediately. I don't verify whether the stubbed method was called, as I think stubs should not be verified explicitly.
Is this what you mean, Rodney, by saying the tests are redundant? If so, should I set up the stub right at the beginning, in the first test?
More tests are usually preferable to too few, so two tests is fine. A better question is whether the two tests are redundant: is there any situation or combination of inputs that would make one of the tests fail, but not the other? Then both tests are needed. If they always fail or succeed together, then you probably need only one of them.
When would you need a different value for channelName? It sounds like this is a configuration setting that is irrelevant to these particular tests. That's fine, perhaps you would test that configuration at a higher level, in your integration tests. A bigger concern I would have is why it's hard-coded in the first place: it should be injected into your class (probably via the constructor). Then you can test different channel names -- or not. Either way, you don't want to be changing your code just for testing if it means changing it back when you're done.
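A sketch of that injection (the constructor signature is invented; the question's PrimaryConsumer currently takes only the registry):

public class PrimaryConsumer {

    private final ChannelsRegistry communicationRegistry;
    private final String channelName;

    public PrimaryConsumer(ChannelsRegistry communicationRegistry, String channelName) {
        this.communicationRegistry = communicationRegistry;
        this.channelName = channelName;
    }

    public void initialize() {
        communicationRegistry.attachToChannel(communicationRegistry.getChannel(channelName), this);
    }
}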
Basically ditto Rodney for the question of multiple tests. I would suggest, based on your update, one or two things.
First off, you have used the same data for both tests. In the Kent Beck book on TDD he mentions the use of "Triangulation". If you used different reference data in the second case then your code would not have passed without any additional work on your part.
On the other hand, he also mentions removing all duplication, and duplication includes duplication between the code and the tests. In this scenario you could have left both of your tests as is, and refactored out the duplication between the string "channel" in the code and the same string in the test by replacing the literal in your class under test with the call to communicationRegistry.getChannel(). After this refactoring you now have the string literal in one and only one place: the test.
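That refactoring might end up looking something like this (the "channel_name" key is the one stubbed in the second test):

public void initialize() {
    String channel = communicationRegistry.getChannel("channel_name");
    communicationRegistry.attachToChannel(channel, this);
}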
Two different approaches, same result. Which one you use comes down to personal preference. In this scenario I would have taken the second approach, but that's just me.
Reminder to check out Rodney's answer to the question of multiple tests or not. I'm guessing you could delete the first one.
Thanks!
Brandon
Each of my JUnit test cases has multiple scenarios, and an ETL process (which takes 20 minutes) has to be run for verification between scenarios. Suppose I have a class with 4 JUnit tests in this format:
First test - one scenario
Second Test - two scenarios
Third Test - three scenarios
Fourth Test - four scenarios
Is it possible to run the first scenario alone in all the test methods, hold the session somewhere, then return to the class to run the second scenario if available, and so on? I would just like to know if this is possible using JUnit. I searched in a few places and didn't have any luck.
No. The JUnit FAQ says, "The ordering of test-method invocations is not guaranteed..." See also How to run test methods in spec order in JUnit4?
Another hindrance to your plan as I understand it is that JUnit instantiates a new instance of the test class for each method in the class. (Here is the explanation why) Each of your scenarios will run in a different instance of the test class.
Your question wasn't very clear; perhaps if you gave more detail about what you're trying to do you'll get some suggestions.
If you're using a database, can you have four databases?
There isn't anything specific in JUnit for saving/restoring sessions, however, you could look at categories to run only specific tests on the CI server.
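For the categories idea, a JUnit 4 suite along these lines (suite, test-class, and category marker names are invented) is one way to run only the chosen tests on the CI server:

import org.junit.experimental.categories.Categories;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Runs only tests annotated with @Category(IntegrationTest.class) from the listed classes.
@RunWith(Categories.class)
@Categories.IncludeCategory(IntegrationTest.class)
@Suite.SuiteClasses({ FirstScenarioTest.class, SecondScenarioTest.class })
public class CiIntegrationSuite {
}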
If at runtime you can tell whether a certain scenario is applicable / available, you can use conditional (re)runs as many times as there are scenarios:
@Before
public void seeIfScenarioIsApplicable() {
    org.junit.Assume.assumeTrue(isScenarioIsApplicable( ... ));
}

@Test
public void first() { /* do magic */ }

@Test
public void second() { /* do magic */ }

@Test
public void third() { /* do magic */ }

@Test
public void fourth() { /* do magic */ }
In case "isScenarioIsApplicable" returns false, the test methods are skipped.