If statements in tests - java

I have a parameterized test class with a bunch of unit tests that generally control the creation of custom email messages. Right now the class has a lot of tests which depend on the factor(s) used in the parameterized class, and the flow of the tests is the same for every test. An example of a test:
@Test
public void testRecipientsCount() {
    assertEquals(3, recipientsCount);
}
I had to add extra functionality to my email class that adds some extra internal emails to the list of recipients. That only happens for some of the cases, and that leads to my problem.
Let's say I want to assert the number of messages created. For the old test it was the same for each case, but now it differs depending on the case. The most intuitive way for me was to add if statements:
@Test
public void testRecipientsCount() {
    if (something) {
        assertEquals(3, recipientsCount);
    } else {
        assertEquals(4, recipientsCount);
    }
}
...
but my more experienced co-worker says we should avoid ifs in test classes (and I kinda agree on that).
I thought that splitting the test into two test classes might work, but that would lead to redundant code in both classes (I still have to check whether the non-internal messages were created, their size, content, etc.), with only a few lines added for one of them.
My question is: how do I do this without using ifs or loads of redundant code (not using a parameterized class would produce even more redundant code)?

In my opinion a JUnit test should read like a protocol.
That means you can write redundant code to make the test case more readable.
Write a test case for each possible if statement in your business logic, as well as the negative cases. That's the only way to get 100% test coverage.
I use the structure:
- test data preparation
- executing logic
- check results
- clear data
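For the question above, that structure might look roughly like the sketch below; EmailMessageFactory and its methods are placeholder names I've made up for illustration, not the poster's actual classes:
@Test
public void testCreateMessagesWithInternalRecipients() {
    // test data preparation (hypothetical factory and API)
    List<String> recipients = Arrays.asList("a@example.com", "b@example.com", "c@example.com");
    EmailMessageFactory factory = new EmailMessageFactory();

    // executing logic
    List<Message> messages = factory.createMessages(recipients, true /* include internal recipients */);

    // check results
    assertEquals(4, messages.size());

    // clear data (only needed if the test touches shared state)
    factory.reset();
}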
Furthermore, you should put complex asserts on big objects into their own abstract classes:
abstract class YourBusinessObjectAssert {

    public static void assertYourBusinessObjectIsValid(YourBusinessObject pYourBusinessObject,
            Collection<YourBusinessObject> pAllYourBusinessObject) {
        for (YourBusinessObject lYourBusinessObject : pAllYourBusinessObject) {
            if (lYourBusinessObject.isTechnicalEqual(pYourBusinessObject)) {
                return;
            }
        }
        fail("Could not find requested YourBusinessObject in List<YourBusinessObject>!");
    }
}
It will reduce the complexity of your code, and you're making it available to other developers.

A unit test should, in my opinion, test only one thing if possible. As such I'd say that if you need an if statement then you probably need more than one unit test - one for each block in the if/else code.
If possible I'd say a test should read like a story - my preferred layout (it's not my idea :-) - it's fairly widely used) is:
- given: do setup etc
- when: the place you actually execute/call the thing under test
- expect: verify the result
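Applied to the original question, the two branches of the if would become two separate tests in this layout. EmailBuilder and its constructor flag below are assumptions I've made for the sketch, not the poster's real API:
@Test
public void recipientsCountWithoutInternalEmails() {
    // given: a builder configured without internal recipients (hypothetical API)
    EmailBuilder builder = new EmailBuilder(false);

    // when
    List<String> recipients = builder.buildRecipients();

    // expect
    assertEquals(3, recipients.size());
}

@Test
public void recipientsCountWithInternalEmails() {
    // given: the same builder, but with internal recipients enabled
    EmailBuilder builder = new EmailBuilder(true);

    // when
    List<String> recipients = builder.buildRecipients();

    // expect
    assertEquals(4, recipients.size());
}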
Another advantage of a unit test testing only one thing is that when a failure occurs it's unambiguous what the cause was - if you have a long test with many possible outcomes it becomes much harder to reason about why a test has failed.

I'm not sure it's possible to cleanly do what you're after in a parameterized test. If you need different test-case behavior based on the parameter for some features, you might just be better off testing those features separately - in different test classes that are not parameterized.
If you really do want to keep everything in the parameterized test classes, I would be inclined to write a helper function so that your example test at least reads as a simple assertion:
@Test
public void testRecipientsCount() {
    assertEquals(expectedCount(something), recipientsCount);
}

private int expectedCount(boolean something) {
    if (something) {
        return 3;
    } else {
        return 4;
    }
}

Why not have a private method that tests the things that are common for each method? Something like (but probably with some input parameter for the testCommonStuff() method):
@Test
public void testRecipientsCountA() {
    testCommonStuff();
    // Assert stuff for test A
}

@Test
public void testRecipientsCountB() {
    testCommonStuff();
    // Assert stuff for test B
}

private void testCommonStuff() {
    // Assert common stuff
}
This way you don't get redundant code and you can split your test into smaller tests. You also make your tests less error-prone IF they should actually test the same things. You will still know which test failed, so traceability should be no problem.

Related

How to test a method that wraps another method?

Let's imagine having a class (written in Java-like pseudocode):
class MyClass {
    ...
    public List<Element> getElementsThatContains(String str) {
        // wrap the single string in a set and delegate to the main overload
        return this.getElementsThatContains(Collections.singleton(str));
    }

    public List<Element> getElementsThatContains(Set<String> strs) {
        ...
    }
}
First of all - I have getElementsThatContains(Set<String> strs) properly 100% covered.
How should I cover getElementsThatContains(String str)?
- Should I copy (almost) all the tests, but with calls to getElementsThatContains(String str)?
- Should I just make one test method that checks whether the results from the first and second methods are the same (with the same incoming data)?
- Should I refactor my code so I do not have such a situation? (If yes, how?)
Yes, you should cover both methods. The reason for having unit tests is the safety net when the code is refactored. For example, someone might refactor the implementation of getElementsThatContains(String str) so that it always returns an empty List. Even though getElementsThatContains(Set strs) has 100% coverage, those tests won't catch this.
No, you should not make one test method that checks whether the results from the first and second methods are the same. This is generally considered bad practice. Moreover, if there is a bug in one method, your test would just check that the other method returns the same incorrect result.
No, you should not copy all the tests, because the test cases for each method would be different. The arguments of the methods are different, so you will have different test cases for each, even though the same method is called underneath.
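As a rough sketch of what a separate test for the single-String overload could look like (the construction and the addElement helper below are assumptions about the poster's API, purely for illustration):
@Test
public void singleStringOverload_returnsMatchingElements() {
    MyClass myClass = new MyClass();
    myClass.addElement(new Element("foo bar")); // assumed setup helper

    List<Element> result = myClass.getElementsThatContains("foo");

    assertEquals(1, result.size());
}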
Yes, you should test both methods, and you should use distinct test cases for each method.
But you should care less about your line coverage.
Don't get me wrong here! It is important to keep the line coverage high. But it is more important to have 100% behavior coverage. And if you come across untested lines your question should be: "Is this untested code needed (i.e. what requirement does it implement) or is it obsolete?".
When we write our tests with line coverage in mind, we tend to focus on the implementation details of our code under test. As a consequence, our tests are likely to fail when we change these implementation details (e.g. during refactoring). But our tests should only fail if the tested behavior changes, not when we change the way this behavior is achieved.

How to skip remaining steps if previous steps failed in JUnit

I had some integration test cases; they ran as JUnit test cases with a special category:
@Category(IntegrationTest.class)
Because they are integration test cases, the cost of every step is high.
Usually I will re-use some results from previous steps to reduce this cost.
To make that work, I added this to them:
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
Some samples look like this:
@Category(IntegrationTest.class)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class TestAllocationPlanApi {

    @Test
    public void testStep01_verifyOrigProgram22275() {...}

    @Test
    public void testStep02_CopyProgram() {...}
}
They work well except for the failure case:
If step01 fails, we don't need to run step02, but JUnit still goes on to step02.
It is a waste, and it makes the test cases more complicated because you need to carefully handle those variables which are passed into step02.
I tried
-Dsurefire.skipAfterFailureCount=1
which is discussed in another thread, but it doesn't work; the test cases still go to the next step if a previous step fails.
Another annoying thing about the test cases is that JUnit always resets all instance variables before every step. This forces me to use a static variable to pass the previous result into the next step:
private static Integer contractAId;
And I have no way to run them in multiple threads.
Does anybody have good ideas to handle those things?
Thanks!
Happy new year!
You have written these as distinct tests but there are some dependencies between these tests. So, it sounds like you have split a single logical test flow across multiple test methods.
To cater for these dependencies you adopted a naming convention for the tests and instructed JUnit to run these tests in the order implied by the naming convention. In addition, you have some shared state within your test case which is being 'passed from' step to step.
This approach sounds brittle and, probably, makes the following quite difficult:
- Diagnosing failures and issues
- Maintaining existing steps
- Adding new steps
Instructing JUnit to - somehow - stop executing subsequent tests within a test case if a prior test failed, and the use of a static variable to pass previous results into the next step, are both symptoms of the decision to split a single logical test across multiple @Test methods.
JUnit has no formal concept of subsequent or prior tests within a test case. This is deliberate, since @Test methods are expected to be independent of each other.
So, rather than trying to implement this behaviour (stop executing subsequent tests within a test case if a prior test failed), I would suggest revisiting your tests to reduce their run time, reduce costly setup time, and move away from splitting a single logical test flow across multiple test methods. Instead each test should be self-contained; its scope should cover (a) set up, (b) execution, (c) assertion, (d) tear down.
I can see from your question that this is an integration test, so it's likely that the setup, dependency management, execution etc. are not simple; perhaps this approach of splitting a single logical test flow across multiple test methods is an effort to decompose a complex test flow into more digestible units. If so, then I'd recommend breaking each of these 'steps' into private methods and orchestrating them from within a single @Test method. For example:
@Test
public void test_verifyOrigProgram22275() {
    // you'll probably want to return some sort of context object from each step
    // i.e. something which allows you to (a) test whether a step has succeeded
    // and abort if not and (b) pass state between steps
    step01_verifyOrigProgram22275();
    step02_CopyProgram();
    ...
}

private void step01_verifyOrigProgram22275() {...}

private void step02_CopyProgram() {...}
For my integration tests, I add the following (JUnit5, tests are ordered).
private static boolean soFarSoGood = true;
private static String failingMethod = null;

void testSoFarSoGood() throws Exception {
    Assertions.assertTrue(soFarSoGood, "Failed at method " + failingMethod);
    failingMethod = new Throwable()
            .getStackTrace()[1]
            .getMethodName();
    soFarSoGood = false;
    logger.info("Starting {}()", failingMethod);
}

void soFarSoGood() {
    soFarSoGood = true;
    logger.info("End of {}()", failingMethod);
}

@Test
@Order(10)
void first() throws Exception {
    testSoFarSoGood();
    // ... test code ...
    soFarSoGood();
}

@Test
@Order(20)
void second() throws Exception {
    testSoFarSoGood();
    // ... test code ...
    soFarSoGood();
}
and so on...
I couldn't make an implementation using @BeforeEach / @AfterEach work (OK... I didn't try much), but I would welcome one.
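One possible sketch (untested, JUnit 5 only, and the class name is my own invention) is a small extension combining ExecutionCondition and TestWatcher, so that once one ordered test fails the remaining tests are disabled instead of being guarded by flags inside each test:
import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.ExecutionCondition;
import org.junit.jupiter.api.extension.ExtensionContext;
import org.junit.jupiter.api.extension.TestWatcher;

public class StopOnFailureExtension implements ExecutionCondition, TestWatcher {

    private static volatile String firstFailedTest = null;

    @Override
    public ConditionEvaluationResult evaluateExecutionCondition(ExtensionContext context) {
        // Disable any test that would run after the first failure.
        if (firstFailedTest != null) {
            return ConditionEvaluationResult.disabled("Skipped because " + firstFailedTest + " failed");
        }
        return ConditionEvaluationResult.enabled("No failure so far");
    }

    @Override
    public void testFailed(ExtensionContext context, Throwable cause) {
        // Remember the first failure so later tests are skipped.
        firstFailedTest = context.getDisplayName();
    }
}
Registering it with @ExtendWith(StopOnFailureExtension.class) on the ordered test class would make the per-test testSoFarSoGood()/soFarSoGood() calls unnecessary.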

How to divide test cases effectively to automate them in a mobile app using appium

I'm trying to automate the test cases of an app containing about 5 big sections.
I have a lot of test cases, more than 100 in each section.
What is the best way to divide the test cases in order to automate them?
Should I create 5 separate classes and put all the tests for a section in each one?
For now I'm writing my test cases using dependencies, like in the following example:
@Test(dependsOnMethods = { "method1" })
public void method2() {
    System.out.println("This is method 2");
}
But my problem is: if there is no dependency between methods, how should I proceed in order to have all test cases executed automatically?
As I answered in your other question, you can always use the Page Object Pattern to make your tests easier to read and, most importantly, easier to fix when needed. Then, if needed, you can change the variables in one place - not everywhere.
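For reference, a page object in that style might look something like the sketch below; SampleScreen, launchStartScreen() and the element id are illustrative assumptions only:
import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.By;

public class SampleScreen {

    private final AppiumDriver driver;

    public SampleScreen(AppiumDriver driver) {
        this.driver = driver;
    }

    // All locators and interactions for this screen live here,
    // so a UI change only needs a fix in one place.
    public void launchStartScreen() {
        driver.findElement(By.id("start_button")).click(); // hypothetical element id
    }
}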
Regarding your question about using annotations across different classes and the methods inside them, please check the example below:
@Test
public void checkSampleScreen() throws InterruptedException {
    SampleScreen ss = new SampleScreen(driver);
    ss.launchStartScreen();
}

JUnit - testing with re-usable functions; validity

What I am trying to do is test some Lucene code I wrote, and I would like some information on best practices when using JUnit for testing. Lucene, BTW, is a search engine library which you can use to create a flat-file index of a bunch of data.
So what I would like to test is the creation of this inverted index, then search through the index to verify that some data is present.
My question is in the code:
public class IndexTest {

    @Test
    public void testWriteIndexFromDB() {
        // run test
        assertTrue(something in this test); // some test for this method
        // Is using a function like so a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    @Test
    public void testWriteIndexFromExcelFile() {
        // run test
        assertTrue(something in this test); // some test for this method
        // Is using a function like so a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    @Test
    public void testRefreshIndexWithNewData() {
        // run test
        assertTrue(something in this test); // some test for this method
        // Is using a function like so a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    // this function checks that data is returned after writing an index
    public void checkSomeDataIsReturned() { // not a test but does a check anyways
        results = myIndex.searchForStuff(some input);
        assertTrue(results.length > 0); // if length is zero, there is no data and something went wrong
    }
}
To summarize: I have three ways to write an index, and I am testing that each of them writes. Is a reusable function that is not a test a proper way to write a test? Or is there a better practice?
It is of course a good thing to write reusable code in tests, but more important than that is writing tests that are easy to understand. In general the asserts belong in the test method itself; moving the asserts to helper methods can make your tests difficult to understand.
One way to write reusable code for checking expectations is to use Hamcrest (https://code.google.com/p/hamcrest/wiki/Tutorial) and build matchers (the library also comes with some very useful matchers for collections and stuff like that).
For example, you can write something like this:
@Test
public void test_can_index_from_database() {
    // create your index from database
    assertThat(myIndex, containsWord("expected_word_in_index"));
}
The matchers "containsWord(String)" its a matcher that you write using hamcrest and you can re-use this logic in all your test. And with hamcrest you can write really easy to understand test.
Well, good practices such as reusable code are to be used in unit tests, too.
However, please consider that if you need to repeat code in unit tests, it may (and often does) mean that your tested methods take on too many responsibilities.
I don't know if that is really your case, but think about refactoring your code (here, splitting your tested methods into smaller methods) so you don't feel the need to repeat the same tests all over.
When each method takes on a single responsibility only and delegates the shared code to another method/class, you test that functionality somewhere else, and here you just test (using mocking and spying) that your methods call the corresponding method/object.
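A minimal sketch of that idea with Mockito; IndexWriterHelper, MyIndexer and their methods are invented names for the extracted shared code, not anything from the question:
@Test
public void writeIndexFromDB_delegatesToSharedWriter() {
    // Mock the collaborator that holds the shared writing logic (hypothetical class).
    IndexWriterHelper writer = Mockito.mock(IndexWriterHelper.class);
    MyIndexer indexer = new MyIndexer(writer);

    indexer.writeIndexFromDB();

    // Only verify the delegation; the helper has its own dedicated tests.
    Mockito.verify(writer).writeDocuments(Mockito.anyList());
}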

How focused JUnit tests in BDD way should be

I'm test-driving some code for practice and spotted a strange situation.
There is a ChannelsRegistry that contains references to all communication channels, and a PrimaryConsumer which needs to attach itself to one of those channels, chosen at runtime, when initialize() is called.
So I've done my first test as follows:
@RunWith(MockitoJUnitRunner.class)
public class PrimaryConsumerTest {

    private @Mock ChannelsRegistry communicationRegistry;
    private PrimaryConsumer consumer;

    @Before
    public void setup() {
        consumer = new PrimaryConsumer(communicationRegistry);
    }

    @Test
    public void shouldAttachToChannel() throws Exception {
        consumer.initialize();
        verify(communicationRegistry).attachToChannel("channel", consumer);
    }
}
I'm checking whether the attach method is called. To get it green I put in an implementation like this:
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
Now the next test: get the channel id by name and attach to that specific channel. I want my test to describe the class's behavior instead of its internals, so I don't want my test to be "shouldGetSpecificChannel". Instead I check whether it can attach to a channel selected at runtime:
@Test
public void shouldAttachToSpecificChannel() throws Exception {
    String channelName = "channel";
    when(communicationRegistry.getChannel("channel_name")).thenReturn(channelName);
    consumer.initialize();
    verify(communicationRegistry).attachToChannel(channelName, consumer);
}
This test passes immediately, but the implementation is screwed ("channel" is hardcoded).
2 questions here:
- Is it OK to have 2 tests for such behavior? Maybe I should stub getting the channel immediately in the first test? If so, how does that map to testing a single thing in a single test?
- How do I cope with such a situation: tests green, implementation "hardcoded"? Should I write another test with a different channel name? If so, should I remove it after correcting the implementation (as it becomes useless)?
UPDATE:
Just some clarifications.
I've hardcoded "channel" here
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
just to make the first test pass quickly. But then, when running the second test, it passes immediately. I don't verify whether the stubbed method was called, as I think stubs should not be verified explicitly.
Is this what you, Rodney, mean by saying the tests are redundant? If yes, should I set up the stub at the very beginning, in the first test?
More tests are usually preferred to too few, so two tests is fine. A better question is whether the two tests are redundant: is there any situation or combination of inputs that would make one of the tests fail but not the other? Then both tests are needed. If they always fail or succeed together, then you probably need only one of them.
When would you need a different value for channelName? It sounds like this is a configuration setting that is irrelevant to these particular tests. That's fine, perhaps you would test that configuration at a higher level, in your integration tests. A bigger concern I would have is why it's hard-coded in the first place: it should be injected into your class (probably via the constructor). Then you can test different channel names -- or not. Either way, you don't want to be changing your code just for testing if it means changing it back when you're done.
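If the channel name were injected, the consumer might end up looking something like this sketch (the extra constructor parameter is my assumption, not code from the question):
public class PrimaryConsumer {

    private final ChannelsRegistry communicationRegistry;
    private final String channelName;

    public PrimaryConsumer(ChannelsRegistry communicationRegistry, String channelName) {
        this.communicationRegistry = communicationRegistry;
        this.channelName = channelName;
    }

    public void initialize() {
        // Look up the channel for the injected name and attach to it.
        communicationRegistry.attachToChannel(communicationRegistry.getChannel(channelName), this);
    }
}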
Basically ditto Rodney for the question of multiple tests. I would suggest, based on your update, one or two things.
First off, you have used the same data for both tests. In the Kent Beck book on TDD he mentions the use of "Triangulation". If you used different reference data in the second case then your code would not have passed without any additional work on your part.
On the other hand, he also mentions removing all duplication, and duplication includes duplication between the code and the tests. In this scenario you could have left both of your tests as is, and refactored out the duplication between the string "channel" in the code and the same in the test by replacing the literal in your class under test with the call to your communicationRegistry.getChannel(). After this refactoring you now have the string literal in one and only one place: The test.
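In other words, after that refactoring initialize() might look roughly like this (a sketch only, based on the stubbing in the second test):
public void initialize() {
    // The literal "channel" is gone; the channel is looked up by name instead.
    String channel = communicationRegistry.getChannel("channel_name");
    communicationRegistry.attachToChannel(channel, this);
}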
Two different approaches, same result. Which one you use comes down to personal preference. In this scenario I would have taken the second approach, but that's just me.
Reminder to check out Rodney's answer to the question of multiple tests or not. I'm guessing you could delete the first one.
Thanks!
Brandon
