I'm test-driving some code for practice and spotted a strange situation.
There is a ChannelsRegistry that holds references to all communication channels, and a PrimaryConsumer that needs to attach itself to one of those channels, chosen at runtime, when initialize() is called.
So I've done my first test as follows:
@RunWith(MockitoJUnitRunner.class)
public class PrimaryConsumerTest {
    @Mock private ChannelsRegistry communicationRegistry;
    private PrimaryConsumer consumer;

    @Before
    public void setup() {
        consumer = new PrimaryConsumer(communicationRegistry);
    }

    @Test
    public void shouldAttachToChannel() throws Exception {
        consumer.initialize();
        verify(communicationRegistry).attachToChannel("channel", consumer);
    }
}
I'm checking whether the attach method is called. To get it green I wrote the implementation like this:
public void initialize() {
communicationRegistry.attachToChannel("channel", this);
}
Now the next test: get the channel id by name and attach to that specific channel. I want my test to describe the class's behavior rather than its internals, so I don't want a test named "shouldGetSpecificChannel". Instead I check whether it can attach to a channel selected at runtime:
@Test
public void shouldAttachToSpecificChannel() throws Exception {
String channelName = "channel";
when(communicationRegistry.getChannel("channel_name")).thenReturn(channelName);
consumer.initialize();
verify(communicationRegistry).attachToChannel(channelName, consumer);
}
This test passes immediately, but the implementation is broken ("channel" is hardcoded).
Two questions here:
Is it OK to have two tests for this behavior? Maybe I should stub getting the channel right away in the first test? If so, how does that map to testing a single thing in a single test?
How do I cope with this situation: tests green, implementation hardcoded? Should I write another test with a different channel name? If so, should I remove it after correcting the implementation (as it becomes useless)?
UPDATE:
Just some clarifications.
I've hardcoded "channel" here
public void initialize() {
communicationRegistry.attachToChannel("channel", this);
}
just to make the first test pass quickly. But then the second test passes immediately as well. I don't verify whether the stubbed method was called, as I think stubs should not be verified explicitly.
Is this what you mean, Rodney, when you say the tests are redundant? If so, should I set up the stub at the very beginning, in the first test?
More tests are usually preferable to too few, so two tests are fine. A better question is whether the two tests are redundant: is there any situation or combination of inputs that would make one of the tests fail but not the other? If so, both tests are needed. If they always fail or succeed together, then you probably need only one of them.
When would you need a different value for channelName? It sounds like this is a configuration setting that is irrelevant to these particular tests. That's fine, perhaps you would test that configuration at a higher level, in your integration tests. A bigger concern I would have is why it's hard-coded in the first place: it should be injected into your class (probably via the constructor). Then you can test different channel names -- or not. Either way, you don't want to be changing your code just for testing if it means changing it back when you're done.
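For illustration, a minimal sketch of that injection, assuming the PrimaryConsumer shape implied by your tests (the channelName field and constructor parameter are my additions):

public class PrimaryConsumer {
    private final ChannelsRegistry communicationRegistry;
    private final String channelName;

    // the channel name is injected rather than hardcoded in initialize()
    public PrimaryConsumer(ChannelsRegistry communicationRegistry, String channelName) {
        this.communicationRegistry = communicationRegistry;
        this.channelName = channelName;
    }

    public void initialize() {
        communicationRegistry.attachToChannel(channelName, this);
    }
}

Now a test can construct the consumer with any channel name it likes, and no production code needs to change just for testing.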
Basically ditto Rodney for the question of multiple tests. I would suggest, based on your update, one or two things.
First off, you have used the same data for both tests. In the Kent Beck book on TDD he mentions the technique of "triangulation": if you had used different reference data in the second case, your code would not have passed without additional work on your part.
On the other hand, he also mentions removing all duplication, and duplication includes duplication between the code and the tests. In this scenario you could have left both of your tests as they are and refactored out the duplication between the string "channel" in the code and the same string in the test, by replacing the literal in your class under test with a call to communicationRegistry.getChannel(). After this refactoring the string literal lives in one and only one place: the test.
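Concretely, that refactoring might look like this (a sketch; "channel_name" is the lookup key your second test stubs):

public void initialize() {
    // the literal "channel" is gone from production code: the registry
    // resolves the channel, and the literal now lives only in the test
    String channelName = communicationRegistry.getChannel("channel_name");
    communicationRegistry.attachToChannel(channelName, this);
}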
Two different approaches, same result. Which one you use comes down to personal preference. In this scenario I would have taken the second approach, but that's just me.
Reminder to check out Rodney's answer to the question of multiple tests or not. I'm guessing you could delete the first one.
Related question:
I'm having doubts about whether I should create tests that have many mock objects.
I recently read When should I mock? and I'm feeling confused.
Let's take a look at a method I have (it is just to illustrate the problem)
@Override
protected void validate() throws WTException {
Either<ImportError, RootFinderResult> rootPart = getDataValidator().getRootPart();
if (rootPart.isLeft()) {
addValidationMessage(ROOT_PART_NOT_FOUND);
} else if (rootPart.isRight()) {
getObjectsToValidate().forEach(Lambda.uncheckedBiConsumer((part, epmDocuments) -> {
LocalizableMessage rootRevision = getRevision(part);
Optional<EPMDocument> wrongRevisionEPM = epmDocuments.stream()
.filter(epmDocument -> !isSameRevision(rootRevision, epmDocument))
.findAny();
wrongRevisionEPM.ifPresent(epmDocument -> addValidationMessage("blabla"));
}));
}
}
All of the following methods need a connection to a server in order to work; otherwise they throw errors:
getDataValidator().getRootPart()
getRevision(part)
isSameRevision(rootRevision, epmDocument)
In addition, I can't create 'real' part or EPM document objects; that also requires a connection to a server.
So at this point, what I really want to test is the logic of this part of the code:
Optional<EPMDocument> wrongRevisionEPM = epmDocuments.stream()
.filter(epmDocument -> !isSameRevision(rootRevision, epmDocument))
.findAny();
wrongRevisionEPM.ifPresent(epmDocument -> addValidationMessage("blabla"));
But to test it I need to mock quite a lot of objects:
@Spy
@InjectMocks
private SameRevision sameRevision;

@Mock
private WTPartRelatedObjectDataValidator wTPartRelatedObjectDataValidator;
@Mock
private ValidationEntry validationEntry;
@Mock
private WTPart rootPart1, rootPart2;
@Mock
private EPMDocument epmDocument1, epmDocument2, epmDocument3;
@Mock
private Either<ImportError, RootFinderResult> rootPart;
@Mock
private LocalizableMessage rootPartRevisionOne, rootPartRevisionTwo;
so finally I can test the logic:
@Test
@DisplayName("Should contain error message when part -> epms revisions are not the same")
void shouldHaveErrorMessagesWhenDifferentRevisions() throws Exception {
    doReturn(getMockObjectsToValidate()).when(sameRevision).getObjectsToValidate();
    doReturn(rootPart).when(wTPartRelatedObjectDataValidator).getRootPart();
    doReturn(false).when(rootPart).isLeft();
    doReturn(true).when(rootPart).isRight();
    doReturn(rootPartRevisionOne).when(sameRevision).getRevision(rootPart1);
    doReturn(rootPartRevisionTwo).when(sameRevision).getRevision(rootPart2);
    doReturn(true).when(sameRevision).isSameRevision(rootPartRevisionOne, epmDocument1);
    doReturn(false).when(sameRevision).isSameRevision(rootPartRevisionOne, epmDocument2);
    doReturn(true).when(sameRevision).isSameRevision(rootPartRevisionTwo, epmDocument3);

    validationEntry = sameRevision.call();

    assertEquals(1, validationEntry.getValidationMessageSet().size());
}
where
doReturn(rootPart).when(wTPartRelatedObjectDataValidator).getRootPart();
doReturn(false).when(rootPart).isLeft();
doReturn(true).when(rootPart).isRight();
doReturn(rootPartRevisionOne).when(sameRevision).getRevision(rootPart1);
doReturn(rootPartRevisionTwo).when(sameRevision).getRevision(rootPart2);
can be moved to @BeforeEach.
At last I have my test and it works. It validates what I wanted to be validated, but to get to this point I had to put in a lot of effort to work through a whole API that needs interaction with a server.
What do you think: is it worth creating tests like this? I guess this is a wide-open topic, because many newbies trying to get into the 'test world' will have a similar problem, so please do not close the topic as opinion-based and give your feedback on it.
You are right, it is a big effort to mock all those dependencies. Let me go over a few points that may make things clearer:
Treat writing tests like an investment: So yes, sometimes it is more effort to write the test than to write the actual code. However, you will thank yourself later when you introduce a bug and the tests can catch it. Having good tests gives you confidence when modifying your code that you didn't break anything, and if you did, your tests will find the issue. It pays off over time.
Keep your test focused on a specific class; mock the rest: When you mock everything except the class under test, you can be sure that when a problem occurs it comes from the class under test and not from one of its dependencies. This makes troubleshooting a lot easier.
Think of testability when writing new code: Sometimes a complicated piece of code that is hard to test cannot be avoided. Generally, though, this situation can be avoided by keeping the number of dependencies to a minimum and writing testable code. For example, if a method needs 5 or 6 dependencies to do its job, that method is probably doing too much and could be broken down. The same can be said at the level of classes, modules, etc.
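For example, the revision check from the question could be extracted into a method that takes plain values plus a predicate, so the interesting logic becomes testable without any mocks (a sketch; the method name and the BiPredicate parameter are my assumptions):

// pure logic: no server connection needed, and a test can pass a simple
// lambda for isSameRevision instead of stubbing calls on a spy
static Optional<EPMDocument> findWrongRevision(LocalizableMessage rootRevision,
        Collection<EPMDocument> epmDocuments,
        BiPredicate<LocalizableMessage, EPMDocument> isSameRevision) {
    return epmDocuments.stream()
            .filter(epmDocument -> !isSameRevision.test(rootRevision, epmDocument))
            .findAny();
}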
You should mock the other dependencies that your class under test relies on, and set up the behavior you need.
This is done so that your method is tested in isolation and does not depend on third-party classes.
You can write private helper methods that contain the mock setup and reuse them in your tests.
In a @BeforeEach-annotated method you can set up the stubbing that is the same across all tests.
For your void method, you can use spy objects and verify that they were called, e.g. with Mockito.verify().
Yes, it is quite a time investment when you have to mock so many things.
In my opinion, if testing something adds value, then it is worth testing; the problem, of course, is how much time it will consume.
In your specific case, I would test on different "layers".
For example, the methods:
getDataValidator().getRootPart()
getRevision(part)
isSameRevision(rootRevision, epmDocument)
They can be tested independently, and in your case you can just mock their results, meaning you don't really care about the parameters there; you only care what happens for a certain return value.
So, on one layer you really test the functionality, and on the next layer you just mock the result you need in order to test the other functionality.
I hope that makes it clearer.
In my understanding, code testing is about checking whether results are right; like a calculator, I need to write a test case to verify that the result of 1+1 is 2.
But I have read many test cases that verify the number of times a method is called, and I'm very confused about that. The best example is one I just saw in Spring in Action:
public class BraveKnight implements Knight {
private Quest quest;
public BraveKnight(Quest quest) {
this.quest = quest;
}
public void embarkOnQuest() {
quest.embark();
}
}
public class BraveKnightTest {
@Test
public void knightShouldEmbarkOnQuest() {
Quest mockQuest = mock(Quest.class);
BraveKnight knight = new BraveKnight(mockQuest);
knight.embarkOnQuest();
verify(mockQuest, times(1)).embark();
}
}
I really have no idea why they need to verify that the embark() function is called one time. Don't you think that embark() will certainly be invoked after embarkOnQuest() is called? Or, if some error occurs, I will notice error messages in the logs showing the error line number, which helps me quickly locate the wrong code.
So what's the point of a verification like the one above?
The need is simple: to verify that the correct number of invocations was made. There are scenarios in which method calls should not happen, and others in which they should happen more or fewer times than the default.
Consider the following modified version of embarkOnQuest:
public void embarkOnQuest() {
quest.embark();
quest.embarkAgain();
}
And suppose you are testing error cases for quest.embark():
@Test
public void knightShouldEmbarkOnQuest() {
Quest mockQuest = mock(Quest.class);
Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
...
}
In this case you want to make sure that quest.embarkAgain is NOT invoked (or is invoked 0 times):
verify(mockQuest, times(0)).embarkAgain(); //or verifyZeroInteractions
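Put together, the error-case test might look like this (a sketch; never() is just an alias for times(0)):

@Test
public void knightShouldNotEmbarkAgainWhenEmbarkFails() {
    Quest mockQuest = mock(Quest.class);
    Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
    BraveKnight knight = new BraveKnight(mockQuest);
    try {
        knight.embarkOnQuest();
        fail("expected RuntimeException from embark()");
    } catch (RuntimeException expected) {
        // the failure from embark() propagates out of embarkOnQuest()
    }
    verify(mockQuest, never()).embarkAgain(); // must not run after the failure
}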
Of course this is just one simple example. Many others could be added:
A database connector that should cache entries on first fetch: one can make multiple calls and verify that the connection to the database was used just once (per test query); a sketch follows below.
A singleton object that does initialization on load (or lazily): one can test that initialization-related calls are made just once.
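A sketch of the caching case, with hypothetical CachingConnector and Database types that are not from the original post:

@Test
public void shouldQueryDatabaseOnlyOnceForRepeatedFetches() {
    Database db = mock(Database.class); // hypothetical dependency
    when(db.load("key")).thenReturn("value");
    CachingConnector connector = new CachingConnector(db); // hypothetical cache-on-first-fetch connector

    connector.fetch("key");
    connector.fetch("key"); // second call should be served from the cache

    verify(db, times(1)).load("key"); // the database was hit just once
}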
Consider the following code:
public void saveFooIfFlagTrue(Foo foo, boolean flag) {
if (flag) {
fooRepository.save(foo);
}
}
If you don't check the number of times that fooRepository.save() is invoked, then how can you know whether this method is doing what you want it to?
This applies to other void methods. If there is no return to a method, and therefore no response to validate, checking which other methods are called is a good way of validating that the method is behaving correctly.
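A sketch of that check, assuming a hypothetical FooService class that owns the fooRepository dependency:

@Test
public void shouldSaveOnlyWhenFlagIsTrue() {
    FooRepository fooRepository = mock(FooRepository.class);
    FooService service = new FooService(fooRepository); // hypothetical wrapper
    Foo foo = new Foo();

    service.saveFooIfFlagTrue(foo, true);
    verify(fooRepository, times(1)).save(foo); // saved when the flag is true

    service.saveFooIfFlagTrue(foo, false);
    verifyNoMoreInteractions(fooRepository); // and no second save for the false case
}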
Good question. You raise a good point that mocking can be overly circuitous when you can just check the results. However, there are contexts where this does lead to more robust tests.
For example, if a method needs to make a call to an external API, there are several problems with simply testing the result:
Network I/O is slow. If you have many checks like this, they will slow down your test suite.
Any round trip like this relies on the code making the request, the API itself, and the code interpreting the API's response all working correctly. That is a lot of failure points for a single test.
If something stupid happens and you accidentally make multiple requests, this could cause performance issues for your program.
To address your sub-questions:
Don't you think that embark() will certainly be invoked after embarkOnQuest() is called?
Tests also have value in letting you refactor without worrying about breaking things. This is obvious now, yes. Will it be obvious in 6 months?
I really have no idea why they need to verify that the embark() function is called one time
Verifying that an invocation happened on a mock a specific number of times is simply the standard way Mockito.verify() works.
In fact this:
verify(mockQuest, times(1)).embark();
is just a verbose way to write:
verify(mockQuest).embark();
In general, verifying a single call on the mock is what you need.
In some uncommon scenarios you may want to verify that a method was invoked a specific number of times (more than once), but you should avoid such overly specific verifications; in fact, you should verify as little as possible.
If you find you need to verify both that a method was called and exactly how many times, it generally means one of two things: the mocked dependency is too tightly coupled to the class under test, and/or the method under test performs too many unitary tasks that produce only side effects.
Such a test is neither readily readable nor maintainable; it is as if you had coded the mock's whole flow into the verifications.
As a consequence, it also makes the tests more brittle, because they check invocation details rather than the overall logic and state.
In most cases a refactoring is the remedy and removes the need to specify a number of invocations.
I'm not saying it is never required, but reserve it for when it happens to be the only decent choice for the class under test.
I'm trying to automate the test cases of an app containing about 5 big sections.
I have a lot of test cases, more than 100 in each section.
What is the best way to divide the test cases in order to automate them?
Should I create 5 separate classes and put all of a section's tests in each one?
For now I'm writing my test cases using dependencies, like in the following example:
@Test(dependsOnMethods = { "method1" })
public void method2() {
    System.out.println("This is method 2");
}
But my problem is: if there is no dependency between methods, how should I proceed in order to make all test cases execute automatically?
As I answered in your other question, you can always use the Page Object pattern to make your tests easier to read and, most importantly, easier to fix when needed. Then you can change the variables in one place, not everywhere.
Regarding your question about using annotations across different classes and the methods inside them, please check the example below:
@Test
public void checkSampleScreen() throws InterruptedException{
SampleScreen ss = new SampleScreen(driver);
ss.launchStartScreen();
}
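And a minimal page-object sketch behind that test (assuming a Selenium/Appium-style WebDriver; the class internals and URL are placeholders, not from the original post):

public class SampleScreen {
    private final WebDriver driver;

    public SampleScreen(WebDriver driver) {
        this.driver = driver;
    }

    public void launchStartScreen() {
        // navigation and locators live here, in one place,
        // instead of being repeated in every test
        driver.get("https://example.com/start"); // placeholder URL
    }
}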
I am trying to test the following method by mocking its dependencies:
public void add(Question question) {
String username = authenticationManager.getUsername();
Candidate candidate = userService.getByUsername(username);
if (!authenticationManager.hasPermission("ROLE_ADMIN")) {
question.setStatus(QuestionStatus.WAITING);
}
question.setCandidate(candidate);
questionRepository.add(question);
}
This is my attempt:
@Test
public void add_savesQuestionWithStatusWaiting_whenSubmittedAsUser() {
Candidate candidate = new Candidate();
Question question = mock(Question.class);
when(authenticationManager.getUsername()).thenReturn("andreas");
when(userService.getByUsername("andreas")).thenReturn(candidate);
when(authenticationManager.hasPermission("ROLE_ADMIN")).thenReturn(true);
questionService.add(question);
verify(question, times(0)).setStatus(any(QuestionStatus.class));
}
What I am trying to do is test the application logic: when the user does not have ROLE_ADMIN, the question status will be set to waiting. Am I doing the mocking right?
In unit testing you mock every dependency that is not part of the tested unit.
In your case your unit is the question service, and you are testing whether all the expected interactions with the other objects occur, not on the real objects but on their mocked versions. That is a perfectly fine and natural approach.
So in terms of how you use Mockito you are doing pretty well. What is not OK is that questionService.add is doing way too much. 'add' suggests that it puts an object into a container and nothing else; instead it is also doing complex setup of the question object. In other words, it has side effects. The result is that the number of different boundary conditions you have to test is large, and that will make your tests hard to maintain in the future. Look how many mocks you had to create.
If you come back to your test after some time and try to figure out what it is doing, will that be simple?
I also think the test name does not reflect what is actually tested. To me, 'add_savesQuestionWithStatusWaiting_whenSubmittedAsUser' implies that I should expect the question to be saved with its status set to 'waiting' at the end; instead you use verify to check that there was no call to setStatus().
I would try to refactor the add method so that all it does is insert the element into the questionRepository. Then I would test different boundary conditions for questionService (for example, how it behaves when null is provided). I would also move the setup of the question to a different layer of your application and have that tested as a different unit.
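One possible shape for that refactoring (a sketch; the split and the method name are mine):

// preparation becomes its own unit, testable separately
public Question prepareForSubmission(Question question) {
    Candidate candidate = userService.getByUsername(authenticationManager.getUsername());
    if (!authenticationManager.hasPermission("ROLE_ADMIN")) {
        question.setStatus(QuestionStatus.WAITING);
    }
    question.setCandidate(candidate);
    return question;
}

// add now does nothing but insertion
public void add(Question question) {
    questionRepository.add(question);
}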
I have a parametrized test class with a bunch of unit tests that generally control the creation of custom email messages. Right now the class has a lot of tests that depend on the factor(s) used in the parametrized class; the flow of the tests is the same for every test. An example of a test:
@Test
public void testRecipientsCount() {
assertEquals(3, recipientsCount);
}
I had to add extra functionality to my email class that adds some extra internal emails to the list of recipients. That only happens for some of the cases, which leads to my problem.
Let's say I want to assert the number of messages created. For the old tests it was the same for each case, but now it differs depending on the case. The most intuitive way for me was to add if statements:
@Test
public void testRecipientsCount(){
if(something) {
assertEquals(3, recipientsCount);
}
else {
assertEquals(4, recipientsCount);
}
}
...
but my more experienced co-worker says we should avoid ifs in test classes (and I kinda agree on that).
I thought that splitting the test into two test classes might work, but that would lead to redundant code in both classes (I still have to check whether the non-internal messages were created, their size, content, etc.), plus a few extra lines for one of them.
My question is: how do I do this without ifs or loads of redundant code (not using a parametrized class would produce even more redundant code)?
In my opinion a JUnit test should read like a protocol.
That means you can write redundant code to make the test case more readable.
Write a test case for each branch of each if statement in your business logic, as well as for the negative cases. That's the only way to get 100% test coverage.
I use the structure:
- testdata preparation
- executing logic
- check results
- clear data
Furthermore, you should put complex asserts on big objects into their own abstract classes:
abstract class YourBusinessObjectAssert {
    // passes when pYourBusinessObject is technically equal to any element
    // of pAllYourBusinessObject; fails otherwise
    public static void assertYourBusinessObjectIsValid(YourBusinessObject pYourBusinessObject,
            Collection<YourBusinessObject> pAllYourBusinessObject) {
        for (YourBusinessObject lYourBusinessObject : pAllYourBusinessObject) {
            if (lYourBusinessObject.isTechnicalEqual(pYourBusinessObject)) {
                return;
            }
        }
        fail("Could not find requested YourBusinessObject in List<YourBusinessObject>!");
    }
}
This reduces the complexity of your code and makes the assert available to other developers.
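A test would then use the shared assert like this (usage sketch; createdObject and repository are hypothetical):

@Test
public void createdObjectShouldBeFoundInRepository() {
    // testdata preparation and executing logic omitted
    YourBusinessObjectAssert.assertYourBusinessObjectIsValid(createdObject, repository.findAll());
}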
A unit test should, in my opinion, test only one thing if possible. As such I'd say that if you need an if statement then you probably need more than one unit test - one for each block in the if/else code.
If possible I'd say a test should read like a story - my preferred layout (and it's not my idea :-) - it's fairly widely used) is:
- given: do setup etc
- when: the place you actually execute/call the thing under test
- expect: verify the result
Another advantage of a unit test testing only one thing is that when a failure occurs it's unambiguous what the cause was - if you have a long test with many possible outcomes, it becomes much harder to reason about why the test failed.
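Applied to the recipients-count example from the question, that gives two focused tests instead of one test with an if (a sketch; each parametrized case knows which of the two it is):

@Test
public void recipientsCountWithoutInternalRecipients() {
    // given: a case where no extra internal emails are added
    assertEquals(3, recipientsCount);
}

@Test
public void recipientsCountWithInternalRecipients() {
    // given: a case where the extra internal emails are added
    assertEquals(4, recipientsCount);
}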
I'm not sure it's possible to cleanly do what you're after in a parametrized test. If you need different test-case behavior based on the parameter for some features, you might be better off testing those features separately - in different test classes that are not parametrized.
If you really do want to keep everything in the parametrized test classes, I would be inclined to make a helper function so that your example test at least reads as a simple assertion:
@Test
public void testRecipientsCount() {
    assertEquals(expectedCount(something), recipientsCount);
}

// maps the parametrized condition to the expected recipient count
private int expectedCount(boolean something) {
    if (something) {
        return 3;
    } else {
        return 4;
    }
}
Why not have a private method that tests the things that are common for each method? Something like (but probably with some input parameter for the testCommonStuff() method):
@Test
public void testRecipientsCountA() {
    testCommonStuff();
    // Assert stuff for test A
}

@Test
public void testRecipientsCountB() {
    testCommonStuff();
    // Assert stuff for test B
}

private void testCommonStuff() {
    // Assert common stuff
}
This way you don't get redundant code, and you can split your test into smaller tests. You also make your tests less error-prone, if they should actually test the same things. You will still know which test failed, so traceability should be no problem.