I just happened to implement a method void followLink(obj page, obj link) which simply adds page and link to a queue. I have tried, unsuccessfully, to test this kind of method.
All I want is to test that the queue contains the page and link passed to followLink. My test class already extends TestCase. So what is the best way to test such a method?
The JUnit FAQ has a section on testing methods that return void. In your case, you want to test a side effect of the method called.
The example given in the FAQ tests that the size of a Collection changes after an item is added.
@Test
public void testCollectionAdd() {
    Collection collection = new ArrayList();
    assertEquals(0, collection.size());
    collection.add("itemA");
    assertEquals(1, collection.size());
    collection.add("itemB");
    assertEquals(2, collection.size());
}
You could test the size of the queue before and after calling your method, something like:
int size = queue.length();
followLink(page, link);
assertEquals(size+1, queue.length()); // or maybe size+2?
Another thing you might do is start off with an empty queue, call followLink, then dequeue the first item and test its values.
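A minimal sketch of that approach, in JUnit 3 style since your class already extends TestCase. The Crawler, Page and Link names and the package-visible getQueue() accessor are assumptions for illustration, not taken from your code:

public void testFollowLinkAddsPageAndLinkToQueue() {
    Crawler crawler = new Crawler();            // hypothetical class under test
    Page page = new Page("http://example.com"); // hypothetical page object
    Link link = new Link("/about");             // hypothetical link object

    crawler.followLink(page, link);

    Queue<Object> queue = crawler.getQueue();   // assumed package-visible accessor
    assertEquals(2, queue.size());
    assertEquals(page, queue.poll());           // first enqueued: the page
    assertEquals(link, queue.poll());           // then the link
}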
Most likely, your queue is private, so checking the size isn't going to work. The "design for testing" solution I've seen is to make use of package private methods and members instead. Since your junit tests are likely to be in the same package, they'll have access.
That by itself lets the "queue.length()" side effect test work.
But I'd go further: really, you should consider checking that your method has inserted the correct page and link to your queue. The details for that require more knowledge about how you're representing (and combining) page and link.
The jMock solution is also very good, though in truth I'm much more familiar with writing my own test harnesses.
So, after calling the method, check the queue to see whether the values passed to the method were added. You need access to the queue (something like getQueue()) for this.
Use the jMock library and mock a storage object that holds your queue. jMock lets you assert whether a mocked method was called, and with what parameters.
There are two ways to check this:
Verify using the size of the Queue/Collection, as in the test below (as pointed out by Patrick):
@Test
public void testFollowLinkAdd() {
    int size = queue.length();
    queue.add(pageLink); // considering an object of PageLink
    Assert.assertEquals(size + 1, queue.length());
}
OR
Make use of mock objects. Frameworks like Mockito can be helpful here.
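For the mock-object route, a hedged Mockito sketch, assuming the usual Mockito static imports; the QueueStorage collaborator and the Crawler constructor are assumptions made purely for illustration:

@Test
public void followLinkShouldHandPageAndLinkToStorage() {
    QueueStorage storage = mock(QueueStorage.class); // hypothetical collaborator holding the queue
    Crawler crawler = new Crawler(storage);          // hypothetical constructor injection

    crawler.followLink(page, link);

    verify(storage).add(page, link); // fails unless add() was called with exactly these arguments
}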
I have a class that I am trying to write a few unit tests for, specifically a void method within the class. I'm not entirely sure whether the fact that the method is void is of any consequence here, but this is the basic structure of the class.
public class MyClass {
    private Map<String, Object> myMap;

    public void process(Bucket bucket) {
        // parses bucket, updates myMap
    }

    public int getAnswer(String input) {
        // searches map data
    }
}
The basic usage of the class is that the process method is called for each bucket to ingest its data, and the getAnswer method returns a summation of sorts about the data that has been processed from the buckets.
I've written a few unit tests for the getAnswer method, but since the process method doesn't return anything and just updates the internal state of my object, is there a way to properly unit test it? Since it's a public method, I figured it needed to be unit tested, but I haven't figured out how to do so independently of the getAnswer method.
We would need more code to answer you in detail, but in short the answer is yes: you can and should definitely test it.
process is going to have some side effect somewhere, right? Either you inject Mockito mocks of the classes/interfaces those side effects act upon, or, if you have an in-memory version of them, you inject that instead.
In the case of mocks, you'll want to carefully assert that the side-effect methods were called exactly with the right parameters. If you're using an in-memory simulation, you check the state of your in-memory object and check it conforms to what you expected.
For unit testing of the process method, I think you can do the following:
Verify that the input bucket can be parsed, based on your method comments.
Test the exception handling, if applicable, when a bucket cannot be parsed.
For a map update/add, you can always check whether the initial size changed after a valid bucket was successfully added to the map. If the bucket key is already present, you can also verify that the size of the map stays the same.
All the above points are based on the information provided in the original question, and they are independent of the getAnswer method.
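A minimal sketch of that idea, assuming a hypothetical Bucket carrying a key and a count, and assuming getAnswer(key) reflects what process() stored for that key:

@Test
public void processShouldMakeBucketDataVisibleThroughGetAnswer() {
    MyClass myClass = new MyClass();
    Bucket bucket = new Bucket("apples", 3); // hypothetical constructor

    myClass.process(bucket);

    assertEquals(3, myClass.getAnswer("apples")); // the only public window into myMap
}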
In my understanding, code testing is about testing whether results are right. Like a calculator: I need to write a test case to verify that the result of 1 + 1 is 2.
But I have read many test cases about verifying the number of times a method is called. I'm very confused about that. The best example is what I just saw in Spring in Action:
public class BraveKnight implements Knight {
    private Quest quest;

    public BraveKnight(Quest quest) {
        this.quest = quest;
    }

    public void embarkOnQuest() {
        quest.embark();
    }
}

public class BraveKnightTest {
    @Test
    public void knightShouldEmbarkOnQuest() {
        Quest mockQuest = mock(Quest.class);
        BraveKnight knight = new BraveKnight(mockQuest);
        knight.embarkOnQuest();
        verify(mockQuest, times(1)).embark();
    }
}
I really have no idea why they need to verify that the embark() function is called one time. Don't you think that embark() will certainly be invoked after embarkOnQuest() is called? Or, if some error occurs, I will notice error messages in the logs, which show the line number and help me quickly locate the wrong code.
So what's the point of verifying like above?
The need is simple: to verify that the correct number of invocations was made. There are scenarios in which method calls should not happen at all, and others in which they should happen more or less often than the default.
Consider the following modified version of embarkOnQuest:
public void embarkOnQuest() {
    quest.embark();
    quest.embarkAgain();
}
And suppose you are testing error cases for quest.embark():
@Test
public void knightShouldEmbarkOnQuest() {
    Quest mockQuest = mock(Quest.class);
    Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
    ...
}
In this case you want to make sure that quest.embarkAgain is NOT invoked (or is invoked 0 times):
verify(mockQuest, times(0)).embarkAgain(); //or verifyZeroInteractions
Of course, this is just one simple example. There are many others that could be added:
A database connector that should cache entries on the first fetch: one can make multiple calls and verify that the connection to the database was used just once (per test query).
A singleton object that does initialization on load (or lazily): one can test that initialization-related calls are made just once.
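For the database-connector case, a hedged Mockito sketch, assuming the usual Mockito static imports; CachingConnector and Connection are made-up names, not a real API:

@Test
public void shouldHitTheDatabaseOnlyOnceForRepeatedQueries() {
    Connection connection = mock(Connection.class);                // hypothetical low-level dependency
    when(connection.fetch("query")).thenReturn("result");
    CachingConnector connector = new CachingConnector(connection); // hypothetical class under test

    connector.get("query");
    connector.get("query"); // the second call should be served from the cache

    verify(connection, times(1)).fetch("query"); // the database was contacted exactly once
}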
Consider the following code:
public void saveFooIfFlagTrue(Foo foo, boolean flag) {
    if (flag) {
        fooRepository.save(foo);
    }
}
If you don't check the number of times that fooRepository.save() is invoked, then how can you know whether this method is doing what you want it to?
This applies to other void methods. If a method has no return value, and therefore no response to validate, checking which other methods are called is a good way of validating that the method is behaving correctly.
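A minimal sketch of that check, assuming saveFooIfFlagTrue lives in some FooService into which fooRepository is injected (those names are assumptions, not from the original code):

@Test
public void shouldSaveOnlyWhenFlagIsTrue() {
    FooRepository fooRepository = mock(FooRepository.class); // mocked dependency
    FooService service = new FooService(fooRepository);      // hypothetical owner of the method
    Foo foo = new Foo();

    service.saveFooIfFlagTrue(foo, false);
    verify(fooRepository, never()).save(foo); // flag is false, so no save expected

    service.saveFooIfFlagTrue(foo, true);
    verify(fooRepository, times(1)).save(foo); // flag is true, so exactly one save expected
}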
Good question. You raise a good point that mocking can be overly circuitous when you can just check the results. However, there are contexts where this does lead to more robust tests.
For example, if a method needs to make a call to an external API, there are several problems with simply testing the result:
Network I/O is slow. If you have many checks like this, it will slow down your test case
Any round-trip like this would have to rely on the code making the request, the API, and the code interpreting the API's response all to work correctly. This is a lot of failure points for a single test.
If something stupid happens and you accidentally make multiple requests, this could cause performance issues with your program.
To address your sub-questions:
Don't you think that embark() will certainly be invoked after embarkOnQuest() called?
Tests also have value in letting you refactor without worrying about breaking things. This is obvious now, yes. Will it be obvious in 6 months?
I really have no idea about why they need to verify the embark()
function is called one time
Verifying that an invocation on a mock happened a specific number of times is simply the standard way Mockito.verify() works.
In fact this:
verify(mockQuest, times(1)).embark();
is just a verbose way of writing:
verify(mockQuest).embark();
In general, verifying a single call on the mock is what you need.
In some uncommon scenarios you may want to verify that a method was invoked a specific number of times (more than one).
But you should avoid such overly specific verifications.
In fact, you should use verification as little as possible.
If you find yourself needing to verify the number of invocations on the mock, it generally means one of two things: the mocked dependency is too tightly coupled to the class under test, or the method under test performs too many unitary tasks that produce only side effects.
The test is then not necessarily readable or maintainable; it is as if you had coded the mock's flow into the verification calls.
As a consequence, it also makes the tests more brittle, because they check invocation details rather than the overall logic and state.
In most cases, refactoring is the remedy and removes the need to specify a number of invocations.
I am not saying it is never required, but use it only when it happens to be the only decent choice for the class under test.
I'm test-driving some code for practice and spotted a strange situation.
There is a ChannelRegistry that contains references to all communication channels, and a PrimaryConsumer that needs to attach itself to one of those channels, chosen at runtime when initialize() is called.
So I've done my first test as follows:
@RunWith(MockitoJUnitRunner.class)
public class PrimaryConsumerTest {

    private @Mock ChannelsRegistry communicationRegistry;
    private PrimaryConsumer consumer;

    @Before
    public void setup() {
        consumer = new PrimaryConsumer(communicationRegistry);
    }

    @Test
    public void shouldAttachToChannel() throws Exception {
        consumer.initialize();
        verify(communicationRegistry).attachToChannel("channel", consumer);
    }
}
I'm checking that the attaching method is called. To get it green I wrote an implementation like this:
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
Now the next test: get the channel id by name and attach to that specific channel. I want my test to describe the class's behavior rather than its internals, so I don't want my test to be "shouldGetSpecificChannel". Instead I check whether it can attach to a channel selected at runtime:
@Test
public void shouldAttachToSpecificChannel() throws Exception {
    String channelName = "channel";
    when(communicationRegistry.getChannel("channel_name")).thenReturn(channelName);
    consumer.initialize();
    verify(communicationRegistry).attachToChannel(channelName, consumer);
}
This test passes immediately, but the implementation is broken ("channel" is hardcoded).
Two questions here:
Is it OK to have two tests for such behavior? Maybe I should stub getting the channel immediately in the first test? If so, how does that map to testing a single thing in a single test?
How do I cope with this situation: tests green, implementation "hardcoded"? Should I write another test with a different channel name? If so, should I remove it after correcting the implementation (as it becomes useless)?
UPDATE:
Just some clarifications.
I've hardcoded "channel" here
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
just to make the first test pass quickly. But then, when I run the second test, it passes immediately. I don't verify whether the stubbed method was called, as I think stubs should not be verified explicitly.
Is this what you, Rodney, mean by saying the tests are redundant? If yes, should I set up the stub at the very beginning, in the first test?
More tests are usually preferable to too few, so two tests is fine. A better question is whether the two tests are redundant: is there any situation or combination of inputs that would make one of the tests fail, but not the other? Then the two tests are both needed. If they always fail or succeed together, then you probably need only one of them.
When would you need a different value for channelName? It sounds like this is a configuration setting that is irrelevant to these particular tests. That's fine, perhaps you would test that configuration at a higher level, in your integration tests. A bigger concern I would have is why it's hard-coded in the first place: it should be injected into your class (probably via the constructor). Then you can test different channel names -- or not. Either way, you don't want to be changing your code just for testing if it means changing it back when you're done.
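A minimal sketch of that injection idea; the exact constructor shape is an assumption, not something from the original code:

public class PrimaryConsumer {

    private final ChannelsRegistry communicationRegistry;
    private final String channelName; // injected instead of hardcoded

    public PrimaryConsumer(ChannelsRegistry communicationRegistry, String channelName) {
        this.communicationRegistry = communicationRegistry;
        this.channelName = channelName;
    }

    public void initialize() {
        communicationRegistry.attachToChannel(
                communicationRegistry.getChannel(channelName), this);
    }
}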
Basically ditto Rodney for the question of multiple tests. I would suggest, based on your update, one or two things.
First off, you have used the same data for both tests. In the Kent Beck book on TDD he mentions the use of "Triangulation". If you used different reference data in the second case then your code would not have passed without any additional work on your part.
On the other hand, he also mentions removing all duplication, and duplication includes duplication between the code and the tests. In this scenario you could have left both of your tests as is, and refactored out the duplication between the string "channel" in the code and the same in the test by replacing the literal in your class under test with the call to your communicationRegistry.getChannel(). After this refactoring you now have the string literal in one and only one place: The test.
Two different approaches, same result. Which one you use comes down to personal preference. In this scenario I would have taken the second approach, but that's just me.
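Concretely, the second approach would leave initialize() looking something like this (a sketch; the "channel_name" key is the one from the test in the question):

public void initialize() {
    String channel = communicationRegistry.getChannel("channel_name");
    communicationRegistry.attachToChannel(channel, this);
}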
Reminder to check out Rodney's answer to the question of multiple tests or not. I'm guessing you could delete the first one.
Thanks!
Brandon
For my assignment, I have to develop several jUnit tests for the method:
addAll(int index, Collection c)
This method is part of the class ArrayList - built in to java.
I figured out how to create the tests and run them in the Eclipse IDE, but I'm a little confused as to exactly how I'm supposed to develop the tests. Could I get an example that includes what I should have in a @Before, @BeforeClass, @After, and @Test method?
To make this clear, I understand the format of the methods...I just don't understand HOW to test this method.
http://download.oracle.com/javase/1.4.2/docs/api/java/util/ArrayList.html#addAll(java.util.Collection)
So, what you need to test is that the method inserts all the elements in the passed Collection and that it inserts them into the appropriate position in the ArrayList.
Think of all the possible scenarios and test each one:
start with empty arraylist
pass empty collection
use index of 0
use index > 0
etc, etc...
After the add, verify / assert that the ArrayList now has all the elements that it should have in the correct order / locations.
Also test the error cases with @Test(expected=...).
index < 0
index > arrayList.size()
null collection
come up with other ideas
Also... assertThat in combination with Hamcrest's IsIterableContainingInOrder would be helpful for verification.
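A hedged sketch of what a couple of these cases might look like, assuming the usual static imports from org.junit.Assert, org.hamcrest.Matchers and java.util.Arrays; the element values are made up for illustration:

@Test
public void addAllInsertsElementsAtTheGivenIndex() {
    ArrayList<String> list = new ArrayList<String>(Arrays.asList("a", "d"));

    boolean changed = list.addAll(1, Arrays.asList("b", "c"));

    assertTrue(changed);                            // the list was modified
    assertThat(list, contains("a", "b", "c", "d")); // Hamcrest's in-order matcher
}

@Test(expected = IndexOutOfBoundsException.class)
public void addAllRejectsAnIndexBeyondTheListSize() {
    new ArrayList<String>().addAll(1, Arrays.asList("x"));
}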
You will need to think of the behaviour you want to test, and then create a number of methods annotated with #Test to test aspects of that behaviour. Think about how should addAll behave. For example, if a collection passed to it, and an index of 0, then it should add all those elements in index 0 of ArrayList, followed by previously existing objects. So, in your test method, create an ArrayList, stick some objects in it, create a collection (c), stick some objects in it, call addAll on your arrayList with index 0 and collection (c), and assert that it has done what it was supposed to do...
(That's just an example; I am not sure what the exact expected behaviour of addAll is in your case.)
Even better take a look at the wikipedia article Vakimshaar posted :)
Stick with @Test and @Before methods; the rest is probably not what you need.
Methods annotated with @Before get called every time before a method annotated with @Test is executed. You can use them to initialize things to a clean state that might be shared among multiple tests.
A starting point could be the following code (for more cases to test, take a look at John's answer); implement the test bodies yourself.
public class ArrayListTest {

    public ArrayList<String> list;

    @Before
    public void before() {
        // This is the place where everything should be done to ensure a clean and
        // consistent state of the things to test
        list = new ArrayList<String>();
    }

    @Test
    public void testInsertIntoEmptyList() {
        // do an insert at index 0 and verify that the size of the list is 1
    }

    @Test
    public void testInsertIntoListWithMultipleItems() {
        list.addAll(Arrays.asList("first", "second"));
        list.addAll(1, Arrays.asList("afterFirst", "beforeSecond"));
        // Verify that the list now contains the following elements:
        // "first", "afterFirst", "beforeSecond", "second"
        List<String> theCompleteList = Arrays.asList("first", "afterFirst", "beforeSecond", "second");
        // Use Assert.assertEquals to ensure that list and theCompleteList are indeed the same
    }
}
Use the @Test annotation to mark a test method; it should return void and should not accept arguments.
For most usages this is enough. But if you want to run specific code before or after each test, implement appropriate methods and mark them with the @Before and @After annotations.
If you need code that runs once before or after the whole test case (the class where you implement several tests), annotate appropriate methods with @BeforeClass and @AfterClass. Note that these methods must be static, so the "before class" method is executed even before the default constructor of your test case; you can use this fact if you wish.
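A minimal skeleton illustrating that lifecycle (the method names are arbitrary; only the annotations matter):

public class LifecycleExampleTest {

    @BeforeClass
    public static void beforeAllTests() {
        // runs once, before any test and before the test class is instantiated
    }

    @Before
    public void beforeEachTest() {
        // runs before every @Test method
    }

    @Test
    public void someTest() {
        // the actual test
    }

    @After
    public void afterEachTest() {
        // runs after every @Test method
    }

    @AfterClass
    public static void afterAllTests() {
        // runs once, after all tests have finished
    }
}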
Functions (side-effect free ones) are such a fundamental building block, but I don't know of a satisfying way of testing them in Java.
I'm looking for pointers to tricks that make testing them easier. Here's an example of what I want:
public void setUp() {
    myObj = new MyObject(...);
}

// This is sooo 2009 and not what I want to write:
public void testThatSomeInputGivesExpectedOutput() {
    assertEquals(expectedOutput, myObj.myFunction(someInput));
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
    // I don't want to repeat/write the following checks to see
    // that myFunction is behaving functionally.
    assertEquals(expectedOutput, myObj.myFunction(someInput));
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}

// The following two tests are more in the spirit of what I'd like
// to write, but they don't test that myFunction is functional:
public void testThatSomeInputGivesExpectedOutput() {
    assertEquals(expectedOutput, myObj.myFunction(someInput));
}

public void testThatSomeOtherInputGivesExpectedOutput() {
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}
I'm looking for some annotation I can put on the test(s), MyObject or myFunction to make the test framework automatically repeat invocations to myFunction in all possible permutations for the given input/output combinations I've given, or some subset of the possible permutations in order to prove that the function is functional.
For example, above the (only) two possible permutations are:
myObj = new MyObject();
myObj.myFunction(someInput);
myObj.myFunction(someOtherInput);
and:
myObj = new MyObject();
myObj.myFunction(someOtherInput);
myObj.myFunction(someInput);
I should be able to only provide the input/output pairs (someInput, expectedOutput), and (someOtherInput, someOtherOutput), and the framework should do the rest.
I haven't used QuickCheck, but it seems like a non-solution. It is documented as a generator. I'm not looking for a way to generate inputs to my function, but rather a framework that lets me declaratively specify what part of my object is side-effect free and invoke my input/output specification using some permutation based on that declaration.
Update: I'm not looking to verify that nothing changes in the object, a memoizing function is a typical use-case for this kind of testing, and a memoizer actually changes its internal state. However, the output given some input always stays the same.
If you are trying to test that the functions are side-effect free, then calling them with random arguments isn't really going to cut it. The same applies for a random sequence of calls with known arguments, or a pseudo-random one with random or fixed seeds. There's a good chance that a (harmful) side effect will only occur with a sequence of calls that your randomizer doesn't happen to select.
There is also a chance that the side effects won't actually be visible in the outputs of any of the calls that you are making, no matter what the inputs are. The side effects could be on some other related objects that you didn't think to examine.
If you want to test this kind of thing, you really need to implement a "white-box" test where you look at the code and try and figure out what might cause (unwanted) side-effects and create test cases based on that knowledge. But I think that a better approach is careful manual code inspection, or using an automated static code analyser ... if you can find one that would do the job for you.
OTOH, if you already know that the functions are side-effect free, implementing randomized tests "just in case" is a bit of a waste of time, IMO.
I'm not quite sure I understand what you are asking, but it seems like Junit Theories (http://junit.sourceforge.net/doc/ReleaseNotes4.4.html#theories) could be an answer.
In this example, you could create a Map of key/value pairs (input/output) and call the method under test several times with values picked from the map. This will not prove that the method is functional, but it will increase the probability, which might be sufficient.
Here's a quick example of such an additional probably-functional test:
@Test
public void probablyFunctionalTestForMethodX() {
    Map<Object, Object> inputOutputMap = initMap(); // this loads the input/output values
    for (int i = 0; i < maxIterations; i++) {
        Map.Entry test = pickAtRandom(inputOutputMap); // this picks a map entry randomly
        assertEquals(test.getValue(), myObj.myFunction(test.getKey()));
    }
}
Problems with a higher complexity could be solved based on the Command pattern: You could wrap the test methods in command objects, add the command object to a list, shuffle the list and execute the commands (= the embedded tests) according to that list.
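A rough sketch of that command idea, reusing the input/output pairs from the question and plain Runnable as the command interface (an illustration, not a full framework):

// wrap each check in a command object
List<Runnable> checks = new ArrayList<Runnable>();
checks.add(new Runnable() {
    public void run() { assertEquals(expectedOutput, myObj.myFunction(someInput)); }
});
checks.add(new Runnable() {
    public void run() { assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput)); }
});

// shuffle the list and execute the embedded tests in that random order
Collections.shuffle(checks);
for (Runnable check : checks) {
    check.run();
}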
It sounds like you're attempting to test that invoking a particular method on a class doesn't modify any of its fields. This is a somewhat odd test case, but it's entirely possible to write a clear test for it. For other "side effects", like invoking other external methods, it's a bit harder. You could replace local references with test stubs and verify that they weren't invoked, but you still won't catch static method calls this way. Still, it's trivial to verify by inspection that you're not doing anything like that in your code, and sometimes that has to be good enough.
Here's one way to test that there are no side effects in a call:
public void test_MyFunction_hasNoSideEffects() {
    MyClass systemUnderTest = makeMyClass();
    MyClass copyOfOriginalState = systemUnderTest.clone();
    systemUnderTest.myFunction();
    assertEquals(systemUnderTest, copyOfOriginalState); // test equals() method elsewhere
}
It's somewhat unusual to try to prove that a method is truly side effect free. Unit tests generally attempt to prove that a method behaves correctly and according to contract, but they're not meant to replace examining the code. It's generally a pretty easy exercise to check whether a method has any possible side effects. If your method never sets a field's value and never calls any non-functional methods, then it's functional.
Testing this at runtime is tricky. What might be more useful would be some sort of static analysis. Perhaps you could create a #Functional annotation, then write a program that would examine the classes of your program for such methods and check that they only invoke other #Functional methods and never assign to fields.
Randomly googling around, I found somebody's master's thesis on exactly this topic. Perhaps he has working code available.
Still, I will repeat that it is my advice that you focus your attention elsewhere. While you CAN mostly prove that a method has no side effects at all, it may be better in many cases to quickly verify this by visual inspection and focus the remainder of your time on other, more basic tests.
Have a look at http://fitnesse.org/: it is often used for acceptance tests, but I found it is an easy way to run the same tests against a huge amount of data.
In JUnit you can write your own test runner. This code is not tested (I'm not sure whether methods that take arguments will be recognized as test methods; maybe some more runner setup is needed?):
public class MyRunner extends BlockJUnit4ClassRunner {
    @Override
    protected Statement methodInvoker(final FrameworkMethod method, final Object test) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Iterable<Object[]> permutations = getPermutations();
                for (Object[] permutation : permutations) {
                    method.invokeExplosively(test, permutation[0], permutation[1]);
                }
            }
        };
    }
}
It should be only a matter of providing a getPermutations() implementation. For example, it could take data from some List<Object[]> field annotated with a custom annotation and produce all the permutations.
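A hedged sketch of what getPermutations() could return; for brevity it just hardcodes the two input/output pairs from the question and shuffles them (a real version would read them from the annotated field via reflection):

private Iterable<Object[]> getPermutations() {
    List<Object[]> pairs = new ArrayList<Object[]>();
    pairs.add(new Object[] { someInput, expectedOutput });           // pair 1
    pairs.add(new Object[] { someOtherInput, expectedOtherOutput }); // pair 2
    Collections.shuffle(pairs); // one random ordering per test run
    return pairs;
}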
I think the term you're missing is "parameterized tests". However, it seems to be more tedious in JUnit than in the .NET flavor. In NUnit, the following test executes 6 times, with all combinations:
[Test]
public void MyTest(
    [Values(1, 2, 3)] int x,
    [Values("A", "B")] string s)
{
    ...
}
For Java, your options seem to be:
JUnit 4 supports this with its Parameterized runner (see the sketch after this list). However, it's a lot of code (it seems JUnit is adamant about test methods not taking parameters). This is the least invasive option.
DDSteps, a JUnit plugin. See this video that takes values from an appropriately named Excel spreadsheet. You also need to write a mapper/fixture class that maps values from the spreadsheet onto members of the fixture class, which are then used to invoke the SUT.
Finally, you have Fit/FitNesse. It's as good as DDSteps, except that the input data is in HTML/wiki form. You can paste from an Excel sheet into FitNesse and it formats it correctly at the push of a button. You need to write a fixture class here too.
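For the first option, a minimal sketch using JUnit 4's Parameterized runner; purely for illustration it assumes myFunction maps a String to a String:

@RunWith(Parameterized.class)
public class MyFunctionTest {

    private final String input;
    private final String expected;

    public MyFunctionTest(String input, String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "someInput", "expectedOutput" },          // placeholder pairs
            { "someOtherInput", "expectedOtherOutput" }
        });
    }

    @Test
    public void givesExpectedOutput() {
        assertEquals(expected, new MyObject().myFunction(input));
    }
}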
I'm afraid I can't find the link anymore, but JUnit 4 has some helper facilities to generate test data. It's something like:
@Parameters
public static Collection<Object[]> testData() {
    return Arrays.asList(new Object[][] {
        { 2, 3, 4 },
        { 3, 4, 5 },
        // ...
    });
}
JUnit will then test your methods with this data. But as I said, I can't find the link anymore (forgot the keywords) for a detailed (and correct) example.