I'm working on testing a method that returns an int. It accepts 2 ints as its parameters and returns whichever is the larger number. How do I properly call this from my test package? Here is my code so far:
public void testFindMaxNum()
{
    assertEquals("findMaxNum(FAILS)", ProgrammingProject4.findMaxNum(1, 2), 1);
    assertEquals("findMaxNum(PASS)", ProgrammingProject4.findMaxNum(1, 2), 2);
}
Without the code this is calling (findMaxNum) it is hard to be completely accurate in my answer. However, it looks like you are missing some annotations in your test class.
I am going to assume that the signature of your method findMaxNum looks like the following:
public int findMaxNum(int i, int j)
Based on this your test code should look something like the following:
import static org.junit.Assert.assertEquals;
import org.junit.Test;
public class ProgrammingProject4Test {

    @Test
    public void testFindMaxNumFirst() {
        // Arrange
        ProgrammingProject4 programmingProject4 = new ProgrammingProject4();
        // Act and Assert
        assertEquals("Second number should be bigger", 2, programmingProject4.findMaxNum(1, 2));
    }

    @Test
    public void testFindMaxNumSecond() {
        // Arrange
        ProgrammingProject4 programmingProject4 = new ProgrammingProject4();
        // Act and Assert
        assertEquals("First number should be bigger", 2, programmingProject4.findMaxNum(2, 1));
    }
}
A couple of things to note about your code:
You are missing the annotations for the testing framework, e.g. @Test. This means your build tool won't be able to find and run the tests.
You have the expected result after the actual result from the method. In JUnit asserts they should be the other way around (as per my example).
Your code runs two tests in one method. This is bad practice because it makes debugging the tests harder. Also, if the first assert fails, the second one won't be run. Splitting the two tests into two different methods is therefore better.
You can take a look at Diffblue's playground (full disclosure I work for Diffblue), this is a tool that will write unit tests for a small sample piece of Java code. https://playground.diffblue.com
Hope this all makes sense. There are quite a few points in the answer. Feel free to comment if anything needs clarifying.
Related
I ran into a problem when I had to test an if/else clause that only had method calls inside it.
public CLI(String[] input) {
    cliCheck(input);
}

public static void cliCheck(String[] input) {
    if (input.length == 0) {
        System.out.println("No input");
        Help.help();
        System.exit(0);
    }
    if (input.length == 1) {
        if (input[0].equals("help") || input[0].equals("-h")) {
            Help.help();
            System.exit(0);
        }
    }
    inputParser(input);
}
This code is from the beginning portion of a Command Line Interface program.
The first if is true when there is no input.
The second if is true when user types in "help" or "-h".
If not, then the input String is sent as the parameter of the inputParser method.
This is what I have so far...
@Test
public void cliCheckTest_Help() {
    String[] input = {"help"};
    CLI cli = new CLI(input);
    Help help = mock(Help.class);
    cli.cliCheck(input);
    verify(help, times(1)).help();
}
(please tell me there is a better way to test for 100% branch coverage)
The problem is you created hard-to-test code here.
You see, that call to System.exit() will tear down your unit test in a very unpleasant manner.
You could do something like this instead:
public interface ShutdownService {
    void systemExit();
}
and then create a "default" implementation that simply calls System.exit().
But for your unit tests, you could instead "insert" a mocked version of that interface; and use that to verify that the expected call took place.
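To make that concrete, here is a minimal, dependency-free sketch of the idea. All names besides ShutdownService are made up for illustration, and a hand-rolled fake stands in for the Mockito mock so the snippet runs on its own; in a real test suite you would likely use mock(ShutdownService.class) and verify(...):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: the CLI receives a ShutdownService instead of
// calling System.exit() directly, so tests can substitute a fake.
interface ShutdownService {
    void systemExit();
}

class DefaultShutdownService implements ShutdownService {
    @Override
    public void systemExit() {
        System.exit(0); // the real exit, used only in production
    }
}

class Cli {
    private final ShutdownService shutdown;

    Cli(ShutdownService shutdown) {
        this.shutdown = shutdown;
    }

    void check(String[] input) {
        if (input.length == 0) {
            shutdown.systemExit(); // no more untestable System.exit()
        }
    }
}

public class ShutdownSketch {
    public static void main(String[] args) {
        // Hand-rolled fake: records calls instead of tearing the JVM down.
        List<String> calls = new ArrayList<>();
        ShutdownService fake = () -> calls.add("systemExit");

        new Cli(fake).check(new String[0]);
        System.out.println(calls.size()); // one recorded exit request
    }
}
```

The test then asserts on the recorded calls rather than surviving an actual JVM shutdown.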
Beyond that, your code is also doing other things that make testing much harder than necessary, starting with the fact that you have static methods there, which then call another static method, inputParser.
Long story short: static might look convenient, but it very much kills your ability to write reasonable unit tests.
So, my advice: learn how to create testable code, for example by watching these videos. And then improve the design of your production code, because you will find that writing tests becomes much easier!
And beyond that: reasonable handling of command line options is much more complicated than your naive implementation here. Unless this is for learning purposes: do not re-invent the wheel. There are libraries out there that do this kind of work for you. Use one of them!
What I am trying to do is to test some Lucene code I wrote, and I would like some information on best practices when using JUnit for testing. Lucene, BTW, is a search engine library you can use to create a flat-file index of a bunch of data.
So what I would like to test is the creation of this inverted index, then search through the index to verify that some data is present.
My question is in the code:
public class IndexTest {

    @Test
    public void testWriteIndexFromDB() {
        // run test
        // assertTrue(something in this test); // some assertion for this method
        // Is using a function like this a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    @Test
    public void testWriteIndexFromExcelFile() {
        // run test
        // assertTrue(something in this test); // some assertion for this method
        // Is using a function like this a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    @Test
    public void testRefreshIndexWithNewData() {
        // run test
        // assertTrue(something in this test); // some assertion for this method
        // Is using a function like this a proper way of writing a test?
        checkSomeDataIsReturned();
    }

    // this helper checks that data is returned after writing an index
    public void checkSomeDataIsReturned() { // not a test but does a check anyway
        results = myIndex.searchForStuff(someInput); // pseudocode
        assertTrue(results.length > 0); // if length is zero, there is no data and something went wrong
    }
}
To summarize, I have three options to write an index, I am testing that each of them writes. Is the re-usable function that is not a test the proper way to write a test? Or is there a better practice?
Writing reusable code in tests is of course a good thing, but even more important is writing tests that are easy to understand. In general, the asserts belong in the test method itself; moving the asserts into helper methods can make your tests harder to understand.
One way to write reusable code for checking expectations is to use Hamcrest (https://code.google.com/p/hamcrest/wiki/Tutorial) and build matchers (the library also comes with some very useful matchers for collections and the like).
For example, you can write something like this:

@Test
public void test_can_index_from_database() {
    // create your index from the database
    assertThat(myIndex, containsWord("expected_word_in_index"));
}
The matcher containsWord(String) is a custom matcher that you write using Hamcrest, and you can reuse this logic in all your tests. With Hamcrest you can write tests that are really easy to understand.
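To show the shape of such a matcher, here is a dependency-free sketch. The SimpleMatcher interface and assertThat helper below are simplified stand-ins for Hamcrest's Matcher/TypeSafeMatcher and assertThat, so the snippet runs on its own; with the real library you would extend org.hamcrest.TypeSafeMatcher instead:

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for Hamcrest's Matcher interface.
interface SimpleMatcher<T> {
    boolean matches(T item);
    String description();
}

// The reusable matcher: does this index contain a given word?
class ContainsWord implements SimpleMatcher<List<String>> {
    private final String word;

    ContainsWord(String word) {
        this.word = word;
    }

    @Override
    public boolean matches(List<String> index) {
        return index.contains(word);
    }

    @Override
    public String description() {
        return "an index containing the word \"" + word + "\"";
    }

    // Static factory method, mirroring the Hamcrest convention.
    static ContainsWord containsWord(String word) {
        return new ContainsWord(word);
    }
}

public class MatcherSketch {
    // Minimal assertThat in the Hamcrest style.
    static <T> void assertThat(T actual, SimpleMatcher<T> matcher) {
        if (!matcher.matches(actual)) {
            throw new AssertionError("Expected " + matcher.description());
        }
    }

    public static void main(String[] args) {
        List<String> myIndex = Arrays.asList("lucene", "inverted", "index");
        assertThat(myIndex, ContainsWord.containsWord("lucene"));
        System.out.println("containsWord matcher passed");
    }
}
```

The payoff is that the assertion line in each test reads like a sentence, and the matching logic lives in exactly one place.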
Well, good practices such as reusable code are to be used in unit tests, too.
However, please consider that if you feel the need to repeat code in unit tests, it may (and often does) mean that your tested methods take on too many responsibilities.
I don't know if that is really your case, but think about refactoring your code (here, splitting your tested methods into smaller methods) so you don't feel the need to repeat the same tests all over.
When each method takes on a single responsibility and delegates the shared code to another method/class, you test that functionality somewhere else, and here you just test (using mocking and spying) that your methods call the corresponding method/object.
I have a Parameterized test class with a bunch of unit tests that generally control the creation of custom email messages. Right now the class has a lot of tests which depend on the factor(s) used in the parameterized class; the flow of the tests is the same for every test. An example of a test:
@Test
public void testRecipientsCount() {
    assertEquals(3, recipientsCount);
}
I had to add extra functionality to my email class that adds some extra internal emails to the list of recipients. That only happens for some of the cases, and that leads to my problem.
Let's say I want to assert the number of messages created. For the old tests it was the same for each case, but now it's different depending on the case. The most intuitive way for me was to add if statements:
@Test
public void testRecipientsCount() {
    if (something) {
        assertEquals(3, recipientsCount);
    } else {
        assertEquals(4, recipientsCount);
    }
}
...
but my more experienced co-worker says we should avoid ifs in test classes (and I kinda agree on that).
I thought that splitting the test into two test classes might work, but that would lead to redundant code in both classes (I still have to check that the non-internal messages were created, their size, content, etc.), plus a few extra lines in one of them.
My question is: how do I do this without ifs or loads of redundant code (not using a parameterized class would produce even more redundant code)?
In my opinion a JUnit test should read like a protocol.
That means you can write redundant code if it makes the test case more readable.
Write a test case for each possible branch of the if-statements in your business logic, as well as the negative cases. That's the only way to get 100% test coverage.
I use the structure:
- testdata preparation
- executing logic
- check results
- clear data
Furthermore, you should put complex asserts on big objects into their own abstract classes:

abstract class YourBusinessObjectAssert {

    public static void assertYourBusinessObjectIsValid(YourBusinessObject pYourBusinessObject,
            Collection<YourBusinessObject> pAllYourBusinessObject) {
        for (YourBusinessObject lYourBusinessObject : pAllYourBusinessObject) {
            if (lYourBusinessObject.isTechnicalEqual(pYourBusinessObject)) {
                return;
            }
        }
        fail("Could not find requested YourBusinessObject in List<YourBusinessObject>!");
    }
}
It will reduce the complexity of your code, and you're making it available to other developers.
A unit test should, in my opinion, test only one thing if possible. As such, I'd say that if you need an if statement then you probably need more than one unit test, one for each block of the if/else code.
If possible, a test should read like a story. My preferred layout (and it's not my idea :-), it's fairly widely used) is:
- given: do setup etc
- when: the place you actually execute/call the thing under test
- expect: verify the result
Another advantage of a unit test testing only one thing is that when a failure occurs, it's unambiguous what the cause was. If you have a long test with many possible outcomes, it becomes much harder to reason about why the test has failed.
I'm not sure if it's possible to cleanly do what you're after in a parametrized test. If you need different test case behavior based on which parameter for some features, you might just be better off testing those features separately - in different test classes that are not parametrized.
If you really do want to keep everything in the parametrized test classes, I would be inclined to make a helper function so that your example test at least reads as a simple assertion:
@Test
public void testRecipientsCount() {
    assertEquals(expectedCount(something), recipientsCount);
}

private int expectedCount(boolean something) {
    if (something) {
        return 3;
    } else {
        return 4;
    }
}
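A sketch of the same idea taken one step further: make the expected count part of the test data itself, so neither ifs nor branching helpers are needed. All names below are hypothetical, and the plain loop stands in for the wiring that JUnit 4's Parameterized runner (@Parameters) would do for you in a real test class:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DataDrivenSketch {
    // Hypothetical stand-in for the real recipient-count logic:
    // internal emails add one extra recipient.
    static int recipientsCount(boolean withInternalEmails) {
        return withInternalEmails ? 4 : 3;
    }

    public static void main(String[] args) {
        // Each case carries its own expected value, so the "test body"
        // is a single assertion with no if/else.
        Map<Boolean, Integer> cases = new LinkedHashMap<>();
        cases.put(false, 3); // no internal emails -> 3 recipients
        cases.put(true, 4);  // internal emails    -> 4 recipients

        for (Map.Entry<Boolean, Integer> c : cases.entrySet()) {
            int actual = recipientsCount(c.getKey());
            if (actual != c.getValue()) {
                throw new AssertionError("case " + c.getKey() + ": expected "
                        + c.getValue() + " but got " + actual);
            }
        }
        System.out.println("all cases passed");
    }
}
```

With the real Parameterized runner, the expected count would simply become one more constructor parameter alongside the existing factors.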
Why not have a private method that tests the things that are common for each method? Something like (but probably with some input parameter for the testCommonStuff() method):
@Test
public void testRecipientsCountA() {
    testCommonStuff();
    // Assert stuff for test A
}

@Test
public void testRecipientsCountB() {
    testCommonStuff();
    // Assert stuff for test B
}

private void testCommonStuff() {
    // Assert common stuff
}
This way you don't get redundant code, and you can split your test into smaller tests. You also make your tests less error-prone IF they should actually test the same things. You will still know which test failed, so traceability should be no problem.
I'm test-driving some code for practice and spotted a strange situation.
There is a ChannelRegistry that contains references to all communication channels, and a PrimaryConsumer who needs to attach itself to one of those channels, chosen at runtime, when initialize() is called.
So I've done my first test as follows:
@RunWith(MockitoJUnitRunner.class)
public class PrimaryConsumerTest {

    private @Mock ChannelsRegistry communicationRegistry;
    private PrimaryConsumer consumer;

    @Before
    public void setup() {
        consumer = new PrimaryConsumer(communicationRegistry);
    }

    @Test
    public void shouldAttachToChannel() throws Exception {
        consumer.initialize();
        verify(communicationRegistry).attachToChannel("channel", consumer);
    }
}
I'm checking whether the attaching method is called. To get it green I put in an implementation like this:
public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}
Now the next test: get the channel id by name and attach to this specific channel. I want my test to describe the class' behavior instead of its internals, so I don't want my test to be "shouldGetSpecificChannel". Instead I check whether it can attach to a channel selected at runtime:
@Test
public void shouldAttachToSpecificChannel() throws Exception {
    String channelName = "channel";
    when(communicationRegistry.getChannel("channel_name")).thenReturn(channelName);
    consumer.initialize();
    verify(communicationRegistry).attachToChannel(channelName, consumer);
}
This test passes immediately, but the implementation is screwed ("channel" is hardcoded).
2 questions here:
is it ok to have 2 tests for such behavior? Maybe I should stub getting the channel immediately in the first test? If so, how does that square with testing a single thing in a single test?
how to cope with such a situation: tests green, implementation "hardcoded"? Should I write another test with a different channel name? If so, should I remove it after correcting the implementation (as it becomes useless)?
UPDATE:
Just some clarifications.
I've hardcoded "channel" here:

public void initialize() {
    communicationRegistry.attachToChannel("channel", this);
}

just to make the first test pass quickly. But then, when running the second test, it passes immediately. I don't verify whether the stubbed method was called, as I think stubs should not be verified explicitly.
Is this what you, Rodney, mean by saying the tests are redundant? If yes, should I create the stub at the very beginning, in the first test?
More tests are usually preferable to too few, so two tests is fine. A better question is whether the two tests are redundant: is there any situation or combination of inputs that would make one of the tests fail but not the other? Then both tests are needed. If they always fail or succeed together, then you probably need only one of them.
When would you need a different value for channelName? It sounds like this is a configuration setting that is irrelevant to these particular tests. That's fine, perhaps you would test that configuration at a higher level, in your integration tests. A bigger concern I would have is why it's hard-coded in the first place: it should be injected into your class (probably via the constructor). Then you can test different channel names -- or not. Either way, you don't want to be changing your code just for testing if it means changing it back when you're done.
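A dependency-free sketch of what that constructor injection could look like. The interface and names mirror the question's code but are hypothetical, and a hand-rolled fake registry stands in for the Mockito mock so the snippet runs on its own:

```java
// Hypothetical interface mirroring the question's ChannelsRegistry.
interface ChannelsRegistry {
    String getChannel(String name);
    void attachToChannel(String channel, Object consumer);
}

class PrimaryConsumer {
    private final ChannelsRegistry registry;
    private final String channelName; // injected, no longer hard-coded

    PrimaryConsumer(ChannelsRegistry registry, String channelName) {
        this.registry = registry;
        this.channelName = channelName;
    }

    void initialize() {
        // Resolve the channel by its configured name, then attach.
        registry.attachToChannel(registry.getChannel(channelName), this);
    }
}

public class InjectionSketch {
    public static void main(String[] args) {
        // Fake registry records what the consumer did, like verify() would.
        StringBuilder log = new StringBuilder();
        ChannelsRegistry fake = new ChannelsRegistry() {
            public String getChannel(String name) {
                log.append("getChannel(").append(name).append(") ");
                return "resolved-" + name;
            }
            public void attachToChannel(String channel, Object consumer) {
                log.append("attachToChannel(").append(channel).append(")");
            }
        };

        new PrimaryConsumer(fake, "channel_name").initialize();
        System.out.println(log);
    }
}
```

Because the name arrives through the constructor, each test can pass a different name (or not care at all), and nothing in production code needs to change for testing.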
Basically ditto Rodney for the question of multiple tests. I would suggest, based on your update, one or two things.
First off, you have used the same data for both tests. In the Kent Beck book on TDD he mentions the use of "triangulation". If you had used different reference data in the second case, then your code would not have passed without additional work on your part.
On the other hand, he also mentions removing all duplication, and duplication includes duplication between the code and the tests. In this scenario you could have left both of your tests as is, and refactored out the duplication between the string "channel" in the code and the same string in the test, by replacing the literal in your class under test with the call to communicationRegistry.getChannel(). After this refactoring, the string literal lives in one and only one place: the test.
Two different approaches, same result. Which one you use comes down to personal preference. In this scenario I would have taken the second approach, but that's just me.
Reminder to check out Rodney's answer to the question of multiple tests or not. I'm guessing you could delete the first one.
Thanks!
Brandon
Functions (side-effect free ones) are such a fundamental building block, but I don't know of a satisfying way of testing them in Java.
I'm looking for pointers to tricks that make testing them easier. Here's an example of what I want:
public void setUp() {
    myObj = new MyObject(...);
}

// This is sooo 2009 and not what I want to write:
public void testThatSomeInputGivesExpectedOutput() {
    assertEquals(expectedOutput, myObj.myFunction(someInput));
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
    // I don't want to repeat/write the following checks to see
    // that myFunction is behaving functionally.
    assertEquals(expectedOutput, myObj.myFunction(someInput));
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}
// The following two tests are more in the spirit of what I'd like
// to write, but they don't test that myFunction is functional:
public void testThatSomeInputGivesExpectedOutput() {
    assertEquals(expectedOutput, myObj.myFunction(someInput));
}

public void testThatSomeOtherInputGivesExpectedOutput() {
    assertEquals(expectedOtherOutput, myObj.myFunction(someOtherInput));
}
I'm looking for some annotation I can put on the test(s), MyObject or myFunction to make the test framework automatically repeat invocations to myFunction in all possible permutations for the given input/output combinations I've given, or some subset of the possible permutations in order to prove that the function is functional.
For example, above the (only) two possible permutations are:
myObj = new MyObject();
myObj.myFunction(someInput);
myObj.myFunction(someOtherInput);
and:
myObj = new MyObject();
myObj.myFunction(someOtherInput);
myObj.myFunction(someInput);
I should be able to only provide the input/output pairs (someInput, expectedOutput), and (someOtherInput, someOtherOutput), and the framework should do the rest.
I haven't used QuickCheck, but it seems like a non-solution. It is documented as a generator. I'm not looking for a way to generate inputs to my function, but rather a framework that lets me declaratively specify what part of my object is side-effect free and invoke my input/output specification using some permutation based on that declaration.
Update: I'm not looking to verify that nothing changes in the object, a memoizing function is a typical use-case for this kind of testing, and a memoizer actually changes its internal state. However, the output given some input always stays the same.
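To illustrate that memoizer case, here is a self-contained sketch (all names made up) of the property being asked for: the same input/output pairs are invoked in both possible orders, on an object that mutates internal state (its cache) but is externally pure:

```java
import java.util.HashMap;
import java.util.Map;

public class MemoizerSketch {
    // A memoizing square function: internally stateful, externally pure.
    private final Map<Integer, Integer> cache = new HashMap<>();

    int square(int x) {
        return cache.computeIfAbsent(x, k -> k * k);
    }

    public static void main(String[] args) {
        // Two fresh objects, so each permutation starts from a clean state.
        MemoizerSketch a = new MemoizerSketch();
        MemoizerSketch b = new MemoizerSketch();

        // Pairs (2 -> 4) and (3 -> 9), invoked in both possible orders,
        // plus a repeated call that exercises the memoized path.
        boolean ok = a.square(2) == 4 && a.square(3) == 9   // order 1
                  && b.square(3) == 9 && b.square(2) == 4   // order 2
                  && a.square(2) == 4;                      // cached call agrees
        System.out.println(ok ? "function is observably pure" : "side effect detected");
    }
}
```

A framework doing this automatically would generate the orderings from the declared input/output pairs; the sketch just spells out the two permutations by hand.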
If you are trying to test that the functions are side-effect free, then calling with random arguments isn't really going to cut it. The same applies to a random sequence of calls with known arguments, or pseudo-random with random or fixed seeds. There's a good chance that a (harmful) side-effect will only occur with a sequence of calls that your randomizer never selects.
There is also a chance that the side-effects won't actually be visible in the outputs of any of the calls that you are making, no matter what the inputs are. The side-effects could be on some other related objects that you didn't think to examine.
If you want to test this kind of thing, you really need to implement a "white-box" test where you look at the code and try and figure out what might cause (unwanted) side-effects and create test cases based on that knowledge. But I think that a better approach is careful manual code inspection, or using an automated static code analyser ... if you can find one that would do the job for you.
OTOH, if you already know that the functions are side-effect free, implementing randomized tests "just in case" is a bit of a waste of time, IMO.
I'm not quite sure I understand what you are asking, but it seems like JUnit Theories (http://junit.sourceforge.net/doc/ReleaseNotes4.4.html#theories) could be an answer.
In this example, you could create a Map of key/value pairs (input/output) and call the method under test several times with values picked from the map. This will not prove that the method is functional, but it will increase the probability, which might be sufficient.
Here's a quick example of such an additional probably-functional test:
@Test
public void probablyFunctionalTestForMethodX() {
    Map<Object, Object> inputOutputMap = initMap(); // this loads the input/output values
    for (int i = 0; i < maxIterations; i++) {
        Map.Entry test = pickAtRandom(inputOutputMap); // this picks a map entry at random
        assertEquals(test.getValue(), myObj.myFunction(test.getKey()));
    }
}
Problems with a higher complexity could be solved based on the Command pattern: You could wrap the test methods in command objects, add the command object to a list, shuffle the list and execute the commands (= the embedded tests) according to that list.
It sounds like you're attempting to test that invoking a particular method on a class doesn't modify any of its fields. This is a somewhat odd test case, but it's entirely possible to write a clear test for it. For other "side effects", like invoking other external methods, it's a bit harder. You could replace local references with test stubs and verify that they weren't invoked, but you still won't catch static method calls this way. Still, it's trivial to verify by inspection that you're not doing anything like that in your code, and sometimes that has to be good enough.
Here's one way to test that there are no side effects in a call:
public void test_MyFunction_hasNoSideEffects() {
    MyClass systemUnderTest = makeMyClass();
    MyClass copyOfOriginalState = systemUnderTest.clone();
    systemUnderTest.myFunction();
    assertEquals(systemUnderTest, copyOfOriginalState); // Test equals() method elsewhere
}
It's somewhat unusual to try to prove that a method is truly side effect free. Unit tests generally attempt to prove that a method behaves correctly and according to contract, but they're not meant to replace examining the code. It's generally a pretty easy exercise to check whether a method has any possible side effects. If your method never sets a field's value and never calls any non-functional methods, then it's functional.
Testing this at runtime is tricky. What might be more useful would be some sort of static analysis. Perhaps you could create a @Functional annotation, then write a program that would examine the classes of your program for such methods and check that they only invoke other @Functional methods and never assign to fields.
Randomly googling around, I found somebody's master's thesis on exactly this topic. Perhaps he has working code available.
Still, I will repeat that it is my advice that you focus your attention elsewhere. While you CAN mostly prove that a method has no side effects at all, it may be better in many cases to quickly verify this by visual inspection and focus the remainder of your time on other, more basic tests.
Have a look at http://fitnesse.org/: it is often used for acceptance tests, but I found it an easy way to run the same tests against huge amounts of data.
In JUnit you can write your own test runner. This code is not tested (I'm not sure whether methods which take arguments will be recognized as test methods; maybe some more runner setup is needed?):
public class MyRunner extends BlockJUnit4ClassRunner {

    @Override
    protected Statement methodInvoker(final FrameworkMethod method, final Object test) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                Iterable<Object[]> permutations = getPermutations();
                for (Object[] permutation : permutations) {
                    method.invokeExplosively(test, permutation[0], permutation[1]);
                }
            }
        };
    }
}
It should be only a matter of providing getPermutations() implementation. For example it can take data from some List<Object[]> field annotated with some custom annotation and produce all the permutations.
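For example, getPermutations() could enumerate every ordering of the input/output pairs. A dependency-free sketch of such a generator (names are illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

public class PermutationSketch {
    // Returns all orderings of the given list (n! of them), e.g. for a
    // runner's getPermutations() over input/output pairs.
    static <T> List<List<T>> permutations(List<T> items) {
        List<List<T>> result = new ArrayList<>();
        if (items.isEmpty()) {
            result.add(new ArrayList<>()); // one ordering of nothing
            return result;
        }
        for (int i = 0; i < items.size(); i++) {
            // Pick each element in turn, permute the rest, prepend the pick.
            List<T> rest = new ArrayList<>(items);
            T picked = rest.remove(i);
            for (List<T> tail : permutations(rest)) {
                tail.add(0, picked);
                result.add(tail);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<List<String>> perms = permutations(List.of("in1", "in2"));
        System.out.println(perms.size()); // 2 orderings for 2 inputs
        System.out.println(perms);
    }
}
```

Note that n! grows quickly, so for more than a handful of pairs a runner would want to sample a subset of the orderings instead of enumerating all of them.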
I think the term you're missing is "Parameterized Tests". However, it seems to be more tedious in JUnit than in the .NET flavor. In NUnit, the following test executes 6 times, with all combinations:
[Test]
public void MyTest(
[Values(1,2,3)] int x,
[Values("A","B")] string s)
{
...
}
For Java, your options seem to be:
JUnit supports this in version 4. However, it's a lot of code (it seems JUnit is adamant about test methods not taking parameters). This is the least invasive option.
DDSteps, a JUnit plugin. See this video that takes values from an appropriately named Excel spreadsheet. You also need to write a mapper/fixture class that maps values from the spreadsheet into members of the fixture class, which are then used to invoke the SUT.
Finally, you have Fit/FitNesse. It's as good as DDSteps, except that the input data is in HTML/wiki form. You can paste from an Excel sheet into FitNesse and it formats it correctly at the push of a button. You need to write a fixture class here too.
I'm afraid I can't find the link anymore, but JUnit 4 has some helper functions to generate test data. It's something like:
@Parameters
public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {
        { 2, 3, 4 },
        { 3, 4, 5 },
        // ...
    });
}
JUnit will then test your methods with this data. But as I said, I can't find the link anymore (I forgot the keywords) for a detailed (and correct) example.