I know how to make sure that a method has been invoked, using:
Mockito.verify
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the method was invoked.
I could write a unit test for every case, but I also want to make sure that if any further cases are added later, the method will still be invoked on them.
Unit testing alone will not do that. You have to look into code coverage in order to get there.
Unit testing can only tell you whether the paths that were taken produced a "valid" result; it has no knowledge of all the paths that exist, or of whether they were all hit.
So you want to look into which coverage tool would work for you.
When you are working with Eclipse or IntelliJ, those things work pretty much out of the box; you can install plugins like Cobertura or EclEmma within Eclipse and then do a "run unit test with coverage".
But of course, that only results in a number. You then have to look carefully at your code to understand whether you are happy with that number (the IDEs make that really easy; they can show you your source code and which paths were taken).
Meaning: coverage is a whole concept, and you have to understand what it means and in which way you can make it helpful for your daily work. For example, the last thing you want is your boss giving you a specific target number for coverage.
And just to be sure: there is no tooling that tells you "you added new code, and now this specific method invocation no longer happens on all paths". What coverage gives you is that you had 75.32% coverage before your change, and afterwards it went down to 74.01% ... the rest is then up to you.
Now I want to make sure that on every path of the function (every 'if', 'else if' and 'else') the method was invoked.
You don't want that.
The misunderstanding is that you don't test "the code". You test public observable behavior. In your case the behavior is that your unit under test (UuT) (after doing other stuff) calls a method on a dependency (I hope).
You don't want to test "the code" because it may change, to become cleaner and/or to support more behavior. But then you don't want to change your existing tests, since they are what guarantees that the desired behavior is preserved during your refactoring.
On the other hand, each test method should verify exactly one expectation about the UuT's behavior. This means you should already have one test method for each execution path through your if/else cascade. So all you have to do is add the verify() instruction to each of these test methods.
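For illustration, here is a minimal, self-contained sketch of that idea; the Notifier and Auditor names are made up, standing in for your unit under test and the dependency whose method you verify:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Before;
import org.junit.Test;

public class NotifierTest {

    // hypothetical dependency whose invocation we want to verify on every path
    public interface Auditor {
        void record(String event);
    }

    // hypothetical unit under test with an if/else cascade
    public static class Notifier {
        private final Auditor auditor;
        public Notifier(Auditor auditor) { this.auditor = auditor; }
        public void handle(int value) {
            if (value > 0) {
                auditor.record("positive");
            } else {
                auditor.record("non-positive");
            }
        }
    }

    private Auditor auditor;
    private Notifier notifier;

    @Before
    public void setUp() {
        auditor = mock(Auditor.class);
        notifier = new Notifier(auditor);
    }

    // one test method per execution path, each ending with the verify() call
    @Test
    public void recordsAuditEventOnPositivePath() {
        notifier.handle(5);
        verify(auditor).record("positive");
    }

    @Test
    public void recordsAuditEventOnNonPositivePath() {
        notifier.handle(-1);
        verify(auditor).record("non-positive");
    }
}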
Finally, you may have an easier job testing your code if you follow the Single Level of Abstraction principle, which basically says that a method either calls methods on some dependencies (aka "dispatching"), calls internal methods, or does low-level operations. This principle may lead to a design where the "low-level" stuff your UuT currently does moves to a new dependency, so that your UuT only needs to make two calls on some dependencies in a certain order...
I have encountered this issue before, where I needed to bind tests tightly to the switch cases, and I was desperate enough to do it.
I am assuming test coverage analysis is not enough for you. For me, the if-else conditions were so critical that changing something unintentionally could have been disastrous, so I could not afford to leave failure-prone code and I needed a test case to satisfy myself.
Here is how I satisfied myself:
1: Changed the conditions (if..else, etc.) to a switch over an enum variable - TaskSwitcherEnum, say taskSwitcher - and performed the various operations under the possible values of TaskSwitcherEnum.
switch (taskSwitcher) {
    case Task_Type_1:
        // do something before break
        break;
    case Task_Type_2:
        // do something again
        break;
    ...
}
2: Tested the desired method for all possible values of TaskSwitcherEnum, using Mockito.verify() to check that the required task method is called exactly once for each given TaskSwitcherEnum value.
3: Finally added a JUnit assertion like this:
assertEquals("Task performance strategy is designed to handle only five cases.", 5, taskSwitcherEnum.values().length);
Doing this made sure (at least as far as tests can) of the following things:
1: That my code has only the desired branches, and any branch addition/deletion is caught by a test case.
2: That each branch does its desired job by calling the method I want, by testing every particular enum value against the called method.
The gist of the whole answer is: sometimes a little design change helps a lot.
Why would one use JUnit's assumingThat() method instead of a plain old if clause? If you can use the simple thing, why complicate it with something else that does the same job?
Is it just an expressiveness thing, or what is the advantage? I don't see other benefits.
JUnit's assume is not a new feature in version 5; it has been there since v4.4, and it has other applications.
You could skip a test with an if, but with assume you can hook an assumption-failure lifecycle method to it, using a listener.
Example situation (most common): you could have a listener that creates reports of the test run, with code that adds the failed tests, passed tests, and assumption-failed tests to the report. If you wanted to achieve this without a listener and its testAssumptionFailure method, you would have to repeat that reporting code everywhere.
Adding a listener instead makes it modular and maintainable.
There are many varieties of assume methods which you can use to avoid repeatedly writing if/else blocks and messages.
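A minimal sketch of the difference between if and assume; the OS check is just an arbitrary example condition:

import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class AssumeVsIfTest {

    @Test
    public void runsOnlyOnLinux() {
        // With a plain 'if' you would have to return early and the test would still be
        // reported as passed. assumeTrue() aborts the test instead, and listeners see it
        // via RunListener.testAssumptionFailure(), so reports can count it separately.
        assumeTrue(System.getProperty("os.name").toLowerCase().contains("linux"));

        // the actual assertions would go here
    }
}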
My company wants to move off of JUnit 3 and start using only JUnit 4. The other intern and I have been given the task of converting the older JUnit 3 tests to use JUnit 4 conventions. However, I'm having a problem converting the test file I'm working on right now.
From what I can tell, there is a generateTest method that returns an SslTest (SslTest is a subclass of TestCase). The returned SslTest overrides runTest. runTest contains a try-catch block that starts two threads, clientThread and serverThread (both subclasses of Thread that are defined within the test file). It looks like the actual testing is being done inside the threads, since the rest of runTest is used for catching exceptions from the two threads.
generateTest is called by another method, generateSuite (which returns a TestSuite). generateSuite contains an outer for-loop that adds suites to a main suite. The inner for-loop uses generateTest to add tests to each suite within the main suite. The main suite is what the method returns.
Finally, inside the suite() method that is called in the main method of the test file, a while-loop is set up to generate suites using generateSuite and add them to a bigger suite.
The only guides I've found on migrating to JUnit 4 are for much simpler test cases. I'm very lost right now and no one else at my company knows enough JUnit 4 to help me, so any tips would be much appreciated!
The very first thing I would do is try to convince whoever gave me the task that it is unnecessary. I know that is hard as an intern, but it is worth making sure that person understands it isn't needed.
Facts for convincing:
The JUnit 4 jar contains both the junit.framework and org.junit package structures so it is backward compatible.
JUnit has broad adoption. The owners of the JUnit project are well aware of this and aren't going to ask people to rewrite all their tests. In other words, they aren't going to just drop compatibility.
Actually try it. Seriously. Try running your existing test code as is with the JUnit 4 jar and see whether you get any compiler errors. If you do, those are the areas to focus on. If you don't, you have great evidence to show the person who gave you the task.
This doesn't mean you won't have to change anything. It means you won't have to change the majority of your code. If you have custom runners, you'll want to use the JUnit 4 style. You also might need classpath suite to collect the tests.
There is also value in converting a few of the tests to JUnit 4 so developers on the team have some examples to use. But converting them all isn't a good use of time.
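As a quick sanity check, a hypothetical JUnit 3 style test like the one below compiles and runs unchanged against the junit-4.x jar, since the jar still ships the junit.framework packages; the JUnitCore facade is documented to run JUnit 3.8.x and JUnit 4 tests alike:

import junit.framework.TestCase;

// hypothetical legacy test, written in plain JUnit 3 style
public class LegacyCalculatorTest extends TestCase {

    public void testAddition() {
        assertEquals(4, 2 + 2);
    }

    public static void main(String[] args) {
        // the JUnit 4 facade happily runs this JUnit 3 test class
        org.junit.runner.JUnitCore.main(LegacyCalculatorTest.class.getName());
    }
}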
On not being able to post code
Getting help on the internet is extremely difficult without code. I can understand your employer not wanting you to post code. (But then they probably don't want you posting class and method names either, which you did.) Luckily, there is an alternative: create an SSCCE instead. (Look the term up; it will help you a lot as you progress in your career.) In addition to the smaller example being easier to read, it will allow you to change the class/method/etc. names, so your employer won't have their code online.
A new concept of Theories was added to JUnit recently (in v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make the test method parameterized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
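For example, a sketch along these lines (the test class and data points are made up):

import static org.junit.Assert.assertTrue;

import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class PositiveNumberTheoryTest {

    // the parameters fed to the theory, one after another
    @DataPoints
    public static int[] numbers = {1, 2, 3};

    @Theory
    public void everyDataPointIsPositive(int number) {
        assertTrue(number > 0);
    }
}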
JUnit will sequentially run your parameterized test method, passing the parameters retrieved from @DataPoints one after another - but only until the first such invocation fails (for whatever reason).
The concept seems very similar to @DataProvider from TestNG, but when we use data providers, all the scenarios are run regardless of their individual results. That is useful because you can see how many scenarios work or don't work, and you can fix your program more effectively.
So I wonder, what is the reason not to execute the @Theory-marked method for every @DataPoint? (It does not appear difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To keep going after assertion failures, you can also use JUnit's ErrorCollector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once).
For example you can write a test like this.
import static org.hamcrest.CoreMatchers.containsString;
import static org.hamcrest.CoreMatchers.not;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;

public class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = "zoo";       // illustrative values; the original example elides them
        String y = "blueberry";
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));   // both checks run; failures are collected and reported together at the end
    }
}
The ErrorCollector uses Hamcrest matchers; depending on your preferences, that is a plus or not.
AFAIK the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
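A minimal sketch of a Parameterized test; here each data row runs as its own test, so one failure does not stop the others:

import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class PositiveNumberParameterizedTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { {1}, {2}, {3} });
    }

    private final int number;

    public PositiveNumberParameterizedTest(int number) {
        this.number = number;
    }

    // each data row becomes a separate test invocation with its own pass/fail result
    @Test
    public void isPositive() {
        assertTrue(number > 0);
    }
}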
A Theory is wrong if a single test in it is wrong, according to the definition of a Theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".
So I've read the official JUnit docs, which contain a plethora of examples, but (as with many things) I have Eclipse fired up and I am writing my first JUnit test, and I'm choking on some basic design/conceptual issues.
So if my WidgetUnitTest is testing a target called Widget, I assume I'll need to create a fair number of Widgets to use throughout the test methods. Should I be constructing these Widgets in the WidgetUnitTest constructor, or in the setUp() method? Should there be a 1:1 ratio of Widgets to test methods, or do best practices dictate reusing Widgets as much as possible?
Finally, how much granularity should exist between asserts/fails and test methods? A purist might argue that one and only one assertion should exist inside a test method; however, under that paradigm, if Widget has a getter called getBuzz(), I'll end up with 20 different test methods for getBuzz() with names like
@Test
public void testGetBuzzWhenFooIsNullAndFizzIsNonNegative() { ... }
As opposed to 1 method that tests a multitude of scenarios and hosts a multitude of assertions:
@Test
public void testGetBuzz() { ... }
Thanks for any insight from some JUnit maestros!
Pattern
Interesting question. First of all, here is my ultimate test pattern, configured as a template in my IDE:
@Test
public void shouldDoSomethingWhenSomeEventOccurs() throws Exception
{
//given
//when
//then
}
I always start with this code (smart people call it BDD).
In given I place the test setup unique to each test.
when is ideally a single line - the thing you are testing.
then should contain assertions.
I am not a single-assertion advocate; however, you should test only a single aspect of the behavior. For instance, if the method should return something and also has some side effects, create two tests with the same given and when sections.
Also the test pattern includes throws Exception. This is to handle annoying checked exceptions in Java. If you test some code that throws them, you won't be bothered by the compiler. Of course if the test throws an exception it fails.
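As a concrete (if trivial) illustration of the pattern, using a plain JDK collection so the example is self-contained:

import static org.junit.Assert.assertEquals;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;

public class StackBehaviorTest {

    @Test
    public void shouldReturnLastPushedElementWhenPopped() throws Exception {
        // given
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");

        // when
        String popped = stack.pop();

        // then
        assertEquals("second", popped);
    }
}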
Setup
Test setup is very important. On one hand it is reasonable to extract common code and place it in a setUp()/@Before method. However, note that when reading a test (and readability is the biggest value in unit testing!) it is easy to miss setup code hanging somewhere at the beginning of the test case. So test setup that is relevant to the scenario (for instance, you can create the widget in different ways) should go into the test method, while infrastructure (setting up common mocks, starting an embedded test database, etc.) should be extracted, once again to improve readability.
Also, are you aware that JUnit creates a new instance of the test case class for each test? So even if you create your CUT (class under test) in the constructor, the constructor is called before each test. Kind of annoying.
Granularity
First name your test and think about what use case or functionality you want to test; never think in terms of:
this is a Foo class having bar() and buzz() methods, so I create FooTest with testBar() and testBuzz(). Oh dear, I need to test two execution paths through bar() - so let us create testBar1() and testBar2().
shouldTurnOffEngineWhenOutOfFuel() is good, testEngine17() is bad.
More on naming
What does the name testGetBuzzWhenFooIsNullAndFizzIsNonNegative tell you about the test? I know it tests something, but why? And don't you think the details are too intimate? How about:
@Test shouldReturnDisabledBuzzWhenFooNotProvidedAndFizzNotNegative
It both describes the input in a meaningful manner and states your intent (assuming disabled buzz is some sort of buzz status/type). Also note that we no longer hardcode the getBuzz() method name or the null contract for Foo (instead we say: when Foo is not provided). What if you replace null with the Null Object pattern in the future?
Also don't be afraid of 20 different test methods for getBuzz(). Instead, think of them as 20 different use cases you are testing. However, if your test case class grows too big (it is typically much larger than the tested class), extract it into several test cases. Once again: FooHappyPathTest, FooBogusInput and FooCornerCases are good; Foo1Test and Foo2Test are bad.
Readability
Strive for short and descriptive names. Few lines in given and few in then. That's it. Create builders and internal DSLs, extract methods, write custom matchers and assertions. The test should be even more readable than production code. Don't over-mock.
I find it useful to first write a series of empty, well-named test methods, and then go back to the first one. If I still understand what I was supposed to test and under what conditions, I implement the test, building the class API in the meantime. Then I implement that API. Smart people call this TDD (see below).
Recommended reading:
Growing Object-Oriented Software, Guided by Tests
Unit Testing in Java: How Tests Drive the Code
Clean Code: A Handbook of Agile Software Craftsmanship
You would create a new instance of the class under test in your setup method. You want each test to be able to execute independently, without having to worry about unwanted state left in the object under test by a previous test.
I would recommend having a separate test for each scenario/behavior/logic flow that you need to test, not one massive test for everything in getBuzz(). You want each test to have a focused purpose: exactly what you want to verify about getBuzz().
Rather than testing methods, try to focus on testing behaviors. Ask the question "What should a widget do?" and then write a test that affirms the answer, e.g. "A widget should fidget":
public void setUp() throws Exception {
myWidget = new Widget();
}
public void testAWidgetShouldFidget() throws Exception {
myWidget.fidget();
}
compile, see "no method fidget defined " errors, fix the errors, recompile the test and repeat. Next ask the question what should the result of each behavior be, in our case what happens as the result of fidget? Maybe there is some observable output like a new 2D coordinate position. In this case our widget would be assumed to be in a given position and when it fidgets it's position is altered some way.
public void setUp() throws Exception {
    // Given a widget (myWidget and initialWidgetPosition are fields of the test class)
    myWidget = new Widget();
    // And its original position
    initialWidgetPosition = myWidget.position();
}

public void testAWidgetShouldFidget() throws Exception {
    myWidget.fidget();
}

public void testAWidgetPositionShouldChangeWhenItFidgets() throws Exception {
    myWidget.fidget();
    assertNotEquals(initialWidgetPosition, myWidget.position());
}
Some would argue against both tests exercising the same fidget behavior, but it makes sense to single out the behavior of fidget independently of how it impacts widget.position(). If one behavior breaks, the single test will pinpoint the cause of the failure. It is also important that the behavior can be exercised on its own as a fulfillment of the spec (you do have program specs, don't you?) that says you need a fidgety widget.
In the end it is all about realizing your program specs as code that exercises your interfaces, which demonstrates both that you have completed the spec and how one interacts with your product. This is in essence how TDD should work. Any other approach to resolving bugs or testing the product usually results in a frustrating, pointless debate over which framework to use, the level of coverage, and how fine-grained your suite should be. Each test case should be an exercise in breaking down your spec into a component you can phrase with Given/When/Then: Given {some application state or precondition} When {a behavior is invoked} Then {assert some observable output}.
I completely second Tomasz Nurkiewicz's answer, so I'll just say that rather than repeating everything he said.
A couple more points:
Don't forget to test error cases. You can consider something like this:
@Test
public void throwExceptionWhenConditionOneExist() {
    // setup
    // ...
    try {
        classUnderTest.doSomething(conditionOne);
        Assert.fail("should have thrown exception");
    } catch (IllegalArgumentException expected) {
        Assert.assertEquals("this is the expected error message", expected.getMessage());
    }
}
Also, there is GREAT value in starting to write your tests before even thinking about the design of your class under test. If you're a beginner at unit testing, I cannot emphasize enough how worthwhile it is to learn this technique at the same time (this is called TDD, test-driven development). It proceeds like this:
You think about what use case you have for your user requirements
You write a basic first test for it
You make it compile (by creating the needed classes, including your class under test, etc.)
You run it: it should fail
Now you implement the functionality of the class under test that will make it pass (and nothing more)
Rinse, and repeat with a new requirement
When all your requirements have passing tests, you're done. You NEVER write anything in your production code that doesn't have a test first (the exceptions are logging code and not much more).
TDD is invaluable for producing good-quality code, not over-engineering requirements, and making sure you have 100% functional coverage (rather than line coverage, which is usually meaningless). It requires a change in the way you think about coding; that's why it's valuable to learn the technique at the same time as testing. Once you get it, it becomes natural.
The next step is looking into mocking strategies :)
Have fun testing.
First of all, the setUp and tearDown methods are called before and after each test, so the setUp method should create the objects you need in every test, while test-specific things can be done in the test itself.
Second, it is up to you how you want to test your program. Obviously, you could write a test for every possible situation in your program and end up with a gazillion tests for every method, or you could write just one test per method that checks every possible scenario. I would recommend a mixture of both. You really don't need tests for trivial getters/setters, but writing just one test per method may cause confusion when that test fails. You should decide which methods and which scenarios are worth testing, but in principle every scenario should have its own test.
Mostly I end up with a code coverage of 80 to 90 percent with my tests.
I'm writing a little library for movies for myself. It's partly for learning TDD. Now I have a problem I can't solve.
The code is in here https://github.com/hasanen/MovieLibrary/blob/master/movielibrary-core/src/test/java/net/pieceofcode/movielibrary/service/MovieLibraryServiceITC.java
The problem is that when I run the whole class (right-clicking on the class name in Eclipse), the second test fails because the removal doesn't succeed. But when I right-click the method (getMovieGenres_getAllGenresAndRemoveOne_returnsTwoGenreAndIdsAreDifferent) and choose Run As JUnit Test, it works.
I don't necessarily need the fix, just some advice on how to find out why JUnit is acting like this.
From the way you explain it, the problem appears to be in the setUp method, which runs before every test case invocation. This is the general sequence:
1- Add three movies.
2- Test that three movies exist.
3- Add three movies.
4- Remove movie item # 1.
Since sequence 1-4 works, the problem is step 3. Either step 3 swallows some exception or it mutates the underlying object (maybe it changes the ordering). Without knowing how addMovie changes the underlying object, it's hard to tell.
Something outside your test class (likely a superclass) is creating movieLibraryService, and it's not being recreated as often as it needs to be for independent testing.
If you add the line
movieLibraryService = new MovieLibraryService();
at the top of your testSetUp() method, the service will be properly reset before each test method runs, and the tests will likely pass.
As it is, I suspect you're getting a failure on the assertions about size, as the size is becoming 6 instead of 3.
Alternatively, you could add a teardown method (annotated with @After) which removes the contents of the movie library so that it always starts empty.
IMHO the problem is that your test isn't a real unit test but an integration test, so while testing your service you are testing all the layers it uses. I recommend using mocks for the lower-layer dependencies (EasyMock or something similar) and using integration tests only for your repository layer. This way you avoid persistence-layer influences while testing the service layer.
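A rough sketch of what that could look like with Mockito; MovieRepository and the method names are assumptions, not the actual API of the linked project:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;

import org.junit.Before;
import org.junit.Test;

public class MovieLibraryServiceUnitTest {

    // hypothetical persistence-layer dependency
    public interface MovieRepository {
        List<String> findAllGenres();
    }

    // hypothetical service that delegates to the repository
    public static class MovieLibraryService {
        private final MovieRepository repository;
        public MovieLibraryService(MovieRepository repository) { this.repository = repository; }
        public List<String> getMovieGenres() { return repository.findAllGenres(); }
    }

    private MovieRepository repository;
    private MovieLibraryService service;

    @Before
    public void setUp() {
        repository = mock(MovieRepository.class);   // no real persistence layer involved
        service = new MovieLibraryService(repository);
    }

    @Test
    public void returnsGenresFromRepository() {
        when(repository.findAllGenres()).thenReturn(Arrays.asList("Drama", "Comedy", "Horror"));

        assertEquals(3, service.getMovieGenres().size());
        verify(repository).findAllGenres();
    }
}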