Logging in TDD with JUnit - Java

Recently someone told me that we should not have logging in JUnit test cases, or in TDD in general. As I have started on TDD myself and it has helped me a lot, I see the tests as part of the code, and I used log4j in my test methods. My reason: I was writing code in TDD test cases in JUnit, so I should use logging for it as well.
What is the prevailing opinion on this? I searched but couldn't find anything related on Google. My sample test method looks something like the following, where destinationFileName is a class instance variable initialized in the @BeforeTest method. Tell me whether this logging is good or should not be added to test methods.
@Test
public void testProcessDestinationByStAX() throws Exception {
    logger.info("Testing processDestinationByStAX");
    DestinationProcessor destinationProcessor = new DestinationProcessor();
    int expResult = numOfDestinations;
    logger.info("parsing " + destinationFileName);
    List<Destination> result = destinationProcessor.processDestinationByStAX(destinationFileName);
    logger.info("Successfully Parsed : " + result.size() + " destinations ...");
    assertEquals(expResult, result.size());
}

IMHO, if logging is useful to determine what is going on, it should be in the code, not the tests. JUnit will log the test name (which should tell you what is under test), and the code should have debug logging to help determine flow. Given these two things, I would normally say that logging in the tests is clutter and also clutters the log.
I also find that tests should be kept as simple as possible, because people don't like to maintain them. The more lines of code (including logging statements) in your tests, the more complex they become. People will spend time trying to figure out what your code does; they will not spend time trying to figure out what your test does. For this reason, KISS is key for tests.
All that said, I think it would be hard for someone to suggest that there is anything inherently wrong with adding logging to your tests.
Another question might be: should you test logging? Should your unit test verify that the expected logging takes place? I have not seen much discussion of this, but IMHO logging at the WARNING level or above should be verified.
On a slight side note, looking at your test it is not immediately obvious which method is under test. Consider formatting your test as follows:
@Test
public void testProcessDestinationByStAX() throws Exception {
    // setup
    logger.info("Testing processDestinationByStAX");
    DestinationProcessor destinationProcessor = new DestinationProcessor();
    int expResult = numOfDestinations;
    logger.info("parsing " + destinationFileName);

    // test
    List<Destination> result = destinationProcessor.processDestinationByStAX(destinationFileName);

    // verify
    logger.info("Successfully Parsed : " + result.size() + " destinations ...");
    assertEquals(expResult, result.size());
}
This makes your test clearer: there should be exactly one invocation of the method under test, and the formatting makes it easy to find that invocation.

I think logging is useful in most cases. For instance, if a test times out without throwing an exception, the log may help you find the problem. When running a test case, you can also watch the log to see which stage it has reached, and you can check things that are hard to verify with an assert.
The opinion below is a little different. Its key point is that sometimes the log information is useful for tracking, and a log kept by an automated tool can be preserved for later review.
In my opinion, most test cases don't need log4j to maintain a log; System.out.println() is enough for most scenarios. The assert message is much more important.
But there are some special cases in which log4j would be very useful.
For instance,
Where do I configure log4j in a JUnit test class?
I still need to run a test case by itself sometimes, in which case I want log4j configured.
In this case, he needs to run it by itself sometimes and needs to maintain a log for it.
Of course, if you have TeamCity maintaining every build log, you can use System.out.println as well, because TeamCity or other automated tools maintain the log for you. But if you don't use such tools, still want to run the test case many times, and also need to know in which build and with which check-in a test case failed, then log4j would be useful.
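If you do need log4j in such a standalone run, a minimal sketch (log4j 1.x; the test class name is illustrative) is to configure it once per test class, for example in a @BeforeClass method:
import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.BeforeClass;

public class DestinationProcessorTest {

    @BeforeClass
    public static void configureLog4j() {
        // Send log4j output to the console with a default layout so that
        // running this single test class on its own produces a readable log.
        BasicConfigurator.configure();
        Logger.getRootLogger().setLevel(Level.INFO);
    }

    // ... test methods ...
}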
In short, I think we should avoid using log4j in an ordinary JUnit test case unless we have some special requirement.
I also don't think this is really a TDD question; or, within TDD at least, it is not the most important thing.

Related

mocking Logger.getLogger() using jmock

I am working on legacy code and writing some JUnit tests (I know, wrong order, never mind) using jMock (also wasn't my choice, nothing I can change about that), and I have a class which does some elaborate logging, messing with Strings in general. We are using log4j for logging and I want to test those logged messages. I thought about mocking the Logger class, but I don't know how to do it.
As usual, we have the Logger declared like this:
private static final Logger LOG = Logger.getLogger(SomeClass.class);
Does anyone have any idea how to mock method .getLogger(class) or any other idea how to check what exactly has been logged?
You can write your own appender and redirect all output to it.
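A minimal sketch of that approach with log4j 1.x (the doWork() call and the expected message are illustrative assumptions): an in-memory appender collects the LoggingEvents so the test can assert on what was logged.
import java.util.ArrayList;
import java.util.List;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SomeClassLoggingTest {

    // Captures every event sent to the logger it is attached to.
    static class CapturingAppender extends AppenderSkeleton {
        final List<LoggingEvent> events = new ArrayList<LoggingEvent>();

        @Override
        protected void append(LoggingEvent event) {
            events.add(event);
        }

        @Override
        public void close() { }

        @Override
        public boolean requiresLayout() {
            return false;
        }
    }

    private final CapturingAppender appender = new CapturingAppender();

    @Before
    public void attachAppender() {
        Logger.getLogger(SomeClass.class).addAppender(appender);
    }

    @After
    public void detachAppender() {
        Logger.getLogger(SomeClass.class).removeAppender(appender);
    }

    @Test
    public void logsOneMessageWhenDoingWork() {
        new SomeClass().doWork();  // hypothetical method that logs
        assertEquals(1, appender.events.size());
        assertEquals("expected message", appender.events.get(0).getRenderedMessage());
    }
}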
If you really think you need to do this, then you should take a look at PowerMock; more specifically, its ability to mock static methods. PowerMock integrates with EasyMock and Mockito, but some hunting around might turn up a JMock integration too if you have to stick with that.
Having said that, I think that setting up your test framework so that it logs nicely without affecting your tests, and ensuring your tests do not depend upon what gets logged is a better approach. I once had to maintain some unit tests that checked what had been logged, and they were the most brittle and useless unit tests I have ever seen. I rewrote them as soon as I had the time available to do it.
Check this similar question:
How to mock with static methods?
And by the way, it is often quicker to search for an existing question about your problem than to post a new question and wait for answers.
The easiest way I have found is using the mock log4j objects in JMockit.
You just need to add the annotation
@UsingMocksAndStubs(Log4jMocks.class)
to your test class, and all code touched by the test class will use a mock Logger object.
See this.
But this won't log the messages. It does let you avoid the hassle of dealing with static objects, though.

Why does JUnit run test cases for Theory only until the first failure?

Recently a new concept of Theories was added to JUnit (since v4.4).
In a nutshell, you can mark your test method with the @Theory annotation (instead of @Test), make your test method parameterized, and declare an array of parameters, marked with the @DataPoints annotation, somewhere in the same class.
JUnit will sequentially run your parameterized test method, passing in parameters retrieved from the @DataPoints one after another - but only until the first such invocation fails (for any reason).
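For reference, a minimal sketch of what such a theory looks like (the data points and the assertion are illustrative):
import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;
import static org.junit.Assert.assertTrue;

@RunWith(Theories.class)
public class NameTheoryTest {

    // Every data point is passed to every @Theory method in turn.
    @DataPoints
    public static String[] NAMES = { "alice", "bob", "carol" };

    @Theory
    public void nameIsNeverEmpty(String name) {
        assertTrue(name.length() > 0);
    }
}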
The concept seems very similar to @DataProvider in TestNG, but when we use data providers, all the scenarios are run regardless of their results. That is useful because you can see how many scenarios work or don't work, and you can fix your program more effectively.
So I wonder: what is the reason not to execute a @Theory-marked method for every @DataPoint? (It does not appear difficult to inherit from the Theories runner and make a custom runner that ignores failures, but why don't we have such behaviour out of the box?)
UPD: I have created a fault-tolerant version of the Theories runner and made it publicly available: https://github.com/rgorodischer/fault-tolerant-theories
To compare it with the standard Theories runner, run StandardTheoriesBehaviorDemo and then FaultTolerantTheoriesBehaviorDemo, which are placed under the src/test/... folder.
Reporting multiple failures in a single test is generally a sign that the test does too much, compared to what a unit test ought to do. Usually this means either that the test is really a functional/acceptance/customer test or, if it is a unit test, then it is too big a unit test.
JUnit is designed to work best with a number of small tests. It executes each test within a separate instance of the test class. It reports failure on each test. Shared setup code is most natural when sharing between tests. This is a design decision that permeates JUnit, and when you decide to report multiple failures per test, you begin to fight against JUnit. This is not recommended.
Long tests are a design smell and indicate the likelihood of a design problem. Kent Beck is fond of saying in this case that "there is an opportunity to learn something about your design." We would like to see a pattern language develop around these problems, but it has not yet been written down.
Source: http://junit.sourceforge.net/doc/faq/faq.htm#tests_12
To ignore assertion failures you can also use a JUnit error collector rule:
The ErrorCollector rule allows execution of a test to continue after the first problem is found (for example, to collect all the incorrect rows in a table, and report them all at once).
For example you can write a test like this.
public static class UsesErrorCollectorTwice {
    @Rule
    public ErrorCollector collector = new ErrorCollector();

    @Test
    public void example() {
        String x = [..];
        String y = [..];
        collector.checkThat(x, not(containsString("a")));
        collector.checkThat(y, containsString("b"));
    }
}
The error collector uses Hamcrest matchers. Depending on your preferences, that is a plus or not.
AFAIK, the idea is the same as with asserts: the first failure stops the test. This is the difference between Parameterized and Theories.
Parameterized takes a set of data points and runs a set of test methods with each of them. Theories does the same, but fails when the first assert fails.
Try looking at Parameterized. Maybe it provides what you want.
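A minimal Parameterized sketch for comparison (the data and the assertion are illustrative); every row is reported as its own test, regardless of earlier failures:
import java.util.Arrays;
import java.util.Collection;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import static org.junit.Assert.assertEquals;

@RunWith(Parameterized.class)
public class SquareTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { 1, 1 }, { 2, 4 }, { 3, 9 }
        });
    }

    private final int input;
    private final int expectedSquare;

    public SquareTest(int input, int expectedSquare) {
        this.input = input;
        this.expectedSquare = expectedSquare;
    }

    @Test
    public void squaresCorrectly() {
        assertEquals(expectedSquare, input * input);
    }
}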
A Theory is wrong if a single test in it is wrong, according to the definition of a Theory. If your test cases don't follow this rule, it would be wrong to call them a "Theory".

Writing first JUnit test

So I've read the official JUnit docs, which contain a plethora of examples, but (as with many things) I have Eclipse fired up and I am writing my first JUnit test, and I'm choking on some basic design/conceptual issues.
So if my WidgetUnitTest is testing a target called Widget, I assume I'll need to create a fair number of Widgets to use throughout the test methods. Should I be constructing these Widgets in the WidgetUnitTest constructor, or in the setUp() method? Should there be a 1:1 ratio of Widgets to test methods, or do best practices dictate reusing Widgets as much as possible?
Finally, how much granularity should exist between asserts/fails and test methods? A purist might argue that one and only one assertion should exist inside a test method; however, under that paradigm, if Widget has a getter called getBuzz(), I'll end up with 20 different test methods for getBuzz() with names like
@Test
public void testGetBuzzWhenFooIsNullAndFizzIsNonNegative() { ... }
As opposed to one method that tests a multitude of scenarios and hosts a multitude of assertions:
@Test
public void testGetBuzz() { ... }
Thanks for any insight from some JUnit maestros!
Pattern
Interesting question. First of all, here is my ultimate test pattern, configured as a template in my IDE:
@Test
public void shouldDoSomethingWhenSomeEventOccurs() throws Exception
{
    //given

    //when

    //then
}
I always start with this code (smart people call it BDD).
In given I place the test setup unique to each test.
when is ideally a single line - the thing you are testing.
then should contain the assertions.
I am not a single-assertion advocate; however, you should test only a single aspect of behavior. For instance, if the method should return something and also has some side effects, create two tests with the same given and when sections.
Also, the test pattern includes throws Exception. This is to handle annoying checked exceptions in Java: if you test some code that throws them, you won't be bothered by the compiler. Of course, if the test actually throws an exception, it fails.
Setup
Test setup is very important. On one hand, it is reasonable to extract common code and place it in a setup()/@Before method. However, note that when reading a test (and readability is the biggest value in unit testing!) it is easy to miss setup code hanging somewhere at the beginning of the test case. So relevant test setup (for instance, you can create the widget in different ways) should go in the test method, while infrastructure (setting up common mocks, starting an embedded test database, etc.) should be extracted - once again, to improve readability.
Also, are you aware that JUnit creates a new instance of the test case class for each test? So even if you create your CUT (class under test) in the constructor, the constructor is called before each test. Kind of annoying.
Granularity
First name your test and think about what use case or piece of functionality you want to test; never think in terms of:
this is a Foo class having bar() and buzz() methods, so I create FooTest with testBar() and testBuzz(). Oh dear, I need to test two execution paths through bar() - so let us create testBar1() and testBar2().
shouldTurnOffEngineWhenOutOfFuel() is good, testEngine17() is bad.
More on naming
What does the testGetBuzzWhenFooIsNullAndFizzIsNonNegative name tell you about the test? I know it tests something, but why? And don't you think the details are too intimate? How about:
@Test shouldReturnDisabledBuzzWhenFooNotProvidedAndFizzNotNegative
It both describes the input in a meaningful manner and states your intent (assuming disabled buzz is some sort of buzz status/type). Also note that we no longer hardcode the getBuzz() method name or the null contract for Foo (instead we say: when Foo is not provided). What if you replace null with the null object pattern in the future?
Also, don't be afraid of 20 different test methods for getBuzz(). Instead, think of them as 20 different use cases you are testing. However, if your test case class grows too big (it is typically much larger than the tested class), extract it into several test cases. Once again: FooHappyPathTest, FooBogusInput and FooCornerCases are good; Foo1Test and Foo2Test are bad.
Readability
Strive for short and descriptive names. A few lines in given and a few in then - that's it. Create builders and internal DSLs, extract methods, write custom matchers and assertions. The test should be even more readable than production code. Don't over-mock.
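To illustrate the "custom matchers" suggestion, here is a sketch of a Hamcrest matcher (Hamcrest 1.3; Widget.isEnabled() is a hypothetical method):
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

// Lets a then section read: assertThat(widget, isEnabled());
public class EnabledWidgetMatcher extends TypeSafeMatcher<Widget> {

    @Override
    protected boolean matchesSafely(Widget widget) {
        return widget.isEnabled();
    }

    @Override
    public void describeTo(Description description) {
        description.appendText("an enabled widget");
    }

    public static EnabledWidgetMatcher isEnabled() {
        return new EnabledWidgetMatcher();
    }
}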
I find it useful to first write a series of empty, well-named test case methods. Then I go back to the first one; if I still understand what I was supposed to test and under what conditions, I implement the test, building the class API in the meantime. Then I implement that API. Smart people call it TDD (see below).
Recommended reading:
Growing Object-Oriented Software, Guided by Tests
Unit Testing in Java: How Tests Drive the Code
Clean Code: A Handbook of Agile Software Craftsmanship
You would create a new instance of the class under test in your setup method. You want each test to be able to execute independently, without having to worry about unwanted state left in the object under test by a previous test.
I would recommend having separate test for each scenario/behavior/logic flow that you need to test, not one massive test for everything in getBuzz(). You want each test to have a focused purpose of what you want to verify in getBuzz().
Rather than testing methods, try to focus on testing behaviors. Ask the question "what should a widget do?" Then write a test that affirms the answer, e.g. "A widget should fidget".
public void setUp() throws Exception {
    myWidget = new Widget();
}

public void testAWidgetShouldFidget() throws Exception {
    myWidget.fidget();
}
Compile, see "no method fidget defined" errors, fix the errors, recompile the test, and repeat. Next, ask the question: what should the result of each behavior be? In our case, what happens as the result of fidget? Maybe there is some observable output, like a new 2D coordinate position. In that case our widget is assumed to be in a given position, and when it fidgets its position is altered in some way.
private Widget myWidget;
private Point initialWidgetPosition;

public void setUp() throws Exception {
    //Given a widget
    myWidget = new Widget();
    //And its original position
    initialWidgetPosition = myWidget.position();
}

public void testAWidgetShouldFidget() throws Exception {
    myWidget.fidget();
}

public void testAWidgetPositionShouldChangeWhenItFidgets() throws Exception {
    myWidget.fidget();
    assertNotEquals(initialWidgetPosition, myWidget.position());
}
Some would argue against both tests exercising the same fidget behavior, but it makes sense to single out the behavior of fidget independently of how it impacts widget.position(). If one behavior breaks, the single test will pinpoint the cause of failure. It is also important to state that the behavior can be exercised on its own as a fulfillment of the spec (you do have program specs, don't you?) that says you need a fidgety widget.
In the end it's all about realizing your program specs as code that exercises your interfaces, which demonstrates both that you've completed the spec and how one interacts with your product. This is in essence how TDD should work. Any other approach to resolving bugs or testing the product usually results in a frustrating, pointless debate over which framework to use, the level of coverage, and how fine-grained your suite should be. Each test case should be an exercise in breaking your spec down into a component that you can phrase with Given/When/Then: Given {some application state or precondition} When {a behavior is invoked} Then {assert some observable output}.
I completely second Tomasz Nurkiewicz's answer, so I won't repeat everything he said.
A couple more points:
Don't forget to test error cases. You can consider something like this:
@Test
public void throwExceptionWhenConditionOneExist() {
    // setup
    // ...
    try {
        classUnderTest.doSomething(conditionOne);
        Assert.fail("should have thrown exception");
    } catch (IllegalArgumentException expected) {
        Assert.assertEquals("this is the expected error message", expected.getMessage());
    }
}
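When you don't need to assert on the exception message, JUnit 4 lets you state the same expectation more compactly (a sketch, reusing the hypothetical classUnderTest from above):
@Test(expected = IllegalArgumentException.class)
public void throwsWhenConditionOneExists() {
    classUnderTest.doSomething(conditionOne);
}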
Also, there is GREAT value in starting to write your tests before even thinking about the design of your class under test. If you're a beginner at unit testing, I cannot emphasize enough learning this technique at the same time (it is called TDD, test-driven development), which proceeds like this:
You think about what use case you have for your user requirements
You write a basic first test for it
You make it compile (by creating the needed classes, including your class under test, etc.)
You run it: it should fail
Now you implement the functionality of the class under test that will make it pass (and nothing more)
Rinse, and repeat with a new requirement
When all your requirements have passing tests, then you're done. You NEVER write anything in your production code that doesn't already have a test (the exception to that is logging code, and not much more).
TDD is invaluable in producing good quality code, not over-engineering requirements, and making sure you have a 100% functional coverage (rather than line coverage, which is usually meaningless). It requires a change in the way you consider coding, that's why it's valuable to learn the technique at the same time as testing. Once you get it, it will become natural.
Next step is looking into Mocking strategies :)
Have fun testing.
First of all, the setUp and tearDown methods will be called before and after each test, so the setUp method should create the objects you need in every test, and test-specific things may be done in the test itself.
Second, it is up to you how you want to test your program. Obviously you could write a test for every possible situation in your program and end up with a gazillion tests for every method, or you could write just one test per method which checks every possible scenario. I would recommend a mixture of the two. You really don't need tests for trivial getters/setters, but writing just one test per method may result in confusion when the test fails. You should decide which methods are worth testing and which scenarios are worth testing, but in principle every scenario should have its own test.
Mostly I end up with a code coverage of 80 to 90 percent with my tests.

Is there an alternative to mock objects in unit testing?

It's a Java (JUnit) enterprise Web application with no mock objects pre-built, and it would take a vast, unestimated amount of time to create them. Is there a testing paradigm that would give me "some" test coverage, but not total coverage?
Have you tried a dynamic mocking framework such as EasyMock? It does not require you to "create" a mock object in the sense of writing an entire class - you specify the behavior you want within the test itself.
An example of a class that uses a UserService to find details about a User in order to log someone in:
//Tests what happens when a username is found in the backend
public void testLoginSuccessful() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("username")).andReturn(new User(...));
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("username");

    assertTrue(isLoggedIn);
}

//Tests what happens when the user does not exist
public void testLoginFailure() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("username")).andThrow(new UserNotFoundException());
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("username");

    assertFalse(isLoggedIn);
}
(1) Alternatives to unit-testing (and mocks) include integration testing (with dbUnit) and FIT testing. For more, see my answer here.
(2) The mocking framework Mockito is outstanding. You wouldn't have to "pre-build" any mocks. It is relatively easy to introduce into a project.
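For comparison, the EasyMock example above would look roughly like this with Mockito (UserService, User, UserNotFoundException and classUnderTest are the same hypothetical types as before):
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import org.junit.Test;

public class LoginServiceMockitoTest {

    // classUnderTest is assumed to be created elsewhere, as in the EasyMock example.

    @Test
    public void testLoginSuccessful() {
        // when the backend knows the user, login succeeds
        UserService mockUserService = mock(UserService.class);
        when(mockUserService.getUser("username")).thenReturn(new User());
        classUnderTest.setUserService(mockUserService);

        assertTrue(classUnderTest.login("username"));
    }

    @Test
    public void testLoginFailure() {
        // when the backend throws, login reports failure
        UserService mockUserService = mock(UserService.class);
        when(mockUserService.getUser("username")).thenThrow(new UserNotFoundException());
        classUnderTest.setUserService(mockUserService);

        assertFalse(classUnderTest.login("username"));
    }
}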
I would echo what others are saying about EasyMock. However, if you have a codebase where you need to mock things like static method calls, final classes or methods, etc., then give JMockit a look.
Well, one easy way, if not the easiest, to get a high level of code coverage is to write the code test-first, following Test-Driven Development (TDD). Now that the code exists without unit tests, it can be considered legacy code.
You could either write end-to-end tests, external to your application - those won't be unit tests, but they can be written without resorting to any kind of mock - or you could write unit tests that span multiple classes and only mock the classes that get in the way of your unit tests.
Do you have real-world data you can import into your testbed to use as your "mock objects"? That would be quick.
I think the opposite is hard - finding a testing methodology that gives you total coverage, if that is even possible in most cases.
You should give EasyMock a try.

Force JUnit to run one test case at a time

I have a problematic situation with some quite advanced unit tests (using PowerMock for mocking and JUnit 4.5). Without going into too much detail, the first test case of a test class will always succeed, but any following test cases in the same test class fail. However, if I select to run only test case 5 out of 10, for example, it will pass. So all tests pass when run individually. Is there any way to force JUnit to run one test case at a time? I call JUnit from an Ant script.
I am aware of the problem of dependent test cases, but I can't pinpoint the cause. There are no saved variables across the test cases, so there is nothing to do in a @Before annotation. That's why I'm looking for an emergency solution like forcing JUnit to run tests individually.
I am aware of all the recommendations, but to finally answer your question, here is a simple way to achieve what you want. Just put this code inside your test case:
// java.util.concurrent.locks.Lock / ReentrantLock
// static, so the same lock is shared even though JUnit creates a new test instance per test
private static final Lock sequential = new ReentrantLock();

@Override
protected void setUp() throws Exception {
    super.setUp();
    sequential.lock();
}

@Override
protected void tearDown() throws Exception {
    sequential.unlock();
    super.tearDown();
}
With this, no test can start until the lock is acquired, and only one lock can be acquired at a time.
It seems that your test cases are dependent, that is, the execution of case X affects the execution of case Y. Such a testing setup should be avoided (for instance, there is no guarantee on the order in which JUnit will run your cases).
You should refactor your cases to make them independent of each other. Often the use of @Before and @After methods can help you untangle such dependencies.
Your problem is not that JUnit runs all the tests at once; your problem is that you don't see why a test fails. Solutions:
Add more asserts to the tests to make sure that every variable actually contains what you think
Download an IDE from the Internet and use the built-in debugger to look at the various variables
Dump the state of your objects just before the point where the test fails.
Use the "message" part of the asserts to output more information why it fails (see below)
Disable all but a handful of tests (in JUnit 3: replace all strings "void test" with "void dtest" in your source; in JUnit 4: replace "@Test" with "//D@Test").
Example:
assertEquals(list.toString(), 5, list.size());
Congratulations. You have found a bug. ;-)
If the tests "shouldn't" affect each other, then you may have uncovered a situation where your code can enter a broken state. Try adding asserts and logging to figure out where the code goes wrong. You may even need to run the tests in a debugger and check your code's internal values after the first test.
Excuse me if I don't answer your question directly, but isn't your problem exactly what TestCase.setUp() and TestCase.tearDown() are supposed to solve? These are methods that the JUnit framework will always call before and after each test case, and they are typically used to ensure you begin each test case in the same state.
See also the JavaDoc for TestCase.
You should check your whole codebase for static variables which refer to mutable state. Ideally the program should have no static mutable state (or at least it should be documented, like I did here). You should also be very careful about cleaning up anything the tests write to the file system or a database. Otherwise running the tests may leak side effects, which makes it hard to keep the tests independent and repeatable.
Maven and Ant contain a "forkmode" parameter for running JUnit tests, which specifies whether each test class gets its own JVM or all tests are run in the same JVM. But they do not have an option for running each test method in its own JVM.
I am aware of the problem of dependent test cases, but I can't pinpoint why this is. There are no saved variables across the test cases, so nothing to do at the @Before annotation. That's why I'm looking for an emergency solution like forcing JUnit to run tests individually.
The @Before statement is harmless, because it is called for every test case. The @BeforeClass is dangerous, because it has to be static.
It sounds to me that perhaps it isn't that you are not setting up or tearing down your tests properly (although additional setup/teardown may be part of the solution), but that perhaps you have shared state in your code that you are not aware of. If an early test is setting a static / singleton / shared variable that you are unaware of, the later tests will fail if they are not expecting this. Even with Mocks this is very possible. You need to find this cause. I agree with the other answers in that your tests have exposed a problem that should not be solved by trying to run the tests differently.
Your description shows me that your unit tests depend on each other. That is strongly discouraged in unit tests.
Unit tests must be independent and isolated. You have to be able to execute them alone or all together, in any order.
I know that this does not help you directly. The problem will be in your @BeforeClass or @Before statements - there will be dependencies there. So refactor them and try to isolate the problem.
Probably your mocks are created in your @BeforeClass. Consider putting that into a @Before method instead, so there is no instance that lasts longer than a single test case.
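A sketch of that suggestion, using a plain Mockito mock for brevity (SomeService and Collaborator are illustrative names):
import static org.mockito.Mockito.mock;
import org.junit.Before;

public class SomeServiceTest {

    private Collaborator collaborator;  // hypothetical dependency
    private SomeService service;        // hypothetical class under test

    @Before
    public void createFreshFixtures() {
        // A new mock and a new object under test for every test method,
        // so no state can leak from one test case into the next.
        collaborator = mock(Collaborator.class);
        service = new SomeService(collaborator);
    }

    // ... test methods using service and collaborator ...
}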
