mocking Logger.getLogger() using jmock - java

I am working on legacy code and writing some JUnit tests (I know, wrong order, never mind) using JMock (also not my choice, and nothing I can change about that). I have a class that does some elaborate logging, messing with Strings in general. We are using log4j for logging, and I want to test the logged messages. I thought about mocking the Logger class, but I don't know how to do it.
As usual, the Logger is declared like this:
private static final Logger LOG = Logger.getLogger(SomeClass.class);
Does anyone have an idea how to mock the .getLogger(class) method, or any other way to check what exactly has been logged?

You can write your own appender and redirect all output to it.
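For example, here is a minimal sketch of such a capturing appender for log4j 1.x (the version with the Logger.getLogger(SomeClass.class) API shown above); the TestAppender name and the events field are illustrative, not part of any library:

import java.util.ArrayList;
import java.util.List;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

// Collects every logging event it receives so a test can inspect them.
public class TestAppender extends AppenderSkeleton {
    public final List<LoggingEvent> events = new ArrayList<LoggingEvent>();

    @Override
    protected void append(LoggingEvent event) {
        events.add(event);
    }

    @Override
    public void close() {
        // nothing to release
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }
}

A test would then attach it to the logger under test and assert on the captured messages:

TestAppender appender = new TestAppender();
Logger.getLogger(SomeClass.class).addAppender(appender);
// ... exercise the code under test ...
assertEquals("expected message", appender.events.get(0).getRenderedMessage());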

If you really think you need to do this, then you should take a look at PowerMock, more specifically its ability to mock static methods. PowerMock integrates with EasyMock and Mockito, but some hunting around might turn up a JMock integration too if you have to stick with that.
Having said that, I think that setting up your test framework so that it logs nicely without affecting your tests, and ensuring your tests do not depend on what gets logged, is a better approach. I once had to maintain some unit tests that checked what had been logged, and they were the most brittle and useless unit tests I have ever seen. I rewrote them as soon as I had the time available to do it.

Check this similar question:
How to mock with static methods?
And by the way, it is easier to search for an existing question about your problem than to post a new one and wait for answers.

The easiest way I have found is using the mock log4j objects in JMockit.
You just need to add the annotation
@UsingMocksAndStubs(Log4jMocks.class)
to your test class, and all code touched by the tested class will use a mock Logger object.
See this
But this won't log the messages. It does, however, save you the hassle of dealing with the static objects.

JUnit5 - execute a test method without @Test annotation

I have a super class that defines @BeforeEach and @AfterEach methods. The class also has a method that should run only when a system property is set, basically an @EnabledIfSystemProperty condition. Normal tests with the @Test annotation are in subclasses.
This works well, but the conditional test shows up as skipped in the test report, which I want to avoid. Is there any way I can run a test method on a condition, but not have it counted as a test in normal situations?
This question is not about inheritance of @Test annotations. The basic question is: is there any way I can run a test method on a condition, but not have it counted as a test in normal situations?
The easiest way to achieve what you want is to check the condition manually inside the test case (just as @Paizo advised in the comments), as sketched below. JUnit has several features that let you skip test execution, such as the junit-ext project with its @RunIf annotation, or the special Assume clause, which forces a test to be skipped when the assumption is not met. But these features also mark the test as skipped, which is not what you want. Another possibility you might think of is modifying your code with the magic of reflection to add or remove annotations at runtime, but that is not possible. In theory I can imagine a convoluted way of using cglib to subclass your test class at runtime and manage annotations the way Spring does, but first of all ask yourself whether it is worth it.
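A minimal sketch of that manual check, assuming JUnit 5 and an illustrative property name (run.conditional.tests is not a real JUnit property):

import org.junit.jupiter.api.Test;

class ConditionalTest {

    @Test
    void runsOnlyWhenPropertyIsSet() {
        // Manual guard instead of @EnabledIfSystemProperty: when the
        // property is absent, the method returns early and the test is
        // reported as passed, not skipped.
        if (System.getProperty("run.conditional.tests") == null) {
            return;
        }
        // ... actual test body goes here ...
    }
}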
My personal feeling is that you're trying to fix something that works perfectly and is not broken. A skipped test in the report is not a failed test, and it is actually useful that the report tells you whether a test was executed or not.
Hope it helps!

logging in TDD with JUnit

Recently someone told me that we should not have logging in JUnit test cases, or in TDD in general. Since I started doing TDD it has helped me a lot, and I see the tests as part of the code, so I used log4j in my test methods as well. My reasoning: I was writing code in TDD test cases in JUnit, so I should use logging for it too.
What is the prevalent opinion on this? I searched but couldn't find anything related on Google. My sample test method looks something like the following, where destinationFileName is a class instance variable initialized in a @BeforeTest method. Tell me whether this logging is good or should not be added to test methods.
@Test
public void testProcessDestinationByStAX() throws Exception {
    logger.info("Testing processDestinationByStAX");
    DestinationProcessor destinationProcessor = new DestinationProcessor();
    int expResult = numOfDestinations;
    logger.info("parsing " + destinationFileName);
    List<Destination> result = destinationProcessor.processDestinationByStAX(destinationFileName);
    logger.info("Successfully Parsed : " + result.size() + " destinations ...");
    assertEquals(expResult, result.size());
}
IMHO, if logging is useful to determine what is going on, it should be in the code, not the tests. JUnit will log the test name (which should provide information about what is under test), and the code should have debug logging to help determine flow. Given these two things, I would normally say that logging in the tests is clutter, and it clutters the log as well.
I also find that tests should be kept as simple as possible. This is because people don't like to maintain them. The more lines of code (including logging statements) in your tests, the more complex they become. People will spend time trying to figure out what your code does; they will not spend time trying to figure out what your test does. For this reason, KISS is key for tests.
All that said, I think it would be hard for someone to suggest that there is anything inherently wrong with adding logging to your tests.
Another question might be: should you test logging? Should your unit test verify that expected logging takes place? I have not seen much discussion of this, but IMHO logging at the level of WARNING or above should be verified. A sketch of that follows.
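One way to verify that, assuming log4j 1.x and a capturing appender like the TestAppender sketched under the first question above (all names here are illustrative):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

@Test
public void logsWarningOnBadInput() {
    TestAppender appender = new TestAppender();
    Logger.getLogger(DestinationProcessor.class).addAppender(appender);

    // ... exercise the code that is expected to log a warning ...

    // Assert that at least one event at WARN level or above was recorded.
    boolean warned = false;
    for (LoggingEvent event : appender.events) {
        if (event.getLevel().isGreaterOrEqual(Level.WARN)) {
            warned = true;
        }
    }
    assertTrue(warned);
}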
On a slight side note, looking at your test it is not immediately obvious which method is under test. Consider formatting your test as follows:
@Test
public void testProcessDestinationByStAX() throws Exception {
    // setup
    logger.info("Testing processDestinationByStAX");
    DestinationProcessor destinationProcessor = new DestinationProcessor();
    int expResult = numOfDestinations;
    logger.info("parsing " + destinationFileName);

    // test
    List<Destination> result = destinationProcessor.processDestinationByStAX(destinationFileName);

    // verify
    logger.info("Successfully Parsed : " + result.size() + " destinations ...");
    assertEquals(expResult, result.size());
}
This makes your test clearer: there should be exactly one invocation of the method under test, and the formatting makes that invocation easy to find.
I think logging is useful in most cases. For instance, if a test times out without throwing an exception, the log may help you find the problem. While a test case runs, the log shows which stage it has reached, and it also lets you check things that are hard to check with an assert.
The opinion below is a little different. The key point is that log information can sometimes be useful for tracking, and an automated tool can keep the log around for later review.
In my opinion, most test cases don't need log4j to maintain a log; System.out.println() is enough for most scenarios. The assert message is much more important.
But there are some special cases in which log4j would be very useful.
For instance,
Where do I configure log4j in a JUnit test class?
I still need to run a test case by itself some times, in which case I
want log4j configured.
In this case, he needs to run it by itself sometimes and needs to keep a log of it.
Of course, if you have TeamCity maintaining every build log, you can use System.out.println() as well, because TeamCity or another automated tool keeps the log for you. If you don't use such tools, still want to run the test case many times, and also need to know which test case failed in which build with which check-in, then log4j would be useful.
In short, I think we should avoid using log4j in an ordinary JUnit test case unless we have some special requirement.
I also think this is not really related to TDD; or at least, within TDD it is not the most important thing.

How to test "add" in DAO without using "find" etc.?

In the following code the issue is that I cannot test dao.add() without using dao.list().size(), and vice versa.
Is this approach normal or incorrect? If incorrect, how can it be improved?
public class ItemDaoTest {
    // dao to test
    @Autowired private ItemDao dao;

    @Test
    public void testAdd() {
        // issue -> testing ADD but using LIST
        int oldSize = dao.list().size();
        dao.add(new Item("stuff"));
        assertTrue(oldSize < dao.list().size());
    }

    @Test
    public void testFind() {
        // issue -> testing FIND but using ADD
        Item item = new Item("stuff");
        dao.add(item);
        assertEquals(item, dao.find(item.getId()));
    }
}
I think your tests are valid integration tests, as stated above, but I would use add to aid in testing find, and vice versa.
At some level you have to allow yourself to place trust in your lowest level of integration with the external dependency. I realize there is a dependency on other methods in your tests, but I find that add and find methods are "low level" methods that are very easy to verify.
They essentially test each other as they are basically inverse methods.
Add can easily build preconditions for find
Find can verify that an add was successful.
I can't think of a scenario where a failure in either wouldn't be caught by your test
Your testAdd method has a problem: it depends on the assumption that ItemDao.list works properly, and yet ItemDao is the very class you are testing. Unit tests are meant to be independent, so a better approach is to use plain JDBC, as @Amir said, to verify that the record was inserted into the database.
If you're using Spring, you can rely on AbstractTransactionalDataSourceSpringContextTests to access JdbcTemplate facilities and ensure a rollback after the test has executed.
I use direct JDBC (via Spring's JdbcTemplate) to test the DAO methods: I call the DAO methods (which are Hibernate based), and then confirm the results with direct SQL calls through JDBC, as in the sketch below.
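A minimal sketch of that style, assuming an item table behind ItemDao and a Spring-injected JdbcTemplate (the table name is illustrative):

@Autowired private ItemDao dao;
@Autowired private JdbcTemplate jdbcTemplate;

@Test
public void testAdd() {
    // Count rows with plain SQL rather than with the DAO under test.
    int before = jdbcTemplate.queryForObject("select count(*) from item", Integer.class);

    dao.add(new Item("stuff"));

    int after = jdbcTemplate.queryForObject("select count(*) from item", Integer.class);
    assertEquals(before + 1, after);
}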
The smallest unit under test for class-based unit testing is a class.
To see why, consider that you could, in theory, test each method of the class in isolation from all other methods by bypassing, stubbing or mocking them. Some tools may not support that; this is theory not practice, assume they do.
Even so, doing things that way would be a bad idea. The specification of an individual function by itself will vary between vaguely meaningless and verbosely incomprehensible. Only in the pattern of interaction between different functions will there exist a specification simpler than the code that you can profitably use to test it.
If you add an item and the number of items reported increases, things are working. If there is some way things could not be working, but nevertheless all the patterns of interaction hold, then you are missing some needed test.

Is there a better way to test the following methods without mocks returning mocks?

Assume the following setup:
interface Entity {}

interface Context {
    Result add(Entity entity);
}

interface Result {
    Context newContext();
    SpecificResult specificResult();
}

class Runner {
    SpecificResult actOn(Entity entity, Context context) {
        return context.add(entity).specificResult();
    }
}
I want to see that the actOn method simply adds the entity to the context and returns the specificResult. The way I'm testing this right now is the following (using Mockito):
@Test
public void testActOn() {
    Entity entity = mock(Entity.class);
    Context context = mock(Context.class);
    Result result = mock(Result.class);
    SpecificResult specificResult = mock(SpecificResult.class);

    when(context.add(entity)).thenReturn(result);
    when(result.specificResult()).thenReturn(specificResult);

    Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
}
However this seems horribly white box, with mocks returning mocks. What am I doing wrong, and does anybody have a good "best practices" text they can point me to?
Since people requested more context, the original problem is an abstraction of a DFS, in which the Context collects the graph elements and calculates results, which are collated and returned. The actOn is actually the action at the leaves.
It depends on what, and how much, you want your code to be tested. As you mentioned the tdd tag, I suppose you wrote your test contracts before any actual production code.
So, in your contract, decide what you want to test about the actOn method:
That it returns a SpecificResult given both a Context and an Entity
That add(), specificResult() interactions happen on respectively the Context and the Entity
That the SpecificResult is the same instance returned by the Result
etc.
Depending on what you want tested, you will write the corresponding tests. You might want to consider relaxing your testing approach if this section of code is not critical, and tightening it if this section could trigger the end of the world as we know it.
Generally speaking, whitebox tests are brittle, usually verbose, not expressive, and difficult to refactor, but they are well suited to critical sections that are not supposed to change much, and to code written by neophytes.
In your case, having a mock that returns a mock does look like a whitebox test. But then again, if you want to ensure this behavior in the production code, that is OK.
Mockito can help you with deep stubs.
Context context = mock(Context.class, RETURNS_DEEP_STUBS);
given(context.add(any(Entity.class)).specificResult()).willReturn(someSpecificResult);
But don't get used to it, as it is usually considered bad practice and a test smell.
Other remarks :
Your test method name is not precise enough: testActOn does not tell the reader what behavior you are testing. TDD practitioners usually replace the method name with a contract sentence such as returns_a_SpecificResult_given_both_a_Context_and_an_Entity, which is clearly more readable and tells the reader the scope of what is being tested.
You are creating mock instances in the test with the Mockito.mock() syntax. If you have several tests like that, I would recommend using the MockitoJUnitRunner with @Mock annotations; this unclutters your code a bit and lets the reader see more easily what is going on in this particular test, as sketched below.
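A minimal sketch of that setup, assuming JUnit 4 (the runner's import path varies across Mockito versions) and using the contract-sentence naming convention from the previous remark:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

import static org.junit.Assert.assertSame;
import static org.mockito.Mockito.when;

@RunWith(MockitoJUnitRunner.class)
public class RunnerTest {
    // The runner initializes these; no explicit mock(...) calls needed.
    @Mock Entity entity;
    @Mock Context context;
    @Mock Result result;
    @Mock SpecificResult specificResult;

    @Test
    public void returns_a_SpecificResult_given_both_a_Context_and_an_Entity() {
        when(context.add(entity)).thenReturn(result);
        when(result.specificResult()).thenReturn(specificResult);

        assertSame(specificResult, new Runner().actOn(entity, context));
    }
}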
Use the BDD (Behavior-Driven Development) or the AAA (Arrange, Act, Assert) approach.
For example:
@Test
public void invoke_add_then_specificResult_on_call_actOn() {
    // given
    ... prepare the stubs, the object values here

    // when
    ... call your production code

    // then
    ... assertions and verifications there
}
All in all, as Eric Evans told me, context is king: you should make decisions with this context in mind. But you really should stick to best practice as much as possible.
There's plenty of reading on testing here and there. Martin Fowler has very good articles on the matter, James Carr compiled a list of test anti-patterns, and there is plenty of advice on using mocks well (for example, the "don't mock types you don't own" mojo). Nat Pryce is the co-author of Growing Object-Oriented Software, Guided by Tests, which is in my opinion a must-read. Plus, you have Google ;)
Consider using fakes instead of mocks. It's not really clear what the classes in question are meant to do, but if you can build a simple in-memory implementation of both interfaces (not thread-safe, not persistent, etc.), you can use it for flexible testing without the brittleness that sometimes comes from mocking. A sketch follows.
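A minimal sketch of such a fake, written against the interfaces from the question (the added list and the anonymous SpecificResult instance are illustrative choices, not required by the interfaces):

import java.util.ArrayList;
import java.util.List;

// A trivial in-memory Context: records what was added and always
// hands back the same SpecificResult instance.
class FakeContext implements Context {
    final List<Entity> added = new ArrayList<Entity>();
    final SpecificResult specificResult = new SpecificResult() {};

    @Override
    public Result add(Entity entity) {
        added.add(entity);
        return new Result() {
            @Override
            public Context newContext() {
                return new FakeContext();
            }

            @Override
            public SpecificResult specificResult() {
                return specificResult;
            }
        };
    }
}

A test can then call new Runner().actOn(entity, fakeContext), assert that the returned value is fakeContext.specificResult, and inspect fakeContext.added, with no stubbing at all.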
I like to use names beginning with mock for all my mock objects. Also, I would replace
when(result.specificResult()).thenReturn(specificResult);
Assert.assertTrue(new Runner().actOn(entity, context) == specificResult);
with
Runner toTest = new Runner();
toTest.actOn(mockEntity, mockContext);
verify(mockResult).specificResult();
because all you're trying to assert is that specificResult() gets called on the right mock object, whereas your original assert doesn't make it quite so clear what is being asserted. So you don't actually need a mock for SpecificResult. That cuts you down to just one when call, which seems to me about right for this kind of test.
But yes, this does seem frightfully white box. Is Runner a public class, or some hidden implementation detail of a higher level process? If it's the latter, then you probably want to write tests around the behaviour at the higher level; rather than probing implementation details.
Not knowing much about the context of the code, I would suggest that Context and Result are likely simple data objects with very little behavior. You could use a fake as suggested in another answer or, if you have access to the implementations of those interfaces and construction is simple, I'd just use the real objects in lieu of fakes or mocks.
Although more context would help, I don't see any problems with your testing methodology myself. The whole point of mock objects is to verify calling behavior without having to instantiate the implementations. Creating stub objects or using the actual implementing classes just seems unnecessary to me.
However this seems horribly white box, with mocks returning mocks.
This may be more about the class design than the testing. If that is the way the Runner class works with the external interfaces then I don't see any problem with having the test simulate that behavior.
First off, since nobody's mentioned it, Mockito supports chaining so you can just do:
when(context.add(entity).specificResult()).thenReturn(specificResult);
(and see Brice's comment for how to enable this; sorry I missed it out!)
Secondly, it comes with a warning saying "Don't do this except for legacy code." You're right that the mock-returning-mock is a bit strange. White-box mocking is generally fine because you're really saying, "My class ought to collaborate with a helper like <this>", but in this case the collaboration spans two different classes, coupling them together.
It's not clear why the Runner needs to get the SpecificResult, as opposed to whatever other result comes out of context.add(entity), so I'm going to make a guess: the Result contains a result with some messages or other information and you just want to know whether it's a success or failure.
That's like me saying, "Don't tell me all about my shopping order, just tell me that I made it successfully!" The Runner shouldn't know that you only want that specific result; it should just return everything that came out, the same way that Amazon shows you your total, postage and all the things you bought, even if you've shopped there lots and are perfectly aware of what you're getting.
If some classes regularly use your Runner just to get a specific result while others require more feedback, then I'd make two methods for it, maybe called something like add and addWithFeedback, the same way that Amazon lets you do one-click shopping by a different route.
However, be pragmatic. If it's readable the way you've done it and everyone understands it, use Mockito to chain them and call it a day. You can change it later if you have need.

Is there an alternative to mock objects in unit testing?

It's a Java (JUnit) enterprise web application with no mock objects pre-built, and creating them would require a vast, unestimated amount of time. Is there a testing paradigm that would give me "some" test coverage, but not total coverage?
Have you tried a dynamic mocking framework such as EasyMock? It does not require you to "create" a mock object in the sense of writing a whole class: you specify the behavior you want within the test itself.
An example of a class that uses a UserService to find details about a User in order to log someone in:
// Tests what happens when a username is found in the backend
public void testLoginSuccessful() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("aUsername")).andReturn(new User(...));
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("aUsername");

    assertTrue(isLoggedIn);
}
// Tests what happens when the user does not exist
public void testLoginFailure() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("aUsername")).andThrow(new UserNotFoundException());
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("aUsername");

    assertFalse(isLoggedIn);
}
(1) Alternatives to unit testing (and mocks) include integration testing (with DbUnit) and FIT testing. For more, see my answer here.
(2) The mocking framework Mockito is outstanding. You wouldn't have to "pre-build" any mocks, and it is relatively easy to introduce into a project; a comparison sketch follows.
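For comparison, here is a sketch of the first EasyMock test above rewritten with Mockito, reusing that answer's illustrative UserService, User, and classUnderTest names (the no-arg User constructor is assumed for the example):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public void testLoginSuccessful() {
    UserService mockUserService = mock(UserService.class);
    // No replay step: a Mockito mock is ready to use as soon as it is stubbed.
    when(mockUserService.getUser("aUsername")).thenReturn(new User());

    classUnderTest.setUserService(mockUserService);
    assertTrue(classUnderTest.login("aUsername"));
}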
I would echo what others are saying about EasyMock. However, if you have a codebase where you need to mock things like static method calls, final classes or methods, etc., then give JMockit a look.
Well, one easy way, if not the easiest, to get a high level of code coverage is to write the code test-first, following Test-Driven Development (TDD). Now that the code exists without unit tests, it can be considered legacy code.
You could either write end-to-end tests, external to your application (those won't be unit tests, but they can be written without resorting to any kind of mock), or you could write unit tests that span multiple classes and mock only the classes that get in the way of your unit tests.
Do you have real-world data you can import into your test bed to use as your 'mock objects'? That would be quick.
I think the opposite is hard: finding a testing methodology that gives you total coverage, if that is even possible in most cases.
You should give EasyMock a try.
