Executing a specific method from a source class using unit test - java

Is there a unit-testing annotation or feature flag that I can insert in my Java source code so that a method will only be triggered when running unit tests?
For Example:
public class A {
    @Annotation
    public void fooA() { }

    public void fooB() { }
}
My Unit test class:
public class TestA {
...
}
In this scenario, when I run my unit test class TestA, I want both fooA() and fooB() to execute.
But when my production code that includes class A runs, only fooB() should be executed, not fooA().

In general: don't go there.
Your production code has one responsibility; and one responsibility only: to do its production job. You do not factor in complex test-related aspects.
Occasionally, it might be required or helpful to have a constructor that takes more arguments (for dependency injection). Then just make that thing package-private, and put an informal "/** unit test only */" comment on it.
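For example, a minimal sketch of that pattern (the Service and Clock names here are only illustrative):

import java.time.Clock;

public class Service {
    private final Clock clock;

    // Production entry point: always uses the real clock.
    public Service() {
        this(Clock.systemUTC());
    }

    /** unit test only */
    Service(Clock clock) { // package-private, visible only to same-package tests
        this.clock = clock;
    }
}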
What I mean is: there is no such annotation. And it also doesn't make sense to have one. You describe your external interface with clear javadoc, you write your classes so that it becomes obvious that a user should only use/call fooB().
Thing is: the last thing you want to happen is that some test-only artifact in your code causes an issue in your production environment. And the best way to avoid that risk: don't create such artifacts.

You can declare your test package to be the same as that of your unit under test (UUT).
package com.example.fubar.arglebargle;

public class A {
    // package-private, TEST ONLY!
    void fooA() { }

    public void fooB() { }
}
Unit test class:
package com.example.fubar.arglebargle;

public class TestA {
    // etc...
}
My IDE does this automatically with JUnit testing. All my tests are declared in the same package as my UUT (although they're obviously in a completely different source tree; don't put tests and production source in the same sub-dir!).
This is handy for accessing some internals that wouldn't be available otherwise. It also seems to be standard practice, so I think there's nothing against doing it.
This won't stop everyone from calling fooA(), but it will limit potential problems to classes in the same package as your class. That's a much smaller set of code to worry about, and often you can rely on people modifying code to read the comments in other classes in the same package carefully.
I think some test frameworks may be able to access private methods (typically via reflection), but I don't know offhand. If it's important, I'd research test frameworks.
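For what it's worth, plain reflection is enough to invoke a private method from a test, with no framework involved. A minimal sketch, assuming A declares a private no-arg fooA():

import java.lang.reflect.Method;

public class PrivateAccessSketch {
    public static void main(String[] args) throws Exception {
        A target = new A();
        // Look up the private method and disable the access check.
        Method fooA = A.class.getDeclaredMethod("fooA");
        fooA.setAccessible(true);
        fooA.invoke(target);
    }
}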

Why should code that is used only for tests be available in production?
1) It doesn't validate the application code, since the application doesn't use it.
2) It is error-prone for clients of the class that use it.
3) It hurts the maintainability and readability of the code.
For example, a developer could wonder why this method is present when it has no caller in the application code.
Considering it dead code, he could remove the method and also remove the unit test that uses it.
To solve your problem, I think that this method, which is required only in non-production environments, should be included in the packaging of the application for non-production environments but excluded from the packaging for production.
Tools such as Maven and Gradle handle this requirement very well.
Actually, you cannot do that as long as the method is part of a class that is required in production. So a first step would be to extract the test-only method into a specific class.
That way, filtering it becomes possible.
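As a rough sketch of that extraction (all names here are hypothetical), the test-only behavior moves into its own class, which the build can then filter out of the production artifact:

// A.java - ships in every environment
public class A {
    public void fooB() { /* production behavior */ }
}

// ATestSupport.java - lives in a package (or module) that the
// production packaging excludes, e.g. via a Maven profile or jar excludes
public class ATestSupport {
    public static void fooA(A target) {
        // test-only behavior operating on A
    }
}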


Why should I not be mocking File or Path

I've been struggling to understand the minefield of classes supported by the jmockit mocking API. I found what I thought was a bug with File, but the issue was simply closed with "Don't mock a File or Path" and no explanation. Can someone here help me understand why certain classes should not be mocked, and how I'm supposed to work around that? I'm trying to contribute to the library with bug reports, but each one I file just leaves me more confused. Pardon my ignorance, but if anyone can point me to the rationale for forbidden mocks etc., I'd greatly appreciate it.
I'm not certain that there's a foolproof list of rules. There's always a certain degree of knowledge and taste in these matters. Dogma is usually not a good idea.
I voted to close this question because I think a lot of it is opinion.
But with that said, I'll make an attempt.
Unit tests should be about testing individual classes, not interactions between classes. Those are called integration tests.
If your object is calling out to other remote objects, like services, you should mock those to return the data needed for your test. The idea is that services and their clients should also be tested individually in their own unit tests. You need not repeat those when testing a class that depends on them.
One exception to this rule, in my opinion, are data access objects. There is no sense in testing one of those without connecting to a remote database. Your test needs to prove the proper operation of your code. That requires a database connection in the case of data access objects. These should be written to be transactional: seed the database, perform the test, and reverse the actions of the test. The database should be in the same state when you're done.
Once your data access objects are certified as working correctly, all clients that use them should mock them. No need to re-test.
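As a sketch of that transactional approach using Spring's test support (the DAO, entity, and config names are hypothetical), the framework rolls the transaction back after each test, leaving the database as it found it:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.transaction.annotation.Transactional;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // hypothetical Spring config
@Transactional // each test's transaction is rolled back automatically
public class UserDaoIntegrationTest {

    @Autowired
    private UserDao userDao; // hypothetical DAO under test

    @Test
    public void savedUserCanBeFound() {
        User saved = userDao.save(new User("alice")); // seed
        assertNotNull(userDao.findById(saved.getId())); // verify
        // No manual cleanup: the rollback reverses the insert.
    }
}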
You should not be mocking classes in the JVM.
You asked for a why about File or Stream in particular - here's one. Take it or ignore it.
You don't have to test JVM classes because Sun/Oracle have already done that. You know they work. You want your class to use those classes because a failing test will expose the fact that the necessary file isn't available. A mock won't tell me that I've neglected to put a required file in my CLASSPATH. I want to find out during testing, not in production.
Another reason is that unit tests are also documentation. It's a live demonstration for others to show how to properly use your class.
In a unit test you have to test that your code does the right thing. You can mock out any external pieces of code that are not the direct code being tested. This is a strict definition of unit testing, and it assumes there is another form of testing, called integration testing, that will be done later on. In integration testing you test how your code interacts with external elements, like a DB, another web service, the network, or the hard drive.
If I have a piece of code that interacts with an object, like a File, and my code does 3 things to that file, then in my unit test I am going to test that my code has done those three things.
For example:
public void processFile(File f) {
    if (f.exists()) {
        // perform some tasks
    } else {
        // perform some other tasks
    }
}
To properly unit test the code above I would run at least two unit tests: one to test that my code does the right thing when the file exists, and another to test that it does the correct thing when the file does not exist. Because unit testing, IMHO, is only testing my code and not doing integration tests, it is perfectly fine to mock File so that you can test both branches of this method.
During integration testing you can then test with a real File as your application will be interacting with its surroundings.
Try Mockito:
I do not know why jmockit does not allow you to mock the File class. In Mockito it can be done. Example below.
import java.io.File;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class NewMain {
    public static void main(String[] args) {
        File f = mock(File.class);
        when(f.exists()).thenReturn(true);
        System.out.println("f.exists = " + f.exists());
    }
}
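Tying the two ideas together, here is how the two branch tests for processFile above might look with Mockito (FileProcessor is a hypothetical class owning processFile):

import java.io.File;

import org.junit.Test;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class FileProcessorTest {

    private final FileProcessor processor = new FileProcessor();

    @Test
    public void performsTasksWhenFileExists() {
        File f = mock(File.class);
        when(f.exists()).thenReturn(true);
        processor.processFile(f);
        // assert/verify what the "exists" branch should have done
    }

    @Test
    public void performsOtherTasksWhenFileIsMissing() {
        File f = mock(File.class);
        when(f.exists()).thenReturn(false);
        processor.processFile(f);
        // assert/verify what the "missing" branch should have done
    }
}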

How to test "add" in DAO without using "find" etc.?

In the following code the issue is that I cannot test dao.add() without using dao.list().size(), and vice versa.
Is this approach normal or incorrect? If incorrect, how can it be improved?
public class ItemDaoTest {
    // dao to test
    @Autowired
    private ItemDao dao;

    @Test
    public void testAdd() {
        // issue -> testing ADD but using LIST
        int oldSize = dao.list().size();
        dao.add(new Item("stuff"));
        assertTrue(oldSize < dao.list().size());
    }

    @Test
    public void testFind() {
        // issue -> testing FIND but using ADD
        Item item = new Item("stuff");
        dao.add(item);
        assertEquals(item, dao.find(item.getId()));
    }
}
I think your tests are valid integration tests, as stated above, but I would use add to aid in testing find and vice versa.
At some level you have to allow yourself to place trust in your lowest level of integration with your external dependency. I realize there is a dependency on other methods in your tests, but I find that add and find are "low-level" methods that are very easy to verify.
They essentially test each other, as they are basically inverse methods:
add can easily build preconditions for find;
find can verify that an add was successful.
I can't think of a scenario where a failure in either wouldn't be caught by your tests.
Your testAdd method has a problem: it depends on the assumption that ItemDao.list functions properly, and yet ItemDao is the class you're testing. Unit tests are meant to be independent, so a better approach is to use plain JDBC, as @Amir said, to verify that the record was inserted into the database.
If you're using Spring, you can rely on AbstractTransactionalDataSourceSpringContextTests to access JdbcTemplate facilities and ensure a rollback after the test has executed.
I use direct JDBC (via Spring's JdbcTemplate) to test the DAO methods: I call the DAO methods (which are Hibernate-based) and then confirm the results using direct SQL calls over JDBC.
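A sketch of that style (the table and column names are made up): the DAO performs the write, and a JdbcTemplate query - not the DAO itself - confirms it:

import org.junit.Test;
import org.springframework.jdbc.core.JdbcTemplate;

import static org.junit.Assert.assertEquals;

public class ItemDaoAddTest {

    private ItemDao dao;       // wired up elsewhere (e.g. by Spring)
    private JdbcTemplate jdbc; // uses the same DataSource as the DAO

    @Test
    public void addInsertsExactlyOneRow() {
        Item item = new Item("stuff");
        dao.add(item);

        // Verify with plain SQL instead of dao.find()/dao.list().
        Integer rows = jdbc.queryForObject(
                "select count(*) from item where id = ?",
                Integer.class, item.getId());
        assertEquals(Integer.valueOf(1), rows);
    }
}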
The smallest unit under test for class-based unit testing is a class.
To see why, consider that you could, in theory, test each method of the class in isolation from all other methods by bypassing, stubbing or mocking them. Some tools may not support that; this is theory not practice, assume they do.
Even so, doing things that way would be a bad idea. The specification of an individual function by itself will vary between vaguely meaningless and verbosely incomprehensible. Only in the pattern of interaction between different functions will there exist a specification simpler than the code that you can profitably use to test it.
If you add an item and the number of items reported increases, things are working. If there is some way things could not be working, but nevertheless all the patterns of interaction hold, then you are missing some needed test.

Dummy data and unit testing strategies in a modular application stack

How do you manage dummy data used for tests? Keep them with their respective entities? In a separate test project? Load them with a Serializer from external resources? Or just recreate them wherever needed?
We have an application stack with several modules depending on one another, each containing entities. Each module has its own tests and needs dummy data to run with.
Now a module that has many dependencies will need a lot of dummy data from the other modules. Those, however, do not publish their dummy objects, because they are part of the test resources, so all modules have to set up all the dummy objects they need again and again.
Also: most fields in our entities are not nullable so even running transactions against the object layer requires them to contain some value, most of the time with further limitations like uniqueness, length, etc.
Is there a best practice way out of this or are all solutions compromises?
More Detail
Our stack looks something like this:
One Module:
src/main/java      --> gets jarred (.../entities/*.java contains the entities)
src/main/resources --> gets jarred
src/test/java      --> contains dummy object setup, will NOT get jarred
src/test/resources --> not jarred
We use Maven to handle dependencies.
module example:
Module A has some dummy objects
Module B needs its own objects AND the same as Module A
Option a)
A test module T could hold all dummy objects and provide them in test scope (so the loaded dependencies don't get jarred) to all tests in all modules. Will that work? Meaning: if I load T in A and run install on A, will it NOT contain references introduced by T, especially not to B? Then, however, A will know about B's data model.
Option b)
Module A provides the dummy objects somewhere in src/main/java../entities/dummy allowing B to get them while A does not know about B's dummy data
Option c)
Every module contains external resources which are serialized dummy objects. They can be deserialized by the test environment that needs them, because it has a dependency on the module to which they belong. This would require every module to create and serialize its dummy objects, though, and how would one do that? If with another unit test, it introduces dependencies between unit tests, which should never happen; if with a script, it will be hard to debug and inflexible.
Option d)
Use a mock framework and assign the required fields manually in each test as needed. The problem here is that most fields in our entities are not nullable, and thus setters or constructors would have to be called, which puts us back at the start again.
What we don't want
We don't want to set up a static database with static data, as the structure of the required objects will constantly change: a lot right now, a little later. So we want Hibernate to set up all tables and columns and fill them with data at unit-testing time. A static database would also introduce a lot of potential errors and test interdependencies.
Are my thoughts going in the right direction? What's the best practice for dealing with tests that require a lot of data? We'll have several interdependent modules that require objects filled with some kind of data from several other modules.
EDIT
Some more info on how we're doing it right now in response to the second answer:
So for simplicity, we have three modules: Person, Product, Order.
Person will test some manager methods using a MockPerson object:
(in person/src/test/java:)
public class MockPerson {
    public Person mockPerson(parameters...) {
        return mockedPerson;
    }
}

public class TestPerson {
    @Inject
    private MockPerson mockPerson;

    @Test
    public void testCreate() {
        Person person = mockPerson.mockPerson(...);
        // Asserts...
    }
}
The MockPerson class will not be packaged.
The same applies for the Product Tests:
(in product/src/test/java:)
public class MockProduct { ... }

public class TestProduct {
    @Inject
    private MockProduct mockProduct;
    // ...
}
MockProduct is needed but will not be packaged.
Now the Order Tests will require MockPerson and MockProduct, so now we currently need to create both as well as MockOrder to test Order.
(in order/src/test/java:)
These are duplicates and will need to be changed every time Person or Product changes:
public class MockProduct { ... }
public class MockPerson { ... }
This is the only class that should be here:
public class MockOrder { ... }

public class TestOrder {
    @Inject
    private order.MockPerson mockPerson;
    @Inject
    private order.MockProduct mockProduct;
    @Inject
    private order.MockOrder mockOrder;

    @Test
    public void testCreate() {
        Order order = mockOrder.mockOrder(mockPerson.mockPerson(), mockProduct.mockProduct());
        // Asserts...
    }
}
The problem is that now we have to update person.MockPerson and order.MockPerson whenever Person is changed.
Isn't it better to just publish the mocks with the jar, so that every other test that has the dependency anyway can just call Mock.mock and get a nicely set-up object? Or is this the dark side - the easy way?
This may or may not apply - I'm curious to see an example of your dummy objects and the related setup code (to get a better idea of whether it applies to your situation). But what I've done in the past is not introduce this kind of code into the tests at all. As you describe, it's hard to produce, debug, and especially package and maintain.
What I've usually done (and AFAICT this is best practice in Java) is use the Test Data Builder pattern, as described by Nat Pryce in his Test Data Builders post.
If you think this is somewhat relevant, check these out:
Does a framework like Factory Girl exist for Java?
make-it-easy, Nat's framework that implements this pattern.
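For reference, a minimal Java sketch of the pattern (Person and its two not-null fields are hypothetical): the builder's defaults satisfy every constraint, and each test overrides only what it cares about:

public class PersonBuilder {
    // Defaults keep every non-nullable field populated.
    private String name = "Default Name";
    private String email = "default@example.com";

    public PersonBuilder withName(String name) {
        this.name = name;
        return this;
    }

    public PersonBuilder withEmail(String email) {
        this.email = email;
        return this;
    }

    public Person build() {
        return new Person(name, email);
    }
}

// Usage in a test: only the relevant field is spelled out.
// Person alice = new PersonBuilder().withName("Alice").build();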
Well, I read all the answers so far carefully, and it is a very good question. I see the following approaches to the problem:
1. Set up a (static) test database;
2. Have each test create its own (dynamic) test data prior to running the unit tests;
3. Use dummy or mock objects. All modules know all dummy objects, so there are no duplicates;
4. Reduce the scope of the unit test.
The first option is pretty straightforward and has many drawbacks: somebody has to recreate the data once in a while when unit tests "mess it up", and if there are changes in the data module, somebody has to make the corresponding changes to the test data - a lot of maintenance overhead. Not to mention that generating this data in the first place may be tricky. See also the second option.
With the second option, you write test code that, prior to testing, invokes some of your "core" business methods to create your entities. Ideally, your test code should be independent of the production code, but here you end up with duplicate code that you must maintain twice. Sometimes it is good to split your production business method in order to have an entry point for your unit test (I make such methods private and use reflection to invoke them; a remark on the method is needed, and refactoring becomes a bit tricky). The main drawback is that if you must change your "core" business methods, it suddenly affects all of your unit tests and you can't test. So developers should be aware of this and not make partial commits to the "core" business methods unless they work, and with any change in this area they should keep in mind "how will this affect my unit tests?". Sometimes it is also impossible to reproduce all the required data dynamically, usually because of third-party APIs: for example, you call another application with its own DB, from which you are required to use some keys. These keys (with the associated data) are created manually through the third-party application. In such a case this data, and only this data, should be created statically; for example, you create 10,000 keys starting from 300,000.
The third option should be good. Options a) and d) sound pretty good to me. For your dummy objects you can use a mock framework or not; the mock framework is only there to help you. I don't see a problem with all of your unit tests knowing all your entities.
The fourth option means that you redefine what a "unit" is in your unit test. When you have a couple of modules with interdependencies, it can be difficult to test each module in isolation. This approach says that what we were originally testing was an integration test, not a unit test. So we split our methods and extract small "units of work" that receive all their dependencies on other modules as parameters. These parameters can (hopefully) easily be mocked. The main drawback of this approach is that you don't test all of your code, only, so to speak, the "focal points". You need to do integration testing separately (usually by a QA team).
I'm wondering if you couldn't solve your problem by changing your testing approach.
Unit testing a module which depends on other modules - and, because of that, on the test data of other modules - is not a real unit test!
What if you injected a mock for each dependency of your module under test, so you could test it in complete isolation? Then you wouldn't need to set up a complete environment where each depending module has the data it needs; you would only set up the data for the module you're actually testing.
If you imagine a pyramid, then the base would be your unit tests, above that you have functional tests and at the top you have some scenario tests (or as Google calls them, small, medium and big tests).
You will have a huge amount of Unit Tests that can test every code path because the mocked dependencies are completely configurable. Then you can trust in your individual parts and the only thing that your functional and scenario tests will do is test if each module is wired correctly to other modules.
This means that your module test data is not shared by all your tests but only by a few that are grouped together.
The Builder Pattern as mentioned by cwash will definitely help in your functional tests.
We are using a .NET Builder that is configured to build a complete object tree and generate default values for each property so when we save this to the database all required data is present.

Writing first JUnit test

So I've read the official JUnit docs, which contain a plethora of examples, but (as with many things) I have Eclipse fired up and I am writing my first JUnit test, and I'm choking on some basic design/conceptual issues.
So if my WidgetUnitTest is testing a target called Widget, I assume I'll need to create a fair number of Widgets to use throughout the test methods. Should I be constructing these Widgets in the WidgetUnitTest constructor, or in the setUp() method? Should there be a 1:1 ratio of Widgets to test methods, or do best practices dictate reusing Widgets as much as possible?
Finally, how much granularity should exist between asserts/fails and test methods? A purist might argue that one and only one assertion should exist per test method; however, under that paradigm, if Widget has a getter called getBuzz(), I'll end up with 20 different test methods for getBuzz(), with names like:
@Test
public void testGetBuzzWhenFooIsNullAndFizzIsNonNegative() { ... }
As opposed to 1 method that tests a multitude of scenarios and hosts a multitude of assertions:
@Test
public void testGetBuzz() { ... }
Thanks for any insight from some JUnit maestros!
Pattern
Interesting question. First of all - my ultimate test pattern configured in IDE:
@Test
public void shouldDoSomethingWhenSomeEventOccurs() throws Exception
{
    // given

    // when

    // then
}
I am always starting with this code (smart people call it BDD).
In given I place test setup unique for each test.
when is ideally a single line - the thing you are testing.
then should contain assertions.
I am not a single-assertion advocate; however, you should test only a single aspect of a behavior. For instance, if the method should return something and also has some side effects, create two tests with the same given and when sections.
Also the test pattern includes throws Exception. This is to handle annoying checked exceptions in Java. If you test some code that throws them, you won't be bothered by the compiler. Of course if the test throws an exception it fails.
Setup
Test setup is very important. On one hand it is reasonable to extract common code and place it in setup()/#Before method. However note that when reading a test (and readability is the biggest value in unit testing!) it is easy to miss setup code hanging somewhere at the beginning of the test case. So relevant test setup (for instance you can create widget in different ways) should go to test method, but infrastructure (setting up common mocks, starting embedded test database, etc.) should be extracted. Once again to improve readability.
Also, are you aware that JUnit creates a new instance of the test case class for each test? So even if you create your CUT (class under test) in the constructor, the constructor is called before each test. Kind of annoying.
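A tiny sketch demonstrating that per-test instantiation: the field is reinitialized for every test method, so both assertions pass:

import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class InstancePerTestDemo {

    private int counter = 0; // fresh instance, hence fresh field, per test

    @Test
    public void firstTestSeesZero() {
        assertEquals(0, counter);
        counter++;
    }

    @Test
    public void secondTestAlsoSeesZero() {
        assertEquals(0, counter); // the increment above never happened here
    }
}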
Granularity
First name your test and think what use-case or functionality you want to test, never think in terms of:
this is a Foo class having bar() and buzz() methods so I create FooTest with testBar() and testBuzz(). Oh dear, I need to test two execution paths throughout bar() - so let us create testBar1() and testBar2().
shouldTurnOffEngineWhenOutOfFuel() is good, testEngine17() is bad.
More on naming
What does the testGetBuzzWhenFooIsNullAndFizzIsNonNegative name tell about the test? I know it tests something, but why? And don't you think the details are too intimate? How about:
@Test shouldReturnDisabledBuzzWhenFooNotProvidedAndFizzNotNegative
It both describes the input in a meaningful manner and your intent (assuming disabled buzz is some sort of buzz status/type). Also note we no longer hardcode getBuzz() method name and null contract for Foo (instead we say: when Foo is not provided). What if you replace null with null object pattern in the future?
Also don't be afraid of 20 different test methods for getBuzz(). Instead think of 20 different use cases you are testing. However if your test case class grows too big (since it is typically much larger than tested class), extract into several test cases. Once again: FooHappyPathTest, FooBogusInput and FooCornerCases are good, Foo1Test and Foo2Test are bad.
Readability
Strive for short and descriptive names. Few lines in given and few in then. That's it. Create builders and internal DSLs, extract methods, write custom matchers and assertions. The test should be even more readable than production code. Don't over-mock.
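For instance, an extracted builder and a custom assertion can keep the given and then sections to one readable line each (a sketch; Car, CarBuilder, and isEngineRunning() are made-up names):

import org.junit.Test;

import static org.junit.Assert.assertFalse;

public class CarTest {

    @Test
    public void shouldTurnOffEngineWhenOutOfFuel() throws Exception {
        // given
        Car car = aCar().withFuel(0).build();

        // when
        car.drive();

        // then
        assertEngineIsOff(car);
    }

    private static CarBuilder aCar() {
        return new CarBuilder(); // hypothetical builder with sensible defaults
    }

    private static void assertEngineIsOff(Car car) {
        assertFalse("engine should be off when out of fuel", car.isEngineRunning());
    }
}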
I find it useful to first write a series of empty, well-named test methods. Then I go back to the first one. If I still understand what I was supposed to test and under what conditions, I implement the test, building a class API in the meantime. Then I implement that API. Smart people call it TDD (see below).
Recommended reading:
Growing Object-Oriented Software, Guided by Tests
Unit Testing in Java: How Tests Drive the Code
Clean Code: A Handbook of Agile Software Craftsmanship
You would create a new instance of the class under test in your setup method. You want each test to be able to execute independently without having to worry about any unwanted state in the object under test from another previous test.
I would recommend having separate test for each scenario/behavior/logic flow that you need to test, not one massive test for everything in getBuzz(). You want each test to have a focused purpose of what you want to verify in getBuzz().
Rather than testing methods, try to focus on testing behaviors. Ask the question "What should a widget do?" and then write a test that affirms the answer, e.g. "A widget should fidget."
public void setUp() throws Exception {
    myWidget = new Widget();
}

public void testAWidgetShouldFidget() throws Exception {
    myWidget.fidget();
}
Compile, see "no method fidget defined" errors, fix the errors, recompile the test, and repeat. Next, ask the question: what should the result of each behavior be? In our case, what happens as a result of fidget? Maybe there is some observable output, like a new 2D coordinate position. In that case our widget would be assumed to be at a given position, and when it fidgets its position is altered in some way.
private Widget myWidget;
private Point initialWidgetPosition;

public void setUp() throws Exception {
    // Given a widget
    myWidget = new Widget();
    // And its original position
    initialWidgetPosition = myWidget.position();
}

public void testAWidgetShouldFidget() throws Exception {
    myWidget.fidget();
}

public void testAWidgetPositionShouldChangeWhenItFidgets() throws Exception {
    myWidget.fidget();
    assertNotEquals(initialWidgetPosition, myWidget.position());
}
Some would argue against both tests exercising the same fidget behavior, but it makes sense to single out the behavior of fidget independent of how it impacts widget.position(). If one behavior breaks, the single test will pinpoint the cause of the failure. It is also important that the behavior can be exercised on its own as fulfillment of the spec (you do have program specs, don't you?) that says you need a fidgety widget. In the end it's all about realizing your program specs as code that exercises your interfaces, demonstrating both that you've completed the spec and how one interacts with your product. This is in essence how TDD should work. Any other attempt to resolve bugs or test the product usually results in a frustrating, pointless debate over which framework to use, the level of coverage, and how fine-grained your suite should be. Each test case should be an exercise in breaking down your spec into a component you can phrase with Given/When/Then: given {some application state or precondition}, when {a behavior is invoked}, then {assert some observable output}.
I completely second Tomasz Nurkiewicz's answer, so I won't repeat everything he said.
A couple more points:
Don't forget to test error cases. You can consider something like this:
@Test
public void throwExceptionWhenConditionOneExist() {
    // setup
    // ...
    try {
        classUnderTest.doSomething(conditionOne);
        Assert.fail("should have thrown exception");
    } catch (IllegalArgumentException expected) {
        Assert.assertEquals("this is the expected error message", expected.getMessage());
    }
}
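If you are on JUnit 4.13 or later, assertThrows expresses the same check more compactly (same hypothetical classUnderTest as above):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertThrows;

@Test
public void throwExceptionWhenConditionOneExist() {
    IllegalArgumentException expected = assertThrows(
            IllegalArgumentException.class,
            () -> classUnderTest.doSomething(conditionOne));
    assertEquals("this is the expected error message", expected.getMessage());
}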
Also, it has GREAT value to start writing your tests before even thinking about the design of your class under test. If you're a beginner at unit testing, I cannot emphasize enough learning this technique at the same time (it is called TDD, test-driven development), which proceeds like this:
You think about what use case you have from your user requirements
You write a basic first test for it
You make it compile (by creating needed classes -including your class under test-, etc.)
You run it: it should fail
Now you implement the functionality of the class under test that will make it pass (and nothing more)
Rinse, and repeat with a new requirement
When all your requirements have passing tests, you're done. You NEVER write anything in your production code that doesn't already have a test (the exceptions are logging code and not much more).
TDD is invaluable in producing good quality code, not over-engineering requirements, and making sure you have a 100% functional coverage (rather than line coverage, which is usually meaningless). It requires a change in the way you consider coding, that's why it's valuable to learn the technique at the same time as testing. Once you get it, it will become natural.
Next step is looking into Mocking strategies :)
Have fun testing.
First of all, the setUp and tearDown methods are called before and after each test, so the setUp method should create the objects you need in every test, while test-specific things may be done in the tests themselves.
Second, it is up to you how you want to test your program. Obviously you could write a test for every possible situation in your program and end up with a gazillion tests for every method, or you could write just one test per method which checks every possible scenario. I would recommend a mixture of both ways. You really don't need tests for trivial getters/setters, but writing just one test per method may result in confusion if the test fails. You should decide which methods are worth testing and which scenarios are worth testing, but in principle every scenario should have its own test.
Mostly I end up with a code coverage of 80 to 90 percent with my tests.

Is there an alternative to mock objects in unit testing?

It's a Java (JUnit) enterprise web application with no mock objects pre-built, and creating them would require a vast, unestimated amount of time. Is there a testing paradigm that would give me "some" test coverage, but not total coverage?
Have you tried a dynamic mocking framework such as EasyMock? It does not require you to "create" a mock object in the sense of writing an entire class - you specify the behavior you want within the test itself.
An example of a class that uses a UserService to find details about a User in order to log someone in:
// Tests what happens when a username is found in the backend
public void testLoginSuccessful() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("aUsername")).andReturn(new User(...));
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("aUsername");
    assertTrue(isLoggedIn);
}

// Tests what happens when the user does not exist
public void testLoginFailure() {
    UserService mockUserService = EasyMock.createMock(UserService.class);
    EasyMock.expect(mockUserService.getUser("aUsername")).andThrow(new UserNotFoundException());
    EasyMock.replay(mockUserService);

    classUnderTest.setUserService(mockUserService);
    boolean isLoggedIn = classUnderTest.login("aUsername");
    assertFalse(isLoggedIn);
}
(1) Alternatives to unit-testing (and mocks) include integration testing (with dbUnit) and FIT testing. For more, see my answer here.
(2) The mocking framework Mockito is outstanding. You wouldn't have to "pre-build" any mocks. It is relatively easy to introduce into a project.
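For comparison with the EasyMock example above, the success case might look like this in Mockito (UserService, User, and classUnderTest are the same hypothetical pieces):

import org.junit.Test;

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

public class LoginTest {

    private final LoginService classUnderTest = new LoginService(); // hypothetical

    @Test
    public void testLoginSuccessful() {
        UserService mockUserService = mock(UserService.class);
        when(mockUserService.getUser("aUsername")).thenReturn(new User());

        classUnderTest.setUserService(mockUserService);
        assertTrue(classUnderTest.login("aUsername"));
    }
}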
I would echo what others are saying about EasyMock. However, if you have a codebase where you need to mock things like static method calls, final classes or methods, etc., then give JMockit a look.
Well, one easy way - if not the easiest - to get a high level of code coverage is to write the code test-first, following test-driven development (TDD). Since your code already exists without unit tests, it can be deemed legacy code.
You could either write end-to-end tests, external to your application - those won't be unit tests, but they can be written without resorting to any kind of mock - or you could write unit tests that span multiple classes and only mock the classes that get in the way of your unit tests.
Do you have real-world data you can import into your testbed to use as your "mock objects"? That would be quick.
I think the opposite is hard: finding a testing methodology that gives you total coverage, if that is at all possible in most cases.
You should give EasyMock a try.
