I'm working on a legacy project that has many external dependencies, and its classes are so tightly coupled that it's almost impossible to unit test. I know the best way to address this would be a major refactor, but at the moment we don't have the luxury to do so: the project has virtually zero tests, so we are very concerned about breaking things.
What we are trying to achieve at the moment is to quickly come up with unit / component tests and progressively refactor as we work on the project. For component tests I'm thinking of having some kind of wrapper around the existing classes to 'record' the input and output and then persist them to a physical file. When we run the tests, the wrapper will return the output based on the input.
The way I'm thinking of achieving that is to store the method name and input parameters as the key and the output as the value. The output would be serialized upon 'record' and deserialized during the test.
This approach seems to cater for some cases, but I foresee a lot of complications later on. E.g. I might face issues serializing certain objects, and I might also have difficulties with object references used as "out" parameters.
So here comes my question: are there any libraries that do all of this? I would never have considered doing it manually if there were a library for it. By the way, I'm using Java.
Thanks
Instead of doing low-level unit tests, I would start with tests for the smallest units you can divide the system into right now, i.e. possibly just one. You can capture the data input and output using a serialization library which doesn't require the objects to be marked as Serializable, e.g. Jackson.
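To make the record/replay idea concrete, here is a minimal sketch using Jackson's ObjectMapper. The RecordReplayStore class, its key scheme, and the in-memory map (you would persist it to a file in practice) are all my own illustrative choices, not an existing library:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: record a method's inputs and output as JSON,
    // keyed by method name + serialized arguments, then replay in tests.
    public class RecordReplayStore {
        private final ObjectMapper mapper = new ObjectMapper();
        private final Map<String, String> store = new HashMap<String, String>(); // persist to a file in practice

        // Lookup key: method name plus the JSON form of its arguments.
        private String key(String method, Object[] args) throws Exception {
            return method + ":" + mapper.writeValueAsString(args);
        }

        // Called by the wrapper in "record" mode.
        public void record(String method, Object[] args, Object result) throws Exception {
            store.put(key(method, args), mapper.writeValueAsString(result));
        }

        // Called by the wrapper in "replay" mode, i.e. while running tests.
        public <T> T replay(String method, Object[] args, Class<T> resultType) throws Exception {
            return mapper.readValue(store.get(key(method, args)), resultType);
        }
    }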
Hello all :) In JVM debug mode you can inspect the objects that are present as your code runs.
This means one could create a utility that generates a dump of those objects as ready-to-use mocks (or close enough, hopefully). Those mocks would span the entire scope of the program run, and this would help greatly in building extensive test coverage.
Because lazy is good, I was wondering if such a utility is currently available.
Best regards
I don't know of a way to do this from a memory/heap dump or from debug mode but...
If you want to serialize arbitrary Java objects to and from files for use in tests, you can use XStream to do so, and then use the resulting files in your unit tests with ease.
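For example (a sketch; the file paths are placeholders, and note that recent XStream versions additionally require you to allow-list types before deserializing):

    import com.thoughtworks.xstream.XStream;
    import java.io.FileReader;
    import java.io.FileWriter;
    import java.io.Reader;
    import java.io.Writer;

    public class XStreamSnapshot {
        private static final XStream XSTREAM = new XStream();

        // Record: dump any POJO captured from the running system to a file.
        public static void save(Object obj, String path) throws Exception {
            Writer out = new FileWriter(path);
            try {
                XSTREAM.toXML(obj, out);
            } finally {
                out.close();
            }
        }

        // Replay: rebuild the object graph inside a unit test.
        public static Object load(String path) throws Exception {
            Reader in = new FileReader(path);
            try {
                return XSTREAM.fromXML(in);
            } finally {
                in.close();
            }
        }
    }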
You can also use standard Java serialization if your objects are all serializable.
To collect the data in the first place you can create an aspect using AspectJ or Spring-AOP or similar. I've done something similar in the past and it worked really well.
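As a rough sketch of what such an aspect could look like with Spring AOP (the pointcut package com.example.legacy and the saveInteraction method are placeholders, and Spring AOP will only intercept Spring-managed beans):

    import org.aspectj.lang.ProceedingJoinPoint;
    import org.aspectj.lang.annotation.Around;
    import org.aspectj.lang.annotation.Aspect;

    @Aspect
    public class RecordingAspect {

        // Intercept every method on the legacy classes, run it, and hand
        // the inputs and output to whatever store you use (XStream, etc.).
        @Around("execution(* com.example.legacy..*.*(..))")
        public Object record(ProceedingJoinPoint pjp) throws Throwable {
            Object result = pjp.proceed();                      // invoke the real method
            String method = pjp.getSignature().toShortString(); // e.g. "Foo.bar(..)"
            saveInteraction(method, pjp.getArgs(), result);
            return result;
        }

        private void saveInteraction(String method, Object[] args, Object result) {
            // placeholder: serialize the triple and write it to a file
        }
    }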
One word of caution though: if you do this, any refactoring of your objects requires refactoring of the test data as well. This is easier with XStream, as it's XML files you are dealing with.
Here's the scenario. I have VO (Value Object) or DTO objects that are just containers for data. When I take those and split them apart for saving into a DB that (for lots of reasons) doesn't map to the VOs elegantly, I want to test whether each field is successfully written to the database and successfully read back to rebuild the VO.
Is there a way I can test that my tests cover every field in the VO? I had an idea about using reflection to iterate through the fields of the VOs as part of the solution, but maybe you guys have solved this problem before?
I want this test to fail when I add fields in the VO, and don't remember to add checks for it in my tests.
dev environment:
Using JUnit, Hibernate/Spring, and Eclipse
Keep it simple: write one test per VO/DTO (a minimal sketch follows this list):
fill the VO/DTO with test data
save it
(optional: check everything has been correctly saved at the database level, using pure JDBC)
load it
check that the loaded VO/DTO matches the original
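Something like this (a JUnit 4 sketch; CustomerVo, its fields, and the dao are illustrative stand-ins for your own types and persistence facade):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CustomerVoPersistenceTest {

        private CustomerDao dao; // obtained however your Spring test setup provides it

        @Test
        public void savesAndReloadsEveryField() {
            // fill the VO with distinctive test data
            CustomerVo original = new CustomerVo();
            original.setName("Alice");
            original.setEmail("alice@example.com");

            // save it, then load it back
            Long id = dao.save(original);
            CustomerVo reloaded = dao.load(id);

            // one assertion per field; when you add a field to the VO,
            // add its assertion here
            assertEquals(original.getName(), reloaded.getName());
            assertEquals(original.getEmail(), reloaded.getEmail());
        }
    }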
Production code will evolve and tests will need to be maintained as well. Making tests as simple as possible, even if they are repetitive, is IMHO the best approach. Over-engineering the tests or the testing framework itself to make tests generic (e.g. by reading fields with reflection and filling the VO/DTO automatically) leads to several problems:
the time spent writing the tests is higher
bugs might be introduced in the tests themselves
maintenance of the tests is harder because they are more sophisticated
tests are harder to evolve, e.g. the generic code may not work for new kinds of VO/DTO that differ slightly from the others and are introduced later (it's just an example)
tests cannot easily be used as examples of how the production code works
Test and production code are very different in nature. In production code, you try to avoid duplication and maximize reuse. Production code can be complicated, because it is tested. In tests, on the other hand, you should aim for simplicity, and duplication is OK: if a duplicated portion is broken, the test will fail anyway.
When production code changes, this may require several tests to be changed in trivial ways, with the consequence that tests are seen as boring pieces of code. But I think that's the way they should be.
If I however got your question wrong, just let me know.
I would recommend Cobertura for this task.
You will get a complete code-coverage report after you run your tests, and if you use the cobertura-check Ant task you can add thresholds on the coverage and stop the Ant run with the haltonfailure property.
You could make it part of the validation of the VO: if a field isn't set when you call its getter, it can throw an exception.
I'm currently debugging some fairly complex persistence code, and trying to increase test coverage whilst I'm at it.
Some of the bugs I'm finding in the production code require large and very specific object graphs to reproduce.
While technically I could sit and write out buckets of instantiation code in my tests to reproduce the specific scenarios, I'm wondering if there are tools that can do this for me?
I guess specifically I'd like to be able to dump out an object as it is in my debugger frame (probably to XML), then use something to load the XML and recreate the object graph for unit testing (e.g. XStream etc.).
Can anyone recommend tools or techniques which are useful in this scenario?
I've done this sort of thing using ObjectOutputStream, but XML should work fine. You need to be working with a serializable tree. You might try JAXB or xStream, etc., too. I think it's pretty straightforward. If you have a place in your code that builds the structure in a form that would be good for your test, inject the serialization code there, and write everything to a file. Then, remove the injected code. Then, for the test, load the XML. You can stuff the file into the classpath somewhere. I usually use a resources or config directory, and get a stream with Thread.currentThread().getContextClassLoader().getResourceAsStream(name). Then deserialize the stuff, and you're good to go.
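A sketch of both halves (standard Java serialization; the resource name is a placeholder, and the whole graph must implement Serializable):

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    public class GraphSnapshot {

        // Inject a call to this where the interesting graph exists, run once,
        // then remove the call and keep the file under your test resources.
        public static void dump(Object graph, String fileName) throws Exception {
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(fileName));
            try {
                out.writeObject(graph);
            } finally {
                out.close();
            }
        }

        // In the test: read the snapshot back from the classpath.
        public static Object load(String resourceName) throws Exception {
            InputStream in = Thread.currentThread()
                    .getContextClassLoader().getResourceAsStream(resourceName);
            ObjectInputStream ois = new ObjectInputStream(in);
            try {
                return ois.readObject();
            } finally {
                ois.close();
            }
        }
    }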
XStream is of use here. It'll allow you to dump practically any POJO to/from XML without having to implement interfaces/annotate etc. The only headache I've had is with inner classes (since it'll try and serialise the referenced outer class).
I guess all your data is persisted in a database. You can use a test-data generation tool to fill your database with test data, then export that data as SQL scripts and preload it before your integration test starts.
You can use DbUnit to preload data in your unit tests; it also has a number of options to verify database structure/data before the test starts. http://www.dbunit.org/
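For instance (a sketch assuming DbUnit 2.4+; the in-memory H2 URL and the /dataset.xml path are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    public class TestDataLoader {

        // Preload a flat-XML dataset before an integration test runs.
        public static void preload() throws Exception {
            Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "");
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            IDataSet dataSet = new FlatXmlDataSetBuilder()
                    .build(TestDataLoader.class.getResourceAsStream("/dataset.xml"));
            // CLEAN_INSERT clears the touched tables, then inserts the dataset rows.
            DatabaseOperation.CLEAN_INSERT.execute(connection, dataSet);
        }
    }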
For test-data generation in a database there are a number of commercial tools you can use. I don't know of any good free tool that can handle features like predefined lists of data, random data with a predefined distribution, foreign-key usage from other tables, etc.
I don't know about Java specifically, but if you change the implementations of your classes then you may no longer be able to deserialize the old test data (which was serialized from older versions of the classes). So in the future you may need to put some effort into migrating your unit-test data whenever you change your class definitions.
I have been working on a comparatively large system on my own, and it's my first time working on a large system (dealing with 200+ channels of information simultaneously). I know how to use JUnit to test every method and how to test boundary conditions. But for system testing I still need to test all the interfacing, and probably do some stress testing as well (maybe there are other things to do, but I don't know what they are). I am totally new to the world of testing, so please give me some suggestions or point me to some info on how a good tester would do system testing.
PS: 2 specific questions I have are:
how to test private functions?
how to test interfaces and avoid side effects?
Here are two web sites that might help:
The first is a list of open-source Java tools. Many of the tools are add-ons to JUnit that allow either easier testing or testing at a higher integration level.
Depending on your system, sometimes JUnit will work for system tests, but the structure of the test can be different.
As for private methods, check this question (and the question it references).
You cannot test interfaces directly (as there is no behavior), but you can create an abstract base test class to check that implementations of an interface follow its contract.
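For example (a JUnit 4 sketch; Stack and ArrayStack are illustrative names for your own interface and implementation):

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    // Every implementation inherits the same contract checks.
    public abstract class StackContractTest {

        // Each implementation's test supplies its own instance.
        protected abstract Stack createStack();

        @Test
        public void newStackIsEmpty() {
            assertTrue(createStack().isEmpty());
        }
    }

    // One small concrete subclass per implementation:
    class ArrayStackTest extends StackContractTest {
        protected Stack createStack() {
            return new ArrayStack();
        }
    }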
EDIT: Also, if you don't already have unit tests, check out Working Effectively with Legacy Code; it is a must for testing code that is not set up well for testing.
Mocking is a good way to be able to simulate system tests in unit testing; by replacing (mocking) the resources upon which the other component depends, you can perform unit testing in a "system-like" environment without needing to have the entire system constructed to do it.
As to your specific questions: generally, you shouldn't be using unit testing to test private functions; if they're private, they're private to the class. If you need to test something, test a public method which uses that private method to do something. Avoiding side effects that can be potentially problematic is best done using either a complete test environment (which can easily be wiped back to a "virgin" state) or using mocking, as described above. And testing interfaces is done by, well, testing the interface methods.
Firstly, if you already have a large system that doesn't have any unit tests, and you're planning on adding some, then allow me to offer some general advice.
From maintaining the system and working with it, you'll probably already know which areas of the system tend to be buggiest, which tend to change often, and which tend not to change very much. If you don't, you can always look through the source control logs (you are using source control, right?) to find out where most of the bug fixes and changes are concentrated. Focus your testing efforts on these classes and methods. There's a general rule called the 80/20 rule which applies to a whole range of things, this being one of them.
It says that, roughly on average, you should be able to cover 80% of the offending cases by doing just 20% of the work. That is, by writing tests for just 20% of the code, you can probably catch 80% of the bugs and regressions, because the fragile, frequently changed, worst-offending code makes up just 20% of the codebase. In fact, it may be even less.
You should use JUnit to do this, and something like JMock or another mocking library to ensure you're testing in isolation. For system testing/integration testing, that is, testing things while they're working together, I can recommend FitNesse; I've had good experience with it in the past. It allows you to write your tests in a web browser using simple table-like layouts, where you can easily define your inputs and expected outputs. All you have to do is write a small backing class called a fixture, which handles the creation of the components.
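A fixture can be as small as this (classic Fit style; DiscountCalculator stands in for your production code, and the table's column names map onto the public fields and methods):

    import fit.ColumnFixture;

    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;   // input column "orderTotal" in the FitNesse table

        public double discount() {  // expected-output column "discount()"
            return new DiscountCalculator().discountFor(orderTotal);
        }
    }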
Private functions will be tested through the public functions that call them; your testing of the public function only cares that the result returned is correct.
When dealing with APIs (to other packages, URLs, or even the file system, network, or database) you should mock them. A good unit test should run in a few milliseconds, not seconds, and mocking is the only way to achieve that. It also means that bugs between packages can be dealt with much more easily than logic bugs at the functional level. For Java, EasyMock is a very good mocking framework.
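A minimal EasyMock example (PriceDao and PriceService are illustrative stand-ins for your own interface and class under test):

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class PriceServiceTest {

        @Test
        public void looksUpPriceThroughDao() {
            PriceDao dao = createMock(PriceDao.class);
            expect(dao.findPrice("SKU-1")).andReturn(9.99); // stub the collaborator
            replay(dao);                                    // switch to replay mode

            PriceService service = new PriceService(dao);
            assertEquals(9.99, service.priceFor("SKU-1"), 0.001);

            verify(dao); // fails if the expected call never happened
        }
    }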
You may have a look at this list of interesting tools: Tools for regression testing / test automation of database centric java application?
As you seem to already use JUnit extensively, you're already "test infected", which is a good starting point...
In my personal experience, the most difficult thing to manage is data. I mean, controlling very precisely the data against which the tests are run.
The lists of tools given before are useful. From personal experience these are the tools I find useful:
Mocking - Mockito is an excellent implementation and has clever techniques to ensure you only have to mock the methods you really care about (see the sketch after this list).
Database testing - DbUnit is indispensable for setting up test data and verifying database interactions.
Stress testing - JMeter - once you see past the slightly clunky GUI, this is a very robust tool for setting up scenarios and running stress tests.
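As a taste of Mockito's style (OrderRepository and OrderService stand in for your own types):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;
    import org.junit.Test;

    public class OrderServiceTest {

        @Test
        public void totalsAnOrderFromTheRepository() {
            OrderRepository repo = mock(OrderRepository.class);
            when(repo.findQuantity("widget")).thenReturn(3); // stub only what you need

            OrderService service = new OrderService(repo);
            assertEquals(3, service.quantityOf("widget"));

            verify(repo).findQuantity("widget"); // optional interaction check
        }
    }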
As for the general approach, start by trying to get tests running for the usual "happy paths" through your application; these can form a basis for regression testing and performance testing. Once this is complete you can start looking at edge cases and error scenarios. That said, this level of testing should be secondary to good unit testing.
Good luck!
I've been asked to work on changing a number of classes that are core to the system we work on. The classes in question each require 5 - 10 different related objects, which themselves need a similar number of objects.
Data is also pulled in from several data sources, and the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!
I'm beginning to get overwhelmed with this task. I have tried unit testing with JUnit and Easymock, but as soon as I mock or stub one thing, I find it needs lots more. Everything seems to be quite tightly coupled such that I'm reaching about 3 or 4 levels out with my stubs in order to prevent NullPointerExceptions.
Usually with this type of task, I would simply make changes and test as I went along. But the shortest build cycle is about 10 minutes, and I like to code with very short iterations between executions (probably because I'm not very confident with my ability to write flawless code).
Anyone know a good strategy / workflow to get out of this quagmire?
As you suggest, it sounds like your main problem is that the API you are working with is too tightly coupled. If you have the ability to modify the API, it can be very helpful to hide immediate dependencies behind interfaces so that you can cut off your dependency graph at the immediate dependency.
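For instance (an illustrative sketch; the names are made up): instead of letting the class under test reach into a deep concrete graph, give it a narrow interface so a test can stop the dependency chain right at the boundary.

    // The seam: one narrow interface instead of a chain of concrete classes.
    public interface CustomerGateway {
        Customer findCustomer(String id);
    }

    class BillingService {
        private final CustomerGateway gateway;

        BillingService(CustomerGateway gateway) { // injected, so trivially faked
            this.gateway = gateway;
        }

        Invoice billFor(String customerId) {
            Customer customer = gateway.findCustomer(customerId);
            // billing logic sees only Customer; no 3-levels-deep stubs required
            return new Invoice(customer);
        }
    }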
If this is not possible, an Auto-Mocking Container may be of help. This is basically a container that automatically figures out how to return a mock with good default behavior for nested abstractions. As I work on the .NET framework, I can't recommend any for Java.
If you would like to read up on unit testing patterns and best practices, I can only recommend xUnit Test Patterns.
For strategies for decoupling tightly coupled code I recommend Working Effectively with Legacy Code.
First thing I'd try to do is shorten the build cycle. Maybe add the option to only build and test the components currently under development.
Next I'd look at decoupling some of the dependencies by introducing interfaces to sit between each component. I'd also want to bring the coupling out into the open, most likely using Dependency Injection. If I could not move to DI, I would have two ctors: one no-arg ctor that uses the service locator (or what have you) and one injectable ctor, as sketched below.
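A sketch of that halfway house (ServiceLocator stands in for whatever lookup mechanism your codebase already has; TaxCalculator is illustrative):

    public class InvoiceProcessor {
        private final TaxCalculator taxCalculator;

        // Production path: the no-arg ctor resolves dependencies via the locator.
        public InvoiceProcessor() {
            this(ServiceLocator.lookup(TaxCalculator.class));
        }

        // Test path: dependencies handed in directly, no container required.
        public InvoiceProcessor(TaxCalculator taxCalculator) {
            this.taxCalculator = taxCalculator;
        }
    }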
the project uses EJB2 so when testing, I'm running without a container to pull in the dependencies I need!
Is that without meant to be a with? I would look at moving as much into POJOs as you can so it can be tested without needing to know anything EJB-y.
If your project can compile with Java 1.5 you should look at JMock. Things can get stubbed pretty quickly with the 2.* version of this framework.
The 1.* version will work with a 1.3+ Java compiler, but the mocking is much more verbose, so I would not recommend it.
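A small JMock 2 example (MailService and Notifier are illustrative; older 2.x releases spell the expectation one(...) instead of oneOf(...)):

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class NotifierTest {

        private final Mockery context = new Mockery();

        @Test
        public void sendsOneMailPerNotification() {
            final MailService mail = context.mock(MailService.class);

            context.checking(new Expectations() {{
                oneOf(mail).send("alice@example.com", "hello"); // expect exactly one call
            }});

            new Notifier(mail).notifyByMail("alice@example.com", "hello");
            context.assertIsSatisfied(); // fails if the expectation wasn't met
        }
    }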
As for the strategy, my advice to you is to embrace interfaces. Even if you have a single implementation of the given interface, always create an interface. They can be mocked very easily and will allow you much better decoupling when testing your code.